Compare commits

..

403 commits

Author SHA1 Message Date
Laurence Jones
05b54687b6
feat: support stdout in cscli support dump (#2939)
* feat: support stdout in cscli support dump

* fix: skip log.info if stdout

* fix: handle errors by returning to runE instead
2024-04-26 15:56:15 +01:00
mmetc
c4473839c4
Refact pkg/parser/node (#2953)
* extract method processFilter()

* extract method processWhitelist()

* lint (whitespace, errors)
2024-04-25 17:53:10 +02:00
mmetc
d2c4bc55fc
plugins: use yaml.v3 (#2969)
* plugins: use yaml.v3

* lint
2024-04-25 17:34:49 +02:00
mmetc
2abc078e53
use go 1.22.2 (#2826) 2024-04-25 15:11:08 +02:00
blotus
ceb4479ec4
add zfs magic for GetFSType (#2950) 2024-04-25 15:05:11 +02:00
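For context, a minimal sketch (not the actual GetFSType implementation) of detecting a filesystem from its statfs magic number on Linux; the ZFS magic value and the probed path are assumptions here:

package main

import (
	"fmt"

	"golang.org/x/sys/unix"
)

// zfsSuperMagic is the ZFS filesystem magic number (assumed value, as found in
// Linux kernel headers); crowdsec's own table of magics may differ.
const zfsSuperMagic = 0x2fc12fc1

// isZFS reports whether the filesystem holding path is ZFS.
func isZFS(path string) (bool, error) {
	var st unix.Statfs_t
	if err := unix.Statfs(path, &st); err != nil {
		return false, err
	}
	// st.Type has a different integer width on 32-bit platforms (see the
	// armhf getfstype fixes further down this log), hence the conversion.
	return uint32(st.Type) == zfsSuperMagic, nil
}

func main() {
	ok, err := isZFS("/var/lib/crowdsec/data") // hypothetical path
	fmt.Println(ok, err)
}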
mmetc
845d4542bb
cscli: use yaml.v3 (#2965)
* cscli: use yaml.v3

* lint
2024-04-25 14:41:02 +02:00
Thibault "bui" Koechlin
f4ed7b3520
Truncate meta data (#2966)
* truncate meta-data if they are too big
2024-04-25 13:43:38 +02:00
mmetc
60431804d8
db config: don't exit setup if can't detect fs, improve detection for freebsd (#2963) 2024-04-25 11:11:57 +02:00
mmetc
0f942a95f1
pkg/cwhub - rename methods for clarity (#2961)
* pkg/cwhub - rename methods for clarity

* lint
2024-04-24 11:09:37 +02:00
mmetc
97e6588a45
cscli hub items: avoid global (#2960)
* cscli hub items: avoid global

* lint (whitespace, errors)

* lint
2024-04-24 10:05:55 +02:00
mmetc
725cae1fa8
CI: upload coverage with token (#2958) 2024-04-23 12:41:50 +02:00
mmetc
c64332d30a
cscli config show: avoid globals, use yaml v3 (#2863)
* cscli config show: avoid globals, use yaml v3

* lint (whitespace/errors)
2024-04-23 12:28:38 +02:00
mmetc
718d1c54b2
pkg/database/decisions: remove filter parameter, which is always passed empty (#2954) 2024-04-23 11:15:27 +02:00
mmetc
b48b728317
cscli support: include stack traces (#2935) 2024-04-22 23:54:51 +02:00
mmetc
fb393f1c57
tests: bump yq, cfssl (#2952) 2024-04-22 17:19:00 +02:00
mmetc
630cbf0c70
update linter list and descriptions (#2951) 2024-04-22 17:18:11 +02:00
Laurence Jones
95f27677e4
enhance: add refactoring to governance (#2955) 2024-04-22 14:18:34 +01:00
blotus
c6e40191dd
Revert "docker: pre-download all hub items and data, opt-in hub updat… (#2947) 2024-04-18 15:33:51 +02:00
AlteredCoder
0746e0c091
Rename bouncers to Remediation component in openAPI (#2936)
* Rename bouncers to Remediation component in openAPI
2024-04-11 11:23:19 +02:00
mmetc
2291a232cb
docker: pre-download hub items (debian image) (#2934) 2024-04-08 15:00:45 +02:00
mmetc
0e8a1c681b
docker: pre-download all hub items and data, opt-in hub update/upgrade (#2933)
* docker: pre-download all hub items and data, opt-in hub update/upgrade

* docker/bars: don't purge anything before pre-downloading hub

* Docker: README update
2024-04-08 14:53:12 +02:00
mmetc
990dd5e08e
use go 1.21.9; update dependencies (#2931) 2024-04-05 15:11:11 +02:00
mmetc
2682f801df
windows: fix data file update (remove before rename) (#2930) 2024-04-05 14:57:33 +02:00
Thibault "bui" Koechlin
912c4bca70
split & reorganize tests a bit. Add tests on existing zones (#2925) 2024-04-03 17:49:05 +02:00
mmetc
26bcd0912a
docker: distribute geoip db in slim image (#2920) 2024-04-03 13:34:35 +02:00
Thibault "bui" Koechlin
63bd31b471
Fix REQUEST_URI behavior + fix #2891 (#2917)
* fix our behavior to comply more with modsec, REQUEST_URI should be: path+query string

* fix #2891 as well

* add new transforms

* add transform tests
2024-03-29 17:57:54 +01:00
mmetc
be97466809
CI: use golangci-lint 1.57 (#2916) 2024-03-26 09:30:32 +01:00
dependabot[bot]
df13f43156
Bump github.com/docker/docker (#2913)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 24.0.7+incompatible to 24.0.9+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v24.0.7...v24.0.9)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-26 09:13:16 +01:00
dependabot[bot]
368d22ec30
Bump github.com/jackc/pgx/v4 from 4.14.1 to 4.18.2 (#2887)
Bumps [github.com/jackc/pgx/v4](https://github.com/jackc/pgx) from 4.14.1 to 4.18.2.
- [Changelog](https://github.com/jackc/pgx/blob/v4.18.2/CHANGELOG.md)
- [Commits](https://github.com/jackc/pgx/compare/v4.14.1...v4.18.2)

---
updated-dependencies:
- dependency-name: github.com/jackc/pgx/v4
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-26 09:12:57 +01:00
Christian Kampka
f6bb8412c5
Add patterns_dir configuration option (#2868)
* Add patterns_dir configuration option

* Update config.yaml

---------

Co-authored-by: mmetc <92726601+mmetc@users.noreply.github.com>
2024-03-25 16:20:16 +01:00
mmetc
2e1ddec107
cscli: Add user-agent to all hub requests (#2915)
* cscli: Add user-agent to all hub requests

* fix unit test and avoid httpmock

* fix windows test
2024-03-25 10:40:41 +01:00
blotus
52f86c2d10
add libinjection expr helpers (#2914) 2024-03-21 11:39:37 +01:00
mmetc
7779c7ff0c
hub update: reload crowdsec if only data files have changed (#2912) 2024-03-20 15:46:14 +01:00
Thibault "bui" Koechlin
75a50c0c9d
improve a bit cscli examples when it comes to list mgmt (#2911) 2024-03-20 14:02:29 +01:00
mmetc
d9f2a22ee5
cscli metrics -> sort table order (#2908) 2024-03-20 13:27:28 +01:00
blotus
c76325b91b
Update windows pipeline (#2909) 2024-03-19 17:42:08 +01:00
mmetc
dd71f0a866
CI: bump lint version and update configuration (#2901)
* bump golangci-lint to 1.56

* lint (testifylint)

* update lint configuration

* windows test: remove stale code
2024-03-19 10:48:49 +01:00
Thibault "bui" Koechlin
b63e64ee9f
Fix locking logic for HA + add list unsubscribe for PAPI (#2904)
* add list unsubscribe operation for papi

* fix the locking logic for HA
2024-03-19 10:29:16 +01:00
blotus
6de62a1468
warn if user is using inotify to tail a symlink (#2881) 2024-03-19 10:22:43 +01:00
mmetc
b411782648
CI: use go 1.21.8 (#2906) 2024-03-19 10:03:54 +01:00
mmetc
2f49088163
file acquisition: don't bubble error when tailed file disappears (#2903)
* file acquisition: don't bubble error when tailed file disappears
* don't call t.Kill()
* lint (whitespace)
2024-03-18 11:25:45 +01:00
Manuel Sabban
fd2bb8927c
Fix rpm build (#2894)
* fix rpm build
2024-03-15 14:36:34 +01:00
Laurence Jones
e9b0f3c54e
wip: fix unix socket error (#2897) 2024-03-14 15:36:47 +00:00
mmetc
a6b0e58380
CI: bump github actions (#2895) 2024-03-14 14:04:45 +01:00
mmetc
caca4032d1
lapi: log error "can't synchronize with console" only if papi is enabled (#2896) 2024-03-14 14:03:43 +01:00
blotus
7dd86e2b95
add cron as a suggested package (#2799) 2024-03-14 14:02:53 +01:00
dependabot[bot]
06bebdeac7
Bump google.golang.org/protobuf from 1.31.0 to 1.33.0 (#2893)
Bumps google.golang.org/protobuf from 1.31.0 to 1.33.0.
2024-03-14 14:01:09 +01:00
blotus
742f5e8cda
[appsec] delete api key header before processing the request (#2890) 2024-03-14 14:00:39 +01:00
mmetc
6c042f18f0
LAPI: local api unix socket support (#2770) 2024-03-14 10:43:02 +01:00
Thibault "bui" Koechlin
2a7e8383c8
fix #2889 (#2892)
* fix #2889
2024-03-13 17:20:06 +01:00
Thibault "bui" Koechlin
b1c09f7512
acquisition : take prometheus level into account (#2885)
* properly take into account the aggregation level of prometheus metrics in acquisition
2024-03-13 14:57:19 +01:00
Manuel Sabban
bd785ede15
Fix armhf (#2886)
* armhf compile fix
2024-03-12 17:33:22 +01:00
Manuel Sabban
1a56a0e0b9
armhf fix for getfstype (#2884)
* armhf fix for getfstype
2024-03-12 14:33:10 +01:00
mmetc
49e0735b53
cscli tests + fix bouncer/machine prune (#2883)
* func tests: "cscli config feature-flags"
* func tests: "cscli bouncers list"
* func tests + fix: "cscli bouncers/machines prune"
* lint
2024-03-11 13:14:01 +01:00
blotus
6daaab1789
support both scope and scopes parameter in decisions filter (#2882) 2024-03-11 10:54:40 +01:00
blotus
e8ff13bc17
appsec: get the original UA from headers (#2809) 2024-03-08 15:04:36 +01:00
mmetc
a928b4d001
bump dependencies for geoip db / lookup (#2880) 2024-03-08 14:22:23 +01:00
blotus
44ec3b9e01
file acquis: add mutex to protect access to the internal tail map (#2878) 2024-03-08 13:56:59 +01:00
mmetc
6c5e8afde9
pkg/cwhub: download data assets to temporary files to avoid partial fetch (#2879) 2024-03-08 10:55:30 +01:00
mmetc
1eab943ec2
crowdsec: remove warning if prometheus port is taken during cold logs processing (#2857)
i.e. remove a "Warning: port is already in use" because it's probably LAPI
2024-03-07 14:36:28 +01:00
mmetc
8108e4156d
CI: "make generate" target; use ent 0.12.5 (#2871)
* CI: "make generate" target; pin tool versions
* use ent 0.12.5
* fix make help
* fix model generation target; re-run swagger
2024-03-07 14:25:25 +01:00
blotus
5731491b4e
Auto detect if reading logs or storing sqlite db on a network share (#2241) 2024-03-07 14:04:50 +01:00
mmetc
98560d0cf5
bin/crowdsec: avoid writing errors twice when log_media=stdout (#2876)
* bin/crowdsec: avoid writing errors twice when log_media=stdout
simpler, correct hook usage
* lint
2024-03-07 12:29:10 +01:00
mmetc
e611d01c90
cscli: hide hashed api keys (#2874)
* cscli: hide hashed api keys
* lint
2024-03-06 14:27:05 +01:00
mmetc
5356ccc6cd
cron: spread server load when upgrading hub and data files (#2873) 2024-03-06 13:42:57 +01:00
mmetc
d8877a71fc
lp metrics: collect datasources and console options (#2870) 2024-03-05 14:56:14 +01:00
mmetc
e7ecea764e
pkg/csconfig: use yaml.v3; deprecate yaml.v2 for new code (#2867)
* pkg/csconfig: use yaml.v3; deprecate yaml.v2 for new code
* yaml.v3: handle empty files
* Lint whitespace, errors
2024-03-04 14:22:53 +01:00
mmetc
41b43733b0
fix: log stack trace while computing metrics (#2865) 2024-03-01 10:52:35 +01:00
mmetc
8e9e091656
systemd: check configuration before attempting reload (#2861) 2024-02-26 13:44:40 +01:00
mmetc
a23fe06d68
remove dependencies on enescakir/emoji, gotest.tools (#2837)
* wrap emoji package in pkg/emoji
* remove dependency on enescakir/emoji
* remove dependency on gotest.tools
* lint (whitespace)
2024-02-23 16:05:01 +01:00
mmetc
4bf640c6e8
refact pkg/apiserver (auth helpers) (#2856) 2024-02-23 14:03:50 +01:00
mmetc
e34af358d7
refact cscli (globals) (#2854)
* cscli capi: avoid globals, extract methods
* cscli config restore: avoid global
* cscli hubtest: avoid global
* lint (whitespace, wrapped errors)
2024-02-23 10:37:04 +01:00
Laurence Jones
0df8f54fbb
Add unix socket option to http plugin. It has to be used in conjunction with the URL parameter, since we don't know which path the user wants: to communicate over a unix socket they need to set both, but the hostname can be whatever they want. We could be a little smarter and actually parse the URL, but adding code when a user can just define it correctly makes no sense (#2764) 2024-02-22 11:18:29 +00:00
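A minimal sketch of the pattern described above: sending HTTP requests over a unix socket while the URL keeps an arbitrary hostname. The socket path and URL are placeholders, and the real http plugin configuration may differ.

package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host:port derived from the URL and dial the socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/run/example/webhook.sock") // hypothetical path
			},
		},
	}
	// The hostname only ends up in the Host header; any value works.
	resp, err := client.Get("http://placeholder.local/notify")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}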
mmetc
8da490f593
refact pkg/apiclient (#2846)
* extract resperr.go
* extract method prepareRequest()
* reset token inside mutex
2024-02-22 11:42:33 +01:00
mmetc
3e3df5e4c6
refact "cscli config", remove flag "cscli restore --old-backup" (#2832)
* refact "cscli config show"
* refact "cscli config backup"
* refact "cscli confgi show-yaml"
* refact "cscli config restore"
* refact "cscli config feature-flags"
* cscli restore: remove 'old-backup' option
* lint (whitespace, wrapped errors)
2024-02-22 11:04:36 +01:00
Laurence Jones
f3ea88f64c
Appsec unix socket (#2737)
* Appsec socket

* Patch detection of nil listenaddr

* Allow TLS unix socket

* Merge diff issue
2024-02-21 13:40:38 +00:00
mmetc
e976614645
cscli metrics: rename buckets -> scenarios (#2848)
* cscli metrics: rename buckets -> scenarios
* update lint configuration
* lint
2024-02-15 14:34:12 +01:00
Thibault "bui" Koechlin
717fc97ca0
add SetMeta and SetParsed helpers (#2845)
* add SetMeta and SetParsed helpers
2024-02-14 13:38:40 +01:00
he2ss
97c441dab6
implement highAvailability feature (#2506)
* implement highAvailability feature
---------

Co-authored-by: Marco Mariani <marco@crowdsec.net>
2024-02-14 12:26:42 +01:00
mmetc
8de8bf0e06
pkg/hubtest: extract methods + consistent error handling (#2756)
* pkg/hubtest: extract methods + consistent error handling
* lint
* rename variables for further refactor
2024-02-14 11:53:12 +01:00
mmetc
2bbf0b4762
re-generate ent code (#2844) 2024-02-14 11:19:13 +01:00
mmetc
45571cea08
use go 1.21.7 (#2830) 2024-02-14 09:47:12 +01:00
mmetc
d34fb7e8a8
log processor: share apiclient in output goroutines (#2836) 2024-02-13 14:22:19 +01:00
mmetc
4561eb787b
bats: color formatter in CI (#2838) 2024-02-12 20:15:16 +01:00
mmetc
a6a4d460d7
refact "cscli console" (#2834) 2024-02-12 11:45:58 +01:00
mmetc
eada3739e6
refact "cscli notifications" (#2833) 2024-02-12 11:40:59 +01:00
blotus
bdecf38616
update codeql action to v3 (#2822) 2024-02-12 11:33:44 +01:00
mmetc
5c83695177
refact "cscli explain" (#2835) 2024-02-12 11:23:17 +01:00
mmetc
2853410576
refact "cscli alerts" (#2827) 2024-02-09 17:51:29 +01:00
mmetc
58a1d7164f
refact "cscli lapi" (#2825) 2024-02-09 17:39:50 +01:00
blotus
332af5dd8d
appsec: split return code for bouncer and user (#2821) 2024-02-09 14:39:34 +01:00
Laurence Jones
fa56d35a48
[Loki] Set headers/basic auth if set for queryRange (#2815) 2024-02-09 14:37:49 +01:00
mmetc
df159b0167
update calls to deprecated x509 methods (#2824) 2024-02-09 13:55:24 +01:00
mmetc
af1df0696b
refact cscli metric processing (#2816)
* typos
* refact cscli metric processing
* lint
2024-02-07 11:10:25 +01:00
Thibault "bui" Koechlin
3208a40ef3
Dedicated whitelist metrics (#2813)
* add proper whitelist metrics: both its own table and an extension to acquis metrics to track discarded/whitelisted lines
2024-02-06 18:04:17 +01:00
mmetc
4e724f6c0a
refact "cscli" root cmd (#2811)
* refact "cscli" root cmd
* lint (naming, imports, whitespace)
2024-02-06 10:50:28 +01:00
mmetc
fdc525164a
refact "cscli metrics" part 3 (#2807) 2024-02-06 10:07:05 +01:00
mmetc
81acad0d66
refact "cscli metrics" part 2 (#2806) 2024-02-02 10:40:55 +01:00
mmetc
5ff8a03195
refact "cscli metrics" par 1 (#2805) 2024-02-02 09:45:03 +01:00
mmetc
4160bb8102
refact "cscli decisions" (#2804)
* refact "cscli decisions"
* CI: relax mysql test timing
* lint
2024-02-01 22:36:21 +01:00
mmetc
f5fbe4a200
refact "cscli dashboard" (#2803) 2024-02-01 17:27:15 +01:00
mmetc
45c669fb65
refact "cscli papi" (#2802) 2024-02-01 17:27:00 +01:00
mmetc
825c08aa9d
refact "cscli simulation" (#2801) 2024-02-01 17:26:46 +01:00
mmetc
af14f1085f
refact "cscli <itemtype>" (#2782) 2024-02-01 17:26:06 +01:00
mmetc
e6f5d157b8
refact "cscli hub" (#2800) 2024-02-01 17:25:29 +01:00
mmetc
785fce4dc7
refact "cscli alerts" (#2778) 2024-02-01 17:24:00 +01:00
mmetc
17db4cb970
refact "cscli machines" (#2777) 2024-02-01 17:22:52 +01:00
mmetc
4192af30d5
refact "cscli bouncers" (#2776) 2024-01-31 12:40:41 +01:00
mmetc
3921c3f480
CI: rename workflows, improve docker build and tests (#2798) 2024-01-31 12:07:27 +01:00
mmetc
6507e8f4cd
cscli: don't print use_wal warning (#2794) 2024-01-30 11:07:53 +01:00
mmetc
66544baa7f
CI: workflow improvements (#2792)
- update deprecated action dependencies
- remove go version matrix (track stable version)
- optimize docker builds
- comments, renamed workflow
2024-01-30 10:20:25 +01:00
mmetc
311dfdee1f
Decouple docker image from package release (#2791)
- entry point fixes for 1.6.0
 - correctly override BUILD_VERSION argument
 - manual release workflow
2024-01-29 22:05:26 +01:00
mmetc
91b0fce955
option to override hub url template. for testers only. (#2785) 2024-01-25 12:53:20 +01:00
mmetc
532e97e00f
disable docker flavor test (#2783) 2024-01-25 09:58:48 +01:00
mmetc
d7116a4a6f
disable docker flavor test (#2781) 2024-01-25 00:03:56 +01:00
Laurence Jones
2fb6f209aa
Update docker_start.sh (#2780)
* Update docker_start.sh

* disable 'set -e' in docker entrypoint

---------

Co-authored-by: marco <marco@crowdsec.net>
2024-01-24 22:51:33 +00:00
Manuel Sabban
3f9e8e81e6
fix some bats tests (#2775) 2024-01-24 19:51:55 +01:00
mmetc
8c75efdb2a
lint: disallow naked returns (#2771) 2024-01-24 17:31:34 +01:00
mmetc
f75cdeb239
lint: enable linter "wastedassign" (#2772) 2024-01-24 17:31:11 +01:00
mmetc
4b8e6cd780
appsec: avoid nil dereference (#2773) 2024-01-23 09:32:41 +01:00
blotus
84606eb207
Appsec hooks fixes (#2769) 2024-01-22 13:33:20 +01:00
mmetc
dc698ecea8
log "loading papi client" only if papi is enabled (#2762) 2024-01-22 13:25:36 +01:00
mmetc
455acf7c90
lapi/papi: when receiving alerts, log and discard invalid addr/range (#2708)
https://github.com/crowdsecurity/crowdsec/issues/2687
2024-01-22 12:24:26 +01:00
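A minimal illustration (not the actual LAPI/PAPI code) of validating a received value as either an address or a range before accepting it, using only the standard library:

package main

import (
	"fmt"
	"net/netip"
)

// validAddrOrRange reports whether value parses as a single IP or a CIDR range.
func validAddrOrRange(value string) bool {
	if _, err := netip.ParseAddr(value); err == nil {
		return true
	}
	_, err := netip.ParsePrefix(value)
	return err == nil
}

func main() {
	for _, v := range []string{"192.0.2.1", "192.0.2.0/24", "not-an-ip"} {
		if !validAddrOrRange(v) {
			fmt.Printf("discarding invalid addr/range %q\n", v) // log and skip, as described above
			continue
		}
		fmt.Printf("accepting %q\n", v)
	}
}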
Thibault "bui" Koechlin
19d36c0fb2
Support console options in console enroll (#2760)
* make dev.yaml have a valid/default console path

* simplify and make more consistent help message about console opts

* allow enroll to specify options to enable

* allow 'all' shortcut for --enable
2024-01-19 15:49:00 +01:00
mmetc
ce32fc019e
func tests improvements (#2759)
* faster reload test
* decode-jwt script
* replace 'netcat' requirement with python script
* fix lapi status test
2024-01-19 13:55:28 +01:00
mmetc
6ffb68322f
pkg/hubtest: split hubtest_item.go (#2753)
* split hubtest_item.go, update linter config
* extract loops to methods
* split installParser
* split installScenario
* split installPostoverflow
* split installAppsecRule
* generalize method installHubItems()
2024-01-18 11:09:14 +01:00
mmetc
fa8d5b6992
post-install: reduce verbosity (#2751)
* post install: avoid message "no such file or directory" when looking for logs
* post install: avoid info and warning messages when registering to capi
* lint (backtick)
2024-01-18 09:20:52 +01:00
mmetc
5d0d5ac9c9
CI: enable code complexity linters (#2752) 2024-01-17 21:57:45 +01:00
mmetc
d760b401e6
apiclient: split auth_key, auth_retry, auth_jwt (#2743) 2024-01-17 15:08:41 +01:00
Laurence Jones
4df4e5b3bf
[parser/scenarios] defer yaml file closure (#2689)
* Defer close the fd's
* Convert fatals into return with errors
2024-01-17 12:09:01 +01:00
AlteredCoder
70e8377c0d
Fix appsec evt send order (#2749) 2024-01-17 11:59:31 +01:00
Thibault "bui" Koechlin
685cda545b
fix the reload process for appsec (#2750) 2024-01-17 11:54:44 +01:00
Laurence Jones
f156f178cd
Update governance.yml (#2748) 2024-01-16 16:49:14 +00:00
AlteredCoder
a52f1b75ff
Don't close the body of the request (#2747) 2024-01-16 17:23:35 +01:00
blotus
421ef3bf9c
add cpu-profile flag (#2723) 2024-01-16 11:40:29 +01:00
mmetc
08794c5b6d
[appsec] waf tester (#2746) 2024-01-16 11:39:23 +01:00
AlteredCoder
a65223aa5b
Add original http request to hooks (#2740) 2024-01-16 10:33:44 +01:00
mmetc
24b5e8f100
Fix #2733 "cscli hang forever when i try to delete a decision" (#2745) 2024-01-16 09:16:21 +01:00
mmetc
c6e4762f28
apiserver: remove cached field isEnrolled (#2744)
not worth it just to avoid parsing a string twice
2024-01-16 09:14:33 +01:00
blotus
6acbcb0a33
Various appsec fixes (#2742) 2024-01-15 16:38:11 +01:00
blotus
e452dc80bd
ignore native modsec rules that were either pass or allow (#2684) 2024-01-15 15:12:02 +01:00
blotus
fd309134a2
log death reason of file reader if available (#2721) 2024-01-15 15:00:49 +01:00
mmetc
48f011dc1c
apiclient/apiserver: lint/2 (#2741) 2024-01-15 12:38:31 +01:00
mmetc
75d8ad9798
apiclient/apiserver: lint (#2739) 2024-01-15 11:44:38 +01:00
mmetc
03bb194d2c
Docker: allow setting BUILD_VERSION as a build argument (#2736)
* Docker: allow setting BUILD_VERSION as a build argument
* CI: don't attempt to publish docker images outside of the crowdsecurity org
* use go 1.21.6 for docker and windows too
2024-01-15 11:05:27 +01:00
Thibault "bui" Koechlin
6ca053ca67
fix #2720 #2719 (#2724)
* fix order of display of parsers

* add a --no-clean opt
2024-01-15 09:16:03 +01:00
mmetc
1e0bcedef5
Ignore missing console/context.yaml if not explicitly required by config.yaml (#2726) 2024-01-12 16:29:04 +01:00
mmetc
733f5e165b
csprofiles: fix default decision duration, lint (#2703)
* return nil with errors
* errors.Wrap -> fmt.Errorf
* var -> const
* fix default decision duration
* lint (whitespace)
2024-01-12 15:18:59 +01:00
mmetc
0ef5f20aa7
bin/crowdsec: avoid writing errors twice when log_media=stdout (#2729)
* bin/crowdsec: avoid writing errors twice when log_media=stdout
* lint
2024-01-12 14:44:09 +01:00
mmetc
fca8883cd9
cscli capi status -> message for missing credentials (#2730)
* cscli capi status -> message for missing credentials
* lint
2024-01-12 14:41:36 +01:00
Thibault "bui" Koechlin
896dfefcdf
[appsec] implement count transformation (#2698)
* implement count transfo
2024-01-12 14:30:08 +01:00
mmetc
6960419a2e
Remove redundant file check for capi_whitelists_path (#2728) 2024-01-12 14:17:01 +01:00
Thibault "bui" Koechlin
adba4e2a2f
fix multizone multivar (#2727) 2024-01-12 10:11:13 +01:00
mmetc
aa4f02c798
wizard: while installing, don't hide hub download/timeout errors (#2710)
* wizard: while installing, don't hide hub download/timeout errors
* lint, whitespace
2024-01-11 16:30:42 +01:00
mmetc
260f5a7992
pkg/cwhub: improve error messages (#2712)
* pkg/cwhub: improve error messages
* lint
2024-01-11 10:28:58 +01:00
mmetc
0f722916b8
use go 1.21.6 (#2714)
* use go 1.21.6
2024-01-11 09:40:51 +01:00
mmetc
a59ae61441
Makefile: use GO macro if set, to check for version (#2706) 2024-01-11 09:25:33 +01:00
mmetc
437a97510a
apiclient: handle 0-byte error response (#2716)
* apiclient: correctly handle 0-byte response
* lint
2024-01-10 12:00:22 +01:00
mmetc
f306d59016
logging: full timestamp with timezone in crowdsec.log (#2707)
RFC3339 = "2006-01-02T15:04:05Z07:00" (same as /var/log/syslog)
2024-01-08 21:20:25 +01:00
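As a rough sketch of the effect, assuming logrus (which crowdsec uses for logging); the exact formatter settings used for crowdsec.log may differ:

package main

import (
	"time"

	log "github.com/sirupsen/logrus"
)

func main() {
	log.SetFormatter(&log.TextFormatter{
		FullTimestamp:   true,
		TimestampFormat: time.RFC3339, // "2006-01-02T15:04:05Z07:00"
	})
	log.Info("log lines now carry the full date and timezone")
}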
blotus
58f91dc951
update coraza (#2705) 2024-01-08 19:44:24 +01:00
AlteredCoder
bd47dac6a3
Fix #2697 (#2702)
* Also print seclang rules in cscli inspect
2024-01-08 16:44:05 +01:00
blotus
5d5a1117e1
Send installed appsec rules as part of the scenarios on login (#2704) 2024-01-08 14:33:53 +01:00
Sebastien Blot
ecd1a8bfed
Revert "Send installed appsec rules as part of the scenarios on login"
This reverts commit f99f003a50.
2024-01-08 10:54:39 +01:00
Sebastien Blot
f99f003a50
Send installed appsec rules as part of the scenarios on login 2024-01-08 10:54:07 +01:00
mmetc
5622ac8338
CI: enable testifylint (#2696)
- reverse actual and expected values
 - use assert.False, assert.True
 - use assert.Len, assert.Empty
 - use require.Error, require.NoError
 - use assert.InDelta
2024-01-05 15:26:13 +01:00
mmetc
da746f77d5
apiserver/apiclient: compact tests (#2694)
* apiserver/apiclient: compact tests
* update golangci-lint configuration
2024-01-04 17:10:36 +01:00
Thibault "bui" Koechlin
1c03fbe99e
minor waf fixes (#2693) 2024-01-03 17:19:48 +01:00
mmetc
a504113186
lint (wsl) (#2692) 2024-01-03 10:55:41 +01:00
mmetc
2a2b09b52a
cwhub: install --force repair tainted, non-installed items (#2686) 2024-01-03 10:08:45 +01:00
mmetc
ca784b147b
test and log fixes (#2690)
* cscli inspect: suggest --diff if an item is tainted
* appropriate warning, or error if context configuration file is empty
* fix user/group lookup unit test
* fix: allow hub upgrade --force with local items
* fix pkg/parser lookup for 8.8.8.8
* fix func test
* fix hubtests: machines add --force
2024-01-03 09:33:52 +01:00
blotus
b6f272d09a
always set the transaction in the current request (#2682) 2023-12-22 11:44:06 +01:00
blotus
a62e28fdfb
always set inband transaction even if we have no rules (#2681) 2023-12-22 10:18:35 +01:00
Laurence Jones
bc9bfa81b2
[notifications] fix segfault because url is not loaded (#2679) 2023-12-21 12:27:34 +00:00
mmetc
162768bdec
Bump golangci-lint run to 1.55, update defaults (#2677)
The plan is to enable linters first, then fix issue types one by one.
2023-12-21 12:30:20 +01:00
Laurence Jones
2212c2f847
[notifications] Fix bug: list shows non-active (#2678)
* Fix bug: show non-active notifications and sort based on profiles

* diff fix
2023-12-21 11:16:54 +00:00
blotus
33e3fdabe4
Appsec additional fixes (#2676) 2023-12-21 11:51:04 +01:00
Zafer Balkan
e1932ff01e
Used asterisk for Defender Firewall log name (#2671)
Log name is configurable. MD Docs recommend a log file per profile: https://learn.microsoft.com/en-us/windows/security/operating-system-security/network-security/windows-firewall/configure-logging?tabs=intune
2023-12-20 10:28:40 +01:00
Manuel Sabban
052accd6bb
welcome message when installing packages (#2672)
* welcome message when installing packages
2023-12-20 09:44:10 +01:00
mmetc
240f057f95
postinst: update check for enabled lapi (#2674) 2023-12-19 21:46:34 +01:00
mmetc
6e34d609b7
cscli: silence cwhub logger for non-hub related commands (#2675) 2023-12-19 17:20:09 +01:00
mmetc
fd22bb5ec2
CI: update test dependencies (#2668) 2023-12-19 15:28:30 +01:00
mmetc
1530d93fc1
Update localstack services + loki (dev and CI) (#2649) 2023-12-19 15:27:04 +01:00
dependabot[bot]
e0e9e3ef16
Bump golang.org/x/crypto from 0.16.0 to 0.17.0 (#2670)
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.16.0 to 0.17.0.
- [Commits](https://github.com/golang/crypto/compare/v0.16.0...v0.17.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-12-19 09:45:53 +01:00
mmetc
44eb4d4a94
Makefile: "make help" target, remove obsolete "notification-email" target (#2282) 2023-12-18 10:13:08 +01:00
mmetc
822fcdacbb
fflags: don't print deprecation warning if there is no message (papi) (#2666) 2023-12-18 09:35:57 +01:00
mmetc
08694adf1b
lint (errorlint) (#2644) 2023-12-18 09:35:28 +01:00
mmetc
c2c173ac7e
Parallel hubtests (#2667)
* generate hub tests in python
* run hub tests in 3 batches at the same time (hardcoded)
2023-12-15 18:30:20 +01:00
mmetc
a79fcaf378
Add "taintedBy" and "--diff" flag to cscli... inspect (#2665)
* "cscli inspect" reports tainted sub-items
* cscli... inspect --diff
* unified diff
* option --diff --rev
* tainted message
* correctly report multiple taint reasons
2023-12-15 15:27:22 +01:00
blotus
bc3a179af9
Add env vars to install/remove appsec-{configs,rules} in docker image (#2664) 2023-12-14 16:54:12 +01:00
blotus
9b07e1f7ce
update scenarios and parsers constraints (#2663) 2023-12-14 16:34:51 +01:00
mmetc
a851e14c88
improve deprecation message with file location (#2662)
* better "lapi context" messages
* func tests: include all items in hub_purge_all
* docker + tests: update yq
2023-12-14 16:11:11 +01:00
AlteredCoder
a941576acc
Improvement to run hubtest for appsec in docker (#2660) 2023-12-14 16:05:16 +01:00
mmetc
89f704ef18
light pkg/api{client,server} refact (#2659)
* tests: don't run crowdsec if not necessary
* make listen_uri report the random port number when 0 is requested
* move apiserver.getTLSAuthType() -> csconfig.TLSCfg.GetAuthType()
* move apiserver.isEnrolled() -> apiclient.ApiClient.IsEnrolled()
* extract function apiserver.recoverFromPanic()
* simplify and move APIServer.GetTLSConfig() -> TLSCfg.GetTLSConfig()
* moved TLSCfg type to csconfig/tls.go
* APIServer.InitController(): early return / happy path
* extract function apiserver.newGinLogger()
* lapi tests
* update unit test
* lint (testify)
* lint (whitespace, variable names)
* update docker tests
2023-12-14 14:54:11 +01:00
mmetc
67cdf91f94
Short build tag in version number (#2658)
* use short commit hash in version number
* var -> const
* cscli: extract version.go, doc.go
* don't repeat commit hash in version number
2023-12-14 09:16:38 +01:00
Thibault "bui" Koechlin
51f70e47e3
Minor improvements to hubtest and appsec component (#2656) 2023-12-13 17:45:56 +01:00
mmetc
12d9fba4b3
cscli machines: lint + write output to stdout instead of log (#2657)
* feedback on stdout, not log.Info
* rename parameters to silence warnings from "unusedparams"
* debian postinst: skip duplicate warnings with 'cscli machines add'
* rpm postinst: skip duplicate warnings in 'cscli machines add'
* update func tests
* debian prerm: if dashboard remove fails, explain it's ok
* debian prerm: suppress warnings about wal, capi when attempting to remove the dashboard
* wizard.sh: log format like crowdsec
2023-12-13 15:43:46 +01:00
Manuel Sabban
6eabb461ce
copy debian behavior for now (#2655) 2023-12-12 13:22:15 +01:00
Laurence Jones
b1c9717e21
[http plugin] Add capath, certpath, keypath to load custom certs (#2634)
* Add cacert, certpath, certkey to http plugin to load custom certificates

* rename func to get tls client as it doesn't make sense calling it api

* Fix: if capath is empty we should return the current certificates

* Remove comment
2023-12-12 10:36:45 +00:00
he2ss
4a4b309790
docker: add new env var to enable console_management (#2599) 2023-12-12 10:24:03 +01:00
mmetc
acd2a14dc9
docker: add -slim variant to ghcr.io (#2653) 2023-12-11 13:51:00 +01:00
mmetc
c10aad79d9
cscli refact / encapsulate methods for capi, hubtest, dashboard, alerts, decisions, simulation (#2650) 2023-12-11 10:32:54 +01:00
mmetc
1b4ec66148
fix package tests for 1.5.6-rc2 (#2652)
* fix go deps for test
* fix package tests for 1.5.6-rc2
* fix context func tests
* docker: pin build image version for alpine

---------

Co-authored-by: sabban <github@sabban.eu>
2023-12-11 10:07:09 +01:00
blotus
04f3dc09f9
remove PAPI feature flag (#2601) 2023-12-08 14:55:45 +01:00
Manuel Sabban
c707b72b03
fix lapi credentials creation for debian package (#2646)
* fix lapi credentials creation for packages
---------

Co-authored-by: mmetc <92726601+mmetc@users.noreply.github.com>
2023-12-08 12:10:23 +01:00
mmetc
84cbff16d4
restrict file permissions from "machines add" (#2648) 2023-12-08 10:51:15 +01:00
AlteredCoder
b1f85693c2
Appsec improvement and fixes after merge (#2645) 2023-12-08 10:25:00 +01:00
mmetc
518c7f178a
update dependency on aws sdk (#2647) 2023-12-08 10:07:53 +01:00
mmetc
4acb4f8df3
cwhub: context type (#2631)
* add hub type "context"
* cscli lapi: log.Fatal -> fmt.Errorf; lint
* tests for context.yaml
* load console context from hub
* original & compiled context
* deprecate "cscli lapi context delete"
$ cscli lapi context delete
Command "delete" is deprecated, please manually edit the context file.
* cscli completion: add appsec-rules, appsec-configs, explain, hubtest
2023-12-07 16:20:13 +01:00
mmetc
3e86f52250
cscli refact - encapsulation with types (#2643)
* refactor type cliHub, cliBouncers, cliMachines, cliPapi, cliNotifications, cliSupport, type cliExplain
2023-12-07 14:36:35 +01:00
Thibault "bui" Koechlin
8cca4346a5
Application Security Engine Support (#2273)
Add a new datasource that:
- Receives HTTP requests from remediation components
- Apply rules on them to determine whether they are malicious or not
- Rules can be evaluated in-band (the remediation component will block the request directly) or out-band (the RC will let the request through, but crowdsec can still process the rule matches with scenarios)

The PR also adds support for 2 new hub items:
- appsec-configs: Configure the Application Security Engine (which rules to load, in which phase)
- appsec-rules: a rule that is added in the Application Security Engine (can use either our own format, or seclang)

---------

Co-authored-by: alteredCoder <kevin@crowdsec.net>
Co-authored-by: Sebastien Blot <sebastien@crowdsec.net>
Co-authored-by: mmetc <92726601+mmetc@users.noreply.github.com>
Co-authored-by: Marco Mariani <marco@crowdsec.net>
2023-12-07 12:21:04 +01:00
mmetc
90d3a21853
CI: use go 1.21.5 (#2640)
* use go 1.21.5
* Simpler go:build directives
2023-12-06 12:38:36 +01:00
mmetc
1ab4487b65
cscli hub list: show only non-empty tables with -o human
* agent config: remove unused LintOnly bool
* Item.IsLocal() -> Item.State.IsLocal(); split method InstallStatus()
* cscli hub list: show only non-empty tables with -o human
2023-12-05 13:38:52 +01:00
mmetc
486f96e7ac
cscli context detect: fix nil dereference (#2635)
* cscli context detect: fix nil dereference
* Remove log.warning for missing pattern
2023-12-05 12:08:35 +01:00
mmetc
8bb7da3994
docker tests: force local machine creation (#2636)
This is required from 1.5.6 to overwrite the local credentials file
2023-12-05 11:52:04 +01:00
mmetc
0f3ae64062
cscli config show: pretty print with package "litter" (#2633) 2023-12-05 10:38:21 +01:00
mmetc
0c4093dcca
Test for acquisition errors in crowdsec -t (#2629) 2023-12-04 23:09:42 +01:00
mmetc
23968e472d
Refact bouncer auth (#2456)
Co-authored-by: blotus <sebastien@crowdsec.net>
2023-12-04 23:06:01 +01:00
mmetc
a5ab73d458
cscli machines add: don't overwrite existing credential file (#2625)
* cscli machines add: don't overwrite existing credential file
* keep old behavior with --force
Now --force is used both to override the replacement of an existing machine,
and an existing credentials file. To retain the old behavior, the
existence of the file is only checked for the default configuration, not
if explicitly specified.
2023-12-04 22:59:52 +01:00
Laurence Jones
f8755be9cd
Fix format in documentation (#2577)
When generating the decisions import documentation, Docusaurus v3 no longer allows `{` without escaping; this adds the escaping
2023-12-04 15:52:14 +00:00
Laurence Jones
d1bfaddb69
[Plugin] Pass down ctx and use it (#2626)
* Pass down cancellable context and update http plugin

* Use context where we can
2023-12-04 12:05:26 +00:00
Laurence Jones
bfc92ca1c5
[Explain] Ignore blank lines as crowdsec will anyway (#2630)
* Ignore blank lines within file and stdin

* change cleanup to be persistent postrun so if we exit early it always cleans

* When using log flag we should add a newline so we know where EOF is

* Invert the check for log line since we don't want to modify the line itself

* Wrap run explain with a function that returns the error after cleaning up

* Wrap run explain with a function that returns the error after cleanup

* Use a defer iif instead of global var

* Add invalid len input to err count so it's more obvious what is happening

---------

Co-authored-by: Manuel Sabban <github@sabban.eu>
2023-12-04 11:48:12 +00:00
Laurence Jones
ed3d501081
[Metabase] QOL Changes and chown wal files (#2627)
* Add detection of sqlite wal for dashboard chown

* Lean it down a little

* Change to for loop with extensions

* Keep existing uid on files in case the user is running as an unprivileged user

* I have no idea 🤷

* Exclude dash.go and update windows

* Update

* Rename

* Remove the os check since we no longer get to this stage for those os's

---------

Co-authored-by: Manuel Sabban <github@sabban.eu>
2023-12-04 10:06:41 +00:00
mmetc
7e5ab344a2
command "cscli hub types" (#2632)
* Command "cscli hub types"; de-duplicate test/bin/preload-hub-items
* don't export Hub.Items -> hub.items
2023-12-01 09:36:38 +01:00
Cristian Nitescu
7c5cbef51a
manage force_pull message for one blocklist (#2615)
* manage force_pull message for one blocklist

* fix info message on force pull blocklist
2023-11-29 11:37:46 +01:00
mmetc
6b0bdc5eeb
Refact pkg/cwhub: fix some known issues and reorganize files (#2616)
* bump gopkg.in/yaml.v3
* test: cannot remove local items with cscli
* test dangling links
* test: cannot install local item with cscli
* pkg/cwhub: reorg (move) functions in files
* allow hub upgrade with local items
* data download: honor Last-Modified header
* fatal -> warning when attempting to remove a local item (allows remove --all)
* cscli...inspect -o yaml|human: rename remote_path -> path
* Correct count of removed items
Still no separate counter for the --purge option, but should be clear enough
2023-11-28 23:51:51 +01:00
mmetc
1aa4fc5949
CI: avoid pipe in makefile, correctly report error in CI when tests fail (#2621)
so we don't assume bash+pipefail for the makefile
2023-11-28 17:10:44 +01:00
blotus
380cbf70a9
force rfc 3339 date format in metrics push (#2402) 2023-11-28 16:30:20 +01:00
Laurence Jones
05c1825622
Add to dump after postoverflow so we can test within hubtest (#2511)
Co-authored-by: Thibault "bui" Koechlin <thibault@crowdsec.net>
2023-11-28 13:18:41 +00:00
Laurence Jones
6a61b919e7
[cscli] notifications test command and slight re write (#2391)
* Merge main and apply stash

* Rework some of cscli notif stuff and add a generic test which works with non active profiles

* Update wording

* Fix merge

* Final version

* Cleanup
2023-11-28 13:17:54 +00:00
mmetc
15542b78fb
refact BulkDeleteDecisions (#2308)
Code cleanup and de-duplication.
2023-11-26 22:30:03 +01:00
mmetc
b164373997
update dependencies: k8s apiserver, docker and related (#2476) 2023-11-24 16:20:39 +01:00
mmetc
ffcab0b2bc
Refactor hub management and cscli commands (#2545) 2023-11-24 15:57:32 +01:00
mmetc
32e9eb4be4
Minor dependency updates (#2505)
* update AlecAivazis/survey
* update Masterminds/semver
* update Masterminds/sprig
* update alexliesenfeld/health
* update golang.org/x/net
2023-11-24 15:30:54 +01:00
mmetc
76d4bc7788
cscli bouncers: increase key size, deprecate and ignore --length option (#2531)
the switch to base64 made the keys shorter (24 characters), this PR increases their size to 32 bytes, 42 chars once encoded

Also deprecate the --length option, users can already provide a key
2023-11-24 15:01:13 +01:00
mmetc
ec199162dc
iso8601: use yyyy-mm-dd in log timestamps instead of dd-mm-yyyy (#2564)
Co-authored-by: Thibault "bui" Koechlin <thibault@crowdsec.net>
2023-11-24 14:59:28 +01:00
Thibault "bui" Koechlin
1dcf9d1ae1
Improved expr debugger (#2495)
* new expr debugger

---------

Co-authored-by: mmetc <92726601+mmetc@users.noreply.github.com>
2023-11-24 11:10:54 +01:00
mmetc
7ffa0cc787
docker: replace cp -an with rsync to allow bind-mount of files in /etc/crowdsec (#2611)
fix for https://github.com/crowdsecurity/crowdsec/issues/2480
2023-11-23 11:08:14 +01:00
blotus
ec53c672dc
Kafka acquisition: warn if no consumer group id and allow to read from a specific partition (#2612) 2023-11-23 10:02:53 +01:00
lperdereau
92f923cfa8
Loki integration #2 (#2306)
* Add support for loki datasource

---------

Co-authored-by: Mathieu Lecarme <mathieu@garambrogne.net>
Co-authored-by: Sebastien Blot <sebastien@crowdsec.net>
Co-authored-by: Thibault "bui" Koechlin <thibault@crowdsec.net>
2023-11-22 13:31:39 +01:00
he2ss
947b247a40
kafkaAcquisition: add more debug (#2609)
* kafkaAcquisition: add more debug
2023-11-22 09:35:58 +01:00
blotus
d7ef51e6ba
properly update the cs_syslogsource_parsed_total metric (#2608) 2023-11-22 09:04:23 +01:00
dependabot[bot]
a51bce8f8d
Bump google.golang.org/grpc from 1.56.1 to 1.56.3 (#2566)
Bumps [google.golang.org/grpc](https://github.com/grpc/grpc-go) from 1.56.1 to 1.56.3.
- [Release notes](https://github.com/grpc/grpc-go/releases)
- [Commits](https://github.com/grpc/grpc-go/compare/v1.56.1...v1.56.3)

---
updated-dependencies:
- dependency-name: google.golang.org/grpc
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2023-11-20 11:55:29 +01:00
mmetc
47eb2e240d
Use go 1.21.4 (#2595) 2023-11-16 11:09:13 +01:00
guangwu
ddd6ee8e42
fix: typo (#2582)
Signed-off-by: guoguangwu <guoguangwu@magic-shield.com>
2023-11-08 09:26:34 +01:00
mmetc
5cd4406f5e
typos/grammar (#2561) 2023-11-07 15:07:36 +01:00
Manuel Sabban
4934fce769
update gantsign.golang name (#2558) 2023-11-07 14:53:14 +01:00
mmetc
272cf543b3
Release action: fix asset upload (#2565) 2023-10-25 14:51:36 +02:00
Laurence Jones
d2d788c5dc
[hubtest] escape scenario assert meta keys (#2551) 2023-10-17 15:29:21 +01:00
Thibault "bui" Koechlin
a4dc5053d2
fix null deref in cti calls if key is empty (#2540)
* fix null deref in cti calls if key is empty

* avoid hardcoded error check
2023-10-17 09:34:53 +01:00
Laurence Jones
19de3a8a77
Runtime whitelist parsing improvement (#2422)
* Improve whitelist parsing

* Split whitelist check into a function tied to whitelist, also since we check node debug we can make a pointer to node containing whitelist

* No point passing clog as an argument since it is just a pointer to node we already know about

* We should break instead of returning false, false as it may have been whitelisted by ips/cidrs

* reimplement early return if expr errors

* Fix lint; don't parse ip back to string, just loop over sources

* Log error with node logger as it provides context

* Move getsource to a function cleanup some code

* Change func name

* Split out compile to a function so we can use in tests. Add a bunch of tests

* spell correction

* Use node logger so it has context

* alternative solution

* quick fixes

* Use containswls

* Change whitelist test to use parseipsource and only events

* Make it simpler

* Postoverflow tests, some basic ones to make sure it works

* Use official pkg

* Add @mmetc reco

* Add @mmetc reco

* Change if if to a switch to only evaluate once

* simplify assertions

---------

Co-authored-by: bui <thibault@crowdsec.net>
Co-authored-by: Marco Mariani <marco@crowdsec.net>
2023-10-16 10:08:57 +01:00
Laurence Jones
e7ad3d88ae
Clear up some community confusion (#2543) 2023-10-16 10:08:41 +01:00
Thibault "bui" Koechlin
3cd4847093
sort map keys when generating asserts (#2494)
* sort map keys when generating asserts
2023-10-16 09:54:19 +02:00
Laurence Jones
b2a6eb92bf
Dont create 3 maps just pass the same one to expr (#2421) 2023-10-13 22:35:30 +01:00
Laurence Jones
f0cda0406b
Load file only once if specified twice, and bail earlier if type is unknown (#2419) 2023-10-13 22:34:57 +01:00
Laurence Jones
ff7acd3347
Reset grokky once all patterns are compiled as we do not need to hold them in memoory (#2420) 2023-10-13 12:53:42 +01:00
mmetc
a6b55f2b5e
cscli config feature-flags: point user to the right location of feature.yaml (#2539) 2023-10-13 09:52:51 +02:00
mmetc
a254b436c7
use go 1.21.3 (#2535) 2023-10-12 16:28:24 +02:00
mmetc
3b1563a538
Refact cscli hub / pkg/cwhub (part 6) (#2524)
* hub.ConfigDir -> hub.InstallDir; hub.DataDir -> hub.InstallDataDir
* cleanup GetInstalledItemsAsString()
* lint: ReferenceMissingError -> ErrMissingReference
* lint: parent_dir -> parentDir
* lint: export Walker type
* lint: return error last
* lint: shadow
* move around and group variable definitions
2023-10-09 21:33:35 +02:00
mmetc
0ecb6eefee
add missing scenarios in first login when authenticating with TLS (#2454)
* refact jwt:Authenticator
* include scenarios in first login request for machines with tlsAuth
* log.Printf -> log.Infof
* errors.Wrap -> fmt.Errorf
* don't override validation error
* fix test
2023-10-09 15:26:38 +02:00
Manuel Sabban
6e228f3f3f
pkg/cwhub: cleanup in argument call (#2527)
* cleanup in argument call
* update test as well
* cwhub_tests: reduce verbosity and use helpers

---------

Co-authored-by: Marco Mariani <marco@crowdsec.net>
2023-10-09 13:26:34 +02:00
Laurence Jones
28238cb01f
reverse nil statement instead of else (#2530) 2023-10-09 11:36:05 +01:00
Laurence Jones
0dd22e8b93
convert ifelseif to switch (#2529) 2023-10-09 11:23:19 +01:00
mmetc
9ae8bd79c5
Refact pkg/csconfig tests (#2526)
* remove unused method
* whitespace, redundant comments
* use test helpers
* move DumpConsoleConfig() from pkg/csconfig to cscli
* package doc header
* var -> const
* rename ./tests -> ./testdata
* shorter tests with more error checks
* lint/formatting
* use helpers; fix tests that didn't actually test
* lint; rename expectedResult -> expected
2023-10-09 11:10:51 +02:00
blotus
6b5da29e3d
Use a default duration if no duration is provided in a profile (#2520) 2023-10-06 14:43:17 +02:00
Thibault "bui" Koechlin
6c20d38c41
lighten bucket logger (#2523) 2023-10-06 14:42:44 +02:00
mmetc
338141f067
Refact cscli hub / pkg/cwhub (part 5) (#2521)
* remove unused yaml tags
* cscli/cwhub: deduplicate, remove dead code
* log.Fatal -> fmt.Errorf
* deflate utils.go by moving functions to respective files
* indexOf() -> slices.Index()
* ItemStatus() + toEmoji() -> Item.status()
* Item.versionStatus()
* move getSHA256() to loader.go
2023-10-06 13:59:51 +02:00
mmetc
9235f55c47
Refact pkg/cwhub (part 4) (#2518)
* generalize function: GetInstalledItems, GetInstalledItemsAsString
* extracted function itemKey, happy path
* review comments / remove redundant; rename file to remove build tags
* remove unused fields in Item struct
* unix build tag
2023-10-05 09:35:03 +02:00
mmetc
61d4ccbfdd
use go 1.21.1 (#2418)
* use go 1.21.1, require 1.21
* import "slices" from stdlib
* allow codeql to set version number from tags
* codeql: custom WASM build - the automated one can silently fail
2023-10-04 13:01:57 +02:00
mmetc
89028f17cf
Refact pkg/cwhub (part 3) (#2516)
* removed unused error; comment
* rename loop variables
* happy path
* rename loop variables
* extract function, method
* log.Printf -> log.Infof
* tests -> testdata

from "go help test":

The go tool will ignore a directory named "testdata", making it available
to hold ancillary data needed by the tests.

* align tags
* extract function toEmoji
2023-10-04 12:54:21 +02:00
mmetc
3253b16f0f
Refact pkg/cwhub (part 2) (#2513)
* remove globals for walker callback
* extract method getItemInfo()
* code dedup, if/else -> switch
* dedent: happy path
* remove target variable
2023-10-04 11:17:35 +02:00
mmetc
5618ba9f46
cscli: refactor hub commands (#2500) 2023-10-04 10:42:47 +02:00
mmetc
d39131d154
Refact pkg/cwhub (part 1) (#2512)
* wrap errors, whitespace
* remove named return
* reverse CheckSuffix logic, rename function
* drop redundant if/else, happy path
* log.Fatal -> fmt.Errorf
* simplify GetItemMap, AddItem
* var -> const
* removed short-lived vars
* de-duplicate function and reverse logic
2023-10-04 10:34:10 +02:00
mmetc
8b5ad6990d
lint: pkg/cwhub (#2510)
no functional changes
 
 - reformat
 - comments
 - whitespace
 - removed a dot or two in log messages
 - some "var x=y" -> x:=y
2023-10-03 11:20:56 +02:00
mmetc
6dadfcb2ef
refact: simplify hubtest CopyDir() (#2509) 2023-10-03 11:17:02 +02:00
mmetc
7a4796d655
Support Postgres 16 (update entgo.io/ent to 0.12.4) (#2368) 2023-10-02 16:30:09 +02:00
mmetc
cba6de024f
cscli: restore config correctly if acquis.d already exists (#2504) 2023-10-02 13:31:04 +02:00
mmetc
bfda483c0a
fix issue #2499 - nil dereference while using capi whitelists (#2501) 2023-10-02 11:42:17 +02:00
mmetc
3cb9dbdb21
notification-email: configurable timeouts (#2465)
* configurable timeouts
* parse email timeouts as duration string
* add helo_host to email.yaml
* move html and body tags outside of the loops
* added quotes to href=.., and formatting test
2023-09-29 16:59:06 +02:00
Laurence Jones
b8e6bd8c9a
[Explain] s02 can cause panic if empty (#2486)
* Add parsers length check as it can panic if enrich is empty

* Lets get smarter and loop backwards to find last successful stage

* Shorten code

---------

Co-authored-by: Thibault "bui" Koechlin <thibault@crowdsec.net>
2023-09-29 12:03:56 +01:00
mmetc
95ed308207
cscli setup: accept stdin; fix proftpd detection test and service unmask (#2496) 2023-09-29 12:58:35 +02:00
mmetc
0d1c4c6070
update test dependencies (#2490) 2023-09-29 10:19:55 +02:00
Thibault "bui" Koechlin
8f6659a2ec
fix the float comparison by using the Abs(a,b) < 1e-6 approach (IEEE 754). Move the initialization of expr helpers (#2492) 2023-09-28 17:22:00 +02:00
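The approach named above, as a tiny standalone sketch: two floats are treated as equal when their absolute difference falls below a small epsilon, sidestepping IEEE 754 rounding noise (the helper name here is illustrative, not the crowdsec expr helper):

package main

import (
	"fmt"
	"math"
)

const epsilon = 1e-6

// floatEquals compares two floats with a tolerance instead of ==.
func floatEquals(a, b float64) bool {
	return math.Abs(a-b) < epsilon
}

func main() {
	fmt.Println(0.1+0.2 == 0.3)            // false: IEEE 754 rounding
	fmt.Println(floatEquals(0.1+0.2, 0.3)) // true: within epsilon
}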
Laurence Jones
9dba6db676
add alert alias (#2485) 2023-09-23 19:35:02 +01:00
Laurence Jones
37c0c067a8
cscli hubtest whitelist (#2479)
* Initial tests

* Always print whitelist as we can compare if we mess up the opposite way
2023-09-20 16:42:19 +01:00
Thibault "bui" Koechlin
e4dcdd2572
fix include_capi filter (#2478) 2023-09-20 11:56:00 +02:00
mmetc
ac01faf483
strip '=' signs from encoded api keys (#2472)
Co-authored-by: Thibault "bui" Koechlin <thibault@crowdsec.net>
2023-09-19 14:00:23 +02:00
Thibault "bui" Koechlin
4c08e1e68c
exclude 'lists' too if we exclude CAPI (#2474) 2023-09-19 13:56:22 +02:00
mmetc
d5b6f2974b
Avoid sending nil body with metrics (#2470) 2023-09-19 13:53:50 +02:00
Laurence Jones
64deeab1ec
Fix PO expr whitelist (#2471) 2023-09-19 12:51:03 +01:00
mmetc
b2212f4225
Use go 1.20.8 (#2473) 2023-09-19 13:21:55 +02:00
Manuel Sabban
ce276d3838
Fix fc38 (#2468)
* fix installation on fc38
2023-09-18 11:54:09 +02:00
blotus
43ef32aa8d
Kafka acquisition: do not create empty events when a read error occurs (#2466) 2023-09-13 13:20:36 +02:00
Thibault "bui" Koechlin
0040569fa9
if 'include capi' is false, only exclude capi alerts instead of assuming they necessarily have attached decisions (#2435) 2023-09-12 11:19:36 +02:00
mmetc
6b9e065764
CI: update pytest-cs - don't remove stopped containers after tests (#2459) 2023-09-12 11:10:22 +02:00
mmetc
d45bec4047
minor log message improvements (#2455) 2023-09-12 11:04:56 +02:00
Laurence Jones
702da0f59a
[enhancement] cscli explain --labels (#2461)
* Add label support for explain and allow user to provide multiple labels

* Change my mind about empty string

* Add debug and im an idiot 😄
2023-09-11 14:18:04 +01:00
mmetc
f02f34d64c
CI: remove explicit cache-dependency-path (#2452)
not required anymore after moving the plugins to cmd/
2023-09-04 16:55:48 +02:00
mmetc
fd94e2c056
refactor alert/decisions insert/update to avoid database locking in bulk operations (#2446) 2023-09-04 14:21:45 +02:00
Laurence Jones
aff80a2863
Add html escape function so it can be invoked from template (#2451) 2023-09-04 09:49:39 +01:00
mmetc
22146eb3e4
fix "cscli console disable --all"; cleanup "cscli console" command (#2444) 2023-08-29 11:44:23 +02:00
mmetc
b562103024
Make: build with debug symbols in func tests or if DEBUG=1; drop BUILD_VENDOR_FLAGS (#2443) 2023-08-28 15:58:26 +02:00
mmetc
25868f27de
option db_client.decision_bulk_size (#2440) 2023-08-25 17:05:17 +02:00
mmetc
0c9943b740
CI: alternate vendor file (xz compression and version number) (#2425) 2023-08-25 17:02:00 +02:00
mmetc
32f196a774
use go 1.20.7 (#2409) 2023-08-25 16:24:04 +02:00
mmetc
c588be0842
golangci-lint: use v1.54, remove unnecessary byte/string conversions (#2438) 2023-08-25 16:22:10 +02:00
mmetc
f2154e362b
update functional tests for build pipeline (#2442) 2023-08-25 16:15:28 +02:00
mmetc
2aa55e9444
move plugins/notifications/* to cmd/notification-* (#2429)
This ensures keeping all dependencies in sync, and simplifies
packaging under freebsd/gentoo/etc because there is a single
vendor directory.
2023-08-24 09:46:25 +02:00
mmetc
e36df40ba7
pkg/types cleanup (#2398)
* move function GetLineCountForFile from pkg/types to cscli
* move ParseDuration from pkg/types to pkg/database
* remove unused types.Profile, types.RemediationProfile
2023-08-24 09:44:46 +02:00
Laurence Jones
86d9384954
Whitelist reason (#2439)
* Update node.go

Don't update whitelist reason if event is whitelisted

* oops
2023-08-23 14:51:37 +01:00
Rasmus
b4d9223625
Update main.go (#2431) 2023-08-22 14:09:53 +02:00
Efren
1ec52431b6
Remove duplicate line (#2432)
Remove duplicate line (60) which is outputting `Configuration Folder` twice in `cscli config show`
2023-08-22 14:09:19 +02:00
mmetc
e8e2ade8f0
remove calls to log.Fatal (#2399)
* remove log.Fatal from scenarios.go
* remove log.Fatal from collections.go
* remove log.Fatal from parsers.go and postoverflows.go
2023-08-16 21:04:46 +02:00
mmetc
6a6501691a
change behavior of flag disable_http_retry_backoff (#2426)
now it does not attempt any retry, instead of attempting all retries
immediately

example: cannot reach LAPI

Before:

$ CROWDSEC_FEATURE_DISABLE_HTTP_RETRY_BACKOFF=true cscli decisions list
ERRO[27-07-2023 10:44:44] error while performing request: dial tcp [::1]:8080: connect: connection refused; 4 retries left
INFO[27-07-2023 10:44:44] retrying in 0 seconds (attempt 2 of 5)
[...]
ERRO[27-07-2023 10:44:44] error while performing request: dial tcp [::1]:8080: connect: connection refused; 1 retries left
INFO[27-07-2023 10:44:44] retrying in 0 seconds (attempt 5 of 5)
ERRO[27-07-2023 10:44:44] error while performing request: dial tcp [::1]:8080: connect: connection refused; 0 retries left
FATA[27-07-2023 10:44:44] Unable to list decisions : performing request: Get "http://localhost:8080/v1/alerts?has_active_decision=true&include_capi=false&limit=100": could not get jwt token: Post "http://localhost:8080/v1/watchers/login": dial tcp [::1]:8080: connect: connection refused

After:

$ CROWDSEC_FEATURE_DISABLE_HTTP_RETRY_BACKOFF=true ./test/local/bin/cscli decisions list
FATA[11-08-2023 16:49:58] unable to retrieve decisions: performing request: Get "http://127.0.0.1:8080/v1/alerts?has_active_decision=true&include_capi=false&limit=100": could not get jwt token: Post "http://127.0.0.1:8080/v1/watchers/login": dial tcp 127.0.0.1:8080: connect: connection refused
2023-08-16 21:04:07 +02:00
mmetc
caaed7c515
Timeout on shutdown while waiting for events to be flushed (#2423) 2023-08-16 21:03:15 +02:00
mmetc
afeb541eac
apic: minor refactoring (#2415)
* apic: minor refactoring

* Add whitelist length check

If user configures the file but fails to define an actual whitelist we should check length to save allocs

* Init with length from file

* extract loop method from ApplyApicWhitelists

* pass pointer

* extract loop method updateBlocklist

---------

Co-authored-by: Laurence Jones <laurence.jones@live.co.uk>
2023-08-10 13:03:47 +02:00
Laurence Jones
93c22f29cf
Unmarshal Json (#2414)
Log the actual line that caused an error to help debugging
2023-08-09 09:42:08 +01:00
mmetc
0f319b31fd
update pytest dependencies (#2407) 2023-08-09 00:49:52 +02:00
Manuel Sabban
d6361d0a40
conditional overflow doesn't overflow on capacity (#2412)
* conditional overflow doesn't overflow on capacity

* typo
2023-08-08 16:12:50 +01:00
mmetc
cd9d8f309d
CI: increase test sleep to fix flaky acquisition/file test under win (#2410)
* CI: increase test sleep to attempt fix for flaky windows acquisition/file test

* wip
2023-08-08 16:11:32 +02:00
Laurence Jones
0334a9afe8
Add method name to child logger so we can see which function is erroring when in enrichers (#2411) 2023-08-08 13:38:11 +01:00
AlteredCoder
31c5727a90
Simplify context add (#2408) 2023-08-04 16:50:35 +02:00
mmetc
644c767019
cscli decisions list -o json => [] instead of null; same for alerts (#2397) 2023-08-03 12:51:50 +02:00
Laurence Jones
6ba682a32f
Update bouncers.go (#2404)
Fix wrong short
2023-08-03 11:26:08 +01:00
Manuel Sabban
1d5baa657f
should fix the rpm build (#2396) 2023-08-01 08:20:39 +02:00
Manuel Sabban
2cb7b0bee6
Fix unit file after modification (#2395)
* fix service file for rpm packages build
2023-07-31 16:57:23 +02:00
Laurence Jones
a18df9c3bb
Add bouncers prune command (#2379)
* Add bouncers prune command

* No point overloading functions

* Add prune to list of commands

* change all short desc to be similar, and made it really really clear when pruning it is not recoverable

* Don't use log. and don't return an error on user input to abort
2023-07-28 15:37:39 +01:00
mmetc
ffadd42779
update dependency on go-cs-lib; drop the pkg/ part (#2393) 2023-07-28 16:35:08 +02:00
Laurence Jones
55247cd46a
Add machines prune command (#2011)
* Add machines prune command

* Fix scope variable for naming scheme

* Add some freshness and add new features

* Fix force and fix duration if less than 60

* Allow duration to be more readable

* Fix description

* Improve func wording and make int machines length

* No point overloading functions

* Add prune to list of commands

* Check if GID is already the group if so no need to chown

* Revert "Check if GID is already the group if so no need to chown"

This reverts commit c7cef1773e.

* change all short desc to be similar, and made it really really clear when pruning it is not recoverable

* Better examples

* Match bouncer like for like

* Fix merge error

* Don't use log. and don't return an error on user input to abort
2023-07-28 15:23:47 +01:00
mmetc
643445b7cf
docker: allow GID with no persistent sqlite db (#2381) 2023-07-28 16:01:50 +02:00
mmetc
9dfc66ef04
update pytest dependencies (#2389) 2023-07-28 14:39:03 +02:00
mmetc
ae53c0f1cc
fix "crowdsec-cli/require" log verbosity (#2390) 2023-07-28 09:56:20 +02:00
Thibault "bui" Koechlin
718721b341
fix a confusing debug message (#2386)
* fix a confusing debug message

* make CTIHelper simply log the error to avoid failing template rendering
2023-07-28 09:52:21 +02:00
mmetc
5cb7013575
Check cscli preconditions with crowdsec-cli/require package (#2388) 2023-07-27 17:02:20 +02:00
mmetc
a01ce18b98
replace imports of path with path/filepath (#2330) 2023-07-26 10:29:58 +02:00
mmetc
1a6f12c88e
Build target for "make tidy" (#2378)
The make tidy target runs "go mod tidy" in the root directory and all plugins.
2023-07-26 10:24:37 +02:00
mmetc
5e7c0e0f49
update google/winops dependency (#2366) 2023-07-26 10:14:29 +02:00
blotus
867245aefb
go mod tidy for sentinel plugin (#2377) 2023-07-25 15:43:15 +02:00
Laurence Jones
389ea4293f
Add metabase version override and update (#2370)
* Add version override and update

* Ooppsie

* Quick fix

* fgs copilot

* Allow user to overwrite image, add warning for exposing metabase and general cleanup

* One fix

* Default image if not found in config, and add a warning to remove and update

* Reorder the system memory checks so they are in line with @mmetc's best practices

* No need for err

* Clean up some group code

* Change the IPv6 address, as [] seems to act as a wildcard

* Split the loopback warning and the disclaimer. Add a force-yes option to start so the user can accept the disclaimer by default

* All cmd commands are RunE; clean up

* Update flag name and don't allow a shorthand
2023-07-25 14:21:25 +01:00
blotus
77d58652a3
add sentinel notification plugin (#2268) 2023-07-25 15:07:10 +02:00
mmetc
4bc225f26b
change output of "cscli metrics -o [json|raw]" from list of objects to map with table names (#2375) 2023-07-25 13:33:50 +02:00
mmetc
fc78845a97
update gin-gonic/gin to 1.9.1 (#2230) 2023-07-25 13:32:32 +02:00
mmetc
395cace69f
fix double push of metrics by properly handling tickers (#2374) 2023-07-25 12:19:26 +02:00
blotus
7106d396dc
expose the FormatAlert function to other packages (#2248) 2023-07-25 09:55:39 +02:00
Manuel Sabban
f12ff3dfed
CI: update ansible requirements (#2364) 2023-07-24 15:35:07 +02:00
AlteredCoder
b52b4252c1
scenario labels to map string interface (#2201)
* labels are now map string interface

* restore api url

---------

Co-authored-by: Laurence Jones <laurence.jones@live.co.uk>
2023-07-24 15:19:28 +02:00
mmetc
202112bcae
CI: test with postgres 15 (#2149)
Postgres 15 restricts the default privileges for the public schema. We set crowdsec_test as owner which is shorter than granting permissions explicitly.
2023-07-24 11:56:04 +02:00
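Postgres 15 no longer grants CREATE on the public schema to every role, so making the test role the owner of its database is the shortest fix. A hedged sketch of the equivalent setup from Go with the lib/pq driver; the connection string is illustrative and the CI actually does this in its provisioning scripts:

```go
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq"
)

func main() {
	db, err := sql.Open("postgres", "host=127.0.0.1 user=postgres password=secret sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Owning the database implies full rights on its public schema,
	// which Postgres 15 no longer hands out by default.
	for _, stmt := range []string{
		"CREATE USER crowdsec_test WITH PASSWORD 'crowdsec_test'",
		"CREATE DATABASE crowdsec_test OWNER crowdsec_test",
	} {
		if _, err := db.Exec(stmt); err != nil {
			log.Fatal(err)
		}
	}
}
```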
mmetc
46fff0b544
Update dependency: docker/docker (#2360) 2023-07-24 11:53:33 +02:00
mmetc
b6b6fd026b
typo fix, uppercase 'API', adjusted log level (#2361) 2023-07-21 23:23:24 +02:00
Manuel Sabban
9ac5aeda79
fix the ci by adding the ability to enforce event ordering (#2347)
* fix the ci by adding the ability to enforce event ordering
2023-07-20 11:41:30 +02:00
mmetc
3c16139c44
Reduce log verbosity at startup (#2363)
A configuration syntax test is performed every time the service is
started from systemd. The resulting error, if any, is shown on
journalctl logs.
This PR removes the unnecessary output in crowdsec.log generated by the
configuration test.
2023-07-19 13:28:52 +02:00
mmetc
bb16552aca
Use same levenshtein package for cscli, ent, hcl (#2359)
remove one dependency, slightly smaller binary
2023-07-18 11:30:14 +02:00
mmetc
e73ceafdba
Use go 1.20.6 (#2358) 2023-07-18 09:51:32 +02:00
mmetc
9af546bd0a
update pytest dependencies (#2356) 2023-07-18 09:50:06 +02:00
mmetc
11b7b1bc88
Update dependencies: k8s, swag, jwt (#2357)
this also removes dependencies on some deprecated packages
2023-07-18 09:33:32 +02:00
mmetc
6c8ddbccac
Make: error if BUILD_VERSION does not start with "v" (#2355) 2023-07-18 09:31:04 +02:00
blotus
f9ca14f010
add object key in src for S3 acquis (#2342) 2023-07-07 10:09:18 +02:00
blotus
1295de928a
Properly match new files on windows when doing file acquisition (#2329) 2023-07-06 14:45:38 +02:00
mmetc
01d7c1a5c2
update dependency on goccy/go-yaml for arm32 fix (#2343) 2023-07-06 12:05:34 +02:00
mmetc
c10bca93df
update dependencies on go-plugin and go-hclog (#2341)
* update dependencies on go-plugin and go-hclog
* bump logrus (panic fix)
* implement HCLogAdapter.GetLevel() to satisfy the new interface
2023-07-06 12:01:07 +02:00
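The updated go-hclog interface expects a GetLevel() method, so the adapter bridging logrus to hclog had to grow one. A hedged sketch of what such a mapping can look like; the struct shape and level mapping are assumptions, not the actual adapter in the plugin broker:

```go
package main

import (
	"fmt"

	"github.com/hashicorp/go-hclog"
	"github.com/sirupsen/logrus"
)

// HCLogAdapter is a hypothetical bridge from logrus to hclog.
type HCLogAdapter struct {
	Logger *logrus.Logger
}

// GetLevel is the method the newer hclog interface added; here it translates
// the wrapped logrus level into the closest hclog level.
func (a *HCLogAdapter) GetLevel() hclog.Level {
	switch a.Logger.GetLevel() {
	case logrus.TraceLevel:
		return hclog.Trace
	case logrus.DebugLevel:
		return hclog.Debug
	case logrus.InfoLevel:
		return hclog.Info
	case logrus.WarnLevel:
		return hclog.Warn
	default:
		return hclog.Error
	}
}

func main() {
	a := &HCLogAdapter{Logger: logrus.New()}
	fmt.Println(a.GetLevel())
}
```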
mmetc
2fa826318e
CI: bump and lock pytest dependencies (#2340) 2023-07-06 10:29:08 +02:00
mmetc
59afb285f3
Update grpc dependency to latest stable version (#2339) 2023-07-06 10:15:17 +02:00
mmetc
9967d60987
errors.Wrap -> fmt.Errorf (#2333) 2023-07-06 10:14:45 +02:00
mmetc
486b56d1ed
CI: reduce test verbosity; set PKG_CONFIG_PATH for re2 in rpm distros (#2331)
* wip

* wip

* go install with commit hash
2023-07-05 17:45:31 +02:00
mmetc
8bcb4c2436
Update go-re2 dep to fix arm32 build (#2332) 2023-07-05 13:14:40 +02:00
mmetc
73f71a0aa3
tests: vagrant refactoring (#2328) 2023-07-04 12:26:32 +02:00
mmetc
17cd792826
CI: update ansible tests for re2 (#2318) 2023-06-29 16:35:19 +02:00
mmetc
bd41f855cf
errors.Wrap -> fmt.Errorf (#2317) 2023-06-29 11:34:59 +02:00
blotus
e61d5a3034
rename status to state in fire response (#2313) 2023-06-29 11:06:49 +02:00
mmetc
ebe25d7653
func tests: install dependencies from make, log test helpers (#2314) 2023-06-28 10:07:05 +02:00
mmetc
893394ef5f
rename metabase APIClient to avoid confusion (#2305) 2023-06-27 15:07:16 +02:00
mmetc
e404e0b608
raise error with invalid 'on_success', 'on_failure' in profile (#2303) 2023-06-27 15:03:07 +02:00
mmetc
956703c31a
CI: Update setup-go action to v4 (with automatic cache) (#2168) 2023-06-27 14:50:45 +02:00
mmetc
85839b0199
support for stdin with "cscli decision import" and raw values (#2291)
and remove Origin from the struct, which was ignored anyway
2023-06-27 14:29:42 +02:00
mmetc
6e18c652cb
docker: build same re2 version for alpine/debian; bump yq (#2311)
also slightly improve layer cache usage
2023-06-27 13:43:42 +02:00
mmetc
a910b7beca
non-fatal error if some datasource can't be run (e.g. journalctl when systemd is missing) (#2309)
This, on the other hand, raises a fatal error when there are no valid datasources.
In the previous version, crowdsec kept running with just a warning if no
acquisition YAML file or directory was specified.
2023-06-27 10:13:13 +02:00
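The behaviour described above (skip datasources that cannot run, but refuse to start when none are left) can be sketched like this; the interface and function names are hypothetical, not the actual pkg/acquisition code:

```go
package main

import (
	"errors"
	"fmt"

	log "github.com/sirupsen/logrus"
)

// DataSource is a hypothetical stand-in for an acquisition module.
type DataSource interface {
	Name() string
	CanRun() error // e.g. journalctl returns an error when systemd is missing
}

// filterRunnable keeps the datasources that can run, logging (not failing on)
// the ones that cannot, and errors out only if nothing is left.
func filterRunnable(sources []DataSource) ([]DataSource, error) {
	var runnable []DataSource

	for _, s := range sources {
		if err := s.CanRun(); err != nil {
			log.Warningf("datasource %s is not available: %s", s.Name(), err)
			continue
		}
		runnable = append(runnable, s)
	}

	if len(runnable) == 0 {
		return nil, errors.New("no valid datasource configured, exiting")
	}

	return runnable, nil
}

type journalctlSource struct{}

func (journalctlSource) Name() string  { return "journalctl" }
func (journalctlSource) CanRun() error { return errors.New("systemd not present") }

func main() {
	_, err := filterRunnable([]DataSource{journalctlSource{}})
	fmt.Println(err) // fatal in the real service: no valid datasource configured
}
```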
he2ss
d26e17f505
update debian version to have latest systemd (#2304)
Co-authored-by: mmetc <92726601+mmetc@users.noreply.github.com>
2023-06-26 12:52:10 +02:00
mmetc
aeca8f40c2
build docker version with c++ re2 (static) (#2307) 2023-06-25 23:45:20 +02:00
mmetc
4137482f65
docker: always merge .yaml.local in conf_get() (#2272)
With this change, all queries to the configuration will return the
values from .local if they are set. However, conf_set will only write
to .yaml and never to .local. This means users can potentially override
values that are supposed to be under control of the entrypoint
(credentials and things set from envvars).
2023-06-23 15:49:09 +02:00
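The entrypoint does this merge with yq in docker_start.sh. As a sketch of the semantics only, a shallow overlay in Go with yaml.v3 looks roughly like this; the file names and the shallow (top-level) merge are assumptions:

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// loadYAML reads a file into a generic map; a missing .local file is not an error.
func loadYAML(path string) (map[string]any, error) {
	out := map[string]any{}

	data, err := os.ReadFile(path)
	if err != nil {
		if os.IsNotExist(err) {
			return out, nil
		}
		return nil, err
	}

	if err := yaml.Unmarshal(data, &out); err != nil {
		return nil, err
	}

	return out, nil
}

// confGet returns the base config with any top-level keys from the .local
// overlay applied on top: reads see .local values, writes go to .yaml only.
func confGet(base, local string) (map[string]any, error) {
	cfg, err := loadYAML(base)
	if err != nil {
		return nil, err
	}

	overlay, err := loadYAML(local)
	if err != nil {
		return nil, err
	}

	for k, v := range overlay {
		cfg[k] = v
	}

	return cfg, nil
}

func main() {
	cfg, err := confGet("config.yaml", "config.yaml.local")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(cfg)
}
```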
mmetc
98c6038fde
Build with libre2 by default, options for wasm and static; add mk/gmsl (#2295) 2023-06-23 14:25:29 +02:00
mmetc
507da49b5a
send metrics immediately if agents are added or removed (#2296) 2023-06-23 14:06:04 +02:00
mmetc
9beb5388cb
errors.Wrap -> fmt.Errorf; clean up imports (#2301) 2023-06-23 14:04:58 +02:00
mmetc
d4c0643122
CI: add fedora-37, -38 to vagrant tests (#2299) 2023-06-23 13:59:24 +02:00
mmetc
e42841cd00
Change api_key encoding to base64 to comply with bcrypt max size (#2302) 2023-06-23 13:54:36 +02:00
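bcrypt only considers the first 72 bytes of its input, so the random API key is encoded in a form that stays under that limit before being hashed for storage. A rough illustration; the key length and helper name are assumptions, not the actual API server code:

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"

	"golang.org/x/crypto/bcrypt"
)

// newAPIKey returns the cleartext key to hand to the bouncer and the bcrypt
// hash to store. 32 random bytes become a 44-character base64 string, well
// under bcrypt's 72-byte input limit.
func newAPIKey() (string, []byte, error) {
	raw := make([]byte, 32)
	if _, err := rand.Read(raw); err != nil {
		return "", nil, err
	}

	key := base64.StdEncoding.EncodeToString(raw)

	hash, err := bcrypt.GenerateFromPassword([]byte(key), bcrypt.DefaultCost)
	if err != nil {
		return "", nil, err
	}

	return key, hash, nil
}

func main() {
	key, hash, err := newAPIKey()
	if err != nil {
		panic(err)
	}
	fmt.Printf("api key: %s\nstored hash: %s\n", key, hash)
}
```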
mmetc
62caffb102
update leakybucket readme (#2298) 2023-06-22 15:35:01 +02:00
mmetc
fddf597040
errors.Wrap -> fmt.Errorf; clean up imports (#2297) 2023-06-22 15:01:34 +02:00
mmetc
8bfeb7d90d
Update go dependencies (#2293)
- update fatih/color (fix windows issue)
- update mongo-driver (fix build issue)
- go.mod: merge two "require" blocks
- update semver dependency (same version as indirect dep), fix test checks in cscli setup
- remove gotest.tools dependency (use testify, cstest)
- update x/ exp, mod, sys dependencies
2023-06-22 11:31:41 +02:00
Emanuel Seemann
40e6b205bc
Add bayesian bucket type (#2290) 2023-06-21 15:08:27 +02:00
mmetc
da6106bd23
spellcheck/style leakybucket readme (#2294) 2023-06-21 11:47:07 +02:00
mmetc
4c4b545e9b
append vendor.tgz to each release (#2288) 2023-06-21 09:52:33 +02:00
mmetc
f7409d47be
fix error message when failing to parse ip address (#2292)
Co-authored-by: Thibault "bui" Koechlin <thibault@crowdsec.net>
2023-06-21 09:22:25 +02:00
Laurence Jones
062f71fb92
CI: vagrant configuration for debian 12 (#2285) 2023-06-19 21:18:29 +02:00
mmetc
89c3c18c19
allow running rootless docker tests (#2281)
Co-authored-by: Thibault "bui" Koechlin <thibault@crowdsec.net>
2023-06-19 12:02:59 +02:00
mmetc
c3c2608947
CI: Remove cache entries when closing a PR (#2289)
This should reduce overall cache occupation and thrashing
2023-06-19 11:30:56 +02:00
Laurence Jones
2c8769adf6
Update jsonextract.go (#2287)
Return nil instead of empty string as ParseKV does the same
2023-06-16 18:34:55 +01:00
mmetc
381220daf4
Use go 1.20.5 (#2280)
https://groups.google.com/g/golang-announce/c/q5135a9d924
2023-06-15 23:18:57 +02:00
mmetc
b9a3acb03f
light pkg/parser cleanup (#2279)
* pkg/parser: clean up imports
* remove duplicate import
* simplify boolean expression
* don't check length before range
* if..else if.. -> switch/case
* errors.Wrap -> fmt.Errorf
* typo, lint
* redundant break
2023-06-13 13:16:13 +02:00
mmetc
76429f033a
trim pkg/types: move DataSet/GetData to pkg/cwhub, removed unused Clone function (#2271) 2023-06-08 16:49:51 +02:00
mmetc
cf747d65e0
fix missing import (#2275) 2023-06-08 15:49:37 +02:00
mmetc
25bb23d8b7
minor refactor to pkg/types, cscli machines (#2270)
* cleanup: separate ui and logic
* trim some code from pkg/types
2023-06-08 15:08:51 +02:00
mmetc
6096cb3c9b
Move grok_pattern.go away from pkg/types to trim bouncer dependencies (#2269) 2023-06-08 15:07:30 +02:00
mmetc
4e2c9c185b
Implement "crowdsec -fatal" flag; change help message (#2266)
The -trace...-fatal flags do not change the log destination but only the
verbosity. This change reflects that, and implements "-fatal" which was missing.
2023-06-08 15:06:06 +02:00
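The flags only select the logrus verbosity; the log destination is configured elsewhere. A minimal sketch of that mapping with "-fatal" included; the flag handling is simplified and hypothetical, not the actual cmd/crowdsec code:

```go
package main

import (
	"flag"

	log "github.com/sirupsen/logrus"
)

func main() {
	trace := flag.Bool("trace", false, "set log level to 'trace' (VERY verbose)")
	debug := flag.Bool("debug", false, "set log level to 'debug'")
	info := flag.Bool("info", false, "set log level to 'info'")
	warning := flag.Bool("warning", false, "set log level to 'warning'")
	errLvl := flag.Bool("error", false, "set log level to 'error'")
	fatal := flag.Bool("fatal", false, "set log level to 'fatal'")
	flag.Parse()

	// Only the verbosity changes; the output destination is untouched.
	switch {
	case *trace:
		log.SetLevel(log.TraceLevel)
	case *debug:
		log.SetLevel(log.DebugLevel)
	case *info:
		log.SetLevel(log.InfoLevel)
	case *warning:
		log.SetLevel(log.WarnLevel)
	case *errLvl:
		log.SetLevel(log.ErrorLevel)
	case *fatal:
		log.SetLevel(log.FatalLevel)
	}

	log.Debugf("this is hidden unless -debug or -trace is set")
	log.Warningf("effective log level: %s", log.GetLevel())
}
```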
mmetc
8da9d5eefd
don't log notification error if not running under systemd (#2274) 2023-06-08 15:04:48 +02:00
mmetc
5b3200173e
don't pre-create log files (not required anymore) (#2267)
The lumberjack package fixed the issue in natefinch/lumberjack#83 (tested with umask 002) and this code is now redundant since we updated the dependency to v2.2.1.
2023-06-07 12:58:35 +02:00
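With lumberjack v2.2.1 the rotated log file is created with sane permissions on first write, so the process no longer needs to touch the file itself. A hedged sketch of wiring logrus to lumberjack without pre-creating anything; the path and sizes are illustrative:

```go
package main

import (
	log "github.com/sirupsen/logrus"
	"gopkg.in/natefinch/lumberjack.v2"
)

func main() {
	// lumberjack creates the file on the first write, honoring the process
	// umask; there is no need to pre-create it.
	log.SetOutput(&lumberjack.Logger{
		Filename:   "/tmp/crowdsec-example.log",
		MaxSize:    500, // megabytes
		MaxBackups: 3,
		MaxAge:     28, // days
		Compress:   true,
	})

	log.Infof("hello from lumberjack")
}
```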
723 changed files with 44385 additions and 27662 deletions

View file

@ -42,7 +42,7 @@ issue:
3. Check [Releases](https://github.com/crowdsecurity/crowdsec/releases/latest) to make sure your agent is on the latest version.
- prefix: kind
list: ['feature', 'bug', 'packaging', 'enhancement'] → list: ['feature', 'bug', 'packaging', 'enhancement', 'refactoring']
multiple: false
author_association:
author: true
@ -54,6 +54,7 @@ issue:
@$AUTHOR: There are no 'kind' label on this issue. You need a 'kind' label to start the triage process.
* `/kind feature`
* `/kind enhancement`
+ * `/kind refactoring`
* `/kind bug`
* `/kind packaging`
@ -65,12 +66,13 @@ pull_request:
labels:
- prefix: kind
multiple: false
list: [ 'feature', 'enhancement', 'fix', 'chore', 'dependencies'] → list: [ 'feature', 'enhancement', 'fix', 'chore', 'dependencies', 'refactoring']
needs:
comment: |
@$AUTHOR: There are no 'kind' label on this PR. You need a 'kind' label to generate the release automatically.
* `/kind feature`
* `/kind enhancement`
+ * `/kind refactoring`
* `/kind fix`
* `/kind chore`
* `/kind dependencies`
@ -81,7 +83,7 @@ pull_request:
failure: Missing kind label to generate release automatically.
- prefix: area
list: [ "agent", "local-api", "cscli", "security", "configuration"] → list: [ "agent", "local-api", "cscli", "security", "configuration", "appsec"]
multiple: true
needs:
comment: |
@ -89,6 +91,7 @@ pull_request:
* `/area agent`
* `/area local-api`
* `/area cscli`
+ * `/area appsec`
* `/area security`
* `/area configuration`
@ -98,4 +101,4 @@ pull_request:
author_association:
collaborator: true
member: true
owner: true

View file

@ -1,4 +1,4 @@
name: Hub tests name: (sub) Bats / Hub
on: on:
workflow_call: workflow_call:
@ -8,16 +8,13 @@ on:
GIST_BADGES_ID: GIST_BADGES_ID:
required: true required: true
env:
PREFIX_TEST_NAMES_WITH_FILE: true
jobs: jobs:
build: build:
strategy: strategy:
matrix: matrix:
go-version: ["1.20.4"] test-file: ["hub-1.bats", "hub-2.bats", "hub-3.bats"]
name: "Build + tests" name: "Functional tests"
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 30 timeout-minutes: 30
steps: steps:
@ -27,49 +24,36 @@ jobs:
sudo chmod +w /etc/machine-id sudo chmod +w /etc/machine-id
echo githubciXXXXXXXXXXXXXXXXXXXXXXXX | sudo tee /etc/machine-id echo githubciXXXXXXXXXXXXXXXXXXXXXXXX | sudo tee /etc/machine-id
- name: "Set up Go ${{ matrix.go-version }}"
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
- name: "Check out CrowdSec repository" - name: "Check out CrowdSec repository"
uses: actions/checkout@v3 uses: actions/checkout@v4
with: with:
fetch-depth: 0 fetch-depth: 0
submodules: true submodules: true
- name: Cache Go modules - name: "Set up Go"
uses: actions/cache@v3 uses: actions/setup-go@v5
with: with:
path: | go-version: "1.22.2"
~/go/pkg/mod
~/.cache/go-build
~/Library/Caches/go-build
%LocalAppData%\go-build
key: ${{ runner.os }}-${{ matrix.go-version }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ matrix.go-version }}-go-
- name: "Install bats dependencies" - name: "Install bats dependencies"
env: env:
GOBIN: /usr/local/bin GOBIN: /usr/local/bin
run: | run: |
sudo apt -qq -y -o=Dpkg::Use-Pty=0 install build-essential daemonize jq netcat-openbsd sudo apt -qq -y -o=Dpkg::Use-Pty=0 install build-essential daemonize jq libre2-dev
go install github.com/mikefarah/yq/v4@latest
go install github.com/cloudflare/cfssl/cmd/cfssl@master
go install github.com/cloudflare/cfssl/cmd/cfssljson@master
- name: "Build crowdsec and fixture" - name: "Build crowdsec and fixture"
run: make bats-clean bats-build bats-fixture run: make bats-clean bats-build bats-fixture BUILD_STATIC=1
- name: "Run hub tests" - name: "Run hub tests"
run: make bats-test-hub run: |
./test/bin/generate-hub-tests
./test/run-tests ./test/dyn-bats/${{ matrix.test-file }} --formatter $(pwd)/test/lib/color-formatter
- name: "Collect hub coverage" - name: "Collect hub coverage"
run: ./test/bin/collect-hub-coverage >> $GITHUB_ENV run: ./test/bin/collect-hub-coverage >> $GITHUB_ENV
- name: "Create Parsers badge" - name: "Create Parsers badge"
uses: schneegans/dynamic-badges-action@v1.6.0 uses: schneegans/dynamic-badges-action@v1.7.0
if: ${{ github.ref == 'refs/heads/master' && github.repository_owner == 'crowdsecurity' }} if: ${{ github.ref == 'refs/heads/master' && github.repository_owner == 'crowdsecurity' }}
with: with:
auth: ${{ secrets.GIST_BADGES_SECRET }} auth: ${{ secrets.GIST_BADGES_SECRET }}
@ -80,7 +64,7 @@ jobs:
color: ${{ env.SCENARIO_BADGE_COLOR }} color: ${{ env.SCENARIO_BADGE_COLOR }}
- name: "Create Scenarios badge" - name: "Create Scenarios badge"
uses: schneegans/dynamic-badges-action@v1.6.0 uses: schneegans/dynamic-badges-action@v1.7.0
if: ${{ github.ref == 'refs/heads/master' && github.repository_owner == 'crowdsecurity' }} if: ${{ github.ref == 'refs/heads/master' && github.repository_owner == 'crowdsecurity' }}
with: with:
auth: ${{ secrets.GIST_BADGES_SECRET }} auth: ${{ secrets.GIST_BADGES_SECRET }}

View file

@ -1,4 +1,4 @@
name: Functional tests (MySQL) name: (sub) Bats / MySQL
on: on:
workflow_call: workflow_call:
@ -7,16 +7,9 @@ on:
required: true required: true
type: string type: string
env:
PREFIX_TEST_NAMES_WITH_FILE: true
jobs: jobs:
build: build:
strategy: name: "Functional tests"
matrix:
go-version: ["1.20.4"]
name: "Build + tests"
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 30 timeout-minutes: 30
services: services:
@ -34,41 +27,26 @@ jobs:
sudo chmod +w /etc/machine-id sudo chmod +w /etc/machine-id
echo githubciXXXXXXXXXXXXXXXXXXXXXXXX | sudo tee /etc/machine-id echo githubciXXXXXXXXXXXXXXXXXXXXXXXX | sudo tee /etc/machine-id
- name: "Set up Go ${{ matrix.go-version }}"
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
- name: "Check out CrowdSec repository" - name: "Check out CrowdSec repository"
uses: actions/checkout@v3 uses: actions/checkout@v4
with: with:
fetch-depth: 0 fetch-depth: 0
submodules: true submodules: true
- name: Cache Go modules - name: "Set up Go"
uses: actions/cache@v3 uses: actions/setup-go@v5
with: with:
path: | go-version: "1.22.2"
~/go/pkg/mod
~/.cache/go-build
~/Library/Caches/go-build
%LocalAppData%\go-build
key: ${{ runner.os }}-${{ matrix.go-version }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ matrix.go-version }}-go-
- name: "Install bats dependencies" - name: "Install bats dependencies"
env: env:
GOBIN: /usr/local/bin GOBIN: /usr/local/bin
run: | run: |
sudo apt -qq -y -o=Dpkg::Use-Pty=0 install build-essential daemonize jq netcat-openbsd sudo apt -qq -y -o=Dpkg::Use-Pty=0 install build-essential daemonize jq libre2-dev
go install github.com/mikefarah/yq/v4@latest
go install github.com/cloudflare/cfssl/cmd/cfssl@master
go install github.com/cloudflare/cfssl/cmd/cfssljson@master
- name: "Build crowdsec and fixture" - name: "Build crowdsec and fixture"
run: | run: |
make clean bats-build bats-fixture make clean bats-build bats-fixture BUILD_STATIC=1
env: env:
DB_BACKEND: mysql DB_BACKEND: mysql
MYSQL_HOST: 127.0.0.1 MYSQL_HOST: 127.0.0.1
@ -77,7 +55,7 @@ jobs:
MYSQL_USER: root MYSQL_USER: root
- name: "Run tests" - name: "Run tests"
run: make bats-test run: ./test/run-tests ./test/bats --formatter $(pwd)/test/lib/color-formatter
env: env:
DB_BACKEND: mysql DB_BACKEND: mysql
MYSQL_HOST: 127.0.0.1 MYSQL_HOST: 127.0.0.1

View file

@ -1,23 +1,16 @@
name: Functional tests (Postgres) name: (sub) Bats / Postgres
on: on:
workflow_call: workflow_call:
env:
PREFIX_TEST_NAMES_WITH_FILE: true
jobs: jobs:
build: build:
strategy: name: "Functional tests"
matrix:
go-version: ["1.20.4"]
name: "Build + tests"
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 30 timeout-minutes: 30
services: services:
database: database:
image: postgres:14 image: postgres:16
env: env:
POSTGRES_PASSWORD: "secret" POSTGRES_PASSWORD: "secret"
ports: ports:
@ -30,46 +23,39 @@ jobs:
steps: steps:
- name: "Install pg_dump v16"
# we can remove this when it's released on ubuntu-latest
run: |
sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
wget -qO- https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo tee /etc/apt/trusted.gpg.d/pgdg.asc &>/dev/null
sudo apt update
sudo apt -qq -y -o=Dpkg::Use-Pty=0 install postgresql-client-16
- name: "Force machineid" - name: "Force machineid"
run: | run: |
sudo chmod +w /etc/machine-id sudo chmod +w /etc/machine-id
echo githubciXXXXXXXXXXXXXXXXXXXXXXXX | sudo tee /etc/machine-id echo githubciXXXXXXXXXXXXXXXXXXXXXXXX | sudo tee /etc/machine-id
- name: "Set up Go ${{ matrix.go-version }}"
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
- name: "Check out CrowdSec repository" - name: "Check out CrowdSec repository"
uses: actions/checkout@v3 uses: actions/checkout@v4
with: with:
fetch-depth: 0 fetch-depth: 0
submodules: true submodules: true
- name: Cache Go modules - name: "Set up Go"
uses: actions/cache@v3 uses: actions/setup-go@v5
with: with:
path: | go-version: "1.22.2"
~/go/pkg/mod
~/.cache/go-build
~/Library/Caches/go-build
%LocalAppData%\go-build
key: ${{ runner.os }}-${{ matrix.go-version }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ matrix.go-version }}-go-
- name: "Install bats dependencies" - name: "Install bats dependencies"
env: env:
GOBIN: /usr/local/bin GOBIN: /usr/local/bin
run: | run: |
sudo apt -qq -y -o=Dpkg::Use-Pty=0 install build-essential daemonize jq netcat-openbsd sudo apt -qq -y -o=Dpkg::Use-Pty=0 install build-essential daemonize jq libre2-dev
go install github.com/mikefarah/yq/v4@latest
go install github.com/cloudflare/cfssl/cmd/cfssl@master
go install github.com/cloudflare/cfssl/cmd/cfssljson@master
- name: "Build crowdsec and fixture (DB_BACKEND: pgx)" - name: "Build crowdsec and fixture (DB_BACKEND: pgx)"
run: | run: |
make clean bats-build bats-fixture make clean bats-build bats-fixture BUILD_STATIC=1
env: env:
DB_BACKEND: pgx DB_BACKEND: pgx
PGHOST: 127.0.0.1 PGHOST: 127.0.0.1
@ -78,7 +64,7 @@ jobs:
PGUSER: postgres PGUSER: postgres
- name: "Run tests (DB_BACKEND: pgx)" - name: "Run tests (DB_BACKEND: pgx)"
run: make bats-test run: ./test/run-tests ./test/bats --formatter $(pwd)/test/lib/color-formatter
env: env:
DB_BACKEND: pgx DB_BACKEND: pgx
PGHOST: 127.0.0.1 PGHOST: 127.0.0.1

View file

@ -1,19 +1,14 @@
name: Functional tests (sqlite) name: (sub) Bats / sqlite + coverage
on: on:
workflow_call: workflow_call:
env: env:
PREFIX_TEST_NAMES_WITH_FILE: true
TEST_COVERAGE: true TEST_COVERAGE: true
jobs: jobs:
build: build:
strategy: name: "Functional tests"
matrix:
go-version: ["1.20.4"]
name: "Build + tests"
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 20 timeout-minutes: 20
@ -24,44 +19,29 @@ jobs:
sudo chmod +w /etc/machine-id sudo chmod +w /etc/machine-id
echo githubciXXXXXXXXXXXXXXXXXXXXXXXX | sudo tee /etc/machine-id echo githubciXXXXXXXXXXXXXXXXXXXXXXXX | sudo tee /etc/machine-id
- name: "Set up Go ${{ matrix.go-version }}"
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
- name: "Check out CrowdSec repository" - name: "Check out CrowdSec repository"
uses: actions/checkout@v3 uses: actions/checkout@v4
with: with:
fetch-depth: 0 fetch-depth: 0
submodules: true submodules: true
- name: Cache Go modules - name: "Set up Go"
uses: actions/cache@v3 uses: actions/setup-go@v5
with: with:
path: | go-version: "1.22.2"
~/go/pkg/mod
~/.cache/go-build
~/Library/Caches/go-build
%LocalAppData%\go-build
key: ${{ runner.os }}-${{ matrix.go-version }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ matrix.go-version }}-go-
- name: "Install bats dependencies" - name: "Install bats dependencies"
env: env:
GOBIN: /usr/local/bin GOBIN: /usr/local/bin
run: | run: |
sudo apt -qq -y -o=Dpkg::Use-Pty=0 install build-essential daemonize jq netcat-openbsd sudo apt -qq -y -o=Dpkg::Use-Pty=0 install build-essential daemonize jq libre2-dev
go install github.com/mikefarah/yq/v4@latest
go install github.com/cloudflare/cfssl/cmd/cfssl@master
go install github.com/cloudflare/cfssl/cmd/cfssljson@master
- name: "Build crowdsec and fixture" - name: "Build crowdsec and fixture"
run: | run: |
make clean bats-build bats-fixture make clean bats-build bats-fixture BUILD_STATIC=1
- name: "Run tests" - name: "Run tests"
run: make bats-test run: ./test/run-tests ./test/bats --formatter $(pwd)/test/lib/color-formatter
- name: "Collect coverage data" - name: "Collect coverage data"
run: | run: |
@ -97,7 +77,8 @@ jobs:
if: ${{ always() }} if: ${{ always() }}
- name: Upload crowdsec coverage to codecov - name: Upload crowdsec coverage to codecov
uses: codecov/codecov-action@v3 uses: codecov/codecov-action@v4
with: with:
files: ./coverage-bats.out files: ./coverage-bats.out
flags: bats flags: bats
token: ${{ secrets.CODECOV_TOKEN }}

View file

@ -31,7 +31,7 @@ jobs:
# Jobs for Postgres (and sometimes MySQL) can have failing tests on GitHub
# CI, but they pass when run on devs' machines or in the release checks. We
# disable them here by default. Remove the if..false to enable them. → # disable them here by default. Remove if...false to enable them.
mariadb:
uses: ./.github/workflows/bats-mysql.yml

.github/workflows/cache-cleanup.yaml (new file, 35 lines)
View file

@ -0,0 +1,35 @@
# https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows#managing-caches
name: cleanup caches by a branch
on:
pull_request:
types:
- closed
jobs:
cleanup:
runs-on: ubuntu-latest
steps:
- name: Check out code
uses: actions/checkout@v4
- name: Cleanup
run: |
gh extension install actions/gh-actions-cache
REPO=${{ github.repository }}
BRANCH="refs/pull/${{ github.event.pull_request.number }}/merge"
echo "Fetching list of cache key"
cacheKeysForPR=$(gh actions-cache list -R $REPO -B $BRANCH | cut -f 1 )
## Setting this to not fail the workflow while deleting cache keys.
set +e
echo "Deleting caches..."
for cacheKey in $cacheKeysForPR
do
gh actions-cache delete $cacheKey -R $REPO -B $BRANCH --confirm
done
echo "Done"
env:
GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

View file

@ -21,42 +21,26 @@ on:
jobs: jobs:
build: build:
strategy:
matrix:
go-version: ["1.20.4"]
name: Build name: Build
runs-on: windows-2019 runs-on: windows-2019
steps: steps:
- name: "Set up Go ${{ matrix.go-version }}"
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
- name: Check out code into the Go module directory - name: Check out code into the Go module directory
uses: actions/checkout@v3 uses: actions/checkout@v4
with: with:
fetch-depth: 0 fetch-depth: 0
submodules: false submodules: false
- name: Cache Go modules - name: "Set up Go"
uses: actions/cache@v3 uses: actions/setup-go@v5
with: with:
path: | go-version: "1.22.2"
~/go/pkg/mod
~/.cache/go-build
~/Library/Caches/go-build
%LocalAppData%\go-build
key: ${{ runner.os }}-${{ matrix.go-version }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ matrix.go-version }}-go-
- name: Build - name: Build
run: make windows_installer run: make windows_installer BUILD_RE2_WASM=1
- name: Upload MSI - name: Upload MSI
uses: actions/upload-artifact@v3 uses: actions/upload-artifact@v4
with: with:
path: crowdsec*msi path: crowdsec*msi
name: crowdsec.msi name: crowdsec.msi

View file

@ -12,7 +12,7 @@ jobs:
runs-on: ubuntu-latest
steps:
# Drafts your next Release notes as Pull Requests are merged into "master"
- uses: release-drafter/release-drafter@v5 → - uses: release-drafter/release-drafter@v6
with:
config-name: release-drafter.yml
# (Optional) specify config name to use, relative to .github/. Default: release-drafter.yml

View file

@ -44,11 +44,20 @@ jobs:
steps: steps:
- name: Checkout repository - name: Checkout repository
uses: actions/checkout@v3 uses: actions/checkout@v4
with:
# required to pick up tags for BUILD_VERSION
fetch-depth: 0
- name: "Set up Go"
uses: actions/setup-go@v5
with:
go-version: "1.22.2"
cache-dependency-path: "**/go.sum"
# Initializes the CodeQL tools for scanning. # Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL - name: Initialize CodeQL
uses: github/codeql-action/init@v2 uses: github/codeql-action/init@v3
with: with:
languages: ${{ matrix.language }} languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file. # If you wish to specify custom queries, you can do so here or in a config file.
@ -58,8 +67,8 @@ jobs:
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java). # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below) # If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild # - name: Autobuild
uses: github/codeql-action/autobuild@v2 # uses: github/codeql-action/autobuild@v3
# Command-line programs to run using the OS shell. # Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl # 📚 https://git.io/JvXDl
@ -68,9 +77,8 @@ jobs:
# and modify them (or add more) to build your code if your project # and modify them (or add more) to build your code if your project
# uses a compiled language # uses a compiled language
#- run: | - run: |
# make bootstrap make clean build BUILD_RE2_WASM=1
# make release
- name: Perform CodeQL Analysis - name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v2 uses: github/codeql-action/analyze@v3

View file

@ -15,71 +15,42 @@ on:
- 'README.md' - 'README.md'
jobs: jobs:
test_docker_image: test_flavor:
strategy:
# we could test all the flavors in a single pytest job,
# but let's split them (and the image build) in multiple runners for performance
matrix:
# can be slim, full or debian (no debian slim).
flavor: ["slim", "debian"]
runs-on: ubuntu-latest runs-on: ubuntu-latest
timeout-minutes: 30 timeout-minutes: 30
steps: steps:
- name: Check out the repo - name: Check out the repo
uses: actions/checkout@v3 uses: actions/checkout@v4
with: with:
fetch-depth: 0 fetch-depth: 0
- name: Set up Docker Buildx - name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2 uses: docker/setup-buildx-action@v3
with: with:
config: .github/buildkit.toml config: .github/buildkit.toml
# - name: "Build flavor: full" - name: "Build image"
# uses: docker/build-push-action@v4 uses: docker/build-push-action@v5
# with:
# context: .
# file: ./Dockerfile
# tags: crowdsecurity/crowdsec:test
# target: full
# platforms: linux/amd64
# load: true
# cache-from: type=gha
# cache-to: type=gha,mode=min
- name: "Build flavor: slim"
uses: docker/build-push-action@v4
with: with:
context: . context: .
file: ./Dockerfile file: ./Dockerfile${{ matrix.flavor == 'debian' && '.debian' || '' }}
tags: crowdsecurity/crowdsec:test-slim tags: crowdsecurity/crowdsec:test${{ matrix.flavor == 'full' && '' || '-' }}${{ matrix.flavor == 'full' && '' || matrix.flavor }}
target: slim target: ${{ matrix.flavor == 'debian' && 'full' || matrix.flavor }}
platforms: linux/amd64
load: true
cache-from: type=gha
cache-to: type=gha,mode=min
- name: "Build flavor: full"
uses: docker/build-push-action@v4
with:
context: .
file: ./Dockerfile
tags: crowdsecurity/crowdsec:test
target: full
platforms: linux/amd64
load: true
cache-from: type=gha
cache-to: type=gha,mode=min
- name: "Build flavor: full (debian)"
uses: docker/build-push-action@v4
with:
context: .
file: ./Dockerfile.debian
tags: crowdsecurity/crowdsec:test-debian
target: full
platforms: linux/amd64 platforms: linux/amd64
load: true load: true
cache-from: type=gha cache-from: type=gha
cache-to: type=gha,mode=min cache-to: type=gha,mode=min
- name: "Setup Python" - name: "Setup Python"
uses: actions/setup-python@v4 uses: actions/setup-python@v5
with: with:
python-version: "3.x" python-version: "3.x"
@ -88,15 +59,15 @@ jobs:
cd docker/test cd docker/test
python -m pip install --upgrade pipenv wheel python -m pip install --upgrade pipenv wheel
- name: "Cache virtualenvs" #- name: "Cache virtualenvs"
id: cache-pipenv # id: cache-pipenv
uses: actions/cache@v3 # uses: actions/cache@v4
with: # with:
path: ~/.local/share/virtualenvs # path: ~/.local/share/virtualenvs
key: ${{ runner.os }}-pipenv-${{ hashFiles('**/Pipfile.lock') }} # key: ${{ runner.os }}-pipenv-${{ hashFiles('**/Pipfile.lock') }}
- name: "Install dependencies" - name: "Install dependencies"
if: steps.cache-pipenv.outputs.cache-hit != 'true' #if: steps.cache-pipenv.outputs.cache-hit != 'true'
run: | run: |
cd docker/test cd docker/test
pipenv install --deploy pipenv install --deploy
@ -107,9 +78,10 @@ jobs:
- name: "Run tests" - name: "Run tests"
env: env:
CROWDSEC_TEST_VERSION: test CROWDSEC_TEST_VERSION: test
CROWDSEC_TEST_FLAVORS: slim,debian CROWDSEC_TEST_FLAVORS: ${{ matrix.flavor }}
CROWDSEC_TEST_NETWORK: net-test CROWDSEC_TEST_NETWORK: net-test
CROWDSEC_TEST_TIMEOUT: 90 CROWDSEC_TEST_TIMEOUT: 90
# running serially to reduce test flakiness
run: | run: |
cd docker/test cd docker/test
pipenv run pytest -n 2 --durations=0 --color=yes pipenv run pytest -n 1 --durations=0 --color=yes

View file

@ -20,41 +20,25 @@ env:
jobs: jobs:
build: build:
strategy:
matrix:
go-version: ["1.20.4"]
name: "Build + tests" name: "Build + tests"
runs-on: windows-2022 runs-on: windows-2022
steps: steps:
- name: "Set up Go ${{ matrix.go-version }}"
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
- name: Check out CrowdSec repository - name: Check out CrowdSec repository
uses: actions/checkout@v3 uses: actions/checkout@v4
with: with:
fetch-depth: 0 fetch-depth: 0
submodules: false submodules: false
- name: Cache Go modules - name: "Set up Go"
uses: actions/cache@v3 uses: actions/setup-go@v5
with: with:
path: | go-version: "1.22.2"
~/go/pkg/mod
~/.cache/go-build
~/Library/Caches/go-build
%LocalAppData%\go-build
key: ${{ runner.os }}-${{ matrix.go-version }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ matrix.go-version }}-go-
- name: Build - name: Build
run: | run: |
make build make build BUILD_RE2_WASM=1
- name: Run tests - name: Run tests
run: | run: |
@ -64,15 +48,16 @@ jobs:
cat out.txt | sed 's/ *coverage:.*of statements in.*//' | richgo testfilter cat out.txt | sed 's/ *coverage:.*of statements in.*//' | richgo testfilter
- name: Upload unit coverage to Codecov - name: Upload unit coverage to Codecov
uses: codecov/codecov-action@v3 uses: codecov/codecov-action@v4
with: with:
files: coverage.out files: coverage.out
flags: unit-windows flags: unit-windows
token: ${{ secrets.CODECOV_TOKEN }}
- name: golangci-lint - name: golangci-lint
uses: golangci/golangci-lint-action@v3 uses: golangci/golangci-lint-action@v4
with: with:
version: v1.51 version: v1.57
args: --issues-exit-code=1 --timeout 10m args: --issues-exit-code=1 --timeout 10m
only-new-issues: false only-new-issues: false
# the cache is already managed above, enabling it here # the cache is already managed above, enabling it here

View file

@ -24,23 +24,18 @@ env:
RICHGO_FORCE_COLOR: 1 RICHGO_FORCE_COLOR: 1
AWS_HOST: localstack AWS_HOST: localstack
# these are to mimic aws config # these are to mimic aws config
AWS_ACCESS_KEY_ID: AKIAIOSFODNN7EXAMPLE AWS_ACCESS_KEY_ID: test
AWS_SECRET_ACCESS_KEY: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY AWS_SECRET_ACCESS_KEY: test
AWS_REGION: us-east-1 AWS_REGION: us-east-1
KINESIS_INITIALIZE_STREAMS: "stream-1-shard:1,stream-2-shards:2"
CROWDSEC_FEATURE_DISABLE_HTTP_RETRY_BACKOFF: true CROWDSEC_FEATURE_DISABLE_HTTP_RETRY_BACKOFF: true
jobs: jobs:
build: build:
strategy:
matrix:
go-version: ["1.20.4"]
name: "Build + tests" name: "Build + tests"
runs-on: ubuntu-latest runs-on: ubuntu-latest
services: services:
localstack: localstack:
image: localstack/localstack:1.3.0 image: localstack/localstack:3.0
ports: ports:
- 4566:4566 # Localstack exposes all services on the same port - 4566:4566 # Localstack exposes all services on the same port
env: env:
@ -49,7 +44,7 @@ jobs:
KINESIS_ERROR_PROBABILITY: "" KINESIS_ERROR_PROBABILITY: ""
DOCKER_HOST: unix:///var/run/docker.sock DOCKER_HOST: unix:///var/run/docker.sock
KINESIS_INITIALIZE_STREAMS: ${{ env.KINESIS_INITIALIZE_STREAMS }} KINESIS_INITIALIZE_STREAMS: ${{ env.KINESIS_INITIALIZE_STREAMS }}
HOSTNAME_EXTERNAL: ${{ env.AWS_HOST }} # Required so that resource urls are provided properly LOCALSTACK_HOST: ${{ env.AWS_HOST }} # Required so that resource urls are provided properly
# e.g sqs url will get localhost if we don't set this env to map our service # e.g sqs url will get localhost if we don't set this env to map our service
options: >- options: >-
--name=localstack --name=localstack
@ -58,7 +53,7 @@ jobs:
--health-timeout=5s --health-timeout=5s
--health-retries=3 --health-retries=3
zoo1: zoo1:
image: confluentinc/cp-zookeeper:7.3.0 image: confluentinc/cp-zookeeper:7.4.3
ports: ports:
- "2181:2181" - "2181:2181"
env: env:
@ -108,49 +103,62 @@ jobs:
--health-timeout 10s --health-timeout 10s
--health-retries 5 --health-retries 5
loki:
image: grafana/loki:2.9.1
ports:
- "3100:3100"
options: >-
--name=loki1
--health-cmd "wget -q -O - http://localhost:3100/ready | grep 'ready'"
--health-interval 30s
--health-timeout 10s
--health-retries 5
--health-start-period 30s
steps: steps:
- name: "Set up Go ${{ matrix.go-version }}"
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
- name: Check out CrowdSec repository - name: Check out CrowdSec repository
uses: actions/checkout@v3 uses: actions/checkout@v4
with: with:
fetch-depth: 0 fetch-depth: 0
submodules: false submodules: false
- name: Cache Go modules - name: "Set up Go"
uses: actions/cache@v3 uses: actions/setup-go@v5
with: with:
path: | go-version: "1.22.2"
~/go/pkg/mod
~/.cache/go-build
~/Library/Caches/go-build
%LocalAppData%\go-build
key: ${{ runner.os }}-${{ matrix.go-version }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ matrix.go-version }}-go-
- name: Build and run tests - name: Create localstack streams
run: | run: |
aws --endpoint-url=http://127.0.0.1:4566 --region us-east-1 kinesis create-stream --stream-name stream-1-shard --shard-count 1
aws --endpoint-url=http://127.0.0.1:4566 --region us-east-1 kinesis create-stream --stream-name stream-2-shards --shard-count 2
- name: Build and run tests, static
run: |
sudo apt -qq -y -o=Dpkg::Use-Pty=0 install build-essential libre2-dev
go install github.com/ory/go-acc@v0.2.8 go install github.com/ory/go-acc@v0.2.8
go install github.com/kyoh86/richgo@v0.3.10 go install github.com/kyoh86/richgo@v0.3.10
set -o pipefail set -o pipefail
make build make build BUILD_STATIC=1
make go-acc | richgo testfilter make go-acc | sed 's/ *coverage:.*of statements in.*//' | richgo testfilter
- name: Run tests again, dynamic
run: |
make clean build
set -o pipefail
make go-acc | sed 's/ *coverage:.*of statements in.*//' | richgo testfilter
- name: Upload unit coverage to Codecov - name: Upload unit coverage to Codecov
uses: codecov/codecov-action@v3 uses: codecov/codecov-action@v4
with: with:
files: coverage.out files: coverage.out
flags: unit-linux flags: unit-linux
token: ${{ secrets.CODECOV_TOKEN }}
- name: golangci-lint - name: golangci-lint
uses: golangci/golangci-lint-action@v3 uses: golangci/golangci-lint-action@v4
with: with:
version: v1.51 version: v1.57
args: --issues-exit-code=1 --timeout 10m args: --issues-exit-code=1 --timeout 10m
only-new-issues: false only-new-issues: false
# the cache is already managed above, enabling it here # the cache is already managed above, enabling it here

View file

@ -23,7 +23,7 @@ jobs:
runs-on: ubuntu-latest
steps:
# Semantic versioning, lock to different version: v2, v2.0 or a commit hash.
- uses: BirthdayResearch/oss-governance-bot@v3 → - uses: BirthdayResearch/oss-governance-bot@v4
with:
# You can use a PAT to post a comment/label/status so that it shows up as a user instead of github-actions
github-token: ${{secrets.GITHUB_TOKEN}} # optional, default to '${{ github.token }}'

View file

@ -0,0 +1,47 @@
name: (push-master) Publish latest Docker images
on:
push:
branches: [ master ]
paths:
- 'pkg/**'
- 'cmd/**'
- 'mk/**'
- 'docker/docker_start.sh'
- 'docker/config.yaml'
- '.github/workflows/publish-docker-master.yml'
- '.github/workflows/publish-docker.yml'
- 'Dockerfile'
- 'Dockerfile.debian'
- 'go.mod'
- 'go.sum'
- 'Makefile'
jobs:
dev-alpine:
uses: ./.github/workflows/publish-docker.yml
with:
platform: linux/amd64
crowdsec_version: ""
image_version: dev
latest: false
push: true
slim: false
debian: false
secrets:
DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
dev-debian:
uses: ./.github/workflows/publish-docker.yml
with:
platform: linux/amd64
crowdsec_version: ""
image_version: dev
latest: false
push: true
slim: false
debian: true
secrets:
DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}

View file

@ -0,0 +1,48 @@
name: (manual) Publish Docker images
on:
workflow_dispatch:
inputs:
image_version:
description: Docker Image version (base tag, i.e. v1.6.0-2)
required: true
crowdsec_version:
description: Crowdsec version (BUILD_VERSION)
required: true
latest:
description: Overwrite latest (and slim) tags?
default: false
required: true
push:
description: Really push?
default: false
required: true
jobs:
alpine:
uses: ./.github/workflows/publish-docker.yml
secrets:
DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
with:
image_version: ${{ github.event.inputs.image_version }}
crowdsec_version: ${{ github.event.inputs.crowdsec_version }}
latest: ${{ github.event.inputs.latest == 'true' }}
push: ${{ github.event.inputs.push == 'true' }}
slim: true
debian: false
platform: "linux/amd64,linux/386,linux/arm64,linux/arm/v7,linux/arm/v6"
debian:
uses: ./.github/workflows/publish-docker.yml
secrets:
DOCKER_USERNAME: ${{ secrets.DOCKER_USERNAME }}
DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
with:
image_version: ${{ github.event.inputs.image_version }}
crowdsec_version: ${{ github.event.inputs.crowdsec_version }}
latest: ${{ github.event.inputs.latest == 'true' }}
push: ${{ github.event.inputs.push == 'true' }}
slim: false
debian: true
platform: "linux/amd64,linux/386,linux/arm64"

.github/workflows/publish-docker.yml (new file, 125 lines)
View file

@ -0,0 +1,125 @@
name: (sub) Publish Docker images
on:
workflow_call:
secrets:
DOCKER_USERNAME:
required: true
DOCKER_PASSWORD:
required: true
inputs:
platform:
required: true
type: string
image_version:
required: true
type: string
crowdsec_version:
required: true
type: string
latest:
required: true
type: boolean
push:
required: true
type: boolean
slim:
required: true
type: boolean
debian:
required: true
type: boolean
jobs:
push_to_registry:
name: Push Docker image to registries
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
with:
config: .github/buildkit.toml
- name: Login to DockerHub
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Prepare (slim)
if: ${{ inputs.slim }}
id: slim
run: |
DOCKERHUB_IMAGE=${{ secrets.DOCKER_USERNAME }}/crowdsec
GHCR_IMAGE=ghcr.io/${{ github.repository_owner }}/crowdsec
VERSION=${{ inputs.image_version }}
DEBIAN=${{ inputs.debian && '-debian' || '' }}
TAGS="${DOCKERHUB_IMAGE}:${VERSION}-slim${DEBIAN},${GHCR_IMAGE}:${VERSION}-slim${DEBIAN}"
if [[ ${{ inputs.latest }} == true ]]; then
TAGS=$TAGS,${DOCKERHUB_IMAGE}:slim${DEBIAN},${GHCR_IMAGE}:slim${DEBIAN}
fi
echo "tags=${TAGS}" >> $GITHUB_OUTPUT
echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
- name: Prepare (full)
id: full
run: |
DOCKERHUB_IMAGE=${{ secrets.DOCKER_USERNAME }}/crowdsec
GHCR_IMAGE=ghcr.io/${{ github.repository_owner }}/crowdsec
VERSION=${{ inputs.image_version }}
DEBIAN=${{ inputs.debian && '-debian' || '' }}
TAGS="${DOCKERHUB_IMAGE}:${VERSION}${DEBIAN},${GHCR_IMAGE}:${VERSION}${DEBIAN}"
if [[ ${{ inputs.latest }} == true ]]; then
TAGS=$TAGS,${DOCKERHUB_IMAGE}:latest${DEBIAN},${GHCR_IMAGE}:latest${DEBIAN}
fi
echo "tags=${TAGS}" >> $GITHUB_OUTPUT
echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
- name: Build and push image (slim)
if: ${{ inputs.slim }}
uses: docker/build-push-action@v5
with:
context: .
file: ./Dockerfile${{ inputs.debian && '.debian' || '' }}
push: ${{ inputs.push }}
tags: ${{ steps.slim.outputs.tags }}
target: slim
platforms: ${{ inputs.platform }}
labels: |
org.opencontainers.image.source=${{ github.event.repository.html_url }}
org.opencontainers.image.created=${{ steps.slim.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
build-args: |
BUILD_VERSION=${{ inputs.crowdsec_version }}
- name: Build and push image (full)
uses: docker/build-push-action@v5
with:
context: .
file: ./Dockerfile${{ inputs.debian && '.debian' || '' }}
push: ${{ inputs.push }}
tags: ${{ steps.full.outputs.tags }}
target: full
platforms: ${{ inputs.platform }}
labels: |
org.opencontainers.image.source=${{ github.event.repository.html_url }}
org.opencontainers.image.created=${{ steps.full.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
build-args: |
BUILD_VERSION=${{ inputs.crowdsec_version }}

View file

@ -0,0 +1,40 @@
# .github/workflows/build-docker-image.yml
name: Release
on:
release:
types:
- prereleased
permissions:
# Use write for: hub release edit
contents: write
jobs:
build:
name: Build and upload binary package
runs-on: ubuntu-latest
steps:
- name: Check out code into the Go module directory
uses: actions/checkout@v4
with:
fetch-depth: 0
submodules: false
- name: "Set up Go"
uses: actions/setup-go@v5
with:
go-version: "1.22.2"
- name: Build the binaries
run: |
sudo apt -qq -y -o=Dpkg::Use-Pty=0 install build-essential libre2-dev
make vendor release BUILD_STATIC=1
- name: Upload to release
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
tag_name="${GITHUB_REF##*/}"
gh release upload "$tag_name" crowdsec-release.tgz vendor.tgz *-vendor.tar.xz

View file

@ -1,70 +0,0 @@
name: Publish Debian Docker image on Push to Master
on:
push:
branches: [ master ]
paths:
- 'pkg/**'
- 'cmd/**'
- 'plugins/**'
- 'docker/docker_start.sh'
- 'docker/config.yaml'
- '.github/workflows/publish_docker-image_on_master-debian.yml'
- 'Dockerfile.debian'
- 'go.mod'
- 'go.sum'
- 'Makefile'
jobs:
push_to_registry:
name: Push Debian Docker image to Docker Hub
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=crowdsecurity/crowdsec
GHCR_IMAGE=ghcr.io/${{ github.repository_owner }}/crowdsec
VERSION=dev-debian
TAGS="${DOCKER_IMAGE}:${VERSION},${GHCR_IMAGE}:${VERSION}"
echo "tags=${TAGS}" >> $GITHUB_OUTPUT
echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
config: .github/buildkit.toml
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push full image
uses: docker/build-push-action@v4
with:
context: .
file: ./Dockerfile.debian
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.prep.outputs.tags }}
platforms: linux/amd64
labels: |
org.opencontainers.image.source=${{ github.event.repository.html_url }}
org.opencontainers.image.created=${{ steps.prep.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=min

View file

@ -1,70 +0,0 @@
name: Publish Docker image on Push to Master
on:
push:
branches: [ master ]
paths:
- 'pkg/**'
- 'cmd/**'
- 'plugins/**'
- 'docker/docker_start.sh'
- 'docker/config.yaml'
- '.github/workflows/publish_docker-image_on_master.yml'
- 'Dockerfile'
- 'go.mod'
- 'go.sum'
- 'Makefile'
jobs:
push_to_registry:
name: Push Docker image to Docker Hub
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=crowdsecurity/crowdsec
GHCR_IMAGE=ghcr.io/${{ github.repository_owner }}/crowdsec
VERSION=dev
TAGS="${DOCKER_IMAGE}:${VERSION},${GHCR_IMAGE}:${VERSION}"
echo "tags=${TAGS}" >> $GITHUB_OUTPUT
echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
config: .github/buildkit.toml
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push full image
uses: docker/build-push-action@v4
with:
context: .
file: ./Dockerfile
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.prep.outputs.tags }}
platforms: linux/amd64
labels: |
org.opencontainers.image.source=${{ github.event.repository.html_url }}
org.opencontainers.image.created=${{ steps.prep.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
cache-from: type=gha
cache-to: type=gha,mode=min

View file

@ -1,54 +0,0 @@
# .github/workflows/build-docker-image.yml
name: build
on:
release:
types:
- prereleased
permissions:
# Use write for: hub release edit
contents: write
jobs:
build:
strategy:
matrix:
go-version: ["1.20.4"]
name: Build and upload binary package
runs-on: ubuntu-latest
steps:
- name: "Set up Go ${{ matrix.go-version }}"
uses: actions/setup-go@v3
with:
go-version: ${{ matrix.go-version }}
- name: Check out code into the Go module directory
uses: actions/checkout@v3
with:
fetch-depth: 0
submodules: false
- name: Cache Go modules
uses: actions/cache@v3
with:
path: |
~/go/pkg/mod
~/.cache/go-build
~/Library/Caches/go-build
%LocalAppData%\go-build
key: ${{ runner.os }}-${{ matrix.go-version }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-${{ matrix.go-version }}-go-
- name: Build the binaries
run: make release
- name: Upload to release
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
tag_name="${GITHUB_REF##*/}"
hub release edit -a crowdsec-release.tgz -m "" "$tag_name"

View file

@ -1,61 +0,0 @@
name: Publish Docker Debian image
on:
release:
types:
- released
- prereleased
workflow_dispatch:
jobs:
push_to_registry:
name: Push Docker debian image to Docker Hub
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=crowdsecurity/crowdsec
VERSION=bullseye
if [[ $GITHUB_REF == refs/tags/* ]]; then
VERSION=${GITHUB_REF#refs/tags/}
elif [[ $GITHUB_REF == refs/heads/* ]]; then
VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -E 's#/+#-#g')
elif [[ $GITHUB_REF == refs/pull/* ]]; then
VERSION=pr-${{ github.event.number }}
fi
TAGS="${DOCKER_IMAGE}:${VERSION}-debian"
if [[ "${{ github.event.action }}" == "released" ]]; then
TAGS=$TAGS,${DOCKER_IMAGE}:latest-debian
fi
echo "version=${VERSION}" >> $GITHUB_OUTPUT
echo "tags=${TAGS}" >> $GITHUB_OUTPUT
echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
config: .github/buildkit.toml
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build and push
uses: docker/build-push-action@v4
with:
context: .
file: ./Dockerfile.debian
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.prep.outputs.tags }}
platforms: linux/amd64,linux/arm64,linux/386
labels: |
org.opencontainers.image.source=${{ github.event.repository.html_url }}
org.opencontainers.image.created=${{ steps.prep.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}

View file

@ -1,86 +0,0 @@
name: Publish Docker image
on:
release:
types:
- released
- prereleased
jobs:
push_to_registry:
name: Push Docker image to Docker Hub
runs-on: ubuntu-latest
steps:
- name: Check out the repo
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Prepare
id: prep
run: |
DOCKER_IMAGE=crowdsecurity/crowdsec
GHCR_IMAGE=ghcr.io/${{ github.repository_owner }}/crowdsec
VERSION=edge
if [[ $GITHUB_REF == refs/tags/* ]]; then
VERSION=${GITHUB_REF#refs/tags/}
elif [[ $GITHUB_REF == refs/heads/* ]]; then
VERSION=$(echo ${GITHUB_REF#refs/heads/} | sed -E 's#/+#-#g')
elif [[ $GITHUB_REF == refs/pull/* ]]; then
VERSION=pr-${{ github.event.number }}
fi
TAGS="${DOCKER_IMAGE}:${VERSION},${GHCR_IMAGE}:${VERSION}"
TAGS_SLIM="${DOCKER_IMAGE}:${VERSION}-slim"
if [[ ${{ github.event.action }} == released ]]; then
TAGS=$TAGS,${DOCKER_IMAGE}:latest,${GHCR_IMAGE}:latest
TAGS_SLIM=$TAGS_SLIM,${DOCKER_IMAGE}:slim
fi
echo "version=${VERSION}" >> $GITHUB_OUTPUT
echo "tags=${TAGS}" >> $GITHUB_OUTPUT
echo "tags_slim=${TAGS_SLIM}" >> $GITHUB_OUTPUT
echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
with:
config: .github/buildkit.toml
- name: Login to DockerHub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v2
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Build and push slim image
uses: docker/build-push-action@v4
with:
context: .
file: ./Dockerfile
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.prep.outputs.tags_slim }}
target: slim
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6,linux/386
labels: |
org.opencontainers.image.source=${{ github.event.repository.html_url }}
org.opencontainers.image.created=${{ steps.prep.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}
- name: Build and push full image
uses: docker/build-push-action@v4
with:
context: .
file: ./Dockerfile
push: ${{ github.event_name != 'pull_request' }}
tags: ${{ steps.prep.outputs.tags }}
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v6,linux/386
labels: |
org.opencontainers.image.source=${{ github.event.repository.html_url }}
org.opencontainers.image.created=${{ steps.prep.outputs.created }}
org.opencontainers.image.revision=${{ github.sha }}

View file

@ -1,4 +1,4 @@
name: Update Docker Hub README → name: (push-master) Update Docker Hub README
on:
push:
@ -13,7 +13,7 @@ jobs:
steps:
-
name: Check out the repo
uses: actions/checkout@v3 → uses: actions/checkout@v4
if: ${{ github.repository_owner == 'crowdsecurity' }}
-
name: Update docker hub README

.gitignore (18 lines changed)
View file

@ -6,7 +6,10 @@
*.dylib
*~
.pc
+ # IDEs
.vscode
+ .idea
# If vendor is included, allow prebuilt (wasm?) libraries.
!vendor/**/*.so
@ -15,6 +18,9 @@
*.test
*.cover
+ # Test dependencies
+ test/tools/*
# VMs used for dev/test
.vagrant
@ -30,17 +36,15 @@ test/coverage/*
*.swp
*.swo
# Dependency directories (remove the comment below to include it) → # Dependencies are not vendored by default, but a tarball is created by "make vendor"
# vendor/ → # and provided in the release. Used by gentoo, etc.
+ vendor/
+ vendor.tgz
# crowdsec binaries
cmd/crowdsec-cli/cscli
cmd/crowdsec/crowdsec
plugins/notifications/http/notification-http → cmd/notification-*/notification-*
- plugins/notifications/slack/notification-slack
- plugins/notifications/splunk/notification-splunk
- plugins/notifications/email/notification-email
- plugins/notifications/dummy/notification-dummy
# Test cache (downloaded files)
.cache

.gitmodules (4 lines changed)
View file

@ -14,7 +14,3 @@
[submodule "tests/lib/bats-mock"]
path = test/lib/bats-mock
url = https://github.com/crowdsecurity/bats-mock.git
- [submodule "coraza"]
- path = coraza
- url = http://github.com/buixor/coraza
- branch = testing

View file

@ -1,38 +1,66 @@
# https://github.com/golangci/golangci-lint/blob/master/.golangci.reference.yml # https://github.com/golangci/golangci-lint/blob/master/.golangci.reference.yml
run:
skip-dirs:
- pkg/time/rate
skip-files:
- pkg/database/ent/generate.go
- pkg/yamlpatch/merge.go
- pkg/yamlpatch/merge_test.go
linters-settings: linters-settings:
cyclop:
# lower this after refactoring
max-complexity: 48
gci:
sections:
- standard
- default
- prefix(github.com/crowdsecurity)
- prefix(github.com/crowdsecurity/crowdsec)
gomoddirectives:
replace-allow-list:
- golang.org/x/time/rate
gocognit:
# lower this after refactoring
min-complexity: 145
gocyclo: gocyclo:
min-complexity: 30 # lower this after refactoring
min-complexity: 48
funlen: funlen:
# Checks the number of lines in a function. # Checks the number of lines in a function.
# If lower than 0, disable the check. # If lower than 0, disable the check.
# Default: 60 # Default: 60
lines: -1 # lower this after refactoring
lines: 437
# Checks the number of statements in a function. # Checks the number of statements in a function.
# If lower than 0, disable the check. # If lower than 0, disable the check.
# Default: 40 # Default: 40
statements: -1 # lower this after refactoring
statements: 122
govet: govet:
check-shadowing: true enable-all: true
disable:
- reflectvaluecompare
- fieldalignment
lll: lll:
line-length: 140 # lower this after refactoring
line-length: 2607
maintidx:
# raise this after refactoring
under: 11
misspell: misspell:
locale: US locale: US
nestif:
# lower this after refactoring
min-complexity: 28
nlreturn:
block-size: 5
nolintlint: nolintlint:
allow-leading-space: true # don't require machine-readable nolint directives (i.e. with no leading space)
allow-unused: false # report any unused nolint directives allow-unused: false # report any unused nolint directives
require-explanation: false # don't require an explanation for nolint directives require-explanation: false # don't require an explanation for nolint directives
require-specific: false # don't require nolint directives to be specific about which linter is being skipped require-specific: false # don't require nolint directives to be specific about which linter is being skipped
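As configured above, nolintlint reports unused //nolint directives but does not insist on an explanation or on naming a specific linter. A typical directive still looks like this sketch (the linter name and the justification are illustrative, not taken from the repository):

package example

import "math/rand"

// A machine-readable nolint directive: no space before "nolint", a colon-separated
// linter list, and an optional trailing justification.
func weakToken() int {
	return rand.Int() //nolint:gosec // non-cryptographic use is fine here
}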
@ -40,104 +68,183 @@ linters-settings:
interfacebloat: interfacebloat:
max: 12 max: 12
depguard:
rules:
wrap:
deny:
- pkg: "github.com/pkg/errors"
desc: "errors.Wrap() is deprecated in favor of fmt.Errorf()"
files:
- "!**/pkg/database/*.go"
- "!**/pkg/exprhelpers/*.go"
- "!**/pkg/acquisition/modules/appsec/appsec.go"
- "!**/pkg/acquisition/modules/loki/internal/lokiclient/loki_client.go"
- "!**/pkg/apiserver/controllers/v1/errors.go"
yaml:
files:
- "!**/pkg/acquisition/acquisition.go"
- "!**/pkg/acquisition/acquisition_test.go"
- "!**/pkg/acquisition/modules/appsec/appsec.go"
- "!**/pkg/acquisition/modules/cloudwatch/cloudwatch.go"
- "!**/pkg/acquisition/modules/docker/docker.go"
- "!**/pkg/acquisition/modules/file/file.go"
- "!**/pkg/acquisition/modules/journalctl/journalctl.go"
- "!**/pkg/acquisition/modules/kafka/kafka.go"
- "!**/pkg/acquisition/modules/kinesis/kinesis.go"
- "!**/pkg/acquisition/modules/kubernetesaudit/k8s_audit.go"
- "!**/pkg/acquisition/modules/loki/loki.go"
- "!**/pkg/acquisition/modules/loki/timestamp_test.go"
- "!**/pkg/acquisition/modules/s3/s3.go"
- "!**/pkg/acquisition/modules/syslog/syslog.go"
- "!**/pkg/acquisition/modules/wineventlog/wineventlog_windows.go"
- "!**/pkg/appsec/appsec.go"
- "!**/pkg/appsec/loader.go"
- "!**/pkg/csplugin/broker.go"
- "!**/pkg/csplugin/broker_test.go"
- "!**/pkg/dumps/bucket_dump.go"
- "!**/pkg/dumps/parser_dump.go"
- "!**/pkg/hubtest/coverage.go"
- "!**/pkg/hubtest/hubtest_item.go"
- "!**/pkg/hubtest/parser_assert.go"
- "!**/pkg/hubtest/scenario_assert.go"
- "!**/pkg/leakybucket/buckets_test.go"
- "!**/pkg/leakybucket/manager_load.go"
- "!**/pkg/metabase/metabase.go"
- "!**/pkg/parser/node.go"
- "!**/pkg/parser/node_test.go"
- "!**/pkg/parser/parsing_test.go"
- "!**/pkg/parser/stage.go"
deny:
- pkg: "gopkg.in/yaml.v2"
desc: "yaml.v2 is deprecated for new code in favor of yaml.v3"
wsl:
# Allow blocks to end with comments
allow-trailing-comment: true
linters: linters:
enable-all: true enable-all: true
disable: disable:
# #
# DEPRECATED by golangi-lint # DEPRECATED by golangi-lint
# #
- deadcode # The owner seems to have abandoned the linter. Replaced by unused. - deadcode
- exhaustivestruct # The owner seems to have abandoned the linter. Replaced by exhaustruct. - exhaustivestruct
- golint # Golint differs from gofmt. Gofmt reformats Go source code, whereas golint prints out style mistakes - golint
- ifshort # Checks that your code uses short syntax for if-statements whenever possible - ifshort
- interfacer # Linter that suggests narrower interface types - interfacer
- maligned # Tool to detect Go structs that would take less memory if their fields were sorted - maligned
- nosnakecase # nosnakecase is a linter that detects snake case of variable naming and function name. - nosnakecase
- scopelint # Scopelint checks for unpinned variables in go programs - scopelint
- structcheck # The owner seems to have abandoned the linter. Replaced by unused. - structcheck
- varcheck # The owner seems to have abandoned the linter. Replaced by unused. - varcheck
#
# Disabled until fixed for go 1.22
#
- copyloopvar # copyloopvar is a linter detects places where loop variables are copied
- intrange # intrange is a linter to find places where for loops could make use of an integer range.
# #
# Enabled # Enabled
# #
# - asasalint # check for pass []any as any in variadic func(...any) # - asasalint # check for pass []any as any in variadic func(...any)
# - asciicheck # Simple linter to check that your code does not contain non-ASCII identifiers # - asciicheck # checks that all code identifiers does not have non-ASCII symbols in the name
# - bidichk # Checks for dangerous unicode character sequences # - bidichk # Checks for dangerous unicode character sequences
# - bodyclose # checks whether HTTP response body is closed successfully
# - cyclop # checks function and package cyclomatic complexity
# - decorder # check declaration order and count of types, constants, variables and functions # - decorder # check declaration order and count of types, constants, variables and functions
# - depguard # Go linter that checks if package imports are in a list of acceptable packages # - depguard # Go linter that checks if package imports are in a list of acceptable packages
# - dupword # checks for duplicate words in the source code # - dupword # checks for duplicate words in the source code
# - durationcheck # check for two durations multiplied together # - durationcheck # check for two durations multiplied together
# - errcheck # Errcheck is a program for checking for unchecked errors in go programs. These unchecked errors can be critical bugs in some cases # - errcheck # errcheck is a program for checking for unchecked errors in Go code. These unchecked errors can be critical bugs in some cases
# - errorlint # errorlint is a linter for that can be used to find code that will cause problems with the error wrapping scheme introduced in Go 1.13.
# - execinquery # execinquery is a linter about query string checker in Query function which reads your Go src files and warning it finds
# - exportloopref # checks for pointers to enclosing loop variables # - exportloopref # checks for pointers to enclosing loop variables
# - funlen # Tool for detection of long functions # - funlen # Tool for detection of long functions
# - ginkgolinter # enforces standards of using ginkgo and gomega # - ginkgolinter # enforces standards of using ginkgo and gomega
# - gocheckcompilerdirectives # Checks that go compiler directive comments (//go:) are valid.
# - gochecknoinits # Checks that no init functions are present in Go code # - gochecknoinits # Checks that no init functions are present in Go code
# - gochecksumtype # Run exhaustiveness checks on Go "sum types"
# - gocognit # Computes and checks the cognitive complexity of functions
# - gocritic # Provides diagnostics that check for bugs, performance and style issues. # - gocritic # Provides diagnostics that check for bugs, performance and style issues.
# - gocyclo # Computes and checks the cyclomatic complexity of functions
# - goheader # Checks is file header matches to pattern # - goheader # Checks is file header matches to pattern
# - gomoddirectives # Manage the use of 'replace', 'retract', and 'excludes' directives in go.mod. # - gomoddirectives # Manage the use of 'replace', 'retract', and 'excludes' directives in go.mod.
# - gomodguard # Allow and block list linter for direct Go module dependencies. This is different from depguard where there are different block types for example version constraints and module recommendations. # - gomodguard # Allow and block list linter for direct Go module dependencies. This is different from depguard where there are different block types for example version constraints and module recommendations.
# - goprintffuncname # Checks that printf-like functions are named with `f` at the end # - goprintffuncname # Checks that printf-like functions are named with `f` at the end
# - gosimple # (megacheck): Linter for Go source code that specializes in simplifying a code # - gosimple # (megacheck): Linter for Go source code that specializes in simplifying code
# - govet # (vet, vetshadow): Vet examines Go source code and reports suspicious constructs, such as Printf calls whose arguments do not align with the format string # - gosmopolitan # Report certain i18n/l10n anti-patterns in your Go codebase
# - grouper # An analyzer to analyze expression groups. # - govet # (vet, vetshadow): Vet examines Go source code and reports suspicious constructs. It is roughly the same as 'go vet' and uses its passes.
# - grouper # Analyze expression groups.
# - importas # Enforces consistent import aliases # - importas # Enforces consistent import aliases
# - ineffassign # Detects when assignments to existing variables are not used # - ineffassign # Detects when assignments to existing variables are not used
# - interfacebloat # A linter that checks the number of methods inside an interface. # - interfacebloat # A linter that checks the number of methods inside an interface.
# - lll # Reports long lines
# - loggercheck # (logrlint): Checks key value pairs for common logger libraries (kitlog,klog,logr,zap).
# - logrlint # Check logr arguments. # - logrlint # Check logr arguments.
# - maintidx # maintidx measures the maintainability index of each function.
# - makezero # Finds slice declarations with non-zero initial length # - makezero # Finds slice declarations with non-zero initial length
# - misspell # Finds commonly misspelled English words in comments # - mirror # reports wrong mirror patterns of bytes/strings usage
# - misspell # Finds commonly misspelled English words
# - nakedret # Checks that functions with naked returns are not longer than a maximum size (can be zero).
# - nestif # Reports deeply nested if statements
# - nilerr # Finds the code that returns nil even if it checks that the error is not nil. # - nilerr # Finds the code that returns nil even if it checks that the error is not nil.
# - nolintlint # Reports ill-formed or insufficient nolint directives # - nolintlint # Reports ill-formed or insufficient nolint directives
# - nonamedreturns # Reports all named returns
# - nosprintfhostport # Checks for misuse of Sprintf to construct a host with port in a URL.
# - perfsprint # Checks that fmt.Sprintf can be replaced with a faster alternative.
# - predeclared # find code that shadows one of Go's predeclared identifiers # - predeclared # find code that shadows one of Go's predeclared identifiers
# - reassign # Checks that package variables are not reassigned # - reassign # Checks that package variables are not reassigned
# - rowserrcheck # checks whether Err of rows is checked successfully # - rowserrcheck # checks whether Rows.Err of rows is checked successfully
# - sqlclosecheck # Checks that sql.Rows and sql.Stmt are closed. # - sloglint # ensure consistent code style when using log/slog
# - staticcheck # (megacheck): Staticcheck is a go vet on steroids, applying a ton of static analysis checks # - spancheck # Checks for mistakes with OpenTelemetry/Census spans.
# - testableexamples # linter checks if examples are testable (have an expected output) # - sqlclosecheck # Checks that sql.Rows, sql.Stmt, sqlx.NamedStmt, pgx.Query are closed.
# - staticcheck # (megacheck): It's a set of rules from staticcheck. It's not the same thing as the staticcheck binary. The author of staticcheck doesn't support or approve the use of staticcheck as a library inside golangci-lint.
# - tenv # tenv is analyzer that detects using os.Setenv instead of t.Setenv since Go1.17 # - tenv # tenv is analyzer that detects using os.Setenv instead of t.Setenv since Go1.17
# - testableexamples # linter checks if examples are testable (have an expected output)
# - testifylint # Checks usage of github.com/stretchr/testify.
# - tparallel # tparallel detects inappropriate usage of t.Parallel() method in your Go test codes # - tparallel # tparallel detects inappropriate usage of t.Parallel() method in your Go test codes
# - typecheck # Like the front-end of a Go compiler, parses and type-checks Go code
# - unconvert # Remove unnecessary type conversions # - unconvert # Remove unnecessary type conversions
# - unused # (megacheck): Checks Go code for unused constants, variables, functions and types # - unused # (megacheck): Checks Go code for unused constants, variables, functions and types
# - usestdlibvars # A linter that detect the possibility to use variables/constants from the Go standard library. # - usestdlibvars # A linter that detect the possibility to use variables/constants from the Go standard library.
# - wastedassign # Finds wasted assignment statements
# - zerologlint # Detects the wrong usage of `zerolog` that a user forgets to dispatch with `Send` or `Msg`
# #
# Recommended? (easy) # Recommended? (easy)
# #
- dogsled # Checks assignments with too many blank identifiers (e.g. x, _, _, _, := f()) - dogsled # Checks assignments with too many blank identifiers (e.g. x, _, _, _, := f())
- errchkjson # Checks types passed to the json encoding functions. Reports unsupported types and optionally reports occations, where the check for the returned error can be omitted. - errchkjson # Checks types passed to the json encoding functions. Reports unsupported types and reports occations, where the check for the returned error can be omitted.
- errorlint # errorlint is a linter for that can be used to find code that will cause problems with the error wrapping scheme introduced in Go 1.13.
- exhaustive # check exhaustiveness of enum switch statements - exhaustive # check exhaustiveness of enum switch statements
- gci # Gci control golang package import order and make it always deterministic. - gci # Gci control golang package import order and make it always deterministic.
- godot # Check if comments end in a period - godot # Check if comments end in a period
- gofmt # Gofmt checks whether code was gofmt-ed. By default this tool runs with -s option to check for code simplification - gofmt # Gofmt checks whether code was gofmt-ed. By default this tool runs with -s option to check for code simplification
- goimports # In addition to fixing imports, goimports also formats your code in the same style as gofmt. - goimports # Check import statements are formatted according to the 'goimport' command. Reformat imports in autofix mode.
- gosec # (gas): Inspects source code for security problems - gosec # (gas): Inspects source code for security problems
- lll # Reports long lines - inamedparam # reports interfaces with unnamed method parameters
- musttag # enforce field tags in (un)marshaled structs - musttag # enforce field tags in (un)marshaled structs
- nakedret # Finds naked returns in functions greater than a specified function length
- nonamedreturns # Reports all named returns
- nosprintfhostport # Checks for misuse of Sprintf to construct a host with port in a URL.
- promlinter # Check Prometheus metrics naming via promlint - promlinter # Check Prometheus metrics naming via promlint
- protogetter # Reports direct reads from proto message fields when getters should be used
- revive # Fast, configurable, extensible, flexible, and beautiful linter for Go. Drop-in replacement of golint. - revive # Fast, configurable, extensible, flexible, and beautiful linter for Go. Drop-in replacement of golint.
- thelper # thelper detects golang test helpers without t.Helper() call and checks the consistency of test helpers - tagalign # check that struct tags are well aligned
- wastedassign # wastedassign finds wasted assignment statements. - thelper # thelper detects tests helpers which is not start with t.Helper() method.
- wrapcheck # Checks that errors returned from external packages are wrapped - wrapcheck # Checks that errors returned from external packages are wrapped
# #
# Recommended? (requires some work) # Recommended? (requires some work)
# #
- bodyclose # checks whether HTTP response body is closed successfully
- containedctx # containedctx is a linter that detects struct contained context.Context field - containedctx # containedctx is a linter that detects struct contained context.Context field
- contextcheck # check the function whether use a non-inherited context - contextcheck # check whether the function uses a non-inherited context
- errname # Checks that sentinel errors are prefixed with the `Err` and error types are suffixed with the `Error`. - errname # Checks that sentinel errors are prefixed with the `Err` and error types are suffixed with the `Error`.
- gomnd # An analyzer to detect magic numbers. - gomnd # An analyzer to detect magic numbers.
- ireturn # Accept Interfaces, Return Concrete Types - ireturn # Accept Interfaces, Return Concrete Types
- nilnil # Checks that there is no simultaneous return of `nil` error and an invalid value. - nilnil # Checks that there is no simultaneous return of `nil` error and an invalid value.
- noctx # noctx finds sending http request without context.Context - noctx # Finds sending http request without context.Context
- unparam # Reports unused function parameters - unparam # Reports unused function parameters
# #
@ -146,31 +253,25 @@ linters:
- gofumpt # Gofumpt checks whether code was gofumpt-ed. - gofumpt # Gofumpt checks whether code was gofumpt-ed.
- nlreturn # nlreturn checks for a new line before return and branch statements to increase code clarity - nlreturn # nlreturn checks for a new line before return and branch statements to increase code clarity
- whitespace # Tool for detection of leading and trailing whitespace - whitespace # Whitespace is a linter that checks for unnecessary newlines at the start and end of functions, if, for, etc.
- wsl # Whitespace Linter - Forces you to use empty lines! - wsl # add or remove empty lines
# #
# Well intended, but not ready for this # Well intended, but not ready for this
# #
- cyclop # checks function and package cyclomatic complexity
- dupl # Tool for code clone detection - dupl # Tool for code clone detection
- forcetypeassert # finds forced type assertions - forcetypeassert # finds forced type assertions
- gocognit # Computes and checks the cognitive complexity of functions
- gocyclo # Computes and checks the cyclomatic complexity of functions
- godox # Tool for detection of FIXME, TODO and other comment keywords - godox # Tool for detection of FIXME, TODO and other comment keywords
- goerr113 # Golang linter to check the errors handling expressions - goerr113 # Go linter to check the errors handling expressions
- maintidx # maintidx measures the maintainability index of each function. - paralleltest # Detects missing usage of t.Parallel() method in your Go test
- nestif # Reports deeply nested if statements
- paralleltest # paralleltest detects missing usage of t.Parallel() method in your Go test
- testpackage # linter that makes you use a separate _test package - testpackage # linter that makes you use a separate _test package
# #
# Too strict / too many false positives (for now?) # Too strict / too many false positives (for now?)
# #
- execinquery # execinquery is a linter about query string checker in Query function which reads your Go src files and warning it finds
- exhaustruct # Checks if all structure fields are initialized - exhaustruct # Checks if all structure fields are initialized
- forbidigo # Forbids identifiers - forbidigo # Forbids identifiers
- gochecknoglobals # check that no global variables exist - gochecknoglobals # Check that no global variables exist.
- goconst # Finds repeated strings that could be replaced by a constant - goconst # Finds repeated strings that could be replaced by a constant
- stylecheck # Stylecheck is a replacement for golint - stylecheck # Stylecheck is a replacement for golint
- tagliatelle # Checks the struct tags. - tagliatelle # Checks the struct tags.
@ -187,29 +288,30 @@ issues:
# “Look, thats why theres rules, understand? So that you think before you # “Look, thats why theres rules, understand? So that you think before you
# break em.” ― Terry Pratchett # break em.” ― Terry Pratchett
exclude-dirs:
- pkg/time/rate
exclude-files:
- pkg/yamlpatch/merge.go
- pkg/yamlpatch/merge_test.go
exclude-generated-strict: true
max-issues-per-linter: 0 max-issues-per-linter: 0
max-same-issues: 10 max-same-issues: 0
exclude-rules: exclude-rules:
- path: go.mod
text: "replacement are not allowed: golang.org/x/time/rate" # Won't fix:
# `err` is often shadowed, we may continue to do it # `err` is often shadowed, we may continue to do it
- linters: - linters:
- govet - govet
text: "shadow: declaration of \"err\" shadows declaration" text: "shadow: declaration of \"err\" shadows declaration"
#
# errcheck
#
- linters: - linters:
- errcheck - errcheck
text: "Error return value of `.*` is not checked" text: "Error return value of `.*` is not checked"
#
# gocritic
#
- linters: - linters:
- gocritic - gocritic
text: "ifElseChain: rewrite if-else to switch statement" text: "ifElseChain: rewrite if-else to switch statement"
@ -226,6 +328,55 @@ issues:
- gocritic - gocritic
text: "commentFormatting: put a space between `//` and comment text" text: "commentFormatting: put a space between `//` and comment text"
# Will fix, trivial - just beware of merge conflicts
- linters: - linters:
- staticcheck - perfsprint
text: "x509.ParseCRL has been deprecated since Go 1.19: Use ParseRevocationList instead" text: "fmt.Sprintf can be replaced .*"
- linters:
- perfsprint
text: "fmt.Errorf can be replaced with errors.New"
#
# Will fix, easy but some neurons required
#
- linters:
- errorlint
text: "non-wrapping format verb for fmt.Errorf. Use `%w` to format errors"
- linters:
- errorlint
text: "type assertion on error will fail on wrapped errors. Use errors.As to check for specific errors"
- linters:
- errorlint
text: "type switch on error will fail on wrapped errors. Use errors.As to check for specific errors"
- linters:
- errorlint
text: "type assertion on error will fail on wrapped errors. Use errors.Is to check for specific errors"
- linters:
- errorlint
text: "comparing with .* will fail on wrapped errors. Use errors.Is to check for a specific error"
- linters:
- errorlint
text: "switch on an error will fail on wrapped errors. Use errors.Is to check for specific errors"
- linters:
- nosprintfhostport
text: "host:port in url should be constructed with net.JoinHostPort and not directly with fmt.Sprintf"
# https://github.com/timakin/bodyclose
- linters:
- bodyclose
text: "response body must be closed"
# named/naked returns are evil, with a single exception
# https://go.dev/wiki/CodeReviewComments#named-result-parameters
- linters:
- nonamedreturns
text: "named return .* with type .* found"


@ -1,69 +1,63 @@
# vim: set ft=dockerfile: # vim: set ft=dockerfile:
ARG GOVERSION=1.20.4 FROM golang:1.22.2-alpine3.18 AS build
FROM golang:${GOVERSION}-alpine AS build ARG BUILD_VERSION
WORKDIR /go/src/crowdsec WORKDIR /go/src/crowdsec
COPY . . # We like to choose the release of re2 to use, and Alpine does not ship a static version anyway.
# Alpine does not ship a static version of re2, we can build it ourselves
# Later versions require 'abseil', which is likewise not available in its static form
ENV RE2_VERSION=2023-03-01 ENV RE2_VERSION=2023-03-01
ENV BUILD_VERSION=${BUILD_VERSION}
# wizard.sh requires GNU coreutils # wizard.sh requires GNU coreutils
RUN apk add --no-cache git g++ gcc libc-dev make bash gettext binutils-gold coreutils icu-static re2-dev pkgconfig && \ RUN apk add --no-cache git g++ gcc libc-dev make bash gettext binutils-gold coreutils pkgconfig && \
wget https://github.com/google/re2/archive/refs/tags/${RE2_VERSION}.tar.gz && \ wget https://github.com/google/re2/archive/refs/tags/${RE2_VERSION}.tar.gz && \
tar -xzf ${RE2_VERSION}.tar.gz && \ tar -xzf ${RE2_VERSION}.tar.gz && \
cd re2-${RE2_VERSION} && \ cd re2-${RE2_VERSION} && \
make && \
make install && \ make install && \
echo "githubciXXXXXXXXXXXXXXXXXXXXXXXX" > /etc/machine-id && \ echo "githubciXXXXXXXXXXXXXXXXXXXXXXXX" > /etc/machine-id && \
cd - && \ go install github.com/mikefarah/yq/v4@v4.43.1
make clean release DOCKER_BUILD=1 && \
COPY . .
RUN make clean release DOCKER_BUILD=1 BUILD_STATIC=1 && \
cd crowdsec-v* && \ cd crowdsec-v* && \
./wizard.sh --docker-mode && \ ./wizard.sh --docker-mode && \
cd - >/dev/null && \ cd - >/dev/null && \
cscli hub update && \ cscli hub update && \
cscli collections install crowdsecurity/linux && \ cscli collections install crowdsecurity/linux && \
cscli parsers install crowdsecurity/whitelists && \ cscli parsers install crowdsecurity/whitelists
go install github.com/mikefarah/yq/v4@v4.31.2
# In case we need to remove agents here.. # In case we need to remove agents here..
# cscli machines list -o json | yq '.[].machineId' | xargs -r cscli machines delete # cscli machines list -o json | yq '.[].machineId' | xargs -r cscli machines delete
FROM alpine:latest as slim FROM alpine:latest as slim
RUN apk add --no-cache --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community tzdata bash && \ RUN apk add --no-cache --repository=http://dl-cdn.alpinelinux.org/alpine/edge/community tzdata bash rsync && \
mkdir -p /staging/etc/crowdsec && \ mkdir -p /staging/etc/crowdsec && \
mkdir -p /staging/etc/crowdsec/acquis.d && \ mkdir -p /staging/etc/crowdsec/acquis.d && \
mkdir -p /staging/var/lib/crowdsec && \ mkdir -p /staging/var/lib/crowdsec && \
mkdir -p /var/lib/crowdsec/data mkdir -p /var/lib/crowdsec/data
COPY --from=build /go/bin/yq /usr/local/bin/yq COPY --from=build /go/bin/yq /usr/local/bin/crowdsec /usr/local/bin/cscli /usr/local/bin/
COPY --from=build /etc/crowdsec /staging/etc/crowdsec COPY --from=build /etc/crowdsec /staging/etc/crowdsec
COPY --from=build /usr/local/bin/crowdsec /usr/local/bin/crowdsec
COPY --from=build /usr/local/bin/cscli /usr/local/bin/cscli
COPY --from=build /go/src/crowdsec/docker/docker_start.sh / COPY --from=build /go/src/crowdsec/docker/docker_start.sh /
COPY --from=build /go/src/crowdsec/docker/config.yaml /staging/etc/crowdsec/config.yaml COPY --from=build /go/src/crowdsec/docker/config.yaml /staging/etc/crowdsec/config.yaml
COPY --from=build /var/lib/crowdsec /staging/var/lib/crowdsec
RUN yq -n '.url="http://0.0.0.0:8080"' | install -m 0600 /dev/stdin /staging/etc/crowdsec/local_api_credentials.yaml RUN yq -n '.url="http://0.0.0.0:8080"' | install -m 0600 /dev/stdin /staging/etc/crowdsec/local_api_credentials.yaml
ENTRYPOINT /bin/bash docker_start.sh ENTRYPOINT /bin/bash /docker_start.sh
FROM slim as plugins FROM slim as full
# Due to the wizard using cp -n, we have to copy the config files directly from the source as -n does not exist in busybox cp # Due to the wizard using cp -n, we have to copy the config files directly from the source as -n does not exist in busybox cp
# The files are here for reference, as users will need to mount a new version to be actually able to use notifications # The files are here for reference, as users will need to mount a new version to be actually able to use notifications
COPY --from=build /go/src/crowdsec/plugins/notifications/email/email.yaml /staging/etc/crowdsec/notifications/email.yaml COPY --from=build \
COPY --from=build /go/src/crowdsec/plugins/notifications/http/http.yaml /staging/etc/crowdsec/notifications/http.yaml /go/src/crowdsec/cmd/notification-email/email.yaml \
COPY --from=build /go/src/crowdsec/plugins/notifications/slack/slack.yaml /staging/etc/crowdsec/notifications/slack.yaml /go/src/crowdsec/cmd/notification-http/http.yaml \
COPY --from=build /go/src/crowdsec/plugins/notifications/splunk/splunk.yaml /staging/etc/crowdsec/notifications/splunk.yaml /go/src/crowdsec/cmd/notification-slack/slack.yaml \
/go/src/crowdsec/cmd/notification-splunk/splunk.yaml \
/go/src/crowdsec/cmd/notification-sentinel/sentinel.yaml \
/staging/etc/crowdsec/notifications/
COPY --from=build /usr/local/lib/crowdsec/plugins /usr/local/lib/crowdsec/plugins COPY --from=build /usr/local/lib/crowdsec/plugins /usr/local/lib/crowdsec/plugins
FROM slim as geoip
COPY --from=build /var/lib/crowdsec /staging/var/lib/crowdsec
FROM plugins as full
COPY --from=build /var/lib/crowdsec /staging/var/lib/crowdsec


@ -1,32 +1,42 @@
# vim: set ft=dockerfile: # vim: set ft=dockerfile:
ARG GOVERSION=1.20.4 FROM golang:1.22.2-bookworm AS build
FROM golang:${GOVERSION}-bullseye AS build ARG BUILD_VERSION
WORKDIR /go/src/crowdsec WORKDIR /go/src/crowdsec
COPY . .
ENV DEBIAN_FRONTEND=noninteractive ENV DEBIAN_FRONTEND=noninteractive
ENV DEBCONF_NOWARNINGS="yes" ENV DEBCONF_NOWARNINGS="yes"
# We like to choose the release of re2 to use, the debian version is usually older.
ENV RE2_VERSION=2023-03-01
ENV BUILD_VERSION=${BUILD_VERSION}
# wizard.sh requires GNU coreutils # wizard.sh requires GNU coreutils
RUN apt-get update && \ RUN apt-get update && \
apt-get install -y -q git gcc libc-dev make bash gettext binutils-gold coreutils tzdata libre2-dev && \ apt-get install -y -q git gcc libc-dev make bash gettext binutils-gold coreutils tzdata && \
wget https://github.com/google/re2/archive/refs/tags/${RE2_VERSION}.tar.gz && \
tar -xzf ${RE2_VERSION}.tar.gz && \
cd re2-${RE2_VERSION} && \
make && \
make install && \
echo "githubciXXXXXXXXXXXXXXXXXXXXXXXX" > /etc/machine-id && \ echo "githubciXXXXXXXXXXXXXXXXXXXXXXXX" > /etc/machine-id && \
make clean release DOCKER_BUILD=1 && \ go install github.com/mikefarah/yq/v4@v4.43.1
COPY . .
RUN make clean release DOCKER_BUILD=1 BUILD_STATIC=1 && \
cd crowdsec-v* && \ cd crowdsec-v* && \
./wizard.sh --docker-mode && \ ./wizard.sh --docker-mode && \
cd - >/dev/null && \ cd - >/dev/null && \
cscli hub update && \ cscli hub update && \
cscli collections install crowdsecurity/linux && \ cscli collections install crowdsecurity/linux && \
cscli parsers install crowdsecurity/whitelists && \ cscli parsers install crowdsecurity/whitelists
go install github.com/mikefarah/yq/v4@v4.31.2
# In case we need to remove agents here.. # In case we need to remove agents here..
# cscli machines list -o json | yq '.[].machineId' | xargs -r cscli machines delete # cscli machines list -o json | yq '.[].machineId' | xargs -r cscli machines delete
FROM debian:bullseye-slim as slim FROM debian:bookworm-slim as slim
ENV DEBIAN_FRONTEND=noninteractive ENV DEBIAN_FRONTEND=noninteractive
ENV DEBCONF_NOWARNINGS="yes" ENV DEBCONF_NOWARNINGS="yes"
@ -38,19 +48,15 @@ RUN apt-get update && \
iproute2 \ iproute2 \
ca-certificates \ ca-certificates \
bash \ bash \
tzdata && \ tzdata \
rsync && \
mkdir -p /staging/etc/crowdsec && \ mkdir -p /staging/etc/crowdsec && \
mkdir -p /staging/etc/crowdsec/acquis.d && \ mkdir -p /staging/etc/crowdsec/acquis.d && \
mkdir -p /staging/var/lib/crowdsec && \ mkdir -p /staging/var/lib/crowdsec && \
mkdir -p /var/lib/crowdsec/data mkdir -p /var/lib/crowdsec/data
RUN echo "deb http://deb.debian.org/debian bullseye-backports main" >> /etc/apt/sources.list \ COPY --from=build /go/bin/yq /usr/local/bin/crowdsec /usr/local/bin/cscli /usr/local/bin/
&& apt-get update && apt-get install -t bullseye-backports -y libsystemd0
COPY --from=build /go/bin/yq /usr/local/bin/yq
COPY --from=build /etc/crowdsec /staging/etc/crowdsec COPY --from=build /etc/crowdsec /staging/etc/crowdsec
COPY --from=build /usr/local/bin/crowdsec /usr/local/bin/crowdsec
COPY --from=build /usr/local/bin/cscli /usr/local/bin/cscli
COPY --from=build /go/src/crowdsec/docker/docker_start.sh / COPY --from=build /go/src/crowdsec/docker/docker_start.sh /
COPY --from=build /go/src/crowdsec/docker/config.yaml /staging/etc/crowdsec/config.yaml COPY --from=build /go/src/crowdsec/docker/config.yaml /staging/etc/crowdsec/config.yaml
RUN yq -n '.url="http://0.0.0.0:8080"' | install -m 0600 /dev/stdin /staging/etc/crowdsec/local_api_credentials.yaml && \ RUN yq -n '.url="http://0.0.0.0:8080"' | install -m 0600 /dev/stdin /staging/etc/crowdsec/local_api_credentials.yaml && \
@ -62,10 +68,14 @@ FROM slim as plugins
# Due to the wizard using cp -n, we have to copy the config files directly from the source as -n does not exist in busybox cp # Due to the wizard using cp -n, we have to copy the config files directly from the source as -n does not exist in busybox cp
# The files are here for reference, as users will need to mount a new version to be actually able to use notifications # The files are here for reference, as users will need to mount a new version to be actually able to use notifications
COPY --from=build /go/src/crowdsec/plugins/notifications/email/email.yaml /staging/etc/crowdsec/notifications/email.yaml COPY --from=build \
COPY --from=build /go/src/crowdsec/plugins/notifications/http/http.yaml /staging/etc/crowdsec/notifications/http.yaml /go/src/crowdsec/cmd/notification-email/email.yaml \
COPY --from=build /go/src/crowdsec/plugins/notifications/slack/slack.yaml /staging/etc/crowdsec/notifications/slack.yaml /go/src/crowdsec/cmd/notification-http/http.yaml \
COPY --from=build /go/src/crowdsec/plugins/notifications/splunk/splunk.yaml /staging/etc/crowdsec/notifications/splunk.yaml /go/src/crowdsec/cmd/notification-slack/slack.yaml \
/go/src/crowdsec/cmd/notification-splunk/splunk.yaml \
/go/src/crowdsec/cmd/notification-sentinel/sentinel.yaml \
/staging/etc/crowdsec/notifications/
COPY --from=build /usr/local/lib/crowdsec/plugins /usr/local/lib/crowdsec/plugins COPY --from=build /usr/local/lib/crowdsec/plugins /usr/local/lib/crowdsec/plugins
FROM slim as geoip FROM slim as geoip

Makefile

@ -1,41 +1,78 @@
include mk/platform.mk include mk/platform.mk
include mk/gmsl
# By default, this build requires the C++ re2 library to be installed.
#
# Debian/Ubuntu: apt install libre2-dev
# Fedora/CentOS: dnf install re2-devel
# FreeBSD: pkg install re2
# Alpine: apk add re2-dev
# Windows: choco install re2
# MacOS: brew install re2
# To build without re2, run "make BUILD_RE2_WASM=1"
# The WASM version is slower and introduces a short delay when starting a process
# (including cscli) so it is not recommended for production use.
BUILD_RE2_WASM ?= 0
# To build static binaries, run "make BUILD_STATIC=1".
# On some platforms, this requires additional packages
# (e.g. glibc-static and libstdc++-static on fedora, centos.. which are on the powertools/crb repository).
# If the static build fails at the link stage, it might be because the static library is not provided
# for your distribution (look for libre2.a). See the Dockerfile for an example of how to build it.
BUILD_STATIC ?= 0
# List of plugins to build
PLUGINS ?= $(patsubst ./cmd/notification-%,%,$(wildcard ./cmd/notification-*))
# Can be overriden, if you can deal with the consequences
BUILD_REQUIRE_GO_MAJOR ?= 1 BUILD_REQUIRE_GO_MAJOR ?= 1
BUILD_REQUIRE_GO_MINOR ?= 20 BUILD_REQUIRE_GO_MINOR ?= 21
GOCMD = go #--------------------------------------
GOTEST = $(GOCMD) test
GO = go
GOTEST = $(GO) test
BUILD_CODENAME ?= alphaga BUILD_CODENAME ?= alphaga
CROWDSEC_FOLDER = ./cmd/crowdsec CROWDSEC_FOLDER = ./cmd/crowdsec
CSCLI_FOLDER = ./cmd/crowdsec-cli/ CSCLI_FOLDER = ./cmd/crowdsec-cli/
PLUGINS_DIR_PREFIX = ./cmd/notification-
PLUGINS ?= $(patsubst ./plugins/notifications/%,%,$(wildcard ./plugins/notifications/*))
PLUGINS_DIR = ./plugins/notifications
CROWDSEC_BIN = crowdsec$(EXT) CROWDSEC_BIN = crowdsec$(EXT)
CSCLI_BIN = cscli$(EXT) CSCLI_BIN = cscli$(EXT)
# semver comparison to select the hub branch requires the version to start with "v"
ifneq ($(call substr,$(BUILD_VERSION),1,1),v)
$(error BUILD_VERSION "$(BUILD_VERSION)" should start with "v")
endif
# Directory for the release files # Directory for the release files
RELDIR = crowdsec-$(BUILD_VERSION) RELDIR = crowdsec-$(BUILD_VERSION)
GO_MODULE_NAME = github.com/crowdsecurity/crowdsec GO_MODULE_NAME = github.com/crowdsecurity/crowdsec
# see if we have libre2-dev installed for C++ optimizations # Check if a given value is considered truthy and returns "0" or "1".
RE2_CHECK := $(shell pkg-config --libs re2 2>/dev/null) # A truthy value is one of the following: "1", "yes", or "true", case-insensitive.
#
# Usage:
# ifeq ($(call bool,$(FOO)),1)
# $(info Let's foo)
# endif
bool = $(if $(filter $(call lc, $1),1 yes true),1,0)
#-------------------------------------- #--------------------------------------
# #
# Define MAKE_FLAGS and LD_OPTS for the sub-makefiles in cmd/ and plugins/ # Define MAKE_FLAGS and LD_OPTS for the sub-makefiles in cmd/
# #
MAKE_FLAGS = --no-print-directory GOARCH=$(GOARCH) GOOS=$(GOOS) RM="$(RM)" WIN_IGNORE_ERR="$(WIN_IGNORE_ERR)" CP="$(CP)" CPR="$(CPR)" MKDIR="$(MKDIR)" MAKE_FLAGS = --no-print-directory GOARCH=$(GOARCH) GOOS=$(GOOS) RM="$(RM)" WIN_IGNORE_ERR="$(WIN_IGNORE_ERR)" CP="$(CP)" CPR="$(CPR)" MKDIR="$(MKDIR)"
LD_OPTS_VARS= \ LD_OPTS_VARS= \
-X 'github.com/crowdsecurity/go-cs-lib/pkg/version.Version=$(BUILD_VERSION)' \ -X 'github.com/crowdsecurity/go-cs-lib/version.Version=$(BUILD_VERSION)' \
-X 'github.com/crowdsecurity/go-cs-lib/pkg/version.BuildDate=$(BUILD_TIMESTAMP)' \ -X 'github.com/crowdsecurity/go-cs-lib/version.BuildDate=$(BUILD_TIMESTAMP)' \
-X 'github.com/crowdsecurity/go-cs-lib/pkg/version.Tag=$(BUILD_TAG)' \ -X 'github.com/crowdsecurity/go-cs-lib/version.Tag=$(BUILD_TAG)' \
-X '$(GO_MODULE_NAME)/pkg/cwversion.Codename=$(BUILD_CODENAME)' \ -X '$(GO_MODULE_NAME)/pkg/cwversion.Codename=$(BUILD_CODENAME)' \
-X '$(GO_MODULE_NAME)/pkg/csconfig.defaultConfigDir=$(DEFAULT_CONFIGDIR)' \ -X '$(GO_MODULE_NAME)/pkg/csconfig.defaultConfigDir=$(DEFAULT_CONFIGDIR)' \
-X '$(GO_MODULE_NAME)/pkg/csconfig.defaultDataDir=$(DEFAULT_DATADIR)' -X '$(GO_MODULE_NAME)/pkg/csconfig.defaultDataDir=$(DEFAULT_DATADIR)'
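The -X options above overwrite package-level string variables at link time, which is how the binaries can report their version and default paths without a generated source file. A rough sketch of the receiving side, with a shortened, hypothetical module path (the real declarations live in go-cs-lib/version and pkg/cwversion):

// Package version is a sketch of a variable that -X can override at build time.
package version

// Built normally, Version stays "dev"; a release build replaces it with e.g.:
//   go build -ldflags "-X 'example.com/demo/version.Version=v1.6.0'"
var Version = "dev"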
@ -46,48 +83,107 @@ endif
GO_TAGS := netgo,osusergo,sqlite_omit_load_extension GO_TAGS := netgo,osusergo,sqlite_omit_load_extension
ifneq (,$(RE2_CHECK)) # this will be used by Go in the make target, some distributions require it
export PKG_CONFIG_PATH:=/usr/local/lib/pkgconfig:$(PKG_CONFIG_PATH)
ifeq ($(call bool,$(BUILD_RE2_WASM)),0)
ifeq ($(PKG_CONFIG),)
$(error "pkg-config is not available. Please install pkg-config.")
endif
ifeq ($(RE2_CHECK),)
RE2_FAIL := "libre2-dev is not installed, please install it or set BUILD_RE2_WASM=1 to use the WebAssembly version"
else
# += adds a space that we don't want # += adds a space that we don't want
GO_TAGS := $(GO_TAGS),re2_cgo GO_TAGS := $(GO_TAGS),re2_cgo
LD_OPTS_VARS += -X '$(GO_MODULE_NAME)/pkg/cwversion.Libre2=C++' LD_OPTS_VARS += -X '$(GO_MODULE_NAME)/pkg/cwversion.Libre2=C++'
endif endif
endif
export LD_OPTS=-ldflags "-s -w -extldflags '-static' $(LD_OPTS_VARS)" \ # Build static to avoid the runtime dependency on libre2.so
-trimpath -tags $(GO_TAGS) ifeq ($(call bool,$(BUILD_STATIC)),1)
BUILD_TYPE = static
EXTLDFLAGS := -extldflags '-static'
else
BUILD_TYPE = dynamic
EXTLDFLAGS :=
endif
ifneq (,$(TEST_COVERAGE)) # Build with debug symbols, and disable optimizations + inlining, to use Delve
ifeq ($(call bool,$(DEBUG)),1)
STRIP_SYMBOLS :=
DISABLE_OPTIMIZATION := -gcflags "-N -l"
else
STRIP_SYMBOLS := -s -w
DISABLE_OPTIMIZATION :=
endif
export LD_OPTS=-ldflags "$(STRIP_SYMBOLS) $(EXTLDFLAGS) $(LD_OPTS_VARS)" \
-trimpath -tags $(GO_TAGS) $(DISABLE_OPTIMIZATION)
ifeq ($(call bool,$(TEST_COVERAGE)),1)
LD_OPTS += -cover LD_OPTS += -cover
endif endif
#-------------------------------------- #--------------------------------------
.PHONY: build .PHONY: build
build: pre-build goversion crowdsec cscli plugins build: pre-build goversion crowdsec cscli plugins ## Build crowdsec, cscli and plugins
.PHONY: pre-build .PHONY: pre-build
pre-build: pre-build: ## Sanity checks and build information
ifdef BUILD_STATIC $(info Building $(BUILD_VERSION) ($(BUILD_TAG)) $(BUILD_TYPE) for $(GOOS)/$(GOARCH))
$(warning WARNING: The BUILD_STATIC variable is deprecated and has no effect. Builds are static by default since v1.5.0.)
ifneq (,$(RE2_FAIL))
$(error $(RE2_FAIL))
endif endif
$(info Building $(BUILD_VERSION) ($(BUILD_TAG)) for $(GOOS)/$(GOARCH))
ifneq (,$(RE2_CHECK)) ifneq (,$(RE2_CHECK))
$(info Using C++ regexp library) $(info Using C++ regexp library)
else else
$(info Fallback to WebAssembly regexp library. To use the C++ version, make sure you have installed libre2-dev and pkg-config.) $(info Fallback to WebAssembly regexp library. To use the C++ version, make sure you have installed libre2-dev and pkg-config.)
endif endif
ifeq ($(call bool,$(DEBUG)),1)
$(info Building with debug symbols and disabled optimizations)
endif
ifeq ($(call bool,$(TEST_COVERAGE)),1)
$(info Test coverage collection enabled)
endif
# intentional, empty line
$(info ) $(info )
.PHONY: all .PHONY: all
all: clean test build all: clean test build ## Clean, test and build (requires localstack)
.PHONY: plugins .PHONY: plugins
plugins: plugins: ## Build notification plugins
@$(foreach plugin,$(PLUGINS), \ @$(foreach plugin,$(PLUGINS), \
$(MAKE) -C $(PLUGINS_DIR)/$(plugin) build $(MAKE_FLAGS); \ $(MAKE) -C $(PLUGINS_DIR_PREFIX)$(plugin) build $(MAKE_FLAGS); \
) )
# same as "$(MAKE) -f debian/rules clean" but without the dependency on debhelper
.PHONY: clean-debian
clean-debian:
@$(RM) -r debian/crowdsec
@$(RM) -r debian/crowdsec
@$(RM) -r debian/files
@$(RM) -r debian/.debhelper
@$(RM) -r debian/*.substvars
@$(RM) -r debian/*-stamp
.PHONY: clean-rpm
clean-rpm:
@$(RM) -r rpm/BUILD
@$(RM) -r rpm/BUILDROOT
@$(RM) -r rpm/RPMS
@$(RM) -r rpm/SOURCES/*.tar.gz
@$(RM) -r rpm/SRPMS
.PHONY: clean .PHONY: clean
clean: testclean clean: clean-debian clean-rpm testclean ## Remove build artifacts
@$(MAKE) -C $(CROWDSEC_FOLDER) clean $(MAKE_FLAGS) @$(MAKE) -C $(CROWDSEC_FOLDER) clean $(MAKE_FLAGS)
@$(MAKE) -C $(CSCLI_FOLDER) clean $(MAKE_FLAGS) @$(MAKE) -C $(CSCLI_FOLDER) clean $(MAKE_FLAGS)
@$(RM) $(CROWDSEC_BIN) $(WIN_IGNORE_ERR) @$(RM) $(CROWDSEC_BIN) $(WIN_IGNORE_ERR)
@ -95,55 +191,65 @@ clean: testclean
@$(RM) *.log $(WIN_IGNORE_ERR) @$(RM) *.log $(WIN_IGNORE_ERR)
@$(RM) crowdsec-release.tgz $(WIN_IGNORE_ERR) @$(RM) crowdsec-release.tgz $(WIN_IGNORE_ERR)
@$(foreach plugin,$(PLUGINS), \ @$(foreach plugin,$(PLUGINS), \
$(MAKE) -C $(PLUGINS_DIR)/$(plugin) clean $(MAKE_FLAGS); \ $(MAKE) -C $(PLUGINS_DIR_PREFIX)$(plugin) clean $(MAKE_FLAGS); \
) )
.PHONY: cscli .PHONY: cscli
cscli: goversion cscli: goversion ## Build cscli
@$(MAKE) -C $(CSCLI_FOLDER) build $(MAKE_FLAGS) @$(MAKE) -C $(CSCLI_FOLDER) build $(MAKE_FLAGS)
.PHONY: crowdsec .PHONY: crowdsec
crowdsec: goversion crowdsec: goversion ## Build crowdsec
@$(MAKE) -C $(CROWDSEC_FOLDER) build $(MAKE_FLAGS) @$(MAKE) -C $(CROWDSEC_FOLDER) build $(MAKE_FLAGS)
.PHONY: generate
generate: ## Generate code for the database and APIs
$(GO) generate ./pkg/database/ent
$(GO) generate ./pkg/models
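For context, "go generate" only runs commands declared as //go:generate directives inside the listed packages; it does not build anything by itself. A minimal, hypothetical directive is shown below (the real directives presumably invoke the ent and swagger code generators):

// Illustrative only: go generate executes each directive from the package directory.
package example

//go:generate echo regenerating models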
.PHONY: testclean .PHONY: testclean
testclean: bats-clean testclean: bats-clean ## Remove test artifacts
@$(RM) pkg/apiserver/ent $(WIN_IGNORE_ERR) @$(RM) pkg/apiserver/ent $(WIN_IGNORE_ERR)
@$(RM) pkg/cwhub/hubdir $(WIN_IGNORE_ERR) @$(RM) pkg/cwhub/hubdir $(WIN_IGNORE_ERR)
@$(RM) pkg/cwhub/install $(WIN_IGNORE_ERR) @$(RM) pkg/cwhub/install $(WIN_IGNORE_ERR)
@$(RM) pkg/types/example.txt $(WIN_IGNORE_ERR) @$(RM) pkg/types/example.txt $(WIN_IGNORE_ERR)
# for the tests with localstack
export AWS_ENDPOINT_FORCE=http://localhost:4566 export AWS_ENDPOINT_FORCE=http://localhost:4566
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE export AWS_ACCESS_KEY_ID=test
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY export AWS_SECRET_ACCESS_KEY=test
testenv: testenv:
@echo 'NOTE: You need Docker, docker-compose and run "make localstack" in a separate shell ("make localstack-stop" to terminate it)' @echo 'NOTE: You need Docker, docker-compose and run "make localstack" in a separate shell ("make localstack-stop" to terminate it)'
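The exported variables above point the AWS SDK at localstack with dummy credentials instead of real AWS. Assuming the classic aws-sdk-go is in use, honoring such an endpoint override looks roughly like this sketch (the helper name, region and path-style flag are illustrative, not the code crowdsec actually uses):

package example

import (
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

// newTestS3 returns an S3 client that talks to localstack when
// AWS_ENDPOINT_FORCE is set, as done by the test targets above.
func newTestS3() (*s3.S3, error) {
	cfg := aws.NewConfig().WithRegion("us-east-1")

	if endpoint := os.Getenv("AWS_ENDPOINT_FORCE"); endpoint != "" {
		cfg = cfg.WithEndpoint(endpoint).WithS3ForcePathStyle(true)
	}

	sess, err := session.NewSession(cfg)
	if err != nil {
		return nil, err
	}

	return s3.New(sess), nil
}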
.PHONY: test .PHONY: test
test: testenv goversion test: testenv goversion ## Run unit tests with localstack
$(GOTEST) $(LD_OPTS) ./... $(GOTEST) $(LD_OPTS) ./...
.PHONY: go-acc .PHONY: go-acc
go-acc: testenv goversion go-acc: testenv goversion ## Run unit tests with localstack + coverage
go-acc ./... -o coverage.out --ignore database,notifications,protobufs,cwversion,cstest,models -- $(LD_OPTS) | \ go-acc ./... -o coverage.out --ignore database,notifications,protobufs,cwversion,cstest,models -- $(LD_OPTS)
sed 's/ *coverage:.*of statements in.*//'
# mock AWS services
.PHONY: localstack .PHONY: localstack
localstack: localstack: ## Run localstack containers (required for unit testing)
docker-compose -f test/localstack/docker-compose.yml up docker-compose -f test/localstack/docker-compose.yml up
.PHONY: localstack-stop .PHONY: localstack-stop
localstack-stop: localstack-stop: ## Stop localstack containers
docker-compose -f test/localstack/docker-compose.yml down docker-compose -f test/localstack/docker-compose.yml down
# build vendor.tgz to be distributed with the release
.PHONY: vendor .PHONY: vendor
vendor: vendor: vendor-remove ## CI only - vendor dependencies and archive them for packaging
@echo "Vendoring dependencies" $(GO) mod vendor
@$(GOCMD) mod vendor tar czf vendor.tgz vendor
@$(foreach plugin,$(PLUGINS), \ tar --create --auto-compress --file=$(RELDIR)-vendor.tar.xz vendor
$(MAKE) -C $(PLUGINS_DIR)/$(plugin) vendor $(MAKE_FLAGS); \
) # remove vendor directories and vendor.tgz
.PHONY: vendor-remove
vendor-remove: ## Remove vendor dependencies and archives
$(RM) vendor vendor.tgz *-vendor.tar.xz
.PHONY: package .PHONY: package
package: package:
@ -154,9 +260,9 @@ package:
@$(CP) $(CSCLI_FOLDER)/$(CSCLI_BIN) $(RELDIR)/cmd/crowdsec-cli @$(CP) $(CSCLI_FOLDER)/$(CSCLI_BIN) $(RELDIR)/cmd/crowdsec-cli
@$(foreach plugin,$(PLUGINS), \ @$(foreach plugin,$(PLUGINS), \
$(MKDIR) $(RELDIR)/$(PLUGINS_DIR)/$(plugin); \ $(MKDIR) $(RELDIR)/$(PLUGINS_DIR_PREFIX)$(plugin); \
$(CP) $(PLUGINS_DIR)/$(plugin)/notification-$(plugin)$(EXT) $(RELDIR)/$(PLUGINS_DIR)/$(plugin); \ $(CP) $(PLUGINS_DIR_PREFIX)$(plugin)/notification-$(plugin)$(EXT) $(RELDIR)/$(PLUGINS_DIR_PREFIX)$(plugin); \
$(CP) $(PLUGINS_DIR)/$(plugin)/$(plugin).yaml $(RELDIR)/$(PLUGINS_DIR)/$(plugin)/; \ $(CP) $(PLUGINS_DIR_PREFIX)$(plugin)/$(plugin).yaml $(RELDIR)/$(PLUGINS_DIR_PREFIX)$(plugin)/; \
) )
@$(CPR) ./config $(RELDIR) @$(CPR) ./config $(RELDIR)
@ -175,14 +281,14 @@ else
endif endif
.PHONY: release .PHONY: release
release: check_release build package release: check_release build package ## Build a release tarball
.PHONY: windows_installer .PHONY: windows_installer
windows_installer: build windows_installer: build ## Windows - build the installer
@.\make_installer.ps1 -version $(BUILD_VERSION) @.\make_installer.ps1 -version $(BUILD_VERSION)
.PHONY: chocolatey .PHONY: chocolatey
chocolatey: windows_installer chocolatey: windows_installer ## Windows - build the chocolatey package
@.\make_chocolatey.ps1 -version $(BUILD_VERSION) @.\make_chocolatey.ps1 -version $(BUILD_VERSION)
# Include test/bats.mk only if it exists # Include test/bats.mk only if it exists
@ -195,3 +301,4 @@ include test/bats.mk
endif endif
include mk/goversion.mk include mk/goversion.mk
include mk/help.mk


@ -15,19 +15,13 @@ pool:
stages: stages:
- stage: Build - stage: Build
jobs: jobs:
- job: - job: Build
displayName: "Build" displayName: "Build"
steps: steps:
- task: DotNetCoreCLI@2
displayName: "Install SignClient"
inputs:
command: 'custom'
custom: 'tool'
arguments: 'install --global SignClient --version 1.3.155'
- task: GoTool@0 - task: GoTool@0
displayName: "Install Go 1.20" displayName: "Install Go"
inputs: inputs:
version: '1.20.4' version: '1.22.2'
- pwsh: | - pwsh: |
choco install -y make choco install -y make
@ -38,25 +32,15 @@ stages:
pwsh: true pwsh: true
#we are not calling make windows_installer because we want to sign the binaries before they are added to the MSI #we are not calling make windows_installer because we want to sign the binaries before they are added to the MSI
script: | script: |
make build make build BUILD_RE2_WASM=1
- task: AzureKeyVault@2
inputs:
azureSubscription: 'Azure subscription 1(8a93ab40-7e99-445e-ad47-0f6a3e2ef546)'
KeyVaultName: 'CodeSigningSecrets'
SecretsFilter: 'CodeSigningUser,CodeSigningPassword'
RunAsPreJob: false
- task: DownloadSecureFile@1
inputs:
secureFile: appsettings.json
- pwsh: |
SignClient.exe Sign --name "crowdsec-binaries" `
--input "**/*.exe" --config (Join-Path -Path $(Agent.TempDirectory) -ChildPath "appsettings.json") `
--user $(CodeSigningUser) --secret '$(CodeSigningPassword)'
displayName: "Sign Crowdsec binaries + plugins"
- pwsh: | - pwsh: |
$build_version=$env:BUILD_SOURCEBRANCHNAME $build_version=$env:BUILD_SOURCEBRANCHNAME
#Override the version if it's set in the pipeline
if ( ${env:USERBUILDVERSION} -ne "")
{
$build_version = ${env:USERBUILDVERSION}
}
if ($build_version.StartsWith("v")) if ($build_version.StartsWith("v"))
{ {
$build_version = $build_version.Substring(1) $build_version = $build_version.Substring(1)
@ -69,35 +53,112 @@ stages:
displayName: GetCrowdsecVersion displayName: GetCrowdsecVersion
name: GetCrowdsecVersion name: GetCrowdsecVersion
- pwsh: | - pwsh: |
.\make_installer.ps1 -version '$(GetCrowdsecVersion.BuildVersion)' Get-ChildItem -Path .\cmd -Directory | ForEach-Object {
$dirName = $_.Name
Get-ChildItem -Path .\cmd\$dirName -File -Filter '*.exe' | ForEach-Object {
$fileName = $_.Name
$destDir = Join-Path $(Build.ArtifactStagingDirectory) cmd\$dirName
New-Item -ItemType Directory -Path $destDir -Force
Copy-Item -Path .\cmd\$dirName\$fileName -Destination $destDir
}
}
displayName: "Copy binaries to staging directory"
- task: PublishPipelineArtifact@1
inputs:
targetPath: '$(Build.ArtifactStagingDirectory)'
artifact: 'unsigned_binaries'
displayName: "Upload binaries artifact"
- stage: Sign
dependsOn: Build
variables:
- group: 'FOSS Build Variables'
- name: BuildVersion
value: $[ stageDependencies.Build.Build.outputs['GetCrowdsecVersion.BuildVersion'] ]
condition: succeeded()
jobs:
- job: Sign
displayName: "Sign"
steps:
- download: current
artifact: unsigned_binaries
displayName: "Download binaries artifact"
- task: CopyFiles@2
inputs:
SourceFolder: '$(Pipeline.Workspace)/unsigned_binaries'
TargetFolder: '$(Build.SourcesDirectory)'
displayName: "Copy binaries to workspace"
- task: DotNetCoreCLI@2
displayName: "Install SignTool tool"
inputs:
command: 'custom'
custom: 'tool'
arguments: install --global sign --version 0.9.0-beta.23127.3
- task: AzureKeyVault@2
displayName: "Get signing parameters"
inputs:
azureSubscription: "Azure subscription"
KeyVaultName: "$(KeyVaultName)"
SecretsFilter: "TenantId,ClientId,ClientSecret,Certificate,KeyVaultUrl"
- pwsh: |
sign code azure-key-vault `
"**/*.exe" `
--base-directory "$(Build.SourcesDirectory)/cmd/" `
--publisher-name "CrowdSec" `
--description "CrowdSec" `
--description-url "https://github.com/crowdsecurity/crowdsec" `
--azure-key-vault-tenant-id "$(TenantId)" `
--azure-key-vault-client-id "$(ClientId)" `
--azure-key-vault-client-secret "$(ClientSecret)" `
--azure-key-vault-certificate "$(Certificate)" `
--azure-key-vault-url "$(KeyVaultUrl)"
displayName: "Sign crowdsec binaries"
- pwsh: |
.\make_installer.ps1 -version '$(BuildVersion)'
displayName: "Build Crowdsec MSI" displayName: "Build Crowdsec MSI"
name: BuildMSI name: BuildMSI
- pwsh: | - pwsh: |
.\make_chocolatey.ps1 -version '$(GetCrowdsecVersion.BuildVersion)' .\make_chocolatey.ps1 -version '$(BuildVersion)'
displayName: "Build Chocolatey nupkg" displayName: "Build Chocolatey nupkg"
- pwsh: | - pwsh: |
SignClient.exe Sign --name "crowdsec-msi" ` sign code azure-key-vault `
--input "*.msi" --config (Join-Path -Path $(Agent.TempDirectory) -ChildPath "appsettings.json") ` "*.msi" `
--user $(CodeSigningUser) --secret '$(CodeSigningPassword)' --base-directory "$(Build.SourcesDirectory)" `
displayName: "Sign Crowdsec MSI" --publisher-name "CrowdSec" `
--description "CrowdSec" `
- task: PublishBuildArtifacts@1 --description-url "https://github.com/crowdsecurity/crowdsec" `
--azure-key-vault-tenant-id "$(TenantId)" `
--azure-key-vault-client-id "$(ClientId)" `
--azure-key-vault-client-secret "$(ClientSecret)" `
--azure-key-vault-certificate "$(Certificate)" `
--azure-key-vault-url "$(KeyVaultUrl)"
displayName: "Sign MSI package"
- pwsh: |
sign code azure-key-vault `
"*.nupkg" `
--base-directory "$(Build.SourcesDirectory)" `
--publisher-name "CrowdSec" `
--description "CrowdSec" `
--description-url "https://github.com/crowdsecurity/crowdsec" `
--azure-key-vault-tenant-id "$(TenantId)" `
--azure-key-vault-client-id "$(ClientId)" `
--azure-key-vault-client-secret "$(ClientSecret)" `
--azure-key-vault-certificate "$(Certificate)" `
--azure-key-vault-url "$(KeyVaultUrl)"
displayName: "Sign nuget package"
- task: PublishPipelineArtifact@1
inputs: inputs:
PathtoPublish: '$(Build.Repository.LocalPath)\\crowdsec_$(GetCrowdsecVersion.BuildVersion).msi' targetPath: '$(Build.SourcesDirectory)/crowdsec_$(BuildVersion).msi'
ArtifactName: 'crowdsec.msi' artifact: 'signed_msi_package'
publishLocation: 'Container' displayName: "Upload signed MSI artifact"
displayName: "Upload MSI artifact" - task: PublishPipelineArtifact@1
- task: PublishBuildArtifacts@1
inputs: inputs:
PathtoPublish: '$(Build.Repository.LocalPath)\\windows\\Chocolatey\\crowdsec\\crowdsec.$(GetCrowdsecVersion.BuildVersion).nupkg' targetPath: '$(Build.SourcesDirectory)/crowdsec.$(BuildVersion).nupkg'
ArtifactName: 'crowdsec.nupkg' artifact: 'signed_nuget_package'
publishLocation: 'Container' displayName: "Upload signed nuget artifact"
displayName: "Upload nupkg artifact"
- stage: Publish - stage: Publish
dependsOn: Build dependsOn: Sign
jobs: jobs:
- deployment: "Publish" - deployment: "Publish"
displayName: "Publish to GitHub" displayName: "Publish to GitHub"
@ -119,8 +180,7 @@ stages:
assetUploadMode: 'replace' assetUploadMode: 'replace'
addChangeLog: false addChangeLog: false
isPreRelease: true #we force prerelease because the pipeline is invoked on tag creation, which happens when we do a prerelease isPreRelease: true #we force prerelease because the pipeline is invoked on tag creation, which happens when we do a prerelease
#the .. is an ugly hack, but I can't find the var that gives D:\a\1 ...
assets: | assets: |
$(Build.ArtifactStagingDirectory)\..\crowdsec.msi/*.msi $(Pipeline.Workspace)/signed_msi_package/*.msi
$(Build.ArtifactStagingDirectory)\..\crowdsec.nupkg/*.nupkg $(Pipeline.Workspace)/signed_nuget_package/*.nupkg
condition: ne(variables['GetLatestPrelease.LatestPreRelease'], '') condition: ne(variables['GetLatestPrelease.LatestPreRelease'], '')


@@ -4,10 +4,8 @@ ifeq ($(OS), Windows_NT)
 EXT = .exe
 endif

-# Go parameters
-GOCMD = go
-GOBUILD = $(GOCMD) build
-GOTEST = $(GOCMD) test
+GO = go
+GOBUILD = $(GO) build

 BINARY_NAME = cscli$(EXT)
 PREFIX ?= "/"
@@ -17,7 +15,7 @@ BIN_PREFIX = $(PREFIX)"/usr/local/bin/"
 all: clean build

 build: clean
-	$(GOBUILD) $(LD_OPTS) $(BUILD_VENDOR_FLAGS) -o $(BINARY_NAME)
+	$(GOBUILD) $(LD_OPTS) -o $(BINARY_NAME)

 .PHONY: install
 install: install-conf install-bin


@@ -4,6 +4,7 @@ import (
 	"context"
 	"encoding/csv"
 	"encoding/json"
+	"errors"
 	"fmt"
 	"net/url"
 	"os"
@@ -11,17 +12,16 @@ import (
 	"strconv"
 	"strings"
 	"text/template"
-	"time"

 	"github.com/fatih/color"
 	"github.com/go-openapi/strfmt"
-	"github.com/pkg/errors"
 	log "github.com/sirupsen/logrus"
 	"github.com/spf13/cobra"
-	"gopkg.in/yaml.v2"
+	"gopkg.in/yaml.v3"

-	"github.com/crowdsecurity/go-cs-lib/pkg/version"
+	"github.com/crowdsecurity/go-cs-lib/version"

+	"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
 	"github.com/crowdsecurity/crowdsec/pkg/apiclient"
 	"github.com/crowdsecurity/crowdsec/pkg/database"
 	"github.com/crowdsecurity/crowdsec/pkg/models"
@ -30,83 +30,46 @@ import (
func DecisionsFromAlert(alert *models.Alert) string { func DecisionsFromAlert(alert *models.Alert) string {
ret := "" ret := ""
var decMap = make(map[string]int) decMap := make(map[string]int)
for _, decision := range alert.Decisions { for _, decision := range alert.Decisions {
k := *decision.Type k := *decision.Type
if *decision.Simulated { if *decision.Simulated {
k = fmt.Sprintf("(simul)%s", k) k = fmt.Sprintf("(simul)%s", k)
} }
v := decMap[k] v := decMap[k]
decMap[k] = v + 1 decMap[k] = v + 1
} }
for k, v := range decMap { for k, v := range decMap {
if len(ret) > 0 { if len(ret) > 0 {
ret += " " ret += " "
} }
ret += fmt.Sprintf("%s:%d", k, v) ret += fmt.Sprintf("%s:%d", k, v)
} }
return ret return ret
} }
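Not part of the diff: a quick self-contained sketch of what `DecisionsFromAlert` produces. The `decision` struct and `summarize` function below are simplified stand-ins (plain values instead of the `*string`/`*bool` fields of `models.Decision`), but the aggregation logic mirrors the function above.

```go
package main

import "fmt"

// decision mirrors the two fields DecisionsFromAlert actually reads.
type decision struct {
	Type      string
	Simulated bool
}

// summarize reproduces the aggregation: count decisions per type,
// prefixing simulated ones with "(simul)".
func summarize(decisions []decision) string {
	decMap := make(map[string]int)

	for _, d := range decisions {
		k := d.Type
		if d.Simulated {
			k = "(simul)" + k
		}
		decMap[k]++
	}

	ret := ""
	for k, v := range decMap {
		if ret != "" {
			ret += " "
		}
		ret += fmt.Sprintf("%s:%d", k, v)
	}

	return ret
}

func main() {
	// two real bans and one simulated captcha
	fmt.Println(summarize([]decision{
		{Type: "ban"}, {Type: "ban"}, {Type: "captcha", Simulated: true},
	}))
}
```

Running it prints something like `ban:2 (simul)captcha:1` (map iteration order may vary).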
func DateFromAlert(alert *models.Alert) string { func (cli *cliAlerts) alertsToTable(alerts *models.GetAlertsResponse, printMachine bool) error {
ts, err := time.Parse(time.RFC3339, alert.CreatedAt) switch cli.cfg().Cscli.Output {
if err != nil { case "raw":
log.Infof("while parsing %s with %s : %s", alert.CreatedAt, time.RFC3339, err)
return alert.CreatedAt
}
return ts.Format(time.RFC822)
}
func SourceFromAlert(alert *models.Alert) string {
//more than one item, just number and scope
if len(alert.Decisions) > 1 {
return fmt.Sprintf("%d %ss (%s)", len(alert.Decisions), *alert.Decisions[0].Scope, *alert.Decisions[0].Origin)
}
//fallback on single decision information
if len(alert.Decisions) == 1 {
return fmt.Sprintf("%s:%s", *alert.Decisions[0].Scope, *alert.Decisions[0].Value)
}
//try to compose a human friendly version
if *alert.Source.Value != "" && *alert.Source.Scope != "" {
scope := ""
scope = fmt.Sprintf("%s:%s", *alert.Source.Scope, *alert.Source.Value)
extra := ""
if alert.Source.Cn != "" {
extra = alert.Source.Cn
}
if alert.Source.AsNumber != "" {
extra += fmt.Sprintf("/%s", alert.Source.AsNumber)
}
if alert.Source.AsName != "" {
extra += fmt.Sprintf("/%s", alert.Source.AsName)
}
if extra != "" {
scope += " (" + extra + ")"
}
return scope
}
return ""
}
func AlertsToTable(alerts *models.GetAlertsResponse, printMachine bool) error {
if csConfig.Cscli.Output == "raw" {
csvwriter := csv.NewWriter(os.Stdout) csvwriter := csv.NewWriter(os.Stdout)
header := []string{"id", "scope", "value", "reason", "country", "as", "decisions", "created_at"} header := []string{"id", "scope", "value", "reason", "country", "as", "decisions", "created_at"}
if printMachine { if printMachine {
header = append(header, "machine") header = append(header, "machine")
} }
err := csvwriter.Write(header)
if err != nil { if err := csvwriter.Write(header); err != nil {
return err return err
} }
for _, alertItem := range *alerts { for _, alertItem := range *alerts {
row := []string{ row := []string{
fmt.Sprintf("%d", alertItem.ID), strconv.FormatInt(alertItem.ID, 10),
*alertItem.Source.Scope, *alertItem.Source.Scope,
*alertItem.Source.Value, *alertItem.Source.Value,
*alertItem.Scenario, *alertItem.Scenario,
@@ -118,22 +81,32 @@ func AlertsToTable(alerts *models.GetAlertsResponse, printMachine bool) error {
if printMachine { if printMachine {
row = append(row, alertItem.MachineID) row = append(row, alertItem.MachineID)
} }
err := csvwriter.Write(row)
if err != nil { if err := csvwriter.Write(row); err != nil {
return err return err
} }
} }
csvwriter.Flush() csvwriter.Flush()
} else if csConfig.Cscli.Output == "json" { case "json":
if *alerts == nil {
// avoid returning "null" in json
// could be cleaner if we used slice of alerts directly
fmt.Println("[]")
return nil
}
x, _ := json.MarshalIndent(alerts, "", " ") x, _ := json.MarshalIndent(alerts, "", " ")
fmt.Printf("%s", string(x)) fmt.Print(string(x))
} else if csConfig.Cscli.Output == "human" { case "human":
if len(*alerts) == 0 { if len(*alerts) == 0 {
fmt.Println("No active alerts") fmt.Println("No active alerts")
return nil return nil
} }
alertsTable(color.Output, alerts, printMachine) alertsTable(color.Output, alerts, printMachine)
} }
return nil return nil
} }
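The `*alerts == nil` guard added above exists because `encoding/json` marshals a nil slice as `null` rather than `[]`. A minimal illustration, not taken from the diff:

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var nilAlerts []string
	emptyAlerts := []string{}

	a, _ := json.Marshal(nilAlerts)
	b, _ := json.Marshal(emptyAlerts)

	fmt.Println(string(a)) // null -> what the guard avoids printing
	fmt.Println(string(b)) // []   -> what the command prints instead
}
```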
@@ -155,93 +128,109 @@ var alertTemplate = `
` `
func (cli *cliAlerts) displayOneAlert(alert *models.Alert, withDetail bool) error {
tmpl, err := template.New("alert").Parse(alertTemplate)
if err != nil {
return err
}
func DisplayOneAlert(alert *models.Alert, withDetail bool) error { if err = tmpl.Execute(os.Stdout, alert); err != nil {
if csConfig.Cscli.Output == "human" { return err
tmpl, err := template.New("alert").Parse(alertTemplate) }
if err != nil {
return err
}
err = tmpl.Execute(os.Stdout, alert)
if err != nil {
return err
}
alertDecisionsTable(color.Output, alert) alertDecisionsTable(color.Output, alert)
if len(alert.Meta) > 0 { if len(alert.Meta) > 0 {
fmt.Printf("\n - Context :\n") fmt.Printf("\n - Context :\n")
sort.Slice(alert.Meta, func(i, j int) bool { sort.Slice(alert.Meta, func(i, j int) bool {
return alert.Meta[i].Key < alert.Meta[j].Key return alert.Meta[i].Key < alert.Meta[j].Key
}) })
table := newTable(color.Output)
table.SetRowLines(false) table := newTable(color.Output)
table.SetHeaders("Key", "Value") table.SetRowLines(false)
for _, meta := range alert.Meta { table.SetHeaders("Key", "Value")
var valSlice []string
if err := json.Unmarshal([]byte(meta.Value), &valSlice); err != nil { for _, meta := range alert.Meta {
return fmt.Errorf("unknown context value type '%s' : %s", meta.Value, err) var valSlice []string
} if err := json.Unmarshal([]byte(meta.Value), &valSlice); err != nil {
for _, value := range valSlice { return fmt.Errorf("unknown context value type '%s': %w", meta.Value, err)
table.AddRow( }
meta.Key,
value, for _, value := range valSlice {
) table.AddRow(
} meta.Key,
value,
)
} }
table.Render()
} }
if withDetail { table.Render()
fmt.Printf("\n - Events :\n") }
for _, event := range alert.Events {
alertEventTable(color.Output, event) if withDetail {
} fmt.Printf("\n - Events :\n")
for _, event := range alert.Events {
alertEventTable(color.Output, event)
} }
} }
return nil return nil
} }
func NewAlertsCmd() *cobra.Command { type cliAlerts struct {
var cmdAlerts = &cobra.Command{ client *apiclient.ApiClient
cfg configGetter
}
func NewCLIAlerts(getconfig configGetter) *cliAlerts {
return &cliAlerts{
cfg: getconfig,
}
}
func (cli *cliAlerts) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "alerts [action]", Use: "alerts [action]",
Short: "Manage alerts", Short: "Manage alerts",
Args: cobra.MinimumNArgs(1), Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error { Aliases: []string{"alert"},
var err error PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
if err := csConfig.LoadAPIClient(); err != nil { cfg := cli.cfg()
return errors.Wrap(err, "loading api client") if err := cfg.LoadAPIClient(); err != nil {
return fmt.Errorf("loading api client: %w", err)
} }
apiURL, err := url.Parse(csConfig.API.Client.Credentials.URL) apiURL, err := url.Parse(cfg.API.Client.Credentials.URL)
if err != nil { if err != nil {
return errors.Wrapf(err, "parsing api url %s", apiURL) return fmt.Errorf("parsing api url %s: %w", apiURL, err)
} }
Client, err = apiclient.NewClient(&apiclient.Config{
MachineID: csConfig.API.Client.Credentials.Login, cli.client, err = apiclient.NewClient(&apiclient.Config{
Password: strfmt.Password(csConfig.API.Client.Credentials.Password), MachineID: cfg.API.Client.Credentials.Login,
Password: strfmt.Password(cfg.API.Client.Credentials.Password),
UserAgent: fmt.Sprintf("crowdsec/%s", version.String()), UserAgent: fmt.Sprintf("crowdsec/%s", version.String()),
URL: apiURL, URL: apiURL,
VersionPrefix: "v1", VersionPrefix: "v1",
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "new api client") return fmt.Errorf("new api client: %w", err)
} }
return nil return nil
}, },
} }
cmdAlerts.AddCommand(NewAlertsListCmd()) cmd.AddCommand(cli.NewListCmd())
cmdAlerts.AddCommand(NewAlertsInspectCmd()) cmd.AddCommand(cli.NewInspectCmd())
cmdAlerts.AddCommand(NewAlertsFlushCmd()) cmd.AddCommand(cli.NewFlushCmd())
cmdAlerts.AddCommand(NewAlertsDeleteCmd()) cmd.AddCommand(cli.NewDeleteCmd())
return cmdAlerts return cmd
} }
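The refactor above replaces the package-level `csConfig`/`Client` globals with a `cliAlerts` struct holding a `configGetter` closure, and the API client is now built lazily in `PersistentPreRunE`. Below is a self-contained toy version of the same pattern; the `config` and `cliDemo` names are invented for illustration and are not cscli code:

```go
package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// config stands in for the real configuration struct; configGetter matches
// the accessor pattern used by the refactored commands.
type config struct{ Output string }

type configGetter func() *config

type cliDemo struct{ cfg configGetter }

func newCLIDemo(cfg configGetter) *cliDemo { return &cliDemo{cfg: cfg} }

func (cli *cliDemo) NewCommand() *cobra.Command {
	return &cobra.Command{
		Use: "demo",
		RunE: func(_ *cobra.Command, _ []string) error {
			// The config is only resolved when the command actually runs.
			fmt.Println("output format:", cli.cfg().Output)
			return nil
		},
	}
}

func main() {
	cfg := &config{Output: "human"} // normally loaded from config.yaml
	root := &cobra.Command{Use: "cscli"}
	root.AddCommand(newCLIDemo(func() *config { return cfg }).NewCommand())
	_ = root.Execute()
}
```

In cscli itself the getter presumably hands back the loaded `*csconfig.Config`, so commands no longer depend on the initialization order of globals and are easier to test in isolation.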
func NewAlertsListCmd() *cobra.Command { func (cli *cliAlerts) NewListCmd() *cobra.Command {
var alertListFilter = apiclient.AlertsListOpts{ alertListFilter := apiclient.AlertsListOpts{
ScopeEquals: new(string), ScopeEquals: new(string),
ValueEquals: new(string), ValueEquals: new(string),
ScenarioEquals: new(string), ScenarioEquals: new(string),
@@ -253,21 +242,24 @@ func NewAlertsListCmd() *cobra.Command {
IncludeCAPI: new(bool), IncludeCAPI: new(bool),
OriginEquals: new(string), OriginEquals: new(string),
} }
var limit = new(int)
limit := new(int)
contained := new(bool) contained := new(bool)
var printMachine bool var printMachine bool
var cmdAlertsList = &cobra.Command{
cmd := &cobra.Command{
Use: "list [filters]", Use: "list [filters]",
Short: "List alerts", Short: "List alerts",
Example: `cscli alerts list Example: `cscli alerts list
cscli alerts list --ip 1.2.3.4 cscli alerts list --ip 1.2.3.4
cscli alerts list --range 1.2.3.0/24 cscli alerts list --range 1.2.3.0/24
cscli alerts list --origin lists
cscli alerts list -s crowdsecurity/ssh-bf cscli alerts list -s crowdsecurity/ssh-bf
cscli alerts list --type ban`, cscli alerts list --type ban`,
Long: `List alerts with optional filters`,
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error { RunE: func(cmd *cobra.Command, _ []string) error {
var err error
if err := manageCliDecisionAlerts(alertListFilter.IPEquals, alertListFilter.RangeEquals, if err := manageCliDecisionAlerts(alertListFilter.IPEquals, alertListFilter.RangeEquals,
alertListFilter.ScopeEquals, alertListFilter.ValueEquals); err != nil { alertListFilter.ScopeEquals, alertListFilter.ValueEquals); err != nil {
printHelp(cmd) printHelp(cmd)
@@ -333,50 +325,56 @@ cscli alerts list --type ban`,
alertListFilter.Contains = new(bool) alertListFilter.Contains = new(bool)
} }
alerts, _, err := Client.Alerts.List(context.Background(), alertListFilter) alerts, _, err := cli.client.Alerts.List(context.Background(), alertListFilter)
if err != nil { if err != nil {
return fmt.Errorf("unable to list alerts: %v", err) return fmt.Errorf("unable to list alerts: %w", err)
} }
err = AlertsToTable(alerts, printMachine) if err = cli.alertsToTable(alerts, printMachine); err != nil {
if err != nil { return fmt.Errorf("unable to list alerts: %w", err)
return fmt.Errorf("unable to list alerts: %v", err)
} }
return nil return nil
}, },
} }
cmdAlertsList.Flags().SortFlags = false
cmdAlertsList.Flags().BoolVarP(alertListFilter.IncludeCAPI, "all", "a", false, "Include decisions from Central API")
cmdAlertsList.Flags().StringVar(alertListFilter.Until, "until", "", "restrict to alerts older than until (ie. 4h, 30d)")
cmdAlertsList.Flags().StringVar(alertListFilter.Since, "since", "", "restrict to alerts newer than since (ie. 4h, 30d)")
cmdAlertsList.Flags().StringVarP(alertListFilter.IPEquals, "ip", "i", "", "restrict to alerts from this source ip (shorthand for --scope ip --value <IP>)")
cmdAlertsList.Flags().StringVarP(alertListFilter.ScenarioEquals, "scenario", "s", "", "the scenario (ie. crowdsecurity/ssh-bf)")
cmdAlertsList.Flags().StringVarP(alertListFilter.RangeEquals, "range", "r", "", "restrict to alerts from this range (shorthand for --scope range --value <RANGE/X>)")
cmdAlertsList.Flags().StringVar(alertListFilter.TypeEquals, "type", "", "restrict to alerts with given decision type (ie. ban, captcha)")
cmdAlertsList.Flags().StringVar(alertListFilter.ScopeEquals, "scope", "", "restrict to alerts of this scope (ie. ip,range)")
cmdAlertsList.Flags().StringVarP(alertListFilter.ValueEquals, "value", "v", "", "the value to match for in the specified scope")
cmdAlertsList.Flags().StringVar(alertListFilter.OriginEquals, "origin", "", fmt.Sprintf("the value to match for the specified origin (%s ...)", strings.Join(types.GetOrigins(), ",")))
cmdAlertsList.Flags().BoolVar(contained, "contained", false, "query decisions contained by range")
cmdAlertsList.Flags().BoolVarP(&printMachine, "machine", "m", false, "print machines that sent alerts")
cmdAlertsList.Flags().IntVarP(limit, "limit", "l", 50, "limit size of alerts list table (0 to view all alerts)")
return cmdAlertsList flags := cmd.Flags()
flags.SortFlags = false
flags.BoolVarP(alertListFilter.IncludeCAPI, "all", "a", false, "Include decisions from Central API")
flags.StringVar(alertListFilter.Until, "until", "", "restrict to alerts older than until (ie. 4h, 30d)")
flags.StringVar(alertListFilter.Since, "since", "", "restrict to alerts newer than since (ie. 4h, 30d)")
flags.StringVarP(alertListFilter.IPEquals, "ip", "i", "", "restrict to alerts from this source ip (shorthand for --scope ip --value <IP>)")
flags.StringVarP(alertListFilter.ScenarioEquals, "scenario", "s", "", "the scenario (ie. crowdsecurity/ssh-bf)")
flags.StringVarP(alertListFilter.RangeEquals, "range", "r", "", "restrict to alerts from this range (shorthand for --scope range --value <RANGE/X>)")
flags.StringVar(alertListFilter.TypeEquals, "type", "", "restrict to alerts with given decision type (ie. ban, captcha)")
flags.StringVar(alertListFilter.ScopeEquals, "scope", "", "restrict to alerts of this scope (ie. ip,range)")
flags.StringVarP(alertListFilter.ValueEquals, "value", "v", "", "the value to match for in the specified scope")
flags.StringVar(alertListFilter.OriginEquals, "origin", "", fmt.Sprintf("the value to match for the specified origin (%s ...)", strings.Join(types.GetOrigins(), ",")))
flags.BoolVar(contained, "contained", false, "query decisions contained by range")
flags.BoolVarP(&printMachine, "machine", "m", false, "print machines that sent alerts")
flags.IntVarP(limit, "limit", "l", 50, "limit size of alerts list table (0 to view all alerts)")
return cmd
} }
func NewAlertsDeleteCmd() *cobra.Command { func (cli *cliAlerts) NewDeleteCmd() *cobra.Command {
var ActiveDecision *bool var (
var AlertDeleteAll bool ActiveDecision *bool
var delAlertByID string AlertDeleteAll bool
contained := new(bool) delAlertByID string
var alertDeleteFilter = apiclient.AlertsDeleteOpts{ )
alertDeleteFilter := apiclient.AlertsDeleteOpts{
ScopeEquals: new(string), ScopeEquals: new(string),
ValueEquals: new(string), ValueEquals: new(string),
ScenarioEquals: new(string), ScenarioEquals: new(string),
IPEquals: new(string), IPEquals: new(string),
RangeEquals: new(string), RangeEquals: new(string),
} }
var cmdAlertsDelete = &cobra.Command{
contained := new(bool)
cmd := &cobra.Command{
Use: "delete [filters] [--all]", Use: "delete [filters] [--all]",
Short: `Delete alerts Short: `Delete alerts
/!\ This command can be use only on the same machine than the local API.`, /!\ This command can be use only on the same machine than the local API.`,
@@ -386,7 +384,7 @@ cscli alerts delete -s crowdsecurity/ssh-bf"`,
DisableAutoGenTag: true, DisableAutoGenTag: true,
Aliases: []string{"remove"}, Aliases: []string{"remove"},
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
PreRunE: func(cmd *cobra.Command, args []string) error { PreRunE: func(cmd *cobra.Command, _ []string) error {
if AlertDeleteAll { if AlertDeleteAll {
return nil return nil
} }
@@ -394,16 +392,16 @@ cscli alerts delete -s crowdsecurity/ssh-bf"`,
*alertDeleteFilter.ScenarioEquals == "" && *alertDeleteFilter.IPEquals == "" && *alertDeleteFilter.ScenarioEquals == "" && *alertDeleteFilter.IPEquals == "" &&
*alertDeleteFilter.RangeEquals == "" && delAlertByID == "" { *alertDeleteFilter.RangeEquals == "" && delAlertByID == "" {
_ = cmd.Usage() _ = cmd.Usage()
return fmt.Errorf("at least one filter or --all must be specified") return errors.New("at least one filter or --all must be specified")
} }
return nil return nil
}, },
RunE: func(cmd *cobra.Command, args []string) error { RunE: func(cmd *cobra.Command, _ []string) error {
var err error var err error
if !AlertDeleteAll { if !AlertDeleteAll {
if err := manageCliDecisionAlerts(alertDeleteFilter.IPEquals, alertDeleteFilter.RangeEquals, if err = manageCliDecisionAlerts(alertDeleteFilter.IPEquals, alertDeleteFilter.RangeEquals,
alertDeleteFilter.ScopeEquals, alertDeleteFilter.ValueEquals); err != nil { alertDeleteFilter.ScopeEquals, alertDeleteFilter.ValueEquals); err != nil {
printHelp(cmd) printHelp(cmd)
return err return err
@@ -439,14 +437,14 @@ cscli alerts delete -s crowdsecurity/ssh-bf"`,
var alerts *models.DeleteAlertsResponse var alerts *models.DeleteAlertsResponse
if delAlertByID == "" { if delAlertByID == "" {
alerts, _, err = Client.Alerts.Delete(context.Background(), alertDeleteFilter) alerts, _, err = cli.client.Alerts.Delete(context.Background(), alertDeleteFilter)
if err != nil { if err != nil {
return fmt.Errorf("unable to delete alerts : %v", err) return fmt.Errorf("unable to delete alerts: %w", err)
} }
} else { } else {
alerts, _, err = Client.Alerts.DeleteOne(context.Background(), delAlertByID) alerts, _, err = cli.client.Alerts.DeleteOne(context.Background(), delAlertByID)
if err != nil { if err != nil {
return fmt.Errorf("unable to delete alert: %v", err) return fmt.Errorf("unable to delete alert: %w", err)
} }
} }
log.Infof("%s alert(s) deleted", alerts.NbDeleted) log.Infof("%s alert(s) deleted", alerts.NbDeleted)
@@ -454,90 +452,99 @@ cscli alerts delete -s crowdsecurity/ssh-bf"`,
return nil return nil
}, },
} }
cmdAlertsDelete.Flags().SortFlags = false
cmdAlertsDelete.Flags().StringVar(alertDeleteFilter.ScopeEquals, "scope", "", "the scope (ie. ip,range)") flags := cmd.Flags()
cmdAlertsDelete.Flags().StringVarP(alertDeleteFilter.ValueEquals, "value", "v", "", "the value to match for in the specified scope") flags.SortFlags = false
cmdAlertsDelete.Flags().StringVarP(alertDeleteFilter.ScenarioEquals, "scenario", "s", "", "the scenario (ie. crowdsecurity/ssh-bf)") flags.StringVar(alertDeleteFilter.ScopeEquals, "scope", "", "the scope (ie. ip,range)")
cmdAlertsDelete.Flags().StringVarP(alertDeleteFilter.IPEquals, "ip", "i", "", "Source ip (shorthand for --scope ip --value <IP>)") flags.StringVarP(alertDeleteFilter.ValueEquals, "value", "v", "", "the value to match for in the specified scope")
cmdAlertsDelete.Flags().StringVarP(alertDeleteFilter.RangeEquals, "range", "r", "", "Range source ip (shorthand for --scope range --value <RANGE>)") flags.StringVarP(alertDeleteFilter.ScenarioEquals, "scenario", "s", "", "the scenario (ie. crowdsecurity/ssh-bf)")
cmdAlertsDelete.Flags().StringVar(&delAlertByID, "id", "", "alert ID") flags.StringVarP(alertDeleteFilter.IPEquals, "ip", "i", "", "Source ip (shorthand for --scope ip --value <IP>)")
cmdAlertsDelete.Flags().BoolVarP(&AlertDeleteAll, "all", "a", false, "delete all alerts") flags.StringVarP(alertDeleteFilter.RangeEquals, "range", "r", "", "Range source ip (shorthand for --scope range --value <RANGE>)")
cmdAlertsDelete.Flags().BoolVar(contained, "contained", false, "query decisions contained by range") flags.StringVar(&delAlertByID, "id", "", "alert ID")
return cmdAlertsDelete flags.BoolVarP(&AlertDeleteAll, "all", "a", false, "delete all alerts")
flags.BoolVar(contained, "contained", false, "query decisions contained by range")
return cmd
} }
func NewAlertsInspectCmd() *cobra.Command { func (cli *cliAlerts) NewInspectCmd() *cobra.Command {
var details bool var details bool
var cmdAlertsInspect = &cobra.Command{
cmd := &cobra.Command{
Use: `inspect "alert_id"`, Use: `inspect "alert_id"`,
Short: `Show info about an alert`, Short: `Show info about an alert`,
Example: `cscli alerts inspect 123`, Example: `cscli alerts inspect 123`,
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error { RunE: func(cmd *cobra.Command, args []string) error {
cfg := cli.cfg()
if len(args) == 0 { if len(args) == 0 {
printHelp(cmd) printHelp(cmd)
return fmt.Errorf("missing alert_id") return errors.New("missing alert_id")
} }
for _, alertID := range args { for _, alertID := range args {
id, err := strconv.Atoi(alertID) id, err := strconv.Atoi(alertID)
if err != nil { if err != nil {
return fmt.Errorf("bad alert id %s", alertID) return fmt.Errorf("bad alert id %s", alertID)
} }
alert, _, err := Client.Alerts.GetByID(context.Background(), id) alert, _, err := cli.client.Alerts.GetByID(context.Background(), id)
if err != nil { if err != nil {
return fmt.Errorf("can't find alert with id %s: %s", alertID, err) return fmt.Errorf("can't find alert with id %s: %w", alertID, err)
} }
switch csConfig.Cscli.Output { switch cfg.Cscli.Output {
case "human": case "human":
if err := DisplayOneAlert(alert, details); err != nil { if err := cli.displayOneAlert(alert, details); err != nil {
continue continue
} }
case "json": case "json":
data, err := json.MarshalIndent(alert, "", " ") data, err := json.MarshalIndent(alert, "", " ")
if err != nil { if err != nil {
return fmt.Errorf("unable to marshal alert with id %s: %s", alertID, err) return fmt.Errorf("unable to marshal alert with id %s: %w", alertID, err)
} }
fmt.Printf("%s\n", string(data)) fmt.Printf("%s\n", string(data))
case "raw": case "raw":
data, err := yaml.Marshal(alert) data, err := yaml.Marshal(alert)
if err != nil { if err != nil {
return fmt.Errorf("unable to marshal alert with id %s: %s", alertID, err) return fmt.Errorf("unable to marshal alert with id %s: %w", alertID, err)
} }
fmt.Printf("%s\n", string(data)) fmt.Println(string(data))
} }
} }
return nil return nil
}, },
} }
cmdAlertsInspect.Flags().SortFlags = false
cmdAlertsInspect.Flags().BoolVarP(&details, "details", "d", false, "show alerts with events")
return cmdAlertsInspect cmd.Flags().SortFlags = false
cmd.Flags().BoolVarP(&details, "details", "d", false, "show alerts with events")
return cmd
} }
func NewAlertsFlushCmd() *cobra.Command { func (cli *cliAlerts) NewFlushCmd() *cobra.Command {
var maxItems int var (
var maxAge string maxItems int
var cmdAlertsFlush = &cobra.Command{ maxAge string
)
cmd := &cobra.Command{
Use: `flush`, Use: `flush`,
Short: `Flush alerts Short: `Flush alerts
/!\ This command can be used only on the same machine than the local API`, /!\ This command can be used only on the same machine than the local API`,
Example: `cscli alerts flush --max-items 1000 --max-age 7d`, Example: `cscli alerts flush --max-items 1000 --max-age 7d`,
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error { RunE: func(_ *cobra.Command, _ []string) error {
var err error cfg := cli.cfg()
if err := csConfig.LoadAPIServer(); err != nil || csConfig.DisableAPI { if err := require.LAPI(cfg); err != nil {
return fmt.Errorf("local API is disabled, please run this command on the local API machine") return err
} }
dbClient, err = database.NewClient(csConfig.DbConfig) db, err := database.NewClient(cfg.DbConfig)
if err != nil { if err != nil {
return fmt.Errorf("unable to create new database client: %s", err) return fmt.Errorf("unable to create new database client: %w", err)
} }
log.Info("Flushing alerts. !! This may take a long time !!") log.Info("Flushing alerts. !! This may take a long time !!")
err = dbClient.FlushAlerts(maxAge, maxItems) err = db.FlushAlerts(maxAge, maxItems)
if err != nil { if err != nil {
return fmt.Errorf("unable to flush alerts: %s", err) return fmt.Errorf("unable to flush alerts: %w", err)
} }
log.Info("Alerts flushed") log.Info("Alerts flushed")
@@ -545,9 +552,9 @@ func NewAlertsFlushCmd() *cobra.Command {
}, },
} }
cmdAlertsFlush.Flags().SortFlags = false cmd.Flags().SortFlags = false
cmdAlertsFlush.Flags().IntVar(&maxItems, "max-items", 5000, "Maximum number of alert items to keep in the database") cmd.Flags().IntVar(&maxItems, "max-items", 5000, "Maximum number of alert items to keep in the database")
cmdAlertsFlush.Flags().StringVar(&maxAge, "max-age", "7d", "Maximum age of alert items to keep in the database") cmd.Flags().StringVar(&maxAge, "max-age", "7d", "Maximum age of alert items to keep in the database")
return cmdAlertsFlush return cmd
} }


@@ -3,217 +3,318 @@ package main
import ( import (
"encoding/csv" "encoding/csv"
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"io" "os"
"slices"
"strings" "strings"
"time" "time"
"github.com/AlecAivazis/survey/v2"
"github.com/fatih/color" "github.com/fatih/color"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"golang.org/x/exp/slices"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
middlewares "github.com/crowdsecurity/crowdsec/pkg/apiserver/middlewares/v1" middlewares "github.com/crowdsecurity/crowdsec/pkg/apiserver/middlewares/v1"
"github.com/crowdsecurity/crowdsec/pkg/database" "github.com/crowdsecurity/crowdsec/pkg/database"
"github.com/crowdsecurity/crowdsec/pkg/types" "github.com/crowdsecurity/crowdsec/pkg/types"
) )
func getBouncers(out io.Writer, dbClient *database.Client) error { func askYesNo(message string, defaultAnswer bool) (bool, error) {
bouncers, err := dbClient.ListBouncers() var answer bool
if err != nil {
return fmt.Errorf("unable to list bouncers: %s", err) prompt := &survey.Confirm{
Message: message,
Default: defaultAnswer,
} }
if csConfig.Cscli.Output == "human" {
getBouncersTable(out, bouncers) if err := survey.AskOne(prompt, &answer); err != nil {
} else if csConfig.Cscli.Output == "json" { return defaultAnswer, err
enc := json.NewEncoder(out)
enc.SetIndent("", " ")
if err := enc.Encode(bouncers); err != nil {
return fmt.Errorf("failed to unmarshal: %w", err)
}
return nil
} else if csConfig.Cscli.Output == "raw" {
csvwriter := csv.NewWriter(out)
err := csvwriter.Write([]string{"name", "ip", "revoked", "last_pull", "type", "version", "auth_type"})
if err != nil {
return fmt.Errorf("failed to write raw header: %w", err)
}
for _, b := range bouncers {
var revoked string
if !b.Revoked {
revoked = "validated"
} else {
revoked = "pending"
}
err := csvwriter.Write([]string{b.Name, b.IPAddress, revoked, b.LastPull.Format(time.RFC3339), b.Type, b.Version, b.AuthType})
if err != nil {
return fmt.Errorf("failed to write raw: %w", err)
}
}
csvwriter.Flush()
} }
return nil
return answer, nil
} }
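`askYesNo` is the confirmation helper used by the new `prune` subcommand. Its call pattern, shown standalone below: the helper body is copied from the diff, only `main` is added for illustration, and it relies solely on the `survey/v2` dependency already imported above.

```go
package main

import (
	"fmt"

	"github.com/AlecAivazis/survey/v2"
)

func askYesNo(message string, defaultAnswer bool) (bool, error) {
	var answer bool

	prompt := &survey.Confirm{
		Message: message,
		Default: defaultAnswer,
	}

	// AskOne errors on a non-interactive terminal; the default answer is
	// returned alongside the error so callers can decide how to handle it.
	if err := survey.AskOne(prompt, &answer); err != nil {
		return defaultAnswer, err
	}

	return answer, nil
}

func main() {
	yes, err := askYesNo("Remove the listed bouncers?", false)
	if err != nil {
		fmt.Println("prompt failed:", err)
		return
	}
	if !yes {
		fmt.Println("aborted, no changes made")
		return
	}
	fmt.Println("proceeding")
}
```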
func NewBouncersListCmd() *cobra.Command { type cliBouncers struct {
cmdBouncersList := &cobra.Command{ db *database.Client
Use: "list", cfg configGetter
Short: "List bouncers", }
Long: `List bouncers`,
Example: `cscli bouncers list`, func NewCLIBouncers(cfg configGetter) *cliBouncers {
Args: cobra.ExactArgs(0), return &cliBouncers{
cfg: cfg,
}
}
func (cli *cliBouncers) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "bouncers [action]",
Short: "Manage bouncers [requires local API]",
Long: `To list/add/delete/prune bouncers.
Note: This command requires database direct access, so is intended to be run on Local API/master.
`,
Args: cobra.MinimumNArgs(1),
Aliases: []string{"bouncer"},
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, arg []string) error { PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
err := getBouncers(color.Output, dbClient) var err error
if err != nil {
return fmt.Errorf("unable to list bouncers: %s", err) cfg := cli.cfg()
if err = require.LAPI(cfg); err != nil {
return err
} }
cli.db, err = database.NewClient(cfg.DbConfig)
if err != nil {
return fmt.Errorf("can't connect to the database: %w", err)
}
return nil return nil
}, },
} }
return cmdBouncersList cmd.AddCommand(cli.newListCmd())
cmd.AddCommand(cli.newAddCmd())
cmd.AddCommand(cli.newDeleteCmd())
cmd.AddCommand(cli.newPruneCmd())
return cmd
} }
func runBouncersAdd(cmd *cobra.Command, args []string) error { func (cli *cliBouncers) list() error {
flags := cmd.Flags() out := color.Output
keyLength, err := flags.GetInt("length") bouncers, err := cli.db.ListBouncers()
if err != nil { if err != nil {
return err return fmt.Errorf("unable to list bouncers: %w", err)
} }
key, err := flags.GetString("key") switch cli.cfg().Cscli.Output {
if err != nil { case "human":
return err getBouncersTable(out, bouncers)
} case "json":
enc := json.NewEncoder(out)
enc.SetIndent("", " ")
keyName := args[0] if err := enc.Encode(bouncers); err != nil {
var apiKey string return fmt.Errorf("failed to marshal: %w", err)
if keyName == "" {
return fmt.Errorf("please provide a name for the api key")
}
apiKey = key
if key == "" {
apiKey, err = middlewares.GenerateAPIKey(keyLength)
}
if err != nil {
return fmt.Errorf("unable to generate api key: %s", err)
}
_, err = dbClient.CreateBouncer(keyName, "", middlewares.HashSHA512(apiKey), types.ApiKeyAuthType)
if err != nil {
return fmt.Errorf("unable to create bouncer: %s", err)
}
if csConfig.Cscli.Output == "human" {
fmt.Printf("Api key for '%s':\n\n", keyName)
fmt.Printf(" %s\n\n", apiKey)
fmt.Print("Please keep this key since you will not be able to retrieve it!\n")
} else if csConfig.Cscli.Output == "raw" {
fmt.Printf("%s", apiKey)
} else if csConfig.Cscli.Output == "json" {
j, err := json.Marshal(apiKey)
if err != nil {
return fmt.Errorf("unable to marshal api key")
} }
fmt.Printf("%s", string(j))
return nil
case "raw":
csvwriter := csv.NewWriter(out)
if err := csvwriter.Write([]string{"name", "ip", "revoked", "last_pull", "type", "version", "auth_type"}); err != nil {
return fmt.Errorf("failed to write raw header: %w", err)
}
for _, b := range bouncers {
valid := "validated"
if b.Revoked {
valid = "pending"
}
if err := csvwriter.Write([]string{b.Name, b.IPAddress, valid, b.LastPull.Format(time.RFC3339), b.Type, b.Version, b.AuthType}); err != nil {
return fmt.Errorf("failed to write raw: %w", err)
}
}
csvwriter.Flush()
} }
return nil return nil
} }
func NewBouncersAddCmd() *cobra.Command { func (cli *cliBouncers) newListCmd() *cobra.Command {
cmdBouncersAdd := &cobra.Command{ cmd := &cobra.Command{
Use: "add MyBouncerName [--length 16]", Use: "list",
Short: "add bouncer", Short: "list all bouncers within the database",
Long: `add bouncer`, Example: `cscli bouncers list`,
Example: `cscli bouncers add MyBouncerName Args: cobra.ExactArgs(0),
cscli bouncers add MyBouncerName -l 24
cscli bouncers add MyBouncerName -k <random-key>`,
Args: cobra.ExactArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: runBouncersAdd, RunE: func(_ *cobra.Command, _ []string) error {
return cli.list()
},
} }
flags := cmdBouncersAdd.Flags() return cmd
flags.IntP("length", "l", 16, "length of the api key")
flags.StringP("key", "k", "", "api key for the bouncer")
return cmdBouncersAdd
} }
func runBouncersDelete(cmd *cobra.Command, args []string) error { func (cli *cliBouncers) add(bouncerName string, key string) error {
for _, bouncerID := range args { var err error
err := dbClient.DeleteBouncer(bouncerID)
keyLength := 32
if key == "" {
key, err = middlewares.GenerateAPIKey(keyLength)
if err != nil { if err != nil {
return fmt.Errorf("unable to delete bouncer '%s': %s", bouncerID, err) return fmt.Errorf("unable to generate api key: %w", err)
} }
}
_, err = cli.db.CreateBouncer(bouncerName, "", middlewares.HashSHA512(key), types.ApiKeyAuthType)
if err != nil {
return fmt.Errorf("unable to create bouncer: %w", err)
}
switch cli.cfg().Cscli.Output {
case "human":
fmt.Printf("API key for '%s':\n\n", bouncerName)
fmt.Printf(" %s\n\n", key)
fmt.Print("Please keep this key since you will not be able to retrieve it!\n")
case "raw":
fmt.Print(key)
case "json":
j, err := json.Marshal(key)
if err != nil {
return errors.New("unable to marshal api key")
}
fmt.Print(string(j))
}
return nil
}
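The add flow above generates a key (32 characters by default via `middlewares.GenerateAPIKey`) and stores only its SHA-512 hash, which is why the plaintext is printed once and cannot be retrieved later. A standard-library-only sketch of that idea; `generateAPIKey` and `hashSHA512` here are illustrative stand-ins, not the actual `middlewares` helpers:

```go
package main

import (
	"crypto/rand"
	"crypto/sha512"
	"encoding/hex"
	"fmt"
)

// generateAPIKey returns a random hex key; the real helper is assumed to do
// something equivalent (its implementation is not part of this diff).
func generateAPIKey(length int) (string, error) {
	buf := make([]byte, length)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return hex.EncodeToString(buf), nil
}

func hashSHA512(key string) string {
	sum := sha512.Sum512([]byte(key))
	return hex.EncodeToString(sum[:])
}

func main() {
	key, err := generateAPIKey(32)
	if err != nil {
		panic(err)
	}

	// Only the hash ever reaches the database, so the plaintext key is shown
	// once at creation time and cannot be recovered afterwards.
	fmt.Println("give to the bouncer:", key)
	fmt.Println("stored in the db:   ", hashSHA512(key))
}
```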
func (cli *cliBouncers) newAddCmd() *cobra.Command {
var key string
cmd := &cobra.Command{
Use: "add MyBouncerName",
Short: "add a single bouncer to the database",
Example: `cscli bouncers add MyBouncerName
cscli bouncers add MyBouncerName --key <random-key>`,
Args: cobra.ExactArgs(1),
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, args []string) error {
return cli.add(args[0], key)
},
}
flags := cmd.Flags()
flags.StringP("length", "l", "", "length of the api key")
_ = flags.MarkDeprecated("length", "use --key instead")
flags.StringVarP(&key, "key", "k", "", "api key for the bouncer")
return cmd
}
func (cli *cliBouncers) deleteValid(_ *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
bouncers, err := cli.db.ListBouncers()
if err != nil {
cobra.CompError("unable to list bouncers " + err.Error())
}
ret := []string{}
for _, bouncer := range bouncers {
if strings.Contains(bouncer.Name, toComplete) && !slices.Contains(args, bouncer.Name) {
ret = append(ret, bouncer.Name)
}
}
return ret, cobra.ShellCompDirectiveNoFileComp
}
func (cli *cliBouncers) delete(bouncers []string) error {
for _, bouncerID := range bouncers {
err := cli.db.DeleteBouncer(bouncerID)
if err != nil {
return fmt.Errorf("unable to delete bouncer '%s': %w", bouncerID, err)
}
log.Infof("bouncer '%s' deleted successfully", bouncerID) log.Infof("bouncer '%s' deleted successfully", bouncerID)
} }
return nil return nil
} }
func NewBouncersDeleteCmd() *cobra.Command { func (cli *cliBouncers) newDeleteCmd() *cobra.Command {
cmdBouncersDelete := &cobra.Command{ cmd := &cobra.Command{
Use: "delete MyBouncerName", Use: "delete MyBouncerName",
Short: "delete bouncer", Short: "delete bouncer(s) from the database",
Args: cobra.MinimumNArgs(1), Args: cobra.MinimumNArgs(1),
Aliases: []string{"remove"}, Aliases: []string{"remove"},
DisableAutoGenTag: true, DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) { ValidArgsFunction: cli.deleteValid,
var err error RunE: func(_ *cobra.Command, args []string) error {
dbClient, err = getDBClient() return cli.delete(args)
if err != nil {
cobra.CompError("unable to create new database client: " + err.Error())
return nil, cobra.ShellCompDirectiveNoFileComp
}
bouncers, err := dbClient.ListBouncers()
if err != nil {
cobra.CompError("unable to list bouncers " + err.Error())
}
ret := make([]string, 0)
for _, bouncer := range bouncers {
if strings.Contains(bouncer.Name, toComplete) && !slices.Contains(args, bouncer.Name) {
ret = append(ret, bouncer.Name)
}
}
return ret, cobra.ShellCompDirectiveNoFileComp
}, },
RunE: runBouncersDelete,
} }
return cmdBouncersDelete return cmd
} }
func NewBouncersCmd() *cobra.Command { func (cli *cliBouncers) prune(duration time.Duration, force bool) error {
var cmdBouncers = &cobra.Command{ if duration < 2*time.Minute {
Use: "bouncers [action]", if yes, err := askYesNo(
Short: "Manage bouncers [requires local API]", "The duration you provided is less than 2 minutes. " +
Long: `To list/add/delete bouncers. "This may remove active bouncers. Continue?", false); err != nil {
Note: This command requires database direct access, so is intended to be run on Local API/master. return err
`, } else if !yes {
Args: cobra.MinimumNArgs(1), fmt.Println("User aborted prune. No changes were made.")
Aliases: []string{"bouncer"},
DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
var err error
if err := csConfig.LoadAPIServer(); err != nil || csConfig.DisableAPI {
return fmt.Errorf("local API is disabled, please run this command on the local API machine")
}
dbClient, err = database.NewClient(csConfig.DbConfig)
if err != nil {
return fmt.Errorf("unable to create new database client: %s", err)
}
return nil return nil
}
}
bouncers, err := cli.db.QueryBouncersLastPulltimeLT(time.Now().UTC().Add(-duration))
if err != nil {
return fmt.Errorf("unable to query bouncers: %w", err)
}
if len(bouncers) == 0 {
fmt.Println("No bouncers to prune.")
return nil
}
getBouncersTable(color.Output, bouncers)
if !force {
if yes, err := askYesNo(
"You are about to PERMANENTLY remove the above bouncers from the database. " +
"These will NOT be recoverable. Continue?", false); err != nil {
return err
} else if !yes {
fmt.Println("User aborted prune. No changes were made.")
return nil
}
}
deleted, err := cli.db.BulkDeleteBouncers(bouncers)
if err != nil {
return fmt.Errorf("unable to prune bouncers: %w", err)
}
fmt.Fprintf(os.Stderr, "Successfully deleted %d bouncers\n", deleted)
return nil
}
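For reference, the prune selection boils down to a cutoff timestamp: bouncers whose `LastPull` is older than `now - duration` are listed and, after confirmation, bulk-deleted, and anything under two minutes triggers the extra prompt. A trivial sketch of that computation, with example values:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	duration := 45 * time.Minute

	// Same guard as the prune command: very short durations are likely to
	// catch bouncers that are still active, so ask before continuing.
	if duration < 2*time.Minute {
		fmt.Println("duration under 2 minutes, extra confirmation required")
	}

	// Bouncers whose LastPull is before this cutoff are candidates for pruning.
	cutoff := time.Now().UTC().Add(-duration)
	fmt.Println("pruning bouncers not seen since", cutoff.Format(time.RFC3339))
}
```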
func (cli *cliBouncers) newPruneCmd() *cobra.Command {
var (
duration time.Duration
force bool
)
const defaultDuration = 60 * time.Minute
cmd := &cobra.Command{
Use: "prune",
Short: "prune multiple bouncers from the database",
Args: cobra.NoArgs,
DisableAutoGenTag: true,
Example: `cscli bouncers prune -d 45m
cscli bouncers prune -d 45m --force`,
RunE: func(_ *cobra.Command, _ []string) error {
return cli.prune(duration, force)
}, },
} }
cmdBouncers.AddCommand(NewBouncersListCmd()) flags := cmd.Flags()
cmdBouncers.AddCommand(NewBouncersAddCmd()) flags.DurationVarP(&duration, "duration", "d", defaultDuration, "duration of time since last pull")
cmdBouncers.AddCommand(NewBouncersDeleteCmd()) flags.BoolVar(&force, "force", false, "force prune without asking for confirmation")
return cmdBouncers return cmd
} }


@@ -5,9 +5,9 @@ import (
"time" "time"
"github.com/aquasecurity/table" "github.com/aquasecurity/table"
"github.com/enescakir/emoji"
"github.com/crowdsecurity/crowdsec/pkg/database/ent" "github.com/crowdsecurity/crowdsec/pkg/database/ent"
"github.com/crowdsecurity/crowdsec/pkg/emoji"
) )
func getBouncersTable(out io.Writer, bouncers []*ent.Bouncer) { func getBouncersTable(out io.Writer, bouncers []*ent.Bouncer) {
@@ -17,11 +17,9 @@ func getBouncersTable(out io.Writer, bouncers []*ent.Bouncer) {
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)
for _, b := range bouncers { for _, b := range bouncers {
var revoked string revoked := emoji.CheckMark
if !b.Revoked { if b.Revoked {
revoked = emoji.CheckMark.String() revoked = emoji.Prohibited
} else {
revoked = emoji.Prohibited.String()
} }
t.AddRow(b.Name, b.IPAddress, revoked, b.LastPull.Format(time.RFC3339), b.Type, b.Version, b.AuthType) t.AddRow(b.Name, b.IPAddress, revoked, b.LastPull.Format(time.RFC3339), b.Type, b.Version, b.AuthType)


@@ -2,187 +2,221 @@ package main
import ( import (
"context" "context"
"errors"
"fmt" "fmt"
"net/url" "net/url"
"os" "os"
"github.com/crowdsecurity/go-cs-lib/pkg/version" "github.com/go-openapi/strfmt"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"gopkg.in/yaml.v3"
"github.com/crowdsecurity/go-cs-lib/version"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/apiclient" "github.com/crowdsecurity/crowdsec/pkg/apiclient"
"github.com/crowdsecurity/crowdsec/pkg/csconfig" "github.com/crowdsecurity/crowdsec/pkg/csconfig"
"github.com/crowdsecurity/crowdsec/pkg/cwhub" "github.com/crowdsecurity/crowdsec/pkg/cwhub"
"github.com/crowdsecurity/crowdsec/pkg/fflag"
"github.com/crowdsecurity/crowdsec/pkg/models" "github.com/crowdsecurity/crowdsec/pkg/models"
"github.com/crowdsecurity/crowdsec/pkg/types" "github.com/crowdsecurity/crowdsec/pkg/types"
"github.com/go-openapi/strfmt"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"gopkg.in/yaml.v2"
) )
const CAPIBaseURL string = "https://api.crowdsec.net/" const (
const CAPIURLPrefix = "v3" CAPIBaseURL = "https://api.crowdsec.net/"
CAPIURLPrefix = "v3"
)
func NewCapiCmd() *cobra.Command { type cliCapi struct {
var cmdCapi = &cobra.Command{ cfg configGetter
}
func NewCLICapi(cfg configGetter) *cliCapi {
return &cliCapi{
cfg: cfg,
}
}
func (cli *cliCapi) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "capi [action]", Use: "capi [action]",
Short: "Manage interaction with Central API (CAPI)", Short: "Manage interaction with Central API (CAPI)",
Args: cobra.MinimumNArgs(1), Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error { PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
if err := csConfig.LoadAPIServer(); err != nil || csConfig.DisableAPI { cfg := cli.cfg()
return errors.Wrap(err, "Local API is disabled, please run this command on the local API machine") if err := require.LAPI(cfg); err != nil {
return err
} }
if csConfig.API.Server.OnlineClient == nil {
log.Fatalf("no configuration for Central API in '%s'", *csConfig.FilePath) if err := require.CAPI(cfg); err != nil {
return err
} }
return nil return nil
}, },
} }
cmdCapi.AddCommand(NewCapiRegisterCmd()) cmd.AddCommand(cli.newRegisterCmd())
cmdCapi.AddCommand(NewCapiStatusCmd()) cmd.AddCommand(cli.newStatusCmd())
return cmdCapi return cmd
} }
func NewCapiRegisterCmd() *cobra.Command { func (cli *cliCapi) register(capiUserPrefix string, outputFile string) error {
var capiUserPrefix string cfg := cli.cfg()
var outputFile string
var cmdCapiRegister = &cobra.Command{ capiUser, err := generateID(capiUserPrefix)
if err != nil {
return fmt.Errorf("unable to generate machine id: %w", err)
}
password := strfmt.Password(generatePassword(passwordLength))
apiurl, err := url.Parse(types.CAPIBaseURL)
if err != nil {
return fmt.Errorf("unable to parse api url %s: %w", types.CAPIBaseURL, err)
}
_, err = apiclient.RegisterClient(&apiclient.Config{
MachineID: capiUser,
Password: password,
UserAgent: fmt.Sprintf("crowdsec/%s", version.String()),
URL: apiurl,
VersionPrefix: CAPIURLPrefix,
}, nil)
if err != nil {
return fmt.Errorf("api client register ('%s'): %w", types.CAPIBaseURL, err)
}
log.Infof("Successfully registered to Central API (CAPI)")
var dumpFile string
switch {
case outputFile != "":
dumpFile = outputFile
case cfg.API.Server.OnlineClient.CredentialsFilePath != "":
dumpFile = cfg.API.Server.OnlineClient.CredentialsFilePath
default:
dumpFile = ""
}
apiCfg := csconfig.ApiCredentialsCfg{
Login: capiUser,
Password: password.String(),
URL: types.CAPIBaseURL,
}
apiConfigDump, err := yaml.Marshal(apiCfg)
if err != nil {
return fmt.Errorf("unable to marshal api credentials: %w", err)
}
if dumpFile != "" {
err = os.WriteFile(dumpFile, apiConfigDump, 0o600)
if err != nil {
return fmt.Errorf("write api credentials in '%s' failed: %w", dumpFile, err)
}
log.Infof("Central API credentials written to '%s'", dumpFile)
} else {
fmt.Println(string(apiConfigDump))
}
log.Warning(ReloadMessage())
return nil
}
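`register` ends by serializing the credentials with yaml.v3 and writing them with mode 0600, either to the configured credentials path or to stdout. A hedged sketch of just that step: the `apiCredentials` struct below approximates `csconfig.ApiCredentialsCfg` (the real field names and yaml tags live in pkg/csconfig), and the file name and values are examples only.

```go
package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// apiCredentials approximates csconfig.ApiCredentialsCfg for this sketch.
type apiCredentials struct {
	Login    string `yaml:"login"`
	Password string `yaml:"password"`
	URL      string `yaml:"url"`
}

func main() {
	creds := apiCredentials{
		Login:    "example-machine-id",
		Password: "example-password",
		URL:      "https://api.crowdsec.net/",
	}

	out, err := yaml.Marshal(creds)
	if err != nil {
		panic(err)
	}

	// Credentials are written with 0600 so only the service user can read them.
	if err := os.WriteFile("online_api_credentials.yaml", out, 0o600); err != nil {
		panic(err)
	}

	fmt.Print(string(out))
}
```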
func (cli *cliCapi) newRegisterCmd() *cobra.Command {
var (
capiUserPrefix string
outputFile string
)
cmd := &cobra.Command{
Use: "register", Use: "register",
Short: "Register to Central API (CAPI)", Short: "Register to Central API (CAPI)",
Args: cobra.MinimumNArgs(0), Args: cobra.MinimumNArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
var err error return cli.register(capiUserPrefix, outputFile)
capiUser, err := generateID(capiUserPrefix)
if err != nil {
log.Fatalf("unable to generate machine id: %s", err)
}
password := strfmt.Password(generatePassword(passwordLength))
apiurl, err := url.Parse(types.CAPIBaseURL)
if err != nil {
log.Fatalf("unable to parse api url %s : %s", types.CAPIBaseURL, err)
}
_, err = apiclient.RegisterClient(&apiclient.Config{
MachineID: capiUser,
Password: password,
UserAgent: fmt.Sprintf("crowdsec/%s", version.String()),
URL: apiurl,
VersionPrefix: CAPIURLPrefix,
}, nil)
if err != nil {
log.Fatalf("api client register ('%s'): %s", types.CAPIBaseURL, err)
}
log.Printf("Successfully registered to Central API (CAPI)")
var dumpFile string
if outputFile != "" {
dumpFile = outputFile
} else if csConfig.API.Server.OnlineClient.CredentialsFilePath != "" {
dumpFile = csConfig.API.Server.OnlineClient.CredentialsFilePath
} else {
dumpFile = ""
}
apiCfg := csconfig.ApiCredentialsCfg{
Login: capiUser,
Password: password.String(),
URL: types.CAPIBaseURL,
}
if fflag.PapiClient.IsEnabled() {
apiCfg.PapiURL = types.PAPIBaseURL
}
apiConfigDump, err := yaml.Marshal(apiCfg)
if err != nil {
log.Fatalf("unable to marshal api credentials: %s", err)
}
if dumpFile != "" {
err = os.WriteFile(dumpFile, apiConfigDump, 0600)
if err != nil {
log.Fatalf("write api credentials in '%s' failed: %s", dumpFile, err)
}
log.Printf("Central API credentials dumped to '%s'", dumpFile)
} else {
fmt.Printf("%s\n", string(apiConfigDump))
}
log.Warning(ReloadMessage())
}, },
} }
cmdCapiRegister.Flags().StringVarP(&outputFile, "file", "f", "", "output file destination")
cmdCapiRegister.Flags().StringVar(&capiUserPrefix, "schmilblick", "", "set a schmilblick (use in tests only)") cmd.Flags().StringVarP(&outputFile, "file", "f", "", "output file destination")
if err := cmdCapiRegister.Flags().MarkHidden("schmilblick"); err != nil { cmd.Flags().StringVar(&capiUserPrefix, "schmilblick", "", "set a schmilblick (use in tests only)")
if err := cmd.Flags().MarkHidden("schmilblick"); err != nil {
log.Fatalf("failed to hide flag: %s", err) log.Fatalf("failed to hide flag: %s", err)
} }
return cmdCapiRegister return cmd
} }
func NewCapiStatusCmd() *cobra.Command { func (cli *cliCapi) status() error {
var cmdCapiStatus = &cobra.Command{ cfg := cli.cfg()
if err := require.CAPIRegistered(cfg); err != nil {
return err
}
password := strfmt.Password(cfg.API.Server.OnlineClient.Credentials.Password)
apiurl, err := url.Parse(cfg.API.Server.OnlineClient.Credentials.URL)
if err != nil {
return fmt.Errorf("parsing api url ('%s'): %w", cfg.API.Server.OnlineClient.Credentials.URL, err)
}
hub, err := require.Hub(cfg, nil, nil)
if err != nil {
return err
}
scenarios, err := hub.GetInstalledNamesByType(cwhub.SCENARIOS)
if err != nil {
return fmt.Errorf("failed to get scenarios: %w", err)
}
if len(scenarios) == 0 {
return errors.New("no scenarios installed, abort")
}
Client, err = apiclient.NewDefaultClient(apiurl, CAPIURLPrefix, fmt.Sprintf("crowdsec/%s", version.String()), nil)
if err != nil {
return fmt.Errorf("init default client: %w", err)
}
t := models.WatcherAuthRequest{
MachineID: &cfg.API.Server.OnlineClient.Credentials.Login,
Password: &password,
Scenarios: scenarios,
}
log.Infof("Loaded credentials from %s", cfg.API.Server.OnlineClient.CredentialsFilePath)
log.Infof("Trying to authenticate with username %s on %s", cfg.API.Server.OnlineClient.Credentials.Login, apiurl)
_, _, err = Client.Auth.AuthenticateWatcher(context.Background(), t)
if err != nil {
return fmt.Errorf("failed to authenticate to Central API (CAPI): %w", err)
}
log.Info("You can successfully interact with Central API (CAPI)")
return nil
}
func (cli *cliCapi) newStatusCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "status", Use: "status",
Short: "Check status with the Central API (CAPI)", Short: "Check status with the Central API (CAPI)",
Args: cobra.MinimumNArgs(0), Args: cobra.MinimumNArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
var err error return cli.status()
if csConfig.API.Server == nil {
log.Fatalln("There is no configuration on 'api.server:'")
}
if csConfig.API.Server.OnlineClient == nil {
log.Fatalf("Please provide credentials for the Central API (CAPI) in '%s'", csConfig.API.Server.OnlineClient.CredentialsFilePath)
}
if csConfig.API.Server.OnlineClient.Credentials == nil {
log.Fatalf("no credentials for Central API (CAPI) in '%s'", csConfig.API.Server.OnlineClient.CredentialsFilePath)
}
password := strfmt.Password(csConfig.API.Server.OnlineClient.Credentials.Password)
apiurl, err := url.Parse(csConfig.API.Server.OnlineClient.Credentials.URL)
if err != nil {
log.Fatalf("parsing api url ('%s'): %s", csConfig.API.Server.OnlineClient.Credentials.URL, err)
}
if err := csConfig.LoadHub(); err != nil {
log.Fatal(err)
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
log.Info("Run 'sudo cscli hub update' to get the hub index")
log.Fatalf("Failed to load hub index : %s", err)
}
scenarios, err := cwhub.GetInstalledScenariosAsString()
if err != nil {
log.Fatalf("failed to get scenarios : %s", err)
}
if len(scenarios) == 0 {
log.Fatalf("no scenarios installed, abort")
}
Client, err = apiclient.NewDefaultClient(apiurl, CAPIURLPrefix, fmt.Sprintf("crowdsec/%s", version.String()), nil)
if err != nil {
log.Fatalf("init default client: %s", err)
}
t := models.WatcherAuthRequest{
MachineID: &csConfig.API.Server.OnlineClient.Credentials.Login,
Password: &password,
Scenarios: scenarios,
}
log.Infof("Loaded credentials from %s", csConfig.API.Server.OnlineClient.CredentialsFilePath)
log.Infof("Trying to authenticate with username %s on %s", csConfig.API.Server.OnlineClient.Credentials.Login, apiurl)
_, _, err = Client.Auth.AuthenticateWatcher(context.Background(), t)
if err != nil {
log.Fatalf("Failed to authenticate to Central API (CAPI) : %s", err)
}
log.Infof("You can successfully interact with Central API (CAPI)")
}, },
} }
return cmdCapiStatus return cmd
} }


@@ -1,183 +0,0 @@
package main
import (
"fmt"
"github.com/fatih/color"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func NewCollectionsCmd() *cobra.Command {
var cmdCollections = &cobra.Command{
Use: "collections [action]",
Short: "Manage collections from hub",
Long: `Install/Remove/Upgrade/Inspect collections from the CrowdSec Hub.`,
/*TBD fix help*/
Args: cobra.MinimumNArgs(1),
Aliases: []string{"collection"},
DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if err := csConfig.LoadHub(); err != nil {
log.Fatal(err)
}
if csConfig.Hub == nil {
return fmt.Errorf("you must configure cli before interacting with hub")
}
if err := cwhub.SetHubBranch(); err != nil {
return fmt.Errorf("error while setting hub branch: %s", err)
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
log.Info("Run 'sudo cscli hub update' to get the hub index")
log.Fatalf("Failed to get Hub index : %v", err)
}
return nil
},
PersistentPostRun: func(cmd *cobra.Command, args []string) {
if cmd.Name() == "inspect" || cmd.Name() == "list" {
return
}
log.Infof(ReloadMessage())
},
}
var ignoreError bool
var cmdCollectionsInstall = &cobra.Command{
Use: "install collection",
Short: "Install given collection(s)",
Long: `Fetch and install given collection(s) from hub`,
Example: `cscli collections install crowdsec/xxx crowdsec/xyz`,
Args: cobra.MinimumNArgs(1),
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compAllItems(cwhub.COLLECTIONS, args, toComplete)
},
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
for _, name := range args {
t := cwhub.GetItem(cwhub.COLLECTIONS, name)
if t == nil {
nearestItem, score := GetDistance(cwhub.COLLECTIONS, name)
Suggest(cwhub.COLLECTIONS, name, nearestItem.Name, score, ignoreError)
continue
}
if err := cwhub.InstallItem(csConfig, name, cwhub.COLLECTIONS, forceAction, downloadOnly); err != nil {
if !ignoreError {
log.Fatalf("Error while installing '%s': %s", name, err)
}
log.Errorf("Error while installing '%s': %s", name, err)
}
}
},
}
cmdCollectionsInstall.PersistentFlags().BoolVarP(&downloadOnly, "download-only", "d", false, "Only download packages, don't enable")
cmdCollectionsInstall.PersistentFlags().BoolVar(&forceAction, "force", false, "Force install : Overwrite tainted and outdated files")
cmdCollectionsInstall.PersistentFlags().BoolVar(&ignoreError, "ignore", false, "Ignore errors when installing multiple collections")
cmdCollections.AddCommand(cmdCollectionsInstall)
var cmdCollectionsRemove = &cobra.Command{
Use: "remove collection",
Short: "Remove given collection(s)",
Long: `Remove given collection(s) from hub`,
Example: `cscli collections remove crowdsec/xxx crowdsec/xyz`,
Aliases: []string{"delete"},
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.COLLECTIONS, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
if all {
cwhub.RemoveMany(csConfig, cwhub.COLLECTIONS, "", all, purge, forceAction)
return
}
if len(args) == 0 {
log.Fatal("Specify at least one collection to remove or '--all' flag.")
}
for _, name := range args {
if !forceAction {
item := cwhub.GetItem(cwhub.COLLECTIONS, name)
if item == nil {
log.Fatalf("unable to retrieve: %s\n", name)
}
if len(item.BelongsToCollections) > 0 {
log.Warningf("%s belongs to other collections :\n%s\n", name, item.BelongsToCollections)
log.Printf("Run 'sudo cscli collections remove %s --force' if you want to force remove this sub collection\n", name)
continue
}
}
cwhub.RemoveMany(csConfig, cwhub.COLLECTIONS, name, all, purge, forceAction)
}
},
}
cmdCollectionsRemove.PersistentFlags().BoolVar(&purge, "purge", false, "Delete source file too")
cmdCollectionsRemove.PersistentFlags().BoolVar(&forceAction, "force", false, "Force remove : Remove tainted and outdated files")
cmdCollectionsRemove.PersistentFlags().BoolVar(&all, "all", false, "Delete all the collections")
cmdCollections.AddCommand(cmdCollectionsRemove)
var cmdCollectionsUpgrade = &cobra.Command{
Use: "upgrade collection",
Short: "Upgrade given collection(s)",
Long: `Fetch and upgrade given collection(s) from hub`,
Example: `cscli collections upgrade crowdsec/xxx crowdsec/xyz`,
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.COLLECTIONS, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
if all {
cwhub.UpgradeConfig(csConfig, cwhub.COLLECTIONS, "", forceAction)
} else {
if len(args) == 0 {
log.Fatalf("no target collection to upgrade")
}
for _, name := range args {
cwhub.UpgradeConfig(csConfig, cwhub.COLLECTIONS, name, forceAction)
}
}
},
}
cmdCollectionsUpgrade.PersistentFlags().BoolVarP(&all, "all", "a", false, "Upgrade all the collections")
cmdCollectionsUpgrade.PersistentFlags().BoolVar(&forceAction, "force", false, "Force upgrade : Overwrite tainted and outdated files")
cmdCollections.AddCommand(cmdCollectionsUpgrade)
var cmdCollectionsInspect = &cobra.Command{
Use: "inspect collection",
Short: "Inspect given collection",
Long: `Inspect given collection`,
Example: `cscli collections inspect crowdsec/xxx crowdsec/xyz`,
Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.COLLECTIONS, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
for _, name := range args {
InspectItem(name, cwhub.COLLECTIONS)
}
},
}
cmdCollectionsInspect.PersistentFlags().StringVarP(&prometheusURL, "url", "u", "", "Prometheus url")
cmdCollections.AddCommand(cmdCollectionsInspect)
var cmdCollectionsList = &cobra.Command{
Use: "list collection [-a]",
Short: "List all collections",
Long: `List all collections`,
Example: `cscli collections list`,
Args: cobra.ExactArgs(0),
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
ListItems(color.Output, []string{cwhub.COLLECTIONS}, args, false, true, all)
},
}
cmdCollectionsList.PersistentFlags().BoolVarP(&all, "all", "a", false, "List disabled items as well")
cmdCollections.AddCommand(cmdCollectionsList)
return cmdCollections
}


@@ -7,8 +7,7 @@ import (
) )
func NewCompletionCmd() *cobra.Command { func NewCompletionCmd() *cobra.Command {
completionCmd := &cobra.Command{
var completionCmd = &cobra.Command{
Use: "completion [bash|zsh|powershell|fish]", Use: "completion [bash|zsh|powershell|fish]",
Short: "Generate completion script", Short: "Generate completion script",
Long: `To load completions: Long: `To load completions:
@@ -82,5 +81,6 @@ func NewCompletionCmd() *cobra.Command {
} }
}, },
} }
return completionCmd return completionCmd
} }


@@ -4,19 +4,29 @@ import (
"github.com/spf13/cobra" "github.com/spf13/cobra"
) )
func NewConfigCmd() *cobra.Command { type cliConfig struct {
cmdConfig := &cobra.Command{ cfg configGetter
}
func NewCLIConfig(cfg configGetter) *cliConfig {
return &cliConfig{
cfg: cfg,
}
}
func (cli *cliConfig) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "config [command]", Use: "config [command]",
Short: "Allows to view current config", Short: "Allows to view current config",
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
} }
cmdConfig.AddCommand(NewConfigShowCmd()) cmd.AddCommand(cli.newShowCmd())
cmdConfig.AddCommand(NewConfigShowYAMLCmd()) cmd.AddCommand(cli.newShowYAMLCmd())
cmdConfig.AddCommand(NewConfigBackupCmd()) cmd.AddCommand(cli.newBackupCmd())
cmdConfig.AddCommand(NewConfigRestoreCmd()) cmd.AddCommand(cli.newRestoreCmd())
cmdConfig.AddCommand(NewConfigFeatureFlagsCmd()) cmd.AddCommand(cli.newFeatureFlagsCmd())
return cmdConfig return cmd
} }


@@ -1,19 +1,99 @@
package main package main
import ( import (
"encoding/json"
"errors"
"fmt" "fmt"
"os" "os"
"path/filepath" "path/filepath"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/crowdsecurity/crowdsec/pkg/types" "github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/cwhub" "github.com/crowdsecurity/crowdsec/pkg/cwhub"
) )
/* Backup crowdsec configurations to directory <dirPath> : func (cli *cliConfig) backupHub(dirPath string) error {
hub, err := require.Hub(cli.cfg(), nil, nil)
if err != nil {
return err
}
for _, itemType := range cwhub.ItemTypes {
clog := log.WithFields(log.Fields{
"type": itemType,
})
itemMap := hub.GetItemMap(itemType)
if itemMap == nil {
clog.Infof("No %s to backup.", itemType)
continue
}
itemDirectory := fmt.Sprintf("%s/%s/", dirPath, itemType)
if err = os.MkdirAll(itemDirectory, os.ModePerm); err != nil {
return fmt.Errorf("error while creating %s: %w", itemDirectory, err)
}
upstreamParsers := []string{}
for k, v := range itemMap {
clog = clog.WithFields(log.Fields{
"file": v.Name,
})
if !v.State.Installed { // only backup installed ones
clog.Debugf("[%s]: not installed", k)
continue
}
// for the local/tainted ones, we back up the full file
if v.State.Tainted || v.State.IsLocal() || !v.State.UpToDate {
// we need to backup stages for parsers
if itemType == cwhub.PARSERS || itemType == cwhub.POSTOVERFLOWS {
fstagedir := fmt.Sprintf("%s%s", itemDirectory, v.Stage)
if err = os.MkdirAll(fstagedir, os.ModePerm); err != nil {
return fmt.Errorf("error while creating stage dir %s: %w", fstagedir, err)
}
}
clog.Debugf("[%s]: backing up file (tainted:%t local:%t up-to-date:%t)", k, v.State.Tainted, v.State.IsLocal(), v.State.UpToDate)
tfile := fmt.Sprintf("%s%s/%s", itemDirectory, v.Stage, v.FileName)
if err = CopyFile(v.State.LocalPath, tfile); err != nil {
return fmt.Errorf("failed copy %s %s to %s: %w", itemType, v.State.LocalPath, tfile, err)
}
clog.Infof("local/tainted saved %s to %s", v.State.LocalPath, tfile)
continue
}
clog.Debugf("[%s]: from hub, just backup name (up-to-date:%t)", k, v.State.UpToDate)
clog.Infof("saving, version:%s, up-to-date:%t", v.Version, v.State.UpToDate)
upstreamParsers = append(upstreamParsers, v.Name)
}
// write the upstream items
upstreamParsersFname := fmt.Sprintf("%s/upstream-%s.json", itemDirectory, itemType)
upstreamParsersContent, err := json.MarshalIndent(upstreamParsers, "", " ")
if err != nil {
return fmt.Errorf("failed marshaling upstream parsers: %w", err)
}
err = os.WriteFile(upstreamParsersFname, upstreamParsersContent, 0o644)
if err != nil {
return fmt.Errorf("unable to write to %s %s: %w", itemType, upstreamParsersFname, err)
}
clog.Infof("Wrote %d entries for %s to %s", len(upstreamParsers), itemType, upstreamParsersFname)
}
return nil
}
+/*
+Backup crowdsec configurations to directory <dirPath>:
 - Main config (config.yaml)
 - Profiles config (profiles.yaml)
@@ -21,30 +101,33 @@ import (
 - Backup of API credentials (local API and online API)
 - List of scenarios, parsers, postoverflows and collections that are up-to-date
 - Tainted/local/out-of-date scenarios, parsers, postoverflows and collections
+- Acquisition files (acquis.yaml, acquis.d/*.yaml)
 */
-func backupConfigToDirectory(dirPath string) error {
+func (cli *cliConfig) backup(dirPath string) error {
 	var err error
 
+	cfg := cli.cfg()
+
 	if dirPath == "" {
-		return fmt.Errorf("directory path can't be empty")
+		return errors.New("directory path can't be empty")
 	}
 
 	log.Infof("Starting configuration backup")
 
 	/*if parent directory doesn't exist, bail out. create final dir with Mkdir*/
 	parentDir := filepath.Dir(dirPath)
-	if _, err := os.Stat(parentDir); err != nil {
-		return errors.Wrapf(err, "while checking parent directory %s existence", parentDir)
+	if _, err = os.Stat(parentDir); err != nil {
+		return fmt.Errorf("while checking parent directory %s existence: %w", parentDir, err)
 	}
 
 	if err = os.Mkdir(dirPath, 0o700); err != nil {
-		return errors.Wrapf(err, "while creating %s", dirPath)
+		return fmt.Errorf("while creating %s: %w", dirPath, err)
 	}
 
-	if csConfig.ConfigPaths.SimulationFilePath != "" {
+	if cfg.ConfigPaths.SimulationFilePath != "" {
 		backupSimulation := filepath.Join(dirPath, "simulation.yaml")
-		if err = types.CopyFile(csConfig.ConfigPaths.SimulationFilePath, backupSimulation); err != nil {
-			return errors.Wrapf(err, "failed copy %s to %s", csConfig.ConfigPaths.SimulationFilePath, backupSimulation)
+		if err = CopyFile(cfg.ConfigPaths.SimulationFilePath, backupSimulation); err != nil {
+			return fmt.Errorf("failed copy %s to %s: %w", cfg.ConfigPaths.SimulationFilePath, backupSimulation, err)
 		}
 
 		log.Infof("Saved simulation to %s", backupSimulation)
@@ -54,32 +137,32 @@ func backupConfigToDirectory(dirPath string) error {
 	- backup AcquisitionFilePath
 	- backup the other files of acquisition directory
 	*/
-	if csConfig.Crowdsec != nil && csConfig.Crowdsec.AcquisitionFilePath != "" {
+	if cfg.Crowdsec != nil && cfg.Crowdsec.AcquisitionFilePath != "" {
 		backupAcquisition := filepath.Join(dirPath, "acquis.yaml")
-		if err = types.CopyFile(csConfig.Crowdsec.AcquisitionFilePath, backupAcquisition); err != nil {
-			return fmt.Errorf("failed copy %s to %s : %s", csConfig.Crowdsec.AcquisitionFilePath, backupAcquisition, err)
+		if err = CopyFile(cfg.Crowdsec.AcquisitionFilePath, backupAcquisition); err != nil {
+			return fmt.Errorf("failed copy %s to %s: %w", cfg.Crowdsec.AcquisitionFilePath, backupAcquisition, err)
 		}
 	}
 
 	acquisBackupDir := filepath.Join(dirPath, "acquis")
 	if err = os.Mkdir(acquisBackupDir, 0o700); err != nil {
-		return fmt.Errorf("error while creating %s : %s", acquisBackupDir, err)
+		return fmt.Errorf("error while creating %s: %w", acquisBackupDir, err)
 	}
 
-	if csConfig.Crowdsec != nil && len(csConfig.Crowdsec.AcquisitionFiles) > 0 {
-		for _, acquisFile := range csConfig.Crowdsec.AcquisitionFiles {
+	if cfg.Crowdsec != nil && len(cfg.Crowdsec.AcquisitionFiles) > 0 {
+		for _, acquisFile := range cfg.Crowdsec.AcquisitionFiles {
 			/*if it was the default one, it was already backup'ed*/
-			if csConfig.Crowdsec.AcquisitionFilePath == acquisFile {
+			if cfg.Crowdsec.AcquisitionFilePath == acquisFile {
 				continue
 			}
 
 			targetFname, err := filepath.Abs(filepath.Join(acquisBackupDir, filepath.Base(acquisFile)))
 			if err != nil {
-				return errors.Wrapf(err, "while saving %s to %s", acquisFile, acquisBackupDir)
+				return fmt.Errorf("while saving %s to %s: %w", acquisFile, acquisBackupDir, err)
 			}
 
-			if err = types.CopyFile(acquisFile, targetFname); err != nil {
-				return fmt.Errorf("failed copy %s to %s : %s", acquisFile, targetFname, err)
+			if err = CopyFile(acquisFile, targetFname); err != nil {
+				return fmt.Errorf("failed copy %s to %s: %w", acquisFile, targetFname, err)
 			}
 
 			log.Infof("Saved acquis %s to %s", acquisFile, targetFname)
@@ -88,68 +171,49 @@ func backupConfigToDirectory(dirPath string) error {
 	if ConfigFilePath != "" {
 		backupMain := fmt.Sprintf("%s/config.yaml", dirPath)
-		if err = types.CopyFile(ConfigFilePath, backupMain); err != nil {
-			return fmt.Errorf("failed copy %s to %s : %s", ConfigFilePath, backupMain, err)
+		if err = CopyFile(ConfigFilePath, backupMain); err != nil {
+			return fmt.Errorf("failed copy %s to %s: %w", ConfigFilePath, backupMain, err)
 		}
 
 		log.Infof("Saved default yaml to %s", backupMain)
 	}
 
-	if csConfig.API != nil && csConfig.API.Server != nil && csConfig.API.Server.OnlineClient != nil && csConfig.API.Server.OnlineClient.CredentialsFilePath != "" {
+	if cfg.API != nil && cfg.API.Server != nil && cfg.API.Server.OnlineClient != nil && cfg.API.Server.OnlineClient.CredentialsFilePath != "" {
 		backupCAPICreds := fmt.Sprintf("%s/online_api_credentials.yaml", dirPath)
-		if err = types.CopyFile(csConfig.API.Server.OnlineClient.CredentialsFilePath, backupCAPICreds); err != nil {
-			return fmt.Errorf("failed copy %s to %s : %s", csConfig.API.Server.OnlineClient.CredentialsFilePath, backupCAPICreds, err)
+		if err = CopyFile(cfg.API.Server.OnlineClient.CredentialsFilePath, backupCAPICreds); err != nil {
+			return fmt.Errorf("failed copy %s to %s: %w", cfg.API.Server.OnlineClient.CredentialsFilePath, backupCAPICreds, err)
 		}
 
 		log.Infof("Saved online API credentials to %s", backupCAPICreds)
 	}
 
-	if csConfig.API != nil && csConfig.API.Client != nil && csConfig.API.Client.CredentialsFilePath != "" {
+	if cfg.API != nil && cfg.API.Client != nil && cfg.API.Client.CredentialsFilePath != "" {
 		backupLAPICreds := fmt.Sprintf("%s/local_api_credentials.yaml", dirPath)
-		if err = types.CopyFile(csConfig.API.Client.CredentialsFilePath, backupLAPICreds); err != nil {
-			return fmt.Errorf("failed copy %s to %s : %s", csConfig.API.Client.CredentialsFilePath, backupLAPICreds, err)
+		if err = CopyFile(cfg.API.Client.CredentialsFilePath, backupLAPICreds); err != nil {
+			return fmt.Errorf("failed copy %s to %s: %w", cfg.API.Client.CredentialsFilePath, backupLAPICreds, err)
 		}
 
 		log.Infof("Saved local API credentials to %s", backupLAPICreds)
 	}
 
-	if csConfig.API != nil && csConfig.API.Server != nil && csConfig.API.Server.ProfilesPath != "" {
+	if cfg.API != nil && cfg.API.Server != nil && cfg.API.Server.ProfilesPath != "" {
 		backupProfiles := fmt.Sprintf("%s/profiles.yaml", dirPath)
-		if err = types.CopyFile(csConfig.API.Server.ProfilesPath, backupProfiles); err != nil {
-			return fmt.Errorf("failed copy %s to %s : %s", csConfig.API.Server.ProfilesPath, backupProfiles, err)
+		if err = CopyFile(cfg.API.Server.ProfilesPath, backupProfiles); err != nil {
+			return fmt.Errorf("failed copy %s to %s: %w", cfg.API.Server.ProfilesPath, backupProfiles, err)
 		}
 
 		log.Infof("Saved profiles to %s", backupProfiles)
 	}
 
-	if err = BackupHub(dirPath); err != nil {
-		return fmt.Errorf("failed to backup hub config : %s", err)
+	if err = cli.backupHub(dirPath); err != nil {
+		return fmt.Errorf("failed to backup hub config: %w", err)
 	}
 
 	return nil
 }
-func runConfigBackup(cmd *cobra.Command, args []string) error {
-	if err := csConfig.LoadHub(); err != nil {
-		return err
-	}
-
-	if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
-		log.Info("Run 'sudo cscli hub update' to get the hub index")
-		return fmt.Errorf("failed to get Hub index: %w", err)
-	}
-
-	if err := backupConfigToDirectory(args[0]); err != nil {
-		return fmt.Errorf("failed to backup config: %w", err)
-	}
-
-	return nil
-}
-
-func NewConfigBackupCmd() *cobra.Command {
-	cmdConfigBackup := &cobra.Command{
+func (cli *cliConfig) newBackupCmd() *cobra.Command {
+	cmd := &cobra.Command{
 		Use:   `backup "directory"`,
 		Short: "Backup current config",
 		Long: `Backup the current crowdsec configuration including :
@@ -163,8 +227,14 @@ func NewConfigBackupCmd() *cobra.Command {
 		Example:           `cscli config backup ./my-backup`,
 		Args:              cobra.ExactArgs(1),
 		DisableAutoGenTag: true,
-		RunE:              runConfigBackup,
+		RunE: func(_ *cobra.Command, args []string) error {
+			if err := cli.backup(args[0]); err != nil {
+				return fmt.Errorf("failed to backup config: %w", err)
+			}
+
+			return nil
+		},
 	}
 
-	return cmdConfigBackup
+	return cmd
 }


@@ -2,21 +2,16 @@ package main
 
 import (
 	"fmt"
+	"path/filepath"
 
 	"github.com/fatih/color"
 	"github.com/spf13/cobra"
 
+	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
 	"github.com/crowdsecurity/crowdsec/pkg/fflag"
 )
 
-func runConfigFeatureFlags(cmd *cobra.Command, args []string) error {
-	flags := cmd.Flags()
-
-	showRetired, err := flags.GetBool("retired")
-	if err != nil {
-		return err
-	}
-
+func (cli *cliConfig) featureFlags(showRetired bool) error {
 	green := color.New(color.FgGreen).SprintFunc()
 	red := color.New(color.FgRed).SprintFunc()
 	yellow := color.New(color.FgYellow).SprintFunc()
@@ -42,6 +37,7 @@ func runConfigFeatureFlags(cmd *cobra.Command, args []string) error {
 		if feat.State == fflag.RetiredState {
 			fmt.Printf("\n %s %s", magenta("RETIRED"), feat.DeprecationMsg)
 		}
+
 		fmt.Println()
 	}
@@ -56,10 +52,12 @@ func runConfigFeatureFlags(cmd *cobra.Command, args []string) error {
 			retired = append(retired, feat)
 			continue
 		}
+
 		if feat.IsEnabled() {
 			enabled = append(enabled, feat)
 			continue
 		}
+
 		disabled = append(disabled, feat)
 	}
@@ -87,7 +85,14 @@ func runConfigFeatureFlags(cmd *cobra.Command, args []string) error {
 	fmt.Println("To enable a feature you can: ")
 	fmt.Println(" - set the environment variable CROWDSEC_FEATURE_<uppercase_feature_name> to true")
-	fmt.Printf(" - add the line '- <feature_name>' to the file %s/feature.yaml\n", csConfig.ConfigPaths.ConfigDir)
+
+	featurePath, err := filepath.Abs(csconfig.GetFeatureFilePath(ConfigFilePath))
+	if err != nil {
+		// we already read the file, shouldn't happen
+		return err
+	}
+
+	fmt.Printf(" - add the line '- <feature_name>' to the file %s\n", featurePath)
 	fmt.Println()
 
 	if len(enabled) == 0 && len(disabled) == 0 {
@@ -109,18 +114,22 @@ func runConfigFeatureFlags(cmd *cobra.Command, args []string) error {
 	return nil
 }
 
-func NewConfigFeatureFlagsCmd() *cobra.Command {
-	cmdConfigFeatureFlags := &cobra.Command{
+func (cli *cliConfig) newFeatureFlagsCmd() *cobra.Command {
+	var showRetired bool
+
+	cmd := &cobra.Command{
 		Use:               "feature-flags",
 		Short:             "Displays feature flag status",
 		Long:              `Displays the supported feature flags and their current status.`,
 		Args:              cobra.ExactArgs(0),
 		DisableAutoGenTag: true,
-		RunE:              runConfigFeatureFlags,
+		RunE: func(_ *cobra.Command, _ []string) error {
+			return cli.featureFlags(showRetired)
+		},
 	}
 
-	flags := cmdConfigFeatureFlags.Flags()
-	flags.Bool("retired", false, "Show retired features")
+	flags := cmd.Flags()
+	flags.BoolVar(&showRetired, "retired", false, "Show retired features")
 
-	return cmdConfigFeatureFlags
+	return cmd
 }


@@ -3,26 +3,120 @@ package main
 
 import (
 	"encoding/json"
 	"fmt"
-	"io"
 	"os"
 	"path/filepath"
 
-	"github.com/pkg/errors"
 	log "github.com/sirupsen/logrus"
 	"github.com/spf13/cobra"
-	"gopkg.in/yaml.v2"
 
-	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
-	"github.com/crowdsecurity/crowdsec/pkg/types"
+	"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
 	"github.com/crowdsecurity/crowdsec/pkg/cwhub"
 )
type OldAPICfg struct { func (cli *cliConfig) restoreHub(dirPath string) error {
MachineID string `json:"machine_id"` cfg := cli.cfg()
Password string `json:"password"`
hub, err := require.Hub(cfg, require.RemoteHub(cfg), nil)
if err != nil {
return err
}
for _, itype := range cwhub.ItemTypes {
itemDirectory := fmt.Sprintf("%s/%s/", dirPath, itype)
if _, err = os.Stat(itemDirectory); err != nil {
log.Infof("no %s in backup", itype)
continue
}
/*restore the upstream items*/
upstreamListFN := fmt.Sprintf("%s/upstream-%s.json", itemDirectory, itype)
file, err := os.ReadFile(upstreamListFN)
if err != nil {
return fmt.Errorf("error while opening %s: %w", upstreamListFN, err)
}
var upstreamList []string
err = json.Unmarshal(file, &upstreamList)
if err != nil {
return fmt.Errorf("error unmarshaling %s: %w", upstreamListFN, err)
}
for _, toinstall := range upstreamList {
item := hub.GetItem(itype, toinstall)
if item == nil {
log.Errorf("Item %s/%s not found in hub", itype, toinstall)
continue
}
if err = item.Install(false, false); err != nil {
log.Errorf("Error while installing %s : %s", toinstall, err)
}
}
/*restore the local and tainted items*/
files, err := os.ReadDir(itemDirectory)
if err != nil {
return fmt.Errorf("failed enumerating files of %s: %w", itemDirectory, err)
}
for _, file := range files {
// this was the upstream data
if file.Name() == fmt.Sprintf("upstream-%s.json", itype) {
continue
}
if itype == cwhub.PARSERS || itype == cwhub.POSTOVERFLOWS {
// we expect a stage here
if !file.IsDir() {
continue
}
stage := file.Name()
stagedir := fmt.Sprintf("%s/%s/%s/", cfg.ConfigPaths.ConfigDir, itype, stage)
log.Debugf("Found stage %s in %s, target directory : %s", stage, itype, stagedir)
if err = os.MkdirAll(stagedir, os.ModePerm); err != nil {
return fmt.Errorf("error while creating stage directory %s: %w", stagedir, err)
}
// find items
ifiles, err := os.ReadDir(itemDirectory + "/" + stage + "/")
if err != nil {
return fmt.Errorf("failed enumerating files of %s: %w", itemDirectory+"/"+stage, err)
}
// finally copy item
for _, tfile := range ifiles {
log.Infof("Going to restore local/tainted [%s]", tfile.Name())
sourceFile := fmt.Sprintf("%s/%s/%s", itemDirectory, stage, tfile.Name())
destinationFile := fmt.Sprintf("%s%s", stagedir, tfile.Name())
if err = CopyFile(sourceFile, destinationFile); err != nil {
return fmt.Errorf("failed copy %s %s to %s: %w", itype, sourceFile, destinationFile, err)
}
log.Infof("restored %s to %s", sourceFile, destinationFile)
}
} else {
log.Infof("Going to restore local/tainted [%s]", file.Name())
sourceFile := fmt.Sprintf("%s/%s", itemDirectory, file.Name())
destinationFile := fmt.Sprintf("%s/%s/%s", cfg.ConfigPaths.ConfigDir, itype, file.Name())
if err = CopyFile(sourceFile, destinationFile); err != nil {
return fmt.Errorf("failed copy %s %s to %s: %w", itype, sourceFile, destinationFile, err)
}
log.Infof("restored %s to %s", sourceFile, destinationFile)
}
}
}
return nil
} }
/* Restore crowdsec configurations to directory <dirPath> : /*
Restore crowdsec configurations to directory <dirPath>:
- Main config (config.yaml) - Main config (config.yaml)
- Profiles config (profiles.yaml) - Profiles config (profiles.yaml)
@ -30,91 +124,66 @@ type OldAPICfg struct {
- Backup of API credentials (local API and online API) - Backup of API credentials (local API and online API)
- List of scenarios, parsers, postoverflows and collections that are up-to-date - List of scenarios, parsers, postoverflows and collections that are up-to-date
- Tainted/local/out-of-date scenarios, parsers, postoverflows and collections - Tainted/local/out-of-date scenarios, parsers, postoverflows and collections
- Acquisition files (acquis.yaml, acquis.d/*.yaml)
*/ */
func restoreConfigFromDirectory(dirPath string, oldBackup bool) error { func (cli *cliConfig) restore(dirPath string) error {
var err error var err error
if !oldBackup { cfg := cli.cfg()
backupMain := fmt.Sprintf("%s/config.yaml", dirPath)
if _, err = os.Stat(backupMain); err == nil { backupMain := fmt.Sprintf("%s/config.yaml", dirPath)
if csConfig.ConfigPaths != nil && csConfig.ConfigPaths.ConfigDir != "" { if _, err = os.Stat(backupMain); err == nil {
if err = types.CopyFile(backupMain, fmt.Sprintf("%s/config.yaml", csConfig.ConfigPaths.ConfigDir)); err != nil { if cfg.ConfigPaths != nil && cfg.ConfigPaths.ConfigDir != "" {
return fmt.Errorf("failed copy %s to %s : %s", backupMain, csConfig.ConfigPaths.ConfigDir, err) if err = CopyFile(backupMain, fmt.Sprintf("%s/config.yaml", cfg.ConfigPaths.ConfigDir)); err != nil {
} return fmt.Errorf("failed copy %s to %s: %w", backupMain, cfg.ConfigPaths.ConfigDir, err)
} }
} }
}
// Now we have config.yaml, we should regenerate config struct to have rights paths etc // Now we have config.yaml, we should regenerate config struct to have rights paths etc
ConfigFilePath = fmt.Sprintf("%s/config.yaml", csConfig.ConfigPaths.ConfigDir) ConfigFilePath = fmt.Sprintf("%s/config.yaml", cfg.ConfigPaths.ConfigDir)
initConfig() log.Debug("Reloading configuration")
backupCAPICreds := fmt.Sprintf("%s/online_api_credentials.yaml", dirPath) csConfig, _, err = loadConfigFor("config")
if _, err = os.Stat(backupCAPICreds); err == nil { if err != nil {
if err = types.CopyFile(backupCAPICreds, csConfig.API.Server.OnlineClient.CredentialsFilePath); err != nil { return fmt.Errorf("failed to reload configuration: %w", err)
return fmt.Errorf("failed copy %s to %s : %s", backupCAPICreds, csConfig.API.Server.OnlineClient.CredentialsFilePath, err) }
}
cfg = cli.cfg()
backupCAPICreds := fmt.Sprintf("%s/online_api_credentials.yaml", dirPath)
if _, err = os.Stat(backupCAPICreds); err == nil {
if err = CopyFile(backupCAPICreds, cfg.API.Server.OnlineClient.CredentialsFilePath); err != nil {
return fmt.Errorf("failed copy %s to %s: %w", backupCAPICreds, cfg.API.Server.OnlineClient.CredentialsFilePath, err)
} }
}
backupLAPICreds := fmt.Sprintf("%s/local_api_credentials.yaml", dirPath) backupLAPICreds := fmt.Sprintf("%s/local_api_credentials.yaml", dirPath)
if _, err = os.Stat(backupLAPICreds); err == nil { if _, err = os.Stat(backupLAPICreds); err == nil {
if err = types.CopyFile(backupLAPICreds, csConfig.API.Client.CredentialsFilePath); err != nil { if err = CopyFile(backupLAPICreds, cfg.API.Client.CredentialsFilePath); err != nil {
return fmt.Errorf("failed copy %s to %s : %s", backupLAPICreds, csConfig.API.Client.CredentialsFilePath, err) return fmt.Errorf("failed copy %s to %s: %w", backupLAPICreds, cfg.API.Client.CredentialsFilePath, err)
}
} }
}
backupProfiles := fmt.Sprintf("%s/profiles.yaml", dirPath) backupProfiles := fmt.Sprintf("%s/profiles.yaml", dirPath)
if _, err = os.Stat(backupProfiles); err == nil { if _, err = os.Stat(backupProfiles); err == nil {
if err = types.CopyFile(backupProfiles, csConfig.API.Server.ProfilesPath); err != nil { if err = CopyFile(backupProfiles, cfg.API.Server.ProfilesPath); err != nil {
return fmt.Errorf("failed copy %s to %s : %s", backupProfiles, csConfig.API.Server.ProfilesPath, err) return fmt.Errorf("failed copy %s to %s: %w", backupProfiles, cfg.API.Server.ProfilesPath, err)
}
}
} else {
var oldAPICfg OldAPICfg
backupOldAPICfg := fmt.Sprintf("%s/api_creds.json", dirPath)
jsonFile, err := os.Open(backupOldAPICfg)
if err != nil {
log.Warningf("failed to open %s : %s", backupOldAPICfg, err)
} else {
byteValue, _ := io.ReadAll(jsonFile)
err = json.Unmarshal(byteValue, &oldAPICfg)
if err != nil {
return fmt.Errorf("failed to load json file %s : %s", backupOldAPICfg, err)
}
apiCfg := csconfig.ApiCredentialsCfg{
Login: oldAPICfg.MachineID,
Password: oldAPICfg.Password,
URL: CAPIBaseURL,
}
apiConfigDump, err := yaml.Marshal(apiCfg)
if err != nil {
return fmt.Errorf("unable to dump api credentials: %s", err)
}
apiConfigDumpFile := fmt.Sprintf("%s/online_api_credentials.yaml", csConfig.ConfigPaths.ConfigDir)
if csConfig.API.Server.OnlineClient != nil && csConfig.API.Server.OnlineClient.CredentialsFilePath != "" {
apiConfigDumpFile = csConfig.API.Server.OnlineClient.CredentialsFilePath
}
err = os.WriteFile(apiConfigDumpFile, apiConfigDump, 0o644)
if err != nil {
return fmt.Errorf("write api credentials in '%s' failed: %s", apiConfigDumpFile, err)
}
log.Infof("Saved API credentials to %s", apiConfigDumpFile)
} }
} }
backupSimulation := fmt.Sprintf("%s/simulation.yaml", dirPath) backupSimulation := fmt.Sprintf("%s/simulation.yaml", dirPath)
if _, err = os.Stat(backupSimulation); err == nil { if _, err = os.Stat(backupSimulation); err == nil {
if err = types.CopyFile(backupSimulation, csConfig.ConfigPaths.SimulationFilePath); err != nil { if err = CopyFile(backupSimulation, cfg.ConfigPaths.SimulationFilePath); err != nil {
return fmt.Errorf("failed copy %s to %s : %s", backupSimulation, csConfig.ConfigPaths.SimulationFilePath, err) return fmt.Errorf("failed copy %s to %s: %w", backupSimulation, cfg.ConfigPaths.SimulationFilePath, err)
} }
} }
/*if there is a acquisition dir, restore its content*/ /*if there is a acquisition dir, restore its content*/
if csConfig.Crowdsec.AcquisitionDirPath != "" { if cfg.Crowdsec.AcquisitionDirPath != "" {
if err = os.Mkdir(csConfig.Crowdsec.AcquisitionDirPath, 0o700); err != nil { if err = os.MkdirAll(cfg.Crowdsec.AcquisitionDirPath, 0o700); err != nil {
return fmt.Errorf("error while creating %s : %s", csConfig.Crowdsec.AcquisitionDirPath, err) return fmt.Errorf("error while creating %s: %w", cfg.Crowdsec.AcquisitionDirPath, err)
} }
} }
@ -123,86 +192,60 @@ func restoreConfigFromDirectory(dirPath string, oldBackup bool) error {
if _, err = os.Stat(backupAcquisition); err == nil { if _, err = os.Stat(backupAcquisition); err == nil {
log.Debugf("restoring backup'ed %s", backupAcquisition) log.Debugf("restoring backup'ed %s", backupAcquisition)
if err = types.CopyFile(backupAcquisition, csConfig.Crowdsec.AcquisitionFilePath); err != nil { if err = CopyFile(backupAcquisition, cfg.Crowdsec.AcquisitionFilePath); err != nil {
return fmt.Errorf("failed copy %s to %s : %s", backupAcquisition, csConfig.Crowdsec.AcquisitionFilePath, err) return fmt.Errorf("failed copy %s to %s: %w", backupAcquisition, cfg.Crowdsec.AcquisitionFilePath, err)
} }
} }
// if there is files in the acquis backup dir, restore them // if there are files in the acquis backup dir, restore them
acquisBackupDir := filepath.Join(dirPath, "acquis", "*.yaml") acquisBackupDir := filepath.Join(dirPath, "acquis", "*.yaml")
if acquisFiles, err := filepath.Glob(acquisBackupDir); err == nil { if acquisFiles, err := filepath.Glob(acquisBackupDir); err == nil {
for _, acquisFile := range acquisFiles { for _, acquisFile := range acquisFiles {
targetFname, err := filepath.Abs(csConfig.Crowdsec.AcquisitionDirPath + "/" + filepath.Base(acquisFile)) targetFname, err := filepath.Abs(cfg.Crowdsec.AcquisitionDirPath + "/" + filepath.Base(acquisFile))
if err != nil { if err != nil {
return errors.Wrapf(err, "while saving %s to %s", acquisFile, targetFname) return fmt.Errorf("while saving %s to %s: %w", acquisFile, targetFname, err)
} }
log.Debugf("restoring %s to %s", acquisFile, targetFname) log.Debugf("restoring %s to %s", acquisFile, targetFname)
if err = types.CopyFile(acquisFile, targetFname); err != nil { if err = CopyFile(acquisFile, targetFname); err != nil {
return fmt.Errorf("failed copy %s to %s : %s", acquisFile, targetFname, err) return fmt.Errorf("failed copy %s to %s: %w", acquisFile, targetFname, err)
} }
} }
} }
if csConfig.Crowdsec != nil && len(csConfig.Crowdsec.AcquisitionFiles) > 0 { if cfg.Crowdsec != nil && len(cfg.Crowdsec.AcquisitionFiles) > 0 {
for _, acquisFile := range csConfig.Crowdsec.AcquisitionFiles { for _, acquisFile := range cfg.Crowdsec.AcquisitionFiles {
log.Infof("backup filepath from dir -> %s", acquisFile) log.Infof("backup filepath from dir -> %s", acquisFile)
// if it was the default one, it has already been backed up // if it was the default one, it has already been backed up
if csConfig.Crowdsec.AcquisitionFilePath == acquisFile { if cfg.Crowdsec.AcquisitionFilePath == acquisFile {
log.Infof("skip this one") log.Infof("skip this one")
continue continue
} }
targetFname, err := filepath.Abs(filepath.Join(acquisBackupDir, filepath.Base(acquisFile))) targetFname, err := filepath.Abs(filepath.Join(acquisBackupDir, filepath.Base(acquisFile)))
if err != nil { if err != nil {
return errors.Wrapf(err, "while saving %s to %s", acquisFile, acquisBackupDir) return fmt.Errorf("while saving %s to %s: %w", acquisFile, acquisBackupDir, err)
} }
if err = types.CopyFile(acquisFile, targetFname); err != nil { if err = CopyFile(acquisFile, targetFname); err != nil {
return fmt.Errorf("failed copy %s to %s : %s", acquisFile, targetFname, err) return fmt.Errorf("failed copy %s to %s: %w", acquisFile, targetFname, err)
} }
log.Infof("Saved acquis %s to %s", acquisFile, targetFname) log.Infof("Saved acquis %s to %s", acquisFile, targetFname)
} }
} }
if err = RestoreHub(dirPath); err != nil { if err = cli.restoreHub(dirPath); err != nil {
return fmt.Errorf("failed to restore hub config : %s", err) return fmt.Errorf("failed to restore hub config: %w", err)
} }
return nil return nil
} }
func (cli *cliConfig) newRestoreCmd() *cobra.Command {
func runConfigRestore(cmd *cobra.Command, args []string) error { cmd := &cobra.Command{
flags := cmd.Flags()
oldBackup, err := flags.GetBool("old-backup")
if err != nil {
return err
}
if err := csConfig.LoadHub(); err != nil {
return err
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
log.Info("Run 'sudo cscli hub update' to get the hub index")
return fmt.Errorf("failed to get Hub index: %w", err)
}
if err := restoreConfigFromDirectory(args[0], oldBackup); err != nil {
return fmt.Errorf("failed to restore config from %s: %w", args[0], err)
}
return nil
}
func NewConfigRestoreCmd() *cobra.Command {
cmdConfigRestore := &cobra.Command{
Use: `restore "directory"`, Use: `restore "directory"`,
Short: `Restore config in backup "directory"`, Short: `Restore config in backup "directory"`,
Long: `Restore the crowdsec configuration from specified backup "directory" including: Long: `Restore the crowdsec configuration from specified backup "directory" including:
@ -215,11 +258,16 @@ func NewConfigRestoreCmd() *cobra.Command {
- Backup of API credentials (local API and online API)`, - Backup of API credentials (local API and online API)`,
Args: cobra.ExactArgs(1), Args: cobra.ExactArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: runConfigRestore, RunE: func(_ *cobra.Command, args []string) error {
dirPath := args[0]
if err := cli.restore(dirPath); err != nil {
return fmt.Errorf("failed to restore config from %s: %w", dirPath, err)
}
return nil
},
} }
flags := cmdConfigRestore.Flags() return cmd
flags.BoolP("old-backup", "", false, "To use when you are upgrading crowdsec v0.X to v1.X and you need to restore backup from v0.X")
return cmdConfigRestore
} }


@@ -7,15 +7,18 @@ import (
 	"text/template"
 
 	"github.com/antonmedv/expr"
+	"github.com/sanity-io/litter"
 	log "github.com/sirupsen/logrus"
 	"github.com/spf13/cobra"
-	"gopkg.in/yaml.v2"
+	"gopkg.in/yaml.v3"
 
 	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
 	"github.com/crowdsecurity/crowdsec/pkg/exprhelpers"
 )
 
-func showConfigKey(key string) error {
+func (cli *cliConfig) showKey(key string) error {
+	cfg := cli.cfg()
+
 	type Env struct {
 		Config *csconfig.Config
 	}
@ -23,25 +26,26 @@ func showConfigKey(key string) error {
opts := []expr.Option{} opts := []expr.Option{}
opts = append(opts, exprhelpers.GetExprOptions(map[string]interface{}{})...) opts = append(opts, exprhelpers.GetExprOptions(map[string]interface{}{})...)
opts = append(opts, expr.Env(Env{})) opts = append(opts, expr.Env(Env{}))
program, err := expr.Compile(key, opts...) program, err := expr.Compile(key, opts...)
if err != nil { if err != nil {
return err return err
} }
output, err := expr.Run(program, Env{Config: csConfig}) output, err := expr.Run(program, Env{Config: cfg})
if err != nil { if err != nil {
return err return err
} }
switch csConfig.Cscli.Output { switch cfg.Cscli.Output {
case "human", "raw": case "human", "raw":
// Don't use litter for strings, it adds quotes
// that would break compatibility with previous versions
switch output.(type) { switch output.(type) {
case string: case string:
fmt.Printf("%s\n", output) fmt.Println(output)
case int:
fmt.Printf("%d\n", output)
default: default:
fmt.Printf("%v\n", output) litter.Dump(output)
} }
case "json": case "json":
data, err := json.MarshalIndent(output, "", " ") data, err := json.MarshalIndent(output, "", " ")
@ -49,15 +53,16 @@ func showConfigKey(key string) error {
return fmt.Errorf("failed to marshal configuration: %w", err) return fmt.Errorf("failed to marshal configuration: %w", err)
} }
fmt.Printf("%s\n", string(data)) fmt.Println(string(data))
} }
return nil return nil
} }
var configShowTemplate = `Global: func (cli *cliConfig) template() string {
return `Global:
{{- if .ConfigPaths }} {{- if .ConfigPaths }}
- Configuration Folder : {{.ConfigPaths.ConfigDir}}
- Configuration Folder : {{.ConfigPaths.ConfigDir}} - Configuration Folder : {{.ConfigPaths.ConfigDir}}
- Data Folder : {{.ConfigPaths.DataDir}} - Data Folder : {{.ConfigPaths.DataDir}}
- Hub Folder : {{.ConfigPaths.HubDir}} - Hub Folder : {{.ConfigPaths.HubDir}}
@ -71,7 +76,7 @@ var configShowTemplate = `Global:
{{- end }} {{- end }}
{{- if .Crowdsec }} {{- if .Crowdsec }}
Crowdsec: Crowdsec{{if and .Crowdsec.Enable (not (ValueBool .Crowdsec.Enable))}} (disabled){{end}}:
- Acquisition File : {{.Crowdsec.AcquisitionFilePath}} - Acquisition File : {{.Crowdsec.AcquisitionFilePath}}
- Parsers routines : {{.Crowdsec.ParserRoutinesCount}} - Parsers routines : {{.Crowdsec.ParserRoutinesCount}}
{{- if .Crowdsec.AcquisitionDirPath }} {{- if .Crowdsec.AcquisitionDirPath }}
@ -83,7 +88,6 @@ Crowdsec:
cscli: cscli:
- Output : {{.Cscli.Output}} - Output : {{.Cscli.Output}}
- Hub Branch : {{.Cscli.HubBranch}} - Hub Branch : {{.Cscli.HubBranch}}
- Hub Folder : {{.Cscli.HubDir}}
{{- end }} {{- end }}
{{- if .API }} {{- if .API }}
@ -97,8 +101,9 @@ API Client:
{{- end }} {{- end }}
{{- if .API.Server }} {{- if .API.Server }}
Local API Server: Local API Server{{if and .API.Server.Enable (not (ValueBool .API.Server.Enable))}} (disabled){{end}}:
- Listen URL : {{.API.Server.ListenURI}} - Listen URL : {{.API.Server.ListenURI}}
- Listen Socket : {{.API.Server.ListenSocket}}
- Profile File : {{.API.Server.ProfilesPath}} - Profile File : {{.API.Server.ProfilesPath}}
{{- if .API.Server.TLS }} {{- if .API.Server.TLS }}
@ -164,6 +169,12 @@ Central API:
- User : {{.DbConfig.User}} - User : {{.DbConfig.User}}
- DB Name : {{.DbConfig.DbName}} - DB Name : {{.DbConfig.DbName}}
{{- end }} {{- end }}
{{- if .DbConfig.MaxOpenConns }}
- Max Open Conns : {{.DbConfig.MaxOpenConns}}
{{- end }}
{{- if ne .DbConfig.DecisionBulkSize 0 }}
- Decision Bulk Size : {{.DbConfig.DecisionBulkSize}}
{{- end }}
{{- if .DbConfig.Flush }} {{- if .DbConfig.Flush }}
{{- if .DbConfig.Flush.MaxAge }} {{- if .DbConfig.Flush.MaxAge }}
- Flush age : {{.DbConfig.Flush.MaxAge}} - Flush age : {{.DbConfig.Flush.MaxAge}}
@ -174,64 +185,74 @@ Central API:
{{- end }} {{- end }}
{{- end }} {{- end }}
` `
}
func runConfigShow(cmd *cobra.Command, args []string) error { func (cli *cliConfig) show() error {
flags := cmd.Flags() cfg := cli.cfg()
if err := csConfig.LoadAPIClient(); err != nil { switch cfg.Cscli.Output {
log.Errorf("failed to load API client configuration: %s", err)
// don't return, we can still show the configuration
}
key, err := flags.GetString("key")
if err != nil {
return err
}
if key != "" {
return showConfigKey(key)
}
switch csConfig.Cscli.Output {
case "human": case "human":
tmp, err := template.New("config").Parse(configShowTemplate) // The tests on .Enable look funny because the option has a true default which has
// not been set yet (we don't really load the LAPI) and go templates don't dereference
// pointers in boolean tests. Prefix notation is the cherry on top.
funcs := template.FuncMap{
// can't use generics here
"ValueBool": func(b *bool) bool { return b != nil && *b },
}
tmp, err := template.New("config").Funcs(funcs).Parse(cli.template())
if err != nil { if err != nil {
return err return err
} }
err = tmp.Execute(os.Stdout, csConfig)
err = tmp.Execute(os.Stdout, cfg)
if err != nil { if err != nil {
return err return err
} }
case "json": case "json":
data, err := json.MarshalIndent(csConfig, "", " ") data, err := json.MarshalIndent(cfg, "", " ")
if err != nil { if err != nil {
return fmt.Errorf("failed to marshal configuration: %w", err) return fmt.Errorf("failed to marshal configuration: %w", err)
} }
fmt.Printf("%s\n", string(data)) fmt.Println(string(data))
case "raw": case "raw":
data, err := yaml.Marshal(csConfig) data, err := yaml.Marshal(cfg)
if err != nil { if err != nil {
return fmt.Errorf("failed to marshal configuration: %w", err) return fmt.Errorf("failed to marshal configuration: %w", err)
} }
fmt.Printf("%s\n", string(data)) fmt.Println(string(data))
} }
return nil return nil
} }
func NewConfigShowCmd() *cobra.Command { func (cli *cliConfig) newShowCmd() *cobra.Command {
cmdConfigShow := &cobra.Command{ var key string
cmd := &cobra.Command{
Use: "show", Use: "show",
Short: "Displays current config", Short: "Displays current config",
Long: `Displays the current cli configuration.`, Long: `Displays the current cli configuration.`,
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: runConfigShow, RunE: func(_ *cobra.Command, _ []string) error {
if err := cli.cfg().LoadAPIClient(); err != nil {
log.Errorf("failed to load API client configuration: %s", err)
// don't return, we can still show the configuration
}
if key != "" {
return cli.showKey(key)
}
return cli.show()
},
} }
flags := cmdConfigShow.Flags() flags := cmd.Flags()
flags.StringP("key", "", "", "Display only this value (Config.API.Server.ListenURI)") flags.StringVarP(&key, "key", "", "", "Display only this value (Config.API.Server.ListenURI)")
return cmdConfigShow return cmd
} }
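
The ValueBool helper and the accompanying comment in the show template above deal with a text/template gotcha: in a boolean context the package only checks whether a pointer is nil, it never dereferences it, so a *bool pointing at false still renders as true. The sketch below is an editorial illustration rather than part of the diff; it reproduces the problem with a made-up Server struct and shows the same FuncMap-based workaround.

package main

import (
	"os"
	"text/template"
)

// Server is a made-up struct for this sketch; Enable mimics an option held as
// a pointer whose default may not have been resolved yet.
type Server struct {
	Enable *bool
}

func main() {
	disabled := false
	cfg := Server{Enable: &disabled}

	// {{if .Enable}} is true for any non-nil pointer, even one pointing at false,
	// because text/template does not dereference pointers in boolean tests.
	naive := template.Must(template.New("naive").Parse(
		"naive:   {{if .Enable}}enabled{{else}}disabled{{end}}\n"))
	_ = naive.Execute(os.Stdout, cfg) // prints "enabled" despite *Enable == false

	// A FuncMap helper dereferences the pointer explicitly, which is the same
	// approach as the ValueBool function registered in the diff above.
	funcs := template.FuncMap{
		"ValueBool": func(b *bool) bool { return b != nil && *b },
	}
	checked := template.Must(template.New("checked").Funcs(funcs).Parse(
		"checked: {{if ValueBool .Enable}}enabled{{else}}disabled{{end}}\n"))
	_ = checked.Execute(os.Stdout, cfg) // prints "disabled"
}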


@@ -6,19 +6,21 @@ import (
 	"github.com/spf13/cobra"
 )
 
-func runConfigShowYAML(cmd *cobra.Command, args []string) error {
+func (cli *cliConfig) showYAML() error {
 	fmt.Println(mergedConfig)
 	return nil
 }
 
-func NewConfigShowYAMLCmd() *cobra.Command {
-	cmdConfigShow := &cobra.Command{
+func (cli *cliConfig) newShowYAMLCmd() *cobra.Command {
+	cmd := &cobra.Command{
 		Use:               "show-yaml",
 		Short:             "Displays merged config.yaml + config.yaml.local",
 		Args:              cobra.ExactArgs(0),
 		DisableAutoGenTag: true,
-		RunE:              runConfigShowYAML,
+		RunE: func(_ *cobra.Command, _ []string) error {
+			return cli.showYAML()
+		},
 	}
 
-	return cmdConfigShow
+	return cmd
 }


@ -6,9 +6,10 @@ import (
"encoding/json" "encoding/json"
"errors" "errors"
"fmt" "fmt"
"io/fs"
"net/url" "net/url"
"os" "os"
"strconv"
"strings"
"github.com/fatih/color" "github.com/fatih/color"
"github.com/go-openapi/strfmt" "github.com/go-openapi/strfmt"
@ -16,51 +17,63 @@ import (
"github.com/spf13/cobra" "github.com/spf13/cobra"
"gopkg.in/yaml.v3" "gopkg.in/yaml.v3"
"github.com/crowdsecurity/go-cs-lib/pkg/ptr" "github.com/crowdsecurity/go-cs-lib/ptr"
"github.com/crowdsecurity/go-cs-lib/pkg/version" "github.com/crowdsecurity/go-cs-lib/version"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/apiclient" "github.com/crowdsecurity/crowdsec/pkg/apiclient"
"github.com/crowdsecurity/crowdsec/pkg/csconfig" "github.com/crowdsecurity/crowdsec/pkg/csconfig"
"github.com/crowdsecurity/crowdsec/pkg/cwhub" "github.com/crowdsecurity/crowdsec/pkg/cwhub"
"github.com/crowdsecurity/crowdsec/pkg/fflag"
"github.com/crowdsecurity/crowdsec/pkg/types" "github.com/crowdsecurity/crowdsec/pkg/types"
) )
func NewConsoleCmd() *cobra.Command { type cliConsole struct {
var cmdConsole = &cobra.Command{ cfg configGetter
}
func NewCLIConsole(cfg configGetter) *cliConsole {
return &cliConsole{
cfg: cfg,
}
}
func (cli *cliConsole) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "console [action]", Use: "console [action]",
Short: "Manage interaction with Crowdsec console (https://app.crowdsec.net)", Short: "Manage interaction with Crowdsec console (https://app.crowdsec.net)",
Args: cobra.MinimumNArgs(1), Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error { PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
if err := csConfig.LoadAPIServer(); err != nil || csConfig.DisableAPI { cfg := cli.cfg()
var fdErr *fs.PathError if err := require.LAPI(cfg); err != nil {
if errors.As(err, &fdErr) { return err
log.Fatalf("Unable to load Local API : %s", fdErr)
}
if err != nil {
log.Fatalf("Unable to load required Local API Configuration : %s", err)
}
log.Fatal("Local API is disabled, please run this command on the local API machine")
} }
if csConfig.DisableAPI { if err := require.CAPI(cfg); err != nil {
log.Fatal("Local API is disabled, please run this command on the local API machine") return err
} }
if csConfig.API.Server.OnlineClient == nil { if err := require.CAPIRegistered(cfg); err != nil {
log.Fatalf("No configuration for Central API (CAPI) in '%s'", *csConfig.FilePath) return err
}
if csConfig.API.Server.OnlineClient.Credentials == nil {
log.Fatal("You must configure Central API (CAPI) with `cscli capi register` before accessing console features.")
} }
return nil return nil
}, },
} }
cmd.AddCommand(cli.newEnrollCmd())
cmd.AddCommand(cli.newEnableCmd())
cmd.AddCommand(cli.newDisableCmd())
cmd.AddCommand(cli.newStatusCmd())
return cmd
}
func (cli *cliConsole) newEnrollCmd() *cobra.Command {
name := "" name := ""
overwrite := false overwrite := false
tags := []string{} tags := []string{}
opts := []string{}
cmdEnroll := &cobra.Command{ cmd := &cobra.Command{
Use: "enroll [enroll-key]", Use: "enroll [enroll-key]",
Short: "Enroll this instance to https://app.crowdsec.net [requires local API]", Short: "Enroll this instance to https://app.crowdsec.net [requires local API]",
Long: ` Long: `
@ -68,71 +81,115 @@ Enroll this instance to https://app.crowdsec.net
You can get your enrollment key by creating an account on https://app.crowdsec.net. You can get your enrollment key by creating an account on https://app.crowdsec.net.
After running this command your will need to validate the enrollment in the webapp.`, After running this command your will need to validate the enrollment in the webapp.`,
Example: `cscli console enroll YOUR-ENROLL-KEY Example: fmt.Sprintf(`cscli console enroll YOUR-ENROLL-KEY
cscli console enroll --name [instance_name] YOUR-ENROLL-KEY cscli console enroll --name [instance_name] YOUR-ENROLL-KEY
cscli console enroll --name [instance_name] --tags [tag_1] --tags [tag_2] YOUR-ENROLL-KEY cscli console enroll --name [instance_name] --tags [tag_1] --tags [tag_2] YOUR-ENROLL-KEY
`, cscli console enroll --enable context,manual YOUR-ENROLL-KEY
valid options are : %s,all (see 'cscli console status' for details)`, strings.Join(csconfig.CONSOLE_CONFIGS, ",")),
Args: cobra.ExactArgs(1), Args: cobra.ExactArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, args []string) error {
password := strfmt.Password(csConfig.API.Server.OnlineClient.Credentials.Password) cfg := cli.cfg()
apiURL, err := url.Parse(csConfig.API.Server.OnlineClient.Credentials.URL) password := strfmt.Password(cfg.API.Server.OnlineClient.Credentials.Password)
apiURL, err := url.Parse(cfg.API.Server.OnlineClient.Credentials.URL)
if err != nil { if err != nil {
log.Fatalf("Could not parse CAPI URL : %s", err) return fmt.Errorf("could not parse CAPI URL: %w", err)
} }
if err := csConfig.LoadHub(); err != nil { hub, err := require.Hub(cfg, nil, nil)
log.Fatal(err)
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
log.Fatalf("Failed to load hub index : %s", err)
log.Info("Run 'sudo cscli hub update' to get the hub index")
}
scenarios, err := cwhub.GetInstalledScenariosAsString()
if err != nil { if err != nil {
log.Fatalf("failed to get scenarios : %s", err) return err
}
scenarios, err := hub.GetInstalledNamesByType(cwhub.SCENARIOS)
if err != nil {
return fmt.Errorf("failed to get installed scenarios: %w", err)
} }
if len(scenarios) == 0 { if len(scenarios) == 0 {
scenarios = make([]string, 0) scenarios = make([]string, 0)
} }
enableOpts := []string{csconfig.SEND_MANUAL_SCENARIOS, csconfig.SEND_TAINTED_SCENARIOS}
if len(opts) != 0 {
for _, opt := range opts {
valid := false
if opt == "all" {
enableOpts = csconfig.CONSOLE_CONFIGS
break
}
for _, availableOpt := range csconfig.CONSOLE_CONFIGS {
if opt == availableOpt {
valid = true
enable := true
for _, enabledOpt := range enableOpts {
if opt == enabledOpt {
enable = false
continue
}
}
if enable {
enableOpts = append(enableOpts, opt)
}
break
}
}
if !valid {
return fmt.Errorf("option %s doesn't exist", opt)
}
}
}
c, _ := apiclient.NewClient(&apiclient.Config{ c, _ := apiclient.NewClient(&apiclient.Config{
MachineID: csConfig.API.Server.OnlineClient.Credentials.Login, MachineID: cli.cfg().API.Server.OnlineClient.Credentials.Login,
Password: password, Password: password,
Scenarios: scenarios, Scenarios: scenarios,
UserAgent: fmt.Sprintf("crowdsec/%s", version.String()), UserAgent: fmt.Sprintf("crowdsec/%s", version.String()),
URL: apiURL, URL: apiURL,
VersionPrefix: "v3", VersionPrefix: "v3",
}) })
resp, err := c.Auth.EnrollWatcher(context.Background(), args[0], name, tags, overwrite) resp, err := c.Auth.EnrollWatcher(context.Background(), args[0], name, tags, overwrite)
if err != nil { if err != nil {
log.Fatalf("Could not enroll instance: %s", err) return fmt.Errorf("could not enroll instance: %w", err)
} }
if resp.Response.StatusCode == 200 && !overwrite { if resp.Response.StatusCode == 200 && !overwrite {
log.Warning("Instance already enrolled. You can use '--overwrite' to force enroll") log.Warning("Instance already enrolled. You can use '--overwrite' to force enroll")
return return nil
} }
SetConsoleOpts(csconfig.CONSOLE_CONFIGS, true) if err := cli.setConsoleOpts(enableOpts, true); err != nil {
if err := csConfig.API.Server.DumpConsoleConfig(); err != nil { return err
log.Fatalf("failed writing console config : %s", err)
} }
log.Infof("Enabled tainted&manual alerts sharing, see 'cscli console status'.")
log.Infof("Watcher successfully enrolled. Visit https://app.crowdsec.net to accept it.") for _, opt := range enableOpts {
log.Infof("Please restart crowdsec after accepting the enrollment.") log.Infof("Enabled %s : %s", opt, csconfig.CONSOLE_CONFIGS_HELP[opt])
}
log.Info("Watcher successfully enrolled. Visit https://app.crowdsec.net to accept it.")
log.Info("Please restart crowdsec after accepting the enrollment.")
return nil
}, },
} }
cmdEnroll.Flags().StringVarP(&name, "name", "n", "", "Name to display in the console")
cmdEnroll.Flags().BoolVarP(&overwrite, "overwrite", "", false, "Force enroll the instance")
cmdEnroll.Flags().StringSliceVarP(&tags, "tags", "t", tags, "Tags to display in the console")
cmdConsole.AddCommand(cmdEnroll)
var enableAll, disableAll bool flags := cmd.Flags()
flags.StringVarP(&name, "name", "n", "", "Name to display in the console")
flags.BoolVarP(&overwrite, "overwrite", "", false, "Force enroll the instance")
flags.StringSliceVarP(&tags, "tags", "t", tags, "Tags to display in the console")
flags.StringSliceVarP(&opts, "enable", "e", opts, "Enable console options")
cmdEnable := &cobra.Command{ return cmd
}
func (cli *cliConsole) newEnableCmd() *cobra.Command {
var enableAll bool
cmd := &cobra.Command{
Use: "enable [option]", Use: "enable [option]",
Short: "Enable a console option", Short: "Enable a console option",
Example: "sudo cscli console enable tainted", Example: "sudo cscli console enable tainted",
@ -140,195 +197,245 @@ After running this command your will need to validate the enrollment in the weba
Enable given information push to the central API. Allows to empower the console`, Enable given information push to the central API. Allows to empower the console`,
ValidArgs: csconfig.CONSOLE_CONFIGS, ValidArgs: csconfig.CONSOLE_CONFIGS,
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, args []string) error {
if enableAll { if enableAll {
SetConsoleOpts(csconfig.CONSOLE_CONFIGS, true) if err := cli.setConsoleOpts(csconfig.CONSOLE_CONFIGS, true); err != nil {
return err
}
log.Infof("All features have been enabled successfully") log.Infof("All features have been enabled successfully")
} else { } else {
if len(args) == 0 { if len(args) == 0 {
log.Fatalf("You must specify at least one feature to enable") return errors.New("you must specify at least one feature to enable")
}
if err := cli.setConsoleOpts(args, true); err != nil {
return err
} }
SetConsoleOpts(args, true)
log.Infof("%v have been enabled", args) log.Infof("%v have been enabled", args)
} }
if err := csConfig.API.Server.DumpConsoleConfig(); err != nil {
log.Fatalf("failed writing console config : %s", err)
}
log.Infof(ReloadMessage()) log.Infof(ReloadMessage())
return nil
}, },
} }
cmdEnable.Flags().BoolVarP(&enableAll, "all", "a", false, "Enable all console options") cmd.Flags().BoolVarP(&enableAll, "all", "a", false, "Enable all console options")
cmdConsole.AddCommand(cmdEnable)
cmdDisable := &cobra.Command{ return cmd
}
func (cli *cliConsole) newDisableCmd() *cobra.Command {
var disableAll bool
cmd := &cobra.Command{
Use: "disable [option]", Use: "disable [option]",
Short: "Disable a console option", Short: "Disable a console option",
Example: "sudo cscli console disable tainted", Example: "sudo cscli console disable tainted",
Long: ` Long: `
Disable given information push to the central API.`, Disable given information push to the central API.`,
ValidArgs: csconfig.CONSOLE_CONFIGS, ValidArgs: csconfig.CONSOLE_CONFIGS,
Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, args []string) error {
if disableAll {
SetConsoleOpts(csconfig.CONSOLE_CONFIGS, false)
} else {
SetConsoleOpts(args, false)
}
if err := csConfig.API.Server.DumpConsoleConfig(); err != nil {
log.Fatalf("failed writing console config : %s", err)
}
if disableAll { if disableAll {
if err := cli.setConsoleOpts(csconfig.CONSOLE_CONFIGS, false); err != nil {
return err
}
log.Infof("All features have been disabled") log.Infof("All features have been disabled")
} else { } else {
if err := cli.setConsoleOpts(args, false); err != nil {
return err
}
log.Infof("%v have been disabled", args) log.Infof("%v have been disabled", args)
} }
log.Infof(ReloadMessage()) log.Infof(ReloadMessage())
return nil
}, },
} }
cmdDisable.Flags().BoolVarP(&disableAll, "all", "a", false, "Disable all console options") cmd.Flags().BoolVarP(&disableAll, "all", "a", false, "Disable all console options")
cmdConsole.AddCommand(cmdDisable)
cmdConsoleStatus := &cobra.Command{ return cmd
Use: "status [option]", }
Short: "Shows status of one or all console options",
func (cli *cliConsole) newStatusCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "status",
Short: "Shows status of the console options",
Example: `sudo cscli console status`, Example: `sudo cscli console status`,
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
switch csConfig.Cscli.Output { cfg := cli.cfg()
consoleCfg := cfg.API.Server.ConsoleConfig
switch cfg.Cscli.Output {
case "human": case "human":
cmdConsoleStatusTable(color.Output, *csConfig) cmdConsoleStatusTable(color.Output, *consoleCfg)
case "json": case "json":
data, err := json.MarshalIndent(csConfig.API.Server.ConsoleConfig, "", " ") out := map[string](*bool){
if err != nil { csconfig.SEND_MANUAL_SCENARIOS: consoleCfg.ShareManualDecisions,
log.Fatalf("failed to marshal configuration: %s", err) csconfig.SEND_CUSTOM_SCENARIOS: consoleCfg.ShareCustomScenarios,
csconfig.SEND_TAINTED_SCENARIOS: consoleCfg.ShareTaintedScenarios,
csconfig.SEND_CONTEXT: consoleCfg.ShareContext,
csconfig.CONSOLE_MANAGEMENT: consoleCfg.ConsoleManagement,
} }
fmt.Printf("%s\n", string(data)) data, err := json.MarshalIndent(out, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal configuration: %w", err)
}
fmt.Println(string(data))
case "raw": case "raw":
csvwriter := csv.NewWriter(os.Stdout) csvwriter := csv.NewWriter(os.Stdout)
err := csvwriter.Write([]string{"option", "enabled"}) err := csvwriter.Write([]string{"option", "enabled"})
if err != nil { if err != nil {
log.Fatal(err) return err
} }
rows := [][]string{ rows := [][]string{
{csconfig.SEND_MANUAL_SCENARIOS, fmt.Sprintf("%t", *csConfig.API.Server.ConsoleConfig.ShareManualDecisions)}, {csconfig.SEND_MANUAL_SCENARIOS, strconv.FormatBool(*consoleCfg.ShareManualDecisions)},
{csconfig.SEND_CUSTOM_SCENARIOS, fmt.Sprintf("%t", *csConfig.API.Server.ConsoleConfig.ShareCustomScenarios)}, {csconfig.SEND_CUSTOM_SCENARIOS, strconv.FormatBool(*consoleCfg.ShareCustomScenarios)},
{csconfig.SEND_TAINTED_SCENARIOS, fmt.Sprintf("%t", *csConfig.API.Server.ConsoleConfig.ShareTaintedScenarios)}, {csconfig.SEND_TAINTED_SCENARIOS, strconv.FormatBool(*consoleCfg.ShareTaintedScenarios)},
{csconfig.SEND_CONTEXT, fmt.Sprintf("%t", *csConfig.API.Server.ConsoleConfig.ShareContext)}, {csconfig.SEND_CONTEXT, strconv.FormatBool(*consoleCfg.ShareContext)},
{csconfig.CONSOLE_MANAGEMENT, fmt.Sprintf("%t", *csConfig.API.Server.ConsoleConfig.ConsoleManagement)}, {csconfig.CONSOLE_MANAGEMENT, strconv.FormatBool(*consoleCfg.ConsoleManagement)},
} }
for _, row := range rows { for _, row := range rows {
err = csvwriter.Write(row) err = csvwriter.Write(row)
if err != nil { if err != nil {
log.Fatal(err) return err
} }
} }
csvwriter.Flush() csvwriter.Flush()
} }
return nil
}, },
} }
cmdConsole.AddCommand(cmdConsoleStatus)
return cmdConsole return cmd
} }
func SetConsoleOpts(args []string, wanted bool) { func (cli *cliConsole) dumpConfig() error {
serverCfg := cli.cfg().API.Server
out, err := yaml.Marshal(serverCfg.ConsoleConfig)
if err != nil {
return fmt.Errorf("while marshaling ConsoleConfig (for %s): %w", serverCfg.ConsoleConfigPath, err)
}
if serverCfg.ConsoleConfigPath == "" {
serverCfg.ConsoleConfigPath = csconfig.DefaultConsoleConfigFilePath
log.Debugf("Empty console_path, defaulting to %s", serverCfg.ConsoleConfigPath)
}
if err := os.WriteFile(serverCfg.ConsoleConfigPath, out, 0o600); err != nil {
return fmt.Errorf("while dumping console config to %s: %w", serverCfg.ConsoleConfigPath, err)
}
return nil
}
func (cli *cliConsole) setConsoleOpts(args []string, wanted bool) error {
cfg := cli.cfg()
consoleCfg := cfg.API.Server.ConsoleConfig
for _, arg := range args { for _, arg := range args {
switch arg { switch arg {
case csconfig.CONSOLE_MANAGEMENT: case csconfig.CONSOLE_MANAGEMENT:
if !fflag.PapiClient.IsEnabled() {
continue
}
/*for each flag check if it's already set before setting it*/ /*for each flag check if it's already set before setting it*/
if csConfig.API.Server.ConsoleConfig.ConsoleManagement != nil { if consoleCfg.ConsoleManagement != nil {
if *csConfig.API.Server.ConsoleConfig.ConsoleManagement == wanted { if *consoleCfg.ConsoleManagement == wanted {
log.Debugf("%s already set to %t", csconfig.CONSOLE_MANAGEMENT, wanted) log.Debugf("%s already set to %t", csconfig.CONSOLE_MANAGEMENT, wanted)
} else { } else {
log.Infof("%s set to %t", csconfig.CONSOLE_MANAGEMENT, wanted) log.Infof("%s set to %t", csconfig.CONSOLE_MANAGEMENT, wanted)
*csConfig.API.Server.ConsoleConfig.ConsoleManagement = wanted *consoleCfg.ConsoleManagement = wanted
} }
} else { } else {
log.Infof("%s set to %t", csconfig.CONSOLE_MANAGEMENT, wanted) log.Infof("%s set to %t", csconfig.CONSOLE_MANAGEMENT, wanted)
csConfig.API.Server.ConsoleConfig.ConsoleManagement = ptr.Of(wanted) consoleCfg.ConsoleManagement = ptr.Of(wanted)
} }
if csConfig.API.Server.OnlineClient.Credentials != nil {
if cfg.API.Server.OnlineClient.Credentials != nil {
changed := false changed := false
if wanted && csConfig.API.Server.OnlineClient.Credentials.PapiURL == "" { if wanted && cfg.API.Server.OnlineClient.Credentials.PapiURL == "" {
changed = true changed = true
csConfig.API.Server.OnlineClient.Credentials.PapiURL = types.PAPIBaseURL cfg.API.Server.OnlineClient.Credentials.PapiURL = types.PAPIBaseURL
} else if !wanted && csConfig.API.Server.OnlineClient.Credentials.PapiURL != "" { } else if !wanted && cfg.API.Server.OnlineClient.Credentials.PapiURL != "" {
changed = true changed = true
csConfig.API.Server.OnlineClient.Credentials.PapiURL = "" cfg.API.Server.OnlineClient.Credentials.PapiURL = ""
} }
if changed { if changed {
fileContent, err := yaml.Marshal(csConfig.API.Server.OnlineClient.Credentials) fileContent, err := yaml.Marshal(cfg.API.Server.OnlineClient.Credentials)
if err != nil { if err != nil {
log.Fatalf("Cannot marshal credentials: %s", err) return fmt.Errorf("cannot marshal credentials: %w", err)
} }
log.Infof("Updating credentials file: %s", csConfig.API.Server.OnlineClient.CredentialsFilePath)
err = os.WriteFile(csConfig.API.Server.OnlineClient.CredentialsFilePath, fileContent, 0600) log.Infof("Updating credentials file: %s", cfg.API.Server.OnlineClient.CredentialsFilePath)
err = os.WriteFile(cfg.API.Server.OnlineClient.CredentialsFilePath, fileContent, 0o600)
if err != nil { if err != nil {
log.Fatalf("Cannot write credentials file: %s", err) return fmt.Errorf("cannot write credentials file: %w", err)
} }
} }
} }
case csconfig.SEND_CUSTOM_SCENARIOS: case csconfig.SEND_CUSTOM_SCENARIOS:
/*for each flag check if it's already set before setting it*/ /*for each flag check if it's already set before setting it*/
if csConfig.API.Server.ConsoleConfig.ShareCustomScenarios != nil { if consoleCfg.ShareCustomScenarios != nil {
if *csConfig.API.Server.ConsoleConfig.ShareCustomScenarios == wanted { if *consoleCfg.ShareCustomScenarios == wanted {
log.Debugf("%s already set to %t", csconfig.SEND_CUSTOM_SCENARIOS, wanted) log.Debugf("%s already set to %t", csconfig.SEND_CUSTOM_SCENARIOS, wanted)
} else { } else {
log.Infof("%s set to %t", csconfig.SEND_CUSTOM_SCENARIOS, wanted) log.Infof("%s set to %t", csconfig.SEND_CUSTOM_SCENARIOS, wanted)
*csConfig.API.Server.ConsoleConfig.ShareCustomScenarios = wanted *consoleCfg.ShareCustomScenarios = wanted
} }
} else { } else {
log.Infof("%s set to %t", csconfig.SEND_CUSTOM_SCENARIOS, wanted) log.Infof("%s set to %t", csconfig.SEND_CUSTOM_SCENARIOS, wanted)
csConfig.API.Server.ConsoleConfig.ShareCustomScenarios = ptr.Of(wanted) consoleCfg.ShareCustomScenarios = ptr.Of(wanted)
} }
case csconfig.SEND_TAINTED_SCENARIOS: case csconfig.SEND_TAINTED_SCENARIOS:
/*for each flag check if it's already set before setting it*/ /*for each flag check if it's already set before setting it*/
if csConfig.API.Server.ConsoleConfig.ShareTaintedScenarios != nil { if consoleCfg.ShareTaintedScenarios != nil {
if *csConfig.API.Server.ConsoleConfig.ShareTaintedScenarios == wanted { if *consoleCfg.ShareTaintedScenarios == wanted {
log.Debugf("%s already set to %t", csconfig.SEND_TAINTED_SCENARIOS, wanted) log.Debugf("%s already set to %t", csconfig.SEND_TAINTED_SCENARIOS, wanted)
} else { } else {
log.Infof("%s set to %t", csconfig.SEND_TAINTED_SCENARIOS, wanted) log.Infof("%s set to %t", csconfig.SEND_TAINTED_SCENARIOS, wanted)
*csConfig.API.Server.ConsoleConfig.ShareTaintedScenarios = wanted *consoleCfg.ShareTaintedScenarios = wanted
} }
} else { } else {
log.Infof("%s set to %t", csconfig.SEND_TAINTED_SCENARIOS, wanted) log.Infof("%s set to %t", csconfig.SEND_TAINTED_SCENARIOS, wanted)
csConfig.API.Server.ConsoleConfig.ShareTaintedScenarios = ptr.Of(wanted) consoleCfg.ShareTaintedScenarios = ptr.Of(wanted)
} }
case csconfig.SEND_MANUAL_SCENARIOS: case csconfig.SEND_MANUAL_SCENARIOS:
/*for each flag check if it's already set before setting it*/ /*for each flag check if it's already set before setting it*/
if csConfig.API.Server.ConsoleConfig.ShareManualDecisions != nil { if consoleCfg.ShareManualDecisions != nil {
if *csConfig.API.Server.ConsoleConfig.ShareManualDecisions == wanted { if *consoleCfg.ShareManualDecisions == wanted {
log.Debugf("%s already set to %t", csconfig.SEND_MANUAL_SCENARIOS, wanted) log.Debugf("%s already set to %t", csconfig.SEND_MANUAL_SCENARIOS, wanted)
} else { } else {
log.Infof("%s set to %t", csconfig.SEND_MANUAL_SCENARIOS, wanted) log.Infof("%s set to %t", csconfig.SEND_MANUAL_SCENARIOS, wanted)
*csConfig.API.Server.ConsoleConfig.ShareManualDecisions = wanted *consoleCfg.ShareManualDecisions = wanted
} }
} else { } else {
log.Infof("%s set to %t", csconfig.SEND_MANUAL_SCENARIOS, wanted) log.Infof("%s set to %t", csconfig.SEND_MANUAL_SCENARIOS, wanted)
csConfig.API.Server.ConsoleConfig.ShareManualDecisions = ptr.Of(wanted) consoleCfg.ShareManualDecisions = ptr.Of(wanted)
} }
case csconfig.SEND_CONTEXT: case csconfig.SEND_CONTEXT:
/*for each flag check if it's already set before setting it*/ /*for each flag check if it's already set before setting it*/
if csConfig.API.Server.ConsoleConfig.ShareContext != nil { if consoleCfg.ShareContext != nil {
if *csConfig.API.Server.ConsoleConfig.ShareContext == wanted { if *consoleCfg.ShareContext == wanted {
log.Debugf("%s already set to %t", csconfig.SEND_CONTEXT, wanted) log.Debugf("%s already set to %t", csconfig.SEND_CONTEXT, wanted)
} else { } else {
log.Infof("%s set to %t", csconfig.SEND_CONTEXT, wanted) log.Infof("%s set to %t", csconfig.SEND_CONTEXT, wanted)
*csConfig.API.Server.ConsoleConfig.ShareContext = wanted *consoleCfg.ShareContext = wanted
} }
} else { } else {
log.Infof("%s set to %t", csconfig.SEND_CONTEXT, wanted) log.Infof("%s set to %t", csconfig.SEND_CONTEXT, wanted)
csConfig.API.Server.ConsoleConfig.ShareContext = ptr.Of(wanted) consoleCfg.ShareContext = ptr.Of(wanted)
} }
default: default:
log.Fatalf("unknown flag %s", arg) return fmt.Errorf("unknown flag %s", arg)
} }
} }
if err := cli.dumpConfig(); err != nil {
return fmt.Errorf("failed writing console config: %w", err)
}
return nil
} }
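
The console command above illustrates the refactor pattern running through this compare: the package-level csConfig global and log.Fatal calls give way to a small command struct holding a configGetter, with RunE handlers that return errors to cobra. A minimal, self-contained sketch of that shape, with simplified types that are not the actual crowdsec definitions:

package main

import (
    "errors"
    "fmt"

    "github.com/spf13/cobra"
)

// Config stands in for csconfig.Config; the field is illustrative only.
type Config struct {
    ConsoleEnabled bool
}

// configGetter defers configuration loading until the command actually runs.
type configGetter func() *Config

type cliExample struct {
    cfg configGetter
}

func NewCLIExample(cfg configGetter) *cliExample {
    return &cliExample{cfg: cfg}
}

func (cli *cliExample) NewCommand() *cobra.Command {
    return &cobra.Command{
        Use:               "example",
        DisableAutoGenTag: true,
        // RunE hands the error back to cobra instead of calling log.Fatal,
        // so the caller decides how to report it and tests can assert on it.
        RunE: func(_ *cobra.Command, _ []string) error {
            if !cli.cfg().ConsoleEnabled {
                return errors.New("console support is disabled")
            }
            fmt.Println("console is enabled")
            return nil
        },
    }
}

func main() {
    cfg := &Config{ConsoleEnabled: true}
    if err := NewCLIExample(func() *Config { return cfg }).NewCommand().Execute(); err != nil {
        fmt.Println("error:", err)
    }
}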


@@ -4,12 +4,12 @@ import (
"io" "io"
"github.com/aquasecurity/table" "github.com/aquasecurity/table"
"github.com/enescakir/emoji"
"github.com/crowdsecurity/crowdsec/pkg/csconfig" "github.com/crowdsecurity/crowdsec/pkg/csconfig"
"github.com/crowdsecurity/crowdsec/pkg/emoji"
) )
func cmdConsoleStatusTable(out io.Writer, csConfig csconfig.Config) { func cmdConsoleStatusTable(out io.Writer, consoleCfg csconfig.ConsoleConfig) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
@@ -17,43 +17,32 @@ func cmdConsoleStatusTable(out io.Writer, csConfig csconfig.Config) {
t.SetHeaderAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetHeaderAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft)
for _, option := range csconfig.CONSOLE_CONFIGS { for _, option := range csconfig.CONSOLE_CONFIGS {
activated := emoji.CrossMark
switch option { switch option {
case csconfig.SEND_CUSTOM_SCENARIOS: case csconfig.SEND_CUSTOM_SCENARIOS:
activated := string(emoji.CrossMark) if *consoleCfg.ShareCustomScenarios {
if *csConfig.API.Server.ConsoleConfig.ShareCustomScenarios { activated = emoji.CheckMarkButton
activated = string(emoji.CheckMarkButton)
} }
t.AddRow(option, activated, "Send alerts from custom scenarios to the console")
case csconfig.SEND_MANUAL_SCENARIOS: case csconfig.SEND_MANUAL_SCENARIOS:
activated := string(emoji.CrossMark) if *consoleCfg.ShareManualDecisions {
if *csConfig.API.Server.ConsoleConfig.ShareManualDecisions { activated = emoji.CheckMarkButton
activated = string(emoji.CheckMarkButton)
} }
t.AddRow(option, activated, "Send manual decisions to the console")
case csconfig.SEND_TAINTED_SCENARIOS: case csconfig.SEND_TAINTED_SCENARIOS:
activated := string(emoji.CrossMark) if *consoleCfg.ShareTaintedScenarios {
if *csConfig.API.Server.ConsoleConfig.ShareTaintedScenarios { activated = emoji.CheckMarkButton
activated = string(emoji.CheckMarkButton)
} }
t.AddRow(option, activated, "Send alerts from tainted scenarios to the console")
case csconfig.SEND_CONTEXT: case csconfig.SEND_CONTEXT:
activated := string(emoji.CrossMark) if *consoleCfg.ShareContext {
if *csConfig.API.Server.ConsoleConfig.ShareContext { activated = emoji.CheckMarkButton
activated = string(emoji.CheckMarkButton)
} }
t.AddRow(option, activated, "Send context with alerts to the console")
case csconfig.CONSOLE_MANAGEMENT: case csconfig.CONSOLE_MANAGEMENT:
activated := string(emoji.CrossMark) if *consoleCfg.ConsoleManagement {
if *csConfig.API.Server.ConsoleConfig.ConsoleManagement { activated = emoji.CheckMarkButton
activated = string(emoji.CheckMarkButton)
} }
t.AddRow(option, activated, "Receive decisions from console")
} }
t.AddRow(option, activated, csconfig.CONSOLE_CONFIGS_HELP[option])
} }
t.Render() t.Render()
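
The table rewrite above hoists the default cross-mark out of the switch, lets each case only flip it to a check mark, and replaces the per-case descriptions with a single lookup into csconfig.CONSOLE_CONFIGS_HELP. A stripped-down sketch of that shape, using hypothetical option names and help strings in place of the real constants:

package main

import "fmt"

// Hypothetical stand-ins for csconfig.CONSOLE_CONFIGS and CONSOLE_CONFIGS_HELP.
var consoleOptions = []string{"custom", "manual", "tainted"}

var consoleHelp = map[string]string{
    "custom":  "Send alerts from custom scenarios to the console",
    "manual":  "Send manual decisions to the console",
    "tainted": "Send alerts from tainted scenarios to the console",
}

func main() {
    enabled := map[string]bool{"custom": true, "manual": false, "tainted": true}

    for _, option := range consoleOptions {
        // Set the "off" marker once, then only override it when the option
        // is enabled, the same shape as the refactored switch above.
        activated := "❌"
        if enabled[option] {
            activated = "✅"
        }
        fmt.Printf("%-8s %s  %s\n", option, activated, consoleHelp[option])
    }
}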


@@ -0,0 +1,82 @@
package main
import (
"fmt"
"io"
"os"
"path/filepath"
log "github.com/sirupsen/logrus"
)
/*help to copy the file, ioutil doesn't offer the feature*/
func copyFileContents(src, dst string) (err error) {
in, err := os.Open(src)
if err != nil {
return
}
defer in.Close()
out, err := os.Create(dst)
if err != nil {
return
}
defer func() {
cerr := out.Close()
if err == nil {
err = cerr
}
}()
if _, err = io.Copy(out, in); err != nil {
return
}
err = out.Sync()
return
}
/*copy the file, ioutil doesn't offer the feature*/
func CopyFile(sourceSymLink, destinationFile string) error {
sourceFile, err := filepath.EvalSymlinks(sourceSymLink)
if err != nil {
log.Infof("Not a symlink : %s", err)
sourceFile = sourceSymLink
}
sourceFileStat, err := os.Stat(sourceFile)
if err != nil {
return err
}
if !sourceFileStat.Mode().IsRegular() {
// cannot copy non-regular files (e.g., directories,
// symlinks, devices, etc.)
return fmt.Errorf("copyFile: non-regular source file %s (%q)", sourceFileStat.Name(), sourceFileStat.Mode().String())
}
destinationFileStat, err := os.Stat(destinationFile)
if err != nil {
if !os.IsNotExist(err) {
return err
}
} else {
if !(destinationFileStat.Mode().IsRegular()) {
return fmt.Errorf("copyFile: non-regular destination file %s (%q)", destinationFileStat.Name(), destinationFileStat.Mode().String())
}
if os.SameFile(sourceFileStat, destinationFileStat) {
return err
}
}
if err = os.Link(sourceFile, destinationFile); err != nil {
err = copyFileContents(sourceFile, destinationFile)
}
return err
}
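
CopyFile above resolves a possible symlink, refuses non-regular files, then attempts a hard link and only falls back to copyFileContents when linking fails (for instance across filesystems). A hedged usage sketch with made-up paths, assuming it lives in the same package as the helper above:

// Illustrative only: backupConfig is not part of this change set.
func backupConfig() error {
    src := "/etc/crowdsec/config.yaml" // hypothetical source path
    dst := "/var/backups/config.yaml"  // hypothetical destination path

    if err := CopyFile(src, dst); err != nil {
        return fmt.Errorf("while backing up %s to %s: %w", src, dst, err)
    }

    return nil
}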


@@ -1,7 +1,8 @@
//go:build linux
package main package main
import ( import (
"errors"
"fmt" "fmt"
"math" "math"
"os" "os"
@@ -10,6 +11,7 @@ import (
"path/filepath" "path/filepath"
"strconv" "strconv"
"strings" "strings"
"syscall"
"unicode" "unicode"
"github.com/AlecAivazis/survey/v2" "github.com/AlecAivazis/survey/v2"
@@ -17,16 +19,18 @@ import (
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/metabase" "github.com/crowdsecurity/crowdsec/pkg/metabase"
) )
var ( var (
metabaseUser = "crowdsec@crowdsec.net" metabaseUser = "crowdsec@crowdsec.net"
metabasePassword string metabasePassword string
metabaseDbPath string metabaseDBPath string
metabaseConfigPath string metabaseConfigPath string
metabaseConfigFolder = "metabase/" metabaseConfigFolder = "metabase/"
metabaseConfigFile = "metabase.yaml" metabaseConfigFile = "metabase.yaml"
metabaseImage = "metabase/metabase:v0.46.6.1"
/**/ /**/
metabaseListenAddress = "127.0.0.1" metabaseListenAddress = "127.0.0.1"
metabaseListenPort = "3000" metabaseListenPort = "3000"
@@ -35,12 +39,21 @@ var (
forceYes bool forceYes bool
/*informations needed to setup a random password on user's behalf*/ // information needed to set up a random password on user's behalf
) )
func NewDashboardCmd() *cobra.Command { type cliDashboard struct {
/* ---- UPDATE COMMAND */ cfg configGetter
var cmdDashboard = &cobra.Command{ }
func NewCLIDashboard(cfg configGetter) *cliDashboard {
return &cliDashboard{
cfg: cfg,
}
}
func (cli *cliDashboard) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "dashboard [command]", Use: "dashboard [command]",
Short: "Manage your metabase dashboard container [requires local API]", Short: "Manage your metabase dashboard container [requires local API]",
Long: `Install/Start/Stop/Remove a metabase container exposing dashboard and metrics. Long: `Install/Start/Stop/Remove a metabase container exposing dashboard and metrics.
@@ -54,23 +67,24 @@ cscli dashboard start
cscli dashboard stop cscli dashboard stop
cscli dashboard remove cscli dashboard remove
`, `,
PersistentPreRun: func(cmd *cobra.Command, args []string) { PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
cfg := cli.cfg()
if err := require.LAPI(cfg); err != nil {
return err
}
if err := metabase.TestAvailability(); err != nil { if err := metabase.TestAvailability(); err != nil {
log.Fatalf("%s", err) return err
} }
if err := csConfig.LoadAPIServer(); err != nil || csConfig.DisableAPI { metabaseConfigFolderPath := filepath.Join(cfg.ConfigPaths.ConfigDir, metabaseConfigFolder)
log.Fatal("Local API is disabled, please run this command on the local API machine")
}
metabaseConfigFolderPath := filepath.Join(csConfig.ConfigPaths.ConfigDir, metabaseConfigFolder)
metabaseConfigPath = filepath.Join(metabaseConfigFolderPath, metabaseConfigFile) metabaseConfigPath = filepath.Join(metabaseConfigFolderPath, metabaseConfigFile)
if err := os.MkdirAll(metabaseConfigFolderPath, os.ModePerm); err != nil { if err := os.MkdirAll(metabaseConfigFolderPath, os.ModePerm); err != nil {
log.Fatal(err) return err
} }
if err := csConfig.LoadDBConfig(); err != nil {
log.Errorf("This command requires direct database access (must be run on the local API machine)") if err := require.DB(cfg); err != nil {
log.Fatal(err) return err
} }
/* /*
@@ -84,23 +98,24 @@ cscli dashboard remove
metabaseContainerID = oldContainerID metabaseContainerID = oldContainerID
} }
} }
return nil
}, },
} }
cmdDashboard.AddCommand(NewDashboardSetupCmd()) cmd.AddCommand(cli.newSetupCmd())
cmdDashboard.AddCommand(NewDashboardStartCmd()) cmd.AddCommand(cli.newStartCmd())
cmdDashboard.AddCommand(NewDashboardStopCmd()) cmd.AddCommand(cli.newStopCmd())
cmdDashboard.AddCommand(NewDashboardShowPasswordCmd()) cmd.AddCommand(cli.newShowPasswordCmd())
cmdDashboard.AddCommand(NewDashboardRemoveCmd()) cmd.AddCommand(cli.newRemoveCmd())
return cmdDashboard return cmd
} }
func (cli *cliDashboard) newSetupCmd() *cobra.Command {
func NewDashboardSetupCmd() *cobra.Command {
var force bool var force bool
var cmdDashSetup = &cobra.Command{ cmd := &cobra.Command{
Use: "setup", Use: "setup",
Short: "Setup a metabase container.", Short: "Setup a metabase container.",
Long: `Perform a metabase docker setup, download standard dashboards, create a fresh user and start the container`, Long: `Perform a metabase docker setup, download standard dashboards, create a fresh user and start the container`,
@@ -111,9 +126,9 @@ cscli dashboard setup
cscli dashboard setup --listen 0.0.0.0 cscli dashboard setup --listen 0.0.0.0
cscli dashboard setup -l 0.0.0.0 -p 443 --password <password> cscli dashboard setup -l 0.0.0.0 -p 443 --password <password>
`, `,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
if metabaseDbPath == "" { if metabaseDBPath == "" {
metabaseDbPath = csConfig.ConfigPaths.DataDir metabaseDBPath = cli.cfg().ConfigPaths.DataDir
} }
if metabasePassword == "" { if metabasePassword == "" {
@@ -123,70 +138,26 @@ cscli dashboard setup -l 0.0.0.0 -p 443 --password <password>
isValid = passwordIsValid(metabasePassword) isValid = passwordIsValid(metabasePassword)
} }
} }
var answer bool if err := checkSystemMemory(&forceYes); err != nil {
if valid, err := checkSystemMemory(); err == nil && !valid { return err
if !forceYes {
prompt := &survey.Confirm{
Message: "Metabase requires 1-2GB of RAM, your system is below this requirement continue ?",
Default: true,
}
if err := survey.AskOne(prompt, &answer); err != nil {
log.Warnf("unable to ask about RAM check: %s", err)
}
if !answer {
log.Fatal("Unable to continue due to RAM requirement")
}
} else {
log.Warnf("Metabase requires 1-2GB of RAM, your system is below this requirement")
}
} }
groupExist := false warnIfNotLoopback(metabaseListenAddress)
dockerGroup, err := user.LookupGroup(crowdsecGroup) if err := disclaimer(&forceYes); err != nil {
if err == nil { return err
groupExist = true
} }
if !forceYes && !groupExist { dockerGroup, err := checkGroups(&forceYes)
prompt := &survey.Confirm{
Message: fmt.Sprintf("For metabase docker to be able to access SQLite file we need to add a new group called '%s' to the system, is it ok for you ?", crowdsecGroup),
Default: true,
}
if err := survey.AskOne(prompt, &answer); err != nil {
log.Fatalf("unable to ask to force: %s", err)
}
}
if !answer && !forceYes && !groupExist {
log.Fatalf("unable to continue without creating '%s' group", crowdsecGroup)
}
if !groupExist {
groupAddCmd, err := exec.LookPath("groupadd")
if err != nil {
log.Fatalf("unable to find 'groupadd' command, can't continue")
}
groupAdd := &exec.Cmd{Path: groupAddCmd, Args: []string{groupAddCmd, crowdsecGroup}}
if err := groupAdd.Run(); err != nil {
log.Fatalf("unable to add group '%s': %s", dockerGroup, err)
}
dockerGroup, err = user.LookupGroup(crowdsecGroup)
if err != nil {
log.Fatalf("unable to lookup '%s' group: %+v", dockerGroup, err)
}
}
intID, err := strconv.Atoi(dockerGroup.Gid)
if err != nil { if err != nil {
log.Fatalf("unable to convert group ID to int: %s", err) return err
} }
if err := os.Chown(csConfig.DbConfig.DbPath, 0, intID); err != nil { if err = cli.chownDatabase(dockerGroup.Gid); err != nil {
log.Fatalf("unable to chown sqlite db file '%s': %s", csConfig.DbConfig.DbPath, err) return err
} }
mb, err := metabase.SetupMetabase(cli.cfg().API.Server.DbConfig, metabaseListenAddress, metabaseListenPort, metabaseUser, metabasePassword, metabaseDBPath, dockerGroup.Gid, metabaseContainerID, metabaseImage)
mb, err := metabase.SetupMetabase(csConfig.API.Server.DbConfig, metabaseListenAddress, metabaseListenPort, metabaseUser, metabasePassword, metabaseDbPath, dockerGroup.Gid, metabaseContainerID)
if err != nil { if err != nil {
log.Fatal(err) return err
} }
if err := mb.DumpConfig(metabaseConfigPath); err != nil { if err := mb.DumpConfig(metabaseConfigPath); err != nil {
log.Fatal(err) return err
} }
log.Infof("Metabase is ready") log.Infof("Metabase is ready")
@@ -194,79 +165,96 @@ cscli dashboard setup -l 0.0.0.0 -p 443 --password <password>
fmt.Printf("\tURL : '%s'\n", mb.Config.ListenURL) fmt.Printf("\tURL : '%s'\n", mb.Config.ListenURL)
fmt.Printf("\tusername : '%s'\n", mb.Config.Username) fmt.Printf("\tusername : '%s'\n", mb.Config.Username)
fmt.Printf("\tpassword : '%s'\n", mb.Config.Password) fmt.Printf("\tpassword : '%s'\n", mb.Config.Password)
return nil
}, },
} }
cmdDashSetup.Flags().BoolVarP(&force, "force", "f", false, "Force setup : override existing files.")
cmdDashSetup.Flags().StringVarP(&metabaseDbPath, "dir", "d", "", "Shared directory with metabase container.")
cmdDashSetup.Flags().StringVarP(&metabaseListenAddress, "listen", "l", metabaseListenAddress, "Listen address of container")
cmdDashSetup.Flags().StringVarP(&metabaseListenPort, "port", "p", metabaseListenPort, "Listen port of container")
cmdDashSetup.Flags().BoolVarP(&forceYes, "yes", "y", false, "force yes")
//cmdDashSetup.Flags().StringVarP(&metabaseUser, "user", "u", "crowdsec@crowdsec.net", "metabase user")
cmdDashSetup.Flags().StringVar(&metabasePassword, "password", "", "metabase password")
return cmdDashSetup flags := cmd.Flags()
flags.BoolVarP(&force, "force", "f", false, "Force setup : override existing files")
flags.StringVarP(&metabaseDBPath, "dir", "d", "", "Shared directory with metabase container")
flags.StringVarP(&metabaseListenAddress, "listen", "l", metabaseListenAddress, "Listen address of container")
flags.StringVar(&metabaseImage, "metabase-image", metabaseImage, "Metabase image to use")
flags.StringVarP(&metabaseListenPort, "port", "p", metabaseListenPort, "Listen port of container")
flags.BoolVarP(&forceYes, "yes", "y", false, "force yes")
// flags.StringVarP(&metabaseUser, "user", "u", "crowdsec@crowdsec.net", "metabase user")
flags.StringVar(&metabasePassword, "password", "", "metabase password")
return cmd
} }
func NewDashboardStartCmd() *cobra.Command { func (cli *cliDashboard) newStartCmd() *cobra.Command {
var cmdDashStart = &cobra.Command{ cmd := &cobra.Command{
Use: "start", Use: "start",
Short: "Start the metabase container.", Short: "Start the metabase container.",
Long: `Starts the metabase container using docker.`, Long: `Starts the metabase container using docker.`,
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
mb, err := metabase.NewMetabase(metabaseConfigPath, metabaseContainerID) mb, err := metabase.NewMetabase(metabaseConfigPath, metabaseContainerID)
if err != nil { if err != nil {
log.Fatal(err) return err
}
warnIfNotLoopback(mb.Config.ListenAddr)
if err := disclaimer(&forceYes); err != nil {
return err
} }
if err := mb.Container.Start(); err != nil { if err := mb.Container.Start(); err != nil {
log.Fatalf("Failed to start metabase container : %s", err) return fmt.Errorf("failed to start metabase container : %s", err)
} }
log.Infof("Started metabase") log.Infof("Started metabase")
log.Infof("url : http://%s:%s", metabaseListenAddress, metabaseListenPort) log.Infof("url : http://%s:%s", mb.Config.ListenAddr, mb.Config.ListenPort)
return nil
}, },
} }
return cmdDashStart
cmd.Flags().BoolVarP(&forceYes, "yes", "y", false, "force yes")
return cmd
} }
func NewDashboardStopCmd() *cobra.Command { func (cli *cliDashboard) newStopCmd() *cobra.Command {
var cmdDashStop = &cobra.Command{ cmd := &cobra.Command{
Use: "stop", Use: "stop",
Short: "Stops the metabase container.", Short: "Stops the metabase container.",
Long: `Stops the metabase container using docker.`, Long: `Stops the metabase container using docker.`,
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
if err := metabase.StopContainer(metabaseContainerID); err != nil { if err := metabase.StopContainer(metabaseContainerID); err != nil {
log.Fatalf("unable to stop container '%s': %s", metabaseContainerID, err) return fmt.Errorf("unable to stop container '%s': %s", metabaseContainerID, err)
} }
return nil
}, },
} }
return cmdDashStop
return cmd
} }
func (cli *cliDashboard) newShowPasswordCmd() *cobra.Command {
func NewDashboardShowPasswordCmd() *cobra.Command { cmd := &cobra.Command{Use: "show-password",
var cmdDashShowPassword = &cobra.Command{Use: "show-password",
Short: "displays password of metabase.", Short: "displays password of metabase.",
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
m := metabase.Metabase{} m := metabase.Metabase{}
if err := m.LoadConfig(metabaseConfigPath); err != nil { if err := m.LoadConfig(metabaseConfigPath); err != nil {
log.Fatal(err) return err
} }
log.Printf("'%s'", m.Config.Password) log.Printf("'%s'", m.Config.Password)
return nil
}, },
} }
return cmdDashShowPassword
return cmd
} }
func (cli *cliDashboard) newRemoveCmd() *cobra.Command {
func NewDashboardRemoveCmd() *cobra.Command {
var force bool var force bool
var cmdDashRemove = &cobra.Command{ cmd := &cobra.Command{
Use: "remove", Use: "remove",
Short: "removes the metabase container.", Short: "removes the metabase container.",
Long: `removes the metabase container using docker.`, Long: `removes the metabase container using docker.`,
@@ -276,66 +264,77 @@ func NewDashboardRemoveCmd() *cobra.Command {
cscli dashboard remove cscli dashboard remove
cscli dashboard remove --force cscli dashboard remove --force
`, `,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
answer := true
if !forceYes { if !forceYes {
var answer bool
prompt := &survey.Confirm{ prompt := &survey.Confirm{
Message: "Do you really want to remove crowdsec dashboard? (all your changes will be lost)", Message: "Do you really want to remove crowdsec dashboard? (all your changes will be lost)",
Default: true, Default: true,
} }
if err := survey.AskOne(prompt, &answer); err != nil { if err := survey.AskOne(prompt, &answer); err != nil {
log.Fatalf("unable to ask to force: %s", err) return fmt.Errorf("unable to ask to force: %s", err)
}
if !answer {
return fmt.Errorf("user stated no to continue")
} }
} }
if answer { if metabase.IsContainerExist(metabaseContainerID) {
if metabase.IsContainerExist(metabaseContainerID) { log.Debugf("Stopping container %s", metabaseContainerID)
log.Debugf("Stopping container %s", metabaseContainerID) if err := metabase.StopContainer(metabaseContainerID); err != nil {
if err := metabase.StopContainer(metabaseContainerID); err != nil { log.Warningf("unable to stop container '%s': %s", metabaseContainerID, err)
log.Warningf("unable to stop container '%s': %s", metabaseContainerID, err) }
dockerGroup, err := user.LookupGroup(crowdsecGroup)
if err == nil { // if group exist, remove it
groupDelCmd, err := exec.LookPath("groupdel")
if err != nil {
return fmt.Errorf("unable to find 'groupdel' command, can't continue")
} }
dockerGroup, err := user.LookupGroup(crowdsecGroup)
if err == nil { // if group exist, remove it
groupDelCmd, err := exec.LookPath("groupdel")
if err != nil {
log.Fatalf("unable to find 'groupdel' command, can't continue")
}
groupDel := &exec.Cmd{Path: groupDelCmd, Args: []string{groupDelCmd, crowdsecGroup}} groupDel := &exec.Cmd{Path: groupDelCmd, Args: []string{groupDelCmd, crowdsecGroup}}
if err := groupDel.Run(); err != nil { if err := groupDel.Run(); err != nil {
log.Errorf("unable to delete group '%s': %s", dockerGroup, err) log.Warnf("unable to delete group '%s': %s", dockerGroup, err)
}
} }
log.Debugf("Removing container %s", metabaseContainerID)
if err := metabase.RemoveContainer(metabaseContainerID); err != nil {
log.Warningf("unable to remove container '%s': %s", metabaseContainerID, err)
}
log.Infof("container %s stopped & removed", metabaseContainerID)
} }
log.Debugf("Removing metabase db %s", csConfig.ConfigPaths.DataDir) log.Debugf("Removing container %s", metabaseContainerID)
if err := metabase.RemoveDatabase(csConfig.ConfigPaths.DataDir); err != nil { if err := metabase.RemoveContainer(metabaseContainerID); err != nil {
log.Warningf("failed to remove metabase internal db : %s", err) log.Warnf("unable to remove container '%s': %s", metabaseContainerID, err)
} }
if force { log.Infof("container %s stopped & removed", metabaseContainerID)
if err := metabase.RemoveImageContainer(); err != nil { }
if !strings.Contains(err.Error(), "No such image") { log.Debugf("Removing metabase db %s", cli.cfg().ConfigPaths.DataDir)
log.Fatalf("removing docker image: %s", err) if err := metabase.RemoveDatabase(cli.cfg().ConfigPaths.DataDir); err != nil {
} log.Warnf("failed to remove metabase internal db : %s", err)
}
if force {
m := metabase.Metabase{}
if err := m.LoadConfig(metabaseConfigPath); err != nil {
return err
}
if err := metabase.RemoveImageContainer(m.Config.Image); err != nil {
if !strings.Contains(err.Error(), "No such image") {
return fmt.Errorf("removing docker image: %s", err)
} }
} }
} }
return nil
}, },
} }
cmdDashRemove.Flags().BoolVarP(&force, "force", "f", false, "Remove also the metabase image")
cmdDashRemove.Flags().BoolVarP(&forceYes, "yes", "y", false, "force yes")
return cmdDashRemove flags := cmd.Flags()
flags.BoolVarP(&force, "force", "f", false, "Remove also the metabase image")
flags.BoolVarP(&forceYes, "yes", "y", false, "force yes")
return cmd
} }
func passwordIsValid(password string) bool { func passwordIsValid(password string) bool {
hasDigit := false hasDigit := false
for _, j := range password { for _, j := range password {
if unicode.IsDigit(j) { if unicode.IsDigit(j) {
hasDigit = true hasDigit = true
break break
} }
} }
@@ -343,17 +342,134 @@ func passwordIsValid(password string) bool {
if !hasDigit || len(password) < 6 { if !hasDigit || len(password) < 6 {
return false return false
} }
return true return true
} }
func checkSystemMemory() (bool, error) { func checkSystemMemory(forceYes *bool) error {
totMem := memory.TotalMemory() totMem := memory.TotalMemory()
if totMem == 0 { if totMem >= uint64(math.Pow(2, 30)) {
return true, errors.New("Unable to get system total memory") return nil
} }
if uint64(math.Pow(2, 30)) >= totMem {
return false, nil if !*forceYes {
var answer bool
prompt := &survey.Confirm{
Message: "Metabase requires 1-2GB of RAM, your system is below this requirement continue ?",
Default: true,
}
if err := survey.AskOne(prompt, &answer); err != nil {
return fmt.Errorf("unable to ask about RAM check: %s", err)
}
if !answer {
return fmt.Errorf("user stated no to continue")
}
return nil
} }
return true, nil
log.Warn("Metabase requires 1-2GB of RAM, your system is below this requirement")
return nil
}
func warnIfNotLoopback(addr string) {
if addr == "127.0.0.1" || addr == "::1" {
return
}
log.Warnf("You are potentially exposing your metabase port to the internet (addr: %s), please consider using a reverse proxy", addr)
}
func disclaimer(forceYes *bool) error {
if !*forceYes {
var answer bool
prompt := &survey.Confirm{
Message: "CrowdSec takes no responsibility for the security of your metabase instance. Do you accept these responsibilities ?",
Default: true,
}
if err := survey.AskOne(prompt, &answer); err != nil {
return fmt.Errorf("unable to ask to question: %s", err)
}
if !answer {
return fmt.Errorf("user stated no to responsibilities")
}
return nil
}
log.Warn("CrowdSec takes no responsibility for the security of your metabase instance. You used force yes, so you accept this disclaimer")
return nil
}
func checkGroups(forceYes *bool) (*user.Group, error) {
dockerGroup, err := user.LookupGroup(crowdsecGroup)
if err == nil {
return dockerGroup, nil
}
if !*forceYes {
var answer bool
prompt := &survey.Confirm{
Message: fmt.Sprintf("For metabase docker to be able to access SQLite file we need to add a new group called '%s' to the system, is it ok for you ?", crowdsecGroup),
Default: true,
}
if err := survey.AskOne(prompt, &answer); err != nil {
return dockerGroup, fmt.Errorf("unable to ask to question: %s", err)
}
if !answer {
return dockerGroup, fmt.Errorf("unable to continue without creating '%s' group", crowdsecGroup)
}
}
groupAddCmd, err := exec.LookPath("groupadd")
if err != nil {
return dockerGroup, fmt.Errorf("unable to find 'groupadd' command, can't continue")
}
groupAdd := &exec.Cmd{Path: groupAddCmd, Args: []string{groupAddCmd, crowdsecGroup}}
if err := groupAdd.Run(); err != nil {
return dockerGroup, fmt.Errorf("unable to add group '%s': %s", dockerGroup, err)
}
return user.LookupGroup(crowdsecGroup)
}
func (cli *cliDashboard) chownDatabase(gid string) error {
cfg := cli.cfg()
intID, err := strconv.Atoi(gid)
if err != nil {
return fmt.Errorf("unable to convert group ID to int: %s", err)
}
if stat, err := os.Stat(cfg.DbConfig.DbPath); !os.IsNotExist(err) {
info := stat.Sys()
if err := os.Chown(cfg.DbConfig.DbPath, int(info.(*syscall.Stat_t).Uid), intID); err != nil {
return fmt.Errorf("unable to chown sqlite db file '%s': %s", cfg.DbConfig.DbPath, err)
}
}
if cfg.DbConfig.Type == "sqlite" && cfg.DbConfig.UseWal != nil && *cfg.DbConfig.UseWal {
for _, ext := range []string{"-wal", "-shm"} {
file := cfg.DbConfig.DbPath + ext
if stat, err := os.Stat(file); !os.IsNotExist(err) {
info := stat.Sys()
if err := os.Chown(file, int(info.(*syscall.Stat_t).Uid), intID); err != nil {
return fmt.Errorf("unable to chown sqlite db file '%s': %s", file, err)
}
}
}
}
return nil
} }
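
checkSystemMemory, disclaimer and checkGroups above all repeat the same prompt shape: skip the question when --yes was given, otherwise ask a survey.Confirm and refuse to continue if the user declines. A possible generalisation of that shape, purely illustrative and not part of this diff:

package main

import (
    "errors"
    "fmt"

    "github.com/AlecAivazis/survey/v2"
)

// confirmOrAbort asks a yes/no question unless forceYes is set, and returns
// an error when the user declines, mirroring the helpers above.
func confirmOrAbort(forceYes bool, message string) error {
    if forceYes {
        return nil
    }

    answer := true
    prompt := &survey.Confirm{
        Message: message,
        Default: true,
    }

    if err := survey.AskOne(prompt, &answer); err != nil {
        return fmt.Errorf("unable to ask question: %w", err)
    }

    if !answer {
        return errors.New("aborted by user")
    }

    return nil
}

func main() {
    if err := confirmOrAbort(false, "Metabase requires 1-2GB of RAM, continue anyway?"); err != nil {
        fmt.Println(err)
    }
}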


@@ -0,0 +1,32 @@
//go:build !linux
package main
import (
"runtime"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
)
type cliDashboard struct{
cfg configGetter
}
func NewCLIDashboard(cfg configGetter) *cliDashboard {
return &cliDashboard{
cfg: cfg,
}
}
func (cli cliDashboard) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "dashboard",
DisableAutoGenTag: true,
Run: func(_ *cobra.Command, _ []string) {
log.Infof("Dashboard command is disabled on %s", runtime.GOOS)
},
}
return cmd
}
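
This stub and the Linux implementation above are selected by build constraints, so callers register the dashboard command the same way on every platform and simply get a no-op outside Linux. A generic illustration of the split, with hypothetical file and function names:

// feature_linux.go
//go:build linux

package main

func featureSupported() bool { return true }

// feature_other.go
//go:build !linux

package main

func featureSupported() bool { return false }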


@@ -4,23 +4,20 @@ import (
"context" "context"
"encoding/csv" "encoding/csv"
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"net/url" "net/url"
"os" "os"
"path/filepath"
"strconv" "strconv"
"strings" "strings"
"time" "time"
"github.com/fatih/color" "github.com/fatih/color"
"github.com/go-openapi/strfmt" "github.com/go-openapi/strfmt"
"github.com/jszwec/csvutil"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/crowdsecurity/go-cs-lib/pkg/ptr" "github.com/crowdsecurity/go-cs-lib/version"
"github.com/crowdsecurity/go-cs-lib/pkg/version"
"github.com/crowdsecurity/crowdsec/pkg/apiclient" "github.com/crowdsecurity/crowdsec/pkg/apiclient"
"github.com/crowdsecurity/crowdsec/pkg/models" "github.com/crowdsecurity/crowdsec/pkg/models"
@@ -29,7 +26,7 @@ import (
var Client *apiclient.ApiClient var Client *apiclient.ApiClient
func DecisionsToTable(alerts *models.GetAlertsResponse, printMachine bool) error { func (cli *cliDecisions) decisionsToTable(alerts *models.GetAlertsResponse, printMachine bool) error {
/*here we cheat a bit : to make it more readable for the user, we dedup some entries*/ /*here we cheat a bit : to make it more readable for the user, we dedup some entries*/
spamLimit := make(map[string]bool) spamLimit := make(map[string]bool)
skipped := 0 skipped := 0
@@ -37,27 +34,36 @@ func DecisionsToTable(alerts *models.GetAlertsResponse, printMachine bool) error
for aIdx := 0; aIdx < len(*alerts); aIdx++ { for aIdx := 0; aIdx < len(*alerts); aIdx++ {
alertItem := (*alerts)[aIdx] alertItem := (*alerts)[aIdx]
newDecisions := make([]*models.Decision, 0) newDecisions := make([]*models.Decision, 0)
for _, decisionItem := range alertItem.Decisions { for _, decisionItem := range alertItem.Decisions {
spamKey := fmt.Sprintf("%t:%s:%s:%s", *decisionItem.Simulated, *decisionItem.Type, *decisionItem.Scope, *decisionItem.Value) spamKey := fmt.Sprintf("%t:%s:%s:%s", *decisionItem.Simulated, *decisionItem.Type, *decisionItem.Scope, *decisionItem.Value)
if _, ok := spamLimit[spamKey]; ok { if _, ok := spamLimit[spamKey]; ok {
skipped++ skipped++
continue continue
} }
spamLimit[spamKey] = true spamLimit[spamKey] = true
newDecisions = append(newDecisions, decisionItem) newDecisions = append(newDecisions, decisionItem)
} }
alertItem.Decisions = newDecisions alertItem.Decisions = newDecisions
} }
if csConfig.Cscli.Output == "raw" {
switch cli.cfg().Cscli.Output {
case "raw":
csvwriter := csv.NewWriter(os.Stdout) csvwriter := csv.NewWriter(os.Stdout)
header := []string{"id", "source", "ip", "reason", "action", "country", "as", "events_count", "expiration", "simulated", "alert_id"} header := []string{"id", "source", "ip", "reason", "action", "country", "as", "events_count", "expiration", "simulated", "alert_id"}
if printMachine { if printMachine {
header = append(header, "machine") header = append(header, "machine")
} }
err := csvwriter.Write(header) err := csvwriter.Write(header)
if err != nil { if err != nil {
return err return err
} }
for _, alertItem := range *alerts { for _, alertItem := range *alerts {
for _, decisionItem := range alertItem.Decisions { for _, decisionItem := range alertItem.Decisions {
raw := []string{ raw := []string{
@@ -83,25 +89,46 @@ func DecisionsToTable(alerts *models.GetAlertsResponse, printMachine bool) error
} }
} }
} }
csvwriter.Flush() csvwriter.Flush()
} else if csConfig.Cscli.Output == "json" { case "json":
if *alerts == nil {
// avoid returning "null" in `json"
// could be cleaner if we used slice of alerts directly
fmt.Println("[]")
return nil
}
x, _ := json.MarshalIndent(alerts, "", " ") x, _ := json.MarshalIndent(alerts, "", " ")
fmt.Printf("%s", string(x)) fmt.Printf("%s", string(x))
} else if csConfig.Cscli.Output == "human" { case "human":
if len(*alerts) == 0 { if len(*alerts) == 0 {
fmt.Println("No active decisions") fmt.Println("No active decisions")
return nil return nil
} }
decisionsTable(color.Output, alerts, printMachine)
cli.decisionsTable(color.Output, alerts, printMachine)
if skipped > 0 { if skipped > 0 {
fmt.Printf("%d duplicated entries skipped\n", skipped) fmt.Printf("%d duplicated entries skipped\n", skipped)
} }
} }
return nil return nil
} }
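
decisionsToTable above dedups the decisions it prints using a composite key built from the simulated flag, type, scope and value. A minimal standalone sketch of that dedup step, with a simplified struct in place of models.Decision:

package main

import "fmt"

// decision is a simplified stand-in for models.Decision.
type decision struct {
    Simulated bool
    Type      string
    Scope     string
    Value     string
}

// dedup keeps the first occurrence of each (simulated, type, scope, value)
// combination and reports how many duplicates were skipped.
func dedup(in []decision) ([]decision, int) {
    seen := make(map[string]bool)
    out := make([]decision, 0, len(in))
    skipped := 0

    for _, d := range in {
        key := fmt.Sprintf("%t:%s:%s:%s", d.Simulated, d.Type, d.Scope, d.Value)
        if seen[key] {
            skipped++
            continue
        }
        seen[key] = true
        out = append(out, d)
    }

    return out, skipped
}

func main() {
    ds := []decision{
        {Simulated: false, Type: "ban", Scope: "Ip", Value: "1.2.3.4"},
        {Simulated: false, Type: "ban", Scope: "Ip", Value: "1.2.3.4"}, // duplicate, skipped
        {Simulated: false, Type: "captcha", Scope: "Ip", Value: "5.6.7.8"},
    }

    kept, skipped := dedup(ds)
    fmt.Printf("kept %d decision(s), skipped %d duplicate(s)\n", len(kept), skipped)
}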
func NewDecisionsCmd() *cobra.Command { type cliDecisions struct {
var cmdDecisions = &cobra.Command{ cfg configGetter
}
func NewCLIDecisions(cfg configGetter) *cliDecisions {
return &cliDecisions{
cfg: cfg,
}
}
func (cli *cliDecisions) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "decisions [action]", Use: "decisions [action]",
Short: "Manage decisions", Short: "Manage decisions",
Long: `Add/List/Delete/Import decisions from LAPI`, Long: `Add/List/Delete/Import decisions from LAPI`,
@@ -110,38 +137,40 @@ func NewDecisionsCmd() *cobra.Command {
/*TBD example*/ /*TBD example*/
Args: cobra.MinimumNArgs(1), Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error { PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
if err := csConfig.LoadAPIClient(); err != nil { cfg := cli.cfg()
return errors.Wrap(err, "loading api client") if err := cfg.LoadAPIClient(); err != nil {
return fmt.Errorf("loading api client: %w", err)
} }
password := strfmt.Password(csConfig.API.Client.Credentials.Password) password := strfmt.Password(cfg.API.Client.Credentials.Password)
apiurl, err := url.Parse(csConfig.API.Client.Credentials.URL) apiurl, err := url.Parse(cfg.API.Client.Credentials.URL)
if err != nil { if err != nil {
return errors.Wrapf(err, "parsing api url %s", csConfig.API.Client.Credentials.URL) return fmt.Errorf("parsing api url %s: %w", cfg.API.Client.Credentials.URL, err)
} }
Client, err = apiclient.NewClient(&apiclient.Config{ Client, err = apiclient.NewClient(&apiclient.Config{
MachineID: csConfig.API.Client.Credentials.Login, MachineID: cfg.API.Client.Credentials.Login,
Password: password, Password: password,
UserAgent: fmt.Sprintf("crowdsec/%s", version.String()), UserAgent: fmt.Sprintf("crowdsec/%s", version.String()),
URL: apiurl, URL: apiurl,
VersionPrefix: "v1", VersionPrefix: "v1",
}) })
if err != nil { if err != nil {
return errors.Wrap(err, "creating api client") return fmt.Errorf("creating api client: %w", err)
} }
return nil return nil
}, },
} }
cmdDecisions.AddCommand(NewDecisionsListCmd()) cmd.AddCommand(cli.newListCmd())
cmdDecisions.AddCommand(NewDecisionsAddCmd()) cmd.AddCommand(cli.newAddCmd())
cmdDecisions.AddCommand(NewDecisionsDeleteCmd()) cmd.AddCommand(cli.newDeleteCmd())
cmdDecisions.AddCommand(NewDecisionsImportCmd()) cmd.AddCommand(cli.newImportCmd())
return cmdDecisions return cmd
} }
func NewDecisionsListCmd() *cobra.Command { func (cli *cliDecisions) newListCmd() *cobra.Command {
var filter = apiclient.AlertsListOpts{ var filter = apiclient.AlertsListOpts{
ValueEquals: new(string), ValueEquals: new(string),
ScopeEquals: new(string), ScopeEquals: new(string),
@@ -155,25 +184,27 @@ func NewDecisionsListCmd() *cobra.Command {
IncludeCAPI: new(bool), IncludeCAPI: new(bool),
Limit: new(int), Limit: new(int),
} }
NoSimu := new(bool) NoSimu := new(bool)
contained := new(bool) contained := new(bool)
var printMachine bool var printMachine bool
var cmdDecisionsList = &cobra.Command{ cmd := &cobra.Command{
Use: "list [options]", Use: "list [options]",
Short: "List decisions from LAPI", Short: "List decisions from LAPI",
Example: `cscli decisions list -i 1.2.3.4 Example: `cscli decisions list -i 1.2.3.4
cscli decisions list -r 1.2.3.0/24 cscli decisions list -r 1.2.3.0/24
cscli decisions list -s crowdsecurity/ssh-bf cscli decisions list -s crowdsecurity/ssh-bf
cscli decisions list -t ban cscli decisions list --origin lists --scenario list_name
`, `,
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(cmd *cobra.Command, _ []string) error {
var err error var err error
/*take care of shorthand options*/ /*take care of shorthand options*/
if err := manageCliDecisionAlerts(filter.IPEquals, filter.RangeEquals, filter.ScopeEquals, filter.ValueEquals); err != nil { if err = manageCliDecisionAlerts(filter.IPEquals, filter.RangeEquals, filter.ScopeEquals, filter.ValueEquals); err != nil {
log.Fatalf("%s", err) return err
} }
filter.ActiveDecisionEquals = new(bool) filter.ActiveDecisionEquals = new(bool)
*filter.ActiveDecisionEquals = true *filter.ActiveDecisionEquals = true
@@ -189,7 +220,7 @@ cscli decisions list -t ban
days, err := strconv.Atoi(realDuration) days, err := strconv.Atoi(realDuration)
if err != nil { if err != nil {
printHelp(cmd) printHelp(cmd)
log.Fatalf("Can't parse duration %s, valid durations format: 1d, 4h, 4h15m", *filter.Until) return fmt.Errorf("can't parse duration %s, valid durations format: 1d, 4h, 4h15m", *filter.Until)
} }
*filter.Until = fmt.Sprintf("%d%s", days*24, "h") *filter.Until = fmt.Sprintf("%d%s", days*24, "h")
} }
@@ -202,7 +233,7 @@ cscli decisions list -t ban
days, err := strconv.Atoi(realDuration) days, err := strconv.Atoi(realDuration)
if err != nil { if err != nil {
printHelp(cmd) printHelp(cmd)
log.Fatalf("Can't parse duration %s, valid durations format: 1d, 4h, 4h15m", *filter.Until) return fmt.Errorf("can't parse duration %s, valid durations format: 1d, 4h, 4h15m", *filter.Since)
} }
*filter.Since = fmt.Sprintf("%d%s", days*24, "h") *filter.Since = fmt.Sprintf("%d%s", days*24, "h")
} }
@@ -238,35 +269,37 @@ cscli decisions list -t ban
alerts, _, err := Client.Alerts.List(context.Background(), filter) alerts, _, err := Client.Alerts.List(context.Background(), filter)
if err != nil { if err != nil {
log.Fatalf("Unable to list decisions : %v", err) return fmt.Errorf("unable to retrieve decisions: %w", err)
} }
err = DecisionsToTable(alerts, printMachine) err = cli.decisionsToTable(alerts, printMachine)
if err != nil { if err != nil {
log.Fatalf("unable to list decisions : %v", err) return fmt.Errorf("unable to print decisions: %w", err)
} }
return nil
}, },
} }
cmdDecisionsList.Flags().SortFlags = false cmd.Flags().SortFlags = false
cmdDecisionsList.Flags().BoolVarP(filter.IncludeCAPI, "all", "a", false, "Include decisions from Central API") cmd.Flags().BoolVarP(filter.IncludeCAPI, "all", "a", false, "Include decisions from Central API")
cmdDecisionsList.Flags().StringVar(filter.Since, "since", "", "restrict to alerts newer than since (ie. 4h, 30d)") cmd.Flags().StringVar(filter.Since, "since", "", "restrict to alerts newer than since (ie. 4h, 30d)")
cmdDecisionsList.Flags().StringVar(filter.Until, "until", "", "restrict to alerts older than until (ie. 4h, 30d)") cmd.Flags().StringVar(filter.Until, "until", "", "restrict to alerts older than until (ie. 4h, 30d)")
cmdDecisionsList.Flags().StringVarP(filter.TypeEquals, "type", "t", "", "restrict to this decision type (ie. ban,captcha)") cmd.Flags().StringVarP(filter.TypeEquals, "type", "t", "", "restrict to this decision type (ie. ban,captcha)")
cmdDecisionsList.Flags().StringVar(filter.ScopeEquals, "scope", "", "restrict to this scope (ie. ip,range,session)") cmd.Flags().StringVar(filter.ScopeEquals, "scope", "", "restrict to this scope (ie. ip,range,session)")
cmdDecisionsList.Flags().StringVar(filter.OriginEquals, "origin", "", fmt.Sprintf("the value to match for the specified origin (%s ...)", strings.Join(types.GetOrigins(), ","))) cmd.Flags().StringVar(filter.OriginEquals, "origin", "", fmt.Sprintf("the value to match for the specified origin (%s ...)", strings.Join(types.GetOrigins(), ",")))
cmdDecisionsList.Flags().StringVarP(filter.ValueEquals, "value", "v", "", "restrict to this value (ie. 1.2.3.4,userName)") cmd.Flags().StringVarP(filter.ValueEquals, "value", "v", "", "restrict to this value (ie. 1.2.3.4,userName)")
cmdDecisionsList.Flags().StringVarP(filter.ScenarioEquals, "scenario", "s", "", "restrict to this scenario (ie. crowdsecurity/ssh-bf)") cmd.Flags().StringVarP(filter.ScenarioEquals, "scenario", "s", "", "restrict to this scenario (ie. crowdsecurity/ssh-bf)")
cmdDecisionsList.Flags().StringVarP(filter.IPEquals, "ip", "i", "", "restrict to alerts from this source ip (shorthand for --scope ip --value <IP>)") cmd.Flags().StringVarP(filter.IPEquals, "ip", "i", "", "restrict to alerts from this source ip (shorthand for --scope ip --value <IP>)")
cmdDecisionsList.Flags().StringVarP(filter.RangeEquals, "range", "r", "", "restrict to alerts from this source range (shorthand for --scope range --value <RANGE>)") cmd.Flags().StringVarP(filter.RangeEquals, "range", "r", "", "restrict to alerts from this source range (shorthand for --scope range --value <RANGE>)")
cmdDecisionsList.Flags().IntVarP(filter.Limit, "limit", "l", 100, "number of alerts to get (use 0 to remove the limit)") cmd.Flags().IntVarP(filter.Limit, "limit", "l", 100, "number of alerts to get (use 0 to remove the limit)")
cmdDecisionsList.Flags().BoolVar(NoSimu, "no-simu", false, "exclude decisions in simulation mode") cmd.Flags().BoolVar(NoSimu, "no-simu", false, "exclude decisions in simulation mode")
cmdDecisionsList.Flags().BoolVarP(&printMachine, "machine", "m", false, "print machines that triggered decisions") cmd.Flags().BoolVarP(&printMachine, "machine", "m", false, "print machines that triggered decisions")
cmdDecisionsList.Flags().BoolVar(contained, "contained", false, "query decisions contained by range") cmd.Flags().BoolVar(contained, "contained", false, "query decisions contained by range")
return cmdDecisionsList return cmd
} }
func NewDecisionsAddCmd() *cobra.Command { func (cli *cliDecisions) newAddCmd() *cobra.Command {
var ( var (
addIP string addIP string
addRange string addRange string
@@ -277,7 +310,7 @@ func NewDecisionsAddCmd() *cobra.Command {
addType string addType string
) )
var cmdDecisionsAdd = &cobra.Command{ cmd := &cobra.Command{
Use: "add [options]", Use: "add [options]",
Short: "Add decision to LAPI", Short: "Add decision to LAPI",
Example: `cscli decisions add --ip 1.2.3.4 Example: `cscli decisions add --ip 1.2.3.4
@@ -288,7 +321,7 @@ cscli decisions add --scope username --value foobar
/*TBD : fix long and example*/ /*TBD : fix long and example*/
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(cmd *cobra.Command, _ []string) error {
var err error var err error
alerts := models.AddAlertsRequest{} alerts := models.AddAlertsRequest{}
origin := types.CscliOrigin origin := types.CscliOrigin
@@ -302,8 +335,8 @@ cscli decisions add --scope username --value foobar
createdAt := time.Now().UTC().Format(time.RFC3339) createdAt := time.Now().UTC().Format(time.RFC3339)
/*take care of shorthand options*/ /*take care of shorthand options*/
if err := manageCliDecisionAlerts(&addIP, &addRange, &addScope, &addValue); err != nil { if err = manageCliDecisionAlerts(&addIP, &addRange, &addScope, &addValue); err != nil {
log.Fatalf("%s", err) return err
} }
if addIP != "" { if addIP != "" {
@@ -314,11 +347,11 @@ cscli decisions add --scope username --value foobar
addScope = types.Range addScope = types.Range
} else if addValue == "" { } else if addValue == "" {
printHelp(cmd) printHelp(cmd)
log.Fatalf("Missing arguments, a value is required (--ip, --range or --scope and --value)") return errors.New("missing arguments, a value is required (--ip, --range or --scope and --value)")
} }
if addReason == "" { if addReason == "" {
addReason = fmt.Sprintf("manual '%s' from '%s'", addType, csConfig.API.Client.Credentials.Login) addReason = fmt.Sprintf("manual '%s' from '%s'", addType, cli.cfg().API.Client.Credentials.Login)
} }
decision := models.Decision{ decision := models.Decision{
Duration: &addDuration, Duration: &addDuration,
@@ -339,7 +372,7 @@ cscli decisions add --scope username --value foobar
Scenario: &addReason, Scenario: &addReason,
ScenarioVersion: &empty, ScenarioVersion: &empty,
Simulated: &simulated, Simulated: &simulated,
//setting empty scope/value broke plugins, and it didn't seem to be needed anymore w/ latest papi changes // setting empty scope/value broke plugins, and it didn't seem to be needed anymore w/ latest papi changes
Source: &models.Source{ Source: &models.Source{
AsName: empty, AsName: empty,
AsNumber: empty, AsNumber: empty,
@@ -357,27 +390,29 @@ cscli decisions add --scope username --value foobar
_, _, err = Client.Alerts.Add(context.Background(), alerts) _, _, err = Client.Alerts.Add(context.Background(), alerts)
if err != nil { if err != nil {
log.Fatal(err) return err
} }
log.Info("Decision successfully added") log.Info("Decision successfully added")
return nil
}, },
} }
cmdDecisionsAdd.Flags().SortFlags = false cmd.Flags().SortFlags = false
cmdDecisionsAdd.Flags().StringVarP(&addIP, "ip", "i", "", "Source ip (shorthand for --scope ip --value <IP>)") cmd.Flags().StringVarP(&addIP, "ip", "i", "", "Source ip (shorthand for --scope ip --value <IP>)")
cmdDecisionsAdd.Flags().StringVarP(&addRange, "range", "r", "", "Range source ip (shorthand for --scope range --value <RANGE>)") cmd.Flags().StringVarP(&addRange, "range", "r", "", "Range source ip (shorthand for --scope range --value <RANGE>)")
cmdDecisionsAdd.Flags().StringVarP(&addDuration, "duration", "d", "4h", "Decision duration (ie. 1h,4h,30m)") cmd.Flags().StringVarP(&addDuration, "duration", "d", "4h", "Decision duration (ie. 1h,4h,30m)")
cmdDecisionsAdd.Flags().StringVarP(&addValue, "value", "v", "", "The value (ie. --scope username --value foobar)") cmd.Flags().StringVarP(&addValue, "value", "v", "", "The value (ie. --scope username --value foobar)")
cmdDecisionsAdd.Flags().StringVar(&addScope, "scope", types.Ip, "Decision scope (ie. ip,range,username)") cmd.Flags().StringVar(&addScope, "scope", types.Ip, "Decision scope (ie. ip,range,username)")
cmdDecisionsAdd.Flags().StringVarP(&addReason, "reason", "R", "", "Decision reason (ie. scenario-name)") cmd.Flags().StringVarP(&addReason, "reason", "R", "", "Decision reason (ie. scenario-name)")
cmdDecisionsAdd.Flags().StringVarP(&addType, "type", "t", "ban", "Decision type (ie. ban,captcha,throttle)") cmd.Flags().StringVarP(&addType, "type", "t", "ban", "Decision type (ie. ban,captcha,throttle)")
return cmdDecisionsAdd return cmd
} }
func NewDecisionsDeleteCmd() *cobra.Command { func (cli *cliDecisions) newDeleteCmd() *cobra.Command {
var delFilter = apiclient.DecisionsDeleteOpts{ delFilter := apiclient.DecisionsDeleteOpts{
ScopeEquals: new(string), ScopeEquals: new(string),
ValueEquals: new(string), ValueEquals: new(string),
TypeEquals: new(string), TypeEquals: new(string),
@@ -386,11 +421,14 @@ func NewDecisionsDeleteCmd() *cobra.Command {
ScenarioEquals: new(string), ScenarioEquals: new(string),
OriginEquals: new(string), OriginEquals: new(string),
} }
var delDecisionId string
var delDecisionID string
var delDecisionAll bool var delDecisionAll bool
contained := new(bool) contained := new(bool)
var cmdDecisionsDelete = &cobra.Command{ cmd := &cobra.Command{
Use: "delete [options]", Use: "delete [options]",
Short: "Delete decisions", Short: "Delete decisions",
DisableAutoGenTag: true, DisableAutoGenTag: true,
@@ -399,27 +437,30 @@ cscli decisions delete --type captcha
cscli decisions delete -i 1.2.3.4 cscli decisions delete -i 1.2.3.4
cscli decisions delete --id 42 cscli decisions delete --id 42
cscli decisions delete --type captcha cscli decisions delete --type captcha
cscli decisions delete --origin lists --scenario list_name
`, `,
/*TBD : refaire le Long/Example*/ /*TBD : refaire le Long/Example*/
PreRun: func(cmd *cobra.Command, args []string) { PreRunE: func(cmd *cobra.Command, _ []string) error {
if delDecisionAll { if delDecisionAll {
return return nil
} }
if *delFilter.ScopeEquals == "" && *delFilter.ValueEquals == "" && if *delFilter.ScopeEquals == "" && *delFilter.ValueEquals == "" &&
*delFilter.TypeEquals == "" && *delFilter.IPEquals == "" && *delFilter.TypeEquals == "" && *delFilter.IPEquals == "" &&
*delFilter.RangeEquals == "" && *delFilter.ScenarioEquals == "" && *delFilter.RangeEquals == "" && *delFilter.ScenarioEquals == "" &&
*delFilter.OriginEquals == "" && delDecisionId == "" { *delFilter.OriginEquals == "" && delDecisionID == "" {
cmd.Usage() cmd.Usage()
log.Fatalln("At least one filter or --all must be specified") return errors.New("at least one filter or --all must be specified")
} }
return nil
}, },
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
var err error var err error
var decisions *models.DeleteDecisionResponse var decisions *models.DeleteDecisionResponse
/*take care of shorthand options*/ /*take care of shorthand options*/
if err := manageCliDecisionAlerts(delFilter.IPEquals, delFilter.RangeEquals, delFilter.ScopeEquals, delFilter.ValueEquals); err != nil { if err = manageCliDecisionAlerts(delFilter.IPEquals, delFilter.RangeEquals, delFilter.ScopeEquals, delFilter.ValueEquals); err != nil {
log.Fatalf("%s", err) return err
} }
if *delFilter.ScopeEquals == "" { if *delFilter.ScopeEquals == "" {
delFilter.ScopeEquals = nil delFilter.ScopeEquals = nil
@@ -446,224 +487,37 @@ cscli decisions delete --type captcha
delFilter.Contains = new(bool) delFilter.Contains = new(bool)
} }
if delDecisionId == "" { if delDecisionID == "" {
decisions, _, err = Client.Decisions.Delete(context.Background(), delFilter) decisions, _, err = Client.Decisions.Delete(context.Background(), delFilter)
if err != nil { if err != nil {
log.Fatalf("Unable to delete decisions : %v", err) return fmt.Errorf("unable to delete decisions: %v", err)
} }
} else { } else {
if _, err = strconv.Atoi(delDecisionId); err != nil { if _, err = strconv.Atoi(delDecisionID); err != nil {
log.Fatalf("id '%s' is not an integer: %v", delDecisionId, err) return fmt.Errorf("id '%s' is not an integer: %v", delDecisionID, err)
} }
decisions, _, err = Client.Decisions.DeleteOne(context.Background(), delDecisionId) decisions, _, err = Client.Decisions.DeleteOne(context.Background(), delDecisionID)
if err != nil { if err != nil {
log.Fatalf("Unable to delete decision : %v", err) return fmt.Errorf("unable to delete decision: %v", err)
} }
} }
log.Infof("%s decision(s) deleted", decisions.NbDeleted) log.Infof("%s decision(s) deleted", decisions.NbDeleted)
return nil
}, },
} }
cmdDecisionsDelete.Flags().SortFlags = false cmd.Flags().SortFlags = false
cmdDecisionsDelete.Flags().StringVarP(delFilter.IPEquals, "ip", "i", "", "Source ip (shorthand for --scope ip --value <IP>)") cmd.Flags().StringVarP(delFilter.IPEquals, "ip", "i", "", "Source ip (shorthand for --scope ip --value <IP>)")
cmdDecisionsDelete.Flags().StringVarP(delFilter.RangeEquals, "range", "r", "", "Range source ip (shorthand for --scope range --value <RANGE>)") cmd.Flags().StringVarP(delFilter.RangeEquals, "range", "r", "", "Range source ip (shorthand for --scope range --value <RANGE>)")
cmdDecisionsDelete.Flags().StringVarP(delFilter.TypeEquals, "type", "t", "", "the decision type (ie. ban,captcha)") cmd.Flags().StringVarP(delFilter.TypeEquals, "type", "t", "", "the decision type (ie. ban,captcha)")
cmdDecisionsDelete.Flags().StringVarP(delFilter.ValueEquals, "value", "v", "", "the value to match for in the specified scope") cmd.Flags().StringVarP(delFilter.ValueEquals, "value", "v", "", "the value to match for in the specified scope")
cmdDecisionsDelete.Flags().StringVarP(delFilter.ScenarioEquals, "scenario", "s", "", "the scenario name (ie. crowdsecurity/ssh-bf)") cmd.Flags().StringVarP(delFilter.ScenarioEquals, "scenario", "s", "", "the scenario name (ie. crowdsecurity/ssh-bf)")
cmdDecisionsDelete.Flags().StringVar(delFilter.OriginEquals, "origin", "", fmt.Sprintf("the value to match for the specified origin (%s ...)", strings.Join(types.GetOrigins(), ","))) cmd.Flags().StringVar(delFilter.OriginEquals, "origin", "", fmt.Sprintf("the value to match for the specified origin (%s ...)", strings.Join(types.GetOrigins(), ",")))
cmdDecisionsDelete.Flags().StringVar(&delDecisionId, "id", "", "decision id") cmd.Flags().StringVar(&delDecisionID, "id", "", "decision id")
cmdDecisionsDelete.Flags().BoolVar(&delDecisionAll, "all", false, "delete all decisions") cmd.Flags().BoolVar(&delDecisionAll, "all", false, "delete all decisions")
cmdDecisionsDelete.Flags().BoolVar(contained, "contained", false, "query decisions contained by range") cmd.Flags().BoolVar(contained, "contained", false, "query decisions contained by range")
return cmdDecisionsDelete return cmd
}
func NewDecisionsImportCmd() *cobra.Command {
var (
defaultDuration = "4h"
defaultScope = "ip"
defaultType = "ban"
defaultReason = "manual"
importDuration string
importScope string
importReason string
importType string
importFile string
batchSize int
)
var cmdDecisionImport = &cobra.Command{
Use: "import [options]",
Short: "Import decisions from json or csv file",
Long: "expected format :\n" +
"csv : any of duration,origin,reason,scope,type,value, with a header line\n" +
`json : {"duration" : "24h", "origin" : "my-list", "reason" : "my_scenario", "scope" : "ip", "type" : "ban", "value" : "x.y.z.z"}`,
DisableAutoGenTag: true,
Example: `decisions.csv :
duration,scope,value
24h,ip,1.2.3.4
cscsli decisions import -i decisions.csv
decisions.json :
[{"duration" : "4h", "scope" : "ip", "type" : "ban", "value" : "1.2.3.4"}]
`,
Run: func(cmd *cobra.Command, args []string) {
if importFile == "" {
log.Fatalf("Please provide a input file containing decisions with -i flag")
}
csvData, err := os.ReadFile(importFile)
if err != nil {
log.Fatalf("unable to open '%s': %s", importFile, err)
}
type decisionRaw struct {
Duration string `csv:"duration,omitempty" json:"duration,omitempty"`
Origin string `csv:"origin,omitempty" json:"origin,omitempty"`
Scenario string `csv:"reason,omitempty" json:"reason,omitempty"`
Scope string `csv:"scope,omitempty" json:"scope,omitempty"`
Type string `csv:"type,omitempty" json:"type,omitempty"`
Value string `csv:"value" json:"value"`
}
var decisionsListRaw []decisionRaw
switch fileFormat := filepath.Ext(importFile); fileFormat {
case ".json":
if err := json.Unmarshal(csvData, &decisionsListRaw); err != nil {
log.Fatalf("unable to unmarshall json: '%s'", err)
}
case ".csv":
if err := csvutil.Unmarshal(csvData, &decisionsListRaw); err != nil {
log.Fatalf("unable to unmarshall csv: '%s'", err)
}
default:
log.Fatalf("file format not supported for '%s'. supported format are 'json' and 'csv'", importFile)
}
decisionsList := make([]*models.Decision, 0)
for i, decisionLine := range decisionsListRaw {
line := i + 2
if decisionLine.Value == "" {
log.Fatalf("please provide a 'value' in your csv line %d", line)
}
/*deal with defaults and cli-override*/
if decisionLine.Duration == "" {
decisionLine.Duration = defaultDuration
log.Debugf("No 'duration' line %d, using default value: '%s'", line, defaultDuration)
}
if importDuration != "" {
decisionLine.Duration = importDuration
log.Debugf("'duration' line %d, using supplied value: '%s'", line, importDuration)
}
decisionLine.Origin = types.CscliImportOrigin
if decisionLine.Scenario == "" {
decisionLine.Scenario = defaultReason
log.Debugf("No 'reason' line %d, using value: '%s'", line, decisionLine.Scenario)
}
if importReason != "" {
decisionLine.Scenario = importReason
log.Debugf("No 'reason' line %d, using supplied value: '%s'", line, importReason)
}
if decisionLine.Type == "" {
decisionLine.Type = defaultType
log.Debugf("No 'type' line %d, using default value: '%s'", line, decisionLine.Type)
}
if importType != "" {
decisionLine.Type = importType
log.Debugf("'type' line %d, using supplied value: '%s'", line, importType)
}
if decisionLine.Scope == "" {
decisionLine.Scope = defaultScope
log.Debugf("No 'scope' line %d, using default value: '%s'", line, decisionLine.Scope)
}
if importScope != "" {
decisionLine.Scope = importScope
log.Debugf("'scope' line %d, using supplied value: '%s'", line, importScope)
}
decision := models.Decision{
Value: ptr.Of(decisionLine.Value),
Duration: ptr.Of(decisionLine.Duration),
Origin: ptr.Of(decisionLine.Origin),
Scenario: ptr.Of(decisionLine.Scenario),
Type: ptr.Of(decisionLine.Type),
Scope: ptr.Of(decisionLine.Scope),
Simulated: new(bool),
}
decisionsList = append(decisionsList, &decision)
}
alerts := models.AddAlertsRequest{}
if batchSize > 0 {
for i := 0; i < len(decisionsList); i += batchSize {
end := i + batchSize
if end > len(decisionsList) {
end = len(decisionsList)
}
decisionBatch := decisionsList[i:end]
importAlert := models.Alert{
CreatedAt: time.Now().UTC().Format(time.RFC3339),
Scenario: ptr.Of(fmt.Sprintf("import %s : %d IPs", importFile, len(decisionBatch))),
Message: ptr.Of(""),
Events: []*models.Event{},
Source: &models.Source{
Scope: ptr.Of(""),
Value: ptr.Of(""),
},
StartAt: ptr.Of(time.Now().UTC().Format(time.RFC3339)),
StopAt: ptr.Of(time.Now().UTC().Format(time.RFC3339)),
Capacity: ptr.Of(int32(0)),
Simulated: ptr.Of(false),
EventsCount: ptr.Of(int32(len(decisionBatch))),
Leakspeed: ptr.Of(""),
ScenarioHash: ptr.Of(""),
ScenarioVersion: ptr.Of(""),
Decisions: decisionBatch,
}
alerts = append(alerts, &importAlert)
}
} else {
importAlert := models.Alert{
CreatedAt: time.Now().UTC().Format(time.RFC3339),
Scenario: ptr.Of(fmt.Sprintf("import %s : %d IPs", importFile, len(decisionsList))),
Message: ptr.Of(""),
Events: []*models.Event{},
Source: &models.Source{
Scope: ptr.Of(""),
Value: ptr.Of(""),
},
StartAt: ptr.Of(time.Now().UTC().Format(time.RFC3339)),
StopAt: ptr.Of(time.Now().UTC().Format(time.RFC3339)),
Capacity: ptr.Of(int32(0)),
Simulated: ptr.Of(false),
EventsCount: ptr.Of(int32(len(decisionsList))),
Leakspeed: ptr.Of(""),
ScenarioHash: ptr.Of(""),
ScenarioVersion: ptr.Of(""),
Decisions: decisionsList,
}
alerts = append(alerts, &importAlert)
}
if len(decisionsList) > 1000 {
log.Infof("You are about to add %d decisions, this may take a while", len(decisionsList))
}
_, _, err = Client.Alerts.Add(context.Background(), alerts)
if err != nil {
log.Fatal(err)
}
log.Infof("%d decisions successfully imported", len(decisionsList))
},
}
cmdDecisionImport.Flags().SortFlags = false
cmdDecisionImport.Flags().StringVarP(&importFile, "input", "i", "", "Input file")
cmdDecisionImport.Flags().StringVarP(&importDuration, "duration", "d", "", "Decision duration (ie. 1h,4h,30m)")
cmdDecisionImport.Flags().StringVar(&importScope, "scope", types.Ip, "Decision scope (ie. ip,range,username)")
cmdDecisionImport.Flags().StringVarP(&importReason, "reason", "R", "", "Decision reason (ie. scenario-name)")
cmdDecisionImport.Flags().StringVarP(&importType, "type", "t", "", "Decision type (ie. ban,captcha,throttle)")
cmdDecisionImport.Flags().IntVar(&batchSize, "batch", 0, "Split import in batches of N decisions")
return cmdDecisionImport
} }


@ -0,0 +1,280 @@
package main
import (
"bufio"
"bytes"
"context"
"encoding/json"
"errors"
"fmt"
"io"
"os"
"strings"
"time"
"github.com/jszwec/csvutil"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/crowdsecurity/go-cs-lib/ptr"
"github.com/crowdsecurity/go-cs-lib/slicetools"
"github.com/crowdsecurity/crowdsec/pkg/models"
"github.com/crowdsecurity/crowdsec/pkg/types"
)
// decisionRaw is only used to unmarshall json/csv decisions
type decisionRaw struct {
Duration string `csv:"duration,omitempty" json:"duration,omitempty"`
Scenario string `csv:"reason,omitempty" json:"reason,omitempty"`
Scope string `csv:"scope,omitempty" json:"scope,omitempty"`
Type string `csv:"type,omitempty" json:"type,omitempty"`
Value string `csv:"value" json:"value"`
}
func parseDecisionList(content []byte, format string) ([]decisionRaw, error) {
ret := []decisionRaw{}
switch format {
case "values":
log.Infof("Parsing values")
scanner := bufio.NewScanner(bytes.NewReader(content))
for scanner.Scan() {
value := strings.TrimSpace(scanner.Text())
ret = append(ret, decisionRaw{Value: value})
}
if err := scanner.Err(); err != nil {
return nil, fmt.Errorf("unable to parse values: '%s'", err)
}
case "json":
log.Infof("Parsing json")
if err := json.Unmarshal(content, &ret); err != nil {
return nil, err
}
case "csv":
log.Infof("Parsing csv")
if err := csvutil.Unmarshal(content, &ret); err != nil {
return nil, fmt.Errorf("unable to parse csv: '%s'", err)
}
default:
return nil, fmt.Errorf("invalid format '%s', expected one of 'json', 'csv', 'values'", format)
}
return ret, nil
}
func (cli *cliDecisions) runImport(cmd *cobra.Command, args []string) error {
flags := cmd.Flags()
input, err := flags.GetString("input")
if err != nil {
return err
}
defaultDuration, err := flags.GetString("duration")
if err != nil {
return err
}
if defaultDuration == "" {
return errors.New("--duration cannot be empty")
}
defaultScope, err := flags.GetString("scope")
if err != nil {
return err
}
if defaultScope == "" {
return errors.New("--scope cannot be empty")
}
defaultReason, err := flags.GetString("reason")
if err != nil {
return err
}
if defaultReason == "" {
return errors.New("--reason cannot be empty")
}
defaultType, err := flags.GetString("type")
if err != nil {
return err
}
if defaultType == "" {
return errors.New("--type cannot be empty")
}
batchSize, err := flags.GetInt("batch")
if err != nil {
return err
}
format, err := flags.GetString("format")
if err != nil {
return err
}
var (
content []byte
fin *os.File
)
// set format if the file has a json or csv extension
if format == "" {
if strings.HasSuffix(input, ".json") {
format = "json"
} else if strings.HasSuffix(input, ".csv") {
format = "csv"
}
}
if format == "" {
return errors.New("unable to guess format from file extension, please provide a format with --format flag")
}
if input == "-" {
fin = os.Stdin
input = "stdin"
} else {
fin, err = os.Open(input)
if err != nil {
return fmt.Errorf("unable to open %s: %s", input, err)
}
}
content, err = io.ReadAll(fin)
if err != nil {
return fmt.Errorf("unable to read from %s: %s", input, err)
}
decisionsListRaw, err := parseDecisionList(content, format)
if err != nil {
return err
}
decisions := make([]*models.Decision, len(decisionsListRaw))
for i, d := range decisionsListRaw {
if d.Value == "" {
return fmt.Errorf("item %d: missing 'value'", i)
}
if d.Duration == "" {
d.Duration = defaultDuration
log.Debugf("item %d: missing 'duration', using default '%s'", i, defaultDuration)
}
if d.Scenario == "" {
d.Scenario = defaultReason
log.Debugf("item %d: missing 'reason', using default '%s'", i, defaultReason)
}
if d.Type == "" {
d.Type = defaultType
log.Debugf("item %d: missing 'type', using default '%s'", i, defaultType)
}
if d.Scope == "" {
d.Scope = defaultScope
log.Debugf("item %d: missing 'scope', using default '%s'", i, defaultScope)
}
decisions[i] = &models.Decision{
Value: ptr.Of(d.Value),
Duration: ptr.Of(d.Duration),
Origin: ptr.Of(types.CscliImportOrigin),
Scenario: ptr.Of(d.Scenario),
Type: ptr.Of(d.Type),
Scope: ptr.Of(d.Scope),
Simulated: ptr.Of(false),
}
}
if len(decisions) > 1000 {
log.Infof("You are about to add %d decisions, this may take a while", len(decisions))
}
for _, chunk := range slicetools.Chunks(decisions, batchSize) {
log.Debugf("Processing chunk of %d decisions", len(chunk))
importAlert := models.Alert{
CreatedAt: time.Now().UTC().Format(time.RFC3339),
Scenario: ptr.Of(fmt.Sprintf("import %s: %d IPs", input, len(chunk))),
Message: ptr.Of(""),
Events: []*models.Event{},
Source: &models.Source{
Scope: ptr.Of(""),
Value: ptr.Of(""),
},
StartAt: ptr.Of(time.Now().UTC().Format(time.RFC3339)),
StopAt: ptr.Of(time.Now().UTC().Format(time.RFC3339)),
Capacity: ptr.Of(int32(0)),
Simulated: ptr.Of(false),
EventsCount: ptr.Of(int32(len(chunk))),
Leakspeed: ptr.Of(""),
ScenarioHash: ptr.Of(""),
ScenarioVersion: ptr.Of(""),
Decisions: chunk,
}
_, _, err = Client.Alerts.Add(context.Background(), models.AddAlertsRequest{&importAlert})
if err != nil {
return err
}
}
log.Infof("Imported %d decisions", len(decisions))
return nil
}
func (cli *cliDecisions) newImportCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "import [options]",
Short: "Import decisions from a file or pipe",
Long: "expected format:\n" +
"csv : any of duration,reason,scope,type,value, with a header line\n" +
"json :" + "`{" + `"duration" : "24h", "reason" : "my_scenario", "scope" : "ip", "type" : "ban", "value" : "x.y.z.z"` + "}`",
Args: cobra.NoArgs,
DisableAutoGenTag: true,
Example: `decisions.csv:
duration,scope,value
24h,ip,1.2.3.4
$ cscli decisions import -i decisions.csv
decisions.json:
[{"duration" : "4h", "scope" : "ip", "type" : "ban", "value" : "1.2.3.4"}]
The file format is detected from the extension, but can be forced with the --format option
which is required when reading from standard input.
Raw values, standard input:
$ echo "1.2.3.4" | cscli decisions import -i - --format values
`,
RunE: cli.runImport,
}
flags := cmd.Flags()
flags.SortFlags = false
flags.StringP("input", "i", "", "Input file")
flags.StringP("duration", "d", "4h", "Decision duration: 1h,4h,30m")
flags.String("scope", types.Ip, "Decision scope: ip,range,username")
flags.StringP("reason", "R", "manual", "Decision reason: <scenario-name>")
flags.StringP("type", "t", "ban", "Decision type: ban,captcha,throttle")
flags.Int("batch", 0, "Split import in batches of N decisions")
flags.String("format", "", "Input format: 'json', 'csv' or 'values' (each line is a value, no headers)")
cmd.MarkFlagRequired("input")
return cmd
}
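For reference, a minimal self-contained sketch of the batch splitting used above, with a local chunks helper standing in for slicetools.Chunks (an illustration only; the real helper may differ in details such as how a batch size of 0 is handled — here 0 or less yields a single chunk, which mirrors the single-alert fallback when --batch is not set):

package main

import "fmt"

// chunks splits items into consecutive slices of at most size elements.
// A size <= 0 returns everything as one chunk.
func chunks[T any](items []T, size int) [][]T {
	if size <= 0 {
		return [][]T{items}
	}
	out := make([][]T, 0, (len(items)+size-1)/size)
	for start := 0; start < len(items); start += size {
		end := start + size
		if end > len(items) {
			end = len(items)
		}
		out = append(out, items[start:end])
	}
	return out
}

func main() {
	values := []string{"1.2.3.4", "5.6.7.8", "9.9.9.9", "10.0.0.1", "10.0.0.2"}
	for i, batch := range chunks(values, 2) {
		// each batch becomes one alert carrying len(batch) decisions
		fmt.Printf("alert %d: %v\n", i, batch)
	}
}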


@ -8,13 +8,15 @@ import (
"github.com/crowdsecurity/crowdsec/pkg/models" "github.com/crowdsecurity/crowdsec/pkg/models"
) )
func decisionsTable(out io.Writer, alerts *models.GetAlertsResponse, printMachine bool) { func (cli *cliDecisions) decisionsTable(out io.Writer, alerts *models.GetAlertsResponse, printMachine bool) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
header := []string{"ID", "Source", "Scope:Value", "Reason", "Action", "Country", "AS", "Events", "expiration", "Alert ID"} header := []string{"ID", "Source", "Scope:Value", "Reason", "Action", "Country", "AS", "Events", "expiration", "Alert ID"}
if printMachine { if printMachine {
header = append(header, "Machine") header = append(header, "Machine")
} }
t.SetHeaders(header...) t.SetHeaders(header...)
for _, alertItem := range *alerts { for _, alertItem := range *alerts {
@ -22,6 +24,7 @@ func decisionsTable(out io.Writer, alerts *models.GetAlertsResponse, printMachin
if *alertItem.Simulated { if *alertItem.Simulated {
*decisionItem.Type = fmt.Sprintf("(simul)%s", *decisionItem.Type) *decisionItem.Type = fmt.Sprintf("(simul)%s", *decisionItem.Type)
} }
row := []string{ row := []string{
strconv.Itoa(int(decisionItem.ID)), strconv.Itoa(int(decisionItem.ID)),
*decisionItem.Origin, *decisionItem.Origin,
@ -42,5 +45,6 @@ func decisionsTable(out io.Writer, alerts *models.GetAlertsResponse, printMachin
t.AddRow(row...) t.AddRow(row...)
} }
} }
t.Render() t.Render()
} }

cmd/crowdsec-cli/doc.go (new file, 51 lines)

@ -0,0 +1,51 @@
package main
import (
"fmt"
"path/filepath"
"strings"
"github.com/spf13/cobra"
"github.com/spf13/cobra/doc"
)
type cliDoc struct{}
func NewCLIDoc() *cliDoc {
return &cliDoc{}
}
func (cli cliDoc) NewCommand(rootCmd *cobra.Command) *cobra.Command {
cmd := &cobra.Command{
Use: "doc",
Short: "Generate the documentation in `./doc/`. Directory must exist.",
Args: cobra.ExactArgs(0),
Hidden: true,
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, _ []string) error {
if err := doc.GenMarkdownTreeCustom(rootCmd, "./doc/", cli.filePrepender, cli.linkHandler); err != nil {
return fmt.Errorf("failed to generate cobra doc: %s", err)
}
return nil
},
}
return cmd
}
func (cli cliDoc) filePrepender(filename string) string {
const header = `---
id: %s
title: %s
---
`
name := filepath.Base(filename)
base := strings.TrimSuffix(name, filepath.Ext(name))
return fmt.Sprintf(header, base, strings.ReplaceAll(base, "_", " "))
}
func (cli cliDoc) linkHandler(name string) string {
return fmt.Sprintf("/cscli/%s", name)
}
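To show what the prepender produces, a small standalone sketch that applies the same header logic to a hypothetical generated file name (cscli_decisions.md is assumed for illustration):

package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func main() {
	const header = "---\nid: %s\ntitle: %s\n---\n"
	filename := "./doc/cscli_decisions.md"               // hypothetical output of cobra/doc
	name := filepath.Base(filename)                      // "cscli_decisions.md"
	base := strings.TrimSuffix(name, filepath.Ext(name)) // "cscli_decisions"
	// prints a front matter block with id "cscli_decisions" and title "cscli decisions"
	fmt.Printf(header, base, strings.ReplaceAll(base, "_", " "))
}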


@ -2,6 +2,7 @@ package main
import ( import (
"bufio" "bufio"
"errors"
"fmt" "fmt"
"io" "io"
"os" "os"
@ -11,166 +12,58 @@ import (
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/crowdsecurity/crowdsec/pkg/dumps"
"github.com/crowdsecurity/crowdsec/pkg/hubtest" "github.com/crowdsecurity/crowdsec/pkg/hubtest"
"github.com/crowdsecurity/crowdsec/pkg/types"
) )
func runExplain(cmd *cobra.Command, args []string) error { func getLineCountForFile(filepath string) (int, error) {
flags := cmd.Flags() f, err := os.Open(filepath)
logFile, err := flags.GetString("file")
if err != nil { if err != nil {
return err return 0, err
} }
defer f.Close()
dsn, err := flags.GetString("dsn") lc := 0
if err != nil { fs := bufio.NewReader(f)
return err
}
logLine, err := flags.GetString("log") for {
if err != nil { input, err := fs.ReadBytes('\n')
return err if len(input) > 1 {
} lc++
logType, err := flags.GetString("type")
if err != nil {
return err
}
opts := hubtest.DumpOpts{}
opts.Details, err = flags.GetBool("verbose")
if err != nil {
return err
}
opts.SkipOk, err = flags.GetBool("failures")
if err != nil {
return err
}
opts.ShowNotOkParsers, err = flags.GetBool("only-successful-parsers")
opts.ShowNotOkParsers = !opts.ShowNotOkParsers
if err != nil {
return err
}
crowdsec, err := flags.GetString("crowdsec")
if err != nil {
return err
}
fileInfo, _ := os.Stdin.Stat()
if logType == "" || (logLine == "" && logFile == "" && dsn == "") {
printHelp(cmd)
fmt.Println()
fmt.Printf("Please provide --type flag\n")
os.Exit(1)
}
if logFile == "-" && ((fileInfo.Mode() & os.ModeCharDevice) == os.ModeCharDevice) {
return fmt.Errorf("the option -f - is intended to work with pipes")
}
var f *os.File
// using empty string fallback to /tmp
dir, err := os.MkdirTemp("", "cscli_explain")
if err != nil {
return fmt.Errorf("couldn't create a temporary directory to store cscli explain result: %s", err)
}
tmpFile := ""
// we create a temporary log file if a log line/stdin has been provided
if logLine != "" || logFile == "-" {
tmpFile = filepath.Join(dir, "cscli_test_tmp.log")
f, err = os.Create(tmpFile)
if err != nil {
return err
} }
if logLine != "" { if err != nil && err == io.EOF {
_, err = f.WriteString(logLine) break
if err != nil {
return err
}
} else if logFile == "-" {
reader := bufio.NewReader(os.Stdin)
errCount := 0
for {
input, err := reader.ReadBytes('\n')
if err != nil && err == io.EOF {
break
}
_, err = f.Write(input)
if err != nil {
errCount++
}
}
if errCount > 0 {
log.Warnf("Failed to write %d lines to tmp file", errCount)
}
}
f.Close()
// this is the file that was going to be read by crowdsec anyway
logFile = tmpFile
}
if logFile != "" {
absolutePath, err := filepath.Abs(logFile)
if err != nil {
return fmt.Errorf("unable to get absolute path of '%s', exiting", logFile)
}
dsn = fmt.Sprintf("file://%s", absolutePath)
lineCount := types.GetLineCountForFile(absolutePath)
if lineCount > 100 {
log.Warnf("log file contains %d lines. This may take lot of resources.", lineCount)
} }
} }
if dsn == "" { return lc, nil
return fmt.Errorf("no acquisition (--file or --dsn) provided, can't run cscli test")
}
cmdArgs := []string{"-c", ConfigFilePath, "-type", logType, "-dsn", dsn, "-dump-data", dir, "-no-api"}
crowdsecCmd := exec.Command(crowdsec, cmdArgs...)
output, err := crowdsecCmd.CombinedOutput()
if err != nil {
fmt.Println(string(output))
return fmt.Errorf("fail to run crowdsec for test: %v", err)
}
// rm the temporary log file if only a log line/stdin was provided
if tmpFile != "" {
if err := os.Remove(tmpFile); err != nil {
return fmt.Errorf("unable to remove tmp log file '%s': %+v", tmpFile, err)
}
}
parserDumpFile := filepath.Join(dir, hubtest.ParserResultFileName)
bucketStateDumpFile := filepath.Join(dir, hubtest.BucketPourResultFileName)
parserDump, err := hubtest.LoadParserDump(parserDumpFile)
if err != nil {
return fmt.Errorf("unable to load parser dump result: %s", err)
}
bucketStateDump, err := hubtest.LoadBucketPourDump(bucketStateDumpFile)
if err != nil {
return fmt.Errorf("unable to load bucket dump result: %s", err)
}
hubtest.DumpTree(*parserDump, *bucketStateDump, opts)
if err := os.RemoveAll(dir); err != nil {
return fmt.Errorf("unable to delete temporary directory '%s': %s", dir, err)
}
return nil
} }
func NewExplainCmd() *cobra.Command { type cliExplain struct {
cmdExplain := &cobra.Command{ cfg configGetter
flags struct {
logFile string
dsn string
logLine string
logType string
details bool
skipOk bool
onlySuccessfulParsers bool
noClean bool
crowdsec string
labels string
}
}
func NewCLIExplain(cfg configGetter) *cliExplain {
return &cliExplain{
cfg: cfg,
}
}
func (cli *cliExplain) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "explain", Use: "explain",
Short: "Explain log pipeline", Short: "Explain log pipeline",
Long: ` Long: `
@ -184,19 +77,173 @@ tail -n 5 myfile.log | cscli explain --type nginx -f -
`, `,
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: runExplain, RunE: func(_ *cobra.Command, _ []string) error {
return cli.run()
},
PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
fileInfo, _ := os.Stdin.Stat()
if cli.flags.logFile == "-" && ((fileInfo.Mode() & os.ModeCharDevice) == os.ModeCharDevice) {
return errors.New("the option -f - is intended to work with pipes")
}
return nil
},
} }
flags := cmdExplain.Flags() flags := cmd.Flags()
flags.StringP("file", "f", "", "Log file to test") flags.StringVarP(&cli.flags.logFile, "file", "f", "", "Log file to test")
flags.StringP("dsn", "d", "", "DSN to test") flags.StringVarP(&cli.flags.dsn, "dsn", "d", "", "DSN to test")
flags.StringP("log", "l", "", "Log line to test") flags.StringVarP(&cli.flags.logLine, "log", "l", "", "Log line to test")
flags.StringP("type", "t", "", "Type of the acquisition to test") flags.StringVarP(&cli.flags.logType, "type", "t", "", "Type of the acquisition to test")
flags.BoolP("verbose", "v", false, "Display individual changes") flags.StringVar(&cli.flags.labels, "labels", "", "Additional labels to add to the acquisition format (key:value,key2:value2)")
flags.Bool("failures", false, "Only show failed lines") flags.BoolVarP(&cli.flags.details, "verbose", "v", false, "Display individual changes")
flags.Bool("only-successful-parsers", false, "Only show successful parsers") flags.BoolVar(&cli.flags.skipOk, "failures", false, "Only show failed lines")
flags.String("crowdsec", "crowdsec", "Path to crowdsec") flags.BoolVar(&cli.flags.onlySuccessfulParsers, "only-successful-parsers", false, "Only show successful parsers")
flags.StringVar(&cli.flags.crowdsec, "crowdsec", "crowdsec", "Path to crowdsec")
flags.BoolVar(&cli.flags.noClean, "no-clean", false, "Don't clean runtime environment after tests")
return cmdExplain cmd.MarkFlagRequired("type")
cmd.MarkFlagsOneRequired("log", "file", "dsn")
return cmd
}
func (cli *cliExplain) run() error {
logFile := cli.flags.logFile
logLine := cli.flags.logLine
logType := cli.flags.logType
dsn := cli.flags.dsn
labels := cli.flags.labels
crowdsec := cli.flags.crowdsec
opts := dumps.DumpOpts{
Details: cli.flags.details,
SkipOk: cli.flags.skipOk,
ShowNotOkParsers: !cli.flags.onlySuccessfulParsers,
}
var f *os.File
// using empty string fallback to /tmp
dir, err := os.MkdirTemp("", "cscli_explain")
if err != nil {
return fmt.Errorf("couldn't create a temporary directory to store cscli explain result: %w", err)
}
defer func() {
if cli.flags.noClean {
return
}
if _, err := os.Stat(dir); !os.IsNotExist(err) {
if err := os.RemoveAll(dir); err != nil {
log.Errorf("unable to delete temporary directory '%s': %s", dir, err)
}
}
}()
// we create a temporary log file if a log line/stdin has been provided
if logLine != "" || logFile == "-" {
tmpFile := filepath.Join(dir, "cscli_test_tmp.log")
f, err = os.Create(tmpFile)
if err != nil {
return err
}
if logLine != "" {
_, err = f.WriteString(logLine)
if err != nil {
return err
}
} else if logFile == "-" {
reader := bufio.NewReader(os.Stdin)
errCount := 0
for {
input, err := reader.ReadBytes('\n')
if err != nil && errors.Is(err, io.EOF) {
break
}
if len(input) > 1 {
_, err = f.Write(input)
}
if err != nil || len(input) <= 1 {
errCount++
}
}
if errCount > 0 {
log.Warnf("Failed to write %d lines to %s", errCount, tmpFile)
}
}
f.Close()
// this is the file that was going to be read by crowdsec anyway
logFile = tmpFile
}
if logFile != "" {
absolutePath, err := filepath.Abs(logFile)
if err != nil {
return fmt.Errorf("unable to get absolute path of '%s', exiting", logFile)
}
dsn = fmt.Sprintf("file://%s", absolutePath)
lineCount, err := getLineCountForFile(absolutePath)
if err != nil {
return err
}
log.Debugf("file %s has %d lines", absolutePath, lineCount)
if lineCount == 0 {
return fmt.Errorf("the log file is empty: %s", absolutePath)
}
if lineCount > 100 {
log.Warnf("%s contains %d lines. This may take a lot of resources.", absolutePath, lineCount)
}
}
if dsn == "" {
return errors.New("no acquisition (--file or --dsn) provided, can't run cscli test")
}
cmdArgs := []string{"-c", ConfigFilePath, "-type", logType, "-dsn", dsn, "-dump-data", dir, "-no-api"}
if labels != "" {
log.Debugf("adding labels %s", labels)
cmdArgs = append(cmdArgs, "-label", labels)
}
crowdsecCmd := exec.Command(crowdsec, cmdArgs...)
output, err := crowdsecCmd.CombinedOutput()
if err != nil {
fmt.Println(string(output))
return fmt.Errorf("fail to run crowdsec for test: %w", err)
}
parserDumpFile := filepath.Join(dir, hubtest.ParserResultFileName)
bucketStateDumpFile := filepath.Join(dir, hubtest.BucketPourResultFileName)
parserDump, err := dumps.LoadParserDump(parserDumpFile)
if err != nil {
return fmt.Errorf("unable to load parser dump result: %w", err)
}
bucketStateDump, err := dumps.LoadBucketPourDump(bucketStateDumpFile)
if err != nil {
return fmt.Errorf("unable to load bucket dump result: %w", err)
}
dumps.DumpTree(*parserDump, *bucketStateDump, opts)
return nil
} }
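A note on the counting above: bufio.Reader.ReadBytes('\n') returns the delimiter with the data, so a blank line comes back as a single byte and the len(input) > 1 check skips it. A self-contained sketch of the same logic on made-up sample data:

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// three non-empty lines and one blank line; only the non-empty ones are counted
	sample := "line one\n\nline two\nline three\n"
	fs := bufio.NewReader(strings.NewReader(sample))
	lc := 0
	for {
		input, err := fs.ReadBytes('\n')
		if len(input) > 1 { // a bare "\n" is length 1 and ignored
			lc++
		}
		if err != nil { // io.EOF (or any read error) ends the loop
			break
		}
	}
	fmt.Println(lc) // 3
}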

cmd/crowdsec-cli/flag.go (new file, 29 lines)

@ -0,0 +1,29 @@
package main
// Custom types for flag validation and conversion.
import (
"errors"
)
type MachinePassword string
func (p *MachinePassword) String() string {
return string(*p)
}
func (p *MachinePassword) Set(v string) error {
// a password can't be more than 72 characters
// due to bcrypt limitations
if len(v) > 72 {
return errors.New("password too long (max 72 characters)")
}
*p = MachinePassword(v)
return nil
}
func (p *MachinePassword) Type() string {
return "string"
}
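Because MachinePassword implements String, Set and Type, it satisfies the pflag.Value interface and can be registered with Var/VarP, so the length check runs while the flag is parsed. A hedged sketch of such wiring (the command and flag names are illustrative, not necessarily what cscli uses; the type is restated so the example is self-contained):

package main

import (
	"errors"
	"fmt"

	"github.com/spf13/cobra"
)

type MachinePassword string

func (p *MachinePassword) String() string { return string(*p) }

func (p *MachinePassword) Set(v string) error {
	// because of bcrypt's 72-byte limit, longer passwords are rejected up front
	if len(v) > 72 {
		return errors.New("password too long (max 72 characters)")
	}
	*p = MachinePassword(v)
	return nil
}

func (p *MachinePassword) Type() string { return "string" }

func main() {
	var password MachinePassword

	cmd := &cobra.Command{
		Use: "add",
		RunE: func(_ *cobra.Command, _ []string) error {
			fmt.Printf("password accepted, length %d\n", len(password))
			return nil
		},
	}
	// validation happens inside Set() while cobra/pflag parses the flag
	cmd.Flags().VarP(&password, "password", "p", "machine password")

	cmd.SetArgs([]string{"--password", "s3cret"})
	_ = cmd.Execute()
}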


@ -1,164 +1,228 @@
package main package main
import ( import (
"errors" "encoding/json"
"fmt" "fmt"
"github.com/fatih/color" "github.com/fatih/color"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"gopkg.in/yaml.v3"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/cwhub" "github.com/crowdsecurity/crowdsec/pkg/cwhub"
) )
func NewHubCmd() *cobra.Command { type cliHub struct {
var cmdHub = &cobra.Command{ cfg configGetter
}
func NewCLIHub(cfg configGetter) *cliHub {
return &cliHub{
cfg: cfg,
}
}
func (cli *cliHub) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "hub [action]", Use: "hub [action]",
Short: "Manage Hub", Short: "Manage hub index",
Long: ` Long: `Hub management
Hub management
List/update parsers/scenarios/postoverflows/collections from [Crowdsec Hub](https://hub.crowdsec.net). List/update parsers/scenarios/postoverflows/collections from [Crowdsec Hub](https://hub.crowdsec.net).
The Hub is managed by cscli, to get the latest hub files from [Crowdsec Hub](https://hub.crowdsec.net), you need to update. The Hub is managed by cscli, to get the latest hub files from [Crowdsec Hub](https://hub.crowdsec.net), you need to update.`,
`, Example: `cscli hub list
Example: ` cscli hub update
cscli hub list # List all installed configurations cscli hub upgrade`,
cscli hub update # Download list of available configurations from the hub
`,
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if csConfig.Cscli == nil {
return fmt.Errorf("you must configure cli before interacting with hub")
}
return nil
},
} }
cmdHub.PersistentFlags().StringVarP(&cwhub.HubBranch, "branch", "b", "", "Use given branch from hub")
cmdHub.AddCommand(NewHubListCmd()) cmd.AddCommand(cli.newListCmd())
cmdHub.AddCommand(NewHubUpdateCmd()) cmd.AddCommand(cli.newUpdateCmd())
cmdHub.AddCommand(NewHubUpgradeCmd()) cmd.AddCommand(cli.newUpgradeCmd())
cmd.AddCommand(cli.newTypesCmd())
return cmdHub return cmd
} }
func NewHubListCmd() *cobra.Command { func (cli *cliHub) list(all bool) error {
var cmdHubList = &cobra.Command{ hub, err := require.Hub(cli.cfg(), nil, log.StandardLogger())
if err != nil {
return err
}
for _, v := range hub.Warnings {
log.Info(v)
}
for _, line := range hub.ItemStats() {
log.Info(line)
}
items := make(map[string][]*cwhub.Item)
for _, itemType := range cwhub.ItemTypes {
items[itemType], err = selectItems(hub, itemType, nil, !all)
if err != nil {
return err
}
}
err = listItems(color.Output, cwhub.ItemTypes, items, true)
if err != nil {
return err
}
return nil
}
func (cli *cliHub) newListCmd() *cobra.Command {
var all bool
cmd := &cobra.Command{
Use: "list [-a]", Use: "list [-a]",
Short: "List installed configs", Short: "List all installed configurations",
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
return cli.list(all)
if err := csConfig.LoadHub(); err != nil {
log.Fatal(err)
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
log.Info("Run 'sudo cscli hub update' to get the hub index")
log.Fatalf("Failed to get Hub index : %v", err)
}
//use LocalSync to get warnings about tainted / outdated items
_, warn := cwhub.LocalSync(csConfig.Hub)
for _, v := range warn {
log.Info(v)
}
cwhub.DisplaySummary()
ListItems(color.Output, []string{
cwhub.COLLECTIONS, cwhub.PARSERS, cwhub.SCENARIOS, cwhub.PARSERS_OVFLW,
}, args, true, false, all)
}, },
} }
cmdHubList.PersistentFlags().BoolVarP(&all, "all", "a", false, "List disabled items as well")
return cmdHubList flags := cmd.Flags()
flags.BoolVarP(&all, "all", "a", false, "List disabled items as well")
return cmd
} }
func NewHubUpdateCmd() *cobra.Command { func (cli *cliHub) update() error {
var cmdHubUpdate = &cobra.Command{ local := cli.cfg().Hub
remote := require.RemoteHub(cli.cfg())
// don't use require.Hub because if there is no index file, it would fail
hub, err := cwhub.NewHub(local, remote, true, log.StandardLogger())
if err != nil {
return fmt.Errorf("failed to update hub: %w", err)
}
for _, v := range hub.Warnings {
log.Info(v)
}
return nil
}
func (cli *cliHub) newUpdateCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "update", Use: "update",
Short: "Fetch available configs from hub", Short: "Download the latest index (catalog of available configurations)",
Long: ` Long: `
Fetches the [.index.json](https://github.com/crowdsecurity/hub/blob/master/.index.json) file from hub, containing the list of available configs. Fetches the .index.json file from the hub, containing the list of available configs.
`, `,
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error { RunE: func(_ *cobra.Command, _ []string) error {
if csConfig.Cscli == nil { return cli.update()
return fmt.Errorf("you must configure cli before interacting with hub")
}
if err := cwhub.SetHubBranch(); err != nil {
return fmt.Errorf("error while setting hub branch: %s", err)
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
if err := csConfig.LoadHub(); err != nil {
log.Fatal(err)
}
if err := cwhub.UpdateHubIdx(csConfig.Hub); err != nil {
if errors.Is(err, cwhub.ErrIndexNotFound) {
log.Warnf("Could not find index file for branch '%s', using 'master'", cwhub.HubBranch)
cwhub.HubBranch = "master"
if err := cwhub.UpdateHubIdx(csConfig.Hub); err != nil {
log.Fatalf("Failed to get Hub index after retry : %v", err)
}
} else {
log.Fatalf("Failed to get Hub index : %v", err)
}
}
//use LocalSync to get warnings about tainted / outdated items
_, warn := cwhub.LocalSync(csConfig.Hub)
for _, v := range warn {
log.Info(v)
}
}, },
} }
return cmdHubUpdate return cmd
} }
func NewHubUpgradeCmd() *cobra.Command { func (cli *cliHub) upgrade(force bool) error {
var cmdHubUpgrade = &cobra.Command{ hub, err := require.Hub(cli.cfg(), require.RemoteHub(cli.cfg()), log.StandardLogger())
if err != nil {
return err
}
for _, itemType := range cwhub.ItemTypes {
items, err := hub.GetInstalledItemsByType(itemType)
if err != nil {
return err
}
updated := 0
log.Infof("Upgrading %s", itemType)
for _, item := range items {
didUpdate, err := item.Upgrade(force)
if err != nil {
return err
}
if didUpdate {
updated++
}
}
log.Infof("Upgraded %d %s", updated, itemType)
}
return nil
}
func (cli *cliHub) newUpgradeCmd() *cobra.Command {
var force bool
cmd := &cobra.Command{
Use: "upgrade", Use: "upgrade",
Short: "Upgrade all configs installed from hub", Short: "Upgrade all configurations to their latest version",
Long: ` Long: `
Upgrade all configs installed from Crowdsec Hub. Run 'sudo cscli hub update' if you want the latest versions available. Upgrade all configs installed from Crowdsec Hub. Run 'sudo cscli hub update' if you want the latest versions available.
`, `,
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error { RunE: func(_ *cobra.Command, _ []string) error {
if csConfig.Cscli == nil { return cli.upgrade(force)
return fmt.Errorf("you must configure cli before interacting with hub")
}
if err := cwhub.SetHubBranch(); err != nil {
return fmt.Errorf("error while setting hub branch: %s", err)
}
return nil
},
Run: func(cmd *cobra.Command, args []string) {
if err := csConfig.LoadHub(); err != nil {
log.Fatal(err)
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
log.Info("Run 'sudo cscli hub update' to get the hub index")
log.Fatalf("Failed to get Hub index : %v", err)
}
log.Infof("Upgrading collections")
cwhub.UpgradeConfig(csConfig, cwhub.COLLECTIONS, "", forceAction)
log.Infof("Upgrading parsers")
cwhub.UpgradeConfig(csConfig, cwhub.PARSERS, "", forceAction)
log.Infof("Upgrading scenarios")
cwhub.UpgradeConfig(csConfig, cwhub.SCENARIOS, "", forceAction)
log.Infof("Upgrading postoverflows")
cwhub.UpgradeConfig(csConfig, cwhub.PARSERS_OVFLW, "", forceAction)
}, },
} }
cmdHubUpgrade.PersistentFlags().BoolVar(&forceAction, "force", false, "Force upgrade : Overwrite tainted and outdated files")
return cmdHubUpgrade flags := cmd.Flags()
flags.BoolVar(&force, "force", false, "Force upgrade: overwrite tainted and outdated files")
return cmd
}
func (cli *cliHub) types() error {
switch cli.cfg().Cscli.Output {
case "human":
s, err := yaml.Marshal(cwhub.ItemTypes)
if err != nil {
return err
}
fmt.Print(string(s))
case "json":
jsonStr, err := json.Marshal(cwhub.ItemTypes)
if err != nil {
return err
}
fmt.Println(string(jsonStr))
case "raw":
for _, itemType := range cwhub.ItemTypes {
fmt.Println(itemType)
}
}
return nil
}
func (cli *cliHub) newTypesCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "types",
Short: "List supported item types",
Long: `
List the types of supported hub items.
`,
Args: cobra.ExactArgs(0),
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, _ []string) error {
return cli.types()
},
}
return cmd
} }
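For the three output modes handled above, a small sketch of what the marshalling produces for a list of item types (the list below is an assumed subset for illustration, not necessarily the exact contents of cwhub.ItemTypes):

package main

import (
	"encoding/json"
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	itemTypes := []string{"parsers", "postoverflows", "scenarios", "collections"}

	// "human": a YAML sequence, one "- item" per line
	y, _ := yaml.Marshal(itemTypes)
	fmt.Print(string(y))

	// "json": a single JSON array on one line
	j, _ := json.Marshal(itemTypes)
	fmt.Println(string(j))

	// "raw": one type per line, no decoration
	for _, t := range itemTypes {
		fmt.Println(t)
	}
}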


@ -0,0 +1,123 @@
package main
import (
"fmt"
"os"
"golang.org/x/text/cases"
"golang.org/x/text/language"
"gopkg.in/yaml.v3"
"github.com/crowdsecurity/crowdsec/pkg/appsec"
"github.com/crowdsecurity/crowdsec/pkg/appsec/appsec_rule"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func NewCLIAppsecConfig(cfg configGetter) *cliItem {
return &cliItem{
cfg: cfg,
name: cwhub.APPSEC_CONFIGS,
singular: "appsec-config",
oneOrMore: "appsec-config(s)",
help: cliHelp{
example: `cscli appsec-configs list -a
cscli appsec-configs install crowdsecurity/vpatch
cscli appsec-configs inspect crowdsecurity/vpatch
cscli appsec-configs upgrade crowdsecurity/vpatch
cscli appsec-configs remove crowdsecurity/vpatch
`,
},
installHelp: cliHelp{
example: `cscli appsec-configs install crowdsecurity/vpatch`,
},
removeHelp: cliHelp{
example: `cscli appsec-configs remove crowdsecurity/vpatch`,
},
upgradeHelp: cliHelp{
example: `cscli appsec-configs upgrade crowdsecurity/vpatch`,
},
inspectHelp: cliHelp{
example: `cscli appsec-configs inspect crowdsecurity/vpatch`,
},
listHelp: cliHelp{
example: `cscli appsec-configs list
cscli appsec-configs list -a
cscli appsec-configs list crowdsecurity/vpatch`,
},
}
}
func NewCLIAppsecRule(cfg configGetter) *cliItem {
inspectDetail := func(item *cwhub.Item) error {
// Only show the converted rules in human mode
if csConfig.Cscli.Output != "human" {
return nil
}
appsecRule := appsec.AppsecCollectionConfig{}
yamlContent, err := os.ReadFile(item.State.LocalPath)
if err != nil {
return fmt.Errorf("unable to read file %s: %w", item.State.LocalPath, err)
}
if err := yaml.Unmarshal(yamlContent, &appsecRule); err != nil {
return fmt.Errorf("unable to unmarshal yaml file %s: %w", item.State.LocalPath, err)
}
for _, ruleType := range appsec_rule.SupportedTypes() {
fmt.Printf("\n%s format:\n", cases.Title(language.Und, cases.NoLower).String(ruleType))
for _, rule := range appsecRule.Rules {
convertedRule, _, err := rule.Convert(ruleType, appsecRule.Name)
if err != nil {
return fmt.Errorf("unable to convert rule %s: %w", rule.Name, err)
}
fmt.Println(convertedRule)
}
switch ruleType { //nolint:gocritic
case appsec_rule.ModsecurityRuleType:
for _, rule := range appsecRule.SecLangRules {
fmt.Println(rule)
}
}
}
return nil
}
return &cliItem{
cfg: cfg,
name: "appsec-rules",
singular: "appsec-rule",
oneOrMore: "appsec-rule(s)",
help: cliHelp{
example: `cscli appsec-rules list -a
cscli appsec-rules install crowdsecurity/crs
cscli appsec-rules inspect crowdsecurity/crs
cscli appsec-rules upgrade crowdsecurity/crs
cscli appsec-rules remove crowdsecurity/crs
`,
},
installHelp: cliHelp{
example: `cscli appsec-rules install crowdsecurity/crs`,
},
removeHelp: cliHelp{
example: `cscli appsec-rules remove crowdsecurity/crs`,
},
upgradeHelp: cliHelp{
example: `cscli appsec-rules upgrade crowdsecurity/crs`,
},
inspectHelp: cliHelp{
example: `cscli appsec-rules inspect crowdsecurity/crs`,
},
inspectDetail: inspectDetail,
listHelp: cliHelp{
example: `cscli appsec-rules list
cscli appsec-rules list -a
cscli appsec-rules list crowdsecurity/crs`,
},
}
}


@ -0,0 +1,41 @@
package main
import (
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func NewCLICollection(cfg configGetter) *cliItem {
return &cliItem{
cfg: cfg,
name: cwhub.COLLECTIONS,
singular: "collection",
oneOrMore: "collection(s)",
help: cliHelp{
example: `cscli collections list -a
cscli collections install crowdsecurity/http-cve crowdsecurity/iptables
cscli collections inspect crowdsecurity/http-cve crowdsecurity/iptables
cscli collections upgrade crowdsecurity/http-cve crowdsecurity/iptables
cscli collections remove crowdsecurity/http-cve crowdsecurity/iptables
`,
},
installHelp: cliHelp{
example: `cscli collections install crowdsecurity/http-cve crowdsecurity/iptables`,
},
removeHelp: cliHelp{
example: `cscli collections remove crowdsecurity/http-cve crowdsecurity/iptables`,
},
upgradeHelp: cliHelp{
example: `cscli collections upgrade crowdsecurity/http-cve crowdsecurity/iptables`,
},
inspectHelp: cliHelp{
example: `cscli collections inspect crowdsecurity/http-cve crowdsecurity/iptables`,
},
listHelp: cliHelp{
example: `cscli collections list
cscli collections list -a
cscli collections list crowdsecurity/http-cve crowdsecurity/iptables
List only enabled collections unless "-a" or names are specified.`,
},
}
}


@ -0,0 +1,41 @@
package main
import (
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func NewCLIContext(cfg configGetter) *cliItem {
return &cliItem{
cfg: cfg,
name: cwhub.CONTEXTS,
singular: "context",
oneOrMore: "context(s)",
help: cliHelp{
example: `cscli contexts list -a
cscli contexts install crowdsecurity/yyy crowdsecurity/zzz
cscli contexts inspect crowdsecurity/yyy crowdsecurity/zzz
cscli contexts upgrade crowdsecurity/yyy crowdsecurity/zzz
cscli contexts remove crowdsecurity/yyy crowdsecurity/zzz
`,
},
installHelp: cliHelp{
example: `cscli contexts install crowdsecurity/yyy crowdsecurity/zzz`,
},
removeHelp: cliHelp{
example: `cscli contexts remove crowdsecurity/yyy crowdsecurity/zzz`,
},
upgradeHelp: cliHelp{
example: `cscli contexts upgrade crowdsecurity/yyy crowdsecurity/zzz`,
},
inspectHelp: cliHelp{
example: `cscli contexts inspect crowdsecurity/yyy crowdsecurity/zzz`,
},
listHelp: cliHelp{
example: `cscli contexts list
cscli contexts list -a
cscli contexts list crowdsecurity/yyy crowdsecurity/zzz
List only enabled contexts unless "-a" or names are specified.`,
},
}
}


@ -0,0 +1,41 @@
package main
import (
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func NewCLIParser(cfg configGetter) *cliItem {
return &cliItem{
cfg: cfg,
name: cwhub.PARSERS,
singular: "parser",
oneOrMore: "parser(s)",
help: cliHelp{
example: `cscli parsers list -a
cscli parsers install crowdsecurity/caddy-logs crowdsecurity/sshd-logs
cscli parsers inspect crowdsecurity/caddy-logs crowdsecurity/sshd-logs
cscli parsers upgrade crowdsecurity/caddy-logs crowdsecurity/sshd-logs
cscli parsers remove crowdsecurity/caddy-logs crowdsecurity/sshd-logs
`,
},
installHelp: cliHelp{
example: `cscli parsers install crowdsecurity/caddy-logs crowdsecurity/sshd-logs`,
},
removeHelp: cliHelp{
example: `cscli parsers remove crowdsecurity/caddy-logs crowdsecurity/sshd-logs`,
},
upgradeHelp: cliHelp{
example: `cscli parsers upgrade crowdsecurity/caddy-logs crowdsecurity/sshd-logs`,
},
inspectHelp: cliHelp{
example: `cscli parsers inspect crowdsecurity/httpd-logs crowdsecurity/sshd-logs`,
},
listHelp: cliHelp{
example: `cscli parsers list
cscli parsers list -a
cscli parsers list crowdsecurity/caddy-logs crowdsecurity/sshd-logs
List only enabled parsers unless "-a" or names are specified.`,
},
}
}


@ -0,0 +1,41 @@
package main
import (
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func NewCLIPostOverflow(cfg configGetter) *cliItem {
return &cliItem{
cfg: cfg,
name: cwhub.POSTOVERFLOWS,
singular: "postoverflow",
oneOrMore: "postoverflow(s)",
help: cliHelp{
example: `cscli postoverflows list -a
cscli postoverflows install crowdsecurity/cdn-whitelist crowdsecurity/rdns
cscli postoverflows inspect crowdsecurity/cdn-whitelist crowdsecurity/rdns
cscli postoverflows upgrade crowdsecurity/cdn-whitelist crowdsecurity/rdns
cscli postoverflows remove crowdsecurity/cdn-whitelist crowdsecurity/rdns
`,
},
installHelp: cliHelp{
example: `cscli postoverflows install crowdsecurity/cdn-whitelist crowdsecurity/rdns`,
},
removeHelp: cliHelp{
example: `cscli postoverflows remove crowdsecurity/cdn-whitelist crowdsecurity/rdns`,
},
upgradeHelp: cliHelp{
example: `cscli postoverflows upgrade crowdsecurity/cdn-whitelist crowdsecurity/rdns`,
},
inspectHelp: cliHelp{
example: `cscli postoverflows inspect crowdsecurity/cdn-whitelist crowdsecurity/rdns`,
},
listHelp: cliHelp{
example: `cscli postoverflows list
cscli postoverflows list -a
cscli postoverflows list crowdsecurity/cdn-whitelist crowdsecurity/rdns
List only enabled postoverflows unless "-a" or names are specified.`,
},
}
}


@ -0,0 +1,41 @@
package main
import (
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func NewCLIScenario(cfg configGetter) *cliItem {
return &cliItem{
cfg: cfg,
name: cwhub.SCENARIOS,
singular: "scenario",
oneOrMore: "scenario(s)",
help: cliHelp{
example: `cscli scenarios list -a
cscli scenarios install crowdsecurity/ssh-bf crowdsecurity/http-probing
cscli scenarios inspect crowdsecurity/ssh-bf crowdsecurity/http-probing
cscli scenarios upgrade crowdsecurity/ssh-bf crowdsecurity/http-probing
cscli scenarios remove crowdsecurity/ssh-bf crowdsecurity/http-probing
`,
},
installHelp: cliHelp{
example: `cscli scenarios install crowdsecurity/ssh-bf crowdsecurity/http-probing`,
},
removeHelp: cliHelp{
example: `cscli scenarios remove crowdsecurity/ssh-bf crowdsecurity/http-probing`,
},
upgradeHelp: cliHelp{
example: `cscli scenarios upgrade crowdsecurity/ssh-bf crowdsecurity/http-probing`,
},
inspectHelp: cliHelp{
example: `cscli scenarios inspect crowdsecurity/ssh-bf crowdsecurity/http-probing`,
},
listHelp: cliHelp{
example: `cscli scenarios list
cscli scenarios list -a
cscli scenarios list crowdsecurity/ssh-bf crowdsecurity/http-probing
List only enabled scenarios unless "-a" or names are specified.`,
},
}
}
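The per-type constructors above (appsec, collections, contexts, parsers, postoverflows, scenarios) all return the shared cliItem type, so install/remove/upgrade/inspect/list are implemented once and only the help text and hub item type vary. A trimmed-down sketch of that design, with a toy cliItem and only a list subcommand (the real cliItem has more fields and subcommands than shown here, and its method names are assumed from the pattern of the other cli* types in this change):

package main

import (
	"fmt"

	"github.com/spf13/cobra"
)

// toy stand-in for the real cliItem: one generic type backs every hub item kind
type cliItem struct {
	name     string
	singular string
}

func (cli cliItem) NewCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:   cli.name,
		Short: fmt.Sprintf("Manage hub %s", cli.name),
	}
	cmd.AddCommand(&cobra.Command{
		Use: "list",
		RunE: func(_ *cobra.Command, _ []string) error {
			fmt.Printf("listing every installed %s...\n", cli.singular)
			return nil
		},
	})
	return cmd
}

func main() {
	root := &cobra.Command{Use: "cscli"}
	for _, item := range []cliItem{
		{name: "collections", singular: "collection"},
		{name: "parsers", singular: "parser"},
		{name: "scenarios", singular: "scenario"},
	} {
		root.AddCommand(item.NewCommand())
	}

	root.SetArgs([]string{"parsers", "list"})
	_ = root.Execute()
}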


@ -2,73 +2,106 @@ package main
import ( import (
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"math" "math"
"os" "os"
"path/filepath" "path/filepath"
"strings" "strings"
"text/template"
"github.com/AlecAivazis/survey/v2" "github.com/AlecAivazis/survey/v2"
"github.com/enescakir/emoji"
"github.com/fatih/color" "github.com/fatih/color"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"gopkg.in/yaml.v2" "gopkg.in/yaml.v3"
"github.com/crowdsecurity/crowdsec/pkg/dumps"
"github.com/crowdsecurity/crowdsec/pkg/emoji"
"github.com/crowdsecurity/crowdsec/pkg/hubtest" "github.com/crowdsecurity/crowdsec/pkg/hubtest"
) )
var ( var (
HubTest hubtest.HubTest HubTest hubtest.HubTest
HubAppsecTests hubtest.HubTest
hubPtr *hubtest.HubTest
isAppsecTest bool
) )
func NewHubTestCmd() *cobra.Command { type cliHubTest struct {
var hubPath string cfg configGetter
var crowdsecPath string }
var cscliPath string
var cmdHubTest = &cobra.Command{ func NewCLIHubTest(cfg configGetter) *cliHubTest {
return &cliHubTest{
cfg: cfg,
}
}
func (cli *cliHubTest) NewCommand() *cobra.Command {
var (
hubPath string
crowdsecPath string
cscliPath string
)
cmd := &cobra.Command{
Use: "hubtest", Use: "hubtest",
Short: "Run functional tests on hub configurations", Short: "Run functional tests on hub configurations",
Long: "Run functional tests on hub configurations (parsers, scenarios, collections...)", Long: "Run functional tests on hub configurations (parsers, scenarios, collections...)",
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error { PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
var err error var err error
HubTest, err = hubtest.NewHubTest(hubPath, crowdsecPath, cscliPath) HubTest, err = hubtest.NewHubTest(hubPath, crowdsecPath, cscliPath, false)
if err != nil { if err != nil {
return fmt.Errorf("unable to load hubtest: %+v", err) return fmt.Errorf("unable to load hubtest: %+v", err)
} }
HubAppsecTests, err = hubtest.NewHubTest(hubPath, crowdsecPath, cscliPath, true)
if err != nil {
return fmt.Errorf("unable to load appsec specific hubtest: %+v", err)
}
// commands will use the hubPtr, will point to the default hubTest object, or the one dedicated to appsec tests
hubPtr = &HubTest
if isAppsecTest {
hubPtr = &HubAppsecTests
}
return nil return nil
}, },
} }
cmdHubTest.PersistentFlags().StringVar(&hubPath, "hub", ".", "Path to hub folder")
cmdHubTest.PersistentFlags().StringVar(&crowdsecPath, "crowdsec", "crowdsec", "Path to crowdsec")
cmdHubTest.PersistentFlags().StringVar(&cscliPath, "cscli", "cscli", "Path to cscli")
cmdHubTest.AddCommand(NewHubTestCreateCmd()) cmd.PersistentFlags().StringVar(&hubPath, "hub", ".", "Path to hub folder")
cmdHubTest.AddCommand(NewHubTestRunCmd()) cmd.PersistentFlags().StringVar(&crowdsecPath, "crowdsec", "crowdsec", "Path to crowdsec")
cmdHubTest.AddCommand(NewHubTestCleanCmd()) cmd.PersistentFlags().StringVar(&cscliPath, "cscli", "cscli", "Path to cscli")
cmdHubTest.AddCommand(NewHubTestInfoCmd()) cmd.PersistentFlags().BoolVar(&isAppsecTest, "appsec", false, "Command relates to appsec tests")
cmdHubTest.AddCommand(NewHubTestListCmd())
cmdHubTest.AddCommand(NewHubTestCoverageCmd())
cmdHubTest.AddCommand(NewHubTestEvalCmd())
cmdHubTest.AddCommand(NewHubTestExplainCmd())
return cmdHubTest cmd.AddCommand(cli.NewCreateCmd())
cmd.AddCommand(cli.NewRunCmd())
cmd.AddCommand(cli.NewCleanCmd())
cmd.AddCommand(cli.NewInfoCmd())
cmd.AddCommand(cli.NewListCmd())
cmd.AddCommand(cli.NewCoverageCmd())
cmd.AddCommand(cli.NewEvalCmd())
cmd.AddCommand(cli.NewExplainCmd())
return cmd
} }
func (cli *cliHubTest) NewCreateCmd() *cobra.Command {
var (
ignoreParsers bool
labels map[string]string
logType string
)
func NewHubTestCreateCmd() *cobra.Command {
parsers := []string{} parsers := []string{}
postoverflows := []string{} postoverflows := []string{}
scenarios := []string{} scenarios := []string{}
var ignoreParsers bool
var labels map[string]string
var logType string
var cmdHubTestCreate = &cobra.Command{ cmd := &cobra.Command{
Use: "create", Use: "create",
Short: "create [test_name]", Short: "create [test_name]",
Example: `cscli hubtest create my-awesome-test --type syslog Example: `cscli hubtest create my-awesome-test --type syslog
@ -76,134 +109,170 @@ cscli hubtest create my-nginx-custom-test --type nginx
cscli hubtest create my-scenario-test --parsers crowdsecurity/nginx --scenarios crowdsecurity/http-probing`, cscli hubtest create my-scenario-test --parsers crowdsecurity/nginx --scenarios crowdsecurity/http-probing`,
Args: cobra.ExactArgs(1), Args: cobra.ExactArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error { RunE: func(_ *cobra.Command, args []string) error {
testName := args[0] testName := args[0]
testPath := filepath.Join(HubTest.HubTestPath, testName) testPath := filepath.Join(hubPtr.HubTestPath, testName)
if _, err := os.Stat(testPath); os.IsExist(err) { if _, err := os.Stat(testPath); os.IsExist(err) {
return fmt.Errorf("test '%s' already exists in '%s', exiting", testName, testPath) return fmt.Errorf("test '%s' already exists in '%s', exiting", testName, testPath)
} }
if isAppsecTest {
logType = "appsec"
}
if logType == "" { if logType == "" {
return fmt.Errorf("please provide a type (--type) for the test") return errors.New("please provide a type (--type) for the test")
} }
if err := os.MkdirAll(testPath, os.ModePerm); err != nil { if err := os.MkdirAll(testPath, os.ModePerm); err != nil {
return fmt.Errorf("unable to create folder '%s': %+v", testPath, err) return fmt.Errorf("unable to create folder '%s': %+v", testPath, err)
} }
// create empty log file
logFileName := fmt.Sprintf("%s.log", testName)
logFilePath := filepath.Join(testPath, logFileName)
logFile, err := os.Create(logFilePath)
if err != nil {
return err
}
logFile.Close()
// create empty parser assertion file
parserAssertFilePath := filepath.Join(testPath, hubtest.ParserAssertFileName)
parserAssertFile, err := os.Create(parserAssertFilePath)
if err != nil {
return err
}
parserAssertFile.Close()
// create empty scenario assertion file
scenarioAssertFilePath := filepath.Join(testPath, hubtest.ScenarioAssertFileName)
scenarioAssertFile, err := os.Create(scenarioAssertFilePath)
if err != nil {
return err
}
scenarioAssertFile.Close()
parsers = append(parsers, "crowdsecurity/syslog-logs")
parsers = append(parsers, "crowdsecurity/dateparse-enrich")
if len(scenarios) == 0 {
scenarios = append(scenarios, "")
}
if len(postoverflows) == 0 {
postoverflows = append(postoverflows, "")
}
configFileData := &hubtest.HubTestItemConfig{
Parsers: parsers,
Scenarios: scenarios,
PostOVerflows: postoverflows,
LogFile: logFileName,
LogType: logType,
IgnoreParsers: ignoreParsers,
Labels: labels,
}
configFilePath := filepath.Join(testPath, "config.yaml") configFilePath := filepath.Join(testPath, "config.yaml")
fd, err := os.OpenFile(configFilePath, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0666)
configFileData := &hubtest.HubTestItemConfig{}
if logType == "appsec" {
// create empty nuclei template file
nucleiFileName := fmt.Sprintf("%s.yaml", testName)
nucleiFilePath := filepath.Join(testPath, nucleiFileName)
nucleiFile, err := os.OpenFile(nucleiFilePath, os.O_RDWR|os.O_CREATE, 0o755)
if err != nil {
return err
}
ntpl := template.Must(template.New("nuclei").Parse(hubtest.TemplateNucleiFile))
if ntpl == nil {
return errors.New("unable to parse nuclei template")
}
ntpl.ExecuteTemplate(nucleiFile, "nuclei", struct{ TestName string }{TestName: testName})
nucleiFile.Close()
configFileData.AppsecRules = []string{"./appsec-rules/<author>/your_rule_here.yaml"}
configFileData.NucleiTemplate = nucleiFileName
fmt.Println()
fmt.Printf(" Test name : %s\n", testName)
fmt.Printf(" Test path : %s\n", testPath)
fmt.Printf(" Config File : %s\n", configFilePath)
fmt.Printf(" Nuclei Template : %s\n", nucleiFilePath)
} else {
// create empty log file
logFileName := fmt.Sprintf("%s.log", testName)
logFilePath := filepath.Join(testPath, logFileName)
logFile, err := os.Create(logFilePath)
if err != nil {
return err
}
logFile.Close()
// create empty parser assertion file
parserAssertFilePath := filepath.Join(testPath, hubtest.ParserAssertFileName)
parserAssertFile, err := os.Create(parserAssertFilePath)
if err != nil {
return err
}
parserAssertFile.Close()
// create empty scenario assertion file
scenarioAssertFilePath := filepath.Join(testPath, hubtest.ScenarioAssertFileName)
scenarioAssertFile, err := os.Create(scenarioAssertFilePath)
if err != nil {
return err
}
scenarioAssertFile.Close()
parsers = append(parsers, "crowdsecurity/syslog-logs")
parsers = append(parsers, "crowdsecurity/dateparse-enrich")
if len(scenarios) == 0 {
scenarios = append(scenarios, "")
}
if len(postoverflows) == 0 {
postoverflows = append(postoverflows, "")
}
configFileData.Parsers = parsers
configFileData.Scenarios = scenarios
configFileData.PostOverflows = postoverflows
configFileData.LogFile = logFileName
configFileData.LogType = logType
configFileData.IgnoreParsers = ignoreParsers
configFileData.Labels = labels
fmt.Println()
fmt.Printf(" Test name : %s\n", testName)
fmt.Printf(" Test path : %s\n", testPath)
fmt.Printf(" Log file : %s (please fill it with logs)\n", logFilePath)
fmt.Printf(" Parser assertion file : %s (please fill it with assertion)\n", parserAssertFilePath)
fmt.Printf(" Scenario assertion file : %s (please fill it with assertion)\n", scenarioAssertFilePath)
fmt.Printf(" Configuration File : %s (please fill it with parsers, scenarios...)\n", configFilePath)
}
fd, err := os.Create(configFilePath)
if err != nil { if err != nil {
return fmt.Errorf("open: %s", err) return fmt.Errorf("open: %w", err)
} }
data, err := yaml.Marshal(configFileData) data, err := yaml.Marshal(configFileData)
if err != nil { if err != nil {
return fmt.Errorf("marshal: %s", err) return fmt.Errorf("marshal: %w", err)
} }
_, err = fd.Write(data) _, err = fd.Write(data)
if err != nil { if err != nil {
return fmt.Errorf("write: %s", err) return fmt.Errorf("write: %w", err)
} }
if err := fd.Close(); err != nil { if err := fd.Close(); err != nil {
return fmt.Errorf("close: %s", err) return fmt.Errorf("close: %w", err)
} }
fmt.Println()
fmt.Printf(" Test name : %s\n", testName)
fmt.Printf(" Test path : %s\n", testPath)
fmt.Printf(" Log file : %s (please fill it with logs)\n", logFilePath)
fmt.Printf(" Parser assertion file : %s (please fill it with assertion)\n", parserAssertFilePath)
fmt.Printf(" Scenario assertion file : %s (please fill it with assertion)\n", scenarioAssertFilePath)
fmt.Printf(" Configuration File : %s (please fill it with parsers, scenarios...)\n", configFilePath)
return nil return nil
}, },
} }
cmdHubTestCreate.PersistentFlags().StringVarP(&logType, "type", "t", "", "Log type of the test")
cmdHubTestCreate.Flags().StringSliceVarP(&parsers, "parsers", "p", parsers, "Parsers to add to test")
cmdHubTestCreate.Flags().StringSliceVar(&postoverflows, "postoverflows", postoverflows, "Postoverflows to add to test")
cmdHubTestCreate.Flags().StringSliceVarP(&scenarios, "scenarios", "s", scenarios, "Scenarios to add to test")
cmdHubTestCreate.PersistentFlags().BoolVar(&ignoreParsers, "ignore-parsers", false, "Don't run test on parsers")
return cmdHubTestCreate cmd.PersistentFlags().StringVarP(&logType, "type", "t", "", "Log type of the test")
cmd.Flags().StringSliceVarP(&parsers, "parsers", "p", parsers, "Parsers to add to test")
cmd.Flags().StringSliceVar(&postoverflows, "postoverflows", postoverflows, "Postoverflows to add to test")
cmd.Flags().StringSliceVarP(&scenarios, "scenarios", "s", scenarios, "Scenarios to add to test")
cmd.PersistentFlags().BoolVar(&ignoreParsers, "ignore-parsers", false, "Don't run test on parsers")
return cmd
} }
func (cli *cliHubTest) NewRunCmd() *cobra.Command {
var (
noClean bool
runAll bool
forceClean bool
NucleiTargetHost string
AppSecHost string
)
cmd := &cobra.Command{
Use: "run",
Short: "run [test_name]",
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
cfg := cli.cfg()
if !runAll && len(args) == 0 {
printHelp(cmd)
return errors.New("please provide test to run or --all flag")
}
hubPtr.NucleiTargetHost = NucleiTargetHost
hubPtr.AppSecHost = AppSecHost
if runAll {
if err := hubPtr.LoadAllTests(); err != nil {
return fmt.Errorf("unable to load all tests: %+v", err)
}
} else {
for _, testName := range args {
_, err := hubPtr.LoadTestItem(testName)
if err != nil {
return fmt.Errorf("unable to load test '%s': %w", testName, err)
}
}
}
// set timezone to avoid DST issues
os.Setenv("TZ", "UTC")
for _, test := range hubPtr.Tests {
if cfg.Cscli.Output == "human" {
log.Infof("Running test '%s'", test.Name)
}
err := test.Run()
@@ -214,11 +283,13 @@ func NewHubTestRunCmd() *cobra.Command {
return nil
},
PersistentPostRunE: func(_ *cobra.Command, _ []string) error {
cfg := cli.cfg()
success := true
testResult := make(map[string]bool)
for _, test := range hubPtr.Tests {
if test.AutoGen && !isAppsecTest {
if test.ParserAssert.AutoGenAssert {
log.Warningf("Assert file '%s' is empty, generating assertion:", test.ParserAssert.File)
fmt.Println()
@@ -231,7 +302,7 @@ func NewHubTestRunCmd() *cobra.Command {
}
if !noClean {
if err := test.Clean(); err != nil {
return fmt.Errorf("unable to clean test '%s' env: %w", test.Name, err)
}
}
fmt.Printf("\nPlease fill your assert file(s) for test '%s', exiting\n", test.Name)
@@ -239,18 +310,18 @@ func NewHubTestRunCmd() *cobra.Command {
}
testResult[test.Name] = test.Success
if test.Success {
if cfg.Cscli.Output == "human" {
log.Infof("Test '%s' passed successfully (%d assertions)\n", test.Name, test.ParserAssert.NbAssert+test.ScenarioAssert.NbAssert)
}
if !noClean {
if err := test.Clean(); err != nil {
return fmt.Errorf("unable to clean test '%s' env: %w", test.Name, err)
}
}
} else {
success = false
cleanTestEnv := false
if cfg.Cscli.Output == "human" {
if len(test.ParserAssert.Fails) > 0 {
fmt.Println()
log.Errorf("Parser test '%s' failed (%d errors)\n", test.Name, len(test.ParserAssert.Fails))
@@ -281,21 +352,23 @@ func NewHubTestRunCmd() *cobra.Command {
Default: true,
}
if err := survey.AskOne(prompt, &cleanTestEnv); err != nil {
return fmt.Errorf("unable to ask to remove runtime folder: %w", err)
}
}
}
if cleanTestEnv || forceClean {
if err := test.Clean(); err != nil {
return fmt.Errorf("unable to clean test '%s' env: %w", test.Name, err)
}
}
}
}
switch cfg.Cscli.Output {
case "human":
hubTestResultTable(color.Output, testResult)
case "json":
jsonResult := make(map[string][]string, 0)
jsonResult["success"] = make([]string, 0)
jsonResult["fail"] = make([]string, 0)
@@ -308,9 +381,11 @@ func NewHubTestRunCmd() *cobra.Command {
}
jsonStr, err := json.Marshal(jsonResult)
if err != nil {
return fmt.Errorf("unable to json test result: %w", err)
}
fmt.Println(string(jsonStr))
default:
return errors.New("only human/json output modes are supported")
}
if !success {
@@ -320,28 +395,30 @@ func NewHubTestRunCmd() *cobra.Command {
return nil
},
}
cmd.Flags().BoolVar(&noClean, "no-clean", false, "Don't clean runtime environment if test succeed")
cmd.Flags().BoolVar(&forceClean, "clean", false, "Clean runtime environment if test fail")
cmd.Flags().StringVar(&NucleiTargetHost, "target", hubtest.DefaultNucleiTarget, "Target for AppSec Test")
cmd.Flags().StringVar(&AppSecHost, "host", hubtest.DefaultAppsecHost, "Address to expose AppSec for hubtest")
cmd.Flags().BoolVar(&runAll, "all", false, "Run all tests")
return cmd
}
func (cli *cliHubTest) NewCleanCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "clean",
Short: "clean [test_name]",
Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, args []string) error {
for _, testName := range args {
test, err := hubPtr.LoadTestItem(testName)
if err != nil {
return fmt.Errorf("unable to load test '%s': %w", testName, err)
}
if err := test.Clean(); err != nil {
return fmt.Errorf("unable to clean test '%s' env: %w", test.Name, err)
}
}
@@ -349,28 +426,32 @@ func NewHubTestCleanCmd() *cobra.Command {
},
}
return cmd
}
func (cli *cliHubTest) NewInfoCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "info",
Short: "info [test_name]",
Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, args []string) error {
for _, testName := range args {
test, err := hubPtr.LoadTestItem(testName)
if err != nil {
return fmt.Errorf("unable to load test '%s': %w", testName, err)
}
fmt.Println()
fmt.Printf(" Test name : %s\n", test.Name)
fmt.Printf(" Test path : %s\n", test.Path)
if isAppsecTest {
fmt.Printf(" Nuclei Template : %s\n", test.Config.NucleiTemplate)
fmt.Printf(" Appsec Rules : %s\n", strings.Join(test.Config.AppsecRules, ", "))
} else {
fmt.Printf(" Log file : %s\n", filepath.Join(test.Path, test.Config.LogFile))
fmt.Printf(" Parser assertion file : %s\n", filepath.Join(test.Path, hubtest.ParserAssertFileName))
fmt.Printf(" Scenario assertion file : %s\n", filepath.Join(test.Path, hubtest.ScenarioAssertFileName))
}
fmt.Printf(" Configuration File : %s\n", filepath.Join(test.Path, "config.yaml"))
}
@@ -378,72 +459,80 @@ func NewHubTestInfoCmd() *cobra.Command {
},
}
return cmd
}
func (cli *cliHubTest) NewListCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "list",
Short: "list",
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, _ []string) error {
cfg := cli.cfg()
if err := hubPtr.LoadAllTests(); err != nil {
return fmt.Errorf("unable to load all tests: %w", err)
}
switch cfg.Cscli.Output {
case "human":
hubTestListTable(color.Output, hubPtr.Tests)
case "json":
j, err := json.MarshalIndent(hubPtr.Tests, " ", " ")
if err != nil {
return err
}
fmt.Println(string(j))
default:
return errors.New("only human/json output modes are supported")
}
return nil
},
}
return cmd
}
func (cli *cliHubTest) NewCoverageCmd() *cobra.Command {
var (
showParserCov bool
showScenarioCov bool
showOnlyPercent bool
showAppsecCov bool
)
cmd := &cobra.Command{
Use: "coverage",
Short: "coverage",
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, _ []string) error {
cfg := cli.cfg()
// for this one we explicitly don't do for appsec
if err := HubTest.LoadAllTests(); err != nil {
return fmt.Errorf("unable to load all tests: %+v", err)
}
var err error
scenarioCoverage := []hubtest.Coverage{}
parserCoverage := []hubtest.Coverage{}
appsecRuleCoverage := []hubtest.Coverage{}
scenarioCoveragePercent := 0
parserCoveragePercent := 0
appsecRuleCoveragePercent := 0
// if both are false (flag by default), show both
showAll := !showScenarioCov && !showParserCov && !showAppsecCov
if showParserCov || showAll {
parserCoverage, err = HubTest.GetParsersCoverage()
if err != nil {
return fmt.Errorf("while getting parser coverage: %w", err)
}
parserTested := 0
for _, test := range parserCoverage {
if test.TestsCount > 0 {
parserTested++
}
}
parserCoveragePercent = int(math.Round((float64(parserTested) / float64(len(parserCoverage)) * 100)))
@@ -452,29 +541,50 @@ func NewHubTestCoverageCmd() *cobra.Command {
if showScenarioCov || showAll {
scenarioCoverage, err = HubTest.GetScenariosCoverage()
if err != nil {
return fmt.Errorf("while getting scenario coverage: %w", err)
}
scenarioTested := 0
for _, test := range scenarioCoverage {
if test.TestsCount > 0 {
scenarioTested++
}
}
scenarioCoveragePercent = int(math.Round((float64(scenarioTested) / float64(len(scenarioCoverage)) * 100)))
}
if showAppsecCov || showAll {
appsecRuleCoverage, err = HubTest.GetAppsecCoverage()
if err != nil {
return fmt.Errorf("while getting scenario coverage: %w", err)
}
appsecRuleTested := 0
for _, test := range appsecRuleCoverage {
if test.TestsCount > 0 {
appsecRuleTested++
}
}
appsecRuleCoveragePercent = int(math.Round((float64(appsecRuleTested) / float64(len(appsecRuleCoverage)) * 100)))
}
if showOnlyPercent {
switch {
case showAll:
fmt.Printf("parsers=%d%%\nscenarios=%d%%\nappsec_rules=%d%%", parserCoveragePercent, scenarioCoveragePercent, appsecRuleCoveragePercent)
case showParserCov:
fmt.Printf("parsers=%d%%", parserCoveragePercent)
case showScenarioCov:
fmt.Printf("scenarios=%d%%", scenarioCoveragePercent)
case showAppsecCov:
fmt.Printf("appsec_rules=%d%%", appsecRuleCoveragePercent)
}
os.Exit(0)
}
switch cfg.Cscli.Output {
case "human":
if showParserCov || showAll {
hubTestParserCoverageTable(color.Output, parserCoverage)
}
@@ -482,6 +592,11 @@ func NewHubTestCoverageCmd() *cobra.Command {
if showScenarioCov || showAll {
hubTestScenarioCoverageTable(color.Output, scenarioCoverage)
}
if showAppsecCov || showAll {
hubTestAppsecRuleCoverageTable(color.Output, appsecRuleCoverage)
}
fmt.Println()
if showParserCov || showAll {
fmt.Printf("PARSERS : %d%% of coverage\n", parserCoveragePercent)
@@ -489,7 +604,10 @@ func NewHubTestCoverageCmd() *cobra.Command {
if showScenarioCov || showAll {
fmt.Printf("SCENARIOS : %d%% of coverage\n", scenarioCoveragePercent)
}
if showAppsecCov || showAll {
fmt.Printf("APPSEC RULES : %d%% of coverage\n", appsecRuleCoveragePercent)
}
case "json":
dump, err := json.MarshalIndent(parserCoverage, "", " ")
if err != nil {
return err
@@ -500,61 +618,71 @@ func NewHubTestCoverageCmd() *cobra.Command {
return err
}
fmt.Printf("%s", dump)
dump, err = json.MarshalIndent(appsecRuleCoverage, "", " ")
if err != nil {
return err
}
fmt.Printf("%s", dump)
default:
return errors.New("only human/json output modes are supported")
}
return nil
},
}
cmd.PersistentFlags().BoolVar(&showOnlyPercent, "percent", false, "Show only percentages of coverage")
cmd.PersistentFlags().BoolVar(&showParserCov, "parsers", false, "Show only parsers coverage")
cmd.PersistentFlags().BoolVar(&showScenarioCov, "scenarios", false, "Show only scenarios coverage")
cmd.PersistentFlags().BoolVar(&showAppsecCov, "appsec", false, "Show only appsec coverage")
return cmd
}
func (cli *cliHubTest) NewEvalCmd() *cobra.Command {
var evalExpression string
cmd := &cobra.Command{
Use: "eval",
Short: "eval [test_name]",
Args: cobra.ExactArgs(1),
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, args []string) error {
for _, testName := range args {
test, err := hubPtr.LoadTestItem(testName)
if err != nil {
return fmt.Errorf("can't load test: %+v", err)
}
err = test.ParserAssert.LoadTest(test.ParserResultFile)
if err != nil {
return fmt.Errorf("can't load test results from '%s': %+v", test.ParserResultFile, err)
}
output, err := test.ParserAssert.EvalExpression(evalExpression)
if err != nil {
return err
}
fmt.Print(output)
}
return nil
},
}
cmd.PersistentFlags().StringVarP(&evalExpression, "expr", "e", "", "Expression to eval")
return cmd
}
func (cli *cliHubTest) NewExplainCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "explain",
Short: "explain [test_name]",
Args: cobra.ExactArgs(1),
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, args []string) error {
for _, testName := range args {
test, err := HubTest.LoadTestItem(testName)
if err != nil {
@@ -562,34 +690,32 @@ func NewHubTestExplainCmd() *cobra.Command {
}
err = test.ParserAssert.LoadTest(test.ParserResultFile)
if err != nil {
if err = test.Run(); err != nil {
return fmt.Errorf("running test '%s' failed: %+v", test.Name, err)
}
if err = test.ParserAssert.LoadTest(test.ParserResultFile); err != nil {
return fmt.Errorf("unable to load parser result after run: %w", err)
}
}
err = test.ScenarioAssert.LoadTest(test.ScenarioResultFile, test.BucketPourResultFile)
if err != nil {
if err = test.Run(); err != nil {
return fmt.Errorf("running test '%s' failed: %+v", test.Name, err)
}
if err = test.ScenarioAssert.LoadTest(test.ScenarioResultFile, test.BucketPourResultFile); err != nil {
return fmt.Errorf("unable to load scenario result after run: %w", err)
}
}
opts := dumps.DumpOpts{}
dumps.DumpTree(*test.ParserAssert.TestData, *test.ScenarioAssert.PourData, opts)
}
return nil
},
}
return cmd
}


@@ -5,8 +5,8 @@ import (
"io" "io"
"github.com/aquasecurity/table" "github.com/aquasecurity/table"
"github.com/enescakir/emoji"
"github.com/crowdsecurity/crowdsec/pkg/emoji"
"github.com/crowdsecurity/crowdsec/pkg/hubtest" "github.com/crowdsecurity/crowdsec/pkg/hubtest"
) )
@@ -17,9 +17,9 @@ func hubTestResultTable(out io.Writer, testResult map[string]bool) {
t.SetAlignment(table.AlignLeft)
for testName, success := range testResult {
status := emoji.CheckMarkButton
if !success {
status = emoji.CrossMark
}
t.AddRow(testName, status)
@@ -41,39 +41,64 @@ func hubTestListTable(out io.Writer, tests []*hubtest.HubTestItem) {
t.Render()
}
func hubTestParserCoverageTable(out io.Writer, coverage []hubtest.Coverage) {
t := newLightTable(out)
t.SetHeaders("Parser", "Status", "Number of tests")
t.SetHeaderAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft)
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft)
parserTested := 0
for _, test := range coverage {
status := emoji.RedCircle
if test.TestsCount > 0 {
status = emoji.GreenCircle
parserTested++
}
t.AddRow(test.Name, status, fmt.Sprintf("%d times (across %d tests)", test.TestsCount, len(test.PresentIn)))
}
t.Render()
}
func hubTestAppsecRuleCoverageTable(out io.Writer, coverage []hubtest.Coverage) {
t := newLightTable(out)
t.SetHeaders("Appsec Rule", "Status", "Number of tests")
t.SetHeaderAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft)
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft)
parserTested := 0
for _, test := range coverage {
status := emoji.RedCircle
if test.TestsCount > 0 {
status = emoji.GreenCircle
parserTested++
}
t.AddRow(test.Name, status, fmt.Sprintf("%d times (across %d tests)", test.TestsCount, len(test.PresentIn)))
}
t.Render()
}
func hubTestScenarioCoverageTable(out io.Writer, coverage []hubtest.Coverage) {
t := newLightTable(out)
t.SetHeaders("Scenario", "Status", "Number of tests")
t.SetHeaderAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft)
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft)
parserTested := 0
for _, test := range coverage {
status := emoji.RedCircle
if test.TestsCount > 0 {
status = emoji.GreenCircle
parserTested++
}
t.AddRow(test.Name, status, fmt.Sprintf("%d times (across %d tests)", test.TestsCount, len(test.PresentIn)))
}
t.Render()


@@ -0,0 +1,327 @@
package main
import (
"fmt"
"math"
"net/http"
"strconv"
"strings"
"time"
"github.com/fatih/color"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/prom2json"
log "github.com/sirupsen/logrus"
"github.com/crowdsecurity/go-cs-lib/trace"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func ShowMetrics(hubItem *cwhub.Item) error {
switch hubItem.Type {
case cwhub.PARSERS:
metrics := GetParserMetric(csConfig.Cscli.PrometheusUrl, hubItem.Name)
parserMetricsTable(color.Output, hubItem.Name, metrics)
case cwhub.SCENARIOS:
metrics := GetScenarioMetric(csConfig.Cscli.PrometheusUrl, hubItem.Name)
scenarioMetricsTable(color.Output, hubItem.Name, metrics)
case cwhub.COLLECTIONS:
for _, sub := range hubItem.SubItems() {
if err := ShowMetrics(sub); err != nil {
return err
}
}
case cwhub.APPSEC_RULES:
metrics := GetAppsecRuleMetric(csConfig.Cscli.PrometheusUrl, hubItem.Name)
appsecMetricsTable(color.Output, hubItem.Name, metrics)
default: // no metrics for this item type
}
return nil
}
// GetParserMetric is a complete rip from prom2json
func GetParserMetric(url string, itemName string) map[string]map[string]int {
stats := make(map[string]map[string]int)
result := GetPrometheusMetric(url)
for idx, fam := range result {
if !strings.HasPrefix(fam.Name, "cs_") {
continue
}
log.Tracef("round %d", idx)
for _, m := range fam.Metrics {
metric, ok := m.(prom2json.Metric)
if !ok {
log.Debugf("failed to convert metric to prom2json.Metric")
continue
}
name, ok := metric.Labels["name"]
if !ok {
log.Debugf("no name in Metric %v", metric.Labels)
}
if name != itemName {
continue
}
source, ok := metric.Labels["source"]
if !ok {
log.Debugf("no source in Metric %v", metric.Labels)
} else {
if srctype, ok := metric.Labels["type"]; ok {
source = srctype + ":" + source
}
}
value := m.(prom2json.Metric).Value
fval, err := strconv.ParseFloat(value, 32)
if err != nil {
log.Errorf("Unexpected int value %s : %s", value, err)
continue
}
ival := int(fval)
switch fam.Name {
case "cs_reader_hits_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
stats[source]["parsed"] = 0
stats[source]["reads"] = 0
stats[source]["unparsed"] = 0
stats[source]["hits"] = 0
}
stats[source]["reads"] += ival
case "cs_parser_hits_ok_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
}
stats[source]["parsed"] += ival
case "cs_parser_hits_ko_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
}
stats[source]["unparsed"] += ival
case "cs_node_hits_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
}
stats[source]["hits"] += ival
case "cs_node_hits_ok_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
}
stats[source]["parsed"] += ival
case "cs_node_hits_ko_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
}
stats[source]["unparsed"] += ival
default:
continue
}
}
}
return stats
}
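// exampleParserMetrics is a minimal usage sketch of the helper above; the
// endpoint URL and parser name are illustrative assumptions, not values taken
// from this change. It prints the per-source counters aggregated by
// GetParserMetric.
func exampleParserMetrics() {
stats := GetParserMetric("http://127.0.0.1:6060/metrics", "crowdsecurity/syslog-logs")
for source, counters := range stats {
fmt.Printf("%s: reads=%d parsed=%d unparsed=%d\n", source, counters["reads"], counters["parsed"], counters["unparsed"])
}
}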
func GetScenarioMetric(url string, itemName string) map[string]int {
stats := make(map[string]int)
stats["instantiation"] = 0
stats["curr_count"] = 0
stats["overflow"] = 0
stats["pour"] = 0
stats["underflow"] = 0
result := GetPrometheusMetric(url)
for idx, fam := range result {
if !strings.HasPrefix(fam.Name, "cs_") {
continue
}
log.Tracef("round %d", idx)
for _, m := range fam.Metrics {
metric, ok := m.(prom2json.Metric)
if !ok {
log.Debugf("failed to convert metric to prom2json.Metric")
continue
}
name, ok := metric.Labels["name"]
if !ok {
log.Debugf("no name in Metric %v", metric.Labels)
}
if name != itemName {
continue
}
value := m.(prom2json.Metric).Value
fval, err := strconv.ParseFloat(value, 32)
if err != nil {
log.Errorf("Unexpected int value %s : %s", value, err)
continue
}
ival := int(fval)
switch fam.Name {
case "cs_bucket_created_total":
stats["instantiation"] += ival
case "cs_buckets":
stats["curr_count"] += ival
case "cs_bucket_overflowed_total":
stats["overflow"] += ival
case "cs_bucket_poured_total":
stats["pour"] += ival
case "cs_bucket_underflowed_total":
stats["underflow"] += ival
default:
continue
}
}
}
return stats
}
func GetAppsecRuleMetric(url string, itemName string) map[string]int {
stats := make(map[string]int)
stats["inband_hits"] = 0
stats["outband_hits"] = 0
results := GetPrometheusMetric(url)
for idx, fam := range results {
if !strings.HasPrefix(fam.Name, "cs_") {
continue
}
log.Tracef("round %d", idx)
for _, m := range fam.Metrics {
metric, ok := m.(prom2json.Metric)
if !ok {
log.Debugf("failed to convert metric to prom2json.Metric")
continue
}
name, ok := metric.Labels["rule_name"]
if !ok {
log.Debugf("no rule_name in Metric %v", metric.Labels)
}
if name != itemName {
continue
}
band, ok := metric.Labels["type"]
if !ok {
log.Debugf("no type in Metric %v", metric.Labels)
}
value := m.(prom2json.Metric).Value
fval, err := strconv.ParseFloat(value, 32)
if err != nil {
log.Errorf("Unexpected int value %s : %s", value, err)
continue
}
ival := int(fval)
switch fam.Name {
case "cs_appsec_rule_hits":
switch band {
case "inband":
stats["inband_hits"] += ival
case "outband":
stats["outband_hits"] += ival
default:
continue
}
default:
continue
}
}
}
return stats
}
func GetPrometheusMetric(url string) []*prom2json.Family {
mfChan := make(chan *dto.MetricFamily, 1024)
// Start with the DefaultTransport for sane defaults.
transport := http.DefaultTransport.(*http.Transport).Clone()
// Conservatively disable HTTP keep-alives as this program will only
// ever need a single HTTP request.
transport.DisableKeepAlives = true
// Timeout early if the server doesn't even return the headers.
transport.ResponseHeaderTimeout = time.Minute
go func() {
defer trace.CatchPanic("crowdsec/GetPrometheusMetric")
err := prom2json.FetchMetricFamilies(url, mfChan, transport)
if err != nil {
log.Fatalf("failed to fetch prometheus metrics : %v", err)
}
}()
result := []*prom2json.Family{}
for mf := range mfChan {
result = append(result, prom2json.NewFamily(mf))
}
log.Debugf("Finished reading prometheus output, %d entries", len(result))
return result
}
type unit struct {
value int64
symbol string
}
var ranges = []unit{
{value: 1e18, symbol: "E"},
{value: 1e15, symbol: "P"},
{value: 1e12, symbol: "T"},
{value: 1e9, symbol: "G"},
{value: 1e6, symbol: "M"},
{value: 1e3, symbol: "k"},
{value: 1, symbol: ""},
}
func formatNumber(num int) string {
goodUnit := unit{}
for _, u := range ranges {
if int64(num) >= u.value {
goodUnit = u
break
}
}
if goodUnit.value == 1 {
return fmt.Sprintf("%d%s", num, goodUnit.symbol)
}
res := math.Round(float64(num)/float64(goodUnit.value)*100) / 100
return fmt.Sprintf("%.2f%s", res, goodUnit.symbol)
}
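// exampleFormatNumber is a small sketch of the rounding behaviour above:
// values below 1000 are returned as-is, larger values are scaled to the
// nearest unit and rounded to two decimals, e.g. 950 -> "950",
// 1234 -> "1.23k", 2500000 -> "2.50M".
func exampleFormatNumber() {
for _, n := range []int{950, 1234, 2500000} {
fmt.Println(formatNumber(n))
}
}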


@@ -0,0 +1,85 @@
package main
import (
"fmt"
"slices"
"strings"
"github.com/agext/levenshtein"
"github.com/spf13/cobra"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
// suggestNearestMessage returns a message with the most similar item name, if one is found
func suggestNearestMessage(hub *cwhub.Hub, itemType string, itemName string) string {
const maxDistance = 7
score := 100
nearest := ""
for _, item := range hub.GetItemMap(itemType) {
d := levenshtein.Distance(itemName, item.Name, nil)
if d < score {
score = d
nearest = item.Name
}
}
msg := fmt.Sprintf("can't find '%s' in %s", itemName, itemType)
if score < maxDistance {
msg += fmt.Sprintf(", did you mean '%s'?", nearest)
}
return msg
}
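// exampleSuggestion is a hypothetical sketch: assuming the hub index contains
// the parser "crowdsecurity/sshd-logs", a near-miss such as
// "crowdsecurity/ssh-logs" falls within maxDistance, so the returned message
// includes a "did you mean" hint.
func exampleSuggestion(hub *cwhub.Hub) {
fmt.Println(suggestNearestMessage(hub, cwhub.PARSERS, "crowdsecurity/ssh-logs"))
}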
func compAllItems(itemType string, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
hub, err := require.Hub(csConfig, nil, nil)
if err != nil {
return nil, cobra.ShellCompDirectiveDefault
}
comp := make([]string, 0)
for _, item := range hub.GetItemMap(itemType) {
if !slices.Contains(args, item.Name) && strings.Contains(item.Name, toComplete) {
comp = append(comp, item.Name)
}
}
cobra.CompDebugln(fmt.Sprintf("%s: %+v", itemType, comp), true)
return comp, cobra.ShellCompDirectiveNoFileComp
}
func compInstalledItems(itemType string, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
hub, err := require.Hub(csConfig, nil, nil)
if err != nil {
return nil, cobra.ShellCompDirectiveDefault
}
items, err := hub.GetInstalledNamesByType(itemType)
if err != nil {
cobra.CompDebugln(fmt.Sprintf("list installed %s err: %s", itemType, err), true)
return nil, cobra.ShellCompDirectiveDefault
}
comp := make([]string, 0)
if toComplete != "" {
for _, item := range items {
if strings.Contains(item, toComplete) {
comp = append(comp, item)
}
}
} else {
comp = items
}
cobra.CompDebugln(fmt.Sprintf("%s: %+v", itemType, comp), true)
return comp, cobra.ShellCompDirectiveNoFileComp
}

cmd/crowdsec-cli/itemcli.go

@@ -0,0 +1,546 @@
package main
import (
"errors"
"fmt"
"os"
"strings"
"github.com/fatih/color"
"github.com/hexops/gotextdiff"
"github.com/hexops/gotextdiff/myers"
"github.com/hexops/gotextdiff/span"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/crowdsecurity/go-cs-lib/coalesce"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
type cliHelp struct {
// Example is required, the others have a default value
// generated from the item type
use string
short string
long string
example string
}
type cliItem struct {
cfg configGetter
name string // plural, as used in the hub index
singular string
oneOrMore string // parenthetical pluralization: "parser(s)"
help cliHelp
installHelp cliHelp
removeHelp cliHelp
upgradeHelp cliHelp
inspectHelp cliHelp
inspectDetail func(item *cwhub.Item) error
listHelp cliHelp
}
func (cli cliItem) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: coalesce.String(cli.help.use, fmt.Sprintf("%s <action> [item]...", cli.name)),
Short: coalesce.String(cli.help.short, fmt.Sprintf("Manage hub %s", cli.name)),
Long: cli.help.long,
Example: cli.help.example,
Args: cobra.MinimumNArgs(1),
Aliases: []string{cli.singular},
DisableAutoGenTag: true,
}
cmd.AddCommand(cli.newInstallCmd())
cmd.AddCommand(cli.newRemoveCmd())
cmd.AddCommand(cli.newUpgradeCmd())
cmd.AddCommand(cli.newInspectCmd())
cmd.AddCommand(cli.newListCmd())
return cmd
}
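// newExampleParserCli is a hypothetical sketch of how the generic cliItem
// above could be instantiated for one item type; the field values are
// illustrative assumptions, not the definitions actually used by cscli.
func newExampleParserCli(cfg configGetter) cliItem {
return cliItem{
cfg: cfg,
name: cwhub.PARSERS,
singular: "parser",
oneOrMore: "parser(s)",
help: cliHelp{
example: "cscli parsers install crowdsecurity/sshd-logs",
},
}
}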
func (cli cliItem) install(args []string, downloadOnly bool, force bool, ignoreError bool) error {
cfg := cli.cfg()
hub, err := require.Hub(cfg, require.RemoteHub(cfg), log.StandardLogger())
if err != nil {
return err
}
for _, name := range args {
item := hub.GetItem(cli.name, name)
if item == nil {
msg := suggestNearestMessage(hub, cli.name, name)
if !ignoreError {
return errors.New(msg)
}
log.Errorf(msg)
continue
}
if err := item.Install(force, downloadOnly); err != nil {
if !ignoreError {
return fmt.Errorf("error while installing '%s': %w", item.Name, err)
}
log.Errorf("Error while installing '%s': %s", item.Name, err)
}
}
log.Infof(ReloadMessage())
return nil
}
func (cli cliItem) newInstallCmd() *cobra.Command {
var (
downloadOnly bool
force bool
ignoreError bool
)
cmd := &cobra.Command{
Use: coalesce.String(cli.installHelp.use, "install [item]..."),
Short: coalesce.String(cli.installHelp.short, fmt.Sprintf("Install given %s", cli.oneOrMore)),
Long: coalesce.String(cli.installHelp.long, fmt.Sprintf("Fetch and install one or more %s from the hub", cli.name)),
Example: cli.installHelp.example,
Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true,
ValidArgsFunction: func(_ *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compAllItems(cli.name, args, toComplete)
},
RunE: func(_ *cobra.Command, args []string) error {
return cli.install(args, downloadOnly, force, ignoreError)
},
}
flags := cmd.Flags()
flags.BoolVarP(&downloadOnly, "download-only", "d", false, "Only download packages, don't enable")
flags.BoolVar(&force, "force", false, "Force install: overwrite tainted and outdated files")
flags.BoolVar(&ignoreError, "ignore", false, fmt.Sprintf("Ignore errors when installing multiple %s", cli.name))
return cmd
}
// return the names of the installed parents of an item, used to check if we can remove it
func istalledParentNames(item *cwhub.Item) []string {
ret := make([]string, 0)
for _, parent := range item.Ancestors() {
if parent.State.Installed {
ret = append(ret, parent.Name)
}
}
return ret
}
func (cli cliItem) remove(args []string, purge bool, force bool, all bool) error {
hub, err := require.Hub(cli.cfg(), nil, log.StandardLogger())
if err != nil {
return err
}
if all {
getter := hub.GetInstalledItemsByType
if purge {
getter = hub.GetItemsByType
}
items, err := getter(cli.name)
if err != nil {
return err
}
removed := 0
for _, item := range items {
didRemove, err := item.Remove(purge, force)
if err != nil {
return err
}
if didRemove {
log.Infof("Removed %s", item.Name)
removed++
}
}
log.Infof("Removed %d %s", removed, cli.name)
if removed > 0 {
log.Infof(ReloadMessage())
}
return nil
}
if len(args) == 0 {
return fmt.Errorf("specify at least one %s to remove or '--all'", cli.singular)
}
removed := 0
for _, itemName := range args {
item := hub.GetItem(cli.name, itemName)
if item == nil {
return fmt.Errorf("can't find '%s' in %s", itemName, cli.name)
}
parents := istalledParentNames(item)
if !force && len(parents) > 0 {
log.Warningf("%s belongs to collections: %s", item.Name, parents)
log.Warningf("Run 'sudo cscli %s remove %s --force' if you want to force remove this %s", item.Type, item.Name, cli.singular)
continue
}
didRemove, err := item.Remove(purge, force)
if err != nil {
return err
}
if didRemove {
log.Infof("Removed %s", item.Name)
removed++
}
}
log.Infof("Removed %d %s", removed, cli.name)
if removed > 0 {
log.Infof(ReloadMessage())
}
return nil
}
func (cli cliItem) newRemoveCmd() *cobra.Command {
var (
purge bool
force bool
all bool
)
cmd := &cobra.Command{
Use: coalesce.String(cli.removeHelp.use, "remove [item]..."),
Short: coalesce.String(cli.removeHelp.short, fmt.Sprintf("Remove given %s", cli.oneOrMore)),
Long: coalesce.String(cli.removeHelp.long, fmt.Sprintf("Remove one or more %s", cli.name)),
Example: cli.removeHelp.example,
Aliases: []string{"delete"},
DisableAutoGenTag: true,
ValidArgsFunction: func(_ *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cli.name, args, toComplete)
},
RunE: func(_ *cobra.Command, args []string) error {
return cli.remove(args, purge, force, all)
},
}
flags := cmd.Flags()
flags.BoolVar(&purge, "purge", false, "Delete source file too")
flags.BoolVar(&force, "force", false, "Force remove: remove tainted and outdated files")
flags.BoolVar(&all, "all", false, fmt.Sprintf("Remove all the %s", cli.name))
return cmd
}
func (cli cliItem) upgrade(args []string, force bool, all bool) error {
cfg := cli.cfg()
hub, err := require.Hub(cfg, require.RemoteHub(cfg), log.StandardLogger())
if err != nil {
return err
}
if all {
items, err := hub.GetInstalledItemsByType(cli.name)
if err != nil {
return err
}
updated := 0
for _, item := range items {
didUpdate, err := item.Upgrade(force)
if err != nil {
return err
}
if didUpdate {
updated++
}
}
log.Infof("Updated %d %s", updated, cli.name)
if updated > 0 {
log.Infof(ReloadMessage())
}
return nil
}
if len(args) == 0 {
return fmt.Errorf("specify at least one %s to upgrade or '--all'", cli.singular)
}
updated := 0
for _, itemName := range args {
item := hub.GetItem(cli.name, itemName)
if item == nil {
return fmt.Errorf("can't find '%s' in %s", itemName, cli.name)
}
didUpdate, err := item.Upgrade(force)
if err != nil {
return err
}
if didUpdate {
log.Infof("Updated %s", item.Name)
updated++
}
}
if updated > 0 {
log.Infof(ReloadMessage())
}
return nil
}
func (cli cliItem) newUpgradeCmd() *cobra.Command {
var (
all bool
force bool
)
cmd := &cobra.Command{
Use: coalesce.String(cli.upgradeHelp.use, "upgrade [item]..."),
Short: coalesce.String(cli.upgradeHelp.short, fmt.Sprintf("Upgrade given %s", cli.oneOrMore)),
Long: coalesce.String(cli.upgradeHelp.long, fmt.Sprintf("Fetch and upgrade one or more %s from the hub", cli.name)),
Example: cli.upgradeHelp.example,
DisableAutoGenTag: true,
ValidArgsFunction: func(_ *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cli.name, args, toComplete)
},
RunE: func(_ *cobra.Command, args []string) error {
return cli.upgrade(args, force, all)
},
}
flags := cmd.Flags()
flags.BoolVarP(&all, "all", "a", false, fmt.Sprintf("Upgrade all the %s", cli.name))
flags.BoolVar(&force, "force", false, "Force upgrade: overwrite tainted and outdated files")
return cmd
}
func (cli cliItem) inspect(args []string, url string, diff bool, rev bool, noMetrics bool) error {
cfg := cli.cfg()
if rev && !diff {
return errors.New("--rev can only be used with --diff")
}
if url != "" {
cfg.Cscli.PrometheusUrl = url
}
remote := (*cwhub.RemoteHubCfg)(nil)
if diff {
remote = require.RemoteHub(cfg)
}
hub, err := require.Hub(cfg, remote, log.StandardLogger())
if err != nil {
return err
}
for _, name := range args {
item := hub.GetItem(cli.name, name)
if item == nil {
return fmt.Errorf("can't find '%s' in %s", name, cli.name)
}
if diff {
fmt.Println(cli.whyTainted(hub, item, rev))
continue
}
if err = inspectItem(item, !noMetrics); err != nil {
return err
}
if cli.inspectDetail != nil {
if err = cli.inspectDetail(item); err != nil {
return err
}
}
}
return nil
}
func (cli cliItem) newInspectCmd() *cobra.Command {
var (
url string
diff bool
rev bool
noMetrics bool
)
cmd := &cobra.Command{
Use: coalesce.String(cli.inspectHelp.use, "inspect [item]..."),
Short: coalesce.String(cli.inspectHelp.short, fmt.Sprintf("Inspect given %s", cli.oneOrMore)),
Long: coalesce.String(cli.inspectHelp.long, fmt.Sprintf("Inspect the state of one or more %s", cli.name)),
Example: cli.inspectHelp.example,
Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true,
ValidArgsFunction: func(_ *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cli.name, args, toComplete)
},
RunE: func(_ *cobra.Command, args []string) error {
return cli.inspect(args, url, diff, rev, noMetrics)
},
}
flags := cmd.Flags()
flags.StringVarP(&url, "url", "u", "", "Prometheus url")
flags.BoolVar(&diff, "diff", false, "Show diff with latest version (for tainted items)")
flags.BoolVar(&rev, "rev", false, "Reverse diff output")
flags.BoolVar(&noMetrics, "no-metrics", false, "Don't show metrics (when cscli.output=human)")
return cmd
}
func (cli cliItem) list(args []string, all bool) error {
hub, err := require.Hub(cli.cfg(), nil, log.StandardLogger())
if err != nil {
return err
}
items := make(map[string][]*cwhub.Item)
items[cli.name], err = selectItems(hub, cli.name, args, !all)
if err != nil {
return err
}
if err = listItems(color.Output, []string{cli.name}, items, false); err != nil {
return err
}
return nil
}
func (cli cliItem) newListCmd() *cobra.Command {
var all bool
cmd := &cobra.Command{
Use: coalesce.String(cli.listHelp.use, "list [item... | -a]"),
Short: coalesce.String(cli.listHelp.short, fmt.Sprintf("List %s", cli.oneOrMore)),
Long: coalesce.String(cli.listHelp.long, fmt.Sprintf("List of installed/available/specified %s", cli.name)),
Example: cli.listHelp.example,
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, args []string) error {
return cli.list(args, all)
},
}
flags := cmd.Flags()
flags.BoolVarP(&all, "all", "a", false, "List disabled items as well")
return cmd
}
// return the diff between the installed version and the latest version
func (cli cliItem) itemDiff(item *cwhub.Item, reverse bool) (string, error) {
if !item.State.Installed {
return "", fmt.Errorf("'%s' is not installed", item.FQName())
}
latestContent, remoteURL, err := item.FetchLatest()
if err != nil {
return "", err
}
localContent, err := os.ReadFile(item.State.LocalPath)
if err != nil {
return "", fmt.Errorf("while reading %s: %w", item.State.LocalPath, err)
}
file1 := item.State.LocalPath
file2 := remoteURL
content1 := string(localContent)
content2 := string(latestContent)
if reverse {
file1, file2 = file2, file1
content1, content2 = content2, content1
}
edits := myers.ComputeEdits(span.URIFromPath(file1), content1, content2)
diff := gotextdiff.ToUnified(file1, file2, content1, edits)
return fmt.Sprintf("%s", diff), nil
}
func (cli cliItem) whyTainted(hub *cwhub.Hub, item *cwhub.Item, reverse bool) string {
if !item.State.Installed {
return fmt.Sprintf("# %s is not installed", item.FQName())
}
if !item.State.Tainted {
return fmt.Sprintf("# %s is not tainted", item.FQName())
}
if len(item.State.TaintedBy) == 0 {
return fmt.Sprintf("# %s is tainted but we don't know why. please report this as a bug", item.FQName())
}
ret := []string{
fmt.Sprintf("# Let's see why %s is tainted.", item.FQName()),
}
for _, fqsub := range item.State.TaintedBy {
ret = append(ret, fmt.Sprintf("\n-> %s\n", fqsub))
sub, err := hub.GetItemFQ(fqsub)
if err != nil {
ret = append(ret, err.Error())
}
diff, err := cli.itemDiff(sub, reverse)
if err != nil {
ret = append(ret, err.Error())
}
if diff != "" {
ret = append(ret, diff)
} else if len(sub.State.TaintedBy) > 0 {
taintList := strings.Join(sub.State.TaintedBy, ", ")
if sub.FQName() == taintList {
// hack: avoid message "item is tainted by itself"
continue
}
ret = append(ret, fmt.Sprintf("# %s is tainted by %s", sub.FQName(), taintList))
}
}
return strings.Join(ret, "\n")
}

cmd/crowdsec-cli/items.go

@@ -0,0 +1,183 @@
package main
import (
"encoding/csv"
"encoding/json"
"fmt"
"io"
"os"
"path/filepath"
"slices"
"strings"
"gopkg.in/yaml.v3"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
// selectItems returns a slice of items of a given type, selected by name and sorted by case-insensitive name
func selectItems(hub *cwhub.Hub, itemType string, args []string, installedOnly bool) ([]*cwhub.Item, error) {
itemNames := hub.GetNamesByType(itemType)
notExist := []string{}
if len(args) > 0 {
for _, arg := range args {
if !slices.Contains(itemNames, arg) {
notExist = append(notExist, arg)
}
}
}
if len(notExist) > 0 {
return nil, fmt.Errorf("item(s) '%s' not found in %s", strings.Join(notExist, ", "), itemType)
}
if len(args) > 0 {
itemNames = args
installedOnly = false
}
items := make([]*cwhub.Item, 0, len(itemNames))
for _, itemName := range itemNames {
item := hub.GetItem(itemType, itemName)
if installedOnly && !item.State.Installed {
continue
}
items = append(items, item)
}
cwhub.SortItemSlice(items)
return items, nil
}
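// exampleInstalledParsers is a minimal sketch, assuming hub was obtained from
// require.Hub elsewhere: it relies on the installedOnly filter above to list
// only the parsers that are currently installed.
func exampleInstalledParsers(hub *cwhub.Hub) error {
items, err := selectItems(hub, cwhub.PARSERS, nil, true)
if err != nil {
return err
}
for _, item := range items {
fmt.Println(item.Name)
}
return nil
}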
func listItems(out io.Writer, itemTypes []string, items map[string][]*cwhub.Item, omitIfEmpty bool) error {
switch csConfig.Cscli.Output {
case "human":
nothingToDisplay := true
for _, itemType := range itemTypes {
if omitIfEmpty && len(items[itemType]) == 0 {
continue
}
listHubItemTable(out, "\n"+strings.ToUpper(itemType), items[itemType])
nothingToDisplay = false
}
if nothingToDisplay {
fmt.Println("No items to display")
}
case "json":
type itemHubStatus struct {
Name string `json:"name"`
LocalVersion string `json:"local_version"`
LocalPath string `json:"local_path"`
Description string `json:"description"`
UTF8Status string `json:"utf8_status"`
Status string `json:"status"`
}
hubStatus := make(map[string][]itemHubStatus)
for _, itemType := range itemTypes {
// empty slice in case there are no items of this type
hubStatus[itemType] = make([]itemHubStatus, len(items[itemType]))
for i, item := range items[itemType] {
status := item.State.Text()
statusEmo := item.State.Emoji()
hubStatus[itemType][i] = itemHubStatus{
Name: item.Name,
LocalVersion: item.State.LocalVersion,
LocalPath: item.State.LocalPath,
Description: item.Description,
Status: status,
UTF8Status: fmt.Sprintf("%v %s", statusEmo, status),
}
}
}
x, err := json.MarshalIndent(hubStatus, "", " ")
if err != nil {
return fmt.Errorf("failed to unmarshal: %w", err)
}
out.Write(x)
case "raw":
csvwriter := csv.NewWriter(out)
header := []string{"name", "status", "version", "description"}
if len(itemTypes) > 1 {
header = append(header, "type")
}
if err := csvwriter.Write(header); err != nil {
return fmt.Errorf("failed to write header: %w", err)
}
for _, itemType := range itemTypes {
for _, item := range items[itemType] {
row := []string{
item.Name,
item.State.Text(),
item.State.LocalVersion,
item.Description,
}
if len(itemTypes) > 1 {
row = append(row, itemType)
}
if err := csvwriter.Write(row); err != nil {
return fmt.Errorf("failed to write raw output: %w", err)
}
}
}
csvwriter.Flush()
}
return nil
}
func inspectItem(item *cwhub.Item, showMetrics bool) error {
switch csConfig.Cscli.Output {
case "human", "raw":
enc := yaml.NewEncoder(os.Stdout)
enc.SetIndent(2)
if err := enc.Encode(item); err != nil {
return fmt.Errorf("unable to encode item: %w", err)
}
case "json":
b, err := json.MarshalIndent(*item, "", " ")
if err != nil {
return fmt.Errorf("unable to marshal item: %w", err)
}
fmt.Print(string(b))
}
if csConfig.Cscli.Output != "human" {
return nil
}
if item.State.Tainted {
fmt.Println()
fmt.Printf(`This item is tainted. Use "%s %s inspect --diff %s" to see why.`, filepath.Base(os.Args[0]), item.Type, item.Name)
fmt.Println()
}
if showMetrics {
fmt.Printf("\nCurrent metrics: \n")
if err := ShowMetrics(item); err != nil {
return err
}
}
return nil
}


@@ -2,21 +2,22 @@ package main
import (
"context"
"errors"
"fmt"
"net/url"
"os"
"slices"
"sort"
"strings"
"github.com/go-openapi/strfmt"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"gopkg.in/yaml.v3"
"github.com/crowdsecurity/go-cs-lib/version"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/alertcontext"
"github.com/crowdsecurity/crowdsec/pkg/apiclient"
"github.com/crowdsecurity/crowdsec/pkg/csconfig"
@@ -24,103 +25,90 @@ import (
"github.com/crowdsecurity/crowdsec/pkg/exprhelpers" "github.com/crowdsecurity/crowdsec/pkg/exprhelpers"
"github.com/crowdsecurity/crowdsec/pkg/models" "github.com/crowdsecurity/crowdsec/pkg/models"
"github.com/crowdsecurity/crowdsec/pkg/parser" "github.com/crowdsecurity/crowdsec/pkg/parser"
"github.com/crowdsecurity/crowdsec/pkg/types"
) )
const LAPIURLPrefix = "v1"
type cliLapi struct {
cfg configGetter
}
func NewCLILapi(cfg configGetter) *cliLapi {
return &cliLapi{
cfg: cfg,
}
}
func (cli *cliLapi) status() error {
cfg := cli.cfg()
password := strfmt.Password(cfg.API.Client.Credentials.Password)
login := cfg.API.Client.Credentials.Login
origURL := cfg.API.Client.Credentials.URL
apiURL, err := url.Parse(origURL)
if err != nil {
return fmt.Errorf("parsing api url: %w", err)
}
hub, err := require.Hub(cfg, nil, nil)
if err != nil {
return err
}
scenarios, err := hub.GetInstalledNamesByType(cwhub.SCENARIOS)
if err != nil {
return fmt.Errorf("failed to get scenarios: %w", err)
}
Client, err = apiclient.NewDefaultClient(apiURL,
LAPIURLPrefix,
fmt.Sprintf("crowdsec/%s", version.String()),
nil)
if err != nil {
return fmt.Errorf("init default client: %w", err)
}
t := models.WatcherAuthRequest{
MachineID: &login,
Password: &password,
Scenarios: scenarios,
}
log.Infof("Loaded credentials from %s", cfg.API.Client.CredentialsFilePath)
// use the original string because apiURL would print 'http://unix/'
log.Infof("Trying to authenticate with username %s on %s", login, origURL)
_, _, err = Client.Auth.AuthenticateWatcher(context.Background(), t)
if err != nil {
return fmt.Errorf("failed to authenticate to Local API (LAPI): %w", err)
}
log.Infof("You can successfully interact with Local API (LAPI)")
return nil
}
func (cli *cliLapi) register(apiURL string, outputFile string, machine string) error {
var err error
lapiUser := machine
cfg := cli.cfg()
if lapiUser == "" {
lapiUser, err = generateID("")
if err != nil {
return fmt.Errorf("unable to generate machine id: %w", err)
}
}
password := strfmt.Password(generatePassword(passwordLength))
apiurl, err := prepareAPIURL(cfg.API.Client, apiURL)
if err != nil {
return fmt.Errorf("parsing api url: %w", err)
}
_, err = apiclient.RegisterClient(&apiclient.Config{
MachineID: lapiUser,
Password: password,
@@ -128,208 +116,284 @@ func runLapiRegister(cmd *cobra.Command, args []string) error {
URL: apiurl,
VersionPrefix: LAPIURLPrefix,
}, nil)
if err != nil {
return fmt.Errorf("api client register: %w", err)
}
log.Printf("Successfully registered to Local API (LAPI)")
var dumpFile string
if outputFile != "" {
dumpFile = outputFile
} else if cfg.API.Client.CredentialsFilePath != "" {
dumpFile = cfg.API.Client.CredentialsFilePath
} else {
dumpFile = ""
}
apiCfg := csconfig.ApiCredentialsCfg{
Login: lapiUser,
Password: password.String(),
URL: apiURL,
}
apiConfigDump, err := yaml.Marshal(apiCfg)
if err != nil {
return fmt.Errorf("unable to marshal api credentials: %w", err)
}
if dumpFile != "" {
err = os.WriteFile(dumpFile, apiConfigDump, 0o600)
if err != nil {
return fmt.Errorf("write api credentials to '%s' failed: %w", dumpFile, err)
}
log.Printf("Local API credentials written to '%s'", dumpFile)
} else {
fmt.Printf("%s\n", string(apiConfigDump))
}
log.Warning(ReloadMessage())
return nil
}
// prepareAPIURL checks/fixes a LAPI connection url (http, https or socket) and returns an URL struct
func prepareAPIURL(clientCfg *csconfig.LocalApiClientCfg, apiURL string) (*url.URL, error) {
	if apiURL == "" {
		if clientCfg == nil || clientCfg.Credentials == nil || clientCfg.Credentials.URL == "" {
			return nil, errors.New("no Local API URL. Please provide it in your configuration or with the -u parameter")
		}

		apiURL = clientCfg.Credentials.URL
	}

	// URL needs to end with /, but user doesn't care
	if !strings.HasSuffix(apiURL, "/") {
		apiURL += "/"
	}

	// URL needs to start with http://, but user doesn't care
	if !strings.HasPrefix(apiURL, "http://") && !strings.HasPrefix(apiURL, "https://") && !strings.HasPrefix(apiURL, "/") {
		apiURL = "http://" + apiURL
	}

	return url.Parse(apiURL)
}
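// Illustration only (not part of this diff): a quick sketch of how prepareAPIURL
// normalizes its input, assuming it is called from the same package; the unit
// tests added later in this change cover the same cases authoritatively.
func examplePrepareAPIURLUsage() {
	for _, in := range []string{"localhost:81", "https://localhost:81", "/path/socket"} {
		u, err := prepareAPIURL(nil, in)
		if err != nil {
			fmt.Println(in, "->", err)
			continue
		}
		fmt.Println(in, "->", u.String()) // e.g. localhost:81 -> http://localhost:81/
	}
}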
func (cli *cliLapi) newStatusCmd() *cobra.Command {
	cmdLapiStatus := &cobra.Command{
		Use:               "status",
		Short:             "Check authentication to Local API (LAPI)",
		Args:              cobra.MinimumNArgs(0),
		DisableAutoGenTag: true,
		RunE: func(_ *cobra.Command, _ []string) error {
			return cli.status()
		},
	}

	return cmdLapiStatus
}
func (cli *cliLapi) newRegisterCmd() *cobra.Command {
	var (
		apiURL     string
		outputFile string
		machine    string
	)

	cmd := &cobra.Command{
		Use:   "register",
		Short: "Register a machine to Local API (LAPI)",
		Long: `Register your machine to the Local API (LAPI).
Keep in mind the machine needs to be validated by an administrator on LAPI side to be effective.`,
		Args:              cobra.MinimumNArgs(0),
		DisableAutoGenTag: true,
		RunE: func(_ *cobra.Command, _ []string) error {
			return cli.register(apiURL, outputFile, machine)
		},
	}

	flags := cmd.Flags()
	flags.StringVarP(&apiURL, "url", "u", "", "URL of the API (ie. http://127.0.0.1)")
	flags.StringVarP(&outputFile, "file", "f", "", "output file destination")
	flags.StringVar(&machine, "machine", "", "Name of the machine to register with")

	return cmd
}
func (cli *cliLapi) NewCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:               "lapi [action]",
		Short:             "Manage interaction with Local API (LAPI)",
		Args:              cobra.MinimumNArgs(1),
		DisableAutoGenTag: true,
		PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
			if err := cli.cfg().LoadAPIClient(); err != nil {
				return fmt.Errorf("loading api client: %w", err)
			}
			return nil
		},
	}

	cmd.AddCommand(cli.newRegisterCmd())
	cmd.AddCommand(cli.newStatusCmd())
	cmd.AddCommand(cli.newContextCmd())

	return cmd
}
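// Note (not part of this diff): the cliLapi type and its constructor are defined
// outside this hunk. A minimal sketch of the pattern used throughout this refactor,
// mirroring cliMachines and cliRoot shown later in this diff; exact field names
// are assumptions:
//
//	type cliLapi struct {
//		cfg configGetter
//	}
//
//	func NewCLILapi(cfg configGetter) *cliLapi {
//		return &cliLapi{cfg: cfg}
//	}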
func (cli *cliLapi) addContext(key string, values []string) error {
	cfg := cli.cfg()

	if err := alertcontext.ValidateContextExpr(key, values); err != nil {
		return fmt.Errorf("invalid context configuration: %w", err)
	}

	if _, ok := cfg.Crowdsec.ContextToSend[key]; !ok {
		cfg.Crowdsec.ContextToSend[key] = make([]string, 0)

		log.Infof("key '%s' added", key)
	}

	data := cfg.Crowdsec.ContextToSend[key]

	for _, val := range values {
		if !slices.Contains(data, val) {
			log.Infof("value '%s' added to key '%s'", val, key)
			data = append(data, val)
		}

		cfg.Crowdsec.ContextToSend[key] = data
	}

	if err := cfg.Crowdsec.DumpContextConfigFile(); err != nil {
		return err
	}

	return nil
}
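// Illustration only (not part of this diff): after calls such as
//
//	cli.addContext("source_ip", []string{"evt.Meta.source_ip"})
//	cli.addContext("file_source", []string{"evt.Line.Src"})
//
// cfg.Crowdsec.ContextToSend holds (and DumpContextConfigFile persists) the
// equivalent of:
//
//	map[string][]string{
//		"source_ip":   {"evt.Meta.source_ip"},
//		"file_source": {"evt.Line.Src"},
//	}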
func (cli *cliLapi) newContextAddCmd() *cobra.Command {
	var (
		keyToAdd    string
		valuesToAdd []string
	)

	cmd := &cobra.Command{
		Use:   "add",
		Short: "Add context to send with alerts. You must specify the output key with the expr value you want",
		Example: `cscli lapi context add --key source_ip --value evt.Meta.source_ip
cscli lapi context add --key file_source --value evt.Line.Src
cscli lapi context add --value evt.Meta.source_ip --value evt.Meta.target_user
`,
		DisableAutoGenTag: true,
		RunE: func(_ *cobra.Command, _ []string) error {
			hub, err := require.Hub(cli.cfg(), nil, nil)
			if err != nil {
				return err
			}

			if err = alertcontext.LoadConsoleContext(cli.cfg(), hub); err != nil {
				return fmt.Errorf("while loading context: %w", err)
			}

			if keyToAdd != "" {
				if err := cli.addContext(keyToAdd, valuesToAdd); err != nil {
					return err
				}
				return nil
			}

			for _, v := range valuesToAdd {
				keySlice := strings.Split(v, ".")
				key := keySlice[len(keySlice)-1]
				value := []string{v}
				if err := cli.addContext(key, value); err != nil {
					return err
				}
			}

			return nil
		},
	}

	flags := cmd.Flags()
	flags.StringVarP(&keyToAdd, "key", "k", "", "The key of the different values to send")
	flags.StringSliceVar(&valuesToAdd, "value", []string{}, "The expr fields to associate with the key")
	cmd.MarkFlagRequired("value")

	return cmd
}
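// Illustration only (not part of this diff): when --key is omitted, each value's
// key is derived from its last dot-separated element, as in the RunE above.
func exampleDeriveContextKey() {
	for _, v := range []string{"evt.Meta.source_ip", "evt.Line.Src"} {
		parts := strings.Split(v, ".")
		fmt.Printf("%s -> key %q\n", v, parts[len(parts)-1]) // source_ip, Src
	}
}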
func (cli *cliLapi) newContextStatusCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:               "status",
		Short:             "List context to send with alerts",
		DisableAutoGenTag: true,
		RunE: func(_ *cobra.Command, _ []string) error {
			cfg := cli.cfg()
			hub, err := require.Hub(cfg, nil, nil)
			if err != nil {
				return err
			}

			if err = alertcontext.LoadConsoleContext(cfg, hub); err != nil {
				return fmt.Errorf("while loading context: %w", err)
			}

			if len(cfg.Crowdsec.ContextToSend) == 0 {
				fmt.Println("No context found on this agent. You can use 'cscli lapi context add' to add context to your alerts.")
				return nil
			}

			dump, err := yaml.Marshal(cfg.Crowdsec.ContextToSend)
			if err != nil {
				return fmt.Errorf("unable to show context status: %w", err)
			}

			fmt.Print(string(dump))

			return nil
		},
	}

	return cmd
}
func (cli *cliLapi) newContextDetectCmd() *cobra.Command {
	var detectAll bool

	cmd := &cobra.Command{
		Use:   "detect",
		Short: "Detect available fields from the installed parsers",
		Example: `cscli lapi context detect --all
cscli lapi context detect crowdsecurity/sshd-logs
`,
		DisableAutoGenTag: true,
		RunE: func(cmd *cobra.Command, args []string) error {
			cfg := cli.cfg()
			if !detectAll && len(args) == 0 {
				log.Infof("Please provide parsers to detect or --all flag.")
				printHelp(cmd)
			}

			// to avoid all the log.Info from the loaders functions
			log.SetLevel(log.WarnLevel)

			if err := exprhelpers.Init(nil); err != nil {
				return fmt.Errorf("failed to init expr helpers: %w", err)
			}

			hub, err := require.Hub(cfg, nil, nil)
			if err != nil {
				return err
			}

			csParsers := parser.NewParsers(hub)
			if csParsers, err = parser.LoadParsers(cfg, csParsers); err != nil {
				return fmt.Errorf("unable to load parsers: %w", err)
			}

			fieldByParsers := make(map[string][]string)

@@ -349,7 +413,6 @@ cscli lapi context detect crowdsecurity/sshd-logs
					fieldByParsers[node.Name] = append(fieldByParsers[node.Name], field)
				}
			}

			fmt.Printf("Acquisition :\n\n")

@@ -382,84 +445,89 @@ cscli lapi context detect crowdsecurity/sshd-logs
				log.Errorf("parser '%s' not found, can't detect fields", parserNotFound)
			}

			return nil
		},
	}

	cmd.Flags().BoolVarP(&detectAll, "all", "a", false, "Detect evt field for all installed parser")

	return cmd
}
func (cli *cliLapi) newContextDeleteCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:               "delete",
		DisableAutoGenTag: true,
		RunE: func(_ *cobra.Command, _ []string) error {
			filePath := cli.cfg().Crowdsec.ConsoleContextPath
			if filePath == "" {
				filePath = "the context file"
			}

			fmt.Printf("Command 'delete' is deprecated, please manually edit %s.", filePath)

			return nil
		},
	}

	return cmd
}
func (cli *cliLapi) newContextCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:               "context [command]",
		Short:             "Manage context to send with alerts",
		DisableAutoGenTag: true,
		PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
			cfg := cli.cfg()
			if err := cfg.LoadCrowdsec(); err != nil {
				fileNotFoundMessage := fmt.Sprintf("failed to open context file: open %s: no such file or directory", cfg.Crowdsec.ConsoleContextPath)
				if err.Error() != fileNotFoundMessage {
					return fmt.Errorf("unable to load CrowdSec agent configuration: %w", err)
				}
			}
			if cfg.DisableAgent {
				return errors.New("agent is disabled and lapi context can only be used on the agent")
			}

			return nil
		},
		Run: func(cmd *cobra.Command, _ []string) {
			printHelp(cmd)
		},
	}

	cmd.AddCommand(cli.newContextAddCmd())
	cmd.AddCommand(cli.newContextStatusCmd())
	cmd.AddCommand(cli.newContextDetectCmd())
	cmd.AddCommand(cli.newContextDeleteCmd())

	return cmd
}
func detectStaticField(grokStatics []parser.ExtraField) []string {
	ret := make([]string, 0)

	for _, static := range grokStatics {
		if static.Parsed != "" {
			fieldName := fmt.Sprintf("evt.Parsed.%s", static.Parsed)
			if !slices.Contains(ret, fieldName) {
				ret = append(ret, fieldName)
			}
		}

		if static.Meta != "" {
			fieldName := fmt.Sprintf("evt.Meta.%s", static.Meta)
			if !slices.Contains(ret, fieldName) {
				ret = append(ret, fieldName)
			}
		}

		if static.TargetByName != "" {
			fieldName := static.TargetByName
			if !strings.HasPrefix(fieldName, "evt.") {
				fieldName = "evt." + fieldName
			}

			if !slices.Contains(ret, fieldName) {
				ret = append(ret, fieldName)
			}

@@ -470,7 +538,8 @@ func detectStaticField(GrokStatics []types.ExtraField) []string {
		}
	}
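// Illustration only (not part of this diff): the fields reported for a small set
// of statics, using the parser.ExtraField members referenced above.
func exampleDetectStaticField() {
	statics := []parser.ExtraField{
		{Parsed: "remote_addr"},
		{Meta: "source_ip"},
		{TargetByName: "Parsed.request"},
	}
	// -> [evt.Parsed.remote_addr evt.Meta.source_ip evt.Parsed.request]
	fmt.Println(detectStaticField(statics))
}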
func detectNode(node parser.Node, parserCTX parser.UnixParserCtx) []string {
	ret := make([]string, 0)

	if node.Grok.RunTimeRegexp != nil {
		for _, capturedField := range node.Grok.RunTimeRegexp.Names() {
			fieldName := fmt.Sprintf("evt.Parsed.%s", capturedField)

@@ -482,13 +551,13 @@ func detectNode(node parser.Node, parserCTX parser.UnixParserCtx) []string {

	if node.Grok.RegexpName != "" {
		grokCompiled, err := parserCTX.Grok.Get(node.Grok.RegexpName)
		// ignore error (parser does not exist?)
		if err == nil {
			for _, capturedField := range grokCompiled.Names() {
				fieldName := fmt.Sprintf("evt.Parsed.%s", capturedField)
				if !slices.Contains(ret, fieldName) {
					ret = append(ret, fieldName)
				}
			}
		}
	}

@@ -515,7 +584,7 @@ func detectNode(node parser.Node, parserCTX parser.UnixParserCtx) []string {
	}
func detectSubNode(node parser.Node, parserCTX parser.UnixParserCtx) []string {
	ret := make([]string, 0)

	for _, subnode := range node.LeavesNodes {
		if subnode.Grok.RunTimeRegexp != nil {

@@ -526,15 +595,16 @@ func detectSubNode(node parser.Node, parserCTX parser.UnixParserCtx) []string {
				}
			}
		}

		if subnode.Grok.RegexpName != "" {
			grokCompiled, err := parserCTX.Grok.Get(subnode.Grok.RegexpName)
			// ignore error (parser does not exist?)
			if err == nil {
				for _, capturedField := range grokCompiled.Names() {
					fieldName := fmt.Sprintf("evt.Parsed.%s", capturedField)
					if !slices.Contains(ret, fieldName) {
						ret = append(ret, fieldName)
					}
				}
			}
		}


@@ -0,0 +1,49 @@
package main

import (
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"

	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
)

func TestPrepareAPIURL_NoProtocol(t *testing.T) {
	url, err := prepareAPIURL(nil, "localhost:81")
	require.NoError(t, err)
	assert.Equal(t, "http://localhost:81/", url.String())
}

func TestPrepareAPIURL_Http(t *testing.T) {
	url, err := prepareAPIURL(nil, "http://localhost:81")
	require.NoError(t, err)
	assert.Equal(t, "http://localhost:81/", url.String())
}

func TestPrepareAPIURL_Https(t *testing.T) {
	url, err := prepareAPIURL(nil, "https://localhost:81")
	require.NoError(t, err)
	assert.Equal(t, "https://localhost:81/", url.String())
}

func TestPrepareAPIURL_UnixSocket(t *testing.T) {
	url, err := prepareAPIURL(nil, "/path/socket")
	require.NoError(t, err)
	assert.Equal(t, "/path/socket/", url.String())
}

func TestPrepareAPIURL_Empty(t *testing.T) {
	_, err := prepareAPIURL(nil, "")
	require.Error(t, err)
}

func TestPrepareAPIURL_Empty_ConfigOverride(t *testing.T) {
	url, err := prepareAPIURL(&csconfig.LocalApiClientCfg{
		Credentials: &csconfig.ApiCredentialsCfg{
			URL: "localhost:80",
		},
	}, "")
	require.NoError(t, err)
	assert.Equal(t, "http://localhost:80/", url.String())
}


@@ -4,34 +4,32 @@ import (
	saferand "crypto/rand"
	"encoding/csv"
	"encoding/json"
	"errors"
	"fmt"
	"math/big"
	"os"
	"slices"
	"strings"
	"time"

	"github.com/AlecAivazis/survey/v2"
	"github.com/fatih/color"
	"github.com/go-openapi/strfmt"
	"github.com/google/uuid"
	log "github.com/sirupsen/logrus"
	"github.com/spf13/cobra"
	"gopkg.in/yaml.v3"

	"github.com/crowdsecurity/machineid"

	"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
	"github.com/crowdsecurity/crowdsec/pkg/database"
	"github.com/crowdsecurity/crowdsec/pkg/database/ent"
	"github.com/crowdsecurity/crowdsec/pkg/types"
)

const passwordLength = 64
func generatePassword(length int) string {
	upper := "ABCDEFGHIJKLMNOPQRSTUVWXY"

@@ -42,11 +40,13 @@ func generatePassword(length int) string {
	charsetLength := len(charset)

	buf := make([]byte, length)

	for i := 0; i < length; i++ {
		rInt, err := saferand.Int(saferand.Reader, big.NewInt(int64(charsetLength)))
		if err != nil {
			log.Fatalf("failed getting data from prng for password generation : %s", err)
		}

		buf[i] = charset[rInt.Int64()]
	}

@@ -61,12 +61,14 @@ func generateIDPrefix() (string, error) {
	if err == nil {
		return prefix, nil
	}

	log.Debugf("failed to get machine-id with usual files: %s", err)

	bID, err := uuid.NewRandom()
	if err == nil {
		return bID.String(), nil
	}

	return "", fmt.Errorf("generating machine id: %w", err)
}

@@ -77,356 +79,427 @@ func generateID(prefix string) (string, error) {
	if prefix == "" {
		prefix, err = generateIDPrefix()
	}

	if err != nil {
		return "", err
	}

	prefix = strings.ReplaceAll(prefix, "-", "")[:32]
	suffix := generatePassword(16)

	return prefix + suffix, nil
}
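// Illustration only (not part of this diff): an auto-generated machine ID is a
// 32-character machine-id/UUID-derived prefix followed by a 16-character random
// suffix, i.e. a 48-character login:
//
//	bID, _ := uuid.NewRandom()
//	prefix := strings.ReplaceAll(bID.String(), "-", "")[:32]
//	machineID := prefix + generatePassword(16)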
// getLastHeartbeat returns the last heartbeat timestamp of a machine
// and a boolean indicating if the machine is considered active or not.
func getLastHeartbeat(m *ent.Machine) (string, bool) {
	if m.LastHeartbeat == nil {
		return "-", false
	}

	elapsed := time.Now().UTC().Sub(*m.LastHeartbeat)

	hb := elapsed.Truncate(time.Second).String()
	if elapsed > 2*time.Minute {
		return hb, false
	}

	return hb, true
}
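// Illustration only (not part of this diff): callers branch on the second return
// value, as getAgentsTable does further below to flag stale machines:
//
//	hb, active := getLastHeartbeat(m)
//	if !active {
//		hb = emoji.Warning + " " + hb // heartbeat missing or older than 2 minutes
//	}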
type cliMachines struct {
	db  *database.Client
	cfg configGetter
}

func NewCLIMachines(cfg configGetter) *cliMachines {
	return &cliMachines{
		cfg: cfg,
	}
}

func (cli *cliMachines) NewCommand() *cobra.Command {
	cmd := &cobra.Command{
		Use:   "machines [action]",
		Short: "Manage local API machines [requires local API]",
		Long: `To list/add/delete/validate/prune machines.
Note: This command requires database direct access, so is intended to be run on the local API machine.
`,
		Example:           `cscli machines [action]`,
		DisableAutoGenTag: true,
		Aliases:           []string{"machine"},
		PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
			var err error
			if err = require.LAPI(cli.cfg()); err != nil {
				return err
			}
			cli.db, err = database.NewClient(cli.cfg().DbConfig)
			if err != nil {
				return fmt.Errorf("unable to create new database client: %w", err)
			}

			return nil
		},
	}

	cmd.AddCommand(cli.newListCmd())
	cmd.AddCommand(cli.newAddCmd())
	cmd.AddCommand(cli.newDeleteCmd())
	cmd.AddCommand(cli.newValidateCmd())
	cmd.AddCommand(cli.newPruneCmd())

	return cmd
}

func (cli *cliMachines) list() error {
	out := color.Output

	machines, err := cli.db.ListMachines()
	if err != nil {
		return fmt.Errorf("unable to list machines: %w", err)
	}

	switch cli.cfg().Cscli.Output {
	case "human":
		getAgentsTable(out, machines)
	case "json":
		enc := json.NewEncoder(out)
		enc.SetIndent("", " ")

		if err := enc.Encode(machines); err != nil {
			return errors.New("failed to marshal")
		}

		return nil
	case "raw":
		csvwriter := csv.NewWriter(out)

		err := csvwriter.Write([]string{"machine_id", "ip_address", "updated_at", "validated", "version", "auth_type", "last_heartbeat"})
		if err != nil {
			return fmt.Errorf("failed to write header: %w", err)
		}

		for _, m := range machines {
			validated := "false"
			if m.IsValidated {
				validated = "true"
			}

			hb, _ := getLastHeartbeat(m)

			if err := csvwriter.Write([]string{m.MachineId, m.IpAddress, m.UpdatedAt.Format(time.RFC3339), validated, m.Version, m.AuthType, hb}); err != nil {
				return fmt.Errorf("failed to write raw output: %w", err)
			}
		}

		csvwriter.Flush()
	}

	return nil
}
func (cli *cliMachines) newListCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:               "list",
		Short:             "list all machines in the database",
		Long:              `list all machines in the database with their status and last heartbeat`,
		Example:           `cscli machines list`,
		Args:              cobra.NoArgs,
		DisableAutoGenTag: true,
		RunE: func(_ *cobra.Command, _ []string) error {
			return cli.list()
		},
	}

	return cmd
}
func (cli *cliMachines) newAddCmd() *cobra.Command {
	var (
		password    MachinePassword
		dumpFile    string
		apiURL      string
		interactive bool
		autoAdd     bool
		force       bool
	)

	cmd := &cobra.Command{
		Use:               "add",
		Short:             "add a single machine to the database",
		DisableAutoGenTag: true,
		Long:              `Register a new machine in the database. cscli should be on the same machine as LAPI.`,
		Example: `cscli machines add --auto
cscli machines add MyTestMachine --auto
cscli machines add MyTestMachine --password MyPassword
cscli machines add -f- --auto > /tmp/mycreds.yaml`,
		RunE: func(_ *cobra.Command, args []string) error {
			return cli.add(args, string(password), dumpFile, apiURL, interactive, autoAdd, force)
		},
	}

	flags := cmd.Flags()
	flags.VarP(&password, "password", "p", "machine password to login to the API")
	flags.StringVarP(&dumpFile, "file", "f", "", "output file destination (defaults to "+csconfig.DefaultConfigPath("local_api_credentials.yaml")+")")
	flags.StringVarP(&apiURL, "url", "u", "", "URL of the local API")
	flags.BoolVarP(&interactive, "interactive", "i", false, "interactive mode to enter the password")
	flags.BoolVarP(&autoAdd, "auto", "a", false, "automatically generate password (and username if not provided)")
	flags.BoolVar(&force, "force", false, "will force add the machine if it already exists")

	return cmd
}
outputFile, err := flags.GetString("file") func (cli *cliMachines) add(args []string, machinePassword string, dumpFile string, apiURL string, interactive bool, autoAdd bool, force bool) error {
if err != nil { var (
return err err error
} machineID string
)
apiURL, err := flags.GetString("url")
if err != nil {
return err
}
interactive, err := flags.GetBool("interactive")
if err != nil {
return err
}
autoAdd, err := flags.GetBool("auto")
if err != nil {
return err
}
forceAdd, err := flags.GetBool("force")
if err != nil {
return err
}
var machineID string
// create machineID if not specified by user // create machineID if not specified by user
if len(args) == 0 { if len(args) == 0 {
if !autoAdd { if !autoAdd {
printHelp(cmd) return errors.New("please specify a machine name to add, or use --auto")
return nil
} }
machineID, err = generateID("") machineID, err = generateID("")
if err != nil { if err != nil {
return fmt.Errorf("unable to generate machine id: %s", err) return fmt.Errorf("unable to generate machine id: %w", err)
} }
} else { } else {
machineID = args[0] machineID = args[0]
} }
clientCfg := cli.cfg().API.Client
serverCfg := cli.cfg().API.Server
/*check if file already exists*/ /*check if file already exists*/
if outputFile != "" { if dumpFile == "" && clientCfg != nil && clientCfg.CredentialsFilePath != "" {
dumpFile = outputFile credFile := clientCfg.CredentialsFilePath
} else if csConfig.API.Client != nil && csConfig.API.Client.CredentialsFilePath != "" { // use the default only if the file does not exist
dumpFile = csConfig.API.Client.CredentialsFilePath _, err = os.Stat(credFile)
switch {
case os.IsNotExist(err) || force:
dumpFile = credFile
case err != nil:
return fmt.Errorf("unable to stat '%s': %w", credFile, err)
default:
return fmt.Errorf(`credentials file '%s' already exists: please remove it, use "--force" or specify a different file with "-f" ("-f -" for standard output)`, credFile)
}
}
if dumpFile == "" {
return errors.New(`please specify a file to dump credentials to, with -f ("-f -" for standard output)`)
} }
// create a password if it's not specified by user // create a password if it's not specified by user
if machinePassword == "" && !interactive { if machinePassword == "" && !interactive {
if !autoAdd { if !autoAdd {
printHelp(cmd) return errors.New("please specify a password with --password or use --auto")
return nil
} }
machinePassword = generatePassword(passwordLength) machinePassword = generatePassword(passwordLength)
} else if machinePassword == "" && interactive { } else if machinePassword == "" && interactive {
qs := &survey.Password{ qs := &survey.Password{
Message: "Please provide a password for the machine", Message: "Please provide a password for the machine:",
} }
survey.AskOne(qs, &machinePassword) survey.AskOne(qs, &machinePassword)
} }
password := strfmt.Password(machinePassword) password := strfmt.Password(machinePassword)
_, err = dbClient.CreateMachine(&machineID, &password, "", true, forceAdd, types.PasswordAuthType)
_, err = cli.db.CreateMachine(&machineID, &password, "", true, force, types.PasswordAuthType)
if err != nil { if err != nil {
return fmt.Errorf("unable to create machine: %s", err) return fmt.Errorf("unable to create machine: %w", err)
} }
log.Infof("Machine '%s' successfully added to the local API", machineID)
fmt.Fprintf(os.Stderr, "Machine '%s' successfully added to the local API.\n", machineID)
if apiURL == "" { if apiURL == "" {
if csConfig.API.Client != nil && csConfig.API.Client.Credentials != nil && csConfig.API.Client.Credentials.URL != "" { if clientCfg != nil && clientCfg.Credentials != nil && clientCfg.Credentials.URL != "" {
apiURL = csConfig.API.Client.Credentials.URL apiURL = clientCfg.Credentials.URL
} else if csConfig.API.Server != nil && csConfig.API.Server.ListenURI != "" { } else if serverCfg.ClientURL() != "" {
apiURL = "http://" + csConfig.API.Server.ListenURI apiURL = serverCfg.ClientURL()
} else { } else {
return fmt.Errorf("unable to dump an api URL. Please provide it in your configuration or with the -u parameter") return errors.New("unable to dump an api URL. Please provide it in your configuration or with the -u parameter")
} }
} }
apiCfg := csconfig.ApiCredentialsCfg{ apiCfg := csconfig.ApiCredentialsCfg{
Login: machineID, Login: machineID,
Password: password.String(), Password: password.String(),
URL: apiURL, URL: apiURL,
} }
apiConfigDump, err := yaml.Marshal(apiCfg) apiConfigDump, err := yaml.Marshal(apiCfg)
if err != nil { if err != nil {
return fmt.Errorf("unable to marshal api credentials: %s", err) return fmt.Errorf("unable to marshal api credentials: %w", err)
} }
if dumpFile != "" && dumpFile != "-" { if dumpFile != "" && dumpFile != "-" {
err = os.WriteFile(dumpFile, apiConfigDump, 0644) if err = os.WriteFile(dumpFile, apiConfigDump, 0o600); err != nil {
if err != nil { return fmt.Errorf("write api credentials in '%s' failed: %w", dumpFile, err)
return fmt.Errorf("write api credentials in '%s' failed: %s", dumpFile, err)
} }
log.Printf("API credentials dumped to '%s'", dumpFile)
fmt.Fprintf(os.Stderr, "API credentials written to '%s'.\n", dumpFile)
} else { } else {
fmt.Printf("%s\n", string(apiConfigDump)) fmt.Print(string(apiConfigDump))
} }
return nil return nil
} }
func (cli *cliMachines) deleteValid(_ *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
	machines, err := cli.db.ListMachines()
	if err != nil {
		cobra.CompError("unable to list machines " + err.Error())
	}

	ret := []string{}

	for _, machine := range machines {
		if strings.Contains(machine.MachineId, toComplete) && !slices.Contains(args, machine.MachineId) {
			ret = append(ret, machine.MachineId)
		}
	}

	return ret, cobra.ShellCompDirectiveNoFileComp
}
func (cli *cliMachines) delete(machines []string) error {
	for _, machineID := range machines {
		if err := cli.db.DeleteWatcher(machineID); err != nil {
			log.Errorf("unable to delete machine '%s': %s", machineID, err)
			return nil
		}

		log.Infof("machine '%s' deleted successfully", machineID)
	}

	return nil
}
func (cli *cliMachines) newDeleteCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:               "delete [machine_name]...",
		Short:             "delete machine(s) by name",
		Example:           `cscli machines delete "machine1" "machine2"`,
		Args:              cobra.MinimumNArgs(1),
		Aliases:           []string{"remove"},
		DisableAutoGenTag: true,
		ValidArgsFunction: cli.deleteValid,
		RunE: func(_ *cobra.Command, args []string) error {
			return cli.delete(args)
		},
	}

	return cmd
}
func (cli *cliMachines) prune(duration time.Duration, notValidOnly bool, force bool) error {
if duration < 2*time.Minute && !notValidOnly {
if yes, err := askYesNo(
"The duration you provided is less than 2 minutes. " +
"This can break installations if the machines are only temporarily disconnected. Continue?", false); err != nil {
return err
} else if !yes {
fmt.Println("User aborted prune. No changes were made.")
return nil
}
}
machines := []*ent.Machine{}
if pending, err := cli.db.QueryPendingMachine(); err == nil {
machines = append(machines, pending...)
}
if !notValidOnly {
if pending, err := cli.db.QueryLastValidatedHeartbeatLT(time.Now().UTC().Add(-duration)); err == nil {
machines = append(machines, pending...)
}
}
if len(machines) == 0 {
fmt.Println("No machines to prune.")
return nil
}
getAgentsTable(color.Output, machines)
if !force {
if yes, err := askYesNo(
"You are about to PERMANENTLY remove the above machines from the database. " +
"These will NOT be recoverable. Continue?", false); err != nil {
return err
} else if !yes {
fmt.Println("User aborted prune. No changes were made.")
return nil
}
}
deleted, err := cli.db.BulkDeleteWatchers(machines)
if err != nil {
return fmt.Errorf("unable to prune machines: %w", err)
}
fmt.Fprintf(os.Stderr, "successfully delete %d machines\n", deleted)
return nil
}
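// askYesNo is referenced above but defined outside this diff; a minimal sketch of
// such a confirmation helper, assuming the survey package already imported in this
// file, could look like this (name and behaviour are assumptions):
//
//	func askYesNo(message string, defaultAnswer bool) (bool, error) {
//		var answer bool
//		prompt := &survey.Confirm{Message: message, Default: defaultAnswer}
//		if err := survey.AskOne(prompt, &answer); err != nil {
//			return defaultAnswer, err
//		}
//		return answer, nil
//	}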
func (cli *cliMachines) newPruneCmd() *cobra.Command {
var (
duration time.Duration
notValidOnly bool
force bool
)
const defaultDuration = 10 * time.Minute
cmd := &cobra.Command{
Use: "prune",
Short: "prune multiple machines from the database",
Long: `prune multiple machines that are not validated or have not connected to the local API in a given duration.`,
Example: `cscli machines prune
cscli machines prune --duration 1h
cscli machines prune --not-validated-only --force`,
Args: cobra.NoArgs,
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, _ []string) error {
return cli.prune(duration, notValidOnly, force)
},
}
flags := cmd.Flags()
flags.DurationVarP(&duration, "duration", "d", defaultDuration, "duration of time since validated machine last heartbeat")
flags.BoolVar(&notValidOnly, "not-validated-only", false, "only prune machines that are not validated")
flags.BoolVar(&force, "force", false, "force prune without asking for confirmation")
return cmd
}
func (cli *cliMachines) validate(machineID string) error {
if err := cli.db.ValidateMachine(machineID); err != nil {
return fmt.Errorf("unable to validate machine '%s': %w", machineID, err)
}
log.Infof("machine '%s' validated successfully", machineID)
return nil
}
func (cli *cliMachines) newValidateCmd() *cobra.Command {
	cmd := &cobra.Command{
		Use:               "validate",
		Short:             "validate a machine to access the local API",
		Long:              `validate a machine to access the local API.`,
		Example:           `cscli machines validate "machine_name"`,
		Args:              cobra.ExactArgs(1),
		DisableAutoGenTag: true,
		RunE: func(_ *cobra.Command, args []string) error {
			return cli.validate(args[0])
		},
	}

	return cmd
}
func NewMachinesCmd() *cobra.Command {
var cmdMachines = &cobra.Command{
Use: "machines [action]",
Short: "Manage local API machines [requires local API]",
Long: `To list/add/delete/validate machines.
Note: This command requires database direct access, so is intended to be run on the local API machine.
`,
Example: `cscli machines [action]`,
DisableAutoGenTag: true,
Aliases: []string{"machine"},
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if err := csConfig.LoadAPIServer(); err != nil || csConfig.DisableAPI {
if err != nil {
log.Errorf("local api : %s", err)
}
return fmt.Errorf("local API is disabled, please run this command on the local API machine")
}
return nil
},
}
cmdMachines.AddCommand(NewMachinesListCmd())
cmdMachines.AddCommand(NewMachinesAddCmd())
cmdMachines.AddCommand(NewMachinesDeleteCmd())
cmdMachines.AddCommand(NewMachinesValidateCmd())
return cmdMachines
} }


@@ -5,9 +5,9 @@ import (
	"time"

	"github.com/aquasecurity/table"

	"github.com/crowdsecurity/crowdsec/pkg/database/ent"
	"github.com/crowdsecurity/crowdsec/pkg/emoji"
)

func getAgentsTable(out io.Writer, machines []*ent.Machine) {

@@ -17,14 +17,17 @@ func getAgentsTable(out io.Writer, machines []*ent.Machine) {
	t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)

	for _, m := range machines {
		validated := emoji.Prohibited
		if m.IsValidated {
			validated = emoji.CheckMark
		}

		hb, active := getLastHeartbeat(m)
		if !active {
			hb = emoji.Warning + " " + hb
		}

		t.AddRow(m.MachineId, m.IpAddress, m.UpdatedAt.Format(time.RFC3339), validated, m.Version, m.AuthType, hb)
	}

	t.Render()


@@ -3,67 +3,111 @@ package main

import (
	"fmt"
	"os"
	"path/filepath"
	"slices"
	"time"

	"github.com/fatih/color"
	cc "github.com/ivanpirog/coloredcobra"
	log "github.com/sirupsen/logrus"
	"github.com/spf13/cobra"

	"github.com/crowdsecurity/go-cs-lib/trace"

	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
	"github.com/crowdsecurity/crowdsec/pkg/database"
	"github.com/crowdsecurity/crowdsec/pkg/fflag"
)
var (
	ConfigFilePath string
	csConfig       *csconfig.Config
	dbClient       *database.Client
)

type configGetter func() *csconfig.Config

var mergedConfig string

type cliRoot struct {
	logTrace     bool
	logDebug     bool
	logInfo      bool
	logWarn      bool
	logErr       bool
	outputColor  string
	outputFormat string
	// flagBranch overrides the value in csConfig.Cscli.HubBranch
	flagBranch string
}

func newCliRoot() *cliRoot {
	return &cliRoot{}
}

// cfg() is a helper function to get the configuration loaded from config.yaml,
// we pass it to subcommands because the file is not read until the Execute() call
func (cli *cliRoot) cfg() *csconfig.Config {
	return csConfig
}
// wantedLogLevel returns the log level requested in the command line flags.
func (cli *cliRoot) wantedLogLevel() log.Level {
switch {
case cli.logTrace:
return log.TraceLevel
case cli.logDebug:
return log.DebugLevel
case cli.logInfo:
return log.InfoLevel
case cli.logWarn:
return log.WarnLevel
case cli.logErr:
return log.ErrorLevel
default:
return log.InfoLevel
}
}
// loadConfigFor loads the configuration file for the given sub-command.
// If the sub-command does not need it, it returns a default configuration.
func loadConfigFor(command string) (*csconfig.Config, string, error) {
noNeedConfig := []string{
"doc",
"help",
"completion",
"version",
"hubtest",
} }
	if !slices.Contains(noNeedConfig, command) {
		log.Debugf("Using %s as configuration file", ConfigFilePath)

		config, merged, err := csconfig.NewConfig(ConfigFilePath, false, false, true)
		if err != nil {
			return nil, "", err
		}

		// set up directory for trace files
		if err := trace.Init(filepath.Join(config.ConfigPaths.DataDir, "trace")); err != nil {
			return nil, "", fmt.Errorf("while setting up trace directory: %w", err)
		}

		return config, merged, nil
	}

	return csconfig.NewDefaultConfig(), "", nil
}

// initialize is called before the subcommand is executed.
func (cli *cliRoot) initialize() {
	var err error

	log.SetLevel(cli.wantedLogLevel())

	csConfig, mergedConfig, err = loadConfigFor(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
// recap of the enabled feature flags, because logging // recap of the enabled feature flags, because logging
@@ -72,22 +116,22 @@ func initConfig() {
		log.Debugf("Enabled feature flags: %s", fflist)
	}

	if cli.flagBranch != "" {
		csConfig.Cscli.HubBranch = cli.flagBranch
	}

	if cli.outputFormat != "" {
		csConfig.Cscli.Output = cli.outputFormat
	}

	if csConfig.Cscli.Output == "" {
		csConfig.Cscli.Output = "human"
	}

	if csConfig.Cscli.Output != "human" && csConfig.Cscli.Output != "json" && csConfig.Cscli.Output != "raw" {
		log.Fatalf("output format '%s' not supported: must be one of human, json, raw", csConfig.Cscli.Output)
	}

	if csConfig.Cscli.Output == "json" {
		log.SetFormatter(&log.JSONFormatter{})
		log.SetLevel(log.ErrorLevel)

@@ -95,47 +139,44 @@ func initConfig() {
		log.SetLevel(log.ErrorLevel)
	}

	if cli.outputColor != "" {
		csConfig.Cscli.Color = cli.outputColor

		if cli.outputColor != "yes" && cli.outputColor != "no" && cli.outputColor != "auto" {
			log.Fatalf("output color %s unknown", cli.outputColor)
		}
	}
}
// list of valid subcommands for the shell completion
var validArgs = []string{
	"alerts", "appsec-configs", "appsec-rules", "bouncers", "capi", "collections",
	"completion", "config", "console", "contexts", "dashboard", "decisions", "explain",
	"hub", "hubtest", "lapi", "machines", "metrics", "notifications", "parsers",
	"postoverflows", "scenarios", "simulation", "support", "version",
}

func (cli *cliRoot) colorize(cmd *cobra.Command) {
	cc.Init(&cc.Config{
		RootCmd:         cmd,
		Headings:        cc.Yellow,
		Commands:        cc.Green + cc.Bold,
		CmdShortDescr:   cc.Cyan,
		Example:         cc.Italic,
		ExecName:        cc.Bold,
		Aliases:         cc.Bold + cc.Italic,
		FlagsDataType:   cc.White,
		Flags:           cc.Green,
		FlagsDescr:      cc.Cyan,
		NoExtraNewlines: true,
		NoBottomNewline: true,
	})
	cmd.SetOut(color.Output)
}
func linkHandler(name string) string { func (cli *cliRoot) NewCommand() *cobra.Command {
return fmt.Sprintf("/cscli/%s", name)
}
var (
NoNeedConfig = []string{
"help",
"completion",
"version",
"hubtest",
}
)
func main() {
// set the formatter asap and worry about level later // set the formatter asap and worry about level later
logFormatter := &log.TextFormatter{TimestampFormat: "02-01-2006 15:04:05", FullTimestamp: true} logFormatter := &log.TextFormatter{TimestampFormat: time.RFC3339, FullTimestamp: true}
log.SetFormatter(logFormatter) log.SetFormatter(logFormatter)
if err := fflag.RegisterAllFeatures(); err != nil { if err := fflag.RegisterAllFeatures(); err != nil {
@ -146,7 +187,7 @@ func main() {
log.Fatalf("failed to set feature flags from env: %s", err) log.Fatalf("failed to set feature flags from env: %s", err)
} }
var rootCmd = &cobra.Command{ cmd := &cobra.Command{
Use: "cscli", Use: "cscli",
Short: "cscli allows you to manage crowdsec", Short: "cscli allows you to manage crowdsec",
Long: `cscli is the main command to interact with your crowdsec service, scenarios & db. Long: `cscli is the main command to interact with your crowdsec service, scenarios & db.
@ -158,57 +199,25 @@ It is meant to allow you to manage bans, parsers/scenarios/etc, api and generall
/*TBD examples*/ /*TBD examples*/
} }
cc.Init(&cc.Config{ cli.colorize(cmd)
RootCmd: rootCmd,
Headings: cc.Yellow,
Commands: cc.Green + cc.Bold,
CmdShortDescr: cc.Cyan,
Example: cc.Italic,
ExecName: cc.Bold,
Aliases: cc.Bold + cc.Italic,
FlagsDataType: cc.White,
Flags: cc.Green,
FlagsDescr: cc.Cyan,
})
rootCmd.SetOut(color.Output)
var cmdDocGen = &cobra.Command{ /*don't sort flags so we can enforce order*/
Use: "doc", cmd.Flags().SortFlags = false
Short: "Generate the documentation in `./doc/`. Directory must exist.",
Args: cobra.ExactArgs(0),
Hidden: true,
DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error {
if err := doc.GenMarkdownTreeCustom(rootCmd, "./doc/", prepender, linkHandler); err != nil {
return fmt.Errorf("Failed to generate cobra doc: %s", err)
}
return nil
},
}
rootCmd.AddCommand(cmdDocGen)
/*usage*/
var cmdVersion = &cobra.Command{
Use: "version",
Short: "Display version and exit.",
Args: cobra.ExactArgs(0),
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
cwversion.Show()
},
}
rootCmd.AddCommand(cmdVersion)
rootCmd.PersistentFlags().StringVarP(&ConfigFilePath, "config", "c", csconfig.DefaultConfigPath("config.yaml"), "path to crowdsec config file") pflags := cmd.PersistentFlags()
rootCmd.PersistentFlags().StringVarP(&OutputFormat, "output", "o", "", "Output format: human, json, raw.") pflags.SortFlags = false
rootCmd.PersistentFlags().StringVarP(&OutputColor, "color", "", "auto", "Output color: yes, no, auto.")
rootCmd.PersistentFlags().BoolVar(&dbg_lvl, "debug", false, "Set logging to debug.")
rootCmd.PersistentFlags().BoolVar(&nfo_lvl, "info", false, "Set logging to info.")
rootCmd.PersistentFlags().BoolVar(&wrn_lvl, "warning", false, "Set logging to warning.")
rootCmd.PersistentFlags().BoolVar(&err_lvl, "error", false, "Set logging to error.")
rootCmd.PersistentFlags().BoolVar(&trace_lvl, "trace", false, "Set logging to trace.")
rootCmd.PersistentFlags().StringVar(&cwhub.HubBranch, "branch", "", "Override hub branch on github") pflags.StringVarP(&ConfigFilePath, "config", "c", csconfig.DefaultConfigPath("config.yaml"), "path to crowdsec config file")
if err := rootCmd.PersistentFlags().MarkHidden("branch"); err != nil { pflags.StringVarP(&cli.outputFormat, "output", "o", "", "Output format: human, json, raw")
pflags.StringVarP(&cli.outputColor, "color", "", "auto", "Output color: yes, no, auto")
pflags.BoolVar(&cli.logDebug, "debug", false, "Set logging to debug")
pflags.BoolVar(&cli.logInfo, "info", false, "Set logging to info")
pflags.BoolVar(&cli.logWarn, "warning", false, "Set logging to warning")
pflags.BoolVar(&cli.logErr, "error", false, "Set logging to error")
pflags.BoolVar(&cli.logTrace, "trace", false, "Set logging to trace")
pflags.StringVar(&cli.flagBranch, "branch", "", "Override hub branch on github")
if err := pflags.MarkHidden("branch"); err != nil {
log.Fatalf("failed to hide flag: %s", err) log.Fatalf("failed to hide flag: %s", err)
} }
@@ -228,45 +237,47 @@ It is meant to allow you to manage bans, parsers/scenarios/etc, api and generall
} }
if len(os.Args) > 1 { if len(os.Args) > 1 {
cobra.OnInitialize(initConfig) cobra.OnInitialize(cli.initialize)
} }
/*don't sort flags so we can enforce order*/ cmd.AddCommand(NewCLIDoc().NewCommand(cmd))
rootCmd.Flags().SortFlags = false cmd.AddCommand(NewCLIVersion().NewCommand())
rootCmd.PersistentFlags().SortFlags = false cmd.AddCommand(NewCLIConfig(cli.cfg).NewCommand())
cmd.AddCommand(NewCLIHub(cli.cfg).NewCommand())
rootCmd.AddCommand(NewConfigCmd()) cmd.AddCommand(NewCLIMetrics(cli.cfg).NewCommand())
rootCmd.AddCommand(NewHubCmd()) cmd.AddCommand(NewCLIDashboard(cli.cfg).NewCommand())
rootCmd.AddCommand(NewMetricsCmd()) cmd.AddCommand(NewCLIDecisions(cli.cfg).NewCommand())
rootCmd.AddCommand(NewDashboardCmd()) cmd.AddCommand(NewCLIAlerts(cli.cfg).NewCommand())
rootCmd.AddCommand(NewDecisionsCmd()) cmd.AddCommand(NewCLISimulation(cli.cfg).NewCommand())
rootCmd.AddCommand(NewAlertsCmd()) cmd.AddCommand(NewCLIBouncers(cli.cfg).NewCommand())
rootCmd.AddCommand(NewSimulationCmds()) cmd.AddCommand(NewCLIMachines(cli.cfg).NewCommand())
rootCmd.AddCommand(NewBouncersCmd()) cmd.AddCommand(NewCLICapi(cli.cfg).NewCommand())
rootCmd.AddCommand(NewMachinesCmd()) cmd.AddCommand(NewCLILapi(cli.cfg).NewCommand())
rootCmd.AddCommand(NewParsersCmd()) cmd.AddCommand(NewCompletionCmd())
rootCmd.AddCommand(NewScenariosCmd()) cmd.AddCommand(NewCLIConsole(cli.cfg).NewCommand())
rootCmd.AddCommand(NewCollectionsCmd()) cmd.AddCommand(NewCLIExplain(cli.cfg).NewCommand())
rootCmd.AddCommand(NewPostOverflowsCmd()) cmd.AddCommand(NewCLIHubTest(cli.cfg).NewCommand())
rootCmd.AddCommand(NewCapiCmd()) cmd.AddCommand(NewCLINotifications(cli.cfg).NewCommand())
rootCmd.AddCommand(NewLapiCmd()) cmd.AddCommand(NewCLISupport().NewCommand())
rootCmd.AddCommand(NewCompletionCmd()) cmd.AddCommand(NewCLIPapi(cli.cfg).NewCommand())
rootCmd.AddCommand(NewConsoleCmd()) cmd.AddCommand(NewCLICollection(cli.cfg).NewCommand())
rootCmd.AddCommand(NewExplainCmd()) cmd.AddCommand(NewCLIParser(cli.cfg).NewCommand())
rootCmd.AddCommand(NewHubTestCmd()) cmd.AddCommand(NewCLIScenario(cli.cfg).NewCommand())
rootCmd.AddCommand(NewNotificationsCmd()) cmd.AddCommand(NewCLIPostOverflow(cli.cfg).NewCommand())
rootCmd.AddCommand(NewSupportCmd()) cmd.AddCommand(NewCLIContext(cli.cfg).NewCommand())
rootCmd.AddCommand(NewWafRulesCmd()) cmd.AddCommand(NewCLIAppsecConfig(cli.cfg).NewCommand())
cmd.AddCommand(NewCLIAppsecRule(cli.cfg).NewCommand())
if fflag.CscliSetup.IsEnabled() { if fflag.CscliSetup.IsEnabled() {
rootCmd.AddCommand(NewSetupCmd()) cmd.AddCommand(NewSetupCmd())
} }
if fflag.PapiClient.IsEnabled() { return cmd
rootCmd.AddCommand(NewPapiCmd()) }
}
if err := rootCmd.Execute(); err != nil { func main() {
cmd := newCliRoot().NewCommand()
if err := cmd.Execute(); err != nil {
log.Fatal(err) log.Fatal(err)
} }
} }
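Note on the pattern above: the refactor drops the package-level rootCmd and its flag globals in favor of a cliRoot value whose subcommand constructors each take a configGetter, so configuration is resolved lazily at run time instead of being read from globals. The sketch below is illustrative only; the configGetter signature is an assumption (it is not shown in this hunk) and cliExample is a hypothetical subcommand, not part of the change.

    package main

    import (
        "github.com/spf13/cobra"

        "github.com/crowdsecurity/crowdsec/pkg/csconfig"
    )

    // Assumed: configGetter returns the configuration loaded by initialize().
    type configGetter func() *csconfig.Config

    type cliExample struct {
        cfg configGetter
    }

    func NewCLIExample(cfg configGetter) *cliExample {
        return &cliExample{cfg: cfg}
    }

    func (cli *cliExample) NewCommand() *cobra.Command {
        return &cobra.Command{
            Use:               "example",
            Short:             "hypothetical subcommand showing the constructor pattern",
            DisableAutoGenTag: true,
            RunE: func(_ *cobra.Command, _ []string) error {
                cfg := cli.cfg() // resolved after cobra.OnInitialize has run
                _ = cfg
                return nil
            },
        }
    }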


@@ -2,6 +2,7 @@ package main
import ( import (
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"io" "io"
"net/http" "net/http"
@@ -14,13 +15,65 @@ import (
"github.com/prometheus/prom2json" "github.com/prometheus/prom2json"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"gopkg.in/yaml.v2" "gopkg.in/yaml.v3"
"github.com/crowdsecurity/go-cs-lib/pkg/trace" "github.com/crowdsecurity/go-cs-lib/maptools"
"github.com/crowdsecurity/go-cs-lib/trace"
) )
// FormatPrometheusMetrics is a complete rip from prom2json type (
func FormatPrometheusMetrics(out io.Writer, url string, formatType string) error { statAcquis map[string]map[string]int
statParser map[string]map[string]int
statBucket map[string]map[string]int
statWhitelist map[string]map[string]map[string]int
statLapi map[string]map[string]int
statLapiMachine map[string]map[string]map[string]int
statLapiBouncer map[string]map[string]map[string]int
statLapiDecision map[string]struct {
NonEmpty int
Empty int
}
statDecision map[string]map[string]map[string]int
statAppsecEngine map[string]map[string]int
statAppsecRule map[string]map[string]map[string]int
statAlert map[string]int
statStash map[string]struct {
Type string
Count int
}
)
var (
ErrMissingConfig = errors.New("prometheus section missing, can't show metrics")
ErrMetricsDisabled = errors.New("prometheus is not enabled, can't show metrics")
)
type metricSection interface {
Table(out io.Writer, noUnit bool, showEmpty bool)
Description() (string, string)
}
type metricStore map[string]metricSection
func NewMetricStore() metricStore {
return metricStore{
"acquisition": statAcquis{},
"scenarios": statBucket{},
"parsers": statParser{},
"lapi": statLapi{},
"lapi-machine": statLapiMachine{},
"lapi-bouncer": statLapiBouncer{},
"lapi-decisions": statLapiDecision{},
"decisions": statDecision{},
"alerts": statAlert{},
"stash": statStash{},
"appsec-engine": statAppsecEngine{},
"appsec-rule": statAppsecRule{},
"whitelists": statWhitelist{},
}
}
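Each entry registered above satisfies the metricSection interface (Table and Description); the concrete stat* types additionally carry a Process method that is called while walking the Prometheus families. Purely as an illustration of that contract, a hypothetical extra section could look like the sketch below (statExample is not part of this change):

    // statExample is a hypothetical section: name -> metric -> count.
    type statExample map[string]map[string]int

    func (s statExample) Description() (string, string) {
        return "Example Metrics",
            `Hypothetical section shown only to illustrate the metricSection contract.`
    }

    func (s statExample) Process(name, metric string, val int) {
        if _, ok := s[name]; !ok {
            s[name] = make(map[string]int)
        }
        s[name][metric] += val
    }

    func (s statExample) Table(out io.Writer, noUnit bool, showEmpty bool) {
        // a real section would render itself with metricsToTable,
        // as the implementations in the next file do
    }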
func (ms metricStore) Fetch(url string) error {
mfChan := make(chan *dto.MetricFamily, 1024) mfChan := make(chan *dto.MetricFamily, 1024)
errChan := make(chan error, 1) errChan := make(chan error, 1)
@@ -33,9 +86,10 @@ func FormatPrometheusMetrics(out io.Writer, url string, formatType string) error
transport.ResponseHeaderTimeout = time.Minute transport.ResponseHeaderTimeout = time.Minute
go func() { go func() {
defer trace.CatchPanic("crowdsec/ShowPrometheus") defer trace.CatchPanic("crowdsec/ShowPrometheus")
err := prom2json.FetchMetricFamilies(url, mfChan, transport) err := prom2json.FetchMetricFamilies(url, mfChan, transport)
if err != nil { if err != nil {
errChan <- fmt.Errorf("failed to fetch prometheus metrics: %w", err) errChan <- fmt.Errorf("failed to fetch metrics: %w", err)
return return
} }
errChan <- nil errChan <- nil
@@ -50,40 +104,42 @@ func FormatPrometheusMetrics(out io.Writer, url string, formatType string) error
return err return err
} }
log.Debugf("Finished reading prometheus output, %d entries", len(result)) log.Debugf("Finished reading metrics output, %d entries", len(result))
/*walk*/ /*walk*/
lapi_decisions_stats := map[string]struct {
NonEmpty int mAcquis := ms["acquisition"].(statAcquis)
Empty int mParser := ms["parsers"].(statParser)
}{} mBucket := ms["scenarios"].(statBucket)
acquis_stats := map[string]map[string]int{} mLapi := ms["lapi"].(statLapi)
parsers_stats := map[string]map[string]int{} mLapiMachine := ms["lapi-machine"].(statLapiMachine)
buckets_stats := map[string]map[string]int{} mLapiBouncer := ms["lapi-bouncer"].(statLapiBouncer)
lapi_stats := map[string]map[string]int{} mLapiDecision := ms["lapi-decisions"].(statLapiDecision)
lapi_machine_stats := map[string]map[string]map[string]int{} mDecision := ms["decisions"].(statDecision)
lapi_bouncer_stats := map[string]map[string]map[string]int{} mAppsecEngine := ms["appsec-engine"].(statAppsecEngine)
decisions_stats := map[string]map[string]map[string]int{} mAppsecRule := ms["appsec-rule"].(statAppsecRule)
alerts_stats := map[string]int{} mAlert := ms["alerts"].(statAlert)
stash_stats := map[string]struct { mStash := ms["stash"].(statStash)
Type string mWhitelist := ms["whitelists"].(statWhitelist)
Count int
}{}
for idx, fam := range result { for idx, fam := range result {
if !strings.HasPrefix(fam.Name, "cs_") { if !strings.HasPrefix(fam.Name, "cs_") {
continue continue
} }
log.Tracef("round %d", idx) log.Tracef("round %d", idx)
for _, m := range fam.Metrics { for _, m := range fam.Metrics {
metric, ok := m.(prom2json.Metric) metric, ok := m.(prom2json.Metric)
if !ok { if !ok {
log.Debugf("failed to convert metric to prom2json.Metric") log.Debugf("failed to convert metric to prom2json.Metric")
continue continue
} }
name, ok := metric.Labels["name"] name, ok := metric.Labels["name"]
if !ok { if !ok {
log.Debugf("no name in Metric %v", metric.Labels) log.Debugf("no name in Metric %v", metric.Labels)
} }
source, ok := metric.Labels["source"] source, ok := metric.Labels["source"]
if !ok { if !ok {
log.Debugf("no source in Metric %v for %s", metric.Labels, fam.Name) log.Debugf("no source in Metric %v for %s", metric.Labels, fam.Name)
@@ -104,212 +160,338 @@ func FormatPrometheusMetrics(out io.Writer, url string, formatType string) error
origin := metric.Labels["origin"] origin := metric.Labels["origin"]
action := metric.Labels["action"] action := metric.Labels["action"]
appsecEngine := metric.Labels["appsec_engine"]
appsecRule := metric.Labels["rule_name"]
mtype := metric.Labels["type"] mtype := metric.Labels["type"]
fval, err := strconv.ParseFloat(value, 32) fval, err := strconv.ParseFloat(value, 32)
if err != nil { if err != nil {
log.Errorf("Unexpected int value %s : %s", value, err) log.Errorf("Unexpected int value %s : %s", value, err)
} }
ival := int(fval) ival := int(fval)
switch fam.Name { switch fam.Name {
/*buckets*/ //
// buckets
//
case "cs_bucket_created_total": case "cs_bucket_created_total":
if _, ok := buckets_stats[name]; !ok { mBucket.Process(name, "instantiation", ival)
buckets_stats[name] = make(map[string]int)
}
buckets_stats[name]["instantiation"] += ival
case "cs_buckets": case "cs_buckets":
if _, ok := buckets_stats[name]; !ok { mBucket.Process(name, "curr_count", ival)
buckets_stats[name] = make(map[string]int)
}
buckets_stats[name]["curr_count"] += ival
case "cs_bucket_overflowed_total": case "cs_bucket_overflowed_total":
if _, ok := buckets_stats[name]; !ok { mBucket.Process(name, "overflow", ival)
buckets_stats[name] = make(map[string]int)
}
buckets_stats[name]["overflow"] += ival
case "cs_bucket_poured_total": case "cs_bucket_poured_total":
if _, ok := buckets_stats[name]; !ok { mBucket.Process(name, "pour", ival)
buckets_stats[name] = make(map[string]int) mAcquis.Process(source, "pour", ival)
}
if _, ok := acquis_stats[source]; !ok {
acquis_stats[source] = make(map[string]int)
}
buckets_stats[name]["pour"] += ival
acquis_stats[source]["pour"] += ival
case "cs_bucket_underflowed_total": case "cs_bucket_underflowed_total":
if _, ok := buckets_stats[name]; !ok { mBucket.Process(name, "underflow", ival)
buckets_stats[name] = make(map[string]int) //
} // parsers
buckets_stats[name]["underflow"] += ival //
/*acquis*/
case "cs_parser_hits_total": case "cs_parser_hits_total":
if _, ok := acquis_stats[source]; !ok { mAcquis.Process(source, "reads", ival)
acquis_stats[source] = make(map[string]int)
}
acquis_stats[source]["reads"] += ival
case "cs_parser_hits_ok_total": case "cs_parser_hits_ok_total":
if _, ok := acquis_stats[source]; !ok { mAcquis.Process(source, "parsed", ival)
acquis_stats[source] = make(map[string]int)
}
acquis_stats[source]["parsed"] += ival
case "cs_parser_hits_ko_total": case "cs_parser_hits_ko_total":
if _, ok := acquis_stats[source]; !ok { mAcquis.Process(source, "unparsed", ival)
acquis_stats[source] = make(map[string]int)
}
acquis_stats[source]["unparsed"] += ival
case "cs_node_hits_total": case "cs_node_hits_total":
if _, ok := parsers_stats[name]; !ok { mParser.Process(name, "hits", ival)
parsers_stats[name] = make(map[string]int)
}
parsers_stats[name]["hits"] += ival
case "cs_node_hits_ok_total": case "cs_node_hits_ok_total":
if _, ok := parsers_stats[name]; !ok { mParser.Process(name, "parsed", ival)
parsers_stats[name] = make(map[string]int)
}
parsers_stats[name]["parsed"] += ival
case "cs_node_hits_ko_total": case "cs_node_hits_ko_total":
if _, ok := parsers_stats[name]; !ok { mParser.Process(name, "unparsed", ival)
parsers_stats[name] = make(map[string]int) //
} // whitelists
parsers_stats[name]["unparsed"] += ival //
case "cs_node_wl_hits_total":
mWhitelist.Process(name, reason, "hits", ival)
case "cs_node_wl_hits_ok_total":
mWhitelist.Process(name, reason, "whitelisted", ival)
// track as well whitelisted lines at acquis level
mAcquis.Process(source, "whitelisted", ival)
//
// lapi
//
case "cs_lapi_route_requests_total": case "cs_lapi_route_requests_total":
if _, ok := lapi_stats[route]; !ok { mLapi.Process(route, method, ival)
lapi_stats[route] = make(map[string]int)
}
lapi_stats[route][method] += ival
case "cs_lapi_machine_requests_total": case "cs_lapi_machine_requests_total":
if _, ok := lapi_machine_stats[machine]; !ok { mLapiMachine.Process(machine, route, method, ival)
lapi_machine_stats[machine] = make(map[string]map[string]int)
}
if _, ok := lapi_machine_stats[machine][route]; !ok {
lapi_machine_stats[machine][route] = make(map[string]int)
}
lapi_machine_stats[machine][route][method] += ival
case "cs_lapi_bouncer_requests_total": case "cs_lapi_bouncer_requests_total":
if _, ok := lapi_bouncer_stats[bouncer]; !ok { mLapiBouncer.Process(bouncer, route, method, ival)
lapi_bouncer_stats[bouncer] = make(map[string]map[string]int)
}
if _, ok := lapi_bouncer_stats[bouncer][route]; !ok {
lapi_bouncer_stats[bouncer][route] = make(map[string]int)
}
lapi_bouncer_stats[bouncer][route][method] += ival
case "cs_lapi_decisions_ko_total", "cs_lapi_decisions_ok_total": case "cs_lapi_decisions_ko_total", "cs_lapi_decisions_ok_total":
if _, ok := lapi_decisions_stats[bouncer]; !ok { mLapiDecision.Process(bouncer, fam.Name, ival)
lapi_decisions_stats[bouncer] = struct { //
NonEmpty int // decisions
Empty int //
}{}
}
x := lapi_decisions_stats[bouncer]
if fam.Name == "cs_lapi_decisions_ko_total" {
x.Empty += ival
} else if fam.Name == "cs_lapi_decisions_ok_total" {
x.NonEmpty += ival
}
lapi_decisions_stats[bouncer] = x
case "cs_active_decisions": case "cs_active_decisions":
if _, ok := decisions_stats[reason]; !ok { mDecision.Process(reason, origin, action, ival)
decisions_stats[reason] = make(map[string]map[string]int)
}
if _, ok := decisions_stats[reason][origin]; !ok {
decisions_stats[reason][origin] = make(map[string]int)
}
decisions_stats[reason][origin][action] += ival
case "cs_alerts": case "cs_alerts":
/*if _, ok := alerts_stats[scenario]; !ok { mAlert.Process(reason, ival)
alerts_stats[scenario] = make(map[string]int) //
}*/ // stash
alerts_stats[reason] += ival //
case "cs_cache_size": case "cs_cache_size":
stash_stats[name] = struct { mStash.Process(name, mtype, ival)
Type string //
Count int // appsec
}{Type: mtype, Count: ival} //
case "cs_appsec_reqs_total":
mAppsecEngine.Process(appsecEngine, "processed", ival)
case "cs_appsec_block_total":
mAppsecEngine.Process(appsecEngine, "blocked", ival)
case "cs_appsec_rule_hits":
mAppsecRule.Process(appsecEngine, appsecRule, "triggered", ival)
default: default:
log.Debugf("unknown: %+v", fam.Name)
continue continue
} }
} }
} }
if formatType == "human" {
acquisStatsTable(out, acquis_stats)
bucketStatsTable(out, buckets_stats)
parserStatsTable(out, parsers_stats)
lapiStatsTable(out, lapi_stats)
lapiMachineStatsTable(out, lapi_machine_stats)
lapiBouncerStatsTable(out, lapi_bouncer_stats)
lapiDecisionStatsTable(out, lapi_decisions_stats)
decisionStatsTable(out, decisions_stats)
alertStatsTable(out, alerts_stats)
stashStatsTable(out, stash_stats)
} else if formatType == "json" {
for _, val := range []interface{}{acquis_stats, parsers_stats, buckets_stats, lapi_stats, lapi_bouncer_stats, lapi_machine_stats, lapi_decisions_stats, decisions_stats, alerts_stats, stash_stats} {
x, err := json.MarshalIndent(val, "", " ")
if err != nil {
return fmt.Errorf("failed to unmarshal metrics : %v", err)
}
out.Write(x)
}
return nil
} else if formatType == "raw" {
for _, val := range []interface{}{acquis_stats, parsers_stats, buckets_stats, lapi_stats, lapi_bouncer_stats, lapi_machine_stats, lapi_decisions_stats, decisions_stats, alerts_stats, stash_stats} {
x, err := yaml.Marshal(val)
if err != nil {
return fmt.Errorf("failed to unmarshal metrics : %v", err)
}
out.Write(x)
}
return nil
}
return nil return nil
} }
var noUnit bool type cliMetrics struct {
cfg configGetter
}
func NewCLIMetrics(cfg configGetter) *cliMetrics {
return &cliMetrics{
cfg: cfg,
}
}
func runMetrics(cmd *cobra.Command, args []string) error { func (ms metricStore) Format(out io.Writer, sections []string, formatType string, noUnit bool) error {
if err := csConfig.LoadPrometheus(); err != nil { // copy only the sections we want
return fmt.Errorf("failed to load prometheus config: %w", err) want := map[string]metricSection{}
// if explicitly asking for sections, we want to show empty tables
showEmpty := len(sections) > 0
// if no sections are specified, we want all of them
if len(sections) == 0 {
sections = maptools.SortedKeys(ms)
} }
if csConfig.Prometheus == nil { for _, section := range sections {
return fmt.Errorf("prometheus section missing, can't show metrics") want[section] = ms[section]
} }
if !csConfig.Prometheus.Enabled { switch formatType {
return fmt.Errorf("prometheus is not enabled, can't show metrics") case "human":
for _, section := range maptools.SortedKeys(want) {
want[section].Table(out, noUnit, showEmpty)
}
case "json":
x, err := json.MarshalIndent(want, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal metrics: %w", err)
}
out.Write(x)
case "raw":
x, err := yaml.Marshal(want)
if err != nil {
return fmt.Errorf("failed to marshal metrics: %w", err)
}
out.Write(x)
default:
return fmt.Errorf("unknown format type %s", formatType)
} }
if prometheusURL == "" {
prometheusURL = csConfig.Cscli.PrometheusUrl
}
if prometheusURL == "" {
return fmt.Errorf("no prometheus url, please specify in %s or via -u", *csConfig.FilePath)
}
err := FormatPrometheusMetrics(color.Output, prometheusURL+"/metrics", csConfig.Cscli.Output)
if err != nil {
return fmt.Errorf("could not fetch prometheus metrics: %w", err)
}
return nil return nil
} }
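Since Format only marshals the sections that were requested (all of them when the list is empty), a caller can render a subset directly. A minimal usage sketch, assuming the Prometheus endpoint from the examples above is reachable:

    ms := NewMetricStore()
    if err := ms.Fetch("http://lapi.local:6060/metrics"); err != nil {
        log.Fatal(err)
    }
    // two human-readable tables, raw numbers instead of unit-formatted ones
    if err := ms.Format(os.Stdout, []string{"acquisition", "parsers"}, "human", true); err != nil {
        log.Fatal(err)
    }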
func (cli *cliMetrics) show(sections []string, url string, noUnit bool) error {
cfg := cli.cfg()
func NewMetricsCmd() *cobra.Command { if url != "" {
cmdMetrics := &cobra.Command{ cfg.Cscli.PrometheusUrl = url
Use: "metrics", }
Short: "Display crowdsec prometheus metrics.",
Long: `Fetch metrics from the prometheus server and display them in a human-friendly way`, if cfg.Prometheus == nil {
return ErrMissingConfig
}
if !cfg.Prometheus.Enabled {
return ErrMetricsDisabled
}
ms := NewMetricStore()
if err := ms.Fetch(cfg.Cscli.PrometheusUrl); err != nil {
return err
}
// any section that we don't have in the store is an error
for _, section := range sections {
if _, ok := ms[section]; !ok {
return fmt.Errorf("unknown metrics type: %s", section)
}
}
if err := ms.Format(color.Output, sections, cfg.Cscli.Output, noUnit); err != nil {
return err
}
return nil
}
func (cli *cliMetrics) NewCommand() *cobra.Command {
var (
url string
noUnit bool
)
cmd := &cobra.Command{
Use: "metrics",
Short: "Display crowdsec prometheus metrics.",
Long: `Fetch metrics from a Local API server and display them`,
Example: `# Show all Metrics, skip empty tables (same as "cscli metrics show")
cscli metrics
# Show only some metrics, connect to a different url
cscli metrics --url http://lapi.local:6060/metrics show acquisition parsers
# List available metric types
cscli metrics list`,
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: runMetrics, RunE: func(_ *cobra.Command, _ []string) error {
return cli.show(nil, url, noUnit)
},
} }
cmdMetrics.PersistentFlags().StringVarP(&prometheusURL, "url", "u", "", "Prometheus url (http://<ip>:<port>/metrics)")
cmdMetrics.PersistentFlags().BoolVar(&noUnit, "no-unit", false, "Show the real number instead of formatted with units")
return cmdMetrics flags := cmd.Flags()
flags.StringVarP(&url, "url", "u", "", "Prometheus url (http://<ip>:<port>/metrics)")
flags.BoolVar(&noUnit, "no-unit", false, "Show the real number instead of formatted with units")
cmd.AddCommand(cli.newShowCmd())
cmd.AddCommand(cli.newListCmd())
return cmd
}
// expandAlias returns a list of sections. The input can be a list of sections or alias.
func (cli *cliMetrics) expandAlias(args []string) []string {
ret := []string{}
for _, section := range args {
switch section {
case "engine":
ret = append(ret, "acquisition", "parsers", "scenarios", "stash", "whitelists")
case "lapi":
ret = append(ret, "alerts", "decisions", "lapi", "lapi-bouncer", "lapi-decisions", "lapi-machine")
case "appsec":
ret = append(ret, "appsec-engine", "appsec-rule")
default:
ret = append(ret, section)
}
}
return ret
}
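With the table above, alias expansion behaves as follows (illustrative); anything that is not an alias is passed through unchanged and validated later by show():

    cscli metrics show engine
      -> sections: acquisition, parsers, scenarios, stash, whitelists
    cscli metrics show lapi appsec
      -> sections: alerts, decisions, lapi, lapi-bouncer, lapi-decisions, lapi-machine, appsec-engine, appsec-rule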
func (cli *cliMetrics) newShowCmd() *cobra.Command {
var (
url string
noUnit bool
)
cmd := &cobra.Command{
Use: "show [type]...",
Short: "Display all or part of the available metrics.",
Long: `Fetch metrics from a Local API server and display them, optionally filtering on specific types.`,
Example: `# Show all Metrics, skip empty tables
cscli metrics show
# Use an alias: "engine", "lapi" or "appsec" to show a group of metrics
cscli metrics show engine
# Show some specific metrics, show empty tables, connect to a different url
cscli metrics show acquisition parsers scenarios stash --url http://lapi.local:6060/metrics
# To list available metric types, use "cscli metrics list"
cscli metrics list; cscli metrics list -o json
# Show metrics in json format
cscli metrics show acquisition parsers scenarios stash -o json`,
// Positional args are optional
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, args []string) error {
args = cli.expandAlias(args)
return cli.show(args, url, noUnit)
},
}
flags := cmd.Flags()
flags.StringVarP(&url, "url", "u", "", "Metrics url (http://<ip>:<port>/metrics)")
flags.BoolVar(&noUnit, "no-unit", false, "Show the real number instead of formatted with units")
return cmd
}
func (cli *cliMetrics) list() error {
type metricType struct {
Type string `json:"type" yaml:"type"`
Title string `json:"title" yaml:"title"`
Description string `json:"description" yaml:"description"`
}
var allMetrics []metricType
ms := NewMetricStore()
for _, section := range maptools.SortedKeys(ms) {
title, description := ms[section].Description()
allMetrics = append(allMetrics, metricType{
Type: section,
Title: title,
Description: description,
})
}
switch cli.cfg().Cscli.Output {
case "human":
t := newTable(color.Output)
t.SetRowLines(true)
t.SetHeaders("Type", "Title", "Description")
for _, metric := range allMetrics {
t.AddRow(metric.Type, metric.Title, metric.Description)
}
t.Render()
case "json":
x, err := json.MarshalIndent(allMetrics, "", " ")
if err != nil {
return fmt.Errorf("failed to marshal metric types: %w", err)
}
fmt.Println(string(x))
case "raw":
x, err := yaml.Marshal(allMetrics)
if err != nil {
return fmt.Errorf("failed to marshal metric types: %w", err)
}
fmt.Println(string(x))
}
return nil
}
func (cli *cliMetrics) newListCmd() *cobra.Command {
cmd := &cobra.Command{
Use: "list",
Short: "List available types of metrics.",
Long: `List available types of metrics.`,
Args: cobra.ExactArgs(0),
DisableAutoGenTag: true,
RunE: func(_ *cobra.Command, _ []string) error {
return cli.list()
},
}
return cmd
} }


@@ -1,25 +1,33 @@
package main package main
import ( import (
"errors"
"fmt" "fmt"
"io" "io"
"sort" "sort"
"strconv"
"github.com/aquasecurity/table" "github.com/aquasecurity/table"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/crowdsecurity/go-cs-lib/maptools"
) )
// ErrNilTable means a nil pointer was passed instead of a table instance. This is a programming error.
var ErrNilTable = errors.New("nil table")
func lapiMetricsToTable(t *table.Table, stats map[string]map[string]map[string]int) int { func lapiMetricsToTable(t *table.Table, stats map[string]map[string]map[string]int) int {
// stats: machine -> route -> method -> count // stats: machine -> route -> method -> count
// sort keys to keep consistent order when printing // sort keys to keep consistent order when printing
machineKeys := []string{} machineKeys := []string{}
for k := range stats { for k := range stats {
machineKeys = append(machineKeys, k) machineKeys = append(machineKeys, k)
} }
sort.Strings(machineKeys) sort.Strings(machineKeys)
numRows := 0 numRows := 0
for _, machine := range machineKeys { for _, machine := range machineKeys {
// oneRow: route -> method -> count // oneRow: route -> method -> count
machineRow := stats[machine] machineRow := stats[machine]
@@ -31,41 +39,79 @@ func lapiMetricsToTable(t *table.Table, stats map[string]map[string]map[string]i
methodName, methodName,
} }
if count != 0 { if count != 0 {
row = append(row, fmt.Sprintf("%d", count)) row = append(row, strconv.Itoa(count))
} else { } else {
row = append(row, "-") row = append(row, "-")
} }
t.AddRow(row...) t.AddRow(row...)
numRows++ numRows++
} }
} }
} }
return numRows return numRows
} }
func metricsToTable(t *table.Table, stats map[string]map[string]int, keys []string) (int, error) { func wlMetricsToTable(t *table.Table, stats map[string]map[string]map[string]int, noUnit bool) (int, error) {
if t == nil { if t == nil {
return 0, fmt.Errorf("nil table") return 0, ErrNilTable
} }
// sort keys to keep consistent order when printing
sortedKeys := []string{}
for k := range stats {
sortedKeys = append(sortedKeys, k)
}
sort.Strings(sortedKeys)
numRows := 0 numRows := 0
for _, alabel := range sortedKeys {
for _, name := range maptools.SortedKeys(stats) {
for _, reason := range maptools.SortedKeys(stats[name]) {
row := []string{
name,
reason,
"-",
"-",
}
for _, action := range maptools.SortedKeys(stats[name][reason]) {
value := stats[name][reason][action]
switch action {
case "whitelisted":
row[3] = strconv.Itoa(value)
case "hits":
row[2] = strconv.Itoa(value)
default:
log.Debugf("unexpected counter '%s' for whitelists = %d", action, value)
}
}
t.AddRow(row...)
numRows++
}
}
return numRows, nil
}
func metricsToTable(t *table.Table, stats map[string]map[string]int, keys []string, noUnit bool) (int, error) {
if t == nil {
return 0, ErrNilTable
}
numRows := 0
for _, alabel := range maptools.SortedKeys(stats) {
astats, ok := stats[alabel] astats, ok := stats[alabel]
if !ok { if !ok {
continue continue
} }
row := []string{ row := []string{
alabel, alabel,
} }
for _, sl := range keys { for _, sl := range keys {
if v, ok := astats[sl]; ok && v != 0 { if v, ok := astats[sl]; ok && v != 0 {
numberToShow := fmt.Sprintf("%d", v) numberToShow := strconv.Itoa(v)
if !noUnit { if !noUnit {
numberToShow = formatNumber(v) numberToShow = formatNumber(v)
} }
@@ -75,45 +121,192 @@ func metricsToTable(t *table.Table, stats map[string]map[string]int, keys []stri
row = append(row, "-") row = append(row, "-")
} }
} }
t.AddRow(row...) t.AddRow(row...)
numRows++ numRows++
} }
return numRows, nil return numRows, nil
} }
func bucketStatsTable(out io.Writer, stats map[string]map[string]int) { func (s statBucket) Description() (string, string) {
return "Scenario Metrics",
`Measure events in different scenarios. Current count is the number of buckets during metrics collection. ` +
`Overflows are past event-producing buckets, while Expired are the ones that didn't receive enough events to Overflow.`
}
func (s statBucket) Process(bucket, metric string, val int) {
if _, ok := s[bucket]; !ok {
s[bucket] = make(map[string]int)
}
s[bucket][metric] += val
}
func (s statBucket) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
t.SetHeaders("Bucket", "Current Count", "Overflows", "Instantiated", "Poured", "Expired") t.SetHeaders("Scenario", "Current Count", "Overflows", "Instantiated", "Poured", "Expired")
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)
keys := []string{"curr_count", "overflow", "instantiation", "pour", "underflow"} keys := []string{"curr_count", "overflow", "instantiation", "pour", "underflow"}
if numRows, err := metricsToTable(t, stats, keys); err != nil { if numRows, err := metricsToTable(t, s, keys, noUnit); err != nil {
log.Warningf("while collecting acquis stats: %s", err) log.Warningf("while collecting scenario stats: %s", err)
} else if numRows > 0 { } else if numRows > 0 || showEmpty {
renderTableTitle(out, "\nBucket Metrics:") title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render() t.Render()
} }
} }
func acquisStatsTable(out io.Writer, stats map[string]map[string]int) { func (s statAcquis) Description() (string, string) {
return "Acquisition Metrics",
`Measures the lines read, parsed, and unparsed per datasource. ` +
`Zero read lines indicate a misconfigured or inactive datasource. ` +
`Zero parsed lines mean the parser(s) failed. ` +
`Non-zero parsed lines are fine as crowdsec selects relevant lines.`
}
func (s statAcquis) Process(source, metric string, val int) {
if _, ok := s[source]; !ok {
s[source] = make(map[string]int)
}
s[source][metric] += val
}
func (s statAcquis) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
t.SetHeaders("Source", "Lines read", "Lines parsed", "Lines unparsed", "Lines poured to bucket") t.SetHeaders("Source", "Lines read", "Lines parsed", "Lines unparsed", "Lines poured to bucket", "Lines whitelisted")
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)
keys := []string{"reads", "parsed", "unparsed", "pour"} keys := []string{"reads", "parsed", "unparsed", "pour", "whitelisted"}
if numRows, err := metricsToTable(t, stats, keys); err != nil { if numRows, err := metricsToTable(t, s, keys, noUnit); err != nil {
log.Warningf("while collecting acquis stats: %s", err) log.Warningf("while collecting acquis stats: %s", err)
} else if numRows > 0 { } else if numRows > 0 || showEmpty {
renderTableTitle(out, "\nAcquisition Metrics:") title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render() t.Render()
} }
} }
func parserStatsTable(out io.Writer, stats map[string]map[string]int) { func (s statAppsecEngine) Description() (string, string) {
return "Appsec Metrics",
`Measures the number of parsed and blocked requests by the AppSec Component.`
}
func (s statAppsecEngine) Process(appsecEngine, metric string, val int) {
if _, ok := s[appsecEngine]; !ok {
s[appsecEngine] = make(map[string]int)
}
s[appsecEngine][metric] += val
}
func (s statAppsecEngine) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out)
t.SetRowLines(false)
t.SetHeaders("Appsec Engine", "Processed", "Blocked")
t.SetAlignment(table.AlignLeft, table.AlignLeft)
keys := []string{"processed", "blocked"}
if numRows, err := metricsToTable(t, s, keys, noUnit); err != nil {
log.Warningf("while collecting appsec stats: %s", err)
} else if numRows > 0 || showEmpty {
title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render()
}
}
func (s statAppsecRule) Description() (string, string) {
return "Appsec Rule Metrics",
`Provides “per AppSec Component” information about the number of matches for loaded AppSec Rules.`
}
func (s statAppsecRule) Process(appsecEngine, appsecRule string, metric string, val int) {
if _, ok := s[appsecEngine]; !ok {
s[appsecEngine] = make(map[string]map[string]int)
}
if _, ok := s[appsecEngine][appsecRule]; !ok {
s[appsecEngine][appsecRule] = make(map[string]int)
}
s[appsecEngine][appsecRule][metric] += val
}
func (s statAppsecRule) Table(out io.Writer, noUnit bool, showEmpty bool) {
for appsecEngine, appsecEngineRulesStats := range s {
t := newTable(out)
t.SetRowLines(false)
t.SetHeaders("Rule ID", "Triggered")
t.SetAlignment(table.AlignLeft, table.AlignLeft)
keys := []string{"triggered"}
if numRows, err := metricsToTable(t, appsecEngineRulesStats, keys, noUnit); err != nil {
log.Warningf("while collecting appsec rules stats: %s", err)
} else if numRows > 0 || showEmpty {
renderTableTitle(out, fmt.Sprintf("\nAppsec '%s' Rules Metrics:", appsecEngine))
t.Render()
}
}
}
func (s statWhitelist) Description() (string, string) {
return "Whitelist Metrics",
`Tracks the number of events processed and possibly whitelisted by each parser whitelist.`
}
func (s statWhitelist) Process(whitelist, reason, metric string, val int) {
if _, ok := s[whitelist]; !ok {
s[whitelist] = make(map[string]map[string]int)
}
if _, ok := s[whitelist][reason]; !ok {
s[whitelist][reason] = make(map[string]int)
}
s[whitelist][reason][metric] += val
}
func (s statWhitelist) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out)
t.SetRowLines(false)
t.SetHeaders("Whitelist", "Reason", "Hits", "Whitelisted")
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)
if numRows, err := wlMetricsToTable(t, s, noUnit); err != nil {
log.Warningf("while collecting parsers stats: %s", err)
} else if numRows > 0 || showEmpty {
title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render()
}
}
func (s statParser) Description() (string, string) {
return "Parser Metrics",
`Tracks the number of events processed by each parser and indicates success or failure. ` +
`Zero parsed lines means the parser(s) failed. ` +
`Non-zero unparsed lines are fine as crowdsec selects relevant lines.`
}
func (s statParser) Process(parser, metric string, val int) {
if _, ok := s[parser]; !ok {
s[parser] = make(map[string]int)
}
s[parser][metric] += val
}
func (s statParser) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
t.SetHeaders("Parsers", "Hits", "Parsed", "Unparsed") t.SetHeaders("Parsers", "Hits", "Parsed", "Unparsed")
@@ -121,187 +314,302 @@ func parserStatsTable(out io.Writer, stats map[string]map[string]int) {
keys := []string{"hits", "parsed", "unparsed"} keys := []string{"hits", "parsed", "unparsed"}
if numRows, err := metricsToTable(t, stats, keys); err != nil { if numRows, err := metricsToTable(t, s, keys, noUnit); err != nil {
log.Warningf("while collecting acquis stats: %s", err) log.Warningf("while collecting parsers stats: %s", err)
} else if numRows > 0 { } else if numRows > 0 || showEmpty {
renderTableTitle(out, "\nParser Metrics:") title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render() t.Render()
} }
} }
func stashStatsTable(out io.Writer, stats map[string]struct { func (s statStash) Description() (string, string) {
Type string return "Parser Stash Metrics",
Count int `Tracks the status of stashes that might be created by various parsers and scenarios.`
}) { }
func (s statStash) Process(name, mtype string, val int) {
s[name] = struct {
Type string
Count int
}{
Type: mtype,
Count: val,
}
}
func (s statStash) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
t.SetHeaders("Name", "Type", "Items") t.SetHeaders("Name", "Type", "Items")
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft)
// unfortunately, we can't reuse metricsToTable as the structure is too different :/ // unfortunately, we can't reuse metricsToTable as the structure is too different :/
sortedKeys := []string{}
for k := range stats {
sortedKeys = append(sortedKeys, k)
}
sort.Strings(sortedKeys)
numRows := 0 numRows := 0
for _, alabel := range sortedKeys {
astats := stats[alabel] for _, alabel := range maptools.SortedKeys(s) {
astats := s[alabel]
row := []string{ row := []string{
alabel, alabel,
astats.Type, astats.Type,
fmt.Sprintf("%d", astats.Count), strconv.Itoa(astats.Count),
} }
t.AddRow(row...) t.AddRow(row...)
numRows++ numRows++
} }
if numRows > 0 {
renderTableTitle(out, "\nParser Stash Metrics:") if numRows > 0 || showEmpty {
title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render() t.Render()
} }
} }
func lapiStatsTable(out io.Writer, stats map[string]map[string]int) { func (s statLapi) Description() (string, string) {
return "Local API Metrics",
`Monitors the requests made to local API routes.`
}
func (s statLapi) Process(route, method string, val int) {
if _, ok := s[route]; !ok {
s[route] = make(map[string]int)
}
s[route][method] += val
}
func (s statLapi) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
t.SetHeaders("Route", "Method", "Hits") t.SetHeaders("Route", "Method", "Hits")
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft)
// unfortunately, we can't reuse metricsToTable as the structure is too different :/ // unfortunately, we can't reuse metricsToTable as the structure is too different :/
sortedKeys := []string{}
for k := range stats {
sortedKeys = append(sortedKeys, k)
}
sort.Strings(sortedKeys)
numRows := 0 numRows := 0
for _, alabel := range sortedKeys {
astats := stats[alabel] for _, alabel := range maptools.SortedKeys(s) {
astats := s[alabel]
subKeys := []string{} subKeys := []string{}
for skey := range astats { for skey := range astats {
subKeys = append(subKeys, skey) subKeys = append(subKeys, skey)
} }
sort.Strings(subKeys) sort.Strings(subKeys)
for _, sl := range subKeys { for _, sl := range subKeys {
row := []string{ row := []string{
alabel, alabel,
sl, sl,
fmt.Sprintf("%d", astats[sl]), strconv.Itoa(astats[sl]),
} }
t.AddRow(row...) t.AddRow(row...)
numRows++ numRows++
} }
} }
if numRows > 0 { if numRows > 0 || showEmpty {
renderTableTitle(out, "\nLocal Api Metrics:") title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render() t.Render()
} }
} }
func lapiMachineStatsTable(out io.Writer, stats map[string]map[string]map[string]int) { func (s statLapiMachine) Description() (string, string) {
return "Local API Machines Metrics",
`Tracks the number of calls to the local API from each registered machine.`
}
func (s statLapiMachine) Process(machine, route, method string, val int) {
if _, ok := s[machine]; !ok {
s[machine] = make(map[string]map[string]int)
}
if _, ok := s[machine][route]; !ok {
s[machine][route] = make(map[string]int)
}
s[machine][route][method] += val
}
func (s statLapiMachine) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
t.SetHeaders("Machine", "Route", "Method", "Hits") t.SetHeaders("Machine", "Route", "Method", "Hits")
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)
numRows := lapiMetricsToTable(t, stats) numRows := lapiMetricsToTable(t, s)
if numRows > 0 { if numRows > 0 || showEmpty {
renderTableTitle(out, "\nLocal Api Machines Metrics:") title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render() t.Render()
} }
} }
func lapiBouncerStatsTable(out io.Writer, stats map[string]map[string]map[string]int) { func (s statLapiBouncer) Description() (string, string) {
return "Local API Bouncers Metrics",
`Tracks total hits to remediation component related API routes.`
}
func (s statLapiBouncer) Process(bouncer, route, method string, val int) {
if _, ok := s[bouncer]; !ok {
s[bouncer] = make(map[string]map[string]int)
}
if _, ok := s[bouncer][route]; !ok {
s[bouncer][route] = make(map[string]int)
}
s[bouncer][route][method] += val
}
func (s statLapiBouncer) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
t.SetHeaders("Bouncer", "Route", "Method", "Hits") t.SetHeaders("Bouncer", "Route", "Method", "Hits")
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)
numRows := lapiMetricsToTable(t, stats) numRows := lapiMetricsToTable(t, s)
if numRows > 0 { if numRows > 0 || showEmpty {
renderTableTitle(out, "\nLocal Api Bouncers Metrics:") title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render() t.Render()
} }
} }
func lapiDecisionStatsTable(out io.Writer, stats map[string]struct { func (s statLapiDecision) Description() (string, string) {
NonEmpty int return "Local API Bouncers Decisions",
Empty int `Tracks the number of empty/non-empty answers from LAPI to bouncers that are working in "live" mode.`
}, }
) {
func (s statLapiDecision) Process(bouncer, fam string, val int) {
if _, ok := s[bouncer]; !ok {
s[bouncer] = struct {
NonEmpty int
Empty int
}{}
}
x := s[bouncer]
switch fam {
case "cs_lapi_decisions_ko_total":
x.Empty += val
case "cs_lapi_decisions_ok_total":
x.NonEmpty += val
}
s[bouncer] = x
}
func (s statLapiDecision) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
t.SetHeaders("Bouncer", "Empty answers", "Non-empty answers") t.SetHeaders("Bouncer", "Empty answers", "Non-empty answers")
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft)
numRows := 0 numRows := 0
for bouncer, hits := range stats {
for bouncer, hits := range s {
t.AddRow( t.AddRow(
bouncer, bouncer,
fmt.Sprintf("%d", hits.Empty), strconv.Itoa(hits.Empty),
fmt.Sprintf("%d", hits.NonEmpty), strconv.Itoa(hits.NonEmpty),
) )
numRows++ numRows++
} }
if numRows > 0 { if numRows > 0 || showEmpty {
renderTableTitle(out, "\nLocal Api Bouncers Decisions:") title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render() t.Render()
} }
} }
func decisionStatsTable(out io.Writer, stats map[string]map[string]map[string]int) { func (s statDecision) Description() (string, string) {
return "Local API Decisions",
`Provides information about all currently active decisions. ` +
`Includes both local (crowdsec) and global decisions (CAPI), and lists subscriptions (lists).`
}
func (s statDecision) Process(reason, origin, action string, val int) {
if _, ok := s[reason]; !ok {
s[reason] = make(map[string]map[string]int)
}
if _, ok := s[reason][origin]; !ok {
s[reason][origin] = make(map[string]int)
}
s[reason][origin][action] += val
}
func (s statDecision) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
t.SetHeaders("Reason", "Origin", "Action", "Count") t.SetHeaders("Reason", "Origin", "Action", "Count")
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)
numRows := 0 numRows := 0
for reason, origins := range stats {
for reason, origins := range s {
for origin, actions := range origins { for origin, actions := range origins {
for action, hits := range actions { for action, hits := range actions {
t.AddRow( t.AddRow(
reason, reason,
origin, origin,
action, action,
fmt.Sprintf("%d", hits), strconv.Itoa(hits),
) )
numRows++ numRows++
} }
} }
} }
if numRows > 0 { if numRows > 0 || showEmpty {
renderTableTitle(out, "\nLocal Api Decisions:") title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render() t.Render()
} }
} }
func alertStatsTable(out io.Writer, stats map[string]int) { func (s statAlert) Description() (string, string) {
return "Local API Alerts",
`Tracks the total number of past and present alerts for the installed scenarios.`
}
func (s statAlert) Process(reason string, val int) {
s[reason] += val
}
func (s statAlert) Table(out io.Writer, noUnit bool, showEmpty bool) {
t := newTable(out) t := newTable(out)
t.SetRowLines(false) t.SetRowLines(false)
t.SetHeaders("Reason", "Count") t.SetHeaders("Reason", "Count")
t.SetAlignment(table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft)
numRows := 0 numRows := 0
for scenario, hits := range stats {
for scenario, hits := range s {
t.AddRow( t.AddRow(
scenario, scenario,
fmt.Sprintf("%d", hits), strconv.Itoa(hits),
) )
numRows++ numRows++
} }
if numRows > 0 { if numRows > 0 || showEmpty {
renderTableTitle(out, "\nLocal Api Alerts:") title, _ := s.Description()
renderTableTitle(out, "\n"+title+":")
t.Render() t.Render()
} }
} }


@@ -4,6 +4,7 @@ import (
"context" "context"
"encoding/csv" "encoding/csv"
"encoding/json" "encoding/json"
"errors"
"fmt" "fmt"
"io/fs" "io/fs"
"net/url" "net/url"
@@ -15,148 +16,171 @@ import (
"github.com/fatih/color" "github.com/fatih/color"
"github.com/go-openapi/strfmt" "github.com/go-openapi/strfmt"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"gopkg.in/tomb.v2" "gopkg.in/tomb.v2"
"gopkg.in/yaml.v3"
"github.com/crowdsecurity/go-cs-lib/pkg/version" "github.com/crowdsecurity/go-cs-lib/ptr"
"github.com/crowdsecurity/go-cs-lib/version"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/apiclient" "github.com/crowdsecurity/crowdsec/pkg/apiclient"
"github.com/crowdsecurity/crowdsec/pkg/csconfig" "github.com/crowdsecurity/crowdsec/pkg/csconfig"
"github.com/crowdsecurity/crowdsec/pkg/csplugin" "github.com/crowdsecurity/crowdsec/pkg/csplugin"
"github.com/crowdsecurity/crowdsec/pkg/csprofiles" "github.com/crowdsecurity/crowdsec/pkg/csprofiles"
"github.com/crowdsecurity/crowdsec/pkg/models"
"github.com/crowdsecurity/crowdsec/pkg/types"
) )
type NotificationsCfg struct { type NotificationsCfg struct {
Config csplugin.PluginConfig `json:"plugin_config"` Config csplugin.PluginConfig `json:"plugin_config"`
Profiles []*csconfig.ProfileCfg `json:"associated_profiles"` Profiles []*csconfig.ProfileCfg `json:"associated_profiles"`
ids []uint ids []uint
} }
type cliNotifications struct {
cfg configGetter
}
func NewNotificationsCmd() *cobra.Command { func NewCLINotifications(cfg configGetter) *cliNotifications {
var cmdNotifications = &cobra.Command{ return &cliNotifications{
cfg: cfg,
}
}
func (cli *cliNotifications) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "notifications [action]", Use: "notifications [action]",
Short: "Helper for notification plugin configuration", Short: "Helper for notification plugin configuration",
Long: "To list/inspect/test notification template", Long: "To list/inspect/test notification template",
Args: cobra.MinimumNArgs(1), Args: cobra.MinimumNArgs(1),
Aliases: []string{"notifications", "notification"}, Aliases: []string{"notifications", "notification"},
DisableAutoGenTag: true, DisableAutoGenTag: true,
PersistentPreRun: func(cmd *cobra.Command, args []string) { PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
var ( cfg := cli.cfg()
err error if err := require.LAPI(cfg); err != nil {
) return err
if err = csConfig.API.Server.LoadProfiles(); err != nil {
log.Fatal(err)
} }
if csConfig.ConfigPaths.NotificationDir == "" { if err := cfg.LoadAPIClient(); err != nil {
log.Fatalf("config_paths.notification_dir is not set in crowdsec config") return fmt.Errorf("loading api client: %w", err)
} }
if err := require.Notifications(cfg); err != nil {
return err
}
return nil
}, },
} }
cmd.AddCommand(cli.NewListCmd())
cmd.AddCommand(cli.NewInspectCmd())
cmd.AddCommand(cli.NewReinjectCmd())
cmd.AddCommand(cli.NewTestCmd())
cmdNotifications.AddCommand(NewNotificationsListCmd()) return cmd
cmdNotifications.AddCommand(NewNotificationsInspectCmd())
cmdNotifications.AddCommand(NewNotificationsReinjectCmd())
return cmdNotifications
} }
func (cli *cliNotifications) getPluginConfigs() (map[string]csplugin.PluginConfig, error) {
func getNotificationsConfiguration() (map[string]NotificationsCfg, error) { cfg := cli.cfg()
pcfgs := map[string]csplugin.PluginConfig{} pcfgs := map[string]csplugin.PluginConfig{}
wf := func(path string, info fs.FileInfo, err error) error { wf := func(path string, info fs.FileInfo, err error) error {
if info == nil { if info == nil {
return errors.Wrapf(err, "error while traversing directory %s", path) return fmt.Errorf("error while traversing directory %s: %w", path, err)
} }
name := filepath.Join(csConfig.ConfigPaths.NotificationDir, info.Name()) //Avoid calling info.Name() twice
name := filepath.Join(cfg.ConfigPaths.NotificationDir, info.Name()) // Avoid calling info.Name() twice
if (strings.HasSuffix(name, "yaml") || strings.HasSuffix(name, "yml")) && !(info.IsDir()) { if (strings.HasSuffix(name, "yaml") || strings.HasSuffix(name, "yml")) && !(info.IsDir()) {
ts, err := csplugin.ParsePluginConfigFile(name) ts, err := csplugin.ParsePluginConfigFile(name)
if err != nil { if err != nil {
return errors.Wrapf(err, "Loading notification plugin configuration with %s", name) return fmt.Errorf("loading notification plugin configuration with %s: %w", name, err)
} }
for _, t := range ts { for _, t := range ts {
csplugin.SetRequiredFields(&t)
pcfgs[t.Name] = t pcfgs[t.Name] = t
} }
} }
return nil return nil
} }
if err := filepath.Walk(csConfig.ConfigPaths.NotificationDir, wf); err != nil { if err := filepath.Walk(cfg.ConfigPaths.NotificationDir, wf); err != nil {
return nil, errors.Wrap(err, "Loading notification plugin configuration") return nil, fmt.Errorf("while loading notification plugin configuration: %w", err)
} }
return pcfgs, nil
}
func (cli *cliNotifications) getProfilesConfigs() (map[string]NotificationsCfg, error) {
cfg := cli.cfg()
// A bit of tricky stuff now: reconcile profiles and notification plugins // A bit of tricky stuff now: reconcile profiles and notification plugins
ncfgs := map[string]NotificationsCfg{} pcfgs, err := cli.getPluginConfigs()
profiles, err := csprofiles.NewProfile(csConfig.API.Server.Profiles)
if err != nil { if err != nil {
return nil, errors.Wrap(err, "Cannot extract profiles from configuration") return nil, err
} }
for profileID, profile := range profiles {
loop: ncfgs := map[string]NotificationsCfg{}
for _, notif := range profile.Cfg.Notifications { for _, pc := range pcfgs {
for name, pc := range pcfgs { ncfgs[pc.Name] = NotificationsCfg{
if notif == name { Config: pc,
if _, ok := ncfgs[pc.Name]; !ok {
ncfgs[pc.Name] = NotificationsCfg{
Config: pc,
Profiles: []*csconfig.ProfileCfg{profile.Cfg},
ids: []uint{uint(profileID)},
}
continue loop
}
tmp := ncfgs[pc.Name]
for _, pr := range tmp.Profiles {
var profiles []*csconfig.ProfileCfg
if pr.Name == profile.Cfg.Name {
continue
}
profiles = append(tmp.Profiles, profile.Cfg)
ids := append(tmp.ids, uint(profileID))
ncfgs[pc.Name] = NotificationsCfg{
Config: tmp.Config,
Profiles: profiles,
ids: ids,
}
}
}
}
} }
} }
profiles, err := csprofiles.NewProfile(cfg.API.Server.Profiles)
if err != nil {
return nil, fmt.Errorf("while extracting profiles from configuration: %w", err)
}
for profileID, profile := range profiles {
for _, notif := range profile.Cfg.Notifications {
pc, ok := pcfgs[notif]
if !ok {
return nil, fmt.Errorf("notification plugin '%s' does not exist", notif)
}
tmp, ok := ncfgs[pc.Name]
if !ok {
return nil, fmt.Errorf("notification plugin '%s' does not exist", pc.Name)
}
tmp.Profiles = append(tmp.Profiles, profile.Cfg)
tmp.ids = append(tmp.ids, uint(profileID))
ncfgs[pc.Name] = tmp
}
}
return ncfgs, nil return ncfgs, nil
} }
func (cli *cliNotifications) NewListCmd() *cobra.Command {
func NewNotificationsListCmd() *cobra.Command { cmd := &cobra.Command{
var cmdNotificationsList = &cobra.Command{
Use: "list", Use: "list",
Short: "List active notifications plugins", Short: "list active notifications plugins",
Long: `List active notifications plugins`, Long: `list active notifications plugins`,
Example: `cscli notifications list`, Example: `cscli notifications list`,
Args: cobra.ExactArgs(0), Args: cobra.ExactArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, arg []string) error { RunE: func(_ *cobra.Command, _ []string) error {
ncfgs, err := getNotificationsConfiguration() cfg := cli.cfg()
ncfgs, err := cli.getProfilesConfigs()
if err != nil { if err != nil {
return errors.Wrap(err, "Can't build profiles configuration") return fmt.Errorf("can't build profiles configuration: %w", err)
} }
if csConfig.Cscli.Output == "human" { if cfg.Cscli.Output == "human" {
notificationListTable(color.Output, ncfgs) notificationListTable(color.Output, ncfgs)
} else if csConfig.Cscli.Output == "json" { } else if cfg.Cscli.Output == "json" {
x, err := json.MarshalIndent(ncfgs, "", " ") x, err := json.MarshalIndent(ncfgs, "", " ")
if err != nil { if err != nil {
return errors.New("failed to marshal notification configuration") return fmt.Errorf("failed to marshal notification configuration: %w", err)
} }
fmt.Printf("%s", string(x)) fmt.Printf("%s", string(x))
} else if csConfig.Cscli.Output == "raw" { } else if cfg.Cscli.Output == "raw" {
csvwriter := csv.NewWriter(os.Stdout) csvwriter := csv.NewWriter(os.Stdout)
err := csvwriter.Write([]string{"Name", "Type", "Profile name"}) err := csvwriter.Write([]string{"Name", "Type", "Profile name"})
if err != nil { if err != nil {
return errors.Wrap(err, "failed to write raw header") return fmt.Errorf("failed to write raw header: %w", err)
} }
for _, b := range ncfgs { for _, b := range ncfgs {
profilesList := []string{} profilesList := []string{}
@@ -165,140 +189,193 @@ func NewNotificationsListCmd() *cobra.Command {
} }
err := csvwriter.Write([]string{b.Config.Name, b.Config.Type, strings.Join(profilesList, ", ")}) err := csvwriter.Write([]string{b.Config.Name, b.Config.Type, strings.Join(profilesList, ", ")})
if err != nil { if err != nil {
return errors.Wrap(err, "failed to write raw content") return fmt.Errorf("failed to write raw content: %w", err)
} }
} }
csvwriter.Flush() csvwriter.Flush()
} }
return nil return nil
}, },
} }
return cmdNotificationsList return cmd
} }
func (cli *cliNotifications) NewInspectCmd() *cobra.Command {
func NewNotificationsInspectCmd() *cobra.Command { cmd := &cobra.Command{
var cmdNotificationsInspect = &cobra.Command{
Use: "inspect", Use: "inspect",
Short: "Inspect active notifications plugin configuration", Short: "Inspect active notifications plugin configuration",
Long: `Inspect active notifications plugin and show configuration`, Long: `Inspect active notifications plugin and show configuration`,
Example: `cscli notifications inspect <plugin_name>`, Example: `cscli notifications inspect <plugin_name>`,
Args: cobra.ExactArgs(1), Args: cobra.ExactArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, arg []string) error { RunE: func(_ *cobra.Command, args []string) error {
var ( cfg := cli.cfg()
cfg NotificationsCfg ncfgs, err := cli.getProfilesConfigs()
ok bool
)
pluginName := arg[0]
if pluginName == "" {
errors.New("Please provide a plugin name to inspect")
}
ncfgs, err := getNotificationsConfiguration()
if err != nil { if err != nil {
return errors.Wrap(err, "Can't build profiles configuration") return fmt.Errorf("can't build profiles configuration: %w", err)
} }
if cfg, ok = ncfgs[pluginName]; !ok { ncfg, ok := ncfgs[args[0]]
return errors.New("The provided plugin name doesn't exist or isn't active") if !ok {
return fmt.Errorf("plugin '%s' does not exist or is not active", args[0])
} }
if cfg.Cscli.Output == "human" || cfg.Cscli.Output == "raw" {
if csConfig.Cscli.Output == "human" || csConfig.Cscli.Output == "raw" { fmt.Printf(" - %15s: %15s\n", "Type", ncfg.Config.Type)
fmt.Printf(" - %15s: %15s\n", "Type", cfg.Config.Type) fmt.Printf(" - %15s: %15s\n", "Name", ncfg.Config.Name)
fmt.Printf(" - %15s: %15s\n", "Name", cfg.Config.Name) fmt.Printf(" - %15s: %15s\n", "Timeout", ncfg.Config.TimeOut)
fmt.Printf(" - %15s: %15s\n", "Timeout", cfg.Config.TimeOut) fmt.Printf(" - %15s: %15s\n", "Format", ncfg.Config.Format)
fmt.Printf(" - %15s: %15s\n", "Format", cfg.Config.Format) for k, v := range ncfg.Config.Config {
for k, v := range cfg.Config.Config {
fmt.Printf(" - %15s: %15v\n", k, v) fmt.Printf(" - %15s: %15v\n", k, v)
} }
} else if csConfig.Cscli.Output == "json" { } else if cfg.Cscli.Output == "json" {
x, err := json.MarshalIndent(cfg, "", " ") x, err := json.MarshalIndent(cfg, "", " ")
if err != nil { if err != nil {
return errors.New("failed to marshal notification configuration") return fmt.Errorf("failed to marshal notification configuration: %w", err)
} }
fmt.Printf("%s", string(x)) fmt.Printf("%s", string(x))
} }
return nil return nil
}, },
} }
return cmdNotificationsInspect return cmd
} }
func (cli *cliNotifications) NewTestCmd() *cobra.Command {
var (
pluginBroker csplugin.PluginBroker
pluginTomb tomb.Tomb
alertOverride string
)
func NewNotificationsReinjectCmd() *cobra.Command { cmd := &cobra.Command{
var remediation bool Use: "test [plugin name]",
var alertOverride string Short: "send a generic test alert to notification plugin",
Long: `send a generic test alert to a notification plugin to test configuration even if it is not active`,
Example: `cscli notifications test [plugin_name]`,
Args: cobra.ExactArgs(1),
DisableAutoGenTag: true,
PreRunE: func(_ *cobra.Command, args []string) error {
cfg := cli.cfg()
pconfigs, err := cli.getPluginConfigs()
if err != nil {
return fmt.Errorf("can't build profiles configuration: %w", err)
}
pcfg, ok := pconfigs[args[0]]
if !ok {
return fmt.Errorf("plugin name: '%s' does not exist", args[0])
}
// Create a single profile with plugin name as notification name
return pluginBroker.Init(cfg.PluginConfig, []*csconfig.ProfileCfg{
{
Notifications: []string{
pcfg.Name,
},
},
}, cfg.ConfigPaths)
},
RunE: func(_ *cobra.Command, _ []string) error {
pluginTomb.Go(func() error {
pluginBroker.Run(&pluginTomb)
return nil
})
alert := &models.Alert{
Capacity: ptr.Of(int32(0)),
Decisions: []*models.Decision{{
Duration: ptr.Of("4h"),
Scope: ptr.Of("Ip"),
Value: ptr.Of("10.10.10.10"),
Type: ptr.Of("ban"),
Scenario: ptr.Of("test alert"),
Origin: ptr.Of(types.CscliOrigin),
}},
Events: []*models.Event{},
EventsCount: ptr.Of(int32(1)),
Leakspeed: ptr.Of("0"),
Message: ptr.Of("test alert"),
ScenarioHash: ptr.Of(""),
Scenario: ptr.Of("test alert"),
ScenarioVersion: ptr.Of(""),
Simulated: ptr.Of(false),
Source: &models.Source{
AsName: "",
AsNumber: "",
Cn: "",
IP: "10.10.10.10",
Range: "",
Scope: ptr.Of("Ip"),
Value: ptr.Of("10.10.10.10"),
},
StartAt: ptr.Of(time.Now().UTC().Format(time.RFC3339)),
StopAt: ptr.Of(time.Now().UTC().Format(time.RFC3339)),
CreatedAt: time.Now().UTC().Format(time.RFC3339),
}
if err := yaml.Unmarshal([]byte(alertOverride), alert); err != nil {
return fmt.Errorf("failed to unmarshal alert override: %w", err)
}
var cmdNotificationsReinject = &cobra.Command{
pluginBroker.PluginChannel <- csplugin.ProfileAlert{
ProfileID: uint(0),
Alert: alert,
}
// time.Sleep(2 * time.Second) // There's no mechanism to ensure notification has been sent
pluginTomb.Kill(errors.New("terminating"))
pluginTomb.Wait()
return nil
},
}
cmd.Flags().StringVarP(&alertOverride, "alert", "a", "", "JSON string used to override alert fields in the generic alert (see crowdsec/pkg/models/alert.go in the source tree for the full definition of the object)")
return cmd
}
func (cli *cliNotifications) NewReinjectCmd() *cobra.Command {
var (
alertOverride string
alert *models.Alert
)
cmd := &cobra.Command{
Use: "reinject", Use: "reinject",
Short: "reinject alert into notifications system", Short: "reinject an alert into profiles to trigger notifications",
Long: `Reinject alert into notifications system`, Long: `reinject an alert into profiles to be evaluated by the filter and sent to matched notifications plugins`,
Example: ` Example: `
cscli notifications reinject <alert_id> cscli notifications reinject <alert_id>
cscli notifications reinject <alert_id> --remediation cscli notifications reinject <alert_id> -a '{"remediation": false,"scenario":"notification/test"}'
cscli notifications reinject <alert_id> -a '{"remediation": true,"scenario":"notification/test"}' cscli notifications reinject <alert_id> -a '{"remediation": true,"scenario":"notification/test"}'
`, `,
Args: cobra.ExactArgs(1), Args: cobra.ExactArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
RunE: func(cmd *cobra.Command, args []string) error { PreRunE: func(_ *cobra.Command, args []string) error {
var err error
alert, err = cli.fetchAlertFromArgString(args[0])
if err != nil {
return err
}
return nil
},
RunE: func(_ *cobra.Command, _ []string) error {
var ( var (
pluginBroker csplugin.PluginBroker pluginBroker csplugin.PluginBroker
pluginTomb tomb.Tomb pluginTomb tomb.Tomb
) )
if len(args) != 1 {
printHelp(cmd)
return errors.New("Wrong number of argument: there should be one argument")
}
//first: get the alert
cfg := cli.cfg()
id, err := strconv.Atoi(args[0])
if err != nil {
return errors.New(fmt.Sprintf("bad alert id %s", args[0]))
}
if err := csConfig.LoadAPIClient(); err != nil {
return errors.Wrapf(err, "loading api client")
}
if csConfig.API.Client == nil {
return errors.New("There is no configuration on 'api_client:'")
}
if csConfig.API.Client.Credentials == nil {
return errors.New(fmt.Sprintf("Please provide credentials for the API in '%s'", csConfig.API.Client.CredentialsFilePath))
}
apiURL, err := url.Parse(csConfig.API.Client.Credentials.URL)
if err != nil {
return errors.Wrapf(err, "error parsing the URL of the API")
}
client, err := apiclient.NewClient(&apiclient.Config{
MachineID: csConfig.API.Client.Credentials.Login,
Password: strfmt.Password(csConfig.API.Client.Credentials.Password),
UserAgent: fmt.Sprintf("crowdsec/%s", version.String()),
URL: apiURL,
VersionPrefix: "v1",
})
if err != nil {
return errors.Wrapf(err, "error creating the client for the API")
}
alert, _, err := client.Alerts.GetByID(context.Background(), id)
if err != nil {
return errors.Wrapf(err, fmt.Sprintf("can't find alert with id %s", args[0]))
}
if alertOverride != "" { if alertOverride != "" {
if err = json.Unmarshal([]byte(alertOverride), alert); err != nil { if err := json.Unmarshal([]byte(alertOverride), alert); err != nil {
return errors.Wrapf(err, "Can't unmarshal the data given in the alert flag") return fmt.Errorf("can't unmarshal data in the alert flag: %w", err)
} }
} }
if !remediation {
alert.Remediation = true
}
// second we start plugins
err := pluginBroker.Init(cfg.PluginConfig, cfg.API.Server.Profiles, cfg.ConfigPaths)
err = pluginBroker.Init(csConfig.PluginConfig, csConfig.API.Server.Profiles, csConfig.ConfigPaths)
if err != nil { if err != nil {
return errors.Wrapf(err, "Can't initialize plugins") return fmt.Errorf("can't initialize plugins: %w", err)
} }
pluginTomb.Go(func() error { pluginTomb.Go(func() error {
@ -306,17 +383,15 @@ cscli notifications reinject <alert_id> -a '{"remediation": true,"scenario":"not
return nil return nil
}) })
//third: get the profile(s), and process the whole stuff
profiles, err := csprofiles.NewProfile(cfg.API.Server.Profiles)
profiles, err := csprofiles.NewProfile(csConfig.API.Server.Profiles)
if err != nil { if err != nil {
return errors.Wrap(err, "Cannot extract profiles from configuration") return fmt.Errorf("cannot extract profiles from configuration: %w", err)
} }
for id, profile := range profiles { for id, profile := range profiles {
_, matched, err := profile.EvaluateProfile(alert) _, matched, err := profile.EvaluateProfile(alert)
if err != nil { if err != nil {
return errors.Wrapf(err, "can't evaluate profile %s", profile.Cfg.Name) return fmt.Errorf("can't evaluate profile %s: %w", profile.Cfg.Name, err)
} }
if !matched { if !matched {
log.Infof("The profile %s didn't match", profile.Cfg.Name) log.Infof("The profile %s didn't match", profile.Cfg.Name)
@ -334,23 +409,54 @@ cscli notifications reinject <alert_id> -a '{"remediation": true,"scenario":"not
default: default:
time.Sleep(50 * time.Millisecond) time.Sleep(50 * time.Millisecond)
log.Info("sleeping\n") log.Info("sleeping\n")
} }
} }
if profile.Cfg.OnSuccess == "break" { if profile.Cfg.OnSuccess == "break" {
log.Infof("The profile %s contains a 'on_success: break' so bailing out", profile.Cfg.Name) log.Infof("The profile %s contains a 'on_success: break' so bailing out", profile.Cfg.Name)
break break
} }
} }
// time.Sleep(2 * time.Second) // There's no mechanism to ensure notification has been sent
// time.Sleep(2 * time.Second) // There's no mechanism to ensure notification has been sent
pluginTomb.Kill(errors.New("terminating")) pluginTomb.Kill(errors.New("terminating"))
pluginTomb.Wait() pluginTomb.Wait()
return nil return nil
}, },
} }
cmdNotificationsReinject.Flags().BoolVarP(&remediation, "remediation", "r", false, "Set Alert.Remediation to false in the reinjected alert (see your profile filter configuration)")
cmdNotificationsReinject.Flags().StringVarP(&alertOverride, "alert", "a", "", "JSON string used to override alert fields in the reinjected alert (see crowdsec/pkg/models/alert.go in the source tree for the full definition of the object)")
cmd.Flags().StringVarP(&alertOverride, "alert", "a", "", "JSON string used to override alert fields in the reinjected alert (see crowdsec/pkg/models/alert.go in the source tree for the full definition of the object)")
return cmdNotificationsReinject return cmd
}
func (cli *cliNotifications) fetchAlertFromArgString(toParse string) (*models.Alert, error) {
cfg := cli.cfg()
id, err := strconv.Atoi(toParse)
if err != nil {
return nil, fmt.Errorf("bad alert id %s", toParse)
}
apiURL, err := url.Parse(cfg.API.Client.Credentials.URL)
if err != nil {
return nil, fmt.Errorf("error parsing the URL of the API: %w", err)
}
client, err := apiclient.NewClient(&apiclient.Config{
MachineID: cfg.API.Client.Credentials.Login,
Password: strfmt.Password(cfg.API.Client.Credentials.Password),
UserAgent: fmt.Sprintf("crowdsec/%s", version.String()),
URL: apiURL,
VersionPrefix: "v1",
})
if err != nil {
return nil, fmt.Errorf("error creating the client for the API: %w", err)
}
alert, _, err := client.Alerts.GetByID(context.Background(), id)
if err != nil {
return nil, fmt.Errorf("can't find alert with id %d: %w", id, err)
}
return alert, nil
} }
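The new test and reinject commands above drive notification plugins through the same csplugin.PluginBroker lifecycle: Init with a set of profiles, Run inside a tomb, a send on PluginChannel, then Kill and Wait. The sketch below condenses that lifecycle into one helper; the helper name and its parameters are illustrative, the import paths follow the repository layout used elsewhere in this diff, and only calls that appear in the diff are used.

package main

import (
	"errors"

	"gopkg.in/tomb.v2"

	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
	"github.com/crowdsecurity/crowdsec/pkg/csplugin"
	"github.com/crowdsecurity/crowdsec/pkg/models"
)

// deliverAlert is a sketch, not part of the changeset: it pushes one alert
// through the plugin broker the way "cscli notifications test" does above.
func deliverAlert(cfg *csconfig.Config, profiles []*csconfig.ProfileCfg, alert *models.Alert) error {
	var (
		broker csplugin.PluginBroker
		pt     tomb.Tomb
	)
	// Load plugin binaries and the notification configurations referenced by the profiles.
	if err := broker.Init(cfg.PluginConfig, profiles, cfg.ConfigPaths); err != nil {
		return err
	}
	// Run the broker in the background; it consumes PluginChannel until the tomb is killed.
	pt.Go(func() error {
		broker.Run(&pt)
		return nil
	})
	// Hand the alert to the broker as if profile 0 had matched it.
	broker.PluginChannel <- csplugin.ProfileAlert{ProfileID: 0, Alert: alert}
	// As the diff comments note, there is no delivery acknowledgement, so shutdown is best-effort.
	pt.Kill(errors.New("terminating"))
	return pt.Wait()
}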


@ -2,23 +2,43 @@ package main
import ( import (
"io" "io"
"sort"
"strings" "strings"
"github.com/aquasecurity/table" "github.com/aquasecurity/table"
"github.com/crowdsecurity/crowdsec/pkg/emoji"
) )
func notificationListTable(out io.Writer, ncfgs map[string]NotificationsCfg) { func notificationListTable(out io.Writer, ncfgs map[string]NotificationsCfg) {
t := newLightTable(out) t := newLightTable(out)
t.SetHeaders("Name", "Type", "Profile name") t.SetHeaders("Active", "Name", "Type", "Profile name")
t.SetHeaderAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetHeaderAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)
for _, b := range ncfgs {
keys := make([]string, 0, len(ncfgs))
for k := range ncfgs {
keys = append(keys, k)
}
sort.Slice(keys, func(i, j int) bool {
return len(ncfgs[keys[i]].Profiles) > len(ncfgs[keys[j]].Profiles)
})
for _, k := range keys {
b := ncfgs[k]
profilesList := []string{} profilesList := []string{}
for _, p := range b.Profiles { for _, p := range b.Profiles {
profilesList = append(profilesList, p.Name) profilesList = append(profilesList, p.Name)
} }
t.AddRow(b.Config.Name, b.Config.Type, strings.Join(profilesList, ", "))
active := emoji.CheckMark
if len(profilesList) == 0 {
active = emoji.Prohibited
}
t.AddRow(active, b.Config.Name, b.Config.Type, strings.Join(profilesList, ", "))
} }
t.Render() t.Render()
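The listing above orders plugins by how many profiles reference them, but sort.Slice makes no guarantee about the relative order of plugins with equal counts, so the table can change between runs. A possible refinement, shown here on illustrative data and not part of the changeset, is to break ties on the plugin name:

package main

import (
	"fmt"
	"sort"
)

func main() {
	// Illustrative data: plugin name -> number of profiles referencing it.
	counts := map[string]int{"slack": 1, "email": 1, "splunk": 2}

	keys := make([]string, 0, len(counts))
	for k := range counts {
		keys = append(keys, k)
	}
	// Same ordering idea as notificationListTable, plus a name tie-break so
	// plugins with equal profile counts always render in the same order.
	sort.Slice(keys, func(i, j int) bool {
		if counts[keys[i]] != counts[keys[j]] {
			return counts[keys[i]] > counts[keys[j]]
		}
		return keys[i] < keys[j]
	})
	fmt.Println(keys) // [splunk email slack]
}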


@ -8,67 +8,78 @@ import (
"github.com/spf13/cobra" "github.com/spf13/cobra"
"gopkg.in/tomb.v2" "gopkg.in/tomb.v2"
"github.com/crowdsecurity/go-cs-lib/pkg/ptr" "github.com/crowdsecurity/go-cs-lib/ptr"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/apiserver" "github.com/crowdsecurity/crowdsec/pkg/apiserver"
"github.com/crowdsecurity/crowdsec/pkg/database" "github.com/crowdsecurity/crowdsec/pkg/database"
) )
func NewPapiCmd() *cobra.Command {
var cmdLapi = &cobra.Command{
type cliPapi struct {
cfg configGetter
}
func NewCLIPapi(cfg configGetter) *cliPapi {
return &cliPapi{
cfg: cfg,
}
}
func (cli *cliPapi) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "papi [action]", Use: "papi [action]",
Short: "Manage interaction with Polling API (PAPI)", Short: "Manage interaction with Polling API (PAPI)",
Args: cobra.MinimumNArgs(1), Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true, DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error { PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
if err := csConfig.LoadAPIServer(); err != nil || csConfig.DisableAPI {
return fmt.Errorf("Local API is disabled, please run this command on the local API machine: %w", err)
}
if csConfig.API.Server.OnlineClient == nil {
log.Fatalf("no configuration for Central API in '%s'", *csConfig.FilePath)
}
if csConfig.API.Server.OnlineClient.Credentials.PapiURL == "" {
log.Fatalf("no PAPI URL in configuration")
}
cfg := cli.cfg()
if err := require.LAPI(cfg); err != nil {
return err
}
if err := require.CAPI(cfg); err != nil {
return err
}
if err := require.PAPI(cfg); err != nil {
return err
}
return nil return nil
}, },
} }
cmdLapi.AddCommand(NewPapiStatusCmd()) cmd.AddCommand(cli.NewStatusCmd())
cmdLapi.AddCommand(NewPapiSyncCmd()) cmd.AddCommand(cli.NewSyncCmd())
return cmdLapi return cmd
} }
func NewPapiStatusCmd() *cobra.Command { func (cli *cliPapi) NewStatusCmd() *cobra.Command {
cmdCapiStatus := &cobra.Command{ cmd := &cobra.Command{
Use: "status", Use: "status",
Short: "Get status of the Polling API", Short: "Get status of the Polling API",
Args: cobra.MinimumNArgs(0), Args: cobra.MinimumNArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
var err error var err error
dbClient, err = database.NewClient(csConfig.DbConfig)
cfg := cli.cfg()
dbClient, err = database.NewClient(cfg.DbConfig)
if err != nil { if err != nil {
log.Fatalf("unable to initialize database client : %s", err) return fmt.Errorf("unable to initialize database client: %w", err)
} }
apic, err := apiserver.NewAPIC(csConfig.API.Server.OnlineClient, dbClient, csConfig.API.Server.ConsoleConfig, csConfig.API.Server.CapiWhitelists) apic, err := apiserver.NewAPIC(cfg.API.Server.OnlineClient, dbClient, cfg.API.Server.ConsoleConfig, cfg.API.Server.CapiWhitelists)
if err != nil { if err != nil {
log.Fatalf("unable to initialize API client : %s", err) return fmt.Errorf("unable to initialize API client: %w", err)
} }
papi, err := apiserver.NewPAPI(apic, dbClient, csConfig.API.Server.ConsoleConfig, log.GetLevel()) papi, err := apiserver.NewPAPI(apic, dbClient, cfg.API.Server.ConsoleConfig, log.GetLevel())
if err != nil { if err != nil {
log.Fatalf("unable to initialize PAPI client : %s", err) return fmt.Errorf("unable to initialize PAPI client: %w", err)
} }
perms, err := papi.GetPermissions() perms, err := papi.GetPermissions()
if err != nil { if err != nil {
log.Fatalf("unable to get PAPI permissions: %s", err) return fmt.Errorf("unable to get PAPI permissions: %w", err)
} }
var lastTimestampStr *string var lastTimestampStr *string
lastTimestampStr, err = dbClient.GetConfigItem(apiserver.PapiPullKey) lastTimestampStr, err = dbClient.GetConfigItem(apiserver.PapiPullKey)
@ -83,45 +94,47 @@ func NewPapiStatusCmd() *cobra.Command {
for _, sub := range perms.Categories { for _, sub := range perms.Categories {
log.Infof(" - %s", sub) log.Infof(" - %s", sub)
} }
return nil
}, },
} }
return cmdCapiStatus return cmd
} }
func NewPapiSyncCmd() *cobra.Command { func (cli *cliPapi) NewSyncCmd() *cobra.Command {
cmdCapiSync := &cobra.Command{ cmd := &cobra.Command{
Use: "sync", Use: "sync",
Short: "Sync with the Polling API, pulling all non-expired orders for the instance", Short: "Sync with the Polling API, pulling all non-expired orders for the instance",
Args: cobra.MinimumNArgs(0), Args: cobra.MinimumNArgs(0),
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
var err error var err error
cfg := cli.cfg()
t := tomb.Tomb{} t := tomb.Tomb{}
dbClient, err = database.NewClient(csConfig.DbConfig)
dbClient, err = database.NewClient(cfg.DbConfig)
if err != nil { if err != nil {
log.Fatalf("unable to initialize database client : %s", err) return fmt.Errorf("unable to initialize database client: %w", err)
} }
apic, err := apiserver.NewAPIC(csConfig.API.Server.OnlineClient, dbClient, csConfig.API.Server.ConsoleConfig, csConfig.API.Server.CapiWhitelists) apic, err := apiserver.NewAPIC(cfg.API.Server.OnlineClient, dbClient, cfg.API.Server.ConsoleConfig, cfg.API.Server.CapiWhitelists)
if err != nil { if err != nil {
log.Fatalf("unable to initialize API client : %s", err) return fmt.Errorf("unable to initialize API client: %w", err)
} }
t.Go(apic.Push) t.Go(apic.Push)
papi, err := apiserver.NewPAPI(apic, dbClient, csConfig.API.Server.ConsoleConfig, log.GetLevel()) papi, err := apiserver.NewPAPI(apic, dbClient, cfg.API.Server.ConsoleConfig, log.GetLevel())
if err != nil { if err != nil {
log.Fatalf("unable to initialize PAPI client : %s", err) return fmt.Errorf("unable to initialize PAPI client: %w", err)
} }
t.Go(papi.SyncDecisions) t.Go(papi.SyncDecisions)
err = papi.PullOnce(time.Time{}, true) err = papi.PullOnce(time.Time{}, true)
if err != nil { if err != nil {
log.Fatalf("unable to sync decisions: %s", err) return fmt.Errorf("unable to sync decisions: %w", err)
} }
log.Infof("Sending acknowledgements to CAPI") log.Infof("Sending acknowledgements to CAPI")
@ -131,8 +144,9 @@ func NewPapiSyncCmd() *cobra.Command {
t.Wait() t.Wait()
time.Sleep(5 * time.Second) //FIXME: the push done by apic.Push is run inside a sub goroutine, sleep to make sure it's done time.Sleep(5 * time.Second) //FIXME: the push done by apic.Push is run inside a sub goroutine, sleep to make sure it's done
return nil
}, },
} }
return cmdCapiSync return cmd
} }
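cliPapi follows the pattern applied throughout this changeset (see cliNotifications and cliSimulation): a constructor takes a configGetter and each subcommand resolves the configuration lazily via cli.cfg() instead of reading the csConfig global. The exact definition of configGetter is not visible in this excerpt, so the sketch below restates its assumed shape; the wiring function is illustrative only.

package main

import (
	"github.com/spf13/cobra"

	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
)

// Assumed shape of configGetter (defined elsewhere in cmd/crowdsec-cli,
// not shown in this diff): a callback returning the loaded configuration.
type configGetter func() *csconfig.Config

// Hypothetical wiring: the root command hands the same getter to each command
// group, so none of them needs the global csConfig.
func addSubcommands(root *cobra.Command, cfg *csconfig.Config) {
	getCfg := func() *csconfig.Config { return cfg }

	root.AddCommand(NewCLIPapi(getCfg).NewCommand())
	root.AddCommand(NewCLISimulation(getCfg).NewCommand())
}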


@ -1,202 +0,0 @@
package main
import (
"fmt"
"github.com/fatih/color"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func NewParsersCmd() *cobra.Command {
var cmdParsers = &cobra.Command{
Use: "parsers [action] [config]",
Short: "Install/Remove/Upgrade/Inspect parser(s) from hub",
Example: `cscli parsers install crowdsecurity/sshd-logs
cscli parsers inspect crowdsecurity/sshd-logs
cscli parsers upgrade crowdsecurity/sshd-logs
cscli parsers list
cscli parsers remove crowdsecurity/sshd-logs
`,
Args: cobra.MinimumNArgs(1),
Aliases: []string{"parser"},
DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if err := csConfig.LoadHub(); err != nil {
log.Fatal(err)
}
if csConfig.Hub == nil {
return fmt.Errorf("you must configure cli before interacting with hub")
}
if err := cwhub.SetHubBranch(); err != nil {
return fmt.Errorf("error while setting hub branch: %s", err)
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
log.Info("Run 'sudo cscli hub update' to get the hub index")
log.Fatalf("Failed to get Hub index : %v", err)
}
return nil
},
PersistentPostRun: func(cmd *cobra.Command, args []string) {
if cmd.Name() == "inspect" || cmd.Name() == "list" {
return
}
log.Infof(ReloadMessage())
},
}
cmdParsers.AddCommand(NewParsersInstallCmd())
cmdParsers.AddCommand(NewParsersRemoveCmd())
cmdParsers.AddCommand(NewParsersUpgradeCmd())
cmdParsers.AddCommand(NewParsersInspectCmd())
cmdParsers.AddCommand(NewParsersListCmd())
return cmdParsers
}
func NewParsersInstallCmd() *cobra.Command {
var ignoreError bool
var cmdParsersInstall = &cobra.Command{
Use: "install [config]",
Short: "Install given parser(s)",
Long: `Fetch and install given parser(s) from hub`,
Example: `cscli parsers install crowdsec/xxx crowdsec/xyz`,
Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compAllItems(cwhub.PARSERS, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
for _, name := range args {
t := cwhub.GetItem(cwhub.PARSERS, name)
if t == nil {
nearestItem, score := GetDistance(cwhub.PARSERS, name)
Suggest(cwhub.PARSERS, name, nearestItem.Name, score, ignoreError)
continue
}
if err := cwhub.InstallItem(csConfig, name, cwhub.PARSERS, forceAction, downloadOnly); err != nil {
if ignoreError {
log.Errorf("Error while installing '%s': %s", name, err)
} else {
log.Fatalf("Error while installing '%s': %s", name, err)
}
}
}
},
}
cmdParsersInstall.PersistentFlags().BoolVarP(&downloadOnly, "download-only", "d", false, "Only download packages, don't enable")
cmdParsersInstall.PersistentFlags().BoolVar(&forceAction, "force", false, "Force install : Overwrite tainted and outdated files")
cmdParsersInstall.PersistentFlags().BoolVar(&ignoreError, "ignore", false, "Ignore errors when installing multiple parsers")
return cmdParsersInstall
}
func NewParsersRemoveCmd() *cobra.Command {
var cmdParsersRemove = &cobra.Command{
Use: "remove [config]",
Short: "Remove given parser(s)",
Long: `Remove given parse(s) from hub`,
Aliases: []string{"delete"},
Example: `cscli parsers remove crowdsec/xxx crowdsec/xyz`,
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.PARSERS, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
if all {
cwhub.RemoveMany(csConfig, cwhub.PARSERS, "", all, purge, forceAction)
return
}
if len(args) == 0 {
log.Fatalf("Specify at least one parser to remove or '--all' flag.")
}
for _, name := range args {
cwhub.RemoveMany(csConfig, cwhub.PARSERS, name, all, purge, forceAction)
}
},
}
cmdParsersRemove.PersistentFlags().BoolVar(&purge, "purge", false, "Delete source file too")
cmdParsersRemove.PersistentFlags().BoolVar(&forceAction, "force", false, "Force remove : Remove tainted and outdated files")
cmdParsersRemove.PersistentFlags().BoolVar(&all, "all", false, "Delete all the parsers")
return cmdParsersRemove
}
func NewParsersUpgradeCmd() *cobra.Command {
var cmdParsersUpgrade = &cobra.Command{
Use: "upgrade [config]",
Short: "Upgrade given parser(s)",
Long: `Fetch and upgrade given parser(s) from hub`,
Example: `cscli parsers upgrade crowdsec/xxx crowdsec/xyz`,
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.PARSERS, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
if all {
cwhub.UpgradeConfig(csConfig, cwhub.PARSERS, "", forceAction)
} else {
if len(args) == 0 {
log.Fatalf("no target parser to upgrade")
}
for _, name := range args {
cwhub.UpgradeConfig(csConfig, cwhub.PARSERS, name, forceAction)
}
}
},
}
cmdParsersUpgrade.PersistentFlags().BoolVar(&all, "all", false, "Upgrade all the parsers")
cmdParsersUpgrade.PersistentFlags().BoolVar(&forceAction, "force", false, "Force upgrade : Overwrite tainted and outdated files")
return cmdParsersUpgrade
}
func NewParsersInspectCmd() *cobra.Command {
var cmdParsersInspect = &cobra.Command{
Use: "inspect [name]",
Short: "Inspect given parser",
Long: `Inspect given parser`,
Example: `cscli parsers inspect crowdsec/xxx`,
DisableAutoGenTag: true,
Args: cobra.MinimumNArgs(1),
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.PARSERS, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
InspectItem(args[0], cwhub.PARSERS)
},
}
cmdParsersInspect.PersistentFlags().StringVarP(&prometheusURL, "url", "u", "", "Prometheus url")
return cmdParsersInspect
}
func NewParsersListCmd() *cobra.Command {
var cmdParsersList = &cobra.Command{
Use: "list [name]",
Short: "List all parsers or given one",
Long: `List all parsers or given one`,
Example: `cscli parsers list
cscli parser list crowdsecurity/xxx`,
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
ListItems(color.Output, []string{cwhub.PARSERS}, args, false, true, all)
},
}
cmdParsersList.PersistentFlags().BoolVarP(&all, "all", "a", false, "List disabled items as well")
return cmdParsersList
}


@ -1,200 +0,0 @@
package main
import (
"fmt"
"github.com/fatih/color"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func NewPostOverflowsInstallCmd() *cobra.Command {
var ignoreError bool
cmdPostOverflowsInstall := &cobra.Command{
Use: "install [config]",
Short: "Install given postoverflow(s)",
Long: `Fetch and install given postoverflow(s) from hub`,
Example: `cscli postoverflows install crowdsec/xxx crowdsec/xyz`,
Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compAllItems(cwhub.PARSERS_OVFLW, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
for _, name := range args {
t := cwhub.GetItem(cwhub.PARSERS_OVFLW, name)
if t == nil {
nearestItem, score := GetDistance(cwhub.PARSERS_OVFLW, name)
Suggest(cwhub.PARSERS_OVFLW, name, nearestItem.Name, score, ignoreError)
continue
}
if err := cwhub.InstallItem(csConfig, name, cwhub.PARSERS_OVFLW, forceAction, downloadOnly); err != nil {
if ignoreError {
log.Errorf("Error while installing '%s': %s", name, err)
} else {
log.Fatalf("Error while installing '%s': %s", name, err)
}
}
}
},
}
cmdPostOverflowsInstall.PersistentFlags().BoolVarP(&downloadOnly, "download-only", "d", false, "Only download packages, don't enable")
cmdPostOverflowsInstall.PersistentFlags().BoolVar(&forceAction, "force", false, "Force install : Overwrite tainted and outdated files")
cmdPostOverflowsInstall.PersistentFlags().BoolVar(&ignoreError, "ignore", false, "Ignore errors when installing multiple postoverflows")
return cmdPostOverflowsInstall
}
func NewPostOverflowsRemoveCmd() *cobra.Command {
cmdPostOverflowsRemove := &cobra.Command{
Use: "remove [config]",
Short: "Remove given postoverflow(s)",
Long: `remove given postoverflow(s)`,
Example: `cscli postoverflows remove crowdsec/xxx crowdsec/xyz`,
DisableAutoGenTag: true,
Aliases: []string{"delete"},
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.PARSERS_OVFLW, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
if all {
cwhub.RemoveMany(csConfig, cwhub.PARSERS_OVFLW, "", all, purge, forceAction)
return
}
if len(args) == 0 {
log.Fatalf("Specify at least one postoverflow to remove or '--all' flag.")
}
for _, name := range args {
cwhub.RemoveMany(csConfig, cwhub.PARSERS_OVFLW, name, all, purge, forceAction)
}
},
}
cmdPostOverflowsRemove.PersistentFlags().BoolVar(&purge, "purge", false, "Delete source file too")
cmdPostOverflowsRemove.PersistentFlags().BoolVar(&forceAction, "force", false, "Force remove : Remove tainted and outdated files")
cmdPostOverflowsRemove.PersistentFlags().BoolVar(&all, "all", false, "Delete all the postoverflows")
return cmdPostOverflowsRemove
}
func NewPostOverflowsUpgradeCmd() *cobra.Command {
cmdPostOverflowsUpgrade := &cobra.Command{
Use: "upgrade [config]",
Short: "Upgrade given postoverflow(s)",
Long: `Fetch and Upgrade given postoverflow(s) from hub`,
Example: `cscli postoverflows upgrade crowdsec/xxx crowdsec/xyz`,
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.PARSERS_OVFLW, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
if all {
cwhub.UpgradeConfig(csConfig, cwhub.PARSERS_OVFLW, "", forceAction)
} else {
if len(args) == 0 {
log.Fatalf("no target postoverflow to upgrade")
}
for _, name := range args {
cwhub.UpgradeConfig(csConfig, cwhub.PARSERS_OVFLW, name, forceAction)
}
}
},
}
cmdPostOverflowsUpgrade.PersistentFlags().BoolVarP(&all, "all", "a", false, "Upgrade all the postoverflows")
cmdPostOverflowsUpgrade.PersistentFlags().BoolVar(&forceAction, "force", false, "Force upgrade : Overwrite tainted and outdated files")
return cmdPostOverflowsUpgrade
}
func NewPostOverflowsInspectCmd() *cobra.Command {
cmdPostOverflowsInspect := &cobra.Command{
Use: "inspect [config]",
Short: "Inspect given postoverflow",
Long: `Inspect given postoverflow`,
Example: `cscli postoverflows inspect crowdsec/xxx crowdsec/xyz`,
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.PARSERS_OVFLW, args, toComplete)
},
Args: cobra.MinimumNArgs(1),
Run: func(cmd *cobra.Command, args []string) {
InspectItem(args[0], cwhub.PARSERS_OVFLW)
},
}
return cmdPostOverflowsInspect
}
func NewPostOverflowsListCmd() *cobra.Command {
cmdPostOverflowsList := &cobra.Command{
Use: "list [config]",
Short: "List all postoverflows or given one",
Long: `List all postoverflows or given one`,
Example: `cscli postoverflows list
cscli postoverflows list crowdsecurity/xxx`,
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
ListItems(color.Output, []string{cwhub.PARSERS_OVFLW}, args, false, true, all)
},
}
cmdPostOverflowsList.PersistentFlags().BoolVarP(&all, "all", "a", false, "List disabled items as well")
return cmdPostOverflowsList
}
func NewPostOverflowsCmd() *cobra.Command {
cmdPostOverflows := &cobra.Command{
Use: "postoverflows [action] [config]",
Short: "Install/Remove/Upgrade/Inspect postoverflow(s) from hub",
Example: `cscli postoverflows install crowdsecurity/cdn-whitelist
cscli postoverflows inspect crowdsecurity/cdn-whitelist
cscli postoverflows upgrade crowdsecurity/cdn-whitelist
cscli postoverflows list
cscli postoverflows remove crowdsecurity/cdn-whitelist`,
Args: cobra.MinimumNArgs(1),
Aliases: []string{"postoverflow"},
DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if err := csConfig.LoadHub(); err != nil {
log.Fatal(err)
}
if csConfig.Hub == nil {
return fmt.Errorf("you must configure cli before interacting with hub")
}
if err := cwhub.SetHubBranch(); err != nil {
return fmt.Errorf("error while setting hub branch: %s", err)
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
log.Info("Run 'sudo cscli hub update' to get the hub index")
log.Fatalf("Failed to get Hub index : %v", err)
}
return nil
},
PersistentPostRun: func(cmd *cobra.Command, args []string) {
if cmd.Name() == "inspect" || cmd.Name() == "list" {
return
}
log.Infof(ReloadMessage())
},
}
cmdPostOverflows.AddCommand(NewPostOverflowsInstallCmd())
cmdPostOverflows.AddCommand(NewPostOverflowsRemoveCmd())
cmdPostOverflows.AddCommand(NewPostOverflowsUpgradeCmd())
cmdPostOverflows.AddCommand(NewPostOverflowsInspectCmd())
cmdPostOverflows.AddCommand(NewPostOverflowsListCmd())
return cmdPostOverflows
}


@ -0,0 +1,62 @@
package require
// Set the appropriate hub branch according to config settings and crowdsec version
import (
log "github.com/sirupsen/logrus"
"golang.org/x/mod/semver"
"github.com/crowdsecurity/crowdsec/pkg/cwversion"
"github.com/crowdsecurity/crowdsec/pkg/csconfig"
)
func chooseBranch(cfg *csconfig.Config) string {
// this was set from config.yaml or flag
if cfg.Cscli.HubBranch != "" {
log.Debugf("Hub override from config: branch '%s'", cfg.Cscli.HubBranch)
return cfg.Cscli.HubBranch
}
latest, err := cwversion.Latest()
if err != nil {
log.Warningf("Unable to retrieve latest crowdsec version: %s, using hub branch 'master'", err)
return "master"
}
csVersion := cwversion.VersionStrip()
if csVersion == latest {
log.Debugf("Latest crowdsec version (%s), using hub branch 'master'", csVersion)
return "master"
}
// if current version is greater than the latest we are in pre-release
if semver.Compare(csVersion, latest) == 1 {
log.Debugf("Your current crowdsec version seems to be a pre-release (%s), using hub branch 'master'", csVersion)
return "master"
}
if csVersion == "" {
log.Warning("Crowdsec version is not set, using hub branch 'master'")
return "master"
}
log.Warnf("A new CrowdSec release is available (%s). "+
"Your version is '%s'. Please update it to use new parsers/scenarios/collections.",
latest, csVersion)
return csVersion
}
// HubBranch sets the branch (in cscli config) and returns its value
// It can be "master", or the branch corresponding to the current crowdsec version, or the value overridden in config/flag
func HubBranch(cfg *csconfig.Config) string {
branch := chooseBranch(cfg)
cfg.Cscli.HubBranch = branch
return branch
}
func HubURLTemplate(cfg *csconfig.Config) string {
return cfg.Cscli.HubURLTemplate
}
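chooseBranch resolves the hub branch with a fixed precedence: an explicit config or flag override wins; "master" is used when the latest release cannot be determined, when the local version is current or ahead (pre-release), or when the version string is empty; only an outdated install falls back to the branch matching its own version. The standalone restatement below is illustrative only (function name, signature and sample values are assumptions, not part of the changeset).

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// hubBranchFor mirrors the decision order of chooseBranch above.
func hubBranchFor(override, local, latest string, latestKnown bool) string {
	switch {
	case override != "":
		return override // explicit setting from config.yaml or flag wins
	case !latestKnown, local == "", local == latest:
		return "master"
	case semver.Compare(local, latest) > 0:
		return "master" // pre-release builds track master
	default:
		return local // outdated install: follow the branch matching its version
	}
}

func main() {
	fmt.Println(hubBranchFor("", "v1.5.0", "v1.6.0", true))          // v1.5.0
	fmt.Println(hubBranchFor("", "v1.6.0", "v1.6.0", true))          // master
	fmt.Println(hubBranchFor("my-branch", "v1.5.0", "v1.6.0", true)) // my-branch
}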


@ -0,0 +1,100 @@
package require
import (
"errors"
"fmt"
"io"
"github.com/sirupsen/logrus"
"github.com/crowdsecurity/crowdsec/pkg/csconfig"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func LAPI(c *csconfig.Config) error {
if err := c.LoadAPIServer(true); err != nil {
return fmt.Errorf("failed to load Local API: %w", err)
}
if c.DisableAPI {
return errors.New("local API is disabled -- this command must be run on the local API machine")
}
return nil
}
func CAPI(c *csconfig.Config) error {
if c.API.Server.OnlineClient == nil {
return fmt.Errorf("no configuration for Central API (CAPI) in '%s'", *c.FilePath)
}
return nil
}
func PAPI(c *csconfig.Config) error {
if c.API.Server.OnlineClient.Credentials.PapiURL == "" {
return errors.New("no PAPI URL in configuration")
}
return nil
}
func CAPIRegistered(c *csconfig.Config) error {
if c.API.Server.OnlineClient.Credentials == nil {
return errors.New("the Central API (CAPI) must be configured with 'cscli capi register'")
}
return nil
}
func DB(c *csconfig.Config) error {
if err := c.LoadDBConfig(true); err != nil {
return fmt.Errorf("this command requires direct database access (must be run on the local API machine): %w", err)
}
return nil
}
func Notifications(c *csconfig.Config) error {
if c.ConfigPaths.NotificationDir == "" {
return errors.New("config_paths.notification_dir is not set in crowdsec config")
}
return nil
}
// RemoteHub returns the configuration required to download hub index and items: url, branch, etc.
func RemoteHub(c *csconfig.Config) *cwhub.RemoteHubCfg {
// set branch in config, and log if necessary
branch := HubBranch(c)
urlTemplate := HubURLTemplate(c)
remote := &cwhub.RemoteHubCfg{
Branch: branch,
URLTemplate: urlTemplate,
IndexPath: ".index.json",
}
return remote
}
// Hub initializes the hub. If a remote configuration is provided, it can be used to download the index and items.
// If no remote parameter is provided, the hub can only be used for local operations.
func Hub(c *csconfig.Config, remote *cwhub.RemoteHubCfg, logger *logrus.Logger) (*cwhub.Hub, error) {
local := c.Hub
if local == nil {
return nil, errors.New("you must configure cli before interacting with hub")
}
if logger == nil {
logger = logrus.New()
logger.SetOutput(io.Discard)
}
hub, err := cwhub.NewHub(local, remote, false, logger)
if err != nil {
return nil, fmt.Errorf("failed to read Hub index: %w. Run 'sudo cscli hub update' to download the index again", err)
}
return hub, nil
}
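Together these helpers replace the per-command LoadAPIServer/OnlineClient checks and the cwhub.SetHubBranch/GetHubIdx boilerplate visible in the removed parsers, postoverflows and scenarios files. The sketch below shows the two ways the rest of this changeset calls require.Hub, local-only versus remote-capable; the surrounding helper is illustrative, the calls themselves appear verbatim in the diff.

package main

import (
	log "github.com/sirupsen/logrus"

	"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
	"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)

// hubFor is a sketch: pick a hub handle depending on whether the command may
// need to download the index (as "cscli setup install-hub" does) or only
// reads local item state (as "cscli simulation enable" does).
func hubFor(cfg *csconfig.Config, needRemote bool) (*cwhub.Hub, error) {
	if needRemote {
		// require.RemoteHub resolves the branch and URL template from config.
		return require.Hub(cfg, require.RemoteHub(cfg), log.StandardLogger())
	}
	// nil remote: local operations only; nil logger: hub logs are discarded.
	return require.Hub(cfg, nil, nil)
}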


@ -1,198 +0,0 @@
package main
import (
"fmt"
"github.com/fatih/color"
"github.com/pkg/errors"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func NewScenariosCmd() *cobra.Command {
var cmdScenarios = &cobra.Command{
Use: "scenarios [action] [config]",
Short: "Install/Remove/Upgrade/Inspect scenario(s) from hub",
Example: `cscli scenarios list [-a]
cscli scenarios install crowdsecurity/ssh-bf
cscli scenarios inspect crowdsecurity/ssh-bf
cscli scenarios upgrade crowdsecurity/ssh-bf
cscli scenarios remove crowdsecurity/ssh-bf
`,
Args: cobra.MinimumNArgs(1),
Aliases: []string{"scenario"},
DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if err := csConfig.LoadHub(); err != nil {
log.Fatal(err)
}
if csConfig.Hub == nil {
return fmt.Errorf("you must configure cli before interacting with hub")
}
if err := cwhub.SetHubBranch(); err != nil {
return errors.Wrap(err, "while setting hub branch")
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
log.Info("Run 'sudo cscli hub update' to get the hub index")
log.Fatalf("Failed to get Hub index : %v", err)
}
return nil
},
PersistentPostRun: func(cmd *cobra.Command, args []string) {
if cmd.Name() == "inspect" || cmd.Name() == "list" {
return
}
log.Infof(ReloadMessage())
},
}
cmdScenarios.AddCommand(NewCmdScenariosInstall())
cmdScenarios.AddCommand(NewCmdScenariosRemove())
cmdScenarios.AddCommand(NewCmdScenariosUpgrade())
cmdScenarios.AddCommand(NewCmdScenariosInspect())
cmdScenarios.AddCommand(NewCmdScenariosList())
return cmdScenarios
}
func NewCmdScenariosInstall() *cobra.Command {
var ignoreError bool
var cmdScenariosInstall = &cobra.Command{
Use: "install [config]",
Short: "Install given scenario(s)",
Long: `Fetch and install given scenario(s) from hub`,
Example: `cscli scenarios install crowdsec/xxx crowdsec/xyz`,
Args: cobra.MinimumNArgs(1),
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compAllItems(cwhub.SCENARIOS, args, toComplete)
},
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
for _, name := range args {
t := cwhub.GetItem(cwhub.SCENARIOS, name)
if t == nil {
nearestItem, score := GetDistance(cwhub.SCENARIOS, name)
Suggest(cwhub.SCENARIOS, name, nearestItem.Name, score, ignoreError)
continue
}
if err := cwhub.InstallItem(csConfig, name, cwhub.SCENARIOS, forceAction, downloadOnly); err != nil {
if ignoreError {
log.Errorf("Error while installing '%s': %s", name, err)
} else {
log.Fatalf("Error while installing '%s': %s", name, err)
}
}
}
},
}
cmdScenariosInstall.PersistentFlags().BoolVarP(&downloadOnly, "download-only", "d", false, "Only download packages, don't enable")
cmdScenariosInstall.PersistentFlags().BoolVar(&forceAction, "force", false, "Force install : Overwrite tainted and outdated files")
cmdScenariosInstall.PersistentFlags().BoolVar(&ignoreError, "ignore", false, "Ignore errors when installing multiple scenarios")
return cmdScenariosInstall
}
func NewCmdScenariosRemove() *cobra.Command {
var cmdScenariosRemove = &cobra.Command{
Use: "remove [config]",
Short: "Remove given scenario(s)",
Long: `remove given scenario(s)`,
Example: `cscli scenarios remove crowdsec/xxx crowdsec/xyz`,
Aliases: []string{"delete"},
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.SCENARIOS, args, toComplete)
},
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
if all {
cwhub.RemoveMany(csConfig, cwhub.SCENARIOS, "", all, purge, forceAction)
return
}
if len(args) == 0 {
log.Fatalf("Specify at least one scenario to remove or '--all' flag.")
}
for _, name := range args {
cwhub.RemoveMany(csConfig, cwhub.SCENARIOS, name, all, purge, forceAction)
}
},
}
cmdScenariosRemove.PersistentFlags().BoolVar(&purge, "purge", false, "Delete source file too")
cmdScenariosRemove.PersistentFlags().BoolVar(&forceAction, "force", false, "Force remove : Remove tainted and outdated files")
cmdScenariosRemove.PersistentFlags().BoolVar(&all, "all", false, "Delete all the scenarios")
return cmdScenariosRemove
}
func NewCmdScenariosUpgrade() *cobra.Command {
var cmdScenariosUpgrade = &cobra.Command{
Use: "upgrade [config]",
Short: "Upgrade given scenario(s)",
Long: `Fetch and Upgrade given scenario(s) from hub`,
Example: `cscli scenarios upgrade crowdsec/xxx crowdsec/xyz`,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.SCENARIOS, args, toComplete)
},
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
if all {
cwhub.UpgradeConfig(csConfig, cwhub.SCENARIOS, "", forceAction)
} else {
if len(args) == 0 {
log.Fatalf("no target scenario to upgrade")
}
for _, name := range args {
cwhub.UpgradeConfig(csConfig, cwhub.SCENARIOS, name, forceAction)
}
}
},
}
cmdScenariosUpgrade.PersistentFlags().BoolVarP(&all, "all", "a", false, "Upgrade all the scenarios")
cmdScenariosUpgrade.PersistentFlags().BoolVar(&forceAction, "force", false, "Force upgrade : Overwrite tainted and outdated files")
return cmdScenariosUpgrade
}
func NewCmdScenariosInspect() *cobra.Command {
var cmdScenariosInspect = &cobra.Command{
Use: "inspect [config]",
Short: "Inspect given scenario",
Long: `Inspect given scenario`,
Example: `cscli scenarios inspect crowdsec/xxx`,
Args: cobra.MinimumNArgs(1),
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.SCENARIOS, args, toComplete)
},
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
InspectItem(args[0], cwhub.SCENARIOS)
},
}
cmdScenariosInspect.PersistentFlags().StringVarP(&prometheusURL, "url", "u", "", "Prometheus url")
return cmdScenariosInspect
}
func NewCmdScenariosList() *cobra.Command {
var cmdScenariosList = &cobra.Command{
Use: "list [config]",
Short: "List all scenario(s) or given one",
Long: `List all scenario(s) or given one`,
Example: `cscli scenarios list
cscli scenarios list crowdsecurity/xxx`,
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
ListItems(color.Output, []string{cwhub.SCENARIOS}, args, false, true, all)
},
}
cmdScenariosList.PersistentFlags().BoolVarP(&all, "all", "a", false, "List disabled items as well")
return cmdScenariosList
}

View file

@ -2,15 +2,17 @@ package main
import ( import (
"bytes" "bytes"
"errors"
"fmt" "fmt"
"os" "os"
"os/exec" "os/exec"
goccyyaml "github.com/goccy/go-yaml"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"gopkg.in/yaml.v3" "gopkg.in/yaml.v3"
goccyyaml "github.com/goccy/go-yaml"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/csconfig" "github.com/crowdsecurity/crowdsec/pkg/csconfig"
"github.com/crowdsecurity/crowdsec/pkg/setup" "github.com/crowdsecurity/crowdsec/pkg/setup"
) )
@ -112,6 +114,22 @@ func runSetupDetect(cmd *cobra.Command, args []string) error {
return err return err
} }
var detectReader *os.File
switch detectConfigFile {
case "-":
log.Tracef("Reading detection rules from stdin")
detectReader = os.Stdin
default:
log.Tracef("Reading detection rules: %s", detectConfigFile)
detectReader, err = os.Open(detectConfigFile)
if err != nil {
return err
}
}
listSupportedServices, err := flags.GetBool("list-supported-services") listSupportedServices, err := flags.GetBool("list-supported-services")
if err != nil { if err != nil {
return err return err
@ -156,6 +174,7 @@ func runSetupDetect(cmd *cobra.Command, args []string) error {
_, err := exec.LookPath("systemctl") _, err := exec.LookPath("systemctl")
if err != nil { if err != nil {
log.Debug("systemctl not available: snubbing systemd") log.Debug("systemctl not available: snubbing systemd")
snubSystemd = true snubSystemd = true
} }
} }
@ -167,11 +186,12 @@ func runSetupDetect(cmd *cobra.Command, args []string) error {
if forcedOSFamily == "" && forcedOSID != "" { if forcedOSFamily == "" && forcedOSID != "" {
log.Debug("force-os-id is set: force-os-family defaults to 'linux'") log.Debug("force-os-id is set: force-os-family defaults to 'linux'")
forcedOSFamily = "linux" forcedOSFamily = "linux"
} }
if listSupportedServices { if listSupportedServices {
supported, err := setup.ListSupported(detectConfigFile) supported, err := setup.ListSupported(detectReader)
if err != nil { if err != nil {
return err return err
} }
@ -195,7 +215,7 @@ func runSetupDetect(cmd *cobra.Command, args []string) error {
SnubSystemd: snubSystemd, SnubSystemd: snubSystemd,
} }
hubSetup, err := setup.Detect(detectConfigFile, opts) hubSetup, err := setup.Detect(detectReader, opts)
if err != nil { if err != nil {
return fmt.Errorf("detecting services: %w", err) return fmt.Errorf("detecting services: %w", err)
} }
@ -204,6 +224,7 @@ func runSetupDetect(cmd *cobra.Command, args []string) error {
if err != nil { if err != nil {
return err return err
} }
fmt.Println(setup) fmt.Println(setup)
return nil return nil
@ -289,7 +310,12 @@ func runSetupInstallHub(cmd *cobra.Command, args []string) error {
return fmt.Errorf("while reading file %s: %w", fromFile, err) return fmt.Errorf("while reading file %s: %w", fromFile, err)
} }
if err = setup.InstallHubItems(csConfig, input, dryRun); err != nil {
hub, err := require.Hub(csConfig, require.RemoteHub(csConfig), log.StandardLogger())
if err != nil {
return err
}
if err = setup.InstallHubItems(hub, input, dryRun); err != nil {
return err return err
} }
@ -298,6 +324,7 @@ func runSetupInstallHub(cmd *cobra.Command, args []string) error {
func runSetupValidate(cmd *cobra.Command, args []string) error { func runSetupValidate(cmd *cobra.Command, args []string) error {
fromFile := args[0] fromFile := args[0]
input, err := os.ReadFile(fromFile) input, err := os.ReadFile(fromFile)
if err != nil { if err != nil {
return fmt.Errorf("while reading stdin: %w", err) return fmt.Errorf("while reading stdin: %w", err)
@ -305,7 +332,7 @@ func runSetupValidate(cmd *cobra.Command, args []string) error {
if err = setup.Validate(input); err != nil { if err = setup.Validate(input); err != nil {
fmt.Printf("%v\n", err) fmt.Printf("%v\n", err)
return fmt.Errorf("invalid setup file") return errors.New("invalid setup file")
} }
return nil return nil

View file

@ -1,225 +1,141 @@
package main package main
import ( import (
"errors"
"fmt" "fmt"
"os" "os"
"slices"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"golang.org/x/exp/slices" "gopkg.in/yaml.v3"
"gopkg.in/yaml.v2"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/cwhub" "github.com/crowdsecurity/crowdsec/pkg/cwhub"
) )
func addToExclusion(name string) error { type cliSimulation struct {
csConfig.Cscli.SimulationConfig.Exclusions = append(csConfig.Cscli.SimulationConfig.Exclusions, name) cfg configGetter
return nil
} }
func removeFromExclusion(name string) error { func NewCLISimulation(cfg configGetter) *cliSimulation {
index := indexOf(name, csConfig.Cscli.SimulationConfig.Exclusions) return &cliSimulation{
cfg: cfg,
// Remove element from the slice }
csConfig.Cscli.SimulationConfig.Exclusions[index] = csConfig.Cscli.SimulationConfig.Exclusions[len(csConfig.Cscli.SimulationConfig.Exclusions)-1]
csConfig.Cscli.SimulationConfig.Exclusions[len(csConfig.Cscli.SimulationConfig.Exclusions)-1] = ""
csConfig.Cscli.SimulationConfig.Exclusions = csConfig.Cscli.SimulationConfig.Exclusions[:len(csConfig.Cscli.SimulationConfig.Exclusions)-1]
return nil
} }
func enableGlobalSimulation() error { func (cli *cliSimulation) NewCommand() *cobra.Command {
csConfig.Cscli.SimulationConfig.Simulation = new(bool) cmd := &cobra.Command{
*csConfig.Cscli.SimulationConfig.Simulation = true
csConfig.Cscli.SimulationConfig.Exclusions = []string{}
if err := dumpSimulationFile(); err != nil {
log.Fatalf("unable to dump simulation file: %s", err)
}
log.Printf("global simulation: enabled")
return nil
}
func dumpSimulationFile() error {
newConfigSim, err := yaml.Marshal(csConfig.Cscli.SimulationConfig)
if err != nil {
return fmt.Errorf("unable to marshal simulation configuration: %s", err)
}
err = os.WriteFile(csConfig.ConfigPaths.SimulationFilePath, newConfigSim, 0644)
if err != nil {
return fmt.Errorf("write simulation config in '%s' failed: %s", csConfig.ConfigPaths.SimulationFilePath, err)
}
log.Debugf("updated simulation file %s", csConfig.ConfigPaths.SimulationFilePath)
return nil
}
func disableGlobalSimulation() error {
csConfig.Cscli.SimulationConfig.Simulation = new(bool)
*csConfig.Cscli.SimulationConfig.Simulation = false
csConfig.Cscli.SimulationConfig.Exclusions = []string{}
newConfigSim, err := yaml.Marshal(csConfig.Cscli.SimulationConfig)
if err != nil {
return fmt.Errorf("unable to marshal new simulation configuration: %s", err)
}
err = os.WriteFile(csConfig.ConfigPaths.SimulationFilePath, newConfigSim, 0644)
if err != nil {
return fmt.Errorf("unable to write new simulation config in '%s' : %s", csConfig.ConfigPaths.SimulationFilePath, err)
}
log.Printf("global simulation: disabled")
return nil
}
func simulationStatus() error {
if csConfig.Cscli.SimulationConfig == nil {
log.Printf("global simulation: disabled (configuration file is missing)")
return nil
}
if *csConfig.Cscli.SimulationConfig.Simulation {
log.Println("global simulation: enabled")
if len(csConfig.Cscli.SimulationConfig.Exclusions) > 0 {
log.Println("Scenarios not in simulation mode :")
for _, scenario := range csConfig.Cscli.SimulationConfig.Exclusions {
log.Printf(" - %s", scenario)
}
}
} else {
log.Println("global simulation: disabled")
if len(csConfig.Cscli.SimulationConfig.Exclusions) > 0 {
log.Println("Scenarios in simulation mode :")
for _, scenario := range csConfig.Cscli.SimulationConfig.Exclusions {
log.Printf(" - %s", scenario)
}
}
}
return nil
}
func NewSimulationCmds() *cobra.Command {
var cmdSimulation = &cobra.Command{
Use: "simulation [command]", Use: "simulation [command]",
Short: "Manage simulation status of scenarios", Short: "Manage simulation status of scenarios",
Example: `cscli simulation status Example: `cscli simulation status
cscli simulation enable crowdsecurity/ssh-bf cscli simulation enable crowdsecurity/ssh-bf
cscli simulation disable crowdsecurity/ssh-bf`, cscli simulation disable crowdsecurity/ssh-bf`,
DisableAutoGenTag: true, DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error { PersistentPreRunE: func(_ *cobra.Command, _ []string) error {
if err := csConfig.LoadSimulation(); err != nil { if err := cli.cfg().LoadSimulation(); err != nil {
log.Fatal(err) return err
} }
if csConfig.Cscli == nil { if cli.cfg().Cscli.SimulationConfig == nil {
return fmt.Errorf("you must configure cli before using simulation") return errors.New("no simulation configured")
}
if csConfig.Cscli.SimulationConfig == nil {
return fmt.Errorf("no simulation configured")
} }
return nil return nil
}, },
PersistentPostRun: func(cmd *cobra.Command, args []string) { PersistentPostRun: func(cmd *cobra.Command, _ []string) {
if cmd.Name() != "status" { if cmd.Name() != "status" {
log.Infof(ReloadMessage()) log.Infof(ReloadMessage())
} }
}, },
} }
cmdSimulation.Flags().SortFlags = false cmd.Flags().SortFlags = false
cmdSimulation.PersistentFlags().SortFlags = false cmd.PersistentFlags().SortFlags = false
cmdSimulation.AddCommand(NewSimulationEnableCmd()) cmd.AddCommand(cli.NewEnableCmd())
cmdSimulation.AddCommand(NewSimulationDisableCmd()) cmd.AddCommand(cli.NewDisableCmd())
cmdSimulation.AddCommand(NewSimulationStatusCmd()) cmd.AddCommand(cli.NewStatusCmd())
return cmdSimulation return cmd
} }
func NewSimulationEnableCmd() *cobra.Command { func (cli *cliSimulation) NewEnableCmd() *cobra.Command {
var forceGlobalSimulation bool var forceGlobalSimulation bool
var cmdSimulationEnable = &cobra.Command{ cmd := &cobra.Command{
Use: "enable [scenario] [-global]", Use: "enable [scenario] [-global]",
Short: "Enable the simulation, globally or on specified scenarios", Short: "Enable the simulation, globally or on specified scenarios",
Example: `cscli simulation enable`, Example: `cscli simulation enable`,
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(cmd *cobra.Command, args []string) error {
if err := csConfig.LoadHub(); err != nil {
log.Fatal(err)
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
log.Info("Run 'sudo cscli hub update' to get the hub index")
log.Fatalf("Failed to get Hub index : %v", err)
hub, err := require.Hub(cli.cfg(), nil, nil)
if err != nil {
return err
} }
if len(args) > 0 { if len(args) > 0 {
for _, scenario := range args { for _, scenario := range args {
var item = cwhub.GetItem(cwhub.SCENARIOS, scenario) item := hub.GetItem(cwhub.SCENARIOS, scenario)
if item == nil { if item == nil {
log.Errorf("'%s' doesn't exist or is not a scenario", scenario) log.Errorf("'%s' doesn't exist or is not a scenario", scenario)
continue continue
} }
if !item.Installed { if !item.State.Installed {
log.Warningf("'%s' isn't enabled", scenario) log.Warningf("'%s' isn't enabled", scenario)
} }
isExcluded := slices.Contains(csConfig.Cscli.SimulationConfig.Exclusions, scenario) isExcluded := slices.Contains(cli.cfg().Cscli.SimulationConfig.Exclusions, scenario)
if *csConfig.Cscli.SimulationConfig.Simulation && !isExcluded { if *cli.cfg().Cscli.SimulationConfig.Simulation && !isExcluded {
log.Warning("global simulation is already enabled") log.Warning("global simulation is already enabled")
continue continue
} }
if !*csConfig.Cscli.SimulationConfig.Simulation && isExcluded { if !*cli.cfg().Cscli.SimulationConfig.Simulation && isExcluded {
log.Warningf("simulation for '%s' already enabled", scenario) log.Warningf("simulation for '%s' already enabled", scenario)
continue continue
} }
if *csConfig.Cscli.SimulationConfig.Simulation && isExcluded { if *cli.cfg().Cscli.SimulationConfig.Simulation && isExcluded {
if err := removeFromExclusion(scenario); err != nil { cli.removeFromExclusion(scenario)
log.Fatal(err)
}
log.Printf("simulation enabled for '%s'", scenario) log.Printf("simulation enabled for '%s'", scenario)
continue continue
} }
if err := addToExclusion(scenario); err != nil { cli.addToExclusion(scenario)
log.Fatal(err)
}
log.Printf("simulation mode for '%s' enabled", scenario) log.Printf("simulation mode for '%s' enabled", scenario)
} }
if err := dumpSimulationFile(); err != nil { if err := cli.dumpSimulationFile(); err != nil {
log.Fatalf("simulation enable: %s", err) return fmt.Errorf("simulation enable: %w", err)
} }
} else if forceGlobalSimulation { } else if forceGlobalSimulation {
if err := enableGlobalSimulation(); err != nil { if err := cli.enableGlobalSimulation(); err != nil {
log.Fatalf("unable to enable global simulation mode : %s", err) return fmt.Errorf("unable to enable global simulation mode: %w", err)
} }
} else { } else {
printHelp(cmd) printHelp(cmd)
} }
return nil
}, },
} }
cmdSimulationEnable.Flags().BoolVarP(&forceGlobalSimulation, "global", "g", false, "Enable global simulation (reverse mode)") cmd.Flags().BoolVarP(&forceGlobalSimulation, "global", "g", false, "Enable global simulation (reverse mode)")
return cmdSimulationEnable return cmd
} }
func NewSimulationDisableCmd() *cobra.Command { func (cli *cliSimulation) NewDisableCmd() *cobra.Command {
var forceGlobalSimulation bool var forceGlobalSimulation bool
var cmdSimulationDisable = &cobra.Command{ cmd := &cobra.Command{
Use: "disable [scenario]", Use: "disable [scenario]",
Short: "Disable the simulation mode. Disable only specified scenarios", Short: "Disable the simulation mode. Disable only specified scenarios",
Example: `cscli simulation disable`, Example: `cscli simulation disable`,
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(cmd *cobra.Command, args []string) error {
if len(args) > 0 { if len(args) > 0 {
for _, scenario := range args { for _, scenario := range args {
isExcluded := slices.Contains(csConfig.Cscli.SimulationConfig.Exclusions, scenario) isExcluded := slices.Contains(cli.cfg().Cscli.SimulationConfig.Exclusions, scenario)
if !*csConfig.Cscli.SimulationConfig.Simulation && !isExcluded { if !*cli.cfg().Cscli.SimulationConfig.Simulation && !isExcluded {
log.Warningf("%s isn't in simulation mode", scenario) log.Warningf("%s isn't in simulation mode", scenario)
continue continue
} }
if !*csConfig.Cscli.SimulationConfig.Simulation && isExcluded { if !*cli.cfg().Cscli.SimulationConfig.Simulation && isExcluded {
if err := removeFromExclusion(scenario); err != nil { cli.removeFromExclusion(scenario)
log.Fatal(err)
}
log.Printf("simulation mode for '%s' disabled", scenario) log.Printf("simulation mode for '%s' disabled", scenario)
continue continue
} }
@ -227,42 +143,140 @@ func NewSimulationDisableCmd() *cobra.Command {
log.Warningf("simulation mode is enabled but is already disable for '%s'", scenario) log.Warningf("simulation mode is enabled but is already disable for '%s'", scenario)
continue continue
} }
if err := addToExclusion(scenario); err != nil { cli.addToExclusion(scenario)
log.Fatal(err)
}
log.Printf("simulation mode for '%s' disabled", scenario) log.Printf("simulation mode for '%s' disabled", scenario)
} }
if err := dumpSimulationFile(); err != nil { if err := cli.dumpSimulationFile(); err != nil {
log.Fatalf("simulation disable: %s", err) return fmt.Errorf("simulation disable: %w", err)
} }
} else if forceGlobalSimulation { } else if forceGlobalSimulation {
if err := disableGlobalSimulation(); err != nil { if err := cli.disableGlobalSimulation(); err != nil {
log.Fatalf("unable to disable global simulation mode : %s", err) return fmt.Errorf("unable to disable global simulation mode: %w", err)
} }
} else { } else {
printHelp(cmd) printHelp(cmd)
} }
return nil
}, },
} }
cmdSimulationDisable.Flags().BoolVarP(&forceGlobalSimulation, "global", "g", false, "Disable global simulation (reverse mode)") cmd.Flags().BoolVarP(&forceGlobalSimulation, "global", "g", false, "Disable global simulation (reverse mode)")
return cmdSimulationDisable return cmd
} }
func NewSimulationStatusCmd() *cobra.Command { func (cli *cliSimulation) NewStatusCmd() *cobra.Command {
var cmdSimulationStatus = &cobra.Command{ cmd := &cobra.Command{
Use: "status", Use: "status",
Short: "Show simulation mode status", Short: "Show simulation mode status",
Example: `cscli simulation status`, Example: `cscli simulation status`,
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { Run: func(_ *cobra.Command, _ []string) {
if err := simulationStatus(); err != nil { cli.status()
log.Fatal(err)
}
}, },
PersistentPostRun: func(cmd *cobra.Command, args []string) { PersistentPostRun: func(cmd *cobra.Command, args []string) {
}, },
} }
return cmdSimulationStatus return cmd
}
func (cli *cliSimulation) addToExclusion(name string) {
cfg := cli.cfg()
cfg.Cscli.SimulationConfig.Exclusions = append(cfg.Cscli.SimulationConfig.Exclusions, name)
}
func (cli *cliSimulation) removeFromExclusion(name string) {
cfg := cli.cfg()
index := slices.Index(cfg.Cscli.SimulationConfig.Exclusions, name)
// Remove element from the slice
cfg.Cscli.SimulationConfig.Exclusions[index] = cfg.Cscli.SimulationConfig.Exclusions[len(cfg.Cscli.SimulationConfig.Exclusions)-1]
cfg.Cscli.SimulationConfig.Exclusions[len(cfg.Cscli.SimulationConfig.Exclusions)-1] = ""
cfg.Cscli.SimulationConfig.Exclusions = cfg.Cscli.SimulationConfig.Exclusions[:len(cfg.Cscli.SimulationConfig.Exclusions)-1]
}
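Note that the rewritten removeFromExclusion uses the "swap with the last element, then truncate" idiom: constant extra work, but the order of the exclusion list is not preserved, and it assumes the caller has already verified the name is present (slices.Index returning -1 would otherwise panic on the negative index). A minimal, self-contained sketch of the same idiom with the Go 1.21+ standard library (names here are illustrative, not taken from the patch):

package main

import (
	"fmt"
	"slices"
)

// removeUnordered drops the first occurrence of name from xs by swapping it
// with the last element and truncating. Order is not preserved.
func removeUnordered(xs []string, name string) []string {
	i := slices.Index(xs, name)
	if i == -1 {
		return xs // nothing to remove
	}
	xs[i] = xs[len(xs)-1]
	return xs[:len(xs)-1]
}

func main() {
	fmt.Println(removeUnordered([]string{"a", "b", "c"}, "b")) // [a c]
}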
func (cli *cliSimulation) enableGlobalSimulation() error {
cfg := cli.cfg()
cfg.Cscli.SimulationConfig.Simulation = new(bool)
*cfg.Cscli.SimulationConfig.Simulation = true
cfg.Cscli.SimulationConfig.Exclusions = []string{}
if err := cli.dumpSimulationFile(); err != nil {
return fmt.Errorf("unable to dump simulation file: %w", err)
}
log.Printf("global simulation: enabled")
return nil
}
func (cli *cliSimulation) dumpSimulationFile() error {
cfg := cli.cfg()
newConfigSim, err := yaml.Marshal(cfg.Cscli.SimulationConfig)
if err != nil {
return fmt.Errorf("unable to marshal simulation configuration: %w", err)
}
err = os.WriteFile(cfg.ConfigPaths.SimulationFilePath, newConfigSim, 0o644)
if err != nil {
return fmt.Errorf("write simulation config in '%s' failed: %w", cfg.ConfigPaths.SimulationFilePath, err)
}
log.Debugf("updated simulation file %s", cfg.ConfigPaths.SimulationFilePath)
return nil
}
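dumpSimulationFile simply re-serializes the in-memory SimulationConfig and overwrites the simulation file with 0o644 permissions. For reference, the written file is a small YAML document along these lines (the key names are an assumption based on the usual crowdsec simulation.yaml layout, not something visible in this hunk):

simulation: false
exclusions:
  - crowdsecurity/ssh-bf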
func (cli *cliSimulation) disableGlobalSimulation() error {
cfg := cli.cfg()
cfg.Cscli.SimulationConfig.Simulation = new(bool)
*cfg.Cscli.SimulationConfig.Simulation = false
cfg.Cscli.SimulationConfig.Exclusions = []string{}
newConfigSim, err := yaml.Marshal(cfg.Cscli.SimulationConfig)
if err != nil {
return fmt.Errorf("unable to marshal new simulation configuration: %w", err)
}
err = os.WriteFile(cfg.ConfigPaths.SimulationFilePath, newConfigSim, 0o644)
if err != nil {
return fmt.Errorf("unable to write new simulation config in '%s': %w", cfg.ConfigPaths.SimulationFilePath, err)
}
log.Printf("global simulation: disabled")
return nil
}
func (cli *cliSimulation) status() {
cfg := cli.cfg()
if cfg.Cscli.SimulationConfig == nil {
log.Printf("global simulation: disabled (configuration file is missing)")
return
}
if *cfg.Cscli.SimulationConfig.Simulation {
log.Println("global simulation: enabled")
if len(cfg.Cscli.SimulationConfig.Exclusions) > 0 {
log.Println("Scenarios not in simulation mode :")
for _, scenario := range cfg.Cscli.SimulationConfig.Exclusions {
log.Printf(" - %s", scenario)
}
}
} else {
log.Println("global simulation: disabled")
if len(cfg.Cscli.SimulationConfig.Exclusions) > 0 {
log.Println("Scenarios in simulation mode :")
for _, scenario := range cfg.Cscli.SimulationConfig.Exclusions {
log.Printf(" - %s", scenario)
}
}
}
} }


@ -4,6 +4,7 @@ import (
"archive/zip" "archive/zip"
"bytes" "bytes"
"context" "context"
"errors"
"fmt" "fmt"
"io" "io"
"net/http" "net/http"
@ -12,21 +13,23 @@ import (
"path/filepath" "path/filepath"
"regexp" "regexp"
"strings" "strings"
"time"
"github.com/blackfireio/osinfo" "github.com/blackfireio/osinfo"
"github.com/go-openapi/strfmt" "github.com/go-openapi/strfmt"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/crowdsecurity/go-cs-lib/pkg/version" "github.com/crowdsecurity/go-cs-lib/trace"
"github.com/crowdsecurity/go-cs-lib/version"
"github.com/crowdsecurity/crowdsec/cmd/crowdsec-cli/require"
"github.com/crowdsecurity/crowdsec/pkg/apiclient" "github.com/crowdsecurity/crowdsec/pkg/apiclient"
"github.com/crowdsecurity/crowdsec/pkg/cwhub" "github.com/crowdsecurity/crowdsec/pkg/cwhub"
"github.com/crowdsecurity/crowdsec/pkg/cwversion" "github.com/crowdsecurity/crowdsec/pkg/cwversion"
"github.com/crowdsecurity/crowdsec/pkg/database" "github.com/crowdsecurity/crowdsec/pkg/database"
"github.com/crowdsecurity/crowdsec/pkg/fflag" "github.com/crowdsecurity/crowdsec/pkg/fflag"
"github.com/crowdsecurity/crowdsec/pkg/models" "github.com/crowdsecurity/crowdsec/pkg/models"
"github.com/crowdsecurity/crowdsec/pkg/types"
) )
const ( const (
@ -37,6 +40,7 @@ const (
SUPPORT_OS_INFO_PATH = "osinfo.txt" SUPPORT_OS_INFO_PATH = "osinfo.txt"
SUPPORT_PARSERS_PATH = "hub/parsers.txt" SUPPORT_PARSERS_PATH = "hub/parsers.txt"
SUPPORT_SCENARIOS_PATH = "hub/scenarios.txt" SUPPORT_SCENARIOS_PATH = "hub/scenarios.txt"
SUPPORT_CONTEXTS_PATH = "hub/scenarios.txt"
SUPPORT_COLLECTIONS_PATH = "hub/collections.txt" SUPPORT_COLLECTIONS_PATH = "hub/collections.txt"
SUPPORT_POSTOVERFLOWS_PATH = "hub/postoverflows.txt" SUPPORT_POSTOVERFLOWS_PATH = "hub/postoverflows.txt"
SUPPORT_BOUNCERS_PATH = "lapi/bouncers.txt" SUPPORT_BOUNCERS_PATH = "lapi/bouncers.txt"
@ -46,43 +50,54 @@ const (
SUPPORT_CAPI_STATUS_PATH = "capi_status.txt" SUPPORT_CAPI_STATUS_PATH = "capi_status.txt"
SUPPORT_ACQUISITION_CONFIG_BASE_PATH = "config/acquis/" SUPPORT_ACQUISITION_CONFIG_BASE_PATH = "config/acquis/"
SUPPORT_CROWDSEC_PROFILE_PATH = "config/profiles.yaml" SUPPORT_CROWDSEC_PROFILE_PATH = "config/profiles.yaml"
SUPPORT_CRASH_PATH = "crash/"
) )
// from https://github.com/acarl005/stripansi
var reStripAnsi = regexp.MustCompile("[\u001B\u009B][[\\]()#;?]*(?:(?:(?:[a-zA-Z\\d]*(?:;[a-zA-Z\\d]*)*)?\u0007)|(?:(?:\\d{1,4}(?:;\\d{0,4})*)?[\\dA-PRZcf-ntqry=><~]))")
func stripAnsiString(str string) string {
// the byte version doesn't strip correctly
return reStripAnsi.ReplaceAllString(str, "")
}
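The stripAnsiString helper (regex borrowed from github.com/acarl005/stripansi) is applied to every entry before it is written to the zip, so colorized table output lands in the support archive as plain text. A quick usage sketch, assuming the reStripAnsi/stripAnsiString declarations above are compiled into the same package (the sample string is illustrative):

colored := "plain \x1b[1;31mred\x1b[0m text"
fmt.Println(stripAnsiString(colored)) // prints: plain red text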
func collectMetrics() ([]byte, []byte, error) { func collectMetrics() ([]byte, []byte, error) {
log.Info("Collecting prometheus metrics") log.Info("Collecting prometheus metrics")
err := csConfig.LoadPrometheus()
if err != nil {
return nil, nil, err
}
if csConfig.Cscli.PrometheusUrl == "" { if csConfig.Cscli.PrometheusUrl == "" {
log.Warn("No Prometheus URL configured, metrics will not be collected") log.Warn("No Prometheus URL configured, metrics will not be collected")
return nil, nil, fmt.Errorf("prometheus_uri is not set") return nil, nil, errors.New("prometheus_uri is not set")
} }
humanMetrics := bytes.NewBuffer(nil) humanMetrics := bytes.NewBuffer(nil)
err = FormatPrometheusMetrics(humanMetrics, csConfig.Cscli.PrometheusUrl+"/metrics", "human")
if err != nil { ms := NewMetricStore()
return nil, nil, fmt.Errorf("could not fetch promtheus metrics: %s", err)
if err := ms.Fetch(csConfig.Cscli.PrometheusUrl); err != nil {
return nil, nil, fmt.Errorf("could not fetch prometheus metrics: %w", err)
} }
req, err := http.NewRequest(http.MethodGet, csConfig.Cscli.PrometheusUrl+"/metrics", nil) if err := ms.Format(humanMetrics, nil, "human", false); err != nil {
if err != nil { return nil, nil, err
return nil, nil, fmt.Errorf("could not create requests to prometheus endpoint: %s", err)
} }
req, err := http.NewRequest(http.MethodGet, csConfig.Cscli.PrometheusUrl, nil)
if err != nil {
return nil, nil, fmt.Errorf("could not create requests to prometheus endpoint: %w", err)
}
client := &http.Client{} client := &http.Client{}
resp, err := client.Do(req)
resp, err := client.Do(req)
if err != nil { if err != nil {
return nil, nil, fmt.Errorf("could not get metrics from prometheus endpoint: %s", err) return nil, nil, fmt.Errorf("could not get metrics from prometheus endpoint: %w", err)
} }
defer resp.Body.Close() defer resp.Body.Close()
body, err := io.ReadAll(resp.Body) body, err := io.ReadAll(resp.Body)
if err != nil { if err != nil {
return nil, nil, fmt.Errorf("could not read metrics from prometheus endpoint: %s", err) return nil, nil, fmt.Errorf("could not read metrics from prometheus endpoint: %w", err)
} }
return humanMetrics.Bytes(), body, nil return humanMetrics.Bytes(), body, nil
@ -95,89 +110,96 @@ func collectVersion() []byte {
func collectFeatures() []byte { func collectFeatures() []byte {
log.Info("Collecting feature flags") log.Info("Collecting feature flags")
enabledFeatures := fflag.Crowdsec.GetEnabledFeatures() enabledFeatures := fflag.Crowdsec.GetEnabledFeatures()
w := bytes.NewBuffer(nil) w := bytes.NewBuffer(nil)
for _, k := range enabledFeatures { for _, k := range enabledFeatures {
fmt.Fprintf(w, "%s\n", k) fmt.Fprintf(w, "%s\n", k)
} }
return w.Bytes() return w.Bytes()
} }
func collectOSInfo() ([]byte, error) { func collectOSInfo() ([]byte, error) {
log.Info("Collecting OS info") log.Info("Collecting OS info")
info, err := osinfo.GetOSInfo()
info, err := osinfo.GetOSInfo()
if err != nil { if err != nil {
return nil, err return nil, err
} }
w := bytes.NewBuffer(nil) w := bytes.NewBuffer(nil)
w.WriteString(fmt.Sprintf("Architecture: %s\n", info.Architecture)) fmt.Fprintf(w, "Architecture: %s\n", info.Architecture)
w.WriteString(fmt.Sprintf("Family: %s\n", info.Family)) fmt.Fprintf(w, "Family: %s\n", info.Family)
w.WriteString(fmt.Sprintf("ID: %s\n", info.ID)) fmt.Fprintf(w, "ID: %s\n", info.ID)
w.WriteString(fmt.Sprintf("Name: %s\n", info.Name)) fmt.Fprintf(w, "Name: %s\n", info.Name)
w.WriteString(fmt.Sprintf("Codename: %s\n", info.Codename)) fmt.Fprintf(w, "Codename: %s\n", info.Codename)
w.WriteString(fmt.Sprintf("Version: %s\n", info.Version)) fmt.Fprintf(w, "Version: %s\n", info.Version)
w.WriteString(fmt.Sprintf("Build: %s\n", info.Build)) fmt.Fprintf(w, "Build: %s\n", info.Build)
return w.Bytes(), nil return w.Bytes(), nil
} }
func initHub() error { func collectHubItems(hub *cwhub.Hub, itemType string) []byte {
if err := csConfig.LoadHub(); err != nil { var err error
return fmt.Errorf("cannot load hub: %s", err)
}
if csConfig.Hub == nil {
return fmt.Errorf("hub not configured")
}
if err := cwhub.SetHubBranch(); err != nil {
return fmt.Errorf("cannot set hub branch: %s", err)
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
return fmt.Errorf("no hub index found: %s", err)
}
return nil
}
func collectHubItems(itemType string) []byte {
out := bytes.NewBuffer(nil) out := bytes.NewBuffer(nil)
log.Infof("Collecting %s list", itemType) log.Infof("Collecting %s list", itemType)
ListItems(out, []string{itemType}, []string{}, false, true, all)
items := make(map[string][]*cwhub.Item)
if items[itemType], err = selectItems(hub, itemType, nil, true); err != nil {
log.Warnf("could not collect %s list: %s", itemType, err)
}
if err := listItems(out, []string{itemType}, items, false); err != nil {
log.Warnf("could not collect %s list: %s", itemType, err)
}
return out.Bytes() return out.Bytes()
} }
func collectBouncers(dbClient *database.Client) ([]byte, error) { func collectBouncers(dbClient *database.Client) ([]byte, error) {
out := bytes.NewBuffer(nil) out := bytes.NewBuffer(nil)
err := getBouncers(out, dbClient)
bouncers, err := dbClient.ListBouncers()
if err != nil { if err != nil {
return nil, err return nil, fmt.Errorf("unable to list bouncers: %w", err)
} }
getBouncersTable(out, bouncers)
return out.Bytes(), nil return out.Bytes(), nil
} }
func collectAgents(dbClient *database.Client) ([]byte, error) { func collectAgents(dbClient *database.Client) ([]byte, error) {
out := bytes.NewBuffer(nil) out := bytes.NewBuffer(nil)
err := getAgents(out, dbClient)
machines, err := dbClient.ListMachines()
if err != nil { if err != nil {
return nil, err return nil, fmt.Errorf("unable to list machines: %w", err)
} }
getAgentsTable(out, machines)
return out.Bytes(), nil return out.Bytes(), nil
} }
func collectAPIStatus(login string, password string, endpoint string, prefix string) []byte { func collectAPIStatus(login string, password string, endpoint string, prefix string, hub *cwhub.Hub) []byte {
if csConfig.API.Client == nil || csConfig.API.Client.Credentials == nil { if csConfig.API.Client == nil || csConfig.API.Client.Credentials == nil {
return []byte("No agent credentials found, are we LAPI ?") return []byte("No agent credentials found, are we LAPI ?")
} }
pwd := strfmt.Password(password)
apiurl, err := url.Parse(endpoint)
pwd := strfmt.Password(password)
apiurl, err := url.Parse(endpoint)
if err != nil { if err != nil {
return []byte(fmt.Sprintf("cannot parse API URL: %s", err)) return []byte(fmt.Sprintf("cannot parse API URL: %s", err))
} }
scenarios, err := cwhub.GetInstalledScenariosAsString()
scenarios, err := hub.GetInstalledNamesByType(cwhub.SCENARIOS)
if err != nil { if err != nil {
return []byte(fmt.Sprintf("could not collect scenarios: %s", err)) return []byte(fmt.Sprintf("could not collect scenarios: %s", err))
} }
@ -189,6 +211,7 @@ func collectAPIStatus(login string, password string, endpoint string, prefix str
if err != nil { if err != nil {
return []byte(fmt.Sprintf("could not init client: %s", err)) return []byte(fmt.Sprintf("could not init client: %s", err))
} }
t := models.WatcherAuthRequest{ t := models.WatcherAuthRequest{
MachineID: &login, MachineID: &login,
Password: &pwd, Password: &pwd,
@ -205,6 +228,7 @@ func collectAPIStatus(login string, password string, endpoint string, prefix str
func collectCrowdsecConfig() []byte { func collectCrowdsecConfig() []byte {
log.Info("Collecting crowdsec config") log.Info("Collecting crowdsec config")
config, err := os.ReadFile(*csConfig.FilePath) config, err := os.ReadFile(*csConfig.FilePath)
if err != nil { if err != nil {
return []byte(fmt.Sprintf("could not read config file: %s", err)) return []byte(fmt.Sprintf("could not read config file: %s", err))
@ -217,15 +241,18 @@ func collectCrowdsecConfig() []byte {
func collectCrowdsecProfile() []byte { func collectCrowdsecProfile() []byte {
log.Info("Collecting crowdsec profile") log.Info("Collecting crowdsec profile")
config, err := os.ReadFile(csConfig.API.Server.ProfilesPath) config, err := os.ReadFile(csConfig.API.Server.ProfilesPath)
if err != nil { if err != nil {
return []byte(fmt.Sprintf("could not read profile file: %s", err)) return []byte(fmt.Sprintf("could not read profile file: %s", err))
} }
return config return config
} }
func collectAcquisitionConfig() map[string][]byte { func collectAcquisitionConfig() map[string][]byte {
log.Info("Collecting acquisition config") log.Info("Collecting acquisition config")
ret := make(map[string][]byte) ret := make(map[string][]byte)
for _, filename := range csConfig.Crowdsec.AcquisitionFiles { for _, filename := range csConfig.Crowdsec.AcquisitionFiles {
@ -240,8 +267,19 @@ func collectAcquisitionConfig() map[string][]byte {
return ret return ret
} }
func NewSupportCmd() *cobra.Command { func collectCrash() ([]string, error) {
var cmdSupport = &cobra.Command{ log.Info("Collecting crash dumps")
return trace.List()
}
type cliSupport struct{}
func NewCLISupport() *cliSupport {
return &cliSupport{}
}
func (cli cliSupport) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "support [action]", Use: "support [action]",
Short: "Provide commands to help during support", Short: "Provide commands to help during support",
Args: cobra.MinimumNArgs(1), Args: cobra.MinimumNArgs(1),
@ -251,9 +289,15 @@ func NewSupportCmd() *cobra.Command {
}, },
} }
cmd.AddCommand(cli.NewDumpCmd())
return cmd
}
func (cli cliSupport) NewDumpCmd() *cobra.Command {
var outFile string var outFile string
cmdDump := &cobra.Command{ cmd := &cobra.Command{
Use: "dump", Use: "dump",
Short: "Dump all your configuration to a zip file for easier support", Short: "Dump all your configuration to a zip file for easier support",
Long: `Dump the following informations: Long: `Dump the following informations:
@ -263,6 +307,7 @@ func NewSupportCmd() *cobra.Command {
- Installed parsers list - Installed parsers list
- Installed scenarios list - Installed scenarios list
- Installed postoverflows list - Installed postoverflows list
- Installed context list
- Bouncers list - Bouncers list
- Machines list - Machines list
- CAPI status - CAPI status
@ -274,7 +319,7 @@ cscli support dump -f /tmp/crowdsec-support.zip
`, `,
Args: cobra.NoArgs, Args: cobra.NoArgs,
DisableAutoGenTag: true, DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) { RunE: func(_ *cobra.Command, _ []string) error {
var err error var err error
var skipHub, skipDB, skipCAPI, skipLAPI, skipAgent bool var skipHub, skipDB, skipCAPI, skipLAPI, skipAgent bool
infos := map[string][]byte{ infos := map[string][]byte{
@ -294,24 +339,25 @@ cscli support dump -f /tmp/crowdsec-support.zip
infos[SUPPORT_AGENTS_PATH] = []byte(err.Error()) infos[SUPPORT_AGENTS_PATH] = []byte(err.Error())
} }
if err := csConfig.LoadAPIServer(); err != nil { if err = csConfig.LoadAPIServer(true); err != nil {
log.Warnf("could not load LAPI, skipping CAPI check") log.Warnf("could not load LAPI, skipping CAPI check")
skipLAPI = true skipLAPI = true
infos[SUPPORT_CAPI_STATUS_PATH] = []byte(err.Error()) infos[SUPPORT_CAPI_STATUS_PATH] = []byte(err.Error())
} }
if err := csConfig.LoadCrowdsec(); err != nil { if err = csConfig.LoadCrowdsec(); err != nil {
log.Warnf("could not load agent config, skipping crowdsec config check") log.Warnf("could not load agent config, skipping crowdsec config check")
skipAgent = true skipAgent = true
} }
err = initHub() hub, err := require.Hub(csConfig, nil, nil)
if err != nil { if err != nil {
log.Warn("Could not init hub, running on LAPI ? Hub related information will not be collected") log.Warn("Could not init hub, running on LAPI ? Hub related information will not be collected")
skipHub = true skipHub = true
infos[SUPPORT_PARSERS_PATH] = []byte(err.Error()) infos[SUPPORT_PARSERS_PATH] = []byte(err.Error())
infos[SUPPORT_SCENARIOS_PATH] = []byte(err.Error()) infos[SUPPORT_SCENARIOS_PATH] = []byte(err.Error())
infos[SUPPORT_POSTOVERFLOWS_PATH] = []byte(err.Error()) infos[SUPPORT_POSTOVERFLOWS_PATH] = []byte(err.Error())
infos[SUPPORT_CONTEXTS_PATH] = []byte(err.Error())
infos[SUPPORT_COLLECTIONS_PATH] = []byte(err.Error()) infos[SUPPORT_COLLECTIONS_PATH] = []byte(err.Error())
} }
@ -344,10 +390,11 @@ cscli support dump -f /tmp/crowdsec-support.zip
infos[SUPPORT_CROWDSEC_CONFIG_PATH] = collectCrowdsecConfig() infos[SUPPORT_CROWDSEC_CONFIG_PATH] = collectCrowdsecConfig()
if !skipHub { if !skipHub {
infos[SUPPORT_PARSERS_PATH] = collectHubItems(cwhub.PARSERS) infos[SUPPORT_PARSERS_PATH] = collectHubItems(hub, cwhub.PARSERS)
infos[SUPPORT_SCENARIOS_PATH] = collectHubItems(cwhub.SCENARIOS) infos[SUPPORT_SCENARIOS_PATH] = collectHubItems(hub, cwhub.SCENARIOS)
infos[SUPPORT_POSTOVERFLOWS_PATH] = collectHubItems(cwhub.PARSERS_OVFLW) infos[SUPPORT_POSTOVERFLOWS_PATH] = collectHubItems(hub, cwhub.POSTOVERFLOWS)
infos[SUPPORT_COLLECTIONS_PATH] = collectHubItems(cwhub.COLLECTIONS) infos[SUPPORT_CONTEXTS_PATH] = collectHubItems(hub, cwhub.POSTOVERFLOWS)
infos[SUPPORT_COLLECTIONS_PATH] = collectHubItems(hub, cwhub.COLLECTIONS)
} }
if !skipDB { if !skipDB {
@ -369,7 +416,8 @@ cscli support dump -f /tmp/crowdsec-support.zip
infos[SUPPORT_CAPI_STATUS_PATH] = collectAPIStatus(csConfig.API.Server.OnlineClient.Credentials.Login, infos[SUPPORT_CAPI_STATUS_PATH] = collectAPIStatus(csConfig.API.Server.OnlineClient.Credentials.Login,
csConfig.API.Server.OnlineClient.Credentials.Password, csConfig.API.Server.OnlineClient.Credentials.Password,
csConfig.API.Server.OnlineClient.Credentials.URL, csConfig.API.Server.OnlineClient.Credentials.URL,
CAPIURLPrefix) CAPIURLPrefix,
hub)
} }
if !skipLAPI { if !skipLAPI {
@ -377,12 +425,12 @@ cscli support dump -f /tmp/crowdsec-support.zip
infos[SUPPORT_LAPI_STATUS_PATH] = collectAPIStatus(csConfig.API.Client.Credentials.Login, infos[SUPPORT_LAPI_STATUS_PATH] = collectAPIStatus(csConfig.API.Client.Credentials.Login,
csConfig.API.Client.Credentials.Password, csConfig.API.Client.Credentials.Password,
csConfig.API.Client.Credentials.URL, csConfig.API.Client.Credentials.URL,
LAPIURLPrefix) LAPIURLPrefix,
hub)
infos[SUPPORT_CROWDSEC_PROFILE_PATH] = collectCrowdsecProfile() infos[SUPPORT_CROWDSEC_PROFILE_PATH] = collectCrowdsecProfile()
} }
if !skipAgent { if !skipAgent {
acquis := collectAcquisitionConfig() acquis := collectAcquisitionConfig()
for filename, content := range acquis { for filename, content := range acquis {
@ -391,33 +439,57 @@ cscli support dump -f /tmp/crowdsec-support.zip
} }
} }
crash, err := collectCrash()
if err != nil {
log.Errorf("could not collect crash dumps: %s", err)
}
for _, filename := range crash {
content, err := os.ReadFile(filename)
if err != nil {
log.Errorf("could not read crash dump %s: %s", filename, err)
}
infos[SUPPORT_CRASH_PATH+filepath.Base(filename)] = content
}
w := bytes.NewBuffer(nil) w := bytes.NewBuffer(nil)
zipWriter := zip.NewWriter(w) zipWriter := zip.NewWriter(w)
for filename, data := range infos { for filename, data := range infos {
fw, err := zipWriter.Create(filename) header := &zip.FileHeader{
Name: filename,
Method: zip.Deflate,
// TODO: retain mtime where possible (esp. trace)
Modified: time.Now(),
}
fw, err := zipWriter.CreateHeader(header)
if err != nil { if err != nil {
log.Errorf("Could not add zip entry for %s: %s", filename, err) log.Errorf("Could not add zip entry for %s: %s", filename, err)
continue continue
} }
fw.Write([]byte(types.StripAnsiString(string(data)))) fw.Write([]byte(stripAnsiString(string(data))))
} }
err = zipWriter.Close() err = zipWriter.Close()
if err != nil { if err != nil {
log.Fatalf("could not finalize zip file: %s", err) return fmt.Errorf("could not finalize zip file: %s", err)
} }
err = os.WriteFile(outFile, w.Bytes(), 0600) if outFile == "-" {
_, err = os.Stdout.Write(w.Bytes())
return err
}
err = os.WriteFile(outFile, w.Bytes(), 0o600)
if err != nil { if err != nil {
log.Fatalf("could not write zip file to %s: %s", outFile, err) return fmt.Errorf("could not write zip file to %s: %s", outFile, err)
} }
log.Infof("Written zip file to %s", outFile) log.Infof("Written zip file to %s", outFile)
return nil
}, },
} }
cmdDump.Flags().StringVarP(&outFile, "outFile", "f", "", "File to dump the information to")
cmdSupport.AddCommand(cmdDump)
return cmdSupport cmd.Flags().StringVarP(&outFile, "outFile", "f", "", "File to dump the information to")
return cmd
} }
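Two behavioural changes stand out in the dump command: passing "-" as the output file streams the archive to stdout (in line with the new RunE error handling), and entries are now created with zip.CreateHeader so each one carries an explicit compression method and modification time. A minimal, self-contained sketch of that archive/zip pattern (file name and contents are placeholders):

package main

import (
	"archive/zip"
	"bytes"
	"log"
	"os"
	"time"
)

func main() {
	buf := bytes.NewBuffer(nil)
	zw := zip.NewWriter(buf)

	// CreateHeader (rather than Create) lets us choose the compression
	// method and stamp a modification time on each entry.
	fw, err := zw.CreateHeader(&zip.FileHeader{
		Name:     "version.txt",
		Method:   zip.Deflate,
		Modified: time.Now(),
	})
	if err != nil {
		log.Fatal(err)
	}
	fw.Write([]byte("example contents\n"))

	if err := zw.Close(); err != nil {
		log.Fatal(err)
	}

	// "-" as the target name means "write the archive to stdout".
	os.Stdout.Write(buf.Bytes())
}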


@ -1,279 +1,23 @@
package main package main
import ( import (
"encoding/csv"
"encoding/json"
"fmt" "fmt"
"io"
"math"
"net" "net"
"net/http"
"os"
"strconv"
"strings" "strings"
"time"
"github.com/fatih/color"
dto "github.com/prometheus/client_model/go"
"github.com/prometheus/prom2json"
log "github.com/sirupsen/logrus" log "github.com/sirupsen/logrus"
"github.com/spf13/cobra" "github.com/spf13/cobra"
"github.com/texttheater/golang-levenshtein/levenshtein"
"golang.org/x/exp/slices"
"gopkg.in/yaml.v2"
"github.com/crowdsecurity/go-cs-lib/pkg/trace"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
"github.com/crowdsecurity/crowdsec/pkg/database"
"github.com/crowdsecurity/crowdsec/pkg/types" "github.com/crowdsecurity/crowdsec/pkg/types"
) )
const MaxDistance = 7
func printHelp(cmd *cobra.Command) { func printHelp(cmd *cobra.Command) {
err := cmd.Help() if err := cmd.Help(); err != nil {
if err != nil {
log.Fatalf("unable to print help(): %s", err) log.Fatalf("unable to print help(): %s", err)
} }
} }
func indexOf(s string, slice []string) int {
for i, elem := range slice {
if s == elem {
return i
}
}
return -1
}
func LoadHub() error {
if err := csConfig.LoadHub(); err != nil {
log.Fatal(err)
}
if csConfig.Hub == nil {
return fmt.Errorf("unable to load hub")
}
if err := cwhub.SetHubBranch(); err != nil {
log.Warningf("unable to set hub branch (%s), default to master", err)
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
return fmt.Errorf("Failed to get Hub index : '%w'. Run 'sudo cscli hub update' to get the hub index", err)
}
return nil
}
func Suggest(itemType string, baseItem string, suggestItem string, score int, ignoreErr bool) {
errMsg := ""
if score < MaxDistance {
errMsg = fmt.Sprintf("unable to find %s '%s', did you mean %s ?", itemType, baseItem, suggestItem)
} else {
errMsg = fmt.Sprintf("unable to find %s '%s'", itemType, baseItem)
}
if ignoreErr {
log.Error(errMsg)
} else {
log.Fatalf(errMsg)
}
}
func GetDistance(itemType string, itemName string) (*cwhub.Item, int) {
allItems := make([]string, 0)
nearestScore := 100
nearestItem := &cwhub.Item{}
hubItems := cwhub.GetHubStatusForItemType(itemType, "", true)
for _, item := range hubItems {
allItems = append(allItems, item.Name)
}
for _, s := range allItems {
d := levenshtein.DistanceForStrings([]rune(itemName), []rune(s), levenshtein.DefaultOptions)
if d < nearestScore {
nearestScore = d
nearestItem = cwhub.GetItem(itemType, s)
}
}
return nearestItem, nearestScore
}
func compAllItems(itemType string, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
if err := LoadHub(); err != nil {
return nil, cobra.ShellCompDirectiveDefault
}
comp := make([]string, 0)
hubItems := cwhub.GetHubStatusForItemType(itemType, "", true)
for _, item := range hubItems {
if !slices.Contains(args, item.Name) && strings.Contains(item.Name, toComplete) {
comp = append(comp, item.Name)
}
}
cobra.CompDebugln(fmt.Sprintf("%s: %+v", itemType, comp), true)
return comp, cobra.ShellCompDirectiveNoFileComp
}
func compInstalledItems(itemType string, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
if err := LoadHub(); err != nil {
return nil, cobra.ShellCompDirectiveDefault
}
var items []string
var err error
switch itemType {
case cwhub.PARSERS:
items, err = cwhub.GetInstalledParsersAsString()
case cwhub.SCENARIOS:
items, err = cwhub.GetInstalledScenariosAsString()
case cwhub.PARSERS_OVFLW:
items, err = cwhub.GetInstalledPostOverflowsAsString()
case cwhub.COLLECTIONS:
items, err = cwhub.GetInstalledCollectionsAsString()
case cwhub.WAF_RULES:
items, err = cwhub.GetInstalledWafRulesAsString()
default:
return nil, cobra.ShellCompDirectiveDefault
}
if err != nil {
cobra.CompDebugln(fmt.Sprintf("list installed %s err: %s", itemType, err), true)
return nil, cobra.ShellCompDirectiveDefault
}
comp := make([]string, 0)
if toComplete != "" {
for _, item := range items {
if strings.Contains(item, toComplete) {
comp = append(comp, item)
}
}
} else {
comp = items
}
cobra.CompDebugln(fmt.Sprintf("%s: %+v", itemType, comp), true)
return comp, cobra.ShellCompDirectiveNoFileComp
}
func ListItems(out io.Writer, itemTypes []string, args []string, showType bool, showHeader bool, all bool) {
var hubStatusByItemType = make(map[string][]cwhub.ItemHubStatus)
for _, itemType := range itemTypes {
itemName := ""
if len(args) == 1 {
itemName = args[0]
}
hubStatusByItemType[itemType] = cwhub.GetHubStatusForItemType(itemType, itemName, all)
}
if csConfig.Cscli.Output == "human" {
for _, itemType := range itemTypes {
var statuses []cwhub.ItemHubStatus
var ok bool
if statuses, ok = hubStatusByItemType[itemType]; !ok {
log.Errorf("unknown item type: %s", itemType)
continue
}
listHubItemTable(out, "\n"+strings.ToUpper(itemType), statuses)
}
} else if csConfig.Cscli.Output == "json" {
x, err := json.MarshalIndent(hubStatusByItemType, "", " ")
if err != nil {
log.Fatalf("failed to unmarshal")
}
out.Write(x)
} else if csConfig.Cscli.Output == "raw" {
csvwriter := csv.NewWriter(out)
if showHeader {
header := []string{"name", "status", "version", "description"}
if showType {
header = append(header, "type")
}
err := csvwriter.Write(header)
if err != nil {
log.Fatalf("failed to write header: %s", err)
}
}
for _, itemType := range itemTypes {
var statuses []cwhub.ItemHubStatus
var ok bool
if statuses, ok = hubStatusByItemType[itemType]; !ok {
log.Errorf("unknown item type: %s", itemType)
continue
}
for _, status := range statuses {
if status.LocalVersion == "" {
status.LocalVersion = "n/a"
}
row := []string{
status.Name,
status.Status,
status.LocalVersion,
status.Description,
}
if showType {
row = append(row, itemType)
}
err := csvwriter.Write(row)
if err != nil {
log.Fatalf("failed to write raw output : %s", err)
}
}
}
csvwriter.Flush()
}
}
func InspectItem(name string, objecitemType string) {
hubItem := cwhub.GetItem(objecitemType, name)
if hubItem == nil {
log.Fatalf("unable to retrieve item.")
}
var b []byte
var err error
switch csConfig.Cscli.Output {
case "human", "raw":
b, err = yaml.Marshal(*hubItem)
if err != nil {
log.Fatalf("unable to marshal item : %s", err)
}
case "json":
b, err = json.MarshalIndent(*hubItem, "", " ")
if err != nil {
log.Fatalf("unable to marshal item : %s", err)
}
}
fmt.Printf("%s", string(b))
if csConfig.Cscli.Output == "json" || csConfig.Cscli.Output == "raw" {
return
}
if prometheusURL == "" {
//This is technically wrong to do this, as the prometheus section contains a listen address, not an URL to query prometheus
//But for ease of use, we will use the listen address as the prometheus URL because it will be 127.0.0.1 in the default case
listenAddr := csConfig.Prometheus.ListenAddr
if listenAddr == "" {
listenAddr = "127.0.0.1"
}
listenPort := csConfig.Prometheus.ListenPort
if listenPort == 0 {
listenPort = 6060
}
prometheusURL = fmt.Sprintf("http://%s:%d/metrics", listenAddr, listenPort)
log.Debugf("No prometheus URL provided using: %s", prometheusURL)
}
fmt.Printf("\nCurrent metrics : \n")
ShowMetrics(hubItem)
}
func manageCliDecisionAlerts(ip *string, ipRange *string, scope *string, value *string) error { func manageCliDecisionAlerts(ip *string, ipRange *string, scope *string, value *string) error {
/*if a range is provided, change the scope*/ /*if a range is provided, change the scope*/
if *ipRange != "" { if *ipRange != "" {
_, _, err := net.ParseCIDR(*ipRange) _, _, err := net.ParseCIDR(*ipRange)
@ -281,6 +25,7 @@ func manageCliDecisionAlerts(ip *string, ipRange *string, scope *string, value *
return fmt.Errorf("%s isn't a valid range", *ipRange) return fmt.Errorf("%s isn't a valid range", *ipRange)
} }
} }
if *ip != "" { if *ip != "" {
ipRepr := net.ParseIP(*ip) ipRepr := net.ParseIP(*ip)
if ipRepr == nil { if ipRepr == nil {
@ -288,7 +33,7 @@ func manageCliDecisionAlerts(ip *string, ipRange *string, scope *string, value *
} }
} }
//avoid confusion on scope (ip vs Ip and range vs Range) // avoid confusion on scope (ip vs Ip and range vs Range)
switch strings.ToLower(*scope) { switch strings.ToLower(*scope) {
case "ip": case "ip":
*scope = types.Ip *scope = types.Ip
@ -299,434 +44,10 @@ func manageCliDecisionAlerts(ip *string, ipRange *string, scope *string, value *
case "as": case "as":
*scope = types.AS *scope = types.AS
} }
return nil
}
func ShowMetrics(hubItem *cwhub.Item) {
switch hubItem.Type {
case cwhub.PARSERS:
metrics := GetParserMetric(prometheusURL, hubItem.Name)
parserMetricsTable(color.Output, hubItem.Name, metrics)
case cwhub.SCENARIOS:
metrics := GetScenarioMetric(prometheusURL, hubItem.Name)
scenarioMetricsTable(color.Output, hubItem.Name, metrics)
case cwhub.COLLECTIONS:
for _, item := range hubItem.Parsers {
metrics := GetParserMetric(prometheusURL, item)
parserMetricsTable(color.Output, item, metrics)
}
for _, item := range hubItem.Scenarios {
metrics := GetScenarioMetric(prometheusURL, item)
scenarioMetricsTable(color.Output, item, metrics)
}
for _, item := range hubItem.Collections {
hubItem = cwhub.GetItem(cwhub.COLLECTIONS, item)
if hubItem == nil {
log.Fatalf("unable to retrieve item '%s' from collection '%s'", item, hubItem.Name)
}
ShowMetrics(hubItem)
}
case cwhub.WAF_RULES:
log.Fatalf("FIXME: not implemented yet")
default:
log.Errorf("item of type '%s' is unknown", hubItem.Type)
}
}
// GetParserMetric is a complete rip from prom2json
func GetParserMetric(url string, itemName string) map[string]map[string]int {
stats := make(map[string]map[string]int)
result := GetPrometheusMetric(url)
for idx, fam := range result {
if !strings.HasPrefix(fam.Name, "cs_") {
continue
}
log.Tracef("round %d", idx)
for _, m := range fam.Metrics {
metric, ok := m.(prom2json.Metric)
if !ok {
log.Debugf("failed to convert metric to prom2json.Metric")
continue
}
name, ok := metric.Labels["name"]
if !ok {
log.Debugf("no name in Metric %v", metric.Labels)
}
if name != itemName {
continue
}
source, ok := metric.Labels["source"]
if !ok {
log.Debugf("no source in Metric %v", metric.Labels)
} else {
if srctype, ok := metric.Labels["type"]; ok {
source = srctype + ":" + source
}
}
value := m.(prom2json.Metric).Value
fval, err := strconv.ParseFloat(value, 32)
if err != nil {
log.Errorf("Unexpected int value %s : %s", value, err)
continue
}
ival := int(fval)
switch fam.Name {
case "cs_reader_hits_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
stats[source]["parsed"] = 0
stats[source]["reads"] = 0
stats[source]["unparsed"] = 0
stats[source]["hits"] = 0
}
stats[source]["reads"] += ival
case "cs_parser_hits_ok_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
}
stats[source]["parsed"] += ival
case "cs_parser_hits_ko_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
}
stats[source]["unparsed"] += ival
case "cs_node_hits_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
}
stats[source]["hits"] += ival
case "cs_node_hits_ok_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
}
stats[source]["parsed"] += ival
case "cs_node_hits_ko_total":
if _, ok := stats[source]; !ok {
stats[source] = make(map[string]int)
}
stats[source]["unparsed"] += ival
default:
continue
}
}
}
return stats
}
func GetScenarioMetric(url string, itemName string) map[string]int {
stats := make(map[string]int)
stats["instantiation"] = 0
stats["curr_count"] = 0
stats["overflow"] = 0
stats["pour"] = 0
stats["underflow"] = 0
result := GetPrometheusMetric(url)
for idx, fam := range result {
if !strings.HasPrefix(fam.Name, "cs_") {
continue
}
log.Tracef("round %d", idx)
for _, m := range fam.Metrics {
metric, ok := m.(prom2json.Metric)
if !ok {
log.Debugf("failed to convert metric to prom2json.Metric")
continue
}
name, ok := metric.Labels["name"]
if !ok {
log.Debugf("no name in Metric %v", metric.Labels)
}
if name != itemName {
continue
}
value := m.(prom2json.Metric).Value
fval, err := strconv.ParseFloat(value, 32)
if err != nil {
log.Errorf("Unexpected int value %s : %s", value, err)
continue
}
ival := int(fval)
switch fam.Name {
case "cs_bucket_created_total":
stats["instantiation"] += ival
case "cs_buckets":
stats["curr_count"] += ival
case "cs_bucket_overflowed_total":
stats["overflow"] += ival
case "cs_bucket_poured_total":
stats["pour"] += ival
case "cs_bucket_underflowed_total":
stats["underflow"] += ival
default:
continue
}
}
}
return stats
}
// it's a rip of the cli version, but in silent-mode
func silenceInstallItem(name string, obtype string) (string, error) {
var item = cwhub.GetItem(obtype, name)
if item == nil {
return "", fmt.Errorf("error retrieving item")
}
it := *item
if downloadOnly && it.Downloaded && it.UpToDate {
return fmt.Sprintf("%s is already downloaded and up-to-date", it.Name), nil
}
it, err := cwhub.DownloadLatest(csConfig.Hub, it, forceAction, false)
if err != nil {
return "", fmt.Errorf("error while downloading %s : %v", it.Name, err)
}
if err := cwhub.AddItem(obtype, it); err != nil {
return "", err
}
if downloadOnly {
return fmt.Sprintf("Downloaded %s to %s", it.Name, csConfig.Cscli.HubDir+"/"+it.RemotePath), nil
}
it, err = cwhub.EnableItem(csConfig.Hub, it)
if err != nil {
return "", fmt.Errorf("error while enabling %s : %v", it.Name, err)
}
if err := cwhub.AddItem(obtype, it); err != nil {
return "", err
}
return fmt.Sprintf("Enabled %s", it.Name), nil
}
func GetPrometheusMetric(url string) []*prom2json.Family {
mfChan := make(chan *dto.MetricFamily, 1024)
// Start with the DefaultTransport for sane defaults.
transport := http.DefaultTransport.(*http.Transport).Clone()
// Conservatively disable HTTP keep-alives as this program will only
// ever need a single HTTP request.
transport.DisableKeepAlives = true
// Timeout early if the server doesn't even return the headers.
transport.ResponseHeaderTimeout = time.Minute
go func() {
defer trace.CatchPanic("crowdsec/GetPrometheusMetric")
err := prom2json.FetchMetricFamilies(url, mfChan, transport)
if err != nil {
log.Fatalf("failed to fetch prometheus metrics : %v", err)
}
}()
result := []*prom2json.Family{}
for mf := range mfChan {
result = append(result, prom2json.NewFamily(mf))
}
log.Debugf("Finished reading prometheus output, %d entries", len(result))
return result
}
func RestoreHub(dirPath string) error {
var err error
if err := csConfig.LoadHub(); err != nil {
return err
}
if err := cwhub.SetHubBranch(); err != nil {
return fmt.Errorf("error while setting hub branch: %s", err)
}
for _, itype := range cwhub.ItemTypes {
itemDirectory := fmt.Sprintf("%s/%s/", dirPath, itype)
if _, err = os.Stat(itemDirectory); err != nil {
log.Infof("no %s in backup", itype)
continue
}
/*restore the upstream items*/
upstreamListFN := fmt.Sprintf("%s/upstream-%s.json", itemDirectory, itype)
file, err := os.ReadFile(upstreamListFN)
if err != nil {
return fmt.Errorf("error while opening %s : %s", upstreamListFN, err)
}
var upstreamList []string
err = json.Unmarshal(file, &upstreamList)
if err != nil {
return fmt.Errorf("error unmarshaling %s : %s", upstreamListFN, err)
}
for _, toinstall := range upstreamList {
label, err := silenceInstallItem(toinstall, itype)
if err != nil {
log.Errorf("Error while installing %s : %s", toinstall, err)
} else if label != "" {
log.Infof("Installed %s : %s", toinstall, label)
} else {
log.Printf("Installed %s : ok", toinstall)
}
}
/*restore the local and tainted items*/
files, err := os.ReadDir(itemDirectory)
if err != nil {
return fmt.Errorf("failed enumerating files of %s : %s", itemDirectory, err)
}
for _, file := range files {
//this was the upstream data
if file.Name() == fmt.Sprintf("upstream-%s.json", itype) {
continue
}
if itype == cwhub.PARSERS || itype == cwhub.PARSERS_OVFLW {
//we expect a stage here
if !file.IsDir() {
continue
}
stage := file.Name()
stagedir := fmt.Sprintf("%s/%s/%s/", csConfig.ConfigPaths.ConfigDir, itype, stage)
log.Debugf("Found stage %s in %s, target directory : %s", stage, itype, stagedir)
if err = os.MkdirAll(stagedir, os.ModePerm); err != nil {
return fmt.Errorf("error while creating stage directory %s : %s", stagedir, err)
}
/*find items*/
ifiles, err := os.ReadDir(itemDirectory + "/" + stage + "/")
if err != nil {
return fmt.Errorf("failed enumerating files of %s : %s", itemDirectory+"/"+stage, err)
}
//finally copy item
for _, tfile := range ifiles {
log.Infof("Going to restore local/tainted [%s]", tfile.Name())
sourceFile := fmt.Sprintf("%s/%s/%s", itemDirectory, stage, tfile.Name())
destinationFile := fmt.Sprintf("%s%s", stagedir, tfile.Name())
if err = types.CopyFile(sourceFile, destinationFile); err != nil {
return fmt.Errorf("failed copy %s %s to %s : %s", itype, sourceFile, destinationFile, err)
}
log.Infof("restored %s to %s", sourceFile, destinationFile)
}
} else {
log.Infof("Going to restore local/tainted [%s]", file.Name())
sourceFile := fmt.Sprintf("%s/%s", itemDirectory, file.Name())
destinationFile := fmt.Sprintf("%s/%s/%s", csConfig.ConfigPaths.ConfigDir, itype, file.Name())
if err = types.CopyFile(sourceFile, destinationFile); err != nil {
return fmt.Errorf("failed copy %s %s to %s : %s", itype, sourceFile, destinationFile, err)
}
log.Infof("restored %s to %s", sourceFile, destinationFile)
}
}
}
return nil
}
func BackupHub(dirPath string) error {
var err error
var itemDirectory string
var upstreamParsers []string
for _, itemType := range cwhub.ItemTypes {
clog := log.WithFields(log.Fields{
"type": itemType,
})
itemMap := cwhub.GetItemMap(itemType)
if itemMap == nil {
clog.Infof("No %s to backup.", itemType)
continue
}
itemDirectory = fmt.Sprintf("%s/%s/", dirPath, itemType)
if err := os.MkdirAll(itemDirectory, os.ModePerm); err != nil {
return fmt.Errorf("error while creating %s : %s", itemDirectory, err)
}
upstreamParsers = []string{}
for k, v := range itemMap {
clog = clog.WithFields(log.Fields{
"file": v.Name,
})
if !v.Installed { //only backup installed ones
clog.Debugf("[%s] : not installed", k)
continue
}
//for the local/tainted ones, we backup the full file
if v.Tainted || v.Local || !v.UpToDate {
//we need to backup stages for parsers
if itemType == cwhub.PARSERS || itemType == cwhub.PARSERS_OVFLW {
fstagedir := fmt.Sprintf("%s%s", itemDirectory, v.Stage)
if err := os.MkdirAll(fstagedir, os.ModePerm); err != nil {
return fmt.Errorf("error while creating stage dir %s : %s", fstagedir, err)
}
}
clog.Debugf("[%s] : backuping file (tainted:%t local:%t up-to-date:%t)", k, v.Tainted, v.Local, v.UpToDate)
tfile := fmt.Sprintf("%s%s/%s", itemDirectory, v.Stage, v.FileName)
if err = types.CopyFile(v.LocalPath, tfile); err != nil {
return fmt.Errorf("failed copy %s %s to %s : %s", itemType, v.LocalPath, tfile, err)
}
clog.Infof("local/tainted saved %s to %s", v.LocalPath, tfile)
continue
}
clog.Debugf("[%s] : from hub, just backup name (up-to-date:%t)", k, v.UpToDate)
clog.Infof("saving, version:%s, up-to-date:%t", v.Version, v.UpToDate)
upstreamParsers = append(upstreamParsers, v.Name)
}
//write the upstream items
upstreamParsersFname := fmt.Sprintf("%s/upstream-%s.json", itemDirectory, itemType)
upstreamParsersContent, err := json.MarshalIndent(upstreamParsers, "", " ")
if err != nil {
return fmt.Errorf("failed marshaling upstream parsers : %s", err)
}
err = os.WriteFile(upstreamParsersFname, upstreamParsersContent, 0644)
if err != nil {
return fmt.Errorf("unable to write to %s %s : %s", itemType, upstreamParsersFname, err)
}
clog.Infof("Wrote %d entries for %s to %s", len(upstreamParsers), itemType, upstreamParsersFname)
}
return nil return nil
} }
type unit struct {
value int64
symbol string
}
var ranges = []unit{
{value: 1e18, symbol: "E"},
{value: 1e15, symbol: "P"},
{value: 1e12, symbol: "T"},
{value: 1e9, symbol: "G"},
{value: 1e6, symbol: "M"},
{value: 1e3, symbol: "k"},
{value: 1, symbol: ""},
}
func formatNumber(num int) string {
goodUnit := unit{}
for _, u := range ranges {
if int64(num) >= u.value {
goodUnit = u
break
}
}
if goodUnit.value == 1 {
return fmt.Sprintf("%d%s", num, goodUnit.symbol)
}
res := math.Round(float64(num)/float64(goodUnit.value)*100) / 100
return fmt.Sprintf("%.2f%s", res, goodUnit.symbol)
}
func getDBClient() (*database.Client, error) {
var err error
if err := csConfig.LoadAPIServer(); err != nil || csConfig.DisableAPI {
return nil, err
}
ret, err := database.NewClient(csConfig.DbConfig)
if err != nil {
return nil, err
}
return ret, nil
}
func removeFromSlice(val string, slice []string) []string { func removeFromSlice(val string, slice []string) []string {
var i int var i int
var value string var value string
@ -748,5 +69,4 @@ func removeFromSlice(val string, slice []string) []string {
} }
return slice return slice
} }


@ -3,39 +3,56 @@ package main
import ( import (
"fmt" "fmt"
"io" "io"
"strconv"
"github.com/aquasecurity/table" "github.com/aquasecurity/table"
"github.com/enescakir/emoji"
"github.com/crowdsecurity/crowdsec/pkg/cwhub" "github.com/crowdsecurity/crowdsec/pkg/cwhub"
"github.com/crowdsecurity/crowdsec/pkg/emoji"
) )
func listHubItemTable(out io.Writer, title string, statuses []cwhub.ItemHubStatus) { func listHubItemTable(out io.Writer, title string, items []*cwhub.Item) {
t := newLightTable(out) t := newLightTable(out)
t.SetHeaders("Name", fmt.Sprintf("%v Status", emoji.Package), "Version", "Local Path") t.SetHeaders("Name", fmt.Sprintf("%v Status", emoji.Package), "Version", "Local Path")
t.SetHeaderAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetHeaderAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)
t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft) t.SetAlignment(table.AlignLeft, table.AlignLeft, table.AlignLeft, table.AlignLeft)
for _, status := range statuses { for _, item := range items {
t.AddRow(status.Name, status.UTF8_Status, status.LocalVersion, status.LocalPath) status := fmt.Sprintf("%v %s", item.State.Emoji(), item.State.Text())
t.AddRow(item.Name, status, item.State.LocalVersion, item.State.LocalPath)
} }
renderTableTitle(out, title) renderTableTitle(out, title)
t.Render() t.Render()
} }
func appsecMetricsTable(out io.Writer, itemName string, metrics map[string]int) {
t := newTable(out)
t.SetHeaders("Inband Hits", "Outband Hits")
t.AddRow(
strconv.Itoa(metrics["inband_hits"]),
strconv.Itoa(metrics["outband_hits"]),
)
renderTableTitle(out, fmt.Sprintf("\n - (AppSec Rule) %s:", itemName))
t.Render()
}
func scenarioMetricsTable(out io.Writer, itemName string, metrics map[string]int) { func scenarioMetricsTable(out io.Writer, itemName string, metrics map[string]int) {
if metrics["instantiation"] == 0 { if metrics["instantiation"] == 0 {
return return
} }
t := newTable(out) t := newTable(out)
t.SetHeaders("Current Count", "Overflows", "Instantiated", "Poured", "Expired") t.SetHeaders("Current Count", "Overflows", "Instantiated", "Poured", "Expired")
t.AddRow( t.AddRow(
fmt.Sprintf("%d", metrics["curr_count"]), strconv.Itoa(metrics["curr_count"]),
fmt.Sprintf("%d", metrics["overflow"]), strconv.Itoa(metrics["overflow"]),
fmt.Sprintf("%d", metrics["instantiation"]), strconv.Itoa(metrics["instantiation"]),
fmt.Sprintf("%d", metrics["pour"]), strconv.Itoa(metrics["pour"]),
fmt.Sprintf("%d", metrics["underflow"]), strconv.Itoa(metrics["underflow"]),
) )
renderTableTitle(out, fmt.Sprintf("\n - (Scenario) %s:", itemName)) renderTableTitle(out, fmt.Sprintf("\n - (Scenario) %s:", itemName))
@ -43,23 +60,26 @@ func scenarioMetricsTable(out io.Writer, itemName string, metrics map[string]int
} }
func parserMetricsTable(out io.Writer, itemName string, metrics map[string]map[string]int) { func parserMetricsTable(out io.Writer, itemName string, metrics map[string]map[string]int) {
skip := true
t := newTable(out) t := newTable(out)
t.SetHeaders("Parsers", "Hits", "Parsed", "Unparsed") t.SetHeaders("Parsers", "Hits", "Parsed", "Unparsed")
// don't show table if no hits
showTable := false
for source, stats := range metrics { for source, stats := range metrics {
if stats["hits"] > 0 { if stats["hits"] > 0 {
t.AddRow( t.AddRow(
source, source,
fmt.Sprintf("%d", stats["hits"]), strconv.Itoa(stats["hits"]),
fmt.Sprintf("%d", stats["parsed"]), strconv.Itoa(stats["parsed"]),
fmt.Sprintf("%d", stats["unparsed"]), strconv.Itoa(stats["unparsed"]),
) )
skip = false
showTable = true
} }
} }
if !skip { if showTable {
renderTableTitle(out, fmt.Sprintf("\n - (Parser) %s:", itemName)) renderTableTitle(out, fmt.Sprintf("\n - (Parser) %s:", itemName))
t.Render() t.Render()
} }


@ -0,0 +1,27 @@
package main
import (
"github.com/spf13/cobra"
"github.com/crowdsecurity/crowdsec/pkg/cwversion"
)
type cliVersion struct{}
func NewCLIVersion() *cliVersion {
return &cliVersion{}
}
func (cli cliVersion) NewCommand() *cobra.Command {
cmd := &cobra.Command{
Use: "version",
Short: "Display version",
Args: cobra.ExactArgs(0),
DisableAutoGenTag: true,
Run: func(_ *cobra.Command, _ []string) {
cwversion.Show()
},
}
return cmd
}
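This new file follows the same cliXXX receiver plus NewCommand() constructor pattern used for the support command above. Presumably (the wiring is not part of this hunk) the root command registers it along the lines of:

rootCmd.AddCommand(NewCLIVersion().NewCommand())
rootCmd.AddCommand(NewCLISupport().NewCommand())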


@ -1,196 +0,0 @@
package main
import (
"fmt"
"github.com/fatih/color"
log "github.com/sirupsen/logrus"
"github.com/spf13/cobra"
"github.com/crowdsecurity/crowdsec/pkg/cwhub"
)
func NewWafRulesCmd() *cobra.Command {
var cmdWafRules = &cobra.Command{
Use: "waf-rules [action] [config]",
Short: "Install/Remove/Upgrade/Inspect waf-rule(s) from hub",
Example: `cscli waf-rules install crowdsecurity/core-rule-set
cscli waf-rules inspect crowdsecurity/core-rule-set
cscli waf-rules upgrade crowdsecurity/core-rule-set
cscli waf-rules list
cscli waf-rules remove crowdsecurity/core-rule-set
`,
Args: cobra.MinimumNArgs(1),
Aliases: []string{"waf-rule"},
DisableAutoGenTag: true,
PersistentPreRunE: func(cmd *cobra.Command, args []string) error {
if err := csConfig.LoadHub(); err != nil {
log.Fatal(err)
}
if csConfig.Hub == nil {
return fmt.Errorf("you must configure cli before interacting with hub")
}
if err := cwhub.SetHubBranch(); err != nil {
return fmt.Errorf("error while setting hub branch: %s", err)
}
if err := cwhub.GetHubIdx(csConfig.Hub); err != nil {
log.Info("Run 'sudo cscli hub update' to get the hub index")
log.Fatalf("Failed to get Hub index : %v", err)
}
return nil
},
PersistentPostRun: func(cmd *cobra.Command, args []string) {
if cmd.Name() == "inspect" || cmd.Name() == "list" {
return
}
log.Infof(ReloadMessage())
},
}
cmdWafRules.AddCommand(NewWafRulesInstallCmd())
cmdWafRules.AddCommand(NewWafRulesRemoveCmd())
cmdWafRules.AddCommand(NewWafRulesUpgradeCmd())
cmdWafRules.AddCommand(NewWafRulesInspectCmd())
cmdWafRules.AddCommand(NewWafRulesListCmd())
return cmdWafRules
}
func NewWafRulesInstallCmd() *cobra.Command {
var ignoreError bool
var cmdWafRulesInstall = &cobra.Command{
Use: "install [config]",
Short: "Install given waf-rule(s)",
Long: `Fetch and install given waf-rule(s) from hub`,
Example: `cscli waf-rules install crowdsec/xxx crowdsec/xyz`,
Args: cobra.MinimumNArgs(1),
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compAllItems(cwhub.WAF_RULES, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
for _, name := range args {
t := cwhub.GetItem(cwhub.WAF_RULES, name)
if t == nil {
nearestItem, score := GetDistance(cwhub.WAF_RULES, name)
Suggest(cwhub.WAF_RULES, name, nearestItem.Name, score, ignoreError)
continue
}
if err := cwhub.InstallItem(csConfig, name, cwhub.WAF_RULES, forceAction, downloadOnly); err != nil {
if ignoreError {
log.Errorf("Error while installing '%s': %s", name, err)
} else {
log.Fatalf("Error while installing '%s': %s", name, err)
}
}
}
},
}
cmdWafRulesInstall.PersistentFlags().BoolVarP(&downloadOnly, "download-only", "d", false, "Only download packages, don't enable")
cmdWafRulesInstall.PersistentFlags().BoolVar(&forceAction, "force", false, "Force install : Overwrite tainted and outdated files")
cmdWafRulesInstall.PersistentFlags().BoolVar(&ignoreError, "ignore", false, "Ignore errors when installing multiple waf rules")
return cmdWafRulesInstall
}
func NewWafRulesRemoveCmd() *cobra.Command {
var cmdWafRulesRemove = &cobra.Command{
Use: "remove [config]",
Short: "Remove given waf-rule(s)",
Long: `Remove given waf-rule(s) from hub`,
Aliases: []string{"delete"},
Example: `cscli waf-rules remove crowdsec/xxx crowdsec/xyz`,
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.WAF_RULES, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
if all {
cwhub.RemoveMany(csConfig, cwhub.WAF_RULES, "", all, purge, forceAction)
return
}
if len(args) == 0 {
log.Fatalf("Specify at least one waf rule to remove or '--all' flag.")
}
for _, name := range args {
cwhub.RemoveMany(csConfig, cwhub.WAF_RULES, name, all, purge, forceAction)
}
},
}
cmdWafRulesRemove.PersistentFlags().BoolVar(&purge, "purge", false, "Delete source file too")
cmdWafRulesRemove.PersistentFlags().BoolVar(&forceAction, "force", false, "Force remove : Remove tainted and outdated files")
cmdWafRulesRemove.PersistentFlags().BoolVar(&all, "all", false, "Delete all the waf rules")
return cmdWafRulesRemove
}
func NewWafRulesUpgradeCmd() *cobra.Command {
var cmdWafRulesUpgrade = &cobra.Command{
Use: "upgrade [config]",
Short: "Upgrade given waf-rule(s)",
Long: `Fetch and upgrade given waf-rule(s) from hub`,
Example: `cscli waf-rules upgrade crowdsec/xxx crowdsec/xyz`,
DisableAutoGenTag: true,
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.WAF_RULES, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
if all {
cwhub.UpgradeConfig(csConfig, cwhub.WAF_RULES, "", forceAction)
} else {
if len(args) == 0 {
log.Fatalf("no target waf rule to upgrade")
}
for _, name := range args {
cwhub.UpgradeConfig(csConfig, cwhub.WAF_RULES, name, forceAction)
}
}
},
}
cmdWafRulesUpgrade.PersistentFlags().BoolVar(&all, "all", false, "Upgrade all the waf rules")
cmdWafRulesUpgrade.PersistentFlags().BoolVar(&forceAction, "force", false, "Force upgrade : Overwrite tainted and outdated files")
return cmdWafRulesUpgrade
}
func NewWafRulesInspectCmd() *cobra.Command {
var cmdWafRulesInspect = &cobra.Command{
Use: "inspect [name]",
Short: "Inspect given waf rule",
Long: `Inspect given waf rule`,
Example: `cscli waf-rules inspect crowdsec/xxx`,
DisableAutoGenTag: true,
Args: cobra.MinimumNArgs(1),
ValidArgsFunction: func(cmd *cobra.Command, args []string, toComplete string) ([]string, cobra.ShellCompDirective) {
return compInstalledItems(cwhub.WAF_RULES, args, toComplete)
},
Run: func(cmd *cobra.Command, args []string) {
InspectItem(args[0], cwhub.WAF_RULES)
},
}
cmdWafRulesInspect.PersistentFlags().StringVarP(&prometheusURL, "url", "u", "", "Prometheus url")
return cmdWafRulesInspect
}
func NewWafRulesListCmd() *cobra.Command {
var cmdWafRulesList = &cobra.Command{
Use: "list [name]",
Short: "List all waf rules or given one",
Long: `List all waf rules or given one`,
Example: `cscli waf-rules list
cscli waf-rules list crowdsecurity/xxx`,
DisableAutoGenTag: true,
Run: func(cmd *cobra.Command, args []string) {
ListItems(color.Output, []string{cwhub.WAF_RULES}, args, false, true, all)
},
}
cmdWafRulesList.PersistentFlags().BoolVarP(&all, "all", "a", false, "List disabled items as well")
return cmdWafRulesList
}


@ -4,10 +4,9 @@ ifeq ($(OS), Windows_NT)
EXT = .exe EXT = .exe
endif endif
# Go parameters GO = go
GOCMD = go GOBUILD = $(GO) build
GOBUILD = $(GOCMD) build GOTEST = $(GO) test
GOTEST = $(GOCMD) test
CROWDSEC_BIN = crowdsec$(EXT) CROWDSEC_BIN = crowdsec$(EXT)
# names longer than 15 chars break 'pgrep' # names longer than 15 chars break 'pgrep'
@ -23,7 +22,7 @@ SYSTEMD_PATH_FILE = "/etc/systemd/system/crowdsec.service"
all: clean test build all: clean test build
build: clean build: clean
$(GOBUILD) $(LD_OPTS) $(BUILD_VENDOR_FLAGS) -o $(CROWDSEC_BIN) $(GOBUILD) $(LD_OPTS) -o $(CROWDSEC_BIN)
test: test:
$(GOTEST) $(LD_OPTS) -v ./... $(GOTEST) $(LD_OPTS) -v ./...


@@ -1,13 +1,14 @@
 package main

 import (
+	"errors"
+	"fmt"
 	"runtime"
 	"time"

-	"github.com/pkg/errors"
 	log "github.com/sirupsen/logrus"

-	"github.com/crowdsecurity/go-cs-lib/pkg/trace"
+	"github.com/crowdsecurity/go-cs-lib/trace"

 	"github.com/crowdsecurity/crowdsec/pkg/apiserver"
 	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
@@ -20,7 +21,7 @@ func initAPIServer(cConfig *csconfig.Config) (*apiserver.APIServer, error) {
 	apiServer, err := apiserver.NewServer(cConfig.API.Server)
 	if err != nil {
-		return nil, errors.Wrap(err, "unable to run local API")
+		return nil, fmt.Errorf("unable to run local API: %w", err)
 	}

 	if hasPlugins(cConfig.API.Server.Profiles) {
@@ -29,29 +30,34 @@ func initAPIServer(cConfig *csconfig.Config) (*apiserver.APIServer, error) {
 		if cConfig.PluginConfig == nil && runtime.GOOS != "windows" {
 			return nil, errors.New("plugins are enabled, but the plugin_config section is missing in the configuration")
 		}

 		if cConfig.ConfigPaths.NotificationDir == "" {
 			return nil, errors.New("plugins are enabled, but config_paths.notification_dir is not defined")
 		}

 		if cConfig.ConfigPaths.PluginDir == "" {
 			return nil, errors.New("plugins are enabled, but config_paths.plugin_dir is not defined")
 		}

 		err = pluginBroker.Init(cConfig.PluginConfig, cConfig.API.Server.Profiles, cConfig.ConfigPaths)
 		if err != nil {
-			return nil, errors.Wrap(err, "unable to run local API")
+			return nil, fmt.Errorf("unable to run plugin broker: %w", err)
 		}

 		log.Info("initiated plugin broker")
 		apiServer.AttachPluginBroker(&pluginBroker)
 	}

 	err = apiServer.InitController()
 	if err != nil {
-		return nil, errors.Wrap(err, "unable to run local API")
+		return nil, fmt.Errorf("unable to run local API: %w", err)
 	}

 	return apiServer, nil
 }

-func serveAPIServer(apiServer *apiserver.APIServer, apiReady chan bool) {
+func serveAPIServer(apiServer *apiserver.APIServer) {
+	apiReady := make(chan bool, 1)
 	apiTomb.Go(func() error {
 		defer trace.CatchPanic("crowdsec/serveAPIServer")
 		go func() {
@@ -75,6 +81,7 @@ func serveAPIServer(apiServer *apiserver.APIServer, apiReady chan bool) {
 		}
 		return nil
 	})
+	<-apiReady
 }

 func hasPlugins(profiles []*csconfig.ProfileCfg) bool {
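A note on the error handling pattern in this hunk: errors.Wrap from github.com/pkg/errors is replaced by fmt.Errorf with the %w verb, so callers can still unwrap the cause with the standard library. A minimal, self-contained sketch (hypothetical path and message, not taken from this diff):

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
	)

	func loadProfiles(path string) error {
		if _, err := os.Stat(path); err != nil {
			// %w keeps the original error in the chain, as errors.Wrap did
			return fmt.Errorf("unable to run local API: %w", err)
		}
		return nil
	}

	func main() {
		err := loadProfiles("/nonexistent/profiles.yaml")
		// errors.Is still matches the underlying cause through the %w chain
		fmt.Println(errors.Is(err, fs.ErrNotExist)) // prints: true
	}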


@@ -1,146 +1,183 @@
 package main

 import (
+	"context"
 	"fmt"
 	"os"
+	"path/filepath"
 	"sync"
 	"time"

-	"path/filepath"
+	log "github.com/sirupsen/logrus"
+	"gopkg.in/yaml.v3"

-	"github.com/crowdsecurity/go-cs-lib/pkg/trace"
+	"github.com/crowdsecurity/go-cs-lib/trace"

 	"github.com/crowdsecurity/crowdsec/pkg/acquisition"
+	"github.com/crowdsecurity/crowdsec/pkg/acquisition/configuration"
+	"github.com/crowdsecurity/crowdsec/pkg/alertcontext"
+	"github.com/crowdsecurity/crowdsec/pkg/appsec"
 	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
 	"github.com/crowdsecurity/crowdsec/pkg/cwhub"
 	leaky "github.com/crowdsecurity/crowdsec/pkg/leakybucket"
 	"github.com/crowdsecurity/crowdsec/pkg/parser"
 	"github.com/crowdsecurity/crowdsec/pkg/types"
-	"github.com/pkg/errors"
-	log "github.com/sirupsen/logrus"
-	"gopkg.in/yaml.v2"
 )

-func initCrowdsec(cConfig *csconfig.Config) (*parser.Parsers, error) {
+// initCrowdsec prepares the log processor service
+func initCrowdsec(cConfig *csconfig.Config, hub *cwhub.Hub) (*parser.Parsers, []acquisition.DataSource, error) {
 	var err error

-	// Populate cwhub package tools
-	if err := cwhub.GetHubIdx(cConfig.Hub); err != nil {
-		return &parser.Parsers{}, fmt.Errorf("Failed to load hub index : %s", err)
+	if err = alertcontext.LoadConsoleContext(cConfig, hub); err != nil {
+		return nil, nil, fmt.Errorf("while loading context: %w", err)
 	}

 	// Start loading configs
-	csParsers := parser.NewParsers()
+	csParsers := parser.NewParsers(hub)
 	if csParsers, err = parser.LoadParsers(cConfig, csParsers); err != nil {
-		return &parser.Parsers{}, fmt.Errorf("Failed to load parsers: %s", err)
+		return nil, nil, fmt.Errorf("while loading parsers: %w", err)
 	}

-	if err := LoadBuckets(cConfig); err != nil {
-		return &parser.Parsers{}, fmt.Errorf("Failed to load scenarios: %s", err)
+	if err := LoadBuckets(cConfig, hub); err != nil {
+		return nil, nil, fmt.Errorf("while loading scenarios: %w", err)
 	}

-	if err := LoadAcquisition(cConfig); err != nil {
-		return &parser.Parsers{}, fmt.Errorf("Error while loading acquisition config : %s", err)
+	if err := appsec.LoadAppsecRules(hub); err != nil {
+		return nil, nil, fmt.Errorf("while loading appsec rules: %w", err)
 	}

-	return csParsers, nil
+	datasources, err := LoadAcquisition(cConfig)
+	if err != nil {
+		return nil, nil, fmt.Errorf("while loading acquisition config: %w", err)
+	}
+
+	return csParsers, datasources, nil
 }

-func runCrowdsec(cConfig *csconfig.Config, parsers *parser.Parsers) error {
+// runCrowdsec starts the log processor service
+func runCrowdsec(cConfig *csconfig.Config, parsers *parser.Parsers, hub *cwhub.Hub, datasources []acquisition.DataSource) error {
 	inputEventChan = make(chan types.Event)
 	inputLineChan = make(chan types.Event)

-	//start go-routines for parsing, buckets pour and outputs.
+	// start go-routines for parsing, buckets pour and outputs.
 	parserWg := &sync.WaitGroup{}

 	parsersTomb.Go(func() error {
 		parserWg.Add(1)

 		for i := 0; i < cConfig.Crowdsec.ParserRoutinesCount; i++ {
 			parsersTomb.Go(func() error {
 				defer trace.CatchPanic("crowdsec/runParse")

-				if err := runParse(inputLineChan, inputEventChan, *parsers.Ctx, parsers.Nodes); err != nil { //this error will never happen as parser.Parse is not able to return errors
+				if err := runParse(inputLineChan, inputEventChan, *parsers.Ctx, parsers.Nodes); err != nil {
+					// this error will never happen as parser.Parse is not able to return errors
 					log.Fatalf("starting parse error : %s", err)
 					return err
 				}

 				return nil
 			})
 		}

 		parserWg.Done()

 		return nil
 	})
 	parserWg.Wait()

 	bucketWg := &sync.WaitGroup{}

 	bucketsTomb.Go(func() error {
 		bucketWg.Add(1)
 		/*restore previous state as well if present*/
 		if cConfig.Crowdsec.BucketStateFile != "" {
 			log.Warningf("Restoring buckets state from %s", cConfig.Crowdsec.BucketStateFile)

 			if err := leaky.LoadBucketsState(cConfig.Crowdsec.BucketStateFile, buckets, holders); err != nil {
-				return fmt.Errorf("unable to restore buckets : %s", err)
+				return fmt.Errorf("unable to restore buckets: %w", err)
 			}
 		}

 		for i := 0; i < cConfig.Crowdsec.BucketsRoutinesCount; i++ {
 			bucketsTomb.Go(func() error {
 				defer trace.CatchPanic("crowdsec/runPour")

 				if err := runPour(inputEventChan, holders, buckets, cConfig); err != nil {
 					log.Fatalf("starting pour error : %s", err)
 					return err
 				}

 				return nil
 			})
 		}

 		bucketWg.Done()

 		return nil
 	})
 	bucketWg.Wait()

+	apiClient, err := AuthenticatedLAPIClient(*cConfig.API.Client.Credentials, hub)
+	if err != nil {
+		return err
+	}
+
+	log.Debugf("Starting HeartBeat service")
+	apiClient.HeartBeat.StartHeartBeat(context.Background(), &outputsTomb)
+
 	outputWg := &sync.WaitGroup{}

 	outputsTomb.Go(func() error {
 		outputWg.Add(1)

 		for i := 0; i < cConfig.Crowdsec.OutputRoutinesCount; i++ {
 			outputsTomb.Go(func() error {
 				defer trace.CatchPanic("crowdsec/runOutput")

-				if err := runOutput(inputEventChan, outputEventChan, buckets, *parsers.Povfwctx, parsers.Povfwnodes, *cConfig.API.Client.Credentials); err != nil {
+				if err := runOutput(inputEventChan, outputEventChan, buckets, *parsers.Povfwctx, parsers.Povfwnodes, apiClient); err != nil {
 					log.Fatalf("starting outputs error : %s", err)
 					return err
 				}

 				return nil
 			})
 		}

 		outputWg.Done()

 		return nil
 	})
 	outputWg.Wait()

 	if cConfig.Prometheus != nil && cConfig.Prometheus.Enabled {
 		aggregated := false
-		if cConfig.Prometheus.Level == "aggregated" {
+		if cConfig.Prometheus.Level == configuration.CFG_METRICS_AGGREGATE {
 			aggregated = true
 		}

 		if err := acquisition.GetMetrics(dataSources, aggregated); err != nil {
-			return errors.Wrap(err, "while fetching prometheus metrics for datasources.")
+			return fmt.Errorf("while fetching prometheus metrics for datasources: %w", err)
 		}
 	}

 	log.Info("Starting processing data")

 	if err := acquisition.StartAcquisition(dataSources, inputLineChan, &acquisTomb); err != nil {
-		log.Fatalf("starting acquisition error : %s", err)
-		return err
+		return fmt.Errorf("starting acquisition error: %w", err)
 	}

 	return nil
 }

-func serveCrowdsec(parsers *parser.Parsers, cConfig *csconfig.Config, agentReady chan bool) {
+// serveCrowdsec wraps the log processor service
+func serveCrowdsec(parsers *parser.Parsers, cConfig *csconfig.Config, hub *cwhub.Hub, datasources []acquisition.DataSource, agentReady chan bool) {
 	crowdsecTomb.Go(func() error {
 		defer trace.CatchPanic("crowdsec/serveCrowdsec")

 		go func() {
 			defer trace.CatchPanic("crowdsec/runCrowdsec")
 			// this logs every time, even at config reload
 			log.Debugf("running agent after %s ms", time.Since(crowdsecT0))
 			agentReady <- true

-			if err := runCrowdsec(cConfig, parsers); err != nil {
+			if err := runCrowdsec(cConfig, parsers, hub, datasources); err != nil {
 				log.Fatalf("unable to start crowdsec routines: %s", err)
 			}
 		}()
@@ -151,74 +188,88 @@ func serveCrowdsec(parsers *parser.Parsers, cConfig *csconfig.Config, agentReady
 		*/
 		waitOnTomb()
 		log.Debugf("Shutting down crowdsec routines")

 		if err := ShutdownCrowdsecRoutines(); err != nil {
 			log.Fatalf("unable to shutdown crowdsec routines: %s", err)
 		}

 		log.Debugf("everything is dead, return crowdsecTomb")

 		if dumpStates {
 			dumpParserState()
 			dumpOverflowState()
 			dumpBucketsPour()
 			os.Exit(0)
 		}

 		return nil
 	})
 }

 func dumpBucketsPour() {
-	fd, err := os.OpenFile(filepath.Join(parser.DumpFolder, "bucketpour-dump.yaml"), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0666)
+	fd, err := os.OpenFile(filepath.Join(parser.DumpFolder, "bucketpour-dump.yaml"), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o666)
 	if err != nil {
 		log.Fatalf("open: %s", err)
 	}

 	out, err := yaml.Marshal(leaky.BucketPourCache)
 	if err != nil {
 		log.Fatalf("marshal: %s", err)
 	}

 	b, err := fd.Write(out)
 	if err != nil {
 		log.Fatalf("write: %s", err)
 	}

 	log.Tracef("wrote %d bytes", b)

 	if err := fd.Close(); err != nil {
 		log.Fatalf(" close: %s", err)
 	}
 }

 func dumpParserState() {
-	fd, err := os.OpenFile(filepath.Join(parser.DumpFolder, "parser-dump.yaml"), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0666)
+	fd, err := os.OpenFile(filepath.Join(parser.DumpFolder, "parser-dump.yaml"), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o666)
 	if err != nil {
 		log.Fatalf("open: %s", err)
 	}

 	out, err := yaml.Marshal(parser.StageParseCache)
 	if err != nil {
 		log.Fatalf("marshal: %s", err)
 	}

 	b, err := fd.Write(out)
 	if err != nil {
 		log.Fatalf("write: %s", err)
 	}

 	log.Tracef("wrote %d bytes", b)

 	if err := fd.Close(); err != nil {
 		log.Fatalf(" close: %s", err)
 	}
 }

 func dumpOverflowState() {
-	fd, err := os.OpenFile(filepath.Join(parser.DumpFolder, "bucket-dump.yaml"), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0666)
+	fd, err := os.OpenFile(filepath.Join(parser.DumpFolder, "bucket-dump.yaml"), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o666)
 	if err != nil {
 		log.Fatalf("open: %s", err)
 	}

 	out, err := yaml.Marshal(bucketOverflows)
 	if err != nil {
 		log.Fatalf("marshal: %s", err)
 	}

 	b, err := fd.Write(out)
 	if err != nil {
 		log.Fatalf("write: %s", err)
 	}

 	log.Tracef("wrote %d bytes", b)

 	if err := fd.Close(); err != nil {
 		log.Fatalf(" close: %s", err)
 	}
@@ -230,7 +281,7 @@ func waitOnTomb() {
 		case <-acquisTomb.Dead():
 			/*if it's acquisition dying it means that we were in "cat" mode.
 			while shutting down, we need to give time for all buckets to process in flight data*/
-			log.Warning("Acquisition is finished, shutting down")
+			log.Info("Acquisition is finished, shutting down")
 			/*
 				While it might make sense to want to shut-down parser/buckets/etc. as soon as acquisition is finished,
 				we might have some pending buckets: buckets that overflowed, but whose LeakRoutine are still alive because they
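The agentReady channel threaded through serveCrowdsec above is a small readiness protocol: the goroutine signals on a buffered channel before it starts processing, and other services (servePrometheus later in this diff) block on it. A stripped-down, runnable sketch of the pattern, with illustrative names only:

	package main

	import (
		"fmt"
		"time"
	)

	func serveWorker(ready chan bool) {
		go func() {
			ready <- true // signal readiness before entering the main loop
			for {
				time.Sleep(time.Second) // stand-in for the real processing loop
			}
		}()
	}

	func serveMetrics(ready chan bool) {
		<-ready // do not expose metrics until the worker has started
		fmt.Println("metrics endpoint can start now")
	}

	func main() {
		ready := make(chan bool, 1) // buffered, so signalling never blocks the worker
		serveWorker(ready)
		serveMetrics(ready)
	}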

cmd/crowdsec/fatalhook.go (new file, 28 lines)

@@ -0,0 +1,28 @@
+package main
+
+import (
+	"io"
+
+	log "github.com/sirupsen/logrus"
+)
+
+// FatalHook is used to log fatal messages to stderr when the rest goes to a file
+type FatalHook struct {
+	Writer    io.Writer
+	LogLevels []log.Level
+}
+
+func (hook *FatalHook) Fire(entry *log.Entry) error {
+	line, err := entry.String()
+	if err != nil {
+		return err
+	}
+
+	_, err = hook.Writer.Write([]byte(line))
+
+	return err
+}
+
+func (hook *FatalHook) Levels() []log.Level {
+	return hook.LogLevels
+}
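This hook is registered later in the changeset (see the LoadConfig hunk in main.go, which calls log.AddHook when log_media is not stdout). A compact, runnable sketch of the same wiring, repeating the type so the snippet stands alone:

	package main

	import (
		"io"
		"os"

		log "github.com/sirupsen/logrus"
	)

	// FatalHook copies entries of the selected levels to a second writer.
	type FatalHook struct {
		Writer    io.Writer
		LogLevels []log.Level
	}

	func (hook *FatalHook) Fire(entry *log.Entry) error {
		line, err := entry.String()
		if err != nil {
			return err
		}
		_, err = hook.Writer.Write([]byte(line))
		return err
	}

	func (hook *FatalHook) Levels() []log.Level { return hook.LogLevels }

	func main() {
		// regular output goes to the default logger destination; fatal and panic
		// entries are additionally written to stderr by the hook
		log.AddHook(&FatalHook{
			Writer:    os.Stderr,
			LogLevels: []log.Level{log.FatalLevel, log.PanicLevel},
		})
		log.Info("only goes to the default output")
	}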


@@ -0,0 +1,92 @@
+package main
+
+import (
+	"context"
+	"fmt"
+	"net/url"
+	"time"
+
+	"github.com/go-openapi/strfmt"
+
+	"github.com/crowdsecurity/go-cs-lib/version"
+
+	"github.com/crowdsecurity/crowdsec/pkg/apiclient"
+	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
+	"github.com/crowdsecurity/crowdsec/pkg/cwhub"
+	"github.com/crowdsecurity/crowdsec/pkg/models"
+)
+
+func AuthenticatedLAPIClient(credentials csconfig.ApiCredentialsCfg, hub *cwhub.Hub) (*apiclient.ApiClient, error) {
+	scenarios, err := hub.GetInstalledNamesByType(cwhub.SCENARIOS)
+	if err != nil {
+		return nil, fmt.Errorf("loading list of installed hub scenarios: %w", err)
+	}
+
+	appsecRules, err := hub.GetInstalledNamesByType(cwhub.APPSEC_RULES)
+	if err != nil {
+		return nil, fmt.Errorf("loading list of installed hub appsec rules: %w", err)
+	}
+
+	installedScenariosAndAppsecRules := make([]string, 0, len(scenarios)+len(appsecRules))
+	installedScenariosAndAppsecRules = append(installedScenariosAndAppsecRules, scenarios...)
+	installedScenariosAndAppsecRules = append(installedScenariosAndAppsecRules, appsecRules...)
+
+	apiURL, err := url.Parse(credentials.URL)
+	if err != nil {
+		return nil, fmt.Errorf("parsing api url ('%s'): %w", credentials.URL, err)
+	}
+
+	papiURL, err := url.Parse(credentials.PapiURL)
+	if err != nil {
+		return nil, fmt.Errorf("parsing polling api url ('%s'): %w", credentials.PapiURL, err)
+	}
+
+	password := strfmt.Password(credentials.Password)
+
+	client, err := apiclient.NewClient(&apiclient.Config{
+		MachineID:     credentials.Login,
+		Password:      password,
+		Scenarios:     installedScenariosAndAppsecRules,
+		UserAgent:     fmt.Sprintf("crowdsec/%s", version.String()),
+		URL:           apiURL,
+		PapiURL:       papiURL,
+		VersionPrefix: "v1",
+		UpdateScenario: func() ([]string, error) {
+			scenarios, err := hub.GetInstalledNamesByType(cwhub.SCENARIOS)
+			if err != nil {
+				return nil, err
+			}
+
+			appsecRules, err := hub.GetInstalledNamesByType(cwhub.APPSEC_RULES)
+			if err != nil {
+				return nil, err
+			}
+
+			ret := make([]string, 0, len(scenarios)+len(appsecRules))
+			ret = append(ret, scenarios...)
+			ret = append(ret, appsecRules...)
+
+			return ret, nil
+		},
+	})
+	if err != nil {
+		return nil, fmt.Errorf("new client api: %w", err)
+	}
+
+	authResp, _, err := client.Auth.AuthenticateWatcher(context.Background(), models.WatcherAuthRequest{
+		MachineID: &credentials.Login,
+		Password:  &password,
+		Scenarios: installedScenariosAndAppsecRules,
+	})
+	if err != nil {
+		return nil, fmt.Errorf("authenticate watcher (%s): %w", credentials.Login, err)
+	}
+
+	var expiration time.Time
+	if err := expiration.UnmarshalText([]byte(authResp.Expire)); err != nil {
+		return nil, fmt.Errorf("unable to parse jwt expiration: %w", err)
+	}
+
+	client.GetClient().Transport.(*apiclient.JWTTransport).Token = authResp.Token
+	client.GetClient().Transport.(*apiclient.JWTTransport).Expiration = expiration
+
+	return client, nil
+}
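For context, the call site of this new helper is in the runCrowdsec hunk earlier in this diff; the lines below only restate it (cConfig, hub and outputsTomb come from the surrounding crowdsec code):

	apiClient, err := AuthenticatedLAPIClient(*cConfig.API.Client.Credentials, hub)
	if err != nil {
		return err
	}

	log.Debugf("Starting HeartBeat service")
	apiClient.HeartBeat.StartHeartBeat(context.Background(), &outputsTomb)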


@@ -1,18 +1,22 @@
 package main

 import (
+	"errors"
 	"flag"
 	"fmt"
 	_ "net/http/pprof"
 	"os"
+	"path/filepath"
 	"runtime"
+	"runtime/pprof"
 	"strings"
 	"time"

-	"github.com/pkg/errors"
 	log "github.com/sirupsen/logrus"
 	"gopkg.in/tomb.v2"

+	"github.com/crowdsecurity/go-cs-lib/trace"
+
 	"github.com/crowdsecurity/crowdsec/pkg/acquisition"
 	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
 	"github.com/crowdsecurity/crowdsec/pkg/csplugin"
@@ -51,12 +55,15 @@ var (
 )

 type Flags struct {
 	ConfigFile string

-	TraceLevel bool
-	DebugLevel bool
-	InfoLevel  bool
-	WarnLevel  bool
-	ErrorLevel bool
+	LogLevelTrace bool
+	LogLevelDebug bool
+	LogLevelInfo  bool
+	LogLevelWarn  bool
+	LogLevelError bool
+	LogLevelFatal bool

 	PrintVersion   bool
 	SingleFileType string
 	Labels         map[string]string
@@ -67,27 +74,35 @@ type Flags struct {
 	WinSvc      string
 	DisableCAPI bool
 	Transform   string
+	OrderEvent  bool
+	CPUProfile  string
 }

+func (f *Flags) haveTimeMachine() bool {
+	return f.OneShotDSN != ""
+}
+
 type labelsMap map[string]string

-func LoadBuckets(cConfig *csconfig.Config) error {
+func LoadBuckets(cConfig *csconfig.Config, hub *cwhub.Hub) error {
 	var (
 		err   error
 		files []string
 	)

-	for _, hubScenarioItem := range cwhub.GetItemMap(cwhub.SCENARIOS) {
-		if hubScenarioItem.Installed {
-			files = append(files, hubScenarioItem.LocalPath)
+	for _, hubScenarioItem := range hub.GetItemMap(cwhub.SCENARIOS) {
+		if hubScenarioItem.State.Installed {
+			files = append(files, hubScenarioItem.State.LocalPath)
 		}
 	}

 	buckets = leakybucket.NewBuckets()

 	log.Infof("Loading %d scenario files", len(files))

-	holders, outputEventChan, err = leakybucket.LoadBuckets(cConfig.Crowdsec, files, &bucketsTomb, buckets)
+	holders, outputEventChan, err = leakybucket.LoadBuckets(cConfig.Crowdsec, hub, files, &bucketsTomb, buckets, flags.OrderEvent)
 	if err != nil {
-		return fmt.Errorf("scenario loading failed: %v", err)
+		return fmt.Errorf("scenario loading failed: %w", err)
 	}

 	if cConfig.Prometheus != nil && cConfig.Prometheus.Enabled {
@@ -95,10 +110,11 @@ func LoadBuckets(cConfig *csconfig.Config) error {
 			holders[holderIndex].Profiling = true
 		}
 	}

 	return nil
 }

-func LoadAcquisition(cConfig *csconfig.Config) error {
+func LoadAcquisition(cConfig *csconfig.Config) ([]acquisition.DataSource, error) {
 	var err error

 	if flags.SingleFileType != "" && flags.OneShotDSN != "" {
@@ -107,16 +123,20 @@ func LoadAcquisition(cConfig *csconfig.Config) error {
 		dataSources, err = acquisition.LoadAcquisitionFromDSN(flags.OneShotDSN, flags.Labels, flags.Transform)
 		if err != nil {
-			return errors.Wrapf(err, "failed to configure datasource for %s", flags.OneShotDSN)
+			return nil, fmt.Errorf("failed to configure datasource for %s: %w", flags.OneShotDSN, err)
 		}
 	} else {
-		dataSources, err = acquisition.LoadAcquisitionFromFile(cConfig.Crowdsec)
+		dataSources, err = acquisition.LoadAcquisitionFromFile(cConfig.Crowdsec, cConfig.Prometheus)
 		if err != nil {
-			return err
+			return nil, err
 		}
 	}

-	return nil
+	if len(dataSources) == 0 {
+		return nil, errors.New("no datasource enabled")
+	}
+
+	return dataSources, nil
 }

 var (
@@ -130,21 +150,28 @@ func (l *labelsMap) String() string {
 }

 func (l labelsMap) Set(label string) error {
-	split := strings.Split(label, ":")
-	if len(split) != 2 {
-		return errors.Wrapf(errors.New("Bad Format"), "for Label '%s'", label)
+	for _, pair := range strings.Split(label, ",") {
+		split := strings.Split(pair, ":")
+		if len(split) != 2 {
+			return fmt.Errorf("invalid format for label '%s', must be key:value", pair)
+		}
+
+		l[split[0]] = split[1]
 	}
-	l[split[0]] = split[1]

 	return nil
 }

 func (f *Flags) Parse() {
 	flag.StringVar(&f.ConfigFile, "c", csconfig.DefaultConfigPath("config.yaml"), "configuration file")
-	flag.BoolVar(&f.TraceLevel, "trace", false, "VERY verbose")
-	flag.BoolVar(&f.DebugLevel, "debug", false, "print debug-level on stderr")
-	flag.BoolVar(&f.InfoLevel, "info", false, "print info-level on stderr")
-	flag.BoolVar(&f.WarnLevel, "warning", false, "print warning-level on stderr")
-	flag.BoolVar(&f.ErrorLevel, "error", false, "print error-level on stderr")
+	flag.BoolVar(&f.LogLevelTrace, "trace", false, "set log level to 'trace' (VERY verbose)")
+	flag.BoolVar(&f.LogLevelDebug, "debug", false, "set log level to 'debug'")
+	flag.BoolVar(&f.LogLevelInfo, "info", false, "set log level to 'info'")
+	flag.BoolVar(&f.LogLevelWarn, "warning", false, "set log level to 'warning'")
+	flag.BoolVar(&f.LogLevelError, "error", false, "set log level to 'error'")
+	flag.BoolVar(&f.LogLevelFatal, "fatal", false, "set log level to 'fatal'")
 	flag.BoolVar(&f.PrintVersion, "version", false, "display version")
 	flag.StringVar(&f.OneShotDSN, "dsn", "", "Process a single data source in time-machine")
 	flag.StringVar(&f.Transform, "transform", "", "expr to apply on the event after acquisition")
@@ -154,10 +181,14 @@ func (f *Flags) Parse() {
 	flag.BoolVar(&f.DisableAgent, "no-cs", false, "disable crowdsec agent")
 	flag.BoolVar(&f.DisableAPI, "no-api", false, "disable local API")
 	flag.BoolVar(&f.DisableCAPI, "no-capi", false, "disable communication with Central API")
+	flag.BoolVar(&f.OrderEvent, "order-event", false, "enforce event ordering with significant performance cost")

 	if runtime.GOOS == "windows" {
 		flag.StringVar(&f.WinSvc, "winsvc", "", "Windows service Action: Install, Remove etc..")
 	}

 	flag.StringVar(&dumpFolder, "dump-data", "", "dump parsers/buckets raw outputs")
+	flag.StringVar(&f.CPUProfile, "cpu-profile", "", "write cpu profile to file")
 	flag.Parse()
 }

@@ -172,16 +203,18 @@ func newLogLevel(curLevelPtr *log.Level, f *Flags) *log.Level {
 	// override from flags
 	switch {
-	case f.TraceLevel:
+	case f.LogLevelTrace:
 		ret = log.TraceLevel
-	case f.DebugLevel:
+	case f.LogLevelDebug:
 		ret = log.DebugLevel
-	case f.InfoLevel:
+	case f.LogLevelInfo:
 		ret = log.InfoLevel
-	case f.WarnLevel:
+	case f.LogLevelWarn:
 		ret = log.WarnLevel
-	case f.ErrorLevel:
+	case f.LogLevelError:
 		ret = log.ErrorLevel
+	case f.LogLevelFatal:
+		ret = log.FatalLevel
 	default:
 	}

@@ -189,6 +222,7 @@ func newLogLevel(curLevelPtr *log.Level, f *Flags) *log.Level {
 		// avoid returning a new ptr to the same value
 		return curLevelPtr
 	}

 	return &ret
 }

@@ -196,11 +230,11 @@ func newLogLevel(curLevelPtr *log.Level, f *Flags) *log.Level {
 func LoadConfig(configFile string, disableAgent bool, disableAPI bool, quiet bool) (*csconfig.Config, error) {
 	cConfig, _, err := csconfig.NewConfig(configFile, disableAgent, disableAPI, quiet)
 	if err != nil {
-		return nil, err
+		return nil, fmt.Errorf("while loading configuration file: %w", err)
 	}

-	if (cConfig.Common == nil || *cConfig.Common == csconfig.CommonCfg{}) {
-		return nil, fmt.Errorf("unable to load configuration: common section is empty")
+	if err := trace.Init(filepath.Join(cConfig.ConfigPaths.DataDir, "trace")); err != nil {
+		return nil, fmt.Errorf("while setting up trace directory: %w", err)
 	}

 	cConfig.Common.LogLevel = newLogLevel(cConfig.Common.LogLevel, flags)
@@ -212,11 +246,6 @@ func LoadConfig(configFile string, disableAgent bool, disableAPI bool, quiet boo
 		dumpStates = true
 	}

-	// Configuration paths are dependency to load crowdsec configuration
-	if err := cConfig.LoadConfigurationPaths(); err != nil {
-		return nil, err
-	}
-
 	if flags.SingleFileType != "" && flags.OneShotDSN != "" {
 		// if we're in time-machine mode, we don't want to log to file
 		cConfig.Common.LogMedia = "stdout"
@@ -231,18 +260,25 @@ func LoadConfig(configFile string, disableAgent bool, disableAPI bool, quiet boo
 		return nil, err
 	}

+	if cConfig.Common.LogMedia != "stdout" {
+		log.AddHook(&FatalHook{
+			Writer:    os.Stderr,
+			LogLevels: []log.Level{log.FatalLevel, log.PanicLevel},
+		})
+	}
+
 	if err := csconfig.LoadFeatureFlagsFile(configFile, log.StandardLogger()); err != nil {
 		return nil, err
 	}

-	if !flags.DisableAgent {
+	if !cConfig.DisableAgent {
 		if err := cConfig.LoadCrowdsec(); err != nil {
 			return nil, err
 		}
 	}

-	if !flags.DisableAPI {
-		if err := cConfig.LoadAPIServer(); err != nil {
+	if !cConfig.DisableAPI {
+		if err := cConfig.LoadAPIServer(false); err != nil {
 			return nil, err
 		}
 	}
@@ -252,11 +288,7 @@ func LoadConfig(configFile string, disableAgent bool, disableAPI bool, quiet boo
 	}

 	if cConfig.DisableAPI && cConfig.DisableAgent {
-		return nil, errors.New("You must run at least the API Server or crowdsec")
-	}
-
-	if flags.TestMode && !cConfig.DisableAgent {
-		cConfig.Crowdsec.LintOnly = true
+		return nil, errors.New("you must run at least the API Server or crowdsec")
 	}

 	if flags.OneShotDSN != "" && flags.SingleFileType == "" {
@@ -276,9 +308,10 @@ func LoadConfig(configFile string, disableAgent bool, disableAPI bool, quiet boo
 			cConfig.API.Server.OnlineClient = nil
 		}

 		/*if the api is disabled as well, just read file and exit, don't daemonize*/
-		if flags.DisableAPI {
+		if cConfig.DisableAPI {
 			cConfig.Common.Daemonize = false
 		}

 		log.Infof("single file mode : log_media=%s daemonize=%t", cConfig.Common.LogMedia, cConfig.Common.Daemonize)
 	}

@@ -288,6 +321,7 @@ func LoadConfig(configFile string, disableAgent bool, disableAPI bool, quiet boo
 	if cConfig.Common.Daemonize && runtime.GOOS == "windows" {
 		log.Debug("Daemonization is not supported on Windows, disabling")
 		cConfig.Common.Daemonize = false
 	}

@@ -305,12 +339,16 @@ func LoadConfig(configFile string, disableAgent bool, disableAPI bool, quiet boo
 var crowdsecT0 time.Time

 func main() {
+	// The initial log level is INFO, even if the user provided an -error or -warning flag
+	// because we need feature flags before parsing cli flags
+	log.SetFormatter(&log.TextFormatter{TimestampFormat: time.RFC3339, FullTimestamp: true})
+
 	if err := fflag.RegisterAllFeatures(); err != nil {
 		log.Fatalf("failed to register features: %s", err)
 	}

 	// some features can require configuration or command-line options,
-	// so wwe need to parse them asap. we'll load from feature.yaml later.
+	// so we need to parse them asap. we'll load from feature.yaml later.
 	if err := csconfig.LoadFeatureFlagsEnv(log.StandardLogger()); err != nil {
 		log.Fatalf("failed to set feature flags from environment: %s", err)
 	}

@@ -335,9 +373,28 @@ func main() {
 		os.Exit(0)
 	}

+	if flags.CPUProfile != "" {
+		f, err := os.Create(flags.CPUProfile)
+		if err != nil {
+			log.Fatalf("could not create CPU profile: %s", err)
+		}
+
+		log.Infof("CPU profile will be written to %s", flags.CPUProfile)
+
+		if err := pprof.StartCPUProfile(f); err != nil {
+			f.Close()
+			log.Fatalf("could not start CPU profile: %s", err)
+		}
+
+		defer f.Close()
+		defer pprof.StopCPUProfile()
+	}
+
 	err := StartRunSvc()
 	if err != nil {
-		log.Fatal(err)
+		pprof.StopCPUProfile()
+		log.Fatal(err) //nolint:gocritic // Disable warning for the defer pprof.StopCPUProfile() call
 	}

 	os.Exit(0)
 }
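A behavioural change in this file that is easy to miss: labelsMap.Set now accepts several comma-separated key:value pairs in one flag value instead of a single pair. A small, self-contained sketch of the new parsing (the flag.Var wiring lives elsewhere in this file):

	package main

	import (
		"fmt"
		"strings"
	)

	type labelsMap map[string]string

	// Set splits the flag value on commas, then each pair on ':'.
	func (l labelsMap) Set(label string) error {
		for _, pair := range strings.Split(label, ",") {
			split := strings.Split(pair, ":")
			if len(split) != 2 {
				return fmt.Errorf("invalid format for label '%s', must be key:value", pair)
			}
			l[split[0]] = split[1]
		}
		return nil
	}

	func main() {
		l := labelsMap{}
		if err := l.Set("type:syslog,env:prod"); err != nil {
			panic(err)
		}
		fmt.Println(l) // map[env:prod type:syslog]
	}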


@@ -3,16 +3,15 @@ package main
 import (
 	"fmt"
 	"net/http"
+	"time"

 	"github.com/prometheus/client_golang/prometheus"
 	"github.com/prometheus/client_golang/prometheus/promhttp"
 	log "github.com/sirupsen/logrus"

-	"github.com/crowdsecurity/go-cs-lib/pkg/trace"
-	"github.com/crowdsecurity/go-cs-lib/pkg/version"
+	"github.com/crowdsecurity/go-cs-lib/trace"
+	"github.com/crowdsecurity/go-cs-lib/version"

-	waf "github.com/crowdsecurity/crowdsec/pkg/acquisition/modules/waf"
+	"github.com/crowdsecurity/crowdsec/pkg/acquisition/configuration"
 	v1 "github.com/crowdsecurity/crowdsec/pkg/apiserver/controllers/v1"
 	"github.com/crowdsecurity/crowdsec/pkg/cache"
 	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
@@ -22,7 +21,8 @@ import (
 	"github.com/crowdsecurity/crowdsec/pkg/parser"
 )

-/*prometheus*/
+// Prometheus
 var globalParserHits = prometheus.NewCounterVec(
 	prometheus.CounterOpts{
 		Name: "cs_parser_hits_total",
@@ -30,6 +30,7 @@ var globalParserHits = prometheus.NewCounterVec(
 	},
 	[]string{"source", "type"},
 )

 var globalParserHitsOk = prometheus.NewCounterVec(
 	prometheus.CounterOpts{
 		Name: "cs_parser_hits_ok_total",
@@ -37,6 +38,7 @@ var globalParserHitsOk = prometheus.NewCounterVec(
 	},
 	[]string{"source", "type"},
 )

 var globalParserHitsKo = prometheus.NewCounterVec(
 	prometheus.CounterOpts{
 		Name: "cs_parser_hits_ko_total",
@@ -103,25 +105,29 @@ var globalPourHistogram = prometheus.NewHistogramVec(
 func computeDynamicMetrics(next http.Handler, dbClient *database.Client) http.HandlerFunc {
 	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
-		//update cache metrics (stash)
+		// catch panics here because they are not handled by servePrometheus
+		defer trace.CatchPanic("crowdsec/computeDynamicMetrics")

+		// update cache metrics (stash)
 		cache.UpdateCacheMetrics()

-		//update cache metrics (regexp)
+		// update cache metrics (regexp)
 		exprhelpers.UpdateRegexpCacheMetrics()

-		//decision metrics are only relevant for LAPI
+		// decision metrics are only relevant for LAPI
 		if dbClient == nil {
 			next.ServeHTTP(w, r)
 			return
 		}

-		decisionsFilters := make(map[string][]string, 0)
-		decisions, err := dbClient.QueryDecisionCountByScenario(decisionsFilters)
+		decisions, err := dbClient.QueryDecisionCountByScenario()
 		if err != nil {
 			log.Errorf("Error querying decisions for metrics: %v", err)
 			next.ServeHTTP(w, r)
 			return
 		}

 		globalActiveDecisions.Reset()

 		for _, d := range decisions {
 			globalActiveDecisions.With(prometheus.Labels{"reason": d.Scenario, "origin": d.Origin, "action": d.Type}).Set(float64(d.Count))
 		}
@@ -133,10 +139,10 @@ func computeDynamicMetrics(next http.Handler, dbClient *database.Client) http.Ha
 		}

 		alerts, err := dbClient.AlertsCountPerScenario(alertsFilter)
 		if err != nil {
 			log.Errorf("Error querying alerts for metrics: %v", err)
 			next.ServeHTTP(w, r)
 			return
 		}
@@ -152,26 +158,18 @@ func registerPrometheus(config *csconfig.PrometheusCfg) {
 	if !config.Enabled {
 		return
 	}

-	if config.ListenAddr == "" {
-		log.Warning("prometheus is enabled, but the listen address is empty, using '127.0.0.1'")
-		config.ListenAddr = "127.0.0.1"
-	}
-
-	if config.ListenPort == 0 {
-		log.Warning("prometheus is enabled, but the listen port is empty, using '6060'")
-		config.ListenPort = 6060
-	}
-
 	// Registering prometheus
 	// If in aggregated mode, do not register events associated with a source, to keep the cardinality low
-	if config.Level == "aggregated" {
+	if config.Level == configuration.CFG_METRICS_AGGREGATE {
 		log.Infof("Loading aggregated prometheus collectors")
 		prometheus.MustRegister(globalParserHits, globalParserHitsOk, globalParserHitsKo,
 			globalCsInfo, globalParsingHistogram, globalPourHistogram,
 			leaky.BucketsUnderflow, leaky.BucketsCanceled, leaky.BucketsInstantiation, leaky.BucketsOverflow,
 			v1.LapiRouteHits,
 			leaky.BucketsCurrentCount,
-			cache.CacheMetrics, exprhelpers.RegexpCacheMetrics,
-			waf.WafParsingHistogram, waf.WafReqCounter, waf.WafRuleHits)
+			cache.CacheMetrics, exprhelpers.RegexpCacheMetrics, parser.NodesWlHitsOk, parser.NodesWlHits,
+		)
 	} else {
 		log.Infof("Loading prometheus collectors")
 		prometheus.MustRegister(globalParserHits, globalParserHitsOk, globalParserHitsKo,
@@ -179,14 +177,15 @@ func registerPrometheus(config *csconfig.PrometheusCfg) {
 			globalCsInfo, globalParsingHistogram, globalPourHistogram,
 			v1.LapiRouteHits, v1.LapiMachineHits, v1.LapiBouncerHits, v1.LapiNilDecisions, v1.LapiNonNilDecisions, v1.LapiResponseTime,
 			leaky.BucketsPour, leaky.BucketsUnderflow, leaky.BucketsCanceled, leaky.BucketsInstantiation, leaky.BucketsOverflow, leaky.BucketsCurrentCount,
-			globalActiveDecisions, globalAlerts,
+			globalActiveDecisions, globalAlerts, parser.NodesWlHitsOk, parser.NodesWlHits,
 			cache.CacheMetrics, exprhelpers.RegexpCacheMetrics,
-			waf.WafParsingHistogram, waf.WafReqCounter, waf.WafRuleHits)
+		)
 	}
 }

-func servePrometheus(config *csconfig.PrometheusCfg, dbClient *database.Client, apiReady chan bool, agentReady chan bool) {
+func servePrometheus(config *csconfig.PrometheusCfg, dbClient *database.Client, agentReady chan bool) {
+	<-agentReady
+
 	if !config.Enabled {
 		return
 	}
@@ -194,10 +193,11 @@ func servePrometheus(config *csconfig.PrometheusCfg, dbClient *database.Client,
 	defer trace.CatchPanic("crowdsec/servePrometheus")

 	http.Handle("/metrics", computeDynamicMetrics(promhttp.Handler(), dbClient))

-	<-apiReady
-	<-agentReady
+	log.Debugf("serving metrics after %s ms", time.Since(crowdsecT0))

 	if err := http.ListenAndServe(fmt.Sprintf("%s:%d", config.ListenAddr, config.ListenPort), nil); err != nil {
-		log.Warningf("prometheus: %s", err)
+		// in time machine, we most likely have the LAPI using the port
+		if !flags.haveTimeMachine() {
+			log.Warningf("prometheus: %s", err)
+		}
 	}
 }
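computeDynamicMetrics above wraps promhttp.Handler so that expensive values are refreshed only when /metrics is actually scraped. A generic, runnable sketch of that middleware shape, with a hypothetical gauge standing in for the decision and alert counts:

	package main

	import (
		"net/http"

		"github.com/prometheus/client_golang/prometheus"
		"github.com/prometheus/client_golang/prometheus/promhttp"
	)

	// hypothetical gauge, refreshed on every scrape
	var activeItems = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "example_active_items",
		Help: "Number of active items, refreshed per scrape.",
	})

	// refreshMetrics updates dynamic values, then hands off to the real handler,
	// mirroring the computeDynamicMetrics wrapper in this file.
	func refreshMetrics(next http.Handler) http.HandlerFunc {
		return func(w http.ResponseWriter, r *http.Request) {
			activeItems.Set(42) // the real code queries the database here
			next.ServeHTTP(w, r)
		}
	}

	func main() {
		prometheus.MustRegister(activeItems)
		http.Handle("/metrics", refreshMetrics(promhttp.Handler()))
		_ = http.ListenAndServe("127.0.0.1:2112", nil)
	}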


@@ -3,26 +3,19 @@ package main
 import (
 	"context"
 	"fmt"
-	"net/url"
 	"sync"
 	"time"

-	"github.com/crowdsecurity/go-cs-lib/pkg/version"
+	log "github.com/sirupsen/logrus"

 	"github.com/crowdsecurity/crowdsec/pkg/apiclient"
-	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
-	"github.com/crowdsecurity/crowdsec/pkg/cwhub"
 	leaky "github.com/crowdsecurity/crowdsec/pkg/leakybucket"
 	"github.com/crowdsecurity/crowdsec/pkg/models"
 	"github.com/crowdsecurity/crowdsec/pkg/parser"
 	"github.com/crowdsecurity/crowdsec/pkg/types"
-	"github.com/go-openapi/strfmt"
-	"github.com/pkg/errors"
-	log "github.com/sirupsen/logrus"
 )

 func dedupAlerts(alerts []types.RuntimeAlert) ([]*models.Alert, error) {
 	var dedupCache []*models.Alert

 	for idx, alert := range alerts {
@@ -32,16 +25,21 @@ func dedupAlerts(alerts []types.RuntimeAlert) ([]*models.Alert, error) {
 			dedupCache = append(dedupCache, alert.Alert)
 			continue
 		}

 		for k, src := range alert.Sources {
-			refsrc := *alert.Alert //copy
+			refsrc := *alert.Alert // copy

 			log.Tracef("source[%s]", k)

 			refsrc.Source = &src
 			dedupCache = append(dedupCache, &refsrc)
 		}
 	}

 	if len(dedupCache) != len(alerts) {
 		log.Tracef("went from %d to %d alerts", len(alerts), len(dedupCache))
 	}

 	return dedupCache, nil
 }
@@ -50,72 +48,27 @@ func PushAlerts(alerts []types.RuntimeAlert, client *apiclient.ApiClient) error
 	alertsToPush, err := dedupAlerts(alerts)
 	if err != nil {
-		return errors.Wrap(err, "failed to transform alerts for api")
+		return fmt.Errorf("failed to transform alerts for api: %w", err)
 	}

 	_, _, err = client.Alerts.Add(ctx, alertsToPush)
 	if err != nil {
-		return errors.Wrap(err, "failed sending alert to LAPI")
+		return fmt.Errorf("failed sending alert to LAPI: %w", err)
 	}

 	return nil
 }

 var bucketOverflows []types.Event

-func runOutput(input chan types.Event, overflow chan types.Event, buckets *leaky.Buckets,
-	postOverflowCTX parser.UnixParserCtx, postOverflowNodes []parser.Node, apiConfig csconfig.ApiCredentialsCfg) error {
-	var err error
+func runOutput(input chan types.Event, overflow chan types.Event, buckets *leaky.Buckets, postOverflowCTX parser.UnixParserCtx,
+	postOverflowNodes []parser.Node, client *apiclient.ApiClient) error {
+	var (
+		cache      []types.RuntimeAlert
+		cacheMutex sync.Mutex
+	)
+
 	ticker := time.NewTicker(1 * time.Second)
-	var cache []types.RuntimeAlert
-	var cacheMutex sync.Mutex
-
-	scenarios, err := cwhub.GetInstalledScenariosAsString()
-	if err != nil {
-		return errors.Wrapf(err, "loading list of installed hub scenarios: %s", err)
-	}
-
-	apiURL, err := url.Parse(apiConfig.URL)
-	if err != nil {
-		return errors.Wrapf(err, "parsing api url ('%s'): %s", apiConfig.URL, err)
-	}
-
-	papiURL, err := url.Parse(apiConfig.PapiURL)
-	if err != nil {
-		return errors.Wrapf(err, "parsing polling api url ('%s'): %s", apiConfig.PapiURL, err)
-	}
-
-	password := strfmt.Password(apiConfig.Password)
-
-	Client, err := apiclient.NewClient(&apiclient.Config{
-		MachineID:      apiConfig.Login,
-		Password:       password,
-		Scenarios:      scenarios,
-		UserAgent:      fmt.Sprintf("crowdsec/%s", version.String()),
-		URL:            apiURL,
-		PapiURL:        papiURL,
-		VersionPrefix:  "v1",
-		UpdateScenario: cwhub.GetInstalledScenariosAsString,
-	})
-	if err != nil {
-		return errors.Wrapf(err, "new client api: %s", err)
-	}
-
-	authResp, _, err := Client.Auth.AuthenticateWatcher(context.Background(), models.WatcherAuthRequest{
-		MachineID: &apiConfig.Login,
-		Password:  &password,
-		Scenarios: scenarios,
-	})
-	if err != nil {
-		return errors.Wrapf(err, "authenticate watcher (%s)", apiConfig.Login)
-	}
-
-	if err := Client.GetClient().Transport.(*apiclient.JWTTransport).Expiration.UnmarshalText([]byte(authResp.Expire)); err != nil {
-		return errors.Wrap(err, "unable to parse jwt expiration")
-	}
-
-	Client.GetClient().Transport.(*apiclient.JWTTransport).Token = authResp.Token
-
-	//start the heartbeat service
-	log.Debugf("Starting HeartBeat service")
-	Client.HeartBeat.StartHeartBeat(context.Background(), &outputsTomb)

 LOOP:
 	for {
 		select {
@@ -126,9 +79,9 @@ LOOP:
 			newcache := make([]types.RuntimeAlert, 0)
 			cache = newcache
 			cacheMutex.Unlock()
-			if err := PushAlerts(cachecopy, Client); err != nil {
+			if err := PushAlerts(cachecopy, client); err != nil {
 				log.Errorf("while pushing to api : %s", err)
-				//just push back the events to the queue
+				// just push back the events to the queue
 				cacheMutex.Lock()
 				cache = append(cache, cachecopy...)
 				cacheMutex.Unlock()
@@ -139,19 +92,13 @@ LOOP:
 				cacheMutex.Lock()
 				cachecopy := cache
 				cacheMutex.Unlock()
-				if err := PushAlerts(cachecopy, Client); err != nil {
+				if err := PushAlerts(cachecopy, client); err != nil {
 					log.Errorf("while pushing leftovers to api : %s", err)
 				}
 			}

 			break LOOP
 		case event := <-overflow:
-			//if the Alert is nil, it's to signal bucket is ready for GC, don't track this
-			if dumpStates && event.Overflow.Alert != nil {
-				if bucketOverflows == nil {
-					bucketOverflows = make([]types.Event, 0)
-				}
-				bucketOverflows = append(bucketOverflows, event)
-			}
 			/*if alert is empty and mapKey is present, the overflow is just to cleanup bucket*/
 			if event.Overflow.Alert == nil && event.Overflow.Mapkey != "" {
 				buckets.Bucket_map.Delete(event.Overflow.Mapkey)
@@ -160,9 +107,17 @@ LOOP:
 			/* process post overflow parser nodes */
 			event, err := parser.Parse(postOverflowCTX, event, postOverflowNodes)
 			if err != nil {
-				return fmt.Errorf("postoverflow failed : %s", err)
+				return fmt.Errorf("postoverflow failed: %w", err)
 			}

 			log.Printf("%s", *event.Overflow.Alert.Message)

+			// if the Alert is nil, it's to signal bucket is ready for GC, don't track this
+			// dump after postoveflow processing to avoid missing whitelist info
+			if dumpStates && event.Overflow.Alert != nil {
+				if bucketOverflows == nil {
+					bucketOverflows = make([]types.Event, 0)
+				}
+
+				bucketOverflows = append(bucketOverflows, event)
+			}
+
 			if event.Overflow.Whitelisted {
 				log.Printf("[%s] is whitelisted, skip.", *event.Overflow.Alert.Message)
 				continue
@@ -182,6 +137,6 @@ LOOP:
 		}
 	}

 	ticker.Stop()

 	return nil
 }
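With the client bootstrapping moved out to AuthenticatedLAPIClient, what remains of runOutput is the batching loop: alerts accumulate in a mutex-protected cache and are flushed once per ticker tick. A reduced, runnable sketch of that loop, with illustrative names:

	package main

	import (
		"fmt"
		"sync"
		"time"
	)

	func batch(events <-chan string, flush func([]string)) {
		var (
			cache      []string
			cacheMutex sync.Mutex
		)

		ticker := time.NewTicker(1 * time.Second)
		defer ticker.Stop()

		for {
			select {
			case <-ticker.C:
				cacheMutex.Lock()
				cachecopy := cache
				cache = nil
				cacheMutex.Unlock()

				if len(cachecopy) > 0 {
					flush(cachecopy) // stand-in for PushAlerts
				}
			case ev, ok := <-events:
				if !ok {
					return
				}

				cacheMutex.Lock()
				cache = append(cache, ev)
				cacheMutex.Unlock()
			}
		}
	}

	func main() {
		events := make(chan string)

		go func() {
			events <- "alert-1"
			events <- "alert-2"
			time.Sleep(1500 * time.Millisecond)
			close(events)
		}()

		batch(events, func(alerts []string) { fmt.Println("pushing", alerts) })
	}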


@@ -11,7 +11,6 @@ import (
 )

 func runParse(input chan types.Event, output chan types.Event, parserCTX parser.UnixParserCtx, nodes []parser.Node) error {
-
 LOOP:
 	for {
 		select {
@@ -22,6 +21,13 @@ LOOP:
 			if !event.Process {
 				continue
 			}

+			/*Application security engine is going to generate 2 events:
+			- one that is treated as a log and can go to scenarios
+			- another one that will go directly to LAPI*/
+			if event.Type == types.APPSEC {
+				outputEventChan <- event
+				continue
+			}
+
 			if event.Line.Module == "" {
 				log.Errorf("empty event.Line.Module field, the acquisition module must set it ! : %+v", event.Line)
 				continue
@@ -49,5 +55,6 @@ LOOP:
 			output <- parsed
 		}
 	}

 	return nil
 }


@@ -4,29 +4,30 @@ import (
 	"fmt"
 	"time"

+	"github.com/prometheus/client_golang/prometheus"
+	log "github.com/sirupsen/logrus"
+
 	"github.com/crowdsecurity/crowdsec/pkg/csconfig"
 	leaky "github.com/crowdsecurity/crowdsec/pkg/leakybucket"
 	"github.com/crowdsecurity/crowdsec/pkg/types"
-	"github.com/prometheus/client_golang/prometheus"
-	log "github.com/sirupsen/logrus"
 )

 func runPour(input chan types.Event, holders []leaky.BucketFactory, buckets *leaky.Buckets, cConfig *csconfig.Config) error {
-	var (
-		count int
-	)
+	count := 0

 	for {
-		//bucket is now ready
+		// bucket is now ready
 		select {
 		case <-bucketsTomb.Dying():
 			log.Infof("Bucket routine exiting")
 			return nil
 		case parsed := <-input:
 			startTime := time.Now()

 			count++
 			if count%5000 == 0 {
 				log.Infof("%d existing buckets", leaky.LeakyRoutineCount)
-				//when in forensics mode, garbage collect buckets
+				// when in forensics mode, garbage collect buckets
 				if cConfig.Crowdsec.BucketsGCEnabled {
 					if parsed.MarshaledTime != "" {
 						z := &time.Time{}
@@ -34,26 +35,30 @@ func runPour(input chan types.Event, holders []leaky.BucketFactory, buckets *lea
 							log.Warningf("Failed to unmarshal time from event '%s' : %s", parsed.MarshaledTime, err)
 						} else {
 							log.Warning("Starting buckets garbage collection ...")

 							if err = leaky.GarbageCollectBuckets(*z, buckets); err != nil {
-								return fmt.Errorf("failed to start bucket GC : %s", err)
+								return fmt.Errorf("failed to start bucket GC : %w", err)
 							}
 						}
 					}
 				}
 			}

-			//here we can bucketify with parsed
+			// here we can bucketify with parsed
 			poured, err := leaky.PourItemToHolders(parsed, holders, buckets)
 			if err != nil {
 				log.Errorf("bucketify failed for: %v", parsed)
 				continue
 			}

 			elapsed := time.Since(startTime)
 			globalPourHistogram.With(prometheus.Labels{"type": parsed.Line.Module, "source": parsed.Line.Src}).Observe(elapsed.Seconds())

 			if poured {
 				globalBucketPourOk.Inc()
 			} else {
 				globalBucketPourKo.Inc()
 			}

 			if len(parsed.MarshaledTime) != 0 {
 				if err := lastProcessedItem.UnmarshalText([]byte(parsed.MarshaledTime)); err != nil {
 					log.Warningf("failed to unmarshal time from event : %s", err)
Some files were not shown because too many files have changed in this diff.