Compare commits
103 commits
Commits included (SHA1):

4d68d1f08e
197aa5fdaa
f3994d8b89
222ae0d960
701e214929
1d857c7f00
d5f327755c
9193dbe56b
289f311c4f
465bd4dc57
f9815662c1
74d88787c8
9215b011f8
748adfebd9
1f778bc94c
6abed54756
b5bc886561
1af976a093
3451dadde5
b961a2b74a
774ab4f1e8
26714cee49
1efa0ed408
97683aa9ba
be678a126e
662639d905
03b36c958c
3314c4f7de
1f60236985
32c68874c5
ed7859eab8
003ff10d07
8b4a49597e
01cf61deab
619d3781f5
025494ed9c
824762d18d
9243ea9374
132cfe6e32
bc72c58ae5
866c185a08
97a84bc2b8
6c1a98dc4b
a03707e5f8
913a4a4585
c30a18a30b
286a4bd9e7
ddf8e857fd
4d5ff0210b
89cd9fb611
22a6270657
0a970f4bb2
9b111e2493
b8feb77ef4
3c44604316
1e1a054686
d584a41e60
56074ae035
30631b0fc5
84da4e6000
58ded74181
3fd2e3efa9
7cda439c80
9700b59cf8
e7ee4bc5b4
dc787b67b4
09cf3b3755
fc04c8d723
2aa4229e0b
e1be9a5eeb
91fc74b408
d7244ed920
e0c0b5053c
268b31685d
ab71abbc7c
87e6df9e28
558f2db31f
c23dd701f0
0a7b9d5089
1eddf9a220
78d71498fa
b41a0ad80e
78569e9a88
8cb360fe36
f534a530d4
2abcafd670
3c3d62ac27
d829d74048
2aca421415
99474b348f
8bebaf6a48
a0d44f3d05
55dbd9d93c
3503b5a1f0
ddcc929a13
9004bb6e8e
69d8fdef99
eeee712cf3
8f42d97b54
04f4910b51
e5ffae7791
6e40c69cb5
c0e54f87d7
40 changed files with 893 additions and 559 deletions
.gitattributes (vendored, new file, 5 additions)

@@ -0,0 +1,5 @@
# All text should use Unix-style Line-endings
* text eol=lf

# Except mta-sts.txt (RFC 8461)
mta-sts.txt text eol=crlf
CHANGELOG.md (57 changes)

@@ -1,6 +1,63 @@
CHANGELOG
=========

Version 60.1 (October 30, 2022)
-------------------------------

* A setup issue where the DNS server nsd isn't running at the end of setup is (hopefully) fixed.
* Nextcloud is updated to 23.0.10 (contacts to 4.2.2, calendar to 3.5.1).

Version 60 (October 11, 2022)
-----------------------------

This is the first release for Ubuntu 22.04.

**Before upgrading**, you must **first upgrade your existing Ubuntu 18.04 box to Mail-in-a-Box v0.51 or later**, if you haven't already done so. That may not be possible after Ubuntu 18.04 reaches its end of life in April 2023, so please complete the upgrade well before then. (If you are not using Nextcloud's contacts or calendar, you can migrate to the latest version of Mail-in-a-Box from any previous version.)

For complete upgrade instructions, see:

https://discourse.mailinabox.email/t/version-60-for-ubuntu-22-04-is-about-to-be-released/9558

No major features of Mail-in-a-Box have changed in this release, although some minor fixes were made.

With the newer version of Ubuntu the following software packages we use are updated:

* dovecot is upgraded to 2.3.16, postfix to 3.6.4, opendmark to 1.4 (which adds ARC-Authentication-Results headers), and spampd to 2.53 (alleviating a mail delivery rate limiting bug).
* Nextcloud is upgraded to 23.0.4 (contacts to 4.2.0, calendar to 3.5.0).
* Roundcube is upgraded to 1.6.0.
* certbot is upgraded to 1.21 (via the Ubuntu repository instead of a PPA).
* fail2ban is upgraded to 0.11.2.
* nginx is upgraded to 1.18.
* PHP is upgraded from 7.2 to 8.0.

Also:

* Roundcube's login session cookie was tightened. Existing sessions may require a manual logout.
* Moved Postgrey's database under $STORAGE_ROOT.

Version 57a (June 19, 2022)
---------------------------

* The Backblaze backups fix posted in Version 57 was incomplete. It's now fixed.

Version 57 (June 12, 2022)
--------------------------

Setup:

* Fixed issue upgrading from Mail-in-a-Box v0.40-v0.50 because of a changed URL that Nextcloud is downloaded from.

Backups:

* Fixed S3 backups which broke with duplicity 0.8.23.
* Fixed Backblaze backups which broke with latest b2sdk package by rolling back its version.

Control panel:

* Fixed spurious changes in system status checks messages by sorting DNSSEC DS records.
* Fixed fail2ban lockout over IPv6 from excessive loads of the system status checks.
* Fixed an incorrect IPv6 system status check message.

Version 56 (January 19, 2022)
-----------------------------
README.md (29 changes)

@@ -1,5 +1,6 @@
# Power Mail-in-a-Box
**[Installation](#installation)** (current version: v56.4)
## **[Installation](#installation)** (current version: v60.5)
## **[Upgrading Quick Start](#upgrading)**

[](https://ko-fi.com/davness)

@@ -38,13 +39,13 @@ The machine this appliance will be installed on needs to have the following spec
- 512MB of RAM (**at least 1GB** is recommended);
- 10GB of disk;
- **One of the following operating systems:**
- - Debian GNU/Linux 10 (Buster)
- - Debian GNU/Linux 11 (Bullseye)
- - Ubuntu LTS 20.04 (Focal Fossa)
- - Ubuntu LTS 22.04 (Jammy Jellyfish)

**Ubuntu LTS 18.04 (Bionic Beaver) and earlier versions are not supported.**

**Debian 9 (Stretch) and earlier versions are not supported.**
## Legacy Support
The following distributions are no longer supported for the latest version, but they used to be supported at a earlier time:
- **Debian 10 (Buster)** <= **v56.5**

<small>_These network requirements are usually not provided by residential ISP's. They are not **strictly required** for Power Mail-in-a-Box to install, but it will take more work to get it running as intended._</small>
- Static, public IPv4 (most residential connections **do not** provide static addresses);

@@ -80,3 +81,21 @@ sudo dpkg-reconfigure locales
```
curl -L https://power-mailinabox.net/setup.sh | sudo bash
```

# Upgrading

To upgrade an existing box to the latest version, run the same command as you do to perform a new installation:

```
curl -L https://power-mailinabox.net/setup.sh | sudo bash
```

## Installing or upgrading to a different version
If for some reason you wish to install a different version (for example, an older version for a workaround, or a beta/release candidate version for testing), you can use the following command.

```
curl -L https://power-mailinabox.net/<VERSION>/setup.sh | sudo bash
```
Where `<VERSION>` is the version you want to install. (**Example:** `v60.0`).

> ⚠️ **Downgrading might not always be possible and is not supported!** Make sure you know what you're doing before doing so.
Vagrantfile (vendored, 17 changes)

@@ -15,20 +15,24 @@ machines = [
{
'iso' => "debian/bullseye64",
'host' => "bullseye"
}
},
{
'iso' => "generic/ubuntu2204",
'host' => "jammy"
},
]

Vagrant.configure("2") do |config|
config.vm.provider :virtualbox do |vb|
vb.customize ["modifyvm", :id, "--cpus", 1, "--memory", 512]
vb.customize ["modifyvm", :id, "--cpus", 1, "--memory", 768]
end
config.vm.provider :libvirt do |v|
v.memory = 512
v.memory = 768
v.cpus = 1
v.nested = true
end
config.vm.provider :kvm do |kvm|
kvm.memory_size = '512m'
kvm.memory_size = '768m'
end

# Network config: Since it's a mail server, the machine must be connected

@@ -45,9 +49,8 @@ Vagrant.configure("2") do |config|
m.vm.network "private_network", ip: "192.168.168.#{ip+n}"

m.vm.provision "shell", :inline => <<-SH
# Make sure we have IPv6 loopback (::1)
sysctl -w net.ipv6.conf.lo.disable_ipv6=0
echo -e "fs.inotify.max_user_instances=1024\nnet.ipv6.conf.lo.disable_ipv6=0" > /etc/sysctl.conf
git config --global --add safe.directory /vagrant

# Set environment variables so that the setup script does
# not ask any questions during provisioning. We'll let the
# machine figure out its own public IP.
@@ -15,7 +15,7 @@ info:
license:
name: CC0 1.0 Universal
url: https://creativecommons.org/publicdomain/zero/1.0/legalcode
version: 56.4
version: 60.5
x-logo:
url: https://mailinabox.email/static/logo.png
altText: Mail-in-a-Box logo
@@ -5,7 +5,7 @@
# Whitelist our own IP addresses. 127.0.0.1/8 is the default. But our status checks
# ping services over the public interface so we should whitelist that address of
# ours too. The string is substituted during installation.
ignoreip = 127.0.0.1/8 PUBLIC_IP
ignoreip = 127.0.0.1/8 PUBLIC_IP ::1 PUBLIC_IPV6

[dovecot]
enabled = true
@@ -4,6 +4,7 @@ After=multi-user.target

[Service]
Type=idle
IgnoreSIGPIPE=False
ExecStart=/usr/local/lib/mailinabox/start

[Install]
@@ -1,4 +1,4 @@
version: STSv1
mode: MODE
mx: PRIMARY_HOSTNAME
max_age: 604800
version: STSv1
mode: MODE
mx: PRIMARY_HOSTNAME
max_age: 604800
@@ -31,21 +31,8 @@ class AuthService:
def init_system_api_key(self):
"""Write an API key to a local file so local processes can use the API"""

def create_file_with_mode(path, mode):
# Based on answer by A-B-B: http://stackoverflow.com/a/15015748
old_umask = os.umask(0)
try:
return os.fdopen(os.open(path, os.O_WRONLY | os.O_CREAT, mode),
'w')
finally:
os.umask(old_umask)

self.key = secrets.token_hex(32)

os.makedirs(os.path.dirname(self.key_path), exist_ok=True)

with create_file_with_mode(self.key_path, 0o640) as key_file:
key_file.write(self.key + '\n')
with open(self.key_path, 'r') as file:
self.key = file.read()

def authenticate(self, request, env, login_only=False, logout=False):
"""Test if the HTTP Authorization header's username matches the system key, a session key,
@@ -20,24 +20,7 @@ import dateutil.tz
import rtyaml
from exclusiveprocess import Lock, CannotAcquireLock

from utils import load_environment, shell, wait_for_service, fix_boto, get_php_version, get_os_code

def rsync_ssh_options(port=22, direct=False):
# Just in case we pass a string
try:
port = int(port)
except Exception:
port = 22

if direct:
return f"/usr/bin/ssh -oStrictHostKeyChecking=no -oBatchMode=yes -p {port} -i /root/.ssh/id_rsa_miab"
else:
return [
f"--ssh-options= -i /root/.ssh/id_rsa_miab -p {port}",
f"--rsync-options= -e \"/usr/bin/ssh -oStrictHostKeyChecking=no -oBatchMode=yes -p {port} -i /root/.ssh/id_rsa_miab\"",
]

from utils import load_environment, shell, wait_for_service, get_php_version

def backup_status(env):
# If backups are disabled, return no status.

@@ -87,20 +70,15 @@ def backup_status(env):
"volumes": int(keys[2]),
}

code, collection_status = shell(
'check_output',
[
"/usr/bin/duplicity",
"collection-status",
"--archive-dir",
backup_cache_dir,
"--gpg-options",
"--cipher-algo=AES256",
"--log-fd",
"1",
config["target"],
] + rsync_ssh_options(port=config["target_rsync_port"]),
get_env(env),
code, collection_status = shell('check_output', [
"/usr/local/bin/duplicity",
"collection-status",
"--archive-dir", backup_cache_dir,
"--gpg-options", "--cipher-algo=AES256",
"--log-fd", "1",
get_duplicity_target_url(config),
] + get_duplicity_additional_args(env),
get_duplicity_env_vars(env),
trap=True)
if code != 0:
# Command failed. This is likely due to an improperly configured remote

@@ -249,8 +227,51 @@ def get_passphrase(env):

return passphrase

def get_duplicity_target_url(config):
target = config["target"]

def get_env(env):
if get_target_type(config) == "s3":
from urllib.parse import urlsplit, urlunsplit
target = list(urlsplit(target))

# Although we store the S3 hostname in the target URL,
# duplicity no longer accepts it in the target URL. The hostname in
# the target URL must be the bucket name. The hostname is passed
# via get_duplicity_additional_args. Move the first part of the
# path (the bucket name) into the hostname URL component, and leave
# the rest for the path.
target_bucket = target[2].lstrip('/').split('/', 1)
target[1] = target_bucket[0]
target[2] = target_bucket[1] if len(target_bucket) > 1 else ''

target = urlunsplit(target)

return target

def get_duplicity_additional_args(env):
config = get_backup_config(env)
port = 0

try:
port = int(config["target_rsync_port"])
except Exception:
port = 22

if get_target_type(config) == 'rsync':
return [
f"--ssh-options= -i /root/.ssh/id_rsa_miab -p {port}",
f"--rsync-options= -e \"/usr/bin/ssh -oStrictHostKeyChecking=no -oBatchMode=yes -p {port} -i /root/.ssh/id_rsa_miab\"",
]
elif get_target_type(config) == 's3':
# See note about hostname in get_duplicity_target_url.
from urllib.parse import urlsplit, urlunsplit
target = urlsplit(config["target"])
endpoint_url = urlunsplit(("https", target.netloc, '', '', ''))
return ["--s3-endpoint-url", endpoint_url]

return []

def get_duplicity_env_vars(env):
config = get_backup_config(env)

env = {"PASSPHRASE": get_passphrase(env)}

@@ -319,6 +340,7 @@ def perform_backup(full_backup, user_initiated=False):
service_command(php_fpm, "stop", quit=True)
service_command("postfix", "stop", quit=True)
service_command("dovecot", "stop", quit=True)
service_command("postgrey", "stop", quit=True)

# Execute a pre-backup script that copies files outside the homedir.
# Run as the STORAGE_USER user, not as root. Pass our settings in

@@ -334,14 +356,21 @@ def perform_backup(full_backup, user_initiated=False):
# after the first backup. See #396.
try:
shell('check_call', [
"/usr/bin/duplicity", "full" if full_backup else "incr",
"--verbosity", "warning", "--no-print-statistics", "--archive-dir",
backup_cache_dir, "--exclude", backup_root, "--volsize", "250",
"--gpg-options", "--cipher-algo=AES256", env["STORAGE_ROOT"],
config["target"], "--allow-source-mismatch"
] + rsync_ssh_options(port=config["target_rsync_port"]), get_env(env))
"/usr/local/bin/duplicity",
"full" if full_backup else "incr",
"--verbosity", "warning", "--no-print-statistics",
"--archive-dir", backup_cache_dir,
"--exclude", backup_root,
"--volsize", "250",
"--gpg-options", "--cipher-algo=AES256",
env["STORAGE_ROOT"],
get_duplicity_target_url(config),
"--allow-source-mismatch"
] + get_duplicity_additional_args(env),
get_duplicity_env_vars(env))
finally:
# Start services again.
service_command("postgrey", "start", quit=False)
service_command("dovecot", "start", quit=False)
service_command("postfix", "start", quit=False)
service_command(php_fpm, "start", quit=False)

@@ -349,10 +378,15 @@ def perform_backup(full_backup, user_initiated=False):
# Remove old backups. This deletes all backup data no longer needed
# from more than 3 days ago.
shell('check_call', [
"/usr/bin/duplicity", "remove-older-than",
"%dD" % config["min_age_in_days"], "--verbosity", "error",
"--archive-dir", backup_cache_dir, "--force", config["target"]
] + rsync_ssh_options(port=config["target_rsync_port"]), get_env(env))
"/usr/local/bin/duplicity",
"remove-older-than",
"%dD" % config["min_age_in_days"],
"--verbosity", "error",
"--archive-dir", backup_cache_dir,
"--force",
get_duplicity_target_url(config)
] + get_duplicity_additional_args(env),
get_duplicity_env_vars(env))

# From duplicity's manual:
# "This should only be necessary after a duplicity session fails or is

@@ -360,9 +394,14 @@ def perform_backup(full_backup, user_initiated=False):
# That may be unlikely here but we may as well ensure we tidy up if
# that does happen - it might just have been a poorly timed reboot.
shell('check_call', [
"/usr/bin/duplicity", "cleanup", "--verbosity", "error",
"--archive-dir", backup_cache_dir, "--force", config["target"]
] + rsync_ssh_options(port=config["target_rsync_port"]), get_env(env))
"/usr/local/bin/duplicity",
"cleanup",
"--verbosity", "error",
"--archive-dir", backup_cache_dir,
"--force",
get_duplicity_target_url(config)
] + get_duplicity_additional_args(env),
get_duplicity_env_vars(env))

# Change ownership of backups to the user-data user, so that the after-bcakup
# script can access them.

@@ -399,33 +438,28 @@ def run_duplicity_verification():
backup_cache_dir = os.path.join(backup_root, 'cache')

shell('check_call', [
"/usr/bin/duplicity",
"/usr/local/bin/duplicity",
"--verbosity",
"info",
"verify",
"--compare-data",
"--archive-dir",
backup_cache_dir,
"--exclude",
backup_root,
config["target"],
"--archive-dir", backup_cache_dir,
"--exclude", backup_root,
get_duplicity_target_url(config),
env["STORAGE_ROOT"],
] + rsync_ssh_options(port=config["target_rsync_port"]), get_env(env))

] + get_duplicity_additional_args(env), get_duplicity_env_vars(env))

def run_duplicity_restore(args):
env = load_environment()
config = get_backup_config(env)
backup_cache_dir = os.path.join(env["STORAGE_ROOT"], 'backup', 'cache')
shell('check_call', [
"/usr/bin/duplicity",
"/usr/local/bin/duplicity",
"restore",
"--archive-dir",
backup_cache_dir,
config["target"],
] + rsync_ssh_options(port=config["target_rsync_port"]) + args,
get_env(env))

"--archive-dir", backup_cache_dir,
get_duplicity_target_url(config),
] + get_duplicity_additional_args(env) + args,
get_duplicity_env_vars(env))

def list_target_files(config):
import urllib.parse

@@ -450,7 +484,7 @@ def list_target_files(config):

rsync_command = [
'rsync', '-e',
rsync_ssh_options(config["target_rsync_port"], direct=True),
f"/usr/bin/ssh -oStrictHostKeyChecking=no -oBatchMode=yes -p {int(config.get('target_rsync_port', 22))} -i /root/.ssh/id_rsa_miab",
'--list-only', '-r',
rsync_target.format(host=target.netloc, path=target_path)
]

@@ -486,28 +520,13 @@ def list_target_files(config):
"Connection to rsync host failed: {}".format(reason))

elif target.scheme == "s3":
# match to a Region
fix_boto() # must call prior to importing boto
import boto.s3
from boto.exception import BotoServerError
custom_region = False
for region in boto.s3.regions():
if region.endpoint == target.hostname:
break
else:
# If region is not found this is a custom region
custom_region = True
import boto3.s3
from botocore.exceptions import ClientError

# separate bucket from path in target
bucket = target.path[1:].split('/')[0]
path = '/'.join(target.path[1:].split('/')[1:]) + '/'

# Create a custom region with custom endpoint
if custom_region:
from boto.s3.connection import S3Connection
region = boto.s3.S3RegionInfo(name=bucket,
endpoint=target.hostname,
connection_cls=S3Connection)

# If no prefix is specified, set the path to '', otherwise boto won't list the files
if path == '/':
path = ''

@@ -517,34 +536,22 @@ def list_target_files(config):

# connect to the region & bucket
try:
conn = region.connect(aws_access_key_id=config["target_user"],
aws_secret_access_key=config["target_pass"])
bucket = conn.get_bucket(bucket)
except BotoServerError as e:
if e.status == 403:
raise ValueError("Invalid S3 access key or secret access key.")
elif e.status == 404:
raise ValueError("Invalid S3 bucket name.")
elif e.status == 301:
raise ValueError("Incorrect region for this bucket.")
raise ValueError(e.reason)

return [(key.name[len(path):], key.size)
for key in bucket.list(prefix=path)]
s3 = boto3.client('s3', \
endpoint_url=f'https://{target.hostname}', \
aws_access_key_id=config['target_user'], \
aws_secret_access_key=config['target_pass'])
bucket_objects = s3.list_objects_v2(Bucket=bucket, Prefix=path).get("Contents", [])
backup_list = [(key['Key'][len(path):], key['Size']) for key in bucket_objects]
except ClientError as e:
raise ValueError(e)
return backup_list
elif target.scheme == 'b2':
InMemoryAccountInfo = None
B2Api = None
NonExistentBucket = None

if get_os_code() == "Debian10":
# WARNING: This is deprecated code using a legacy library.
# We need it because Debian 10 ships with an old version of Duplicity
from b2.account_info import InMemoryAccountInfo
from b2.api import B2Api
from b2.exception import NonExistentBucket
else:
from b2sdk.v1 import InMemoryAccountInfo, B2Api
from b2sdk.v1.exception import NonExistentBucket
from b2sdk.v1 import InMemoryAccountInfo, B2Api
from b2sdk.v1.exception import NonExistentBucket

info = InMemoryAccountInfo()
b2_api = B2Api(info)

@@ -569,8 +576,7 @@ def list_target_files(config):
raise ValueError(config["target"])

def backup_set_custom(env, target, target_user, target_pass, target_rsync_port,
min_age):
def backup_set_custom(env, target, target_user, target_pass, target_rsync_port, min_age):
config = get_backup_config(env, for_save=True)

# min_age must be an int
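The hunks above replace the old `rsync_ssh_options`/`get_env` helpers with `get_duplicity_target_url`, `get_duplicity_additional_args`, and `get_duplicity_env_vars`. For S3 targets, the comments explain that duplicity now wants the bucket name in the host position of the target URL, with the real endpoint passed separately via `--s3-endpoint-url`. A minimal standalone sketch of that URL rewrite, using a made-up target URL rather than anything from the diff:

```python
from urllib.parse import urlsplit, urlunsplit

def split_s3_target(target):
    # Mirror of the transformation described in the diff's comments: move the
    # first path component (the bucket) into the host position, and derive the
    # HTTPS endpoint from the original hostname.
    parts = list(urlsplit(target))  # [scheme, netloc, path, query, fragment]
    endpoint_url = urlunsplit(("https", parts[1], "", "", ""))
    bucket_and_prefix = parts[2].lstrip("/").split("/", 1)
    parts[1] = bucket_and_prefix[0]
    parts[2] = bucket_and_prefix[1] if len(bucket_and_prefix) > 1 else ""
    return urlunsplit(parts), endpoint_url

url, endpoint = split_s3_target("s3://s3.eu-central-1.amazonaws.com/my-backups/box1")
print(url)       # s3://my-backups/box1  (what gets passed to duplicity as the target)
print(endpoint)  # https://s3.eu-central-1.amazonaws.com  (passed via --s3-endpoint-url)
```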
@@ -56,71 +56,70 @@ app = Flask(__name__,

# Decorator to protect views that require a user with 'admin' privileges.

def authorized_personnel_only(admin = True):
def gatekeeper(viewfunc):

def authorized_personnel_only(viewfunc):
@wraps(viewfunc)
def newview(*args, **kwargs):
# Authenticate the passed credentials, which is either the API key or a username:password pair
# and an optional X-Auth-Token token.
error = None
privs = []

@wraps(viewfunc)
def newview(*args, **kwargs):
# Authenticate the passed credentials, which is either the API key or a username:password pair
# and an optional X-Auth-Token token.
error = None
privs = []
try:
email, privs = auth_service.authenticate(request, env)

try:
email, privs = auth_service.authenticate(request, env)
except ValueError as e:
# Write a line in the log recording the failed login, unless no authorization header
# was given which can happen on an initial request before a 403 response.
if "Authorization" in request.headers:
log_failed_login(request)
# Store the email address of the logged in user so it can be accessed
# from the API methods that affect the calling user.
request.user_email = email
request.user_privs = privs

# Authentication failed.
error = str(e)
if not admin or "admin" in privs:
return viewfunc(*args, **kwargs)
else:
error = "You are not an administrator."
except ValueError as e:
# Write a line in the log recording the failed login, unless no authorization header
# was given which can happen on an initial request before a 403 response.
if "Authorization" in request.headers:
log_failed_login(request)

# Authorized to access an API view?
if "admin" in privs:
# Store the email address of the logged in user so it can be accessed
# from the API methods that affect the calling user.
request.user_email = email
request.user_privs = privs
# Authentication failed.
error = str(e)

# Call view func.
return viewfunc(*args, **kwargs)
# Not authorized. Return a 401 (send auth) and a prompt to authorize by default.
status = 401
headers = {
'WWW-Authenticate':
'Basic realm="{0}"'.format(auth_service.auth_realm),
'X-Reason': error,
}

if not error:
error = "You are not an administrator."
if request.headers.get('X-Requested-With') == 'XMLHttpRequest':
# Don't issue a 401 to an AJAX request because the user will
# be prompted for credentials, which is not helpful.
status = 403
headers = None

# Not authorized. Return a 401 (send auth) and a prompt to authorize by default.
status = 401
headers = {
'WWW-Authenticate':
'Basic realm="{0}"'.format(auth_service.auth_realm),
'X-Reason': error,
}
if request.headers.get('Accept') in (None, "", "*/*"):
# Return plain text output.
return Response(error + "\n",
status=status,
mimetype='text/plain',
headers=headers)
else:
# Return JSON output.
return Response(json.dumps({
"status": "error",
"reason": error,
}) + "\n",
status=status,
mimetype='application/json',
headers=headers)

if request.headers.get('X-Requested-With') == 'XMLHttpRequest':
# Don't issue a 401 to an AJAX request because the user will
# be prompted for credentials, which is not helpful.
status = 403
headers = None
return newview

if request.headers.get('Accept') in (None, "", "*/*"):
# Return plain text output.
return Response(error + "\n",
status=status,
mimetype='text/plain',
headers=headers)
else:
# Return JSON output.
return Response(json.dumps({
"status": "error",
"reason": error,
}) + "\n",
status=status,
mimetype='application/json',
headers=headers)

return newview
return gatekeeper


@app.errorhandler(401)
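In the hunk above, `authorized_personnel_only` changes from a plain decorator into a decorator factory that takes an `admin` flag, which is why the later hunks rewrite every call site from `@authorized_personnel_only` to `@authorized_personnel_only()` or `@authorized_personnel_only(admin = False)`. A minimal, self-contained sketch of that pattern; the authentication step is faked here so the snippet can run on its own, and only the shape of the factory is taken from the diff:

```python
from functools import wraps

def authorized_personnel_only(admin=True):
    def gatekeeper(viewfunc):
        @wraps(viewfunc)
        def newview(*args, **kwargs):
            # Stand-in for the real auth_service.authenticate() call: privileges
            # arrive as a keyword argument only for the sake of this sketch.
            privs = kwargs.pop("privs", [])
            if not admin or "admin" in privs:
                return viewfunc(*args, **kwargs)
            return "You are not an administrator.", 401
        return newview
    return gatekeeper

@authorized_personnel_only()                 # admin required (the common case)
def remove_user():
    return "removed"

@authorized_personnel_only(admin=False)      # any authenticated user, e.g. password/MFA routes
def change_own_password():
    return "changed"

print(remove_user(privs=["admin"]))          # "removed"
print(change_own_password(privs=[]))         # "changed", even without the admin privilege
```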
@@ -147,9 +146,9 @@ def index():
no_users_exist = (len(get_mail_users(env)) == 0)
no_admins_exist = (len(get_admins(env)) == 0)

utils.fix_boto() # must call prior to importing boto
import boto.s3
backup_s3_hosts = [(r.name, r.endpoint) for r in boto.s3.regions()]
import boto3.s3
backup_s3_hosts = [(r, f"s3.{r}.amazonaws.com") for r in boto3.session.Session().get_available_regions('s3')]

return render_template(
'index.html',
@@ -213,7 +212,7 @@ def logout():

@app.route('/mail/users')
@authorized_personnel_only
@authorized_personnel_only()
def mail_users():
if request.args.get("format", "") == "json":
return json_response(get_mail_users_ex(env, with_archived=True))

@@ -222,7 +221,7 @@ def mail_users():

@app.route('/mail/users/add', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def mail_users_add():
quota = request.form.get('quota', get_default_quota(env))
try:

@@ -234,7 +233,7 @@ def mail_users_add():

@app.route('/mail/users/quota', methods=['GET'])
@authorized_personnel_only
@authorized_personnel_only()
def get_mail_users_quota():
email = request.values.get('email', '')
quota = get_mail_quota(email, env)

@@ -246,7 +245,7 @@ def get_mail_users_quota():

@app.route('/mail/users/quota', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def mail_users_quota():
try:
return set_mail_quota(request.form.get('email', ''),

@@ -256,8 +255,13 @@ def mail_users_quota():

@app.route('/mail/users/password', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only(admin = False)
def mail_users_password():
if "admin" not in request.user_privs:
# Non-admins can only change their own password.
if request.form.get('email', '') != request.user_email:
return ("You are not an administrator; you can only change your own password!", 403)

try:
return set_mail_password(request.form.get('email', ''),
request.form.get('password', ''), env)

@@ -266,13 +270,13 @@ def mail_users_password():

@app.route('/mail/users/remove', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def mail_users_remove():
return remove_mail_user(request.form.get('email', ''), env)

@app.route('/mail/users/privileges')
@authorized_personnel_only
@authorized_personnel_only()
def mail_user_privs():
privs = get_mail_user_privileges(request.args.get('email', ''), env)
if isinstance(privs, tuple):

@@ -281,7 +285,7 @@ def mail_user_privs():

@app.route('/mail/users/privileges/add', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def mail_user_privs_add():
return add_remove_mail_user_privilege(request.form.get('email', ''),
request.form.get('privilege', ''),

@@ -289,7 +293,7 @@ def mail_user_privs_add():

@app.route('/mail/users/privileges/remove', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def mail_user_privs_remove():
return add_remove_mail_user_privilege(request.form.get('email', ''),
request.form.get('privilege', ''),

@@ -297,7 +301,7 @@ def mail_user_privs_remove():

@app.route('/mail/aliases')
@authorized_personnel_only
@authorized_personnel_only()
def mail_aliases():
if request.args.get("format", "") == "json":
return json_response(get_mail_aliases_ex(env))

@@ -308,7 +312,7 @@ def mail_aliases():

@app.route('/mail/aliases/add', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def mail_aliases_add():
return add_mail_alias(request.form.get('address', ''),
request.form.get('forwards_to', ''),

@@ -319,13 +323,13 @@ def mail_aliases_add():

@app.route('/mail/aliases/remove', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def mail_aliases_remove():
return remove_mail_alias(request.form.get('address', ''), env)

@app.route('/mail/domains')
@authorized_personnel_only
@authorized_personnel_only()
def mail_domains():
return "".join(x + "\n" for x in get_mail_domains(env))

@@ -334,14 +338,14 @@ def mail_domains():

@app.route('/dns/zones')
@authorized_personnel_only
@authorized_personnel_only()
def dns_zones():
from dns_update import get_dns_zones
return json_response([z[0] for z in get_dns_zones(env)])

@app.route('/dns/update', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def dns_update():
from dns_update import do_dns_update
try:

@@ -351,7 +355,7 @@ def dns_update():

@app.route('/dns/secondary-nameserver')
@authorized_personnel_only
@authorized_personnel_only()
def dns_get_secondary_nameserver():
from dns_update import get_custom_dns_config, get_secondary_dns
return json_response({

@@ -361,7 +365,7 @@ def dns_get_secondary_nameserver():

@app.route('/dns/secondary-nameserver', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def dns_set_secondary_nameserver():
from dns_update import set_secondary_dns
try:

@@ -375,7 +379,7 @@ def dns_set_secondary_nameserver():

@app.route('/dns/custom')
@authorized_personnel_only
@authorized_personnel_only()
def dns_get_records(qname=None, rtype=None):
# Get the current set of custom DNS records.
from dns_update import get_custom_dns_config, get_dns_zones

@@ -431,7 +435,7 @@ def dns_get_records(qname=None, rtype=None):
@app.route('/dns/custom/<qname>', methods=['GET', 'POST', 'PUT', 'DELETE'])
@app.route('/dns/custom/<qname>/<rtype>',
methods=['GET', 'POST', 'PUT', 'DELETE'])
@authorized_personnel_only
@authorized_personnel_only()
def dns_set_record(qname, rtype="A"):
from dns_update import do_dns_update, set_custom_dns_record
try:

@@ -498,14 +502,14 @@ def dns_set_record(qname, rtype="A"):

@app.route('/dns/dump')
@authorized_personnel_only
@authorized_personnel_only()
def dns_get_dump():
from dns_update import build_recommended_dns
return json_response(build_recommended_dns(env))

@app.route('/dns/zonefile/<zone>')
@authorized_personnel_only
@authorized_personnel_only()
def dns_get_zonefile(zone):
from dns_update import get_dns_zonefile
return Response(get_dns_zonefile(zone, env),

@@ -517,7 +521,7 @@ def dns_get_zonefile(zone):

@app.route('/ssl/status')
@authorized_personnel_only
@authorized_personnel_only()
def ssl_get_status():
from ssl_certificates import get_certificates_to_provision
from web_update import get_web_domains_info, get_web_domains

@@ -557,7 +561,7 @@ def ssl_get_status():

@app.route('/ssl/csr/<domain>', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def ssl_get_csr(domain):
from ssl_certificates import create_csr
ssl_private_key = os.path.join(

@@ -567,7 +571,7 @@ def ssl_get_csr(domain):

@app.route('/ssl/install', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def ssl_install_cert():
from web_update import get_web_domains
from ssl_certificates import install_cert

@@ -580,7 +584,7 @@ def ssl_install_cert():

@app.route('/ssl/provision', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def ssl_provision_certs():
from ssl_certificates import provision_certificates
requests = provision_certificates(env, limit_domains=None)

@@ -591,7 +595,7 @@ def ssl_provision_certs():

@app.route('/mfa/status', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only(admin = False)
def mfa_get_status():
# Anyone accessing this route is an admin, and we permit them to
# see the MFA status for any user if they submit a 'user' form
@@ -599,6 +603,9 @@ def mfa_get_status():
# only provision for themselves.
# user field if given, otherwise the user making the request
email = request.form.get('user', request.user_email)
if "admin" not in request.user_privs and email != request.user_email:
return ("You are not an administrator; you can only view your own MFA status!", 403)

try:
resp = {"enabled_mfa": get_public_mfa_state(email, env)}
if email == request.user_email:

@@ -609,7 +616,7 @@ def mfa_get_status():

@app.route('/mfa/totp/enable', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only(admin = False)
def totp_post_enable():
secret = request.form.get('secret')
token = request.form.get('token')

@@ -625,13 +632,16 @@ def totp_post_enable():

@app.route('/mfa/disable', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only(admin = False)
def totp_post_disable():
# Anyone accessing this route is an admin, and we permit them to
# disable the MFA status for any user if they submit a 'user' form
# field.
# user field if given, otherwise the user making the request
email = request.form.get('user', request.user_email)
if "admin" not in request.user_privs and email != request.user_email:
return ("You are not an administrator; you can only view your own MFA status!", 403)

try:
result = disable_mfa(email,
request.form.get('mfa-id') or None,

@@ -648,14 +658,14 @@ def totp_post_disable():

@app.route('/web/domains')
@authorized_personnel_only
@authorized_personnel_only()
def web_get_domains():
from web_update import get_web_domains_info
return json_response(get_web_domains_info(env))

@app.route('/web/update', methods=['POST'])
@authorized_personnel_only
@authorized_personnel_only()
def web_update():
from web_update import do_web_update
try:

@@ -668,7 +678,7 @@ def web_update():

@app.route('/system/version', methods=["GET"])
@authorized_personnel_only
@authorized_personnel_only()
def system_version():
from status_checks import what_version_is_this
try:

@@ -678,7 +688,7 @@ def system_version():

@app.route('/system/latest-upstream-version', methods=["POST"])
@authorized_personnel_only
@authorized_personnel_only()
def system_latest_upstream_version():
from status_checks import get_latest_miab_version
try:

@@ -688,7 +698,7 @@ def system_latest_upstream_version():

@app.route('/system/status', methods=["POST"])
@authorized_personnel_only
@authorized_personnel_only()
def system_status():
from status_checks import run_checks

@@ -730,11 +740,13 @@ def system_status():
# Create a temporary pool of processes for the status checks
with multiprocessing.pool.Pool(processes=5) as pool:
run_checks(False, env, output, pool)
pool.close()
pool.join()
return json_response(output.items)

@app.route('/system/updates')
@authorized_personnel_only
@authorized_personnel_only()
def show_updates():
from status_checks import list_apt_updates
return "".join("%s (%s)\n" % (p["package"], p["version"])

@@ -742,7 +754,7 @@ def show_updates():

@app.route('/system/update-packages', methods=["POST"])
@authorized_personnel_only
@authorized_personnel_only()
def do_updates():
utils.shell("check_call", ["/usr/bin/apt-get", "-qq", "update"])
return utils.shell("check_output", ["/usr/bin/apt-get", "-y", "upgrade"],

@@ -750,7 +762,7 @@ def do_updates():

@app.route('/system/reboot', methods=["GET"])
@authorized_personnel_only
@authorized_personnel_only()
def needs_reboot():
from status_checks import is_reboot_needed_due_to_package_installation
if is_reboot_needed_due_to_package_installation():

@@ -760,7 +772,7 @@ def needs_reboot():

@app.route('/system/reboot', methods=["POST"])
@authorized_personnel_only
@authorized_personnel_only()
def do_reboot():
# To keep the attack surface low, we don't allow a remote reboot if one isn't necessary.
from status_checks import is_reboot_needed_due_to_package_installation

@@ -772,7 +784,7 @@ def do_reboot():

@app.route('/system/backup/status')
@authorized_personnel_only
@authorized_personnel_only()
def backup_status():
from backup import backup_status
try:

@@ -782,14 +794,14 @@ def backup_status():

@app.route('/system/backup/config', methods=["GET"])
@authorized_personnel_only
@authorized_personnel_only()
def backup_get_custom():
from backup import get_backup_config
return json_response(get_backup_config(env, for_ui=True))

@app.route('/system/backup/config', methods=["POST"])
@authorized_personnel_only
@authorized_personnel_only()
def backup_set_custom():
from backup import backup_set_custom
return json_response(

@@ -801,7 +813,7 @@ def backup_set_custom():

@app.route('/system/backup/new', methods=["POST"])
@authorized_personnel_only
@authorized_personnel_only()
def backup_new():
from backup import perform_backup, get_backup_config

@@ -815,14 +827,14 @@ def backup_new():

@app.route('/system/privacy', methods=["GET"])
@authorized_personnel_only
@authorized_personnel_only()
def privacy_status_get():
config = utils.load_settings(env)
return json_response(config.get("privacy", True))

@app.route('/system/privacy', methods=["POST"])
@authorized_personnel_only
@authorized_personnel_only()
def privacy_status_set():
config = utils.load_settings(env)
config["privacy"] = (request.form.get('value') == "private")

@@ -831,7 +843,7 @@ def privacy_status_set():

@app.route('/system/smtp/relay', methods=["GET"])
@authorized_personnel_only
@authorized_personnel_only()
def smtp_relay_get():
config = utils.load_settings(env)

@@ -862,7 +874,7 @@ def smtp_relay_get():

@app.route('/system/smtp/relay', methods=["POST"])
@authorized_personnel_only
@authorized_personnel_only()
def smtp_relay_set():
from editconf import edit_conf
from os import chmod
@@ -874,30 +886,39 @@ def smtp_relay_set():
newconf = request.form

# Is DKIM configured?
sel = newconf.get("dkim_selector")
sel = newconf.get("dkim_selector", "")
rr = newconf.get("dkim_rr", "")
check_dkim = True
if sel is None or sel.strip() == "":
config["SMTP_RELAY_DKIM_SELECTOR"] = None
# Check that the key RR doesn't exist either, otherwise we cannot be
# sure that the user wants to remove it.
if rr.strip() != "":
return ("Cannot publish a DKIM key without a selector!\n\
If you want to set up a relay without a DKIM record, both the selector and the key need to be empty.", 400)
config["SMTP_RELAY_DKIM_RR"] = None
elif re.fullmatch(r"[a-z\d\._]+", sel.strip()) is None:
check_dkim = False
elif re.fullmatch(r"[a-z\d\._][a-z\d\._\-]*", sel.strip()) is None:
return ("The DKIM selector is invalid!", 400)

# DKIM selector looks good, try processing the RR
rr = newconf.get("dkim_rr", "")
if rr.strip() == "":
return ("Cannot publish a selector with an empty key!", 400)
if check_dkim:
# DKIM selector looks good, try processing the RR
if rr.strip() == "":
return ("Cannot publish a selector with an empty key!\n\
If you want to set up a relay without a DKIM record, both the selector and the key need to be empty.", 400)

components = {}
for r in re.split(r"[;\s]+", rr):
sp = re.split(r"\=", r)
if len(sp) != 2:
return ("DKIM public key RR is malformed!", 400)
components[sp[0]] = sp[1]
components = {}
for r in re.split(r"[;\s]+", rr):
sp = re.split(r"\=", r)
if len(sp) != 2:
return ("DKIM public key RR is malformed!", 400)
components[sp[0]] = sp[1]

if not components.get("p"):
return ("The DKIM public key doesn't exist!", 400)
if not components.get("p"):
return ("The DKIM public key doesn't exist!", 400)

config["SMTP_RELAY_DKIM_SELECTOR"] = sel
config["SMTP_RELAY_DKIM_RR"] = components
config["SMTP_RELAY_DKIM_SELECTOR"] = sel
config["SMTP_RELAY_DKIM_RR"] = components

relay_on = False
implicit_tls = False

@@ -916,7 +937,7 @@ def smtp_relay_set():
implicit_tls = True
except ssl.SSLError as sle:
# Couldn't connect via TLS, configure Postfix to send via STARTTLS
print(sle.reason)
pass
except (socket.herror, socket.gaierror) as he:
return (
f"Unable to resolve hostname (it probably is incorrect): {he.strerror}",

@@ -993,7 +1014,7 @@ def smtp_relay_set():

@app.route('/system/pgp/', methods=["GET"])
@authorized_personnel_only
@authorized_personnel_only()
def get_keys():
from pgp import get_daemon_key, get_imported_keys, key_representation
return {

@@ -1003,7 +1024,7 @@ def get_keys():

@app.route('/system/pgp/<fpr>', methods=["GET"])
@authorized_personnel_only
@authorized_personnel_only()
def get_key(fpr):
from pgp import get_key, key_representation
k = get_key(fpr)

@@ -1013,7 +1034,7 @@ def get_key(fpr):

@app.route('/system/pgp/<fpr>', methods=["DELETE"])
@authorized_personnel_only
@authorized_personnel_only()
def delete_key(fpr):
from pgp import delete_key
from wkd import parse_wkd_list, build_wkd

@@ -1028,7 +1049,7 @@ def delete_key(fpr):

@app.route('/system/pgp/<fpr>/export', methods=["GET"])
@authorized_personnel_only
@authorized_personnel_only()
def export_key(fpr):
from pgp import export_key
exp = export_key(fpr)

@@ -1038,7 +1059,7 @@ def export_key(fpr):

@app.route('/system/pgp/import', methods=["POST"])
@authorized_personnel_only
@authorized_personnel_only()
def import_key():
from pgp import import_key
from wkd import build_wkd

@@ -1063,7 +1084,7 @@ def import_key():

@app.route('/system/pgp/wkd', methods=["GET"])
@authorized_personnel_only
@authorized_personnel_only()
def get_wkd_status():
from pgp import get_daemon_key, get_imported_keys, key_representation
from wkd import get_user_fpr_maps, get_wkd_config

@@ -1097,7 +1118,7 @@ def get_wkd_status():

@app.route('/system/pgp/wkd', methods=["POST"])
@authorized_personnel_only
@authorized_personnel_only()
def update_wkd():
from wkd import update_wkd_config, build_wkd
update_wkd_config(request.form)

@@ -1106,7 +1127,7 @@ def update_wkd():

@app.route('/system/default-quota', methods=["GET"])
@authorized_personnel_only
@authorized_personnel_only()
def default_quota_get():
if request.values.get('text'):
return get_default_quota(env)

@@ -1117,7 +1138,7 @@ def default_quota_get():

@app.route('/system/default-quota', methods=["POST"])
@authorized_personnel_only
@authorized_personnel_only()
def default_quota_set():
config = utils.load_settings(env)
try:

@@ -1135,7 +1156,7 @@ def default_quota_set():

@app.route('/munin/')
@authorized_personnel_only
@authorized_personnel_only()
def munin_start():
# Munin pages, static images, and dynamically generated images are served
# outside of the AJAX API. We'll start with a 'start' API that sets a cookie
@@ -114,9 +114,18 @@ def do_dns_update(env, force=False):
if len(updated_domains) == 0:
updated_domains.append("DNS configuration")

# Kick nsd if anything changed.
# Tell nsd to reload changed zone files.
if len(updated_domains) > 0:
shell('check_call', ["/usr/sbin/service", "nsd", "restart"])
# 'reconfig' is needed if there are added or removed zones, but
# it may not reload existing zones, so we call 'reload' too. If
# nsd isn't running, nsd-control fails, so in that case revert
# to restarting nsd to make sure it is running. Restarting nsd
# should also refresh everything.
try:
shell('check_call', ["/usr/sbin/nsd-control", "reconfig"])
shell('check_call', ["/usr/sbin/nsd-control", "reload"])
except:
shell('check_call', ["/usr/sbin/service", "nsd", "restart"])

# Write the OpenDKIM configuration tables for all of the mail domains.
from mailconfig import get_mail_domains

@@ -397,7 +406,8 @@ def build_zone(domain,
# the domain, and no one else (unless the user is using an SMTP relay and authorized other servers).
# Skip if the user has set a custom SPF record.
if not has_rec(None, "TXT", prefix="v=spf1 "):
if settings.get("SMTP_RELAY_SPF_RECORD", "").strip() != "" and relay_on:
rawrecord = settings.get("SMTP_RELAY_SPF_RECORD", "")
if rawrecord is not None and rawrecord.strip() != "" and relay_on:
records.append((None, "TXT", settings.get("SMTP_RELAY_SPF_RECORD"), "Added by your SMTP Relay provider so that they can send @%s mail on your behalf." % domain, None))
elif spf_extra is None:
records.append((None, "TXT", "v=spf1 mx -all", "Recommended. Specifies that only the box is permitted to send @%s mail." % domain, None))

@@ -1298,13 +1308,9 @@ def get_secondary_dns(custom_dns, mode=None):
# doesn't.
if not hostname.startswith("xfr:"):
if mode == "xfr":
response = dns.resolver.resolve(hostname + '.',
"A",
raise_on_no_answer=False)
response = dns.resolver.resolve(hostname+'.', "A", raise_on_no_answer=False)
values.extend(map(str, response))
response = dns.resolver.resolve(hostname + '.',
"AAAA",
raise_on_no_answer=False)
response = dns.resolver.resolve(hostname+'.', "AAAA", raise_on_no_answer=False)
values.extend(map(str, response))
continue
values.append(hostname)

@@ -1329,14 +1335,11 @@ def set_secondary_dns(hostnames, env):
# Resolve hostname.
try:
response = resolver.resolve(item, "A")
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN,
dns.resolver.NoAnswer):
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
try:
response = resolver.query(item, "AAAA")
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN,
dns.resolver.NoAnswer):
raise ValueError(
"Could not resolve the IP address of %s." % item)
response = resolver.resolve(item, "AAAA")
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
raise ValueError("Could not resolve the IP address of %s." % item)
else:
# Validate IP address.
try:
@@ -34,7 +34,8 @@ def edit_conf(filename,
delimiter,
comment_char,
folded_lines=False,
testing=False):
testing=False,
erase_setting=False):
found = set()
buf = ""
input_lines = list(open(filename, "r+"))

@@ -50,7 +51,7 @@ def edit_conf(filename,

# See if this line is for any settings passed on the command line.
for i in range(len(settings)):
# Check that this line contain this setting from the command-line arguments.
# Check whether this line contain this setting from the command-line arguments.
name, val = settings[i].split("=", 1)
m = re.match(
"(\s*)" + "(" + re.escape(comment_char) + "\s*)?" +

@@ -59,8 +60,10 @@ def edit_conf(filename,
continue
indent, is_comment, existing_val = m.groups()

# If this is already the setting, do nothing.
if is_comment is None and existing_val == val:
# If this is already the setting, keep it in the file, except:
# * If we've already seen it before, then remove this duplicate line.
# * If val is empty and erase_setting is on, then comment it out.
if is_comment is None and existing_val == val and not (not val and erase_setting):
# It may be that we've already inserted this setting higher
# in the file so check for that first.
if i in found:

@@ -78,7 +81,8 @@ def edit_conf(filename,
buf += line

# if this option oddly appears more than once, don't add the setting again
if i in found:
# Or if we're clearing it, don't add it
if (i in found) or (not val and erase_setting):
break

# add the new setting

@@ -92,11 +96,12 @@ def edit_conf(filename,
# If did not match any setting names, pass this line through.
buf += line

# Put any settings we didn't see at the end of the file.
# Put any settings we didn't see at the end of the file, except those being erased.
for i in range(len(settings)):
if i not in found:
name, val = settings[i].split("=", 1)
buf += name + delimiter + val + "\n"
if not (not val and erase_setting):
buf += name + delimiter + val + "\n"

if not testing:
# Write out the new file.

@@ -125,12 +130,16 @@ if __name__ == "__main__":
comment_char = "#"
folded_lines = False
testing = False
erase_setting = False
while settings[0][0] == "-" and settings[0] != "--":
opt = settings.pop(0)
if opt == "-s":
# Space is the delimiter
delimiter = " "
delimiter_re = r"\s+"
elif opt == "-e":
# Erase settings that have empty values.
erase_setting = True
elif opt == "-w":
# Line folding is possible in this file.
folded_lines = True

@@ -153,4 +162,4 @@ if __name__ == "__main__":
sys.exit(1)

edit_conf(filename, settings, delimiter_re, delimiter, comment_char,
folded_lines, testing)
folded_lines, testing, erase_setting)
|
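The new `-e` flag lets callers erase a setting by passing it with an empty value; later in this same diff, setup/mail-postfix.sh uses it as `management/editconf.py /etc/postfix/main.cf -e lmtp_destination_recipient_limit=`. A hedged sketch of exercising that flag against a throwaway copy of a config file, assuming the command is run from the repository root:

    import os
    import subprocess
    import tempfile

    # Hypothetical config contents; we only want to see the -e behaviour.
    src = tempfile.NamedTemporaryFile("w", suffix=".cf", delete=False)
    src.write("setting_a=1\nlmtp_destination_recipient_limit=1\n")
    src.close()

    # An empty value plus -e asks editconf.py to comment the setting out.
    subprocess.run(
        ["python3", "management/editconf.py", src.name,
         "-e", "lmtp_destination_recipient_limit="],
        check=True)

    print(open(src.name).read())
    os.unlink(src.name)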
@@ -64,37 +64,33 @@ def get_ssl_certificates(env):
# Not a valid PEM format for a PEM type we care about.
continue
# Remember where we got this object.
pem._filename = fn
# Is it a private key?
if isinstance(pem, RSAPrivateKey):
private_keys[pem.public_key().public_numbers()] = pem
private_keys[pem.public_key().public_numbers()] = { "filename": fn, "key": pem }
# Is it a certificate?
if isinstance(pem, Certificate):
certificates.append(pem)
certificates.append({ "filename": fn, "cert": pem })
# Process the certificates.
domains = {}
for cert in certificates:
# What domains is this certificate good for?
cert_domains, primary_domain = get_certificate_domains(cert)
cert._primary_domain = primary_domain
cert_domains, primary_domain = get_certificate_domains(cert["cert"])
cert["primary_domain"] = primary_domain
# Is there a private key file for this certificate?
private_key = private_keys.get(cert.public_key().public_numbers())
private_key = private_keys.get(cert["cert"].public_key().public_numbers())
if not private_key:
continue
cert._private_key = private_key
cert["private_key"] = private_key
# Add this cert to the list of certs usable for the domains.
for domain in cert_domains:
# The primary hostname can only use a certificate mapped
# to the system private key.
if domain == env['PRIMARY_HOSTNAME']:
if cert._private_key._filename != os.path.join(
env['STORAGE_ROOT'], 'ssl', 'ssl_private_key.pem'):
if cert["private_key"]["filename"] != os.path.join(env['STORAGE_ROOT'], 'ssl', 'ssl_private_key.pem'):
continue
domains.setdefault(domain, []).append(cert)
@@ -105,13 +101,12 @@ def get_ssl_certificates(env):
ret = {}
for domain, cert_list in domains.items():
#for c in cert_list: print(domain, c.not_valid_before, c.not_valid_after, "("+str(now)+")", c.issuer, c.subject, c._filename)
cert_list.sort(
key=lambda cert: (
# must be valid NOW
cert.not_valid_before <= now <= cert.not_valid_after,
cert_list.sort(key = lambda cert : (
# must be valid NOW
cert["cert"].not_valid_before <= now <= cert["cert"].not_valid_after,
# prefer one that is not self-signed
cert.issuer != cert.subject,
# prefer one that is not self-signed
cert["cert"].issuer != cert["cert"].subject,
###########################################################
# The above lines ensure that valid certificates are chosen
@@ -119,9 +114,9 @@ def get_ssl_certificates(env):
# multiple valid certificates available for this domain.
###########################################################
# prefer one with the expiration furthest into the future so
# that we can easily rotate to new certs as we get them
cert.not_valid_after,
# prefer one with the expiration furthest into the future so
# that we can easily rotate to new certs as we get them
cert["cert"].not_valid_after,
###########################################################
# We always choose the certificate that is good for the
@@ -134,18 +129,18 @@ def get_ssl_certificates(env):
# domain.
###########################################################
# in case a certificate is installed in multiple paths,
# prefer the... lexicographically last one?
cert._filename,
),
reverse=True)
# in case a certificate is installed in multiple paths,
# prefer the... lexicographically last one?
cert["filename"],
), reverse=True)
cert = cert_list.pop(0)
ret[domain] = {
"private-key": cert._private_key._filename,
"certificate": cert._filename,
"primary-domain": cert._primary_domain,
"certificate_object": cert,
}
"private-key": cert["private_key"]["filename"],
"certificate": cert["filename"],
"primary-domain": cert["primary_domain"],
"certificate_object": cert["cert"],
}
return ret
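This refactor stops stashing ad-hoc attributes (`_filename`, `_private_key`, `_primary_domain`) on the cryptography objects and wraps each key and certificate in a plain dict instead; newer releases of the cryptography library no longer allow setting arbitrary attributes on certificate objects, which is what the old trick relied on. A toy sketch of the same sort order using stand-in dicts and datetimes rather than real certificate objects:

    from datetime import datetime, timedelta

    now = datetime.utcnow()
    certificates = [
        {"filename": "a.pem", "not_valid_before": now - timedelta(days=1),
         "not_valid_after": now + timedelta(days=30), "self_signed": True},
        {"filename": "b.pem", "not_valid_before": now - timedelta(days=1),
         "not_valid_after": now + timedelta(days=90), "self_signed": False},
    ]

    # Same ordering idea as the sort key above: valid now, not self-signed,
    # latest expiration, then filename; reverse=True puts the best candidate first.
    certificates.sort(key=lambda cert: (
        cert["not_valid_before"] <= now <= cert["not_valid_after"],
        not cert["self_signed"],
        cert["not_valid_after"],
        cert["filename"],
    ), reverse=True)
    print(certificates[0]["filename"])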
@@ -228,9 +228,7 @@ def check_service(i, service, env):
# IPv4 ok but IPv6 failed. Try the PRIVATE_IPV6 address to see if the service is bound to the interface.
elif service["port"] != 53 and try_connect(env["PRIVATE_IPV6"]):
output.print_error(
"%s is running (and available over IPv4 and the local IPv6 address), but it is not publicly accessible at %s:%d."
% (service['name'], env['PUBLIC_IP'], service['port']))
output.print_error("%s is running (and available over IPv4 and the local IPv6 address), but it is not publicly accessible at %s:%d." % (service['name'], env['PUBLIC_IPV6'], service['port']))
else:
output.print_error(
"%s is running and available over IPv4 but is not accessible over IPv6 at %s port %d."
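The hunk above also switches the reported address from PUBLIC_IP to PUBLIC_IPV6, since the message is about IPv6 reachability. The real try_connect helper is defined elsewhere in status_checks.py; purely as an illustration of what such a reachability probe does, a generic stand-in might look like this:

    import socket

    def try_connect(ip, port=25, timeout=5):
        # Attempt a plain TCP connection to ip:port; True means something accepted it.
        family = socket.AF_INET6 if ":" in ip else socket.AF_INET
        s = socket.socket(family, socket.SOCK_STREAM)
        s.settimeout(timeout)
        try:
            s.connect((ip, port))
            return True
        except OSError:
            return False
        finally:
            s.close()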
@@ -344,13 +342,13 @@ def check_software_updates(env, output):
# Check for any software package updates.
pkgs = list_apt_updates(apt_update=False)
if is_reboot_needed_due_to_package_installation():
output.print_error(
output.print_warning(
"System updates have been installed and a reboot of the machine is required."
)
elif len(pkgs) == 0:
output.print_ok("System software is up to date.")
else:
output.print_error(
output.print_warning(
"There are %d software packages that can be updated." % len(pkgs))
for p in pkgs:
output.print_line("%s (%s)" % (p["package"], p["version"]))
@@ -383,6 +381,17 @@ def check_free_disk_space(rounded_values, env, output):
disk_msg = "The disk has less than 15% free space."
output.print_error(disk_msg)
# Check that there's only one duplicity cache. If there's more than one,
# it's probably no longer in use, and we can recommend clearing the cache
# to save space. The cache directory may not exist yet, which is OK.
backup_cache_path = os.path.join(env['STORAGE_ROOT'], 'backup/cache')
try:
backup_cache_count = len(os.listdir(backup_cache_path))
except:
backup_cache_count = 0
if backup_cache_count > 1:
output.print_warning("The backup cache directory {} has more than one backup target cache. Consider clearing this directory to save disk space."
.format(backup_cache_path))
def check_free_memory(rounded_values, env, output):
# Check free memory.
@@ -1113,11 +1122,8 @@ def check_dnssec(domain,
if len(ds) > 0:
output.print_line("")
output.print_line("The DS record is currently set to:")
for rr in ds:
output.print_line(
"Key Tag: {0}, Algorithm: {1}, Digest Type: {2}, Digest: {3}".
format(*rr))
for rr in sorted(ds):
output.print_line("Key Tag: {0}, Algorithm: {1}, Digest Type: {2}, Digest: {3}".format(*rr))
def check_mail_domain(domain, env, output):
# Check the MX record.
@@ -1169,7 +1175,7 @@ def check_mail_domain(domain, env, output):
output.print_ok(good_news)
# Check MTA-STS policy.
loop = asyncio.get_event_loop()
loop = asyncio.new_event_loop()
sts_resolver = postfix_mta_sts_resolver.resolver.STSResolver(loop=loop)
valid, policy = loop.run_until_complete(sts_resolver.resolve(domain))
if valid == postfix_mta_sts_resolver.resolver.STSFetchResult.VALID:
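The MTA-STS check now builds a fresh event loop with asyncio.new_event_loop() instead of reusing asyncio.get_event_loop(). A small sketch of the same call sequence shown in the hunk above, with an example domain standing in for the checked one:

    import asyncio
    import postfix_mta_sts_resolver.resolver as sts

    loop = asyncio.new_event_loop()            # fresh loop, as in the updated check
    resolver = sts.STSResolver(loop=loop)
    valid, policy = loop.run_until_complete(resolver.resolve("example.com"))
    if valid == sts.STSFetchResult.VALID:
        print("MTA-STS policy:", policy)
    loop.close()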
@@ -1269,8 +1275,7 @@ def query_dns(qname, rtype, nxdomain='[Not Set]', at=None, as_list=False):
# Do the query.
try:
response = resolver.resolve(qname, rtype)
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN,
dns.resolver.NoAnswer):
except (dns.resolver.NoNameservers, dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
# Host did not have an answer for this query; not sure what the
# difference is between the two exceptions.
return nxdomain
@@ -1378,16 +1383,18 @@ def list_apt_updates(apt_update=True):
m = re.match(r'^Inst (.*) \[(.*)\] \((\S*)', line)
if m:
pkgs.append({
"package": m.group(1),
"version": m.group(3),
"current_version": m.group(2)
"package": m.group(1).strip(),
"version": m.group(3).strip(),
"current_version": m.group(2).strip()
})
else:
pkgs.append({
"package": "[" + line + "]",
"version": "",
"current_version": ""
})
continue
# TODO: Check whether this is actually an issue or not
# pkgs.append({
# "package": "[" + line.strip() + "]",
# "version": "",
# "current_version": ""
# })
# Cache for future requests.
_apt_updates = (datetime.datetime.now(), pkgs)
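The list_apt_updates() change strips whitespace from the matched groups and no longer records unparseable lines. A quick illustration of the same regular expression against an apt-get simulation line (the sample line below is only an example, not output from this box):

    import re

    line = "Inst base-files [12ubuntu4.1] (12ubuntu4.2 Ubuntu:22.04/jammy-updates [amd64])"
    m = re.match(r'^Inst (.*) \[(.*)\] \((\S*)', line)
    if m:
        pkg = {
            "package": m.group(1).strip(),
            "version": m.group(3).strip(),
            "current_version": m.group(2).strip(),
        }
        print(pkg)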
@@ -1409,7 +1416,7 @@ def what_version_is_this(env):
def get_latest_miab_version():
# This pings https://mailinabox.email/setup.sh and extracts the tag named in
# This pings https://power-mailinabox.net/setup.sh and extracts the tag named in
# the script to determine the current product version.
from urllib.request import urlopen, HTTPError, URLError
from socket import timeout
@@ -1418,7 +1425,7 @@ def get_latest_miab_version():
return re.search(
b'TAG=(.*)',
urlopen(
"https://raw.githubusercontent.com/ddavness/power-mailinabox/main/setup/bootstrap.sh",
"https://power-mailinabox.net/setup.sh",
timeout=5).read()).group(1).decode("utf8")
except (HTTPError, URLError, timeout):
return None
|
|
@@ -135,6 +135,13 @@
Monitoring</a></li>
</ul>
</li>
<li class="nav-item me-1 me-xl-4 dropdown if-logged-in-not-admin">
<button class="btn dropdown-toggle" type="button" data-bs-toggle="dropdown" aria-expanded="false">Your Account</button>
<ul class="dropdown-menu">
<li><a class="dropdown-item" href="#manage-password" onclick="return show_panel(this);">Manage Password</a></li>
<li><a class="dropdown-item" href="#mfa" onclick="return show_panel(this);">Two-Factor Authentication</a></li>
</ul>
</li>
<li class="nav-item me-1 me-xl-4 btn if-logged-in-not-admin" type="button" href="#mail-guide"
onclick="return show_panel(this);">
Mail Guide
@@ -198,6 +205,10 @@
{% include "wkd.html" %}
</div>
<div id="panel_manage-password" class="admin_panel">
{% include "manage-password.html" %}
</div>
<div id="panel_mfa" class="admin_panel">
{% include "mfa.html" %}
</div>
|
|
management/templates/manage-password.html (new file, 57 lines)
@@ -0,0 +1,57 @@
<div>
<h2>Manage Password</h2>
<p>Here you can change your account password. The new password is then valid for both this panel and your email.</p>
<p>If you have client emails configured, you'll then need to update the configuration with the new password. See the <a href="#mail-guide" onclick="return show_panel(this);">Mail Guide</a> for more information about this.</p>
<form class="form-horizontal" role="form" onsubmit="set_password_self(); return false;">
<div class="col-lg-10 col-xl-8 mb-3">
<div class="input-group">
<label for="manage-password-new" class="input-group-text col-3">New Password</label>
<input type="password" placeholder="password" class="form-control" id="manage-password-new">
</div>
</div>
<div class="col-lg-10 col-xl-8 mb-3">
<div class="input-group">
<label for="manage-password-confirm" class="input-group-text col-3">Confirm Password</label>
<input type="password" placeholder="password" class="form-control" id="manage-password-confirm">
</div>
</div>
<div class="mt-3">
<button id="manage-password-submit" type="submit" class="btn btn-primary">Save</button>
</div>
<small>After changing your password, you'll be logged out from the account and will need to log in again.</small>
</form>
</div>
<script>
function set_password_self() {
if ($('#manage-password-new').val() !== $('#manage-password-confirm').val()) {
show_modal_error("Set Password", 'Passwords do not match!');
return;
}
let password = $('#manage-password-new').val()
api(
"/mail/users/password",
"POST",
{
email: api_credentials.username,
password: password
},
function (r) {
// Responses are multiple lines of pre-formatted text.
show_modal_error("Set Password", $("<pre/>").text(r), () => {
do_logout()
$('#manage-password-new').val("")
$('#manage-password-confirm').val("")
});
},
function (r) {
show_modal_error("Set Password", r);
}
);
}
</script>
|
@@ -78,7 +78,7 @@
<h3>DKIM Configuration</h3>
<p>DKIM allows receivers to verify that the email was sent by the relay you configured (this is, somebody you
trust). <b>Not doing so will have your email sent to spam.</b></p>
trust). <b>If your relay provider does not provide you with this information, it's probably safe to skip this step.</b></p>
<div class="col-lg-6 col-md-8 col-12">
<div class="input-group">
|
|
@@ -304,6 +304,15 @@
$("#backup-target-type").val("s3");
var hostpath = r.target.substring(5).split('/');
var host = hostpath.shift();
let s3_options = $("#backup-target-s3-host-select option").map(function() {return this.value}).get()
$("#backup-target-s3-host-select").val("other")
for (let h of s3_options) {
console.log(h)
if (h == host) {
$("#backup-target-s3-host-select").val(host)
break
}
}
$("#backup-target-s3-host").val(host);
$("#backup-target-s3-path").val(hostpath.join('/'));
} else if (r.target.substring(0, 5) == "b2://") {
@@ -365,18 +374,18 @@
}
function init_inputs(target_type) {
function set_host(host) {
function set_host(host, overwrite_other) {
if (host !== 'other') {
$("#backup-target-s3-host").val(host);
} else {
} else if (overwrite_other) {
$("#backup-target-s3-host").val('');
}
}
if (target_type == "s3") {
$('#backup-target-s3-host-select').off('change').on('change', function () {
set_host($('#backup-target-s3-host-select').val());
set_host($('#backup-target-s3-host-select').val(), true);
});
set_host($('#backup-target-s3-host-select').val());
set_host($('#backup-target-s3-host-select').val(), false);
}
}
|
|
@@ -75,7 +75,7 @@
}
#system-checks .showhide {
display: none;
display: block;
font-size: 85%;
}
@@ -131,8 +131,8 @@
"POST",
{},
function (r) {
for (var i = 0; i < r.length; i++) {
var n = $("<div class='col-12'><div class='icon'></div><p class='message status-text' style='margin: 0'/><p class='showhide btn btn-light mt-3' href='#'/><div class='extra ps-4 col-12'></div>");
for (let i = 0; i < r.length; i++) {
let n = $("<div class='col-12'><div class='icon'></div><p class='message status-text' style='margin: 0'/>");
if (i == 0) n.addClass('first')
if (r[i].type == "heading") {
@@ -150,7 +150,8 @@
if (r[i].extra.length > 0) {
let open = false
n.find('.showhide').show().text("Show More").click(function () {
n.append("<p class='showhide btn btn-light mt-3' href='#'>Show More</p><div class='extra ps-4 col-12'></div>")
n.find('.showhide').click(function () {
let extra = $(this).parent().find('.extra')
if (open) {
|
|
@@ -204,23 +204,10 @@ def wait_for_service(port, public, env, timeout):
return False
time.sleep(min(timeout / 4, 1))
def fix_boto():
# Google Compute Engine instances install some Python-2-only boto plugins that
# conflict with boto running under Python 3. Disable boto's default configuration
# file prior to importing boto so that GCE's plugin is not loaded:
import os
os.environ["BOTO_CONFIG"] = "/etc/boto3.cfg"
def get_php_version():
# Gets the version of PHP installed in the system.
return shell("check_output", ["/usr/bin/php", "-v"])[4:7]
os_codes = {None, "Debian10", "Ubuntu2004"}
def get_os_code():
# Massive mess incoming
dist = shell("check_output", ["/usr/bin/lsb_release", "-is"]).strip()
@@ -234,10 +221,11 @@ def get_os_code():
elif dist == "Ubuntu":
if version == "20.04":
return "Ubuntu2004"
elif version == "22.04":
return "Ubuntu2204"
return None
if __name__ == "__main__":
from web_update import get_web_domains
env = load_environment()
|
|
management/wsgi.py (new file, 7 lines)
@@ -0,0 +1,7 @@
from daemon import app
import auth, utils

app.logger.addHandler(utils.create_syslog_handler())

if __name__ == "__main__":
    app.run(port=10222)
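wsgi.py exposes the management daemon's Flask application as `app` so it can be served by gunicorn; the setup/management.sh start script later in this diff launches it with `gunicorn -b localhost:10222 -w 1 --timeout 630 wsgi:app`. As a rough, self-contained stand-in for the same pattern (the real daemon and the repo's create_syslog_handler() helper are not reproduced here), a toy module would look like:

    import logging.handlers
    from flask import Flask

    app = Flask(__name__)      # stand-in for management/daemon.py's app

    @app.route("/")
    def index():
        return "ok"

    # Roughly what a syslog handler setup does: log to the local syslog socket (Linux path).
    app.logger.addHandler(logging.handlers.SysLogHandler(address="/dev/log"))

    if __name__ == "__main__":
        app.run(port=10222)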
@@ -2,7 +2,7 @@
#########################################################
# This script is intended to be run like this:
#
# curl https://dvn.pt/power-miab | sudo bash
# curl -L https://power-mailinabox.net/setup.sh | sudo bash
#
#########################################################
@@ -20,6 +20,7 @@ if [ ! -f /usr/bin/lsb_release ]; then
echo "* Debian 10 (buster)"
echo "* Debian 11 (bullseye)"
echo "* Ubuntu 20.04 LTS (Focal Fossa)"
echo "* Ubuntu 22.04 LTS (Jammy Jellyfish)"
exit 1
fi
@@ -31,16 +32,44 @@ fi
if [ -z "$TAG" ]; then
# Make sure we're running on the correct operating system
OS=$(lsb_release -d | sed 's/.*:\s*//')
if [ "$OS" == "Debian GNU/Linux 10 (buster)" ] ||
[ "$OS" == "Debian GNU/Linux 11 (bullseye)" ] ||
[ "$(echo $OS | grep -o 'Ubuntu 20.04')" == "Ubuntu 20.04" ]
if [ "$OS" == "Debian GNU/Linux 11 (bullseye)" ] ||
[ "$(echo $OS | grep -o 'Ubuntu 20.04')" == "Ubuntu 20.04" ] ||
[ "$(echo $OS | grep -o 'Ubuntu 22.04')" == "Ubuntu 22.04" ]
then
TAG=v56.4
else
echo "This script must be run on a system running one of the following OS-es:"
echo "* Debian 10 (buster)"
TAG=v60.5
elif [ "$OS" == "Debian GNU/Linux 10 (buster)" ]; then
echo "We are going to install the last version of Power Mail-in-a-Box supporting Debian 10 (buster)."
echo "IF THIS IS A NEW INSTALLATION, STOP NOW, AND USE A SUPPORTED DISTRIBUTION INSTEAD (ONE OF THESE):"
echo "* Debian 11 (bullseye)"
echo "* Ubuntu 20.04 LTS (Focal Fossa)"
echo "* Ubuntu 22.04 LTS (Jammy Jellyfish)"
echo
echo "IF YOU'RE UPGRADING THE BOX TO THE LATEST VERSION, PLEASE VISIT THIS PAGE FOR NOTES ON HOW TO"
echo "UPGRADE YOUR SISTEM TO DEBIAN 11 (bullseye)"
echo "https://power-mailinabox.net/buster-eol"
while true; do
read -p "Do you want to proceed? ([Y]es/[N]o) " yn
case $yn in
Yes | Y | yes | y )
break
;;
No | N | no | n )
echo "Installation cancelled."
exit 1
;;
* )
;;
esac
done
TAG=v56.5
else
echo "This script must be run on a system running one of the following OS-es:"
echo "* Debian 11 (bullseye)"
echo "* Ubuntu 20.04 LTS (Focal Fossa)"
echo "* Ubuntu 22.04 LTS (Jammy Jellyfish)"
exit 1
fi
fi
@@ -57,7 +86,7 @@ if [ ! -d $HOME/mailinabox ]; then
echo Downloading Mail-in-a-Box $TAG. . .
git clone \
-b $TAG --depth 1 \
https://github.com/ddavness/power-mailinabox \
https://git.nibbletools.com/beenull/power-mailinabox \
$HOME/mailinabox \
< /dev/null 2> /dev/null
||||
|
|
setup/dns.sh (44 lines changed)
@@ -10,16 +10,9 @@
source setup/functions.sh # load our functions
source /etc/mailinabox.conf # load global vars
# Install the packages.
#
# * nsd: The non-recursive nameserver that publishes our DNS records.
# * ldnsutils: Helper utilities for signing DNSSEC zones.
# * openssh-client: Provides ssh-keyscan which we use to create SSHFP records.
echo "Installing nsd (DNS server)..."
apt_install ldnsutils openssh-client
# Prepare nsd's configuration.
# We configure nsd before installation as we only want it to bind to some addresses
# and it otherwise will have port / bind conflicts with bind9 used as the local resolver
mkdir -p /var/run/nsd
mkdir -p /etc/nsd
mkdir -p /etc/nsd/zones
@@ -46,18 +39,6 @@ server:
EOF
# Add log rotation
cat > /etc/logrotate.d/nsd <<EOF;
/var/log/nsd.log {
weekly
missingok
rotate 12
compress
delaycompress
notifempty
}
EOF
# Since we have bind9 listening on localhost for locally-generated
# DNS queries that require a recursive nameserver, and the system
# might have other network interfaces for e.g. tunnelling, we have
@@ -74,8 +55,25 @@ echo "include: /etc/nsd/nsd.conf.d/*.conf" >> /etc/nsd/nsd.conf;
# now be stored in /etc/nsd/nsd.conf.d.
rm -f /etc/nsd/zones.conf
# Attempting a late install of nsd (after configuration)
apt_install nsd
# Add log rotation
cat > /etc/logrotate.d/nsd <<EOF;
/var/log/nsd.log {
weekly
missingok
rotate 12
compress
delaycompress
notifempty
}
EOF
# Install the packages.
#
# * nsd: The non-recursive nameserver that publishes our DNS records.
# * ldnsutils: Helper utilities for signing DNSSEC zones.
# * openssh-client: Provides ssh-keyscan which we use to create SSHFP records.
echo "Installing nsd (DNS server)..."
apt_install nsd ldnsutils openssh-client
# Create DNSSEC signing keys.
|
|
@@ -14,7 +14,7 @@ function hide_output {
# Execute command, redirecting stderr/stdout to the temporary file. Since we
# check the return code ourselves, disable 'set -e' temporarily.
set +e
"$@" &> $OUTPUT
"$@" &> "$OUTPUT"
E=$?
set -e
@@ -24,7 +24,7 @@ function hide_output {
echo
echo FAILED: "$@"
echo -----------------------------------------
cat $OUTPUT
cat "$OUTPUT"
echo -----------------------------------------
exit $E
fi
@@ -222,17 +222,18 @@ function git_clone {
}
function php_version {
php --version | head -n 1 | cut -d " " -f 2 | cut -c 1-3
php --version | head -n 1 | cut -d " " -f 2 | cut -d "." -f 1,2
}
function python_version {
python3 --version | cut -d " " -f 2 | cut -c 1-3
python3 --version | cut -d " " -f 2 | cut -d "." -f 1,2
}
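The php_version and python_version helpers switch from taking the first three characters to splitting on dots, because slicing breaks as soon as the minor version has two digits (for example "3.10" would be truncated to "3.1"). A quick Python illustration of the difference, using made-up version strings:

    versions = ["7.4.30", "8.1.12", "3.8.10", "3.10.6"]
    for v in versions:
        first_three_chars = v[:3]                   # what `cut -c 1-3` did
        major_minor = ".".join(v.split(".")[:2])    # what `cut -d "." -f 1,2` does
        print(v, "->", first_three_chars, "vs", major_minor)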
export OS_UNSUPPORTED=0
export OS_DEBIAN_10=1
export OS_UBUNTU_2004=2
export OS_DEBIAN_11=3
export OS_UBUNTU_2204=4
function get_os_code {
# A lot of if-statements here - dirty code looking tasting today
@@ -251,8 +252,11 @@ function get_os_code {
if [[ $VER == "20.04" ]]; then
echo $OS_UBUNTU_2004
return 0
elif [[ $VER == "22.04" ]]; then
echo $OS_UBUNTU_2204
return 0
fi
fi
echo $OS_UNSUPPORTED
}
}
@@ -89,6 +89,8 @@ management/editconf.py /etc/dovecot/conf.d/10-ssl.conf \
"ssl_min_protocol=TLSv1.2" \
"ssl_cipher_list=ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384" \
"ssl_prefer_server_ciphers=no" \
"ssl_dh_parameters_length=2048" \
"ssl_dh=<$STORAGE_ROOT/ssl/dh2048.pem"
# Disable in-the-clear IMAP/POP because there is no reason for a user to transmit
# login credentials outside of an encrypted connection. Only the over-TLS versions
@@ -185,6 +187,7 @@ plugin {
sieve = $STORAGE_ROOT/mail/sieve/%d/%n.sieve
sieve_dir = $STORAGE_ROOT/mail/sieve/%d/%n
sieve_redirect_envelope_from = recipient
sieve_vacation_send_from_recipient = yes
}
EOF
@@ -13,8 +13,8 @@
# destinations according to aliases, and passses email on to
# another service for local mail delivery.
#
# The first hop in local mail delivery is to Spamassassin via
# LMTP. Spamassassin then passes mail over to Dovecot for
# The first hop in local mail delivery is to spampd via
# LMTP. spampd then passes mail over to Dovecot for
# storage in the user's mailbox.
#
# Postfix also listens on ports 465/587 (SMTPS, SMTP+STARTLS) for
@@ -193,17 +193,17 @@ management/editconf.py /etc/postfix/main.cf \
# ### Incoming Mail
# Pass any incoming mail over to a local delivery agent. Spamassassin
# will act as the LDA agent at first. It is listening on port 10025
# with LMTP. Spamassassin will pass the mail over to Dovecot after.
# Pass mail to spampd, which acts as the local delivery agent (LDA),
# which then passes the mail over to the Dovecot LMTP server after.
# spampd runs on port 10025 by default.
#
# In a basic setup we would pass mail directly to Dovecot by setting
# virtual_transport to `lmtp:unix:private/dovecot-lmtp`.
management/editconf.py /etc/postfix/main.cf "virtual_transport=lmtp:[127.0.0.1]:10025"
# Because of a spampd bug, limit the number of recipients in each connection.
# Clear the lmtp_destination_recipient_limit setting which in previous
# versions of Mail-in-a-Box was set to 1 because of a spampd bug.
# See https://github.com/mail-in-a-box/mailinabox/issues/1523.
management/editconf.py /etc/postfix/main.cf lmtp_destination_recipient_limit=1
management/editconf.py /etc/postfix/main.cf -e lmtp_destination_recipient_limit=
# Who can send mail to us? Some basic filters.
#
@@ -232,11 +232,32 @@ management/editconf.py /etc/postfix/main.cf \
# As a matter of fact RFC is not strict about retry timer so postfix and
# other MTA have their own intervals. To fix the problem of receiving
# e-mails really latter, delay of greylisting has been set to
# 180 seconds (default is 300 seconds).
# 180 seconds (default is 300 seconds). We will move the postgrey database
# under $STORAGE_ROOT. This prevents a "warming up" that would have occured
# previously with a migrated or reinstalled OS. We will specify this new path
# with the --dbdir=... option. Arguments within POSTGREY_OPTS can not have spaces,
# including dbdir. This is due to the way the init script sources the
# /etc/default/postgrey file. --dbdir=... either needs to be a path without spaces
# (luckily $STORAGE_ROOT does not currently work with spaces), or it needs to be a
# symlink without spaces that can point to a folder with spaces). We'll just assume
# $STORAGE_ROOT won't have spaces to simplify things.
management/editconf.py /etc/default/postgrey \
POSTGREY_OPTS=\"'--inet=127.0.0.1:10023 --delay=180'\"
POSTGREY_OPTS=\""--inet=127.0.0.1:10023 --delay=180 --dbdir=$STORAGE_ROOT/mail/postgrey/db"\"
# If the $STORAGE_ROOT/mail/postgrey is empty, copy the postgrey database over from the old location
if [ ! -d $STORAGE_ROOT/mail/postgrey/db ]; then
# Stop the service
service postgrey stop
# Ensure the new paths for postgrey db exists
mkdir -p $STORAGE_ROOT/mail/postgrey/db
# Move over database files
mv /var/lib/postgrey/* $STORAGE_ROOT/mail/postgrey/db/ || true
fi
# Ensure permissions are set
chown -R postgrey:postgrey $STORAGE_ROOT/mail/postgrey/
chmod 700 $STORAGE_ROOT/mail/postgrey/{,db}
# We are going to setup a newer whitelist for postgrey, the version included in the distribution is old
cat > /etc/cron.daily/mailinabox-postgrey-whitelist << EOF;
#!/bin/bash
@@ -25,10 +25,20 @@ if [ ! -f $db_path ]; then
echo "CREATE TABLE noreply (id INTEGER PRIMARY KEY AUTOINCREMENT, email TEXT NOT NULL UNIQUE);" | sqlite3 $db_path
echo "CREATE TABLE mfa (id INTEGER PRIMARY KEY AUTOINCREMENT, user_id INTEGER NOT NULL, type TEXT NOT NULL, secret TEXT NOT NULL, mru_token TEXT, label TEXT, FOREIGN KEY (user_id) REFERENCES users(id) ON DELETE CASCADE);" | sqlite3 $db_path;
echo "CREATE TABLE auto_aliases (id INTEGER PRIMARY KEY AUTOINCREMENT, source TEXT NOT NULL UNIQUE, destination TEXT NOT NULL, permitted_senders TEXT);" | sqlite3 $db_path;
elif sqlite3 $db_path ".schema users" | grep --invert-match quota; then
echo "ALTER TABLE users ADD COLUMN quota TEXT NOT NULL DEFAULT '0';" | sqlite3 $db_path;
else
sql=$(sqlite3 $db_path "SELECT sql FROM sqlite_master WHERE name = 'users'");
if echo $sql | grep --invert-match quota; then
echo "ALTER TABLE users ADD COLUMN quota TEXT NOT NULL DEFAULT '0';" | sqlite3 $db_path;
fi
fi
# Recover the database if it was hit by the Roundcube password changer "bug" (#85)
# If the journal_mode is set to wal, postfix cannot read it and we wouldn't
# be able to send or receive mail.
#
# This operation is idempotent so it's safe to run even in healthy databases, too.
echo "PRAGMA journal_mode=delete;" | sqlite3 $db_path > /dev/null
# ### User Authentication
# Have Dovecot query our database, and not system users, for authentication.
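The users.sqlite migration now reads the CREATE TABLE statement for the users table out of sqlite_master and only adds the quota column when it is missing, which keeps the step idempotent. A hedged Python equivalent of the same check, run against an in-memory database rather than the real users.sqlite:

    import sqlite3

    conn = sqlite3.connect(":memory:")        # stand-in for users.sqlite
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

    sql = conn.execute(
        "SELECT sql FROM sqlite_master WHERE name = 'users'").fetchone()[0]
    if "quota" not in sql:
        conn.execute("ALTER TABLE users ADD COLUMN quota TEXT NOT NULL DEFAULT '0'")

    print(conn.execute("SELECT sql FROM sqlite_master WHERE name = 'users'").fetchone()[0])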
@@ -158,7 +168,7 @@ EOF
# SQL statement to check if we're sending to a noreply address.
cat > /etc/postfix/noreply-addresses.cf << EOF;
dbpath=/home/user-data/mail/users.sqlite
dbpath=$db_path
query = SELECT 'REJECT This address is not ready to receive email.' FROM noreply WHERE email='%s'
EOF
@@ -1,35 +1,30 @@
#!/bin/bash
source setup/functions.sh
source /etc/mailinabox.conf # load global vars
echo "Installing Mail-in-a-Box system management daemon..."
# DEPENDENCIES
# We used to install management daemon-related Python packages
# directly to /usr/local/lib. We moved to a virtualenv because
# these packages might conflict with apt-installed packages.
# We may have a lingering version of acme that conflcits with
# certbot, which we're about to install below, so remove it
# first. Once acme is installed by an apt package, this might
# break the package version and `apt-get install --reinstall python3-acme`
# might be needed in that case.
while [ -d /usr/local/lib/python3.4/dist-packages/acme ]; do
pip3 uninstall -y acme;
done
# duplicity is used to make backups of user data.
#
# virtualenv is used to isolate the Python 3 packages we
# install via pip from the system-installed packages.
#
# certbot installs EFF's certbot which we use to
# provision free TLS certificates.
apt_install duplicity python3-pip python3-gpg virtualenv certbot rsync
#
# gcc and build tools are required to install the latest version
# of duplicity
apt_install python3-pip python3-gpg virtualenv certbot rsync librsync2 python3-fasteners python3-future python3-lockfile \
gcc python3-dev librsync-dev gettext
# boto is used for amazon aws backups.
apt_get_quiet remove --autoremove --purge duplicity || /bin/true
# Duplicity does the actual backups.
# b2sdk is used for backblaze backups.
# boto3 is used for amazon aws backups.
# Both are installed outside the pipenv, so they can be used by duplicity
hide_output pip3 install --upgrade boto
hide_output pip3 install --upgrade b2sdk boto3 duplicity
# Create a virtualenv for the installation of Python 3 packages
# used by the management daemon.
@@ -57,26 +52,9 @@ hide_output $venv/bin/pip install --upgrade pip
# NOTE: email_validator is repeated in setup/questions.sh, so please keep the versions synced.
hide_output $venv/bin/pip install --upgrade \
rtyaml "email_validator>=1.0.0" "exclusiveprocess" \
flask dnspython python-dateutil expiringdict \
flask dnspython python-dateutil expiringdict gunicorn \
qrcode[pil] pyotp \
"idna>=2.0.0" "cryptography==2.2.2" boto psutil postfix-mta-sts-resolver
# Install backblaze B2 libraries.
# Depending on the OS, Duplicity may require different dependencies.
case $(get_os_code) in
$OS_DEBIAN_10)
apt_install python-pip python-backports.functools-lru-cache
hide_output pip2 install --upgrade "b2<2.0.0" "logfury<1.0.0"
hide_output $venv/bin/pip install --upgrade "b2<2.0.0"
;;
$OS_UBUNTU_2004 | $OS_DEBIAN_11)
hide_output pip3 install --upgrade "b2sdk==1.7.0"
hide_output $venv/bin/pip install --upgrade "b2sdk==1.7.0"
;;
esac
"idna>=2.0.0" "cryptography==2.2.2" boto psutil postfix-mta-sts-resolver boto3 b2sdk
# Make the venv use the packaged gpgme bindings (the ones pip provides are severely out-of-date)
if [ ! -d $venv/lib/python$(python_version)/site-packages/gpg/ ]; then
@@ -102,34 +80,39 @@ rm -rf $assets_dir
mkdir -p $assets_dir
# jQuery CDN URL
jquery_version=3.6.0
jquery_url=https://code.jquery.com
jquery_version=3.6.1
jquery_url=https://code.jquery.com # Check this link for new versions
# Get jQuery
wget_verify $jquery_url/jquery-$jquery_version.min.js b82d238d4e31fdf618bae8ac11a6c812c03dd0d4 $assets_dir/jquery.min.js
wget_verify $jquery_url/jquery-$jquery_version.min.js ea61688671d0c3044f2c5b2f2c4af0a6620ac6c2 $assets_dir/jquery.min.js
# Bootstrap CDN URL
bootstrap_version=5.1.3
# See https://github.com/twbs/bootstrap/releases to check for new versions
bootstrap_version=5.2.2
bootstrap_url=https://github.com/twbs/bootstrap/releases/download/v$bootstrap_version/bootstrap-$bootstrap_version-dist.zip
# Get Bootstrap
wget_verify $bootstrap_url 2b56a45f7108051642bfc446947fc1d626cb1c9f /tmp/bootstrap.zip
wget_verify $bootstrap_url 740b34c22cef5c2f12a34f084b813ea308fedf74 /tmp/bootstrap.zip
unzip -q /tmp/bootstrap.zip -d $assets_dir
mv $assets_dir/bootstrap-$bootstrap_version-dist $assets_dir/bootstrap
rm -f /tmp/bootstrap.zip
# FontAwesome CDN URL
fontawesome_version=6.1.1
# See https://github.com/FortAwesome/Font-Awesome/releases to check for new versions
fontawesome_version=6.2.1
fontawesome_url=https://github.com/FortAwesome/Font-Awesome/releases/download/$fontawesome_version/fontawesome-free-$fontawesome_version-web.zip
# Get FontAwesome
wget_verify $fontawesome_url d712b10472f7209d5284f394ef94a7be71fc2ad3 /tmp/fontawesome.zip
wget_verify $fontawesome_url cd0f2bcc9653b56e3e2dd82d6598aa6bbca8d796 /tmp/fontawesome.zip
unzip -q /tmp/fontawesome.zip -d $assets_dir
mv $assets_dir/fontawesome-free-$fontawesome_version-web $assets_dir/fontawesome
rm -f /tmp/fontawesome.zip
# Create an init script to start the management daemon and keep it
# running after a reboot.
# Set a long timeout since some commands take a while to run, matching
# the timeout we set for PHP (fastcgi_read_timeout in the nginx confs).
# Note: Authentication currently breaks with more than 1 gunicorn worker.
cat > $inst_dir/start <<EOF;
#!/bin/bash
# Set character encoding flags to ensure that any non-ASCII don't cause problems.
@@ -138,8 +121,13 @@ export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
export LC_TYPE=en_US.UTF-8
mkdir -p /var/lib/mailinabox
tr -cd '[:xdigit:]' < /dev/urandom | head -c 32 > /var/lib/mailinabox/api.key
chmod 640 /var/lib/mailinabox/api.key
source $venv/bin/activate
exec python $(pwd)/management/daemon.py
export PYTHONPATH=$(pwd)/management
exec gunicorn -b localhost:10222 -w 1 --timeout 630 wsgi:app
EOF
chmod +x $inst_dir/start
cp --remove-destination conf/mailinabox.service /lib/systemd/system/mailinabox.service # target was previously a symlink so remove it first
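The new start script writes a random API key with `tr -cd '[:xdigit:]' < /dev/urandom | head -c 32` before launching gunicorn. A hedged Python equivalent that produces the same kind of 32-hex-character key (the path below is illustrative, not the real /var/lib/mailinabox/api.key):

    import os
    import secrets

    api_key = secrets.token_hex(16)   # 32 hexadecimal characters, like the tr/head pipeline
    path = "/tmp/api.key"             # illustrative location only
    with open(path, "w") as f:
        f.write(api_key)
    os.chmod(path, 0o640)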
@@ -21,8 +21,8 @@ echo "Installing Nextcloud (contacts/calendar)..."
# we automatically install intermediate versions as needed.
# * The hash is the SHA1 hash of the ZIP package, which you can find by just running this script and
# copying it from the error message when it doesn't match what is below.
nextcloud_ver=23.0.3
nextcloud_hash=72c004d39df4e97d9272c57394f756d90d948770
nextcloud_ver=24.0.7
nextcloud_hash=7fb1afeb3c212bf5530c3d234b23bf314b47655a
# Nextcloud apps
# --------------
@@ -30,15 +30,15 @@ nextcloud_hash=72c004d39df4e97d9272c57394f756d90d948770
# consulting the <dependencies>...<nextcloud> node at:
# https://github.com/nextcloud-releases/contacts/blob/master/appinfo/info.xml
# https://github.com/nextcloud-releases/calendar/blob/master/appinfo/info.xml
# https://github.com/nextcloud/user_external/blob/master/appinfo/info.xml
# https://github.com/nextcloud-releases/user_external
# * The hash is the SHA1 hash of the ZIP package, which you can find by just running this script and
# copying it from the error message when it doesn't match what is below.
contacts_ver=4.1.0
contacts_hash=38653b507bd7d953816bbc5e8bea7855867eb1cd
calendar_ver=3.2.2
calendar_hash=54e9a836adc739be4a2a9301b8d6d2e9d88e02f4
user_external_ver=2.1.0
user_external_hash=6e5afe7f36f398f864bfdce9cad72200e70322aa
contacts_ver=4.2.2
contacts_hash=cbab9a7acdc11a9e2779c20b850bb21faec1c80f
calendar_ver=3.5.2
calendar_hash=dcf2cba6933dc8805ca4b4d04ed7b993ff4652a1
user_external_ver=3.0.0
user_external_hash=0df781b261f55bbde73d8c92da3f99397000972f
# Clear prior packages and install dependencies from apt.
@@ -47,10 +47,15 @@ apt-get purge -qq -y owncloud* 2> /dev/null || /bin/true
apt_install php php-fpm \
php-cli php-sqlite3 php-gd php-imap php-curl php-pear curl \
php-dev php-gd php-xml php-mbstring php-zip php-apcu php-json \
php-intl php-imagick php-gmp php-bcmath php-apcu
php-dev php-xml php-mbstring php-zip php-apcu php-json \
php-intl php-imagick php-gmp php-bcmath
phpenmod apcu
management/editconf.py /etc/php/$(php_version)/mods-available/apcu.ini -c ';' \
apc.enabled=1 \
apc.enable_cli=1
management/editconf.py /etc/php/$(php_version)/cli/php.ini -c ';' \
apc.enable_cli=1
@@ -84,18 +89,42 @@ InstallNextcloud() {
# their github repositories.
mkdir -p /usr/local/lib/owncloud/apps
wget_verify https://github.com/nextcloud-releases/contacts/releases/download/v$version_contacts/contacts-v$version_contacts.tar.gz $hash_contacts /tmp/contacts.tgz
IFS='.'
read -a checkVer <<< "$version_contacts"
unset IFS
if [ "${checkVer[0]}" -gt 4 ] || [ "${checkVer[0]}" -eq 4 -a "${checkVer[1]}" -gt 0 ] || [ "${checkVer[0]}" -eq 4 -a "${checkVer[2]}" -gt 0 ]; then
# Contacts 4.0.1 and later are downloaded from here
wget_verify https://github.com/nextcloud-releases/contacts/releases/download/v$version_contacts/contacts-v$version_contacts.tar.gz $hash_contacts /tmp/contacts.tgz
else
# 4.0.0 and earlier are downloaded from here
wget_verify https://github.com/Nextcloud/contacts/releases/download/v$version_contacts/contacts.tar.gz $hash_contacts /tmp/contacts.tgz
fi
tar xf /tmp/contacts.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/contacts.tgz
wget_verify https://github.com/nextcloud-releases/calendar/releases/download/v$version_calendar/calendar-v$version_calendar.tar.gz $hash_calendar /tmp/calendar.tgz
IFS='.'
read -a checkVer <<< "$version_calendar"
unset IFS
if [ "${checkVer[0]}" -eq 2 -a "${checkVer[1]}" -gt 2 ] || [ "${checkVer[0]}" -gt 2 ]; then
# Calendar 2.3.0 and later are downloaded from here
wget_verify https://github.com/nextcloud-releases/calendar/releases/download/v$version_calendar/calendar-v$version_calendar.tar.gz $hash_calendar /tmp/calendar.tgz
else
wget_verify https://github.com/nextcloud/calendar/releases/download/v$version_calendar/calendar.tar.gz $hash_calendar /tmp/calendar.tgz
fi
tar xf /tmp/calendar.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/calendar.tgz
# Starting with Nextcloud 15, the app user_external is no longer included in Nextcloud core,
# we will install from their github repository.
if [ -n "$version_user_external" ]; then
wget_verify https://github.com/nextcloud/user_external/releases/download/v$version_user_external/user_external-$version_user_external.tar.gz $hash_user_external /tmp/user_external.tgz
IFS='.'
read -a checkVer <<< "$version_user_external"
unset IFS
if [ "${checkVer[0]}" -gt 2 ]; then
wget_verify https://github.com/nextcloud-releases/user_external/releases/download/v$version_user_external/user_external-v$version_user_external.tar.gz $hash_user_external /tmp/user_external.tgz
else
wget_verify https://github.com/nextcloud/user_external/releases/download/v$version_user_external/user_external-$version_user_external.tar.gz $hash_user_external /tmp/user_external.tgz
fi
tar -xf /tmp/user_external.tgz -C /usr/local/lib/owncloud/apps/
rm /tmp/user_external.tgz
fi
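InstallNextcloud now splits each app version on dots and compares the components to decide which download URL to use (contacts 4.0.1+, calendar 2.3.0+ and user_external 3.x come from the nextcloud-releases mirrors, older releases from the original repositories). A simplified Python illustration of that kind of dotted-version comparison, not the script's exact logic:

    def at_least(version, minimum):
        # Compare dotted version strings component-wise, e.g. "4.2.2" >= "4.0.1".
        return tuple(int(p) for p in version.split(".")) >= tuple(int(p) for p in minimum.split("."))

    print(at_least("4.2.2", "4.0.1"))   # True: new contacts download location
    print(at_least("2.2.2", "2.3.0"))   # False: old calendar download location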
@ -139,10 +168,28 @@ InstallNextcloud() {
|
|||
# $STORAGE_ROOT/owncloud is kept together even during a backup. It is better to rely on config.php than
|
||||
# version.php since the restore procedure can leave the system in a state where you have a newer Nextcloud
|
||||
# application version than the database.
|
||||
|
||||
# If config.php exists, get version number, otherwise CURRENT_NEXTCLOUD_VER is empty.
|
||||
#
|
||||
# Config unlocking, power-mailinabox#86
|
||||
# If a configuration file already exists, remove the "readonly" tag before starting the upgrade. This is
|
||||
# necessary (otherwise upgrades will fail).
|
||||
#
|
||||
# The lock will be re-applied further down the line when it's safe to do so.
|
||||
CONFIG_TEMP=$(/bin/mktemp)
|
||||
if [ -f "$STORAGE_ROOT/owncloud/config.php" ]; then
|
||||
CURRENT_NEXTCLOUD_VER=$(php -r "include(\"$STORAGE_ROOT/owncloud/config.php\"); echo(\$CONFIG['version']);")
|
||||
# Unlock configuration directory for upgrades
|
||||
php <<EOF > $CONFIG_TEMP && mv $CONFIG_TEMP $STORAGE_ROOT/owncloud/config.php;
|
||||
<?php
|
||||
include("$STORAGE_ROOT/owncloud/config.php");
|
||||
|
||||
\$CONFIG['config_is_read_only'] = false;
|
||||
|
||||
echo "<?php\n\\\$CONFIG = ";
|
||||
var_export(\$CONFIG);
|
||||
echo ";";
|
||||
?>
|
||||
EOF
|
||||
else
|
||||
CURRENT_NEXTCLOUD_VER=""
|
||||
fi
|
||||
|
@ -221,9 +268,13 @@ if [ ! -d /usr/local/lib/owncloud/ ] || [[ ! ${CURRENT_NEXTCLOUD_VER} =~ ^$nextc
|
|||
CURRENT_NEXTCLOUD_VER="21.0.9"
|
||||
fi
|
||||
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^21 ]]; then
|
||||
InstallNextcloud 22.2.6 9d39741f051a8da42ff7df46ceef2653a1dc70d9 4.1.0 38653b507bd7d953816bbc5e8bea7855867eb1cd 3.2.2 54e9a836adc739be4a2a9301b8d6d2e9d88e02f4 2.1.0 6e5afe7f36f398f864bfdce9cad72200e70322aa
|
||||
InstallNextcloud 22.2.6 9d39741f051a8da42ff7df46ceef2653a1dc70d9 4.1.0 38653b507bd7d953816bbc5e8bea7855867eb1cd 3.2.2 54e9a836adc739be4a2a9301b8d6d2e9d88e02f4 3.0.0 0df781b261f55bbde73d8c92da3f99397000972f
|
||||
CURRENT_NEXTCLOUD_VER="22.2.6"
|
||||
fi
|
||||
if [[ ${CURRENT_NEXTCLOUD_VER} =~ ^22 ]]; then
|
||||
InstallNextcloud 23.0.4 87afec0bf90b3c66289e6fedd851867bc5a58f01 4.1.0 38653b507bd7d953816bbc5e8bea7855867eb1cd 3.2.2 54e9a836adc739be4a2a9301b8d6d2e9d88e02f4 3.0.0 0df781b261f55bbde73d8c92da3f99397000972f
|
||||
CURRENT_NEXTCLOUD_VER="23.0.4"
|
||||
fi
|
||||
fi
|
||||
|
||||
InstallNextcloud $nextcloud_ver $nextcloud_hash $contacts_ver $contacts_hash $calendar_ver $calendar_hash $user_external_ver $user_external_hash
|
||||
|
@ -252,10 +303,10 @@ if [ ! -f $STORAGE_ROOT/owncloud/owncloud.db ]; then
|
|||
'overwrite.cli.url' => '/cloud',
|
||||
'user_backends' => array(
|
||||
array(
|
||||
'class' => 'OC_User_IMAP',
|
||||
'arguments' => array(
|
||||
'127.0.0.1', 143, null
|
||||
),
|
||||
'class' => '\OCA\UserExternal\IMAP',
|
||||
'arguments' => array(
|
||||
'127.0.0.1', 143, null, null, false, false
|
||||
),
|
||||
),
|
||||
),
|
||||
'memcache.local' => '\OC\Memcache\APCu',
|
||||
|
@ -282,6 +333,7 @@ EOF
|
|||
# storage/database
|
||||
'directory' => '$STORAGE_ROOT/owncloud',
|
||||
'dbtype' => 'sqlite3',
|
||||
'dbname' => 'owncloud',
|
||||
|
||||
# create an administrator account with a random password so that
|
||||
# the user does not have to enter anything on first load of Nextcloud
|
||||
|
@ -312,11 +364,12 @@ fi
|
|||
# the correct domain name if the domain is being change from the previous setup.
|
||||
# Use PHP to read the settings file, modify it, and write out the new settings array.
|
||||
TIMEZONE=$(cat /etc/timezone)
|
||||
CONFIG_TEMP=$(/bin/mktemp)
|
||||
php <<EOF > $CONFIG_TEMP && mv $CONFIG_TEMP $STORAGE_ROOT/owncloud/config.php;
|
||||
<?php
|
||||
include("$STORAGE_ROOT/owncloud/config.php");
|
||||
|
||||
\$CONFIG['config_is_read_only'] = true;
|
||||
|
||||
\$CONFIG['trusted_domains'] = array('$PRIMARY_HOSTNAME');
|
||||
|
||||
\$CONFIG['memcache.local'] = '\OC\Memcache\APCu';
|
||||
|
@ -328,7 +381,14 @@ include("$STORAGE_ROOT/owncloud/config.php");
|
|||
|
||||
\$CONFIG['mail_domain'] = '$PRIMARY_HOSTNAME';
|
||||
|
||||
\$CONFIG['user_backends'] = array(array('class' => 'OC_User_IMAP','arguments' => array('127.0.0.1', 143, null),),);
|
||||
\$CONFIG['user_backends'] = array(
|
||||
array(
|
||||
'class' => '\OCA\UserExternal\IMAP',
|
||||
'arguments' => array(
|
||||
'127.0.0.1', 143, null, null, false, false
|
||||
),
|
||||
),
|
||||
);
|
||||
|
||||
echo "<?php\n\\\$CONFIG = ";
|
||||
var_export(\$CONFIG);
|
||||
|
@ -342,7 +402,7 @@ chown www-data.www-data $STORAGE_ROOT/owncloud/config.php
|
|||
# user_external is what allows Nextcloud to use IMAP for login. The contacts
|
||||
# and calendar apps are the extensions we really care about here.
|
||||
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:disable firstrunwizard
|
||||
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable user_external --force
|
||||
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable user_external
|
||||
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable contacts
|
||||
hide_output sudo -u www-data php /usr/local/lib/owncloud/console.php app:enable calendar
|
||||
|
||||
|
@ -377,11 +437,11 @@ management/editconf.py /etc/php/$(php_version)/cli/conf.d/10-opcache.ini -c ';'
|
|||
opcache.save_comments=1 \
|
||||
opcache.revalidate_freq=1
|
||||
|
||||
# If apc is explicitly disabled we need to enable it
|
||||
if grep -q apc.enabled=0 /etc/php/$(php_version)/mods-available/apcu.ini; then
|
||||
management/editconf.py /etc/php/$(php_version)/mods-available/apcu.ini -c ';' \
|
||||
apc.enabled=1
|
||||
fi
|
||||
# Migrate users_external data from <0.6.0 to version 3.0.0 (see https://github.com/nextcloud/user_external).
|
||||
# This version was probably in use in Mail-in-a-Box v0.41 (February 26, 2019) and earlier.
|
||||
# We moved to v0.6.3 in 193763f8. Ignore errors - maybe there are duplicated users with the
|
||||
# correct backend already.
|
||||
sqlite3 $STORAGE_ROOT/owncloud/owncloud.db "UPDATE oc_users_external SET backend='127.0.0.1';" || /bin/true
|
||||
|
||||
# Set up a cron job for Nextcloud.
|
||||
cat > /etc/cron.d/mailinabox-nextcloud << EOF;
|
||||
|
@ -391,9 +451,6 @@ cat > /etc/cron.d/mailinabox-nextcloud << EOF;
|
|||
EOF
|
||||
chmod +x /etc/cron.d/mailinabox-nextcloud
|
||||
|
||||
# Remove previous hourly cronjob
|
||||
rm -f /etc/cron.hourly/mailinabox-owncloud
|
||||
|
||||
# There's nothing much of interest that a user could do as an admin for Nextcloud,
|
||||
# and there's a lot they could mess up, so we don't make any users admins of Nextcloud.
|
||||
# But if we wanted to, we would do this:
|
||||
|
|
|
@ -9,19 +9,33 @@ if [[ $EUID -ne 0 ]]; then
|
|||
exit 1
|
||||
fi
|
||||
|
||||
# Check that we are running on Debian GNU/Linux, or Ubuntu 20.04
|
||||
if [ $(get_os_code) = $OS_UNSUPPORTED ]; then
|
||||
echo "Mail-in-a-Box only supports being installed on one of these operating systems:"
|
||||
echo "* Debian 10 (buster)"
|
||||
echo "* Debian 11 (bullseye)"
|
||||
echo "* Ubuntu 20.04 LTS (Focal Fossa)"
|
||||
echo
|
||||
echo "You're running:"
|
||||
lsb_release -ds
|
||||
echo
|
||||
echo "We can't write scripts that run on every possible setup, sorry."
|
||||
exit 1
|
||||
fi
|
||||
# Check that we are running on Debian GNU/Linux, or Ubuntu 20.04/22.04
|
||||
case $(get_os_code) in
|
||||
$OS_UNSUPPORTED)
|
||||
echo "This version of Power Mail-in-a-Box only supports being installed on one of these operating systems:"
|
||||
# echo "* Debian 10 (buster)"
|
||||
echo "* Debian 11 (bullseye)"
|
||||
echo "* Ubuntu 20.04 LTS (Focal Fossa)"
|
||||
echo "* Ubuntu 22.04 LTS (Jammy Jellyfish)"
|
||||
echo
|
||||
echo "You're running:"
|
||||
lsb_release -ds
|
||||
echo
|
||||
echo "We can't write scripts that run on every possible setup, sorry."
|
||||
exit 1
|
||||
;;
|
||||
|
||||
$OS_DEBIAN_10)
|
||||
echo "You're trying to install Power Mail-in-a-Box on Debian 10 (buster), which is no longer supported."
|
||||
echo "You can install the latest version of Power Mail-in-a-Box supporting Debian 10 by running the following command:"
|
||||
echo
|
||||
echo "curl -L https://power-mailinabox.net/setup.sh | sudo bash"
|
||||
echo
|
||||
echo "Then upgrade to Debian 11 (bullseye). A short guide on how to do so is available here:"
|
||||
echo "https://power-mailinabox.net/buster-eol"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
|
||||
# Check that we have enough memory.
|
||||
#
|
||||
|
|
|
@ -49,12 +49,12 @@ fi
|
|||
|
||||
# Put a start script in a global location. We tell the user to run 'mailinabox'
|
||||
# in the first dialog prompt, so we should do this before that starts.
|
||||
cat > /usr/local/bin/mailinabox << EOF;
|
||||
cat > /usr/local/sbin/mailinabox << EOF;
|
||||
#!/bin/bash
|
||||
cd $(pwd)
|
||||
source setup/start.sh
|
||||
EOF
|
||||
chmod +x /usr/local/bin/mailinabox
|
||||
chmod 744 /usr/local/sbin/mailinabox
|
||||
|
||||
# Ask the user for the PRIMARY_HOSTNAME, PUBLIC_IP, and PUBLIC_IPV6,
|
||||
# if values have not already been set in environment variables. When running
|
||||
|
@ -72,6 +72,10 @@ fi
|
|||
fi
|
||||
|
||||
# Create the STORAGE_USER and STORAGE_ROOT directory if they don't already exist.
|
||||
#
|
||||
# Set the directory and all of its parent directories' permissions to world
|
||||
# readable since it holds files owned by different processes.
|
||||
#
|
||||
# If the STORAGE_ROOT is missing the mailinabox.version file that lists a
|
||||
# migration (schema) number for the files stored there, assume this is a fresh
|
||||
# installation to that directory and write the file to contain the current
|
||||
|
@ -82,11 +86,15 @@ fi
|
|||
if [ ! -d $STORAGE_ROOT ]; then
|
||||
mkdir -p $STORAGE_ROOT
|
||||
fi
|
||||
f=$STORAGE_ROOT
|
||||
while [[ $f != / ]]; do chmod a+rx "$f"; f=$(dirname "$f"); done;
|
||||
if [ ! -f $STORAGE_ROOT/mailinabox.version ]; then
|
||||
setup/migrate.py --current > $STORAGE_ROOT/mailinabox.version
|
||||
chown $STORAGE_USER.$STORAGE_USER $STORAGE_ROOT/mailinabox.version
|
||||
fi
|
||||
|
||||
chmod 751 $STORAGE_ROOT
|
||||
|
||||
# Save the global options in /etc/mailinabox.conf so that standalone
|
||||
# tools know where to look for data. The default MTA_STS_MODE setting
|
||||
# is blank unless set by an environment variable, but see web.sh for
|
||||
|
@ -121,6 +129,14 @@ source setup/zpush.sh
|
|||
source setup/management.sh
|
||||
source setup/munin.sh
|
||||
|
||||
# Create a shorthand alias for the cli interface
|
||||
cat > /usr/local/sbin/miabadm << EOF;
|
||||
#!/bin/bash
|
||||
cd $(pwd)
|
||||
/usr/bin/env python3 management/cli.py \$@
|
||||
EOF
|
||||
chmod 744 /usr/local/sbin/miabadm
|
||||
|
||||
# Wait for the management daemon to start...
|
||||
until nc -z -w 4 127.0.0.1 10222
|
||||
do
|
||||
|
|
|
@ -14,6 +14,15 @@ source setup/functions.sh # load our functions
|
|||
echo $PRIMARY_HOSTNAME > /etc/hostname
|
||||
hostname $PRIMARY_HOSTNAME
|
||||
|
||||
# ### Enable IPv6 at Kernel Level
|
||||
|
||||
# This doesn't mean that the cloud provider must provide IPv6 connectivity. We just want
|
||||
# the loopback interface to also work on IPv6 (that is, we want :: to be available). This
|
||||
# is required because apparently nsd expects this to exist.
|
||||
|
||||
management/editconf.py /etc/sysctl.conf "net.ipv6.conf.lo.disable_ipv6 = 0"
|
||||
hide_output sysctl --system
|
||||
|
||||
# ### Fix permissions
|
||||
|
||||
# The default Ubuntu Bionic image on Scaleway throws warnings during setup about incorrect
|
||||
|
@ -102,9 +111,6 @@ apt_get_quiet autoremove
|
|||
|
||||
# Install basic utilities.
|
||||
#
|
||||
# * haveged: Provides extra entropy to /dev/random so it doesn't stall
|
||||
# when generating random numbers for private keys (e.g. during
|
||||
# ldns-keygen).
|
||||
# * unattended-upgrades: Apt tool to install security updates automatically.
|
||||
# * cron: Runs background processes periodically.
|
||||
# * ntp: keeps the system time correct
|
||||
|
@ -118,8 +124,8 @@ apt_get_quiet autoremove
|
|||
|
||||
echo Installing system packages...
|
||||
apt_install python3 python3-dev python3-pip python3-setuptools \
|
||||
netcat-openbsd wget curl git sudo coreutils bc \
|
||||
haveged pollinate openssh-client unzip \
|
||||
netcat-openbsd wget curl git sudo coreutils bc file \
|
||||
pollinate openssh-client unzip \
|
||||
unattended-upgrades cron ntp fail2ban rsyslog
|
||||
|
||||
# ### Suppress Upgrade Prompts
|
||||
|
```diff
@@ -354,6 +360,7 @@ systemctl restart systemd-resolved
 rm -f /etc/fail2ban/jail.local # we used to use this file but don't anymore
 rm -f /etc/fail2ban/jail.d/defaults-debian.conf # removes default config so we can manage all of fail2ban rules in one config
 cat conf/fail2ban/jails.conf \
     | sed "s/PUBLIC_IPV6/$PUBLIC_IPV6/g" \
     | sed "s/PUBLIC_IP/$PUBLIC_IP/g" \
     | sed "s#STORAGE_ROOT#$STORAGE_ROOT#" \
     > /etc/fail2ban/jail.d/mailinabox.conf
```
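The pipeline is plain placeholder substitution: the jail template ships with PUBLIC_IP, PUBLIC_IPV6 and STORAGE_ROOT markers, which are filled in before the result is written to /etc/fail2ban/jail.d/mailinabox.conf. In Python terms it amounts to roughly the following (a sketch; the script itself uses sed):

```python
# Equivalent of the sed pipeline: substitute placeholders in the jail template.
from pathlib import Path

def render_jails(template: str, public_ip: str, public_ipv6: str, storage_root: str) -> str:
    text = Path(template).read_text()
    text = text.replace("PUBLIC_IPV6", public_ipv6)  # longer marker first, as in the script
    text = text.replace("PUBLIC_IP", public_ip)
    text = text.replace("STORAGE_ROOT", storage_root)
    return text

# usage (values are placeholders):
# Path("/etc/fail2ban/jail.d/mailinabox.conf").write_text(
#     render_jails("conf/fail2ban/jails.conf", "203.0.113.1", "2001:db8::1", "/home/user-data"))
```

Substituting PUBLIC_IPV6 before PUBLIC_IP matters: the shorter marker is a prefix of the longer one, so the reverse order would mangle the IPv6 placeholder.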
```diff
@@ -95,7 +95,7 @@ else
     pm.max_spare_servers=18
 fi
 
 # Duplicate the socket to isolate MiaB apps from user apps that happen to run php
 cp /etc/php/$(php_version)/fpm/pool.d/www.conf /etc/php/$(php_version)/fpm/pool.d/miab.conf
 
 management/editconf.py /etc/php/$(php_version)/fpm/pool.d/miab.conf -c ';' \
```
```diff
@@ -132,7 +132,7 @@ chmod a+r /var/lib/mailinabox/mozilla-autoconfig.xml
 
 # Create a generic mta-sts.txt file which is exposed via the
 # nginx configuration at /.well-known/mta-sts.txt
 # more documentation is available on:
 # https://www.uriports.com/blog/mta-sts-explained/
 # default mode is "enforce". In /etc/mailinabox.conf change
 # "MTA_STS_MODE=testing" which means "Messages will be delivered
```
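MTA-STS (RFC 8461) publishes a small plain-text policy telling sending servers to require TLS when delivering to this domain; the comments above note it is served at /.well-known/mta-sts.txt and that the mode can be relaxed from "enforce" to "testing" via MTA_STS_MODE in /etc/mailinabox.conf. A small, illustrative check (not part of the setup) that fetches a published policy and reports its mode, assuming it is reachable over HTTPS at the path mentioned above:

```python
# Fetch an MTA-STS policy file and report its "mode:" line (enforce/testing/none).
import urllib.request
from typing import Optional

def mta_sts_mode(hostname: str) -> Optional[str]:
    url = f"https://{hostname}/.well-known/mta-sts.txt"
    with urllib.request.urlopen(url, timeout=10) as resp:
        for line in resp.read().decode("utf-8", "replace").splitlines():
            if line.lower().startswith("mode:"):
                return line.split(":", 1)[1].strip()
    return None

# usage: print(mta_sts_mode("box.example.com"))  # hostname is illustrative
```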
```diff
@@ -160,3 +160,6 @@ restart_service php$(php_version)-fpm
 # Open ports.
 ufw_allow http
 ufw_allow https
+
+# Allow the webserver to access directories group-owned by user-data
+usermod -a -G user-data www-data
```
```diff
@@ -30,17 +30,17 @@ apt_install \
 # whether we have the latest version of everything.
 # For the latest versions, see:
 # https://github.com/roundcube/roundcubemail/releases
-# https://github.com/mfreiholz/persistent_login/commits/master
-# https://github.com/stremlau/html5_notifier/commits/master
+# https://github.com/mfreiholz/persistent_login/
+# https://github.com/stremlau/html5_notifier/
 # https://github.com/mstilkerich/rcmcarddav/releases
 # The easiest way to get the package hashes is to run this script and get the hash from
 # the error message.
-VERSION=1.5.2
-HASH=208ce4ca0be423cc0f7070ff59bd03588b4439bf
+VERSION=1.6.0
+HASH=fd84b4fac74419bb73e7a3bcae1978d5589c52de
 PERSISTENT_LOGIN_VERSION=version-5.3.0
 HTML5_NOTIFIER_VERSION=68d9ca194212e15b3c7225eb6085dbcf02fd13d7 # version 0.6.4+
-CARDDAV_VERSION=4.3.0
-CARDDAV_HASH=4ad7df8843951062878b1375f77c614f68bc5c61
+CARDDAV_VERSION=4.4.4
+CARDDAV_HASH=743fd6925b775f821aa8860982d2bdeec05f5d7b
 
 UPDATE_KEY=$VERSION:$PERSISTENT_LOGIN_VERSION:$HTML5_NOTIFIER_VERSION:$CARDDAV_VERSION
 
```
```diff
@@ -83,7 +83,7 @@ if [ $needs_update == 1 ]; then
 
     # download and verify the full release of the carddav plugin
     wget_verify \
-        https://github.com/blind-coder/rcmcarddav/releases/download/v${CARDDAV_VERSION}/carddav-v${CARDDAV_VERSION}.tar.gz \
+        https://github.com/mstilkerich/rcmcarddav/releases/download/v${CARDDAV_VERSION}/carddav-v${CARDDAV_VERSION}.tar.gz \
        $CARDDAV_HASH \
        /tmp/carddav.tar.gz
 
```
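Only the download URL changes here (the rcmcarddav releases appear to have moved from the blind-coder namespace to mstilkerich); the pinned CARDDAV_HASH still guards the download. The 40-hex-digit hashes used throughout this script look like SHA-1 sums, and the earlier comment ("get the hash from the error message") describes the usual workflow when bumping a version. As a sketch of the same download-and-verify idea, under that SHA-1 assumption and without the project's own wget_verify helper:

```python
# Download a release tarball and verify it against a pinned hash before keeping it.
# (Assumption: the pinned hashes are SHA-1; this is a sketch, not the setup helper.)
import hashlib
import urllib.request

def download_and_verify(url: str, expected_sha1: str, dest: str) -> None:
    data = urllib.request.urlopen(url, timeout=60).read()
    digest = hashlib.sha1(data).hexdigest()
    if digest != expected_sha1:
        raise ValueError(f"hash mismatch for {url}: got {digest}, expected {expected_sha1}")
    with open(dest, "wb") as f:
        f.write(data)
```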
```diff
@@ -115,23 +115,22 @@ cat > $RCM_CONFIG <<EOF;
 \$config['log_dir'] = '/var/log/roundcubemail/';
 \$config['temp_dir'] = '/var/tmp/roundcubemail/';
 \$config['db_dsnw'] = 'sqlite:///$STORAGE_ROOT/mail/roundcube/roundcube.sqlite?mode=0640';
-\$config['default_host'] = 'ssl://localhost';
-\$config['default_port'] = 993;
+\$config['imap_host'] = 'ssl://localhost:993';
 \$config['imap_conn_options'] = array(
   'ssl' => array(
     'verify_peer' => false,
     'verify_peer_name' => false,
   ),
 );
 \$config['imap_timeout'] = 15;
-\$config['smtp_server'] = 'tls://127.0.0.1';
+\$config['smtp_host'] = 'tls://127.0.0.1:587';
 \$config['smtp_conn_options'] = array(
   'ssl' => array(
     'verify_peer' => false,
     'verify_peer_name' => false,
   ),
 );
-\$config['support_url'] = 'https://mailinabox.email/';
+\$config['support_url'] = 'https://power-mailinabox.net/';
 \$config['product_name'] = '$PRIMARY_HOSTNAME Webmail';
 \$config['plugins'] = array('html5_notifier', 'archive', 'zipdownload', 'password', 'managesieve', 'jqueryui', 'persistent_login', 'carddav', 'enigma');
 \$config['cipher_method'] = 'AES-256-CBC'; # persistent login cookie and potentially other things
```
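These edits track the Roundcube 1.6 configuration rename: default_host/default_port are replaced by a single imap_host URL and smtp_server becomes smtp_host, here pointing at IMAPS on localhost:993 and SMTP submission with STARTTLS on 127.0.0.1:587 (the support_url change is the fork's branding). A standalone reachability check for those two endpoints, with certificate verification relaxed to mirror the verify_peer settings above (a sketch, not part of the setup):

```python
# Probe the IMAP and SMTP endpoints configured above: IMAPS on localhost:993
# and SMTP submission with STARTTLS on 127.0.0.1:587.
import imaplib
import smtplib
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # mirrors verify_peer / verify_peer_name = false

with imaplib.IMAP4_SSL("localhost", 993, ssl_context=ctx) as imap:
    print("IMAP greeting:", imap.welcome)

with smtplib.SMTP("127.0.0.1", 587, timeout=10) as smtp:
    smtp.starttls(context=ctx)
    print("SMTP STARTTLS negotiated")
```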
```diff
@@ -141,6 +140,11 @@ cat > $RCM_CONFIG <<EOF;
 \$config['login_username_filter'] = 'email';
 \$config['password_charset'] = 'UTF-8';
 \$config['junk_mbox'] = 'Spam';
+
+/* ensure roudcube session id's aren't leaked to other parts of the server */
+\$config['session_path'] = '/mail/';
+/* prevent CSRF, requires php 7.3+ */
+\$config['session_samesite'] = 'Strict';
 \$config['quota_zero_as_unlimited'] = true;
 EOF
 
```
```diff
@@ -183,7 +187,7 @@ cat > ${RCM_PLUGIN_DIR}/carddav/config.inc.php <<EOF;
     'name' => 'ownCloud',
     'username' => '%u', // login username
     'password' => '%p', // login password
-    'url' => 'https://${PRIMARY_HOSTNAME}/cloud/remote.php/carddav/addressbooks/%u/contacts',
+    'url' => 'https://${PRIMARY_HOSTNAME}/cloud/remote.php/dav/addressbooks/users/%u/contacts/',
     'active' => true,
     'readonly' => false,
     'refresh_time' => '02:00:00',
```
```diff
@@ -207,13 +211,12 @@ sudo -u www-data touch /var/log/roundcubemail/errors.log
 cp ${RCM_PLUGIN_DIR}/password/config.inc.php.dist \
     ${RCM_PLUGIN_DIR}/password/config.inc.php
 
-management/editconf.py ${RCM_PLUGIN_DIR}/password/config.inc.php \
-    "\$config['password_minimum_length']=8;" \
-    "\$config['password_db_dsn']='sqlite:///$STORAGE_ROOT/mail/users.sqlite';" \
-    "\$config['password_query']='UPDATE users SET password=%D WHERE email=%u';" \
-    "\$config['password_dovecotpw']='/usr/bin/doveadm pw';" \
-    "\$config['password_dovecotpw_method']='SHA512-CRYPT';" \
-    "\$config['password_dovecotpw_with_method']=true;"
+management/editconf.py ${RCM_PLUGIN_DIR}/password/config.inc.php -c "//" \
+    "\$config['password_driver'] = 'miab';" \
+    "\$config['password_minimum_length'] = 8;" \
+    "\$config['password_miab_url'] = 'http://127.0.0.1:10222/';" \
+    "\$config['password_miab_user'] = '';" \
+    "\$config['password_miab_pass'] = '';"
 
 # so PHP can use doveadm, for the password changing plugin
 usermod -a -G dovecot www-data
```
```diff
@@ -231,7 +234,7 @@ chown -f -R root.www-data ${RCM_PLUGIN_DIR}/carddav
 chmod -R 774 ${RCM_PLUGIN_DIR}/carddav
 
 # Run Roundcube database migration script (database is created if it does not exist)
-${RCM_DIR}/bin/updatedb.sh --dir ${RCM_DIR}/SQL --package roundcube
+php ${RCM_DIR}/bin/updatedb.sh --dir ${RCM_DIR}/SQL --package roundcube
 chown www-data:www-data $STORAGE_ROOT/mail/roundcube/roundcube.sqlite
 chmod 664 $STORAGE_ROOT/mail/roundcube/roundcube.sqlite
 
```
```diff
@@ -17,7 +17,7 @@ source /etc/mailinabox.conf # load global vars
 
 echo "Installing Z-Push (Exchange/ActiveSync server)..."
 apt_install \
-    php-soap php-imap libawl-php php-xsl
+    php-soap php-imap libawl-php php-xml
 
 phpenmod -v php imap
 
@@ -42,8 +42,6 @@ if [ $needs_update == 1 ]; then
     rm -rf /tmp/z-push.zip /tmp/z-push
 
     rm -f /usr/sbin/z-push-{admin,top}
-    ln -s /usr/local/lib/z-push/z-push-admin.php /usr/sbin/z-push-admin
-    ln -s /usr/local/lib/z-push/z-push-top.php /usr/sbin/z-push-top
     echo $VERSION > /usr/local/lib/z-push/version
 fi
 
```
```diff
@@ -106,4 +104,4 @@ restart_service php$(php_version)-fpm
 
 # Fix states after upgrade
 
-hide_output z-push-admin -a fixstates
+hide_output php /usr/local/lib/z-push/z-push-admin.php -a fixstates
```
```diff
@@ -30,7 +30,7 @@ def test(server, description):
		(hostname, "TXT", "\"v=spf1 mx -all\""),
		("mail._domainkey." + hostname, "TXT", "\"v=DKIM1; k=rsa; s=email; \" \"p=__KEY__\""),
		#("_adsp._domainkey." + hostname, "TXT", "\"dkim=all\""),
-		("_dmarc." + hostname, "TXT", "\"v=DMARC1; p=quarantine\""),
+		("_dmarc." + hostname, "TXT", "\"v=DMARC1; p=quarantine;\""),
	]
	return test2(tests, server, description)
 
```
```diff
@@ -48,7 +48,7 @@ def test2(tests, server, description):
	for qname, rtype, expected_answer in tests:
		# do the query and format the result as a string
		try:
-			response = dns.resolver.query(qname, rtype)
+			response = dns.resolver.resolve(qname, rtype)
		except dns.resolver.NoNameservers:
			# host did not have an answer for this query
			print("Could not connect to %s for DNS query." % server)
```
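The tests switch from resolver.query() to resolver.resolve() because dnspython 2.x (the version shipped with newer Ubuntu releases) deprecated query(); resolve() is the same call, though by default it no longer applies the local search list. Minimal usage, with an illustrative domain:

```python
# dnspython >= 2.0: resolver.resolve() replaces the deprecated resolver.query().
import dns.resolver

answers = dns.resolver.resolve("example.com", "TXT")  # domain is illustrative
for rdata in answers:
    print(rdata.to_text())
```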
```diff
@@ -48,7 +48,7 @@ server = smtplib.SMTP_SSL(host)
 ipaddr = socket.gethostbyname(host) # IPv4 only!
 reverse_ip = dns.reversename.from_address(ipaddr) # e.g. "1.0.0.127.in-addr.arpa."
 try:
-	reverse_dns = dns.resolver.query(reverse_ip, 'PTR')[0].target.to_text(omit_final_dot=True) # => hostname
+	reverse_dns = dns.resolver.resolve(reverse_ip, 'PTR')[0].target.to_text(omit_final_dot=True) # => hostname
 except dns.resolver.NXDOMAIN:
	print("Reverse DNS lookup failed for %s. SMTP EHLO name check skipped." % ipaddr)
	reverse_dns = None
```
```diff
@@ -1,3 +0,0 @@
-#!/bin/bash
-# This script has moved.
-management/cli.py "$@"
```
```diff
@@ -8,16 +8,16 @@
 source /etc/mailinabox.conf # load global vars
 
-ADMIN=$(./mail.py user admins | head -n 1)
+ADMIN=$(management/cli.py user admins | head -n 1)
 test -z "$1" || ADMIN=$1
 
 echo I am going to unlock admin features for $ADMIN.
 echo You can provide another user to unlock as the first argument of this script.
 echo
 echo WARNING: you could break mail-in-a-box when fiddling around with Nextcloud\'s admin interface
 echo If in doubt, press CTRL-C to cancel.
 echo
 echo
 echo Press enter to continue.
 read
 
-sudo -u www-data php /usr/local/lib/owncloud/occ group:adduser admin $ADMIN && echo Done.
+sudo -u www-data php /usr/local/lib/owncloud/occ group:adduser admin "$ADMIN" && echo Done.
```