update notary vendors

Signed-off-by: Jessica Frazelle <acidburn@docker.com>

parent 4302c14a64
commit a52a7a6991

89 changed files with 9469 additions and 1875 deletions
11 vendor/src/github.com/docker/notary/.gitignore vendored Normal file

@@ -0,0 +1,11 @@
/cmd/notary-server/notary-server
/cmd/notary-server/local.config.json
/cmd/notary-signer/local.config.json
cover
bin
cross
.cover
*.swp
.idea
*.iml
coverage.out
86 vendor/src/github.com/docker/notary/CONTRIBUTING.md vendored Normal file

@@ -0,0 +1,86 @@
# Contributing to notary

## Before reporting an issue...

### If your problem is with...

- automated builds
- your account on the [Docker Hub](https://hub.docker.com/)
- any other [Docker Hub](https://hub.docker.com/) issue

Then please do not report your issue here - you should instead report it to [https://support.docker.com](https://support.docker.com)

### If you...

- need help setting up notary
- can't figure something out
- are not sure what's going on or what your problem is

Then please do not open an issue here yet - you should first try one of the following support forums:

- irc: #docker-trust on freenode
- mailing-list: <trust@dockerproject.org> or https://groups.google.com/a/dockerproject.org/forum/#!forum/trust

## Reporting an issue properly

By following these simple rules you will get better and faster feedback on your issue.

- search the bugtracker for an already reported issue

### If you found an issue that describes your problem:

- please read other user comments first, and confirm this is the same issue: a given error condition might be indicative of different problems - you may also find a workaround in the comments
- please refrain from adding "same thing here" or "+1" comments
- you don't need to comment on an issue to get notified of updates: just hit the "subscribe" button
- comment if you have some new, technical and relevant information to add to the case

### If you have not found an existing issue that describes your problem:

1. create a new issue, with a succinct title that describes your issue:
   - bad title: "It doesn't work with my docker"
   - good title: "Publish fail: 400 error with E_INVALID_DIGEST"
2. copy the output of:
   - `docker version`
   - `docker info`
   - `docker exec <registry-container> registry -version`
3. copy the command line you used to run `notary` or launch `notaryserver`
4. if relevant, copy your `notaryserver` logs that show the error

## Contributing a patch for a known bug, or a small correction

You should follow the basic GitHub workflow:

1. fork
2. commit a change
3. make sure the tests pass
4. PR

Additionally, you must [sign your commits](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work). It's very simple:

- configure your name with git: `git config user.name "Real Name" && git config user.email mail@example.com`
- sign your commits using `-s`: `git commit -s -m "My commit"`

Some simple rules to ensure quick merge:

- clearly point to the issue(s) you want to fix in your PR comment (e.g., `closes #12345`)
- prefer multiple (smaller) PRs addressing individual issues over a big one trying to address multiple issues at once
- if you need to amend your PR following comments, please squash instead of adding more commits

## Contributing new features

You are heavily encouraged to first discuss what you want to do. You can do so on the irc channel, or by opening an issue that clearly describes the use case you want to fulfill, or the problem you are trying to solve.

If this is a major new feature, you should then submit a proposal that describes your technical solution and reasoning. If you did discuss it first, this will likely be greenlighted very fast. It's advisable to address all feedback on this proposal before starting actual work.

Then you should submit your implementation, clearly linking to the issue (and possible proposal).

Your PR will be reviewed by the community, then ultimately by the project maintainers, before being merged.

It's mandatory to:

- interact respectfully with other community members and maintainers - more generally, you are expected to abide by the [Docker community rules](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#docker-community-guidelines)
- address maintainers' comments and modify your submission accordingly
- write tests for any new code

Complying with these simple rules will greatly accelerate the review process, and will ensure you have a pleasant experience contributing code to notary.
@@ -1,3 +1,4 @@
David Williamson <david.williamson@docker.com> (github: davidwilliamson)
Aaron Lehmann <aaron.lehmann@docker.com> (github: aaronlehmann)
Lewis Marshall <lewis@flynn.io> (github: lmars)
Jonathan Rudenberg <jonathan@flynn.io> (github: titanous)
17 vendor/src/github.com/docker/notary/Dockerfile vendored Normal file

@@ -0,0 +1,17 @@
FROM golang:1.5.1

RUN apt-get update && apt-get install -y \
	libltdl-dev \
	libsqlite3-dev \
	--no-install-recommends \
	&& rm -rf /var/lib/apt/lists/*

RUN go get golang.org/x/tools/cmd/vet \
	&& go get golang.org/x/tools/cmd/cover \
	&& go get github.com/tools/godep

COPY . /go/src/github.com/docker/notary

ENV GOPATH /go/src/github.com/docker/notary/Godeps/_workspace:$GOPATH

WORKDIR /go/src/github.com/docker/notary
23 vendor/src/github.com/docker/notary/Dockerfile.server vendored Normal file

@@ -0,0 +1,23 @@
FROM golang:1.5.1

RUN apt-get update && apt-get install -y \
	libltdl-dev \
	--no-install-recommends \
	&& rm -rf /var/lib/apt/lists/*

EXPOSE 4443

ENV NOTARYPKG github.com/docker/notary
ENV GOPATH /go/src/${NOTARYPKG}/Godeps/_workspace:$GOPATH

COPY . /go/src/github.com/docker/notary

WORKDIR /go/src/${NOTARYPKG}

RUN go install \
	-tags pkcs11 \
	-ldflags "-w -X ${NOTARYPKG}/version.GitCommit=`git rev-parse --short HEAD` -X ${NOTARYPKG}/version.NotaryVersion=`cat NOTARY_VERSION`" \
	${NOTARYPKG}/cmd/notary-server

ENTRYPOINT [ "notary-server" ]
CMD [ "-config", "cmd/notary-server/config.json" ]
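The `-ldflags "-w -X …"` arguments in the `go install` step above stamp the commit hash and version string into package-level variables at link time. A minimal, self-contained sketch of the mechanism (the `main`-package layout and default values here are illustrative, not notary's actual `version` package):

```go
package main

import "fmt"

// NotaryVersion and GitCommit are overwritten at link time via e.g.
//   go build -ldflags "-X main.NotaryVersion=1.0-rc1 -X main.GitCommit=abc1234"
// The defaults below apply only when the flags are omitted.
var (
	NotaryVersion = "dev"
	GitCommit     = "unknown"
)

// versionString formats the stamped values the way a version
// subcommand might print them.
func versionString() string {
	return fmt.Sprintf("notary version %s, commit %s", NotaryVersion, GitCommit)
}

func main() {
	fmt.Println(versionString())
}
```

Because the values are plain `string` variables, no source change or code generation is needed per build; the linker substitutes them directly.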
41 vendor/src/github.com/docker/notary/Dockerfile.signer vendored Normal file

@@ -0,0 +1,41 @@
FROM dockersecurity/golang-softhsm2
MAINTAINER Diogo Monica "diogo@docker.com"

# CHANGE-ME: Default values for SoftHSM2 PIN and SOPIN, used to initialize the first token
ENV NOTARY_SIGNER_PIN="1234"
ENV SOPIN="1234"
ENV LIBDIR="/usr/local/lib/softhsm/"
ENV NOTARY_SIGNER_DEFAULT_ALIAS="timestamp_1"
ENV NOTARY_SIGNER_TIMESTAMP_1="testpassword"

# Install OpenSC and dependencies
RUN apt-get update && apt-get install -y \
	libltdl-dev \
	libpcsclite-dev \
	opensc \
	usbutils \
	--no-install-recommends \
	&& rm -rf /var/lib/apt/lists/*

# Initialize the SoftHSM2 token on slot 0, using the PIN and SOPIN variables
RUN softhsm2-util --init-token --slot 0 --label "test_token" --pin $NOTARY_SIGNER_PIN --so-pin $SOPIN

ENV NOTARYPKG github.com/docker/notary
ENV GOPATH /go/src/${NOTARYPKG}/Godeps/_workspace:$GOPATH

EXPOSE 4443

# Copy the local repo to the expected go path
COPY . /go/src/github.com/docker/notary

WORKDIR /go/src/${NOTARYPKG}

# Install notary-signer
RUN go install \
	-tags pkcs11 \
	-ldflags "-w -X ${NOTARYPKG}/version.GitCommit=`git rev-parse --short HEAD` -X ${NOTARYPKG}/version.NotaryVersion=`cat NOTARY_VERSION`" \
	${NOTARYPKG}/cmd/notary-signer

ENTRYPOINT [ "notary-signer" ]
CMD [ "-config=cmd/notary-signer/config.json" ]
5 vendor/src/github.com/docker/notary/MAINTAINERS vendored Normal file

@@ -0,0 +1,5 @@
David Lawrence <david.lawrence@docker.com> (@endophage)
Ying Li <ying.li@docker.com> (@cyli)
Nathan McCauley <nathan.mccauley@docker.com> (@NathanMcCauley)
Derek McGowan <derek@docker.com> (@dmcgowan)
Diogo Monica <diogo@docker.com> (@diogomonica)
163 vendor/src/github.com/docker/notary/Makefile vendored Normal file

@@ -0,0 +1,163 @@
# Set an output prefix, which is the local directory if not specified
PREFIX?=$(shell pwd)

# Populate version variables
# Add to compile time flags
NOTARY_PKG := github.com/docker/notary
NOTARY_VERSION := $(shell cat NOTARY_VERSION)
GITCOMMIT := $(shell git rev-parse --short HEAD)
GITUNTRACKEDCHANGES := $(shell git status --porcelain --untracked-files=no)
ifneq ($(GITUNTRACKEDCHANGES),)
	GITCOMMIT := $(GITCOMMIT)-dirty
endif
CTIMEVAR=-X $(NOTARY_PKG)/version.GitCommit=$(GITCOMMIT) -X $(NOTARY_PKG)/version.NotaryVersion=$(NOTARY_VERSION)
GO_LDFLAGS=-ldflags "-w $(CTIMEVAR)"
GO_LDFLAGS_STATIC=-ldflags "-w $(CTIMEVAR) -extldflags -static"
GOOSES = darwin freebsd linux
GOARCHS = amd64
NOTARY_BUILDTAGS ?= pkcs11
GO_EXC = go
NOTARYDIR := /go/src/github.com/docker/notary

# check to be sure pkcs11 lib is always imported with a build tag
GO_LIST_PKCS11 := $(shell go list -e -f '{{join .Deps "\n"}}' ./... | xargs go list -e -f '{{if not .Standard}}{{.ImportPath}}{{end}}' | grep -q pkcs11)
ifeq ($(GO_LIST_PKCS11),)
$(info pkcs11 import was not found anywhere without a build tag, yay)
else
$(error You are importing pkcs11 somewhere and not using a build tag)
endif

_empty :=
_space := $(_empty) $(_empty)

# go cover test variables
COVERDIR=.cover
COVERPROFILE=$(COVERDIR)/cover.out
COVERMODE=count
PKGS = $(shell go list ./... | tr '\n' ' ')

GO_VERSION = $(shell go version | awk '{print $$3}')

.PHONY: clean all fmt vet lint build test binaries cross cover docker-images notary-dockerfile
.DELETE_ON_ERROR: cover
.DEFAULT: default

go_version:
ifneq ("$(GO_VERSION)", "go1.5.1")
	$(error Requires go version 1.5.1 - found $(GO_VERSION))
else
	@echo
endif

all: AUTHORS clean fmt vet lint build test binaries

AUTHORS: .git/HEAD
	git log --format='%aN <%aE>' | sort -fu > $@

# This only needs to be generated by hand when cutting full releases.
version/version.go:
	./version/version.sh > $@

${PREFIX}/bin/notary-server: NOTARY_VERSION $(shell find . -type f -name '*.go')
	@echo "+ $@"
	@godep go build -tags ${NOTARY_BUILDTAGS} -o $@ ${GO_LDFLAGS} ./cmd/notary-server

${PREFIX}/bin/notary: NOTARY_VERSION $(shell find . -type f -name '*.go')
	@echo "+ $@"
	@godep go build -tags ${NOTARY_BUILDTAGS} -o $@ ${GO_LDFLAGS} ./cmd/notary

${PREFIX}/bin/notary-signer: NOTARY_VERSION $(shell find . -type f -name '*.go')
	@echo "+ $@"
	@godep go build -tags ${NOTARY_BUILDTAGS} -o $@ ${GO_LDFLAGS} ./cmd/notary-signer

vet: go_version
	@echo "+ $@"
	@test -z "$$(go tool vet -printf=false . 2>&1 | grep -v Godeps/_workspace/src/ | tee /dev/stderr)"

fmt:
	@echo "+ $@"
	@test -z "$$(gofmt -s -l . | grep -v .pb. | grep -v Godeps/_workspace/src/ | tee /dev/stderr)"

lint:
	@echo "+ $@"
	@test -z "$$(golint ./... | grep -v .pb. | grep -v Godeps/_workspace/src/ | tee /dev/stderr)"

build: go_version
	@echo "+ $@"
	@go build -tags "${NOTARY_BUILDTAGS}" -v ${GO_LDFLAGS} ./...

test: TESTOPTS =
test: go_version
	@echo "+ $@ $(TESTOPTS)"
	go test -tags "${NOTARY_BUILDTAGS}" $(TESTOPTS) ./...

test-full: vet lint
	@echo "+ $@"
	go test -tags "${NOTARY_BUILDTAGS}" -v ./...

protos:
	@protoc --go_out=plugins=grpc:. proto/*.proto

# This allows coverage for a package to come from tests in a different package.
# Requires that the following be run first:
#     go get github.com/wadey/gocovmerge; go install github.com/wadey/gocovmerge
define gocover
$(GO_EXC) test $(OPTS) $(TESTOPTS) -covermode="$(COVERMODE)" -coverprofile="$(COVERDIR)/$(subst /,-,$(1)).$(subst $(_space),.,$(NOTARY_BUILDTAGS)).cover" "$(1)" || exit 1;
endef

gen-cover: go_version
	@mkdir -p "$(COVERDIR)"
	$(foreach PKG,$(PKGS),$(call gocover,$(PKG)))

cover: GO_EXC := go
       OPTS = -tags "${NOTARY_BUILDTAGS}" -coverpkg "$(shell ./coverpkg.sh $(1) $(NOTARY_PKG))"
cover: gen-cover covmerge
	@go tool cover -html="$(COVERPROFILE)"

# Codecov knows how to merge multiple coverage files
ci: OPTS = -tags "${NOTARY_BUILDTAGS}" -race -coverpkg "$(shell ./coverpkg.sh $(1) $(NOTARY_PKG))"
    GO_EXC := godep go
ci: gen-cover

covmerge:
	@gocovmerge $(shell ls -1 $(COVERDIR)/* | tr "\n" " ") > $(COVERPROFILE)
	@go tool cover -func="$(COVERPROFILE)"

clean-protos:
	@rm proto/*.pb.go

binaries: go_version ${PREFIX}/bin/notary-server ${PREFIX}/bin/notary ${PREFIX}/bin/notary-signer
	@echo "+ $@"

define template
mkdir -p ${PREFIX}/cross/$(1)/$(2);
GOOS=$(1) GOARCH=$(2) CGO_ENABLED=0 go build -o ${PREFIX}/cross/$(1)/$(2)/notary -a -tags "static_build netgo" -installsuffix netgo ${GO_LDFLAGS_STATIC} ./cmd/notary;
endef

cross: go_version
	$(foreach GOARCH,$(GOARCHS),$(foreach GOOS,$(GOOSES),$(call template,$(GOOS),$(GOARCH))))

notary-dockerfile:
	@docker build --rm --force-rm -t notary .

server-dockerfile:
	@docker build --rm --force-rm -f Dockerfile.server -t notary-server .

signer-dockerfile:
	@docker build --rm --force-rm -f Dockerfile.signer -t notary-signer .

docker-images: notary-dockerfile server-dockerfile signer-dockerfile

shell: notary-dockerfile
	docker run --rm -it -v $(CURDIR)/cross:$(NOTARYDIR)/cross -v $(CURDIR)/bin:$(NOTARYDIR)/bin notary bash

clean:
	@echo "+ $@"
	@rm -rf "$(COVERDIR)"
	@rm -rf "${PREFIX}/bin/notary-server" "${PREFIX}/bin/notary" "${PREFIX}/bin/notary-signer"
1 vendor/src/github.com/docker/notary/NOTARY_VERSION vendored Normal file

@@ -0,0 +1 @@
1.0-rc1
193 vendor/src/github.com/docker/notary/README.md vendored Normal file

@@ -0,0 +1,193 @@
# Notary [![Circle CI](https://circleci.com/gh/docker/notary/tree/master.svg?style=shield)](https://circleci.com/gh/docker/notary/tree/master)

The Notary project comprises a [server](cmd/notary-server) and a [client](cmd/notary) for running and interacting with trusted collections.

Notary aims to make the internet more secure by making it easy for people to publish and verify content. We often rely on TLS to secure our communications with a web server, an approach that is inherently flawed: any compromise of the server enables malicious content to be substituted for the legitimate content.

With Notary, publishers can sign their content offline using keys kept highly secure. Once the publisher is ready to make the content available, they can push their signed trusted collection to a Notary Server.

Consumers, having acquired the publisher's public key through a secure channel, can then communicate with any notary server or (insecure) mirror, relying only on the publisher's key to determine the validity and integrity of the received content.

## Goals

Notary is based on [The Update Framework](http://theupdateframework.com/), a secure general design for the problem of software distribution and updates. By using TUF, notary achieves a number of key advantages:

* **Survivable Key Compromise**: Content publishers must manage keys in order to sign their content. Signing keys may be compromised or lost, so systems must be designed to be flexible and recoverable in the case of key compromise. TUF's notion of key roles is utilized to separate responsibilities across a hierarchy of keys such that loss of any particular key (except the root role) by itself is not fatal to the security of the system.
* **Freshness Guarantees**: Replay attacks are a common problem in designing secure systems, where previously valid payloads are replayed to trick another system. The same problem exists in software update systems, where old signed content can be presented as the most recent. notary makes use of timestamping on publishing so that consumers can know that they are receiving the most up to date content. This is particularly important when dealing with software updates, where old vulnerable versions could be used to attack users.
* **Configurable Trust Thresholds**: Oftentimes there are a large number of publishers that are allowed to publish a particular piece of content. For example, open source projects where there are a number of core maintainers. Trust thresholds can be used so that content consumers require a configurable number of signatures on a piece of content in order to trust it. Using thresholds increases security so that loss of individual signing keys doesn't allow publishing of malicious content.
* **Signing Delegation**: To allow for flexible publishing of trusted collections, a content publisher can delegate part of their collection to another signer. This delegation is represented as signed metadata so that a consumer of the content can verify both the content and the delegation.
* **Use of Existing Distribution**: Notary's trust guarantees are not tied at all to particular distribution channels from which content is delivered. Therefore, trust can be added to any existing content delivery mechanism.
* **Untrusted Mirrors and Transport**: All of the notary metadata can be mirrored and distributed via arbitrary channels.
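The trust-threshold rule above can be sketched in a few lines of Go. This is an illustration of the counting rule only; the function and types are hypothetical, not notary's API: content is trusted only when at least `threshold` distinct trusted keys produced valid signatures over it.

```go
package main

import "fmt"

// meetsThreshold is a hypothetical sketch of TUF-style threshold
// validation: sigValid maps a signer's key ID to whether that key's
// signature over the content verified. Content is trusted only when
// the number of distinct valid signatures reaches the role's threshold.
func meetsThreshold(sigValid map[string]bool, threshold int) bool {
	valid := 0
	for _, ok := range sigValid {
		if ok {
			valid++
		}
	}
	return valid >= threshold
}

func main() {
	sigs := map[string]bool{"keyA": true, "keyB": false, "keyC": true}
	// With two of three signatures valid, a 2-of-3 rule passes
	// but a 3-of-3 rule fails.
	fmt.Println(meetsThreshold(sigs, 2))
	fmt.Println(meetsThreshold(sigs, 3))
}
```

The point of the threshold is visible in the second call: compromising a single key (here `keyB`) is not enough to publish under a role that demands more signatures than the attacker controls.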
# Notary CLI

Notary is a tool for publishing and managing trusted collections of content. Publishers can digitally sign collections and consumers can verify integrity and origin of content. This ability is built on a straightforward key management and signing interface to create signed collections and configure trusted publishers.

## Using Notary
Let's try using notary.

Prerequisites:

- Requirements from the [Compiling Notary Server](#compiling-notary-server) section (such as go 1.5.1)
- [docker and docker-compose](http://docs.docker.com/compose/install/)
- [Notary server configuration](#configuring-notary-server)

As setup, let's build notary and then start up a local notary-server (don't forget to add `127.0.0.1 notary-server` to your `/etc/hosts`, or if using docker-machine, add `$(docker-machine ip) notary-server`).

```sh
make binaries
docker-compose build
docker-compose up -d
```

Note: In order to have notary use the local notary server and development root CA, we can load the local development configuration by appending `-c cmd/notary/config.json` to every command. If you would rather not have to use `-c` on every command, copy `cmd/notary/config.json` and `cmd/notary/root-ca.crt` to `~/.notary`.

First, let's initialize a notary collection called `example.com/scripts`:

```sh
notary init example.com/scripts
```

Now, look at the keys you created as a result of initialization:
```sh
notary key list
```

Cool, now add a local file `install.sh` and call it `v1`:
```sh
notary add example.com/scripts v1 install.sh
```

Wouldn't it be nice if others could know that you've signed this content? Use `publish` to publish your collection to your default notary-server:
```sh
notary publish example.com/scripts
```

Now, others can pull your trusted collection:
```sh
notary list example.com/scripts
```

More importantly, they can verify the content of your script by using `notary verify`:
```sh
curl example.com/install.sh | notary verify example.com/scripts v1 | sh
```

# Notary Server

Notary Server manages TUF data over an HTTP API compatible with the [notary client](../notary/).

It may be configured to use either JWT or HTTP Basic Auth for authentication. Currently it only supports MySQL for storage of the TUF data; we intend to expand this to other storage options.

## Setup for Development

The notary repository comes with Dockerfiles and a docker-compose file to facilitate development. Simply run the following commands to start a notary server with a temporary MySQL database in containers:

```
$ docker-compose build
$ docker-compose up
```

If you are on Mac OS X with boot2docker or kitematic, you'll need to update your hosts file such that the name `notary` is associated with the IP address of your VM (for boot2docker, this can be determined by running `boot2docker ip`; with kitematic, `echo $DOCKER_HOST` should show the IP of the VM). If you are using the default Linux setup, you need to add `127.0.0.1 notary` to your hosts file.

## Successfully connecting over TLS

By default notary-server runs with TLS, with certificates signed by a local CA. In order to be able to successfully connect to it using either `curl` or `openssl`, you will have to use the root CA file in `fixtures/root-ca.crt`.

OpenSSL example:

`openssl s_client -connect notary-server:4443 -CAfile fixtures/root-ca.crt`

## Compiling Notary Server

Prerequisites:

- Go = 1.5.1
- [godep](https://github.com/tools/godep) installed
- libtool development headers installed

Install dependencies by running `godep restore`.

From the root of this git repository, run `make binaries`. This will compile the `notary`, `notary-server`, and `notary-signer` applications and place them in a `bin` directory at the root of the git repository (the `bin` directory is ignored by the .gitignore file).

`notary-signer` depends upon `pkcs11`, which requires that libtool headers be installed (`libtool-dev` on Ubuntu, `libtool-ltdl-devel` on CentOS/RedHat). If you are using Mac OS, you can `brew install libtool`, and run `make binaries` with the following environment variables (assuming a standard installation of Homebrew):

```sh
export CPATH=/usr/local/include:${CPATH}
export LIBRARY_PATH=/usr/local/lib:${LIBRARY_PATH}
```

## Running Notary Server

The `notary-server` application has the following usage:

```
$ bin/notary-server --help
usage: bin/notary-server
  -config="": Path to configuration file
  -debug=false: Enable the debugging server on localhost:8080
```

## Configuring Notary Server

The configuration file must be a json file with the following format:

```json
{
	"server": {
		"addr": ":4443",
		"tls_cert_file": "./fixtures/notary-server.crt",
		"tls_key_file": "./fixtures/notary-server.key"
	},
	"logging": {
		"level": 5
	}
}
```

The pem and key provided in fixtures are purely for local development and testing. For production, you must create your own keypair and certificate, either via the CA of your choice, or as a self-signed certificate.

If using the pem and key provided in fixtures, either:
- Add `fixtures/root-ca.crt` to your trusted root certificates
- Use the default configuration for the notary client that loads the CA root for you by using the flag `-c ./cmd/notary/config.json`
- Disable TLS verification by adding the following option to the notary configuration file in `~/.notary/config.json`:

        "skipTLSVerify": true
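For illustration, assuming `skipTLSVerify` sits at the top level of the client configuration (as the fragment above suggests), a minimal `~/.notary/config.json` would be:

```json
{
	"skipTLSVerify": true
}
```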
Otherwise, you will see TLS errors or X509 errors upon initializing the notary collection:

```
$ notary list diogomonica.com/openvpn
* fatal: Get https://notary-server:4443/v2/: x509: certificate signed by unknown authority

$ notary list diogomonica.com/openvpn -c cmd/notary/config.json
latest   b1df2ad7cbc19f06f08b69b4bcd817649b509f3e5420cdd2245a85144288e26d   4056
```
7 vendor/src/github.com/docker/notary/ROADMAP.md vendored Normal file

@@ -0,0 +1,7 @@
# Roadmap

The Trust project consists of a number of moving parts, of which Notary Server is one. Notary Server is the front-line metadata service that clients interact with. It manages TUF metadata and interacts with a pluggable signing service to issue new TUF timestamp files.

The Notary-signer is provided as our reference implementation of a signing service. It supports HSMs along with Ed25519 software signing.
79 vendor/src/github.com/docker/notary/circle.yml vendored Normal file

@@ -0,0 +1,79 @@
# Pony-up!
machine:
  pre:
    # Install gvm
    - bash < <(curl -s -S -L https://raw.githubusercontent.com/moovweb/gvm/1.0.22/binscripts/gvm-installer)

  post:
    # Install many go versions
    - gvm install go1.5.1 -B --name=stable

  environment:
    # Convenient shortcuts to "common" locations
    CHECKOUT: /home/ubuntu/$CIRCLE_PROJECT_REPONAME
    BASE_DIR: src/github.com/docker/notary
    # Trick circle brainflat "no absolute path" behavior
    BASE_STABLE: ../../../$HOME/.gvm/pkgsets/stable/global/$BASE_DIR
    # Workaround Circle parsing bugs and/or YAML wonkiness
    CIRCLE_PAIN: "mode: set"

  hosts:
    # Not used yet
    fancy: 127.0.0.1

dependencies:
  pre:
    # Copy the code to the gopath of all go versions
    - >
      gvm use stable &&
      mkdir -p "$(dirname $BASE_STABLE)" &&
      cp -R "$CHECKOUT" "$BASE_STABLE"

  override:
    # Install dependencies for every copied clone/go version
    - gvm use stable && go get github.com/tools/godep:
        pwd: $BASE_STABLE

  post:
    # For the stable go version, additionally install linting tools
    - >
      gvm use stable &&
      go get github.com/golang/lint/golint github.com/wadey/gocovmerge &&
      go install github.com/wadey/gocovmerge

test:
  pre:
    # Output the go versions we are going to test
    - gvm use stable && go version

    # CLEAN
    - gvm use stable && make clean:
        pwd: $BASE_STABLE

    # FMT
    - gvm use stable && make fmt:
        pwd: $BASE_STABLE

    # VET
    - gvm use stable && make vet:
        pwd: $BASE_STABLE

    # LINT
    - gvm use stable && make lint:
        pwd: $BASE_STABLE

  override:
    # Test stable, and report
    # hacking this to be parallel
    - case $CIRCLE_NODE_INDEX in 0) gvm use stable && NOTARY_BUILDTAGS=pkcs11 make ci ;; 1) gvm use stable && NOTARY_BUILDTAGS=none make ci ;; esac:
        parallel: true
        timeout: 600
        pwd: $BASE_STABLE

  post:
    - gvm use stable && make covmerge:
        timeout: 600
        pwd: $BASE_STABLE

    # Report to codecov.io
    # - bash <(curl -s https://codecov.io/bash):
    #     pwd: $BASE_STABLE
@@ -1,7 +1,7 @@
 package changelist
 
 import (
-	"github.com/endophage/gotuf/data"
+	"github.com/docker/notary/tuf/data"
 )
 
 // Scopes for TufChanges are simply the TUF roles.

@@ -38,8 +38,8 @@ type TufChange struct {
 // TufRootData represents a modification of the keys associated
 // with a role that appears in the root.json
 type TufRootData struct {
-	Keys     []data.TUFKey `json:"keys"`
-	RoleName string        `json:"role"`
+	Keys     data.KeyList `json:"keys"`
+	RoleName string       `json:"role"`
 }
 
 // NewTufChange initializes a tufChange object
144 vendor/src/github.com/docker/notary/client/client.go vendored

@@ -14,18 +14,18 @@ import (
 	"github.com/docker/notary/client/changelist"
 	"github.com/docker/notary/cryptoservice"
 	"github.com/docker/notary/keystoremanager"
 	"github.com/docker/notary/pkg/passphrase"
 	"github.com/docker/notary/trustmanager"
-	"github.com/endophage/gotuf"
-	tufclient "github.com/endophage/gotuf/client"
-	"github.com/endophage/gotuf/data"
-	tuferrors "github.com/endophage/gotuf/errors"
-	"github.com/endophage/gotuf/keys"
-	"github.com/endophage/gotuf/signed"
-	"github.com/endophage/gotuf/store"
+	"github.com/docker/notary/tuf"
+	tufclient "github.com/docker/notary/tuf/client"
+	"github.com/docker/notary/tuf/data"
+	"github.com/docker/notary/tuf/keys"
+	"github.com/docker/notary/tuf/signed"
+	"github.com/docker/notary/tuf/store"
 )
 
-const maxSize = 5 << 20
+const (
+	maxSize = 5 << 20
+)
 
 func init() {
 	data.SetDefaultExpiryTimes(

@@ -68,8 +68,8 @@ type NotaryRepository struct {
 	baseURL         string
 	tufRepoPath     string
 	fileStore       store.MetadataStore
-	cryptoService   signed.CryptoService
-	tufRepo         *tuf.TufRepo
+	CryptoService   signed.CryptoService
+	tufRepo         *tuf.Repo
 	roundTrip       http.RoundTripper
 	KeyStoreManager *keystoremanager.KeyStoreManager
 }

@@ -97,47 +97,16 @@ func NewTarget(targetName string, targetPath string) (*Target, error) {
 	return &Target{Name: targetName, Hashes: meta.Hashes, Length: meta.Length}, nil
 }
 
-// NewNotaryRepository is a helper method that returns a new notary repository.
-// It takes the base directory under where all the trust files will be stored
-// (usually ~/.docker/trust/).
-func NewNotaryRepository(baseDir, gun, baseURL string, rt http.RoundTripper,
-	passphraseRetriever passphrase.Retriever) (*NotaryRepository, error) {
-
-	keyStoreManager, err := keystoremanager.NewKeyStoreManager(baseDir, passphraseRetriever)
-	if err != nil {
-		return nil, err
-	}
-
-	cryptoService := cryptoservice.NewCryptoService(gun, keyStoreManager.NonRootKeyStore())
-
-	nRepo := &NotaryRepository{
-		gun:             gun,
-		baseDir:         baseDir,
-		baseURL:         baseURL,
-		tufRepoPath:     filepath.Join(baseDir, tufDir, filepath.FromSlash(gun)),
-		cryptoService:   cryptoService,
-		roundTrip:       rt,
-		KeyStoreManager: keyStoreManager,
-	}
-
-	fileStore, err := store.NewFilesystemStore(
-		nRepo.tufRepoPath,
-		"metadata",
-		"json",
-		"",
-	)
-	if err != nil {
-		return nil, err
-	}
-	nRepo.fileStore = fileStore
-
-	return nRepo, nil
-}
-
 // Initialize creates a new repository by using rootKey as the root Key for the
 // TUF repository.
-func (r *NotaryRepository) Initialize(uCryptoService *cryptoservice.UnlockedCryptoService) error {
-	rootCert, err := uCryptoService.GenerateCertificate(r.gun)
+func (r *NotaryRepository) Initialize(rootKeyID string) error {
+	privKey, _, err := r.CryptoService.GetPrivateKey(rootKeyID)
 	if err != nil {
 		return err
 	}
 
+	rootCert, err := cryptoservice.GenerateCertificate(privKey, r.gun)
+
 	if err != nil {
 		return err
 	}
 
@@ -148,27 +117,14 @@ func (r *NotaryRepository) Initialize(uCryptoService *cryptoservice.UnlockedCryp
@@ -148,27 +117,14 @@ func (r *NotaryRepository) Initialize(uCryptoService *cryptoservice.UnlockedCryp
 	// If the key is RSA, we store it as type RSAx509, if it is ECDSA we store it
 	// as ECDSAx509 to allow the gotuf verifiers to correctly decode the
 	// key on verification of signatures.
-	var algorithmType data.KeyAlgorithm
-	algorithm := uCryptoService.PrivKey.Algorithm()
-	switch algorithm {
+	var rootKey data.PublicKey
+	switch privKey.Algorithm() {
 	case data.RSAKey:
-		algorithmType = data.RSAx509Key
+		rootKey = data.NewRSAx509PublicKey(trustmanager.CertToPEM(rootCert))
 	case data.ECDSAKey:
-		algorithmType = data.ECDSAx509Key
+		rootKey = data.NewECDSAx509PublicKey(trustmanager.CertToPEM(rootCert))
 	default:
-		return fmt.Errorf("invalid format for root key: %s", algorithm)
-	}
-
-	// Generate a x509Key using the rootCert as the public key
-	rootKey := data.NewPublicKey(algorithmType, trustmanager.CertToPEM(rootCert))
-
-	// Creates a symlink between the certificate ID and the real public key it
-	// is associated with. This is used to be able to retrieve the root private key
-	// associated with a particular certificate
-	logrus.Debugf("Linking %s to %s.", rootKey.ID(), uCryptoService.ID())
-	err = r.KeyStoreManager.RootKeyStore().Link(uCryptoService.ID()+"_root", rootKey.ID()+"_root")
-	if err != nil {
-		return err
+		return fmt.Errorf("invalid format for root key: %s", privKey.Algorithm())
 	}

 	// All the timestamp keys are generated by the remote server.
@@ -181,23 +137,20 @@ func (r *NotaryRepository) Initialize(uCryptoService *cryptoservice.UnlockedCryp
 		return err
 	}

-	parsedKey := &data.TUFKey{}
-	err = json.Unmarshal(rawTSKey, parsedKey)
+	timestampKey, err := data.UnmarshalPublicKey(rawTSKey)
 	if err != nil {
 		return err
 	}

-	// Turn the JSON timestamp key from the remote server into a TUFKey
-	timestampKey := data.NewPublicKey(parsedKey.Algorithm(), parsedKey.Public())
-	logrus.Debugf("got remote %s timestamp key with keyID: %s", parsedKey.Algorithm(), timestampKey.ID())
+	logrus.Debugf("got remote %s timestamp key with keyID: %s", timestampKey.Algorithm(), timestampKey.ID())

-	// This is currently hardcoding the targets and snapshots keys to ECDSA
-	targetsKey, err := r.cryptoService.Create("targets", data.ECDSAKey)
+	// Targets and snapshot keys are always generated locally.
+	targetsKey, err := r.CryptoService.Create("targets", data.ECDSAKey)
 	if err != nil {
 		return err
 	}
-	snapshotKey, err := r.cryptoService.Create("snapshot", data.ECDSAKey)
+	snapshotKey, err := r.CryptoService.Create("snapshot", data.ECDSAKey)
 	if err != nil {
 		return err
 	}
@@ -214,13 +167,13 @@ func (r *NotaryRepository) Initialize(uCryptoService *cryptoservice.UnlockedCryp
 		return err
 	}

-	r.tufRepo = tuf.NewTufRepo(kdb, r.cryptoService)
+	r.tufRepo = tuf.NewRepo(kdb, r.CryptoService)

 	err = r.tufRepo.InitRoot(false)
 	if err != nil {
 		logrus.Debug("Error on InitRoot: ", err.Error())
 		switch err.(type) {
-		case tuferrors.ErrInsufficientSignatures, trustmanager.ErrPasswordInvalid:
+		case signed.ErrInsufficientSignatures, trustmanager.ErrPasswordInvalid:
 		default:
 			return err
 		}
@@ -236,7 +189,7 @@ func (r *NotaryRepository) Initialize(uCryptoService *cryptoservice.UnlockedCryp
 		return err
 	}

-	return r.saveMetadata(uCryptoService.CryptoService)
+	return r.saveMetadata()
 }

 // AddTarget adds a new target to the repository, forcing a timestamps check from TUF
@@ -399,23 +352,18 @@ func (r *NotaryRepository) Publish() error {
 		if err != nil {
 			return err
 		}
-		rootKeyID := r.tufRepo.Root.Signed.Roles["root"].KeyIDs[0]
-		rootCryptoService, err := r.KeyStoreManager.GetRootCryptoService(rootKeyID)
-		if err != nil {
-			return err
-		}
-		root, err = r.tufRepo.SignRoot(data.DefaultExpires("root"), rootCryptoService.CryptoService)
+		root, err = r.tufRepo.SignRoot(data.DefaultExpires("root"))
 		if err != nil {
 			return err
 		}
 		updateRoot = true
 	}
 	// we will always resign targets and snapshots
-	targets, err := r.tufRepo.SignTargets("targets", data.DefaultExpires("targets"), nil)
+	targets, err := r.tufRepo.SignTargets("targets", data.DefaultExpires("targets"))
 	if err != nil {
 		return err
 	}
-	snapshot, err := r.tufRepo.SignSnapshot(data.DefaultExpires("snapshot"), nil)
+	snapshot, err := r.tufRepo.SignSnapshot(data.DefaultExpires("snapshot"))
 	if err != nil {
 		return err
 	}
@@ -461,7 +409,7 @@ func (r *NotaryRepository) Publish() error {

 func (r *NotaryRepository) bootstrapRepo() error {
 	kdb := keys.NewDB()
-	tufRepo := tuf.NewTufRepo(kdb, r.cryptoService)
+	tufRepo := tuf.NewRepo(kdb, r.CryptoService)

 	logrus.Debugf("Loading trusted collection.")
 	rootJSON, err := r.fileStore.GetMeta("root", 0)
@@ -503,9 +451,10 @@ func (r *NotaryRepository) bootstrapRepo() error {
 	return nil
 }

-func (r *NotaryRepository) saveMetadata(rootCryptoService signed.CryptoService) error {
+func (r *NotaryRepository) saveMetadata() error {
 	logrus.Debugf("Saving changes to Trusted Collection.")
-	signedRoot, err := r.tufRepo.SignRoot(data.DefaultExpires("root"), rootCryptoService)
+
+	signedRoot, err := r.tufRepo.SignRoot(data.DefaultExpires("root"))
 	if err != nil {
 		return err
 	}
@@ -516,7 +465,7 @@ func (r *NotaryRepository) saveMetadata(rootCryptoService signed.CryptoService)

 	targetsToSave := make(map[string][]byte)
 	for t := range r.tufRepo.Targets {
-		signedTargets, err := r.tufRepo.SignTargets(t, data.DefaultExpires("targets"), nil)
+		signedTargets, err := r.tufRepo.SignTargets(t, data.DefaultExpires("targets"))
 		if err != nil {
 			return err
 		}
@@ -527,7 +476,7 @@ func (r *NotaryRepository) saveMetadata(rootCryptoService signed.CryptoService)
 		targetsToSave[t] = targetsJSON
 	}

-	signedSnapshot, err := r.tufRepo.SignSnapshot(data.DefaultExpires("snapshot"), nil)
+	signedSnapshot, err := r.tufRepo.SignSnapshot(data.DefaultExpires("snapshot"))
 	if err != nil {
 		return err
 	}
@@ -587,7 +536,7 @@ func (r *NotaryRepository) bootstrapClient() (*tufclient.Client, error) {
 	}

 	kdb := keys.NewDB()
-	r.tufRepo = tuf.NewTufRepo(kdb, r.cryptoService)
+	r.tufRepo = tuf.NewRepo(kdb, r.CryptoService)

 	signedRoot, err := data.RootFromSigned(root)
 	if err != nil {
@@ -611,7 +560,7 @@ func (r *NotaryRepository) bootstrapClient() (*tufclient.Client, error) {
 // in a changelist until publish is called.
 func (r *NotaryRepository) RotateKeys() error {
 	for _, role := range []string{"targets", "snapshot"} {
-		key, err := r.cryptoService.Create(role, data.ECDSAKey)
+		key, err := r.CryptoService.Create(role, data.ECDSAKey)
 		if err != nil {
 			return err
 		}
@@ -630,14 +579,11 @@ func (r *NotaryRepository) rootFileKeyChange(role, action string, key data.Publi
 	}
 	defer cl.Close()

-	k, ok := key.(*data.TUFKey)
-	if !ok {
-		return errors.New("Invalid key type found during rotation.")
-	}
-
+	kl := make(data.KeyList, 0, 1)
+	kl = append(kl, key)
 	meta := changelist.TufRootData{
 		RoleName: role,
-		Keys:     []data.TUFKey{*k},
+		Keys:     kl,
 	}
 	metaJSON, err := json.Marshal(meta)
 	if err != nil {
@@ -7,10 +7,10 @@ import (

 	"github.com/Sirupsen/logrus"
 	"github.com/docker/notary/client/changelist"
-	tuf "github.com/endophage/gotuf"
-	"github.com/endophage/gotuf/data"
-	"github.com/endophage/gotuf/keys"
-	"github.com/endophage/gotuf/store"
+	tuf "github.com/docker/notary/tuf"
+	"github.com/docker/notary/tuf/data"
+	"github.com/docker/notary/tuf/keys"
+	"github.com/docker/notary/tuf/store"
 )

 // Use this to initialize remote HTTPStores from the config settings
@@ -25,7 +25,7 @@ func getRemoteStore(baseURL, gun string, rt http.RoundTripper) (store.RemoteStor
 	)
 }

-func applyChangelist(repo *tuf.TufRepo, cl changelist.Changelist) error {
+func applyChangelist(repo *tuf.Repo, cl changelist.Changelist) error {
 	it, err := cl.NewIterator()
 	if err != nil {
 		return err
@@ -53,7 +53,7 @@ func applyChangelist(repo *tuf.TufRepo, cl changelist.Changelist) error {
 	return nil
 }

-func applyTargetsChange(repo *tuf.TufRepo, c changelist.Change) error {
+func applyTargetsChange(repo *tuf.Repo, c changelist.Change) error {
 	var err error
 	switch c.Action() {
 	case changelist.ActionCreate:
@@ -77,7 +77,7 @@ func applyTargetsChange(repo *tuf.TufRepo, c changelist.Change) error {
 	return nil
 }

-func applyRootChange(repo *tuf.TufRepo, c changelist.Change) error {
+func applyRootChange(repo *tuf.Repo, c changelist.Change) error {
 	var err error
 	switch c.Type() {
 	case changelist.TypeRootRole:
@@ -88,7 +88,7 @@ func applyRootChange(repo *tuf.TufRepo, c changelist.Change) error {
 	return err // might be nil
 }

-func applyRootRoleChange(repo *tuf.TufRepo, c changelist.Change) error {
+func applyRootRoleChange(repo *tuf.Repo, c changelist.Change) error {
 	switch c.Action() {
 	case changelist.ActionCreate:
 		// replaces all keys for a role
@@ -97,11 +97,7 @@ func applyRootRoleChange(repo *tuf.TufRepo, c changelist.Change) error {
 		if err != nil {
 			return err
 		}
-		k := []data.PublicKey{}
-		for _, key := range d.Keys {
-			k = append(k, data.NewPublicKey(key.Algorithm(), key.Public()))
-		}
-		err = repo.ReplaceBaseKeys(d.RoleName, k...)
+		err = repo.ReplaceBaseKeys(d.RoleName, d.Keys...)
 		if err != nil {
 			return err
 		}
56 vendor/src/github.com/docker/notary/client/repo.go vendored Normal file
@@ -0,0 +1,56 @@
+// +build !pkcs11
+
+package client
+
+import (
+	"fmt"
+	"net/http"
+	"path/filepath"
+
+	"github.com/docker/notary/cryptoservice"
+	"github.com/docker/notary/keystoremanager"
+	"github.com/docker/notary/passphrase"
+	"github.com/docker/notary/trustmanager"
+	"github.com/docker/notary/tuf/store"
+)
+
+// NewNotaryRepository is a helper method that returns a new notary repository.
+// It takes the base directory under where all the trust files will be stored
+// (usually ~/.docker/trust/).
+func NewNotaryRepository(baseDir, gun, baseURL string, rt http.RoundTripper,
+	retriever passphrase.Retriever) (*NotaryRepository, error) {
+	fileKeyStore, err := trustmanager.NewKeyFileStore(baseDir, retriever)
+	if err != nil {
+		return nil, fmt.Errorf("failed to create private key store in directory: %s", baseDir)
+	}
+
+	keyStoreManager, err := keystoremanager.NewKeyStoreManager(baseDir, fileKeyStore)
+	if err != nil {
+		return nil, err
+	}
+
+	cryptoService := cryptoservice.NewCryptoService(gun, keyStoreManager.KeyStore)
+
+	nRepo := &NotaryRepository{
+		gun:             gun,
+		baseDir:         baseDir,
+		baseURL:         baseURL,
+		tufRepoPath:     filepath.Join(baseDir, tufDir, filepath.FromSlash(gun)),
+		CryptoService:   cryptoService,
+		roundTrip:       rt,
+		KeyStoreManager: keyStoreManager,
+	}
+
+	fileStore, err := store.NewFilesystemStore(
+		nRepo.tufRepoPath,
+		"metadata",
+		"json",
+		"",
+	)
+	if err != nil {
+		return nil, err
+	}
+	nRepo.fileStore = fileStore
+
+	return nRepo, nil
+}
61 vendor/src/github.com/docker/notary/client/repo_pkcs11.go vendored Normal file
@@ -0,0 +1,61 @@
+// +build pkcs11
+
+package client
+
+import (
+	"fmt"
+	"net/http"
+	"path/filepath"
+
+	"github.com/docker/notary/cryptoservice"
+	"github.com/docker/notary/keystoremanager"
+	"github.com/docker/notary/passphrase"
+	"github.com/docker/notary/trustmanager"
+	"github.com/docker/notary/trustmanager/yubikey"
+	"github.com/docker/notary/tuf/signed"
+	"github.com/docker/notary/tuf/store"
+)
+
+// NewNotaryRepository is a helper method that returns a new notary repository.
+// It takes the base directory under where all the trust files will be stored
+// (usually ~/.docker/trust/).
+func NewNotaryRepository(baseDir, gun, baseURL string, rt http.RoundTripper,
+	retriever passphrase.Retriever) (*NotaryRepository, error) {
+
+	fileKeyStore, err := trustmanager.NewKeyFileStore(baseDir, retriever)
+	if err != nil {
+		return nil, fmt.Errorf("failed to create private key store in directory: %s", baseDir)
+	}
+
+	keyStoreManager, err := keystoremanager.NewKeyStoreManager(baseDir, fileKeyStore)
+	yubiKeyStore, _ := yubikey.NewYubiKeyStore(fileKeyStore, retriever)
+	var cryptoService signed.CryptoService
+	if yubiKeyStore == nil {
+		cryptoService = cryptoservice.NewCryptoService(gun, keyStoreManager.KeyStore)
+	} else {
+		cryptoService = cryptoservice.NewCryptoService(gun, yubiKeyStore, keyStoreManager.KeyStore)
+	}
+
+	nRepo := &NotaryRepository{
+		gun:             gun,
+		baseDir:         baseDir,
+		baseURL:         baseURL,
+		tufRepoPath:     filepath.Join(baseDir, tufDir, filepath.FromSlash(gun)),
+		CryptoService:   cryptoService,
+		roundTrip:       rt,
+		KeyStoreManager: keyStoreManager,
+	}
+
+	fileStore, err := store.NewFilesystemStore(
+		nRepo.tufRepoPath,
+		"metadata",
+		"json",
+		"",
+	)
+	if err != nil {
+		return nil, err
+	}
+	nRepo.fileStore = fileStore
+
+	return nRepo, nil
+}
7 vendor/src/github.com/docker/notary/const.go vendored Normal file
@@ -0,0 +1,7 @@
+package notary
+
+// application wide constants
+const (
+	PrivKeyPerms = 0700
+	PubCertPerms = 0755
+)
10 vendor/src/github.com/docker/notary/coverpkg.sh vendored Executable file
@@ -0,0 +1,10 @@
+#!/usr/bin/env bash
+
+# Given a subpackage and the containing package, figures out which packages
+# need to be passed to `go test -coverpkg`: this includes all of the
+# subpackage's dependencies within the containing package, as well as the
+# subpackage itself.
+
+DEPENDENCIES="$(go list -f $'{{range $f := .Deps}}{{$f}}\n{{end}}' ${1} | grep ${2})"
+
+echo "${1} ${DEPENDENCIES}" | xargs echo -n | tr ' ' ','
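The last line of coverpkg.sh turns a whitespace/newline-separated package list into the comma-separated form `go test -coverpkg` expects; the same pipeline in isolation, with a hard-coded stand-in for the `go list` output:

```shell
#!/usr/bin/env bash
# Newline-separated package list, standing in for `go list` output.
PACKAGES="pkg/a
pkg/b
pkg/c"

# xargs collapses the lines into one argument list, `echo -n` re-emits them
# space-separated without a trailing newline, and tr swaps spaces for commas.
echo "${PACKAGES}" | xargs echo -n | tr ' ' ','
```

Running this prints `pkg/a,pkg/b,pkg/c`, the shape `-coverpkg` accepts.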
36 vendor/src/github.com/docker/notary/cryptoservice/certificate.go vendored Normal file
@@ -0,0 +1,36 @@
+package cryptoservice
+
+import (
+	"crypto/rand"
+	"crypto/x509"
+	"fmt"
+
+	"github.com/docker/notary/trustmanager"
+	"github.com/docker/notary/tuf/data"
+)
+
+// GenerateCertificate generates an X509 Certificate from a template, given a GUN
+func GenerateCertificate(rootKey data.PrivateKey, gun string) (*x509.Certificate, error) {
+	signer := rootKey.CryptoSigner()
+	if signer == nil {
+		return nil, fmt.Errorf("key type not supported for Certificate generation: %s\n", rootKey.Algorithm())
+	}
+
+	template, err := trustmanager.NewCertificate(gun)
+	if err != nil {
+		return nil, fmt.Errorf("failed to create the certificate template for: %s (%v)", gun, err)
+	}
+
+	derBytes, err := x509.CreateCertificate(rand.Reader, template, template, signer.Public(), signer)
+	if err != nil {
+		return nil, fmt.Errorf("failed to create the certificate for: %s (%v)", gun, err)
+	}
+
+	// Encode the new certificate into PEM
+	cert, err := x509.ParseCertificate(derBytes)
+	if err != nil {
+		return nil, fmt.Errorf("failed to parse the certificate for key: %s (%v)", gun, err)
+	}
+
+	return cert, nil
+}
@@ -1,19 +1,13 @@
 package cryptoservice

 import (
-	"crypto"
-	"crypto/ecdsa"
 	"crypto/rand"
-	"crypto/rsa"
-	"crypto/sha256"
-	"crypto/x509"
 	"fmt"
 	"path/filepath"

 	"github.com/Sirupsen/logrus"
-	"github.com/agl/ed25519"
 	"github.com/docker/notary/trustmanager"
-	"github.com/endophage/gotuf/data"
+	"github.com/docker/notary/tuf/data"
 )

 const (
@@ -23,17 +17,17 @@ const (
 // CryptoService implements Sign and Create, holding a specific GUN and keystore to
 // operate on
 type CryptoService struct {
-	gun      string
-	keyStore trustmanager.KeyStore
+	gun       string
+	keyStores []trustmanager.KeyStore
 }

 // NewCryptoService returns an instance of CryptoService
-func NewCryptoService(gun string, keyStore trustmanager.KeyStore) *CryptoService {
-	return &CryptoService{gun: gun, keyStore: keyStore}
+func NewCryptoService(gun string, keyStores ...trustmanager.KeyStore) *CryptoService {
+	return &CryptoService{gun: gun, keyStores: keyStores}
 }

 // Create is used to generate keys for targets, snapshots and timestamps
-func (ccs *CryptoService) Create(role string, algorithm data.KeyAlgorithm) (data.PublicKey, error) {
+func (cs *CryptoService) Create(role, algorithm string) (data.PublicKey, error) {
 	var privKey data.PrivateKey
 	var err error

@@ -59,72 +53,90 @@ func (ccs *CryptoService) Create(role string, algorithm data.KeyAlgorithm) (data
 	logrus.Debugf("generated new %s key for role: %s and keyID: %s", algorithm, role, privKey.ID())

-	// Store the private key into our keystore with the name being: /GUN/ID.key with an alias of role
-	err = ccs.keyStore.AddKey(filepath.Join(ccs.gun, privKey.ID()), role, privKey)
+	var keyPath string
+	if role == data.CanonicalRootRole {
+		keyPath = privKey.ID()
+	} else {
+		keyPath = filepath.Join(cs.gun, privKey.ID())
+	}
+
+	for _, ks := range cs.keyStores {
+		err = ks.AddKey(keyPath, role, privKey)
+		if err == nil {
+			return data.PublicKeyFromPrivate(privKey), nil
+		}
+	}
 	if err != nil {
 		return nil, fmt.Errorf("failed to add key to filestore: %v", err)
 	}
-	return data.PublicKeyFromPrivate(privKey), nil
+	return nil, fmt.Errorf("keystores would not accept new private keys for unknown reasons")
 }

+// GetPrivateKey returns a private key by ID. It tries to get the key first
+// without a GUN (in which case it's a root key). If that fails, try to get
+// the key with the GUN (non-root key).
+// If that fails, then we don't have the key.
+func (cs *CryptoService) GetPrivateKey(keyID string) (k data.PrivateKey, role string, err error) {
+	keyPaths := []string{keyID, filepath.Join(cs.gun, keyID)}
+	for _, ks := range cs.keyStores {
+		for _, keyPath := range keyPaths {
+			k, role, err = ks.GetKey(keyPath)
+			if err != nil {
+				continue
+			}
+			return
+		}
+	}
+	return // returns whatever the final values were
+}
+
 // GetKey returns a key by ID
-func (ccs *CryptoService) GetKey(keyID string) data.PublicKey {
-	key, _, err := ccs.keyStore.GetKey(keyID)
+func (cs *CryptoService) GetKey(keyID string) data.PublicKey {
+	privKey, _, err := cs.GetPrivateKey(keyID)
 	if err != nil {
 		return nil
 	}
-	return data.PublicKeyFromPrivate(key)
+	return data.PublicKeyFromPrivate(privKey)
 }

 // RemoveKey deletes a key by ID
-func (ccs *CryptoService) RemoveKey(keyID string) error {
-	return ccs.keyStore.RemoveKey(keyID)
+func (cs *CryptoService) RemoveKey(keyID string) (err error) {
+	keyPaths := []string{keyID, filepath.Join(cs.gun, keyID)}
+	for _, ks := range cs.keyStores {
+		for _, keyPath := range keyPaths {
+			ks.RemoveKey(keyPath)
+		}
+	}
+	return // returns whatever the final values were
 }

 // Sign returns the signatures for the payload with a set of keyIDs. It ignores
 // errors to sign and expects the called to validate if the number of returned
 // signatures is adequate.
-func (ccs *CryptoService) Sign(keyIDs []string, payload []byte) ([]data.Signature, error) {
+func (cs *CryptoService) Sign(keyIDs []string, payload []byte) ([]data.Signature, error) {
 	signatures := make([]data.Signature, 0, len(keyIDs))
-	for _, keyid := range keyIDs {
-		// ccs.gun will be empty if this is the root key
-		keyName := filepath.Join(ccs.gun, keyid)
-
-		var privKey data.PrivateKey
-		var err error
-
-		privKey, _, err = ccs.keyStore.GetKey(keyName)
+	for _, keyID := range keyIDs {
+		privKey, _, err := cs.GetPrivateKey(keyID)
 		if err != nil {
-			logrus.Debugf("error attempting to retrieve key ID: %s, %v", keyid, err)
-			return nil, err
+			logrus.Debugf("error attempting to retrieve private key: %s, %v", keyID, err)
+			continue
 		}

-		algorithm := privKey.Algorithm()
-		var sigAlgorithm data.SigAlgorithm
-		var sig []byte
-
-		switch algorithm {
-		case data.RSAKey:
-			sig, err = rsaSign(privKey, payload)
-			sigAlgorithm = data.RSAPSSSignature
-		case data.ECDSAKey:
-			sig, err = ecdsaSign(privKey, payload)
-			sigAlgorithm = data.ECDSASignature
-		case data.ED25519Key:
-			// ED25519 does not operate on a SHA256 hash
-			sig, err = ed25519Sign(privKey, payload)
-			sigAlgorithm = data.EDDSASignature
-		}
+		sigAlgo := privKey.SignatureAlgorithm()
+		sig, err := privKey.Sign(rand.Reader, payload, nil)
 		if err != nil {
-			logrus.Debugf("ignoring error attempting to %s sign with keyID: %s, %v", algorithm, keyid, err)
-			return nil, err
+			logrus.Debugf("ignoring error attempting to %s sign with keyID: %s, %v",
+				privKey.Algorithm(), keyID, err)
+			continue
 		}

-		logrus.Debugf("appending %s signature with Key ID: %s", algorithm, keyid)
+		logrus.Debugf("appending %s signature with Key ID: %s", privKey.Algorithm(), keyID)

 		// Append signatures to result array
 		signatures = append(signatures, data.Signature{
-			KeyID:     keyid,
-			Method:    sigAlgorithm,
+			KeyID:     keyID,
+			Method:    sigAlgo,
 			Signature: sig[:],
 		})
 	}
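The new `GetPrivateKey` above probes every configured keystore, trying the bare key ID (a root key) before the GUN-scoped path (a non-root key). A minimal sketch of that fallback order with a toy map-backed store; the `keyStore` type and `getPrivateKey` helper are hypothetical stand-ins, not notary APIs:

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"
)

// keyStore is a toy stand-in for trustmanager.KeyStore: a map from
// key path to private key material.
type keyStore map[string]string

func (ks keyStore) GetKey(path string) (string, error) {
	if k, ok := ks[path]; ok {
		return k, nil
	}
	return "", errors.New("no key found")
}

// getPrivateKey mirrors the lookup order above: for each store, try the
// bare key ID first (root keys), then the GUN-prefixed path (non-root
// keys). The last error is carried out of the loops if nothing matches.
func getPrivateKey(gun, keyID string, stores []keyStore) (string, error) {
	keyPaths := []string{keyID, filepath.Join(gun, keyID)}
	var err error
	for _, ks := range stores {
		for _, keyPath := range keyPaths {
			var k string
			k, err = ks.GetKey(keyPath)
			if err != nil {
				continue
			}
			return k, nil
		}
	}
	return "", err
}

func main() {
	hardware := keyStore{}                                    // empty first store (e.g. no Yubikey)
	files := keyStore{"docker.io/app/abc123": "pem-material"} // GUN-scoped file key
	k, err := getPrivateKey("docker.io/app", "abc123", []keyStore{hardware, files})
	fmt.Println(k, err)
}
```

The fallback means callers like `Sign` no longer need to know whether a key ID names a root key, a GUN-scoped key, or which store holds it.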
@@ -132,68 +144,26 @@ func (ccs *CryptoService) Sign(keyIDs []string, payload []byte) ([]data.Signatur
 	return signatures, nil
 }

-func rsaSign(privKey data.PrivateKey, message []byte) ([]byte, error) {
-	if privKey.Algorithm() != data.RSAKey {
-		return nil, fmt.Errorf("private key type not supported: %s", privKey.Algorithm())
-	}
-
-	hashed := sha256.Sum256(message)
-
-	// Create an rsa.PrivateKey out of the private key bytes
-	rsaPrivKey, err := x509.ParsePKCS1PrivateKey(privKey.Private())
-	if err != nil {
-		return nil, err
-	}
-
-	// Use the RSA key to RSASSA-PSS sign the data
-	sig, err := rsa.SignPSS(rand.Reader, rsaPrivKey, crypto.SHA256, hashed[:], &rsa.PSSOptions{SaltLength: rsa.PSSSaltLengthEqualsHash})
-	if err != nil {
-		return nil, err
-	}
-
-	return sig, nil
-}
-
-func ecdsaSign(privKey data.PrivateKey, message []byte) ([]byte, error) {
-	if privKey.Algorithm() != data.ECDSAKey {
-		return nil, fmt.Errorf("private key type not supported: %s", privKey.Algorithm())
-	}
-
-	hashed := sha256.Sum256(message)
-
-	// Create an ecdsa.PrivateKey out of the private key bytes
-	ecdsaPrivKey, err := x509.ParseECPrivateKey(privKey.Private())
-	if err != nil {
-		return nil, err
-	}
-
-	// Use the ECDSA key to sign the data
-	r, s, err := ecdsa.Sign(rand.Reader, ecdsaPrivKey, hashed[:])
-	if err != nil {
-		return nil, err
-	}
-
-	rBytes, sBytes := r.Bytes(), s.Bytes()
-	octetLength := (ecdsaPrivKey.Params().BitSize + 7) >> 3
-
-	// MUST include leading zeros in the output
-	rBuf := make([]byte, octetLength-len(rBytes), octetLength)
-	sBuf := make([]byte, octetLength-len(sBytes), octetLength)
-
-	rBuf = append(rBuf, rBytes...)
-	sBuf = append(sBuf, sBytes...)
-
-	return append(rBuf, sBuf...), nil
-}
-
-func ed25519Sign(privKey data.PrivateKey, message []byte) ([]byte, error) {
-	if privKey.Algorithm() != data.ED25519Key {
-		return nil, fmt.Errorf("private key type not supported: %s", privKey.Algorithm())
-	}
-
-	priv := [ed25519.PrivateKeySize]byte{}
-	copy(priv[:], privKey.Private()[ed25519.PublicKeySize:])
-	sig := ed25519.Sign(&priv, message)
-
-	return sig[:], nil
-}
+// ListKeys returns a list of key IDs valid for the given role
+func (cs *CryptoService) ListKeys(role string) []string {
+	var res []string
+	for _, ks := range cs.keyStores {
+		for k, r := range ks.ListKeys() {
+			if r == role {
+				res = append(res, k)
+			}
+		}
+	}
+	return res
+}
+
+// ListAllKeys returns a map of key IDs to role
+func (cs *CryptoService) ListAllKeys() map[string]string {
+	res := make(map[string]string)
+	for _, ks := range cs.keyStores {
+		for k, r := range ks.ListKeys() {
+			res[k] = r // keys are content addressed so don't care about overwrites
+		}
+	}
+	return res
+}
326 vendor/src/github.com/docker/notary/cryptoservice/import_export.go vendored Normal file
@ -0,0 +1,326 @@
|
|||
package cryptoservice
|
||||
|
||||
import (
|
||||
"archive/zip"
|
||||
"crypto/x509"
|
||||
"encoding/pem"
|
||||
"errors"
|
||||
"io"
|
||||
"io/ioutil"
|
||||
"os"
|
||||
"path/filepath"
|
||||
"strings"
|
||||
|
||||
"github.com/docker/notary/passphrase"
|
||||
"github.com/docker/notary/trustmanager"
|
||||
)
|
||||
|
||||
const zipMadeByUNIX = 3 << 8
|
||||
|
||||
var (
|
||||
// ErrNoValidPrivateKey is returned if a key being imported doesn't
|
||||
// look like a private key
|
||||
ErrNoValidPrivateKey = errors.New("no valid private key found")
|
||||
|
||||
// ErrRootKeyNotEncrypted is returned if a root key being imported is
|
||||
// unencrypted
|
||||
ErrRootKeyNotEncrypted = errors.New("only encrypted root keys may be imported")
|
||||
|
||||
// ErrNoKeysFoundForGUN is returned if no keys are found for the
|
||||
// specified GUN during export
|
||||
ErrNoKeysFoundForGUN = errors.New("no keys found for specified GUN")
|
||||
)
|
||||
|
||||
// ExportRootKey exports the specified root key to an io.Writer in PEM format.
|
||||
// The key's existing encryption is preserved.
|
||||
func (cs *CryptoService) ExportRootKey(dest io.Writer, keyID string) error {
|
||||
var (
|
||||
pemBytes []byte
|
||||
err error
|
||||
)
|
||||
|
||||
for _, ks := range cs.keyStores {
|
||||
pemBytes, err = ks.ExportKey(keyID)
|
||||
if err != nil {
|
||||
continue
|
||||
}
|
||||
}
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
nBytes, err := dest.Write(pemBytes)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
if nBytes != len(pemBytes) {
|
||||
return errors.New("Unable to finish writing exported key.")
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
// ExportRootKeyReencrypt exports the specified root key to an io.Writer in
// PEM format. The key is reencrypted with a new passphrase.
func (cs *CryptoService) ExportRootKeyReencrypt(dest io.Writer, keyID string, newPassphraseRetriever passphrase.Retriever) error {
	privateKey, role, err := cs.GetPrivateKey(keyID)
	if err != nil {
		return err
	}

	// Create temporary keystore to use as a staging area
	tempBaseDir, err := ioutil.TempDir("", "notary-key-export-")
	if err != nil {
		return err
	}
	defer os.RemoveAll(tempBaseDir)

	tempKeyStore, err := trustmanager.NewKeyFileStore(tempBaseDir, newPassphraseRetriever)
	if err != nil {
		return err
	}

	err = tempKeyStore.AddKey(keyID, role, privateKey)
	if err != nil {
		return err
	}

	pemBytes, err := tempKeyStore.ExportKey(keyID)
	if err != nil {
		return err
	}
	nBytes, err := dest.Write(pemBytes)
	if err != nil {
		return err
	}
	if nBytes != len(pemBytes) {
		return errors.New("Unable to finish writing exported key.")
	}
	return nil
}

// ImportRootKey imports a root key in PEM format from an io.Reader.
// It prompts for the key's passphrase to verify the data and to determine
// the key ID.
func (cs *CryptoService) ImportRootKey(source io.Reader) error {
	pemBytes, err := ioutil.ReadAll(source)
	if err != nil {
		return err
	}

	if err = checkRootKeyIsEncrypted(pemBytes); err != nil {
		return err
	}

	for _, ks := range cs.keyStores {
		// don't redeclare err, we want the value carried out of the loop
		if err = ks.ImportKey(pemBytes, "root"); err == nil {
			return nil // bail on the first keystore we import to
		}
	}

	return err
}

// ExportAllKeys exports all keys to an io.Writer in zip format.
// newPassphraseRetriever will be used to obtain passphrases to use to encrypt the existing keys.
func (cs *CryptoService) ExportAllKeys(dest io.Writer, newPassphraseRetriever passphrase.Retriever) error {
	tempBaseDir, err := ioutil.TempDir("", "notary-key-export-")
	if err != nil {
		return err
	}
	defer os.RemoveAll(tempBaseDir)

	// Create temporary keystore to use as a staging area
	tempKeyStore, err := trustmanager.NewKeyFileStore(tempBaseDir, newPassphraseRetriever)
	if err != nil {
		return err
	}

	for _, ks := range cs.keyStores {
		if err := moveKeys(ks, tempKeyStore); err != nil {
			return err
		}
	}

	zipWriter := zip.NewWriter(dest)

	if err := addKeysToArchive(zipWriter, tempKeyStore); err != nil {
		return err
	}

	zipWriter.Close()

	return nil
}

// ImportKeysZip imports keys from a zip file provided as a zip.Reader. The
// keys in the root_keys directory are left encrypted, but the other keys are
// decrypted with the specified passphrase.
func (cs *CryptoService) ImportKeysZip(zipReader zip.Reader) error {
	// Temporarily store the keys in maps, so we can bail early if there's
	// an error (for example, wrong passphrase), without leaving the key
	// store in an inconsistent state
	newKeys := make(map[string][]byte)

	// Iterate through the files in the archive. Don't add the keys yet,
	// so we can bail early if any of them fails to read or validate
	for _, f := range zipReader.File {
		fNameTrimmed := strings.TrimSuffix(f.Name, filepath.Ext(f.Name))

		rc, err := f.Open()
		if err != nil {
			return err
		}
		defer rc.Close()

		fileBytes, err := ioutil.ReadAll(rc)
		if err != nil {
			return err
		}

		// Note that using / as a separator is okay here - the zip
		// package guarantees that the separator will be /
		if strings.HasSuffix(fNameTrimmed, "_root") {
			if err = checkRootKeyIsEncrypted(fileBytes); err != nil {
				return err
			}
		}
		newKeys[fNameTrimmed] = fileBytes
	}

	for keyName, pemBytes := range newKeys {
		if strings.HasSuffix(keyName, "_root") {
			keyName = "root"
		}
		// try to import the key to all key stores. As long as one of them
		// succeeds, consider it a success
		var tmpErr error
		for _, ks := range cs.keyStores {
			if err := ks.ImportKey(pemBytes, keyName); err != nil {
				tmpErr = err
			} else {
				tmpErr = nil
				break
			}
		}
		if tmpErr != nil {
			return tmpErr
		}
	}

	return nil
}

// ExportKeysByGUN exports all keys associated with a specified GUN to an
// io.Writer in zip format. passphraseRetriever is used to select new passphrases to use to
// encrypt the keys.
func (cs *CryptoService) ExportKeysByGUN(dest io.Writer, gun string, passphraseRetriever passphrase.Retriever) error {
	tempBaseDir, err := ioutil.TempDir("", "notary-key-export-")
	if err != nil {
		return err
	}
	defer os.RemoveAll(tempBaseDir)

	// Create temporary keystore to use as a staging area
	tempKeyStore, err := trustmanager.NewKeyFileStore(tempBaseDir, passphraseRetriever)
	if err != nil {
		return err
	}

	for _, ks := range cs.keyStores {
		if err := moveKeysByGUN(ks, tempKeyStore, gun); err != nil {
			return err
		}
	}

	if len(tempKeyStore.ListKeys()) == 0 {
		return ErrNoKeysFoundForGUN
	}

	zipWriter := zip.NewWriter(dest)

	if err := addKeysToArchive(zipWriter, tempKeyStore); err != nil {
		return err
	}

	zipWriter.Close()

	return nil
}

func moveKeysByGUN(oldKeyStore, newKeyStore trustmanager.KeyStore, gun string) error {
	for relKeyPath := range oldKeyStore.ListKeys() {
		// Skip keys that aren't associated with this GUN
		if !strings.HasPrefix(relKeyPath, filepath.FromSlash(gun)) {
			continue
		}

		privKey, alias, err := oldKeyStore.GetKey(relKeyPath)
		if err != nil {
			return err
		}

		err = newKeyStore.AddKey(relKeyPath, alias, privKey)
		if err != nil {
			return err
		}
	}

	return nil
}

func moveKeys(oldKeyStore, newKeyStore trustmanager.KeyStore) error {
	for f := range oldKeyStore.ListKeys() {
		privateKey, alias, err := oldKeyStore.GetKey(f)
		if err != nil {
			return err
		}

		err = newKeyStore.AddKey(f, alias, privateKey)
		if err != nil {
			return err
		}
	}

	return nil
}

func addKeysToArchive(zipWriter *zip.Writer, newKeyStore *trustmanager.KeyFileStore) error {
	for _, relKeyPath := range newKeyStore.ListFiles() {
		fullKeyPath := filepath.Join(newKeyStore.BaseDir(), relKeyPath)

		fi, err := os.Lstat(fullKeyPath)
		if err != nil {
			return err
		}

		infoHeader, err := zip.FileInfoHeader(fi)
		if err != nil {
			return err
		}

		infoHeader.Name = relKeyPath

		zipFileEntryWriter, err := zipWriter.CreateHeader(infoHeader)
		if err != nil {
			return err
		}

		fileContents, err := ioutil.ReadFile(fullKeyPath)
		if err != nil {
			return err
		}

		if _, err = zipFileEntryWriter.Write(fileContents); err != nil {
			return err
		}
	}

	return nil
}

// checkRootKeyIsEncrypted makes sure the root key is encrypted. We have
// internal assumptions that depend on this.
func checkRootKeyIsEncrypted(pemBytes []byte) error {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return ErrNoValidPrivateKey
	}

	if !x509.IsEncryptedPEMBlock(block) {
		return ErrRootKeyNotEncrypted
	}

	return nil
}
@@ -1,83 +0,0 @@
package cryptoservice

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"fmt"

	"github.com/docker/notary/trustmanager"
	"github.com/endophage/gotuf/data"
	"github.com/endophage/gotuf/signed"
)

// UnlockedCryptoService encapsulates a private key and a cryptoservice that
// uses that private key, providing convinience methods for generation of
// certificates.
type UnlockedCryptoService struct {
	PrivKey       data.PrivateKey
	CryptoService signed.CryptoService
}

// NewUnlockedCryptoService creates an UnlockedCryptoService instance
func NewUnlockedCryptoService(privKey data.PrivateKey, cryptoService signed.CryptoService) *UnlockedCryptoService {
	return &UnlockedCryptoService{
		PrivKey:       privKey,
		CryptoService: cryptoService,
	}
}

// ID gets a consistent ID based on the PrivateKey bytes and algorithm type
func (ucs *UnlockedCryptoService) ID() string {
	return ucs.PublicKey().ID()
}

// PublicKey Returns the public key associated with the private key
func (ucs *UnlockedCryptoService) PublicKey() data.PublicKey {
	return data.PublicKeyFromPrivate(ucs.PrivKey)
}

// GenerateCertificate generates an X509 Certificate from a template, given a GUN
func (ucs *UnlockedCryptoService) GenerateCertificate(gun string) (*x509.Certificate, error) {
	algorithm := ucs.PrivKey.Algorithm()
	var publicKey crypto.PublicKey
	var privateKey crypto.PrivateKey
	var err error
	switch algorithm {
	case data.RSAKey:
		var rsaPrivateKey *rsa.PrivateKey
		rsaPrivateKey, err = x509.ParsePKCS1PrivateKey(ucs.PrivKey.Private())
		privateKey = rsaPrivateKey
		publicKey = rsaPrivateKey.Public()
	case data.ECDSAKey:
		var ecdsaPrivateKey *ecdsa.PrivateKey
		ecdsaPrivateKey, err = x509.ParseECPrivateKey(ucs.PrivKey.Private())
		privateKey = ecdsaPrivateKey
		publicKey = ecdsaPrivateKey.Public()
	default:
		return nil, fmt.Errorf("only RSA or ECDSA keys are currently supported. Found: %s", algorithm)
	}
	if err != nil {
		return nil, fmt.Errorf("failed to parse root key: %s (%v)", gun, err)
	}

	template, err := trustmanager.NewCertificate(gun)
	if err != nil {
		return nil, fmt.Errorf("failed to create the certificate template for: %s (%v)", gun, err)
	}

	derBytes, err := x509.CreateCertificate(rand.Reader, template, template, publicKey, privateKey)
	if err != nil {
		return nil, fmt.Errorf("failed to create the certificate for: %s (%v)", gun, err)
	}

	// Encode the new certificate into PEM
	cert, err := x509.ParseCertificate(derBytes)
	if err != nil {
		return nil, fmt.Errorf("failed to parse the certificate for key: %s (%v)", gun, err)
	}

	return cert, nil
}
23
vendor/src/github.com/docker/notary/docker-compose.yml
vendored
Normal file
@@ -0,0 +1,23 @@
notaryserver:
  build: .
  dockerfile: Dockerfile.server
  links:
    - notarymysql
    - notarysigner
  ports:
    - "8080"
    - "4443:4443"
  environment:
    SERVICE_NAME: notary
notarysigner:
  volumes:
    - /dev/bus/usb/003/010:/dev/bus/usb/002/010
    - /var/run/pcscd/pcscd.comm:/var/run/pcscd/pcscd.comm
  build: .
  dockerfile: Dockerfile.signer
  links:
    - notarymysql
notarymysql:
  build: ./notarymysql/
  ports:
    - "3306:3306"
@@ -1,402 +0,0 @@
package keystoremanager

import (
	"archive/zip"
	"crypto/x509"
	"encoding/pem"
	"errors"
	"io"
	"io/ioutil"
	"os"
	"path/filepath"
	"strings"

	"github.com/Sirupsen/logrus"
	"github.com/docker/notary/pkg/passphrase"
	"github.com/docker/notary/trustmanager"
)

const (
	zipSymlinkAttr = 0xA1ED0000
	zipMadeByUNIX  = 3 << 8
)

var (
	// ErrNoValidPrivateKey is returned if a key being imported doesn't
	// look like a private key
	ErrNoValidPrivateKey = errors.New("no valid private key found")

	// ErrRootKeyNotEncrypted is returned if a root key being imported is
	// unencrypted
	ErrRootKeyNotEncrypted = errors.New("only encrypted root keys may be imported")

	// ErrNoKeysFoundForGUN is returned if no keys are found for the
	// specified GUN during export
	ErrNoKeysFoundForGUN = errors.New("no keys found for specified GUN")
)

// ExportRootKey exports the specified root key to an io.Writer in PEM format.
// The key's existing encryption is preserved.
func (km *KeyStoreManager) ExportRootKey(dest io.Writer, keyID string) error {
	pemBytes, err := km.rootKeyStore.Get(keyID + "_root")
	if err != nil {
		return err
	}

	_, err = dest.Write(pemBytes)
	return err
}

// ExportRootKeyReencrypt exports the specified root key to an io.Writer in
// PEM format. The key is reencrypted with a new passphrase.
func (km *KeyStoreManager) ExportRootKeyReencrypt(dest io.Writer, keyID string, newPassphraseRetriever passphrase.Retriever) error {
	privateKey, alias, err := km.rootKeyStore.GetKey(keyID)
	if err != nil {
		return err
	}

	// Create temporary keystore to use as a staging area
	tempBaseDir, err := ioutil.TempDir("", "notary-key-export-")
	defer os.RemoveAll(tempBaseDir)

	privRootKeysSubdir := filepath.Join(privDir, rootKeysSubdir)
	tempRootKeysPath := filepath.Join(tempBaseDir, privRootKeysSubdir)
	tempRootKeyStore, err := trustmanager.NewKeyFileStore(tempRootKeysPath, newPassphraseRetriever)
	if err != nil {
		return err
	}

	err = tempRootKeyStore.AddKey(keyID, alias, privateKey)
	if err != nil {
		return err
	}

	pemBytes, err := tempRootKeyStore.Get(keyID + "_" + alias)
	if err != nil {
		return err
	}

	_, err = dest.Write(pemBytes)
	return err
}

// checkRootKeyIsEncrypted makes sure the root key is encrypted. We have
// internal assumptions that depend on this.
func checkRootKeyIsEncrypted(pemBytes []byte) error {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return ErrNoValidPrivateKey
	}

	if !x509.IsEncryptedPEMBlock(block) {
		return ErrRootKeyNotEncrypted
	}

	return nil
}

// ImportRootKey imports a root in PEM format key from an io.Reader
// The key's existing encryption is preserved. The keyID parameter is
// necessary because otherwise we'd need the passphrase to decrypt the key
// in order to compute the ID.
func (km *KeyStoreManager) ImportRootKey(source io.Reader, keyID string) error {
	pemBytes, err := ioutil.ReadAll(source)
	if err != nil {
		return err
	}

	if err = checkRootKeyIsEncrypted(pemBytes); err != nil {
		return err
	}

	if err = km.rootKeyStore.Add(keyID+"_root", pemBytes); err != nil {
		return err
	}

	return err
}

func moveKeys(oldKeyStore, newKeyStore *trustmanager.KeyFileStore) error {
	// List all files but no symlinks
	for f := range oldKeyStore.ListKeys() {
		privateKey, alias, err := oldKeyStore.GetKey(f)
		if err != nil {
			return err
		}

		err = newKeyStore.AddKey(f, alias, privateKey)

		if err != nil {
			return err
		}
	}

	// Recreate symlinks
	for _, relKeyPath := range oldKeyStore.ListFiles(true) {
		fullKeyPath := filepath.Join(oldKeyStore.BaseDir(), relKeyPath)

		fi, err := os.Lstat(fullKeyPath)
		if err != nil {
			return err
		}

		if (fi.Mode() & os.ModeSymlink) != 0 {
			target, err := os.Readlink(fullKeyPath)
			if err != nil {
				return err
			}
			os.Symlink(target, filepath.Join(newKeyStore.BaseDir(), relKeyPath))
		}
	}

	return nil
}

func addKeysToArchive(zipWriter *zip.Writer, newKeyStore *trustmanager.KeyFileStore, subDir string) error {
	for _, relKeyPath := range newKeyStore.ListFiles(true) {
		fullKeyPath := filepath.Join(newKeyStore.BaseDir(), relKeyPath)

		fi, err := os.Lstat(fullKeyPath)
		if err != nil {
			return err
		}

		infoHeader, err := zip.FileInfoHeader(fi)
		if err != nil {
			return err
		}

		infoHeader.Name = filepath.Join(subDir, relKeyPath)

		// Is this a symlink? If so, encode properly in the zip file.
		if (fi.Mode() & os.ModeSymlink) != 0 {
			infoHeader.CreatorVersion = zipMadeByUNIX
			infoHeader.ExternalAttrs = zipSymlinkAttr

			zipFileEntryWriter, err := zipWriter.CreateHeader(infoHeader)
			if err != nil {
				return err
			}

			target, err := os.Readlink(fullKeyPath)
			if err != nil {
				return err
			}

			// Write relative path
			if _, err = zipFileEntryWriter.Write([]byte(target)); err != nil {
				return err
			}
		} else {
			zipFileEntryWriter, err := zipWriter.CreateHeader(infoHeader)
			if err != nil {
				return err
			}

			fileContents, err := ioutil.ReadFile(fullKeyPath)
			if err != nil {
				return err
			}

			if _, err = zipFileEntryWriter.Write(fileContents); err != nil {
				return err
			}
		}
	}

	return nil
}

// ExportAllKeys exports all keys to an io.Writer in zip format.
// newPassphraseRetriever will be used to obtain passphrases to use to encrypt the existing keys.
func (km *KeyStoreManager) ExportAllKeys(dest io.Writer, newPassphraseRetriever passphrase.Retriever) error {
	tempBaseDir, err := ioutil.TempDir("", "notary-key-export-")
	defer os.RemoveAll(tempBaseDir)

	privNonRootKeysSubdir := filepath.Join(privDir, nonRootKeysSubdir)
	privRootKeysSubdir := filepath.Join(privDir, rootKeysSubdir)

	// Create temporary keystores to use as a staging area
	tempNonRootKeysPath := filepath.Join(tempBaseDir, privNonRootKeysSubdir)
	tempNonRootKeyStore, err := trustmanager.NewKeyFileStore(tempNonRootKeysPath, newPassphraseRetriever)
	if err != nil {
		return err
	}

	tempRootKeysPath := filepath.Join(tempBaseDir, privRootKeysSubdir)
	tempRootKeyStore, err := trustmanager.NewKeyFileStore(tempRootKeysPath, newPassphraseRetriever)
	if err != nil {
		return err
	}

	if err := moveKeys(km.rootKeyStore, tempRootKeyStore); err != nil {
		return err
	}
	if err := moveKeys(km.nonRootKeyStore, tempNonRootKeyStore); err != nil {
		return err
	}

	zipWriter := zip.NewWriter(dest)

	if err := addKeysToArchive(zipWriter, tempRootKeyStore, privRootKeysSubdir); err != nil {
		return err
	}
	if err := addKeysToArchive(zipWriter, tempNonRootKeyStore, privNonRootKeysSubdir); err != nil {
		return err
	}

	zipWriter.Close()

	return nil
}

// IsZipSymlink returns true if the file described by the zip file header is a
// symlink.
func IsZipSymlink(f *zip.File) bool {
	return f.CreatorVersion&0xFF00 == zipMadeByUNIX && f.ExternalAttrs == zipSymlinkAttr
}

// ImportKeysZip imports keys from a zip file provided as an zip.Reader. The
// keys in the root_keys directory are left encrypted, but the other keys are
// decrypted with the specified passphrase.
func (km *KeyStoreManager) ImportKeysZip(zipReader zip.Reader) error {
	// Temporarily store the keys in maps, so we can bail early if there's
	// an error (for example, wrong passphrase), without leaving the key
	// store in an inconsistent state
	newRootKeys := make(map[string][]byte)
	newNonRootKeys := make(map[string][]byte)

	// Note that using / as a separator is okay here - the zip package
	// guarantees that the separator will be /
	rootKeysPrefix := privDir + "/" + rootKeysSubdir + "/"
	nonRootKeysPrefix := privDir + "/" + nonRootKeysSubdir + "/"

	// Iterate through the files in the archive. Don't add the keys
	for _, f := range zipReader.File {
		fNameTrimmed := strings.TrimSuffix(f.Name, filepath.Ext(f.Name))

		rc, err := f.Open()
		if err != nil {
			return err
		}

		fileBytes, err := ioutil.ReadAll(rc)
		if err != nil {
			return nil
		}

		// Is this in the root_keys directory?
		// Note that using / as a separator is okay here - the zip
		// package guarantees that the separator will be /
		if strings.HasPrefix(fNameTrimmed, rootKeysPrefix) {
			if IsZipSymlink(f) {
				newName := filepath.Join(km.rootKeyStore.BaseDir(), strings.TrimPrefix(f.Name, rootKeysPrefix))
				err = os.Symlink(string(fileBytes), newName)
				if err != nil {
					return err
				}
			} else {
				if err = checkRootKeyIsEncrypted(fileBytes); err != nil {
					rc.Close()
					return err
				}
				// Root keys are preserved without decrypting
				keyName := strings.TrimPrefix(fNameTrimmed, rootKeysPrefix)
				newRootKeys[keyName] = fileBytes
			}
		} else if strings.HasPrefix(fNameTrimmed, nonRootKeysPrefix) {
			if IsZipSymlink(f) {
				newName := filepath.Join(km.nonRootKeyStore.BaseDir(), strings.TrimPrefix(f.Name, nonRootKeysPrefix))
				err = os.Symlink(string(fileBytes), newName)
				if err != nil {
					return err
				}
			} else {
				// Nonroot keys are preserved without decrypting
				keyName := strings.TrimPrefix(fNameTrimmed, nonRootKeysPrefix)
				newNonRootKeys[keyName] = fileBytes
			}
		} else {
			// This path inside the zip archive doesn't look like a
			// root key, non-root key, or alias. To avoid adding a file
			// to the filestore that we won't be able to use, skip
			// this file in the import.
			logrus.Warnf("skipping import of key with a path that doesn't begin with %s or %s: %s", rootKeysPrefix, nonRootKeysPrefix, f.Name)
			rc.Close()
			continue
		}

		rc.Close()
	}

	for keyName, pemBytes := range newRootKeys {
		if err := km.rootKeyStore.Add(keyName, pemBytes); err != nil {
			return err
		}
	}

	for keyName, pemBytes := range newNonRootKeys {
		if err := km.nonRootKeyStore.Add(keyName, pemBytes); err != nil {
			return err
		}
	}

	return nil
}

func moveKeysByGUN(oldKeyStore, newKeyStore *trustmanager.KeyFileStore, gun string) error {
	// List all files but no symlinks
	for relKeyPath := range oldKeyStore.ListKeys() {
		// Skip keys that aren't associated with this GUN
		if !strings.HasPrefix(relKeyPath, filepath.FromSlash(gun)) {
			continue
		}

		privKey, alias, err := oldKeyStore.GetKey(relKeyPath)
		if err != nil {
			return err
		}

		err = newKeyStore.AddKey(relKeyPath, alias, privKey)
		if err != nil {
			return err
		}
	}

	return nil
}

// ExportKeysByGUN exports all keys associated with a specified GUN to an
// io.Writer in zip format. passphraseRetriever is used to select new passphrases to use to
// encrypt the keys.
func (km *KeyStoreManager) ExportKeysByGUN(dest io.Writer, gun string, passphraseRetriever passphrase.Retriever) error {
	tempBaseDir, err := ioutil.TempDir("", "notary-key-export-")
	defer os.RemoveAll(tempBaseDir)

	privNonRootKeysSubdir := filepath.Join(privDir, nonRootKeysSubdir)

	// Create temporary keystore to use as a staging area
	tempNonRootKeysPath := filepath.Join(tempBaseDir, privNonRootKeysSubdir)
	tempNonRootKeyStore, err := trustmanager.NewKeyFileStore(tempNonRootKeysPath, passphraseRetriever)
	if err != nil {
		return err
	}

	if err := moveKeysByGUN(km.nonRootKeyStore, tempNonRootKeyStore, gun); err != nil {
		return err
	}

	zipWriter := zip.NewWriter(dest)

	if len(tempNonRootKeyStore.ListKeys()) == 0 {
		return ErrNoKeysFoundForGUN
	}

	if err := addKeysToArchive(zipWriter, tempNonRootKeyStore, privNonRootKeysSubdir); err != nil {
		return err
	}

	zipWriter.Close()

	return nil
}
@ -10,29 +10,22 @@ import (
|
|||
"time"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/notary/cryptoservice"
|
||||
"github.com/docker/notary/pkg/passphrase"
|
||||
"github.com/docker/notary/trustmanager"
|
||||
"github.com/endophage/gotuf/data"
|
||||
"github.com/endophage/gotuf/signed"
|
||||
"github.com/docker/notary/tuf/data"
|
||||
"github.com/docker/notary/tuf/signed"
|
||||
)
|
||||
|
||||
// KeyStoreManager is an abstraction around the root and non-root key stores,
|
||||
// and related CA stores
|
||||
type KeyStoreManager struct {
|
||||
rootKeyStore *trustmanager.KeyFileStore
|
||||
nonRootKeyStore *trustmanager.KeyFileStore
|
||||
|
||||
KeyStore *trustmanager.KeyFileStore
|
||||
trustedCAStore trustmanager.X509Store
|
||||
trustedCertificateStore trustmanager.X509Store
|
||||
}
|
||||
|
||||
const (
|
||||
trustDir = "trusted_certificates"
|
||||
privDir = "private"
|
||||
rootKeysSubdir = "root_keys"
|
||||
nonRootKeysSubdir = "tuf_keys"
|
||||
rsaRootKeySize = 4096 // Used for new root keys
|
||||
trustDir = "trusted_certificates"
|
||||
rsaRootKeySize = 4096 // Used for new root keys
|
||||
)
|
||||
|
||||
// ErrValidationFail is returned when there is no valid trusted certificates
|
||||
|
@ -61,20 +54,7 @@ func (err ErrRootRotationFail) Error() string {
|
|||
|
||||
// NewKeyStoreManager returns an initialized KeyStoreManager, or an error
|
||||
// if it fails to create the KeyFileStores or load certificates
|
||||
func NewKeyStoreManager(baseDir string, passphraseRetriever passphrase.Retriever) (*KeyStoreManager, error) {
|
||||
nonRootKeysPath := filepath.Join(baseDir, privDir, nonRootKeysSubdir)
|
||||
nonRootKeyStore, err := trustmanager.NewKeyFileStore(nonRootKeysPath, passphraseRetriever)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
// Load the keystore that will hold all of our encrypted Root Private Keys
|
||||
rootKeysPath := filepath.Join(baseDir, privDir, rootKeysSubdir)
|
||||
rootKeyStore, err := trustmanager.NewKeyFileStore(rootKeysPath, passphraseRetriever)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
|
||||
func NewKeyStoreManager(baseDir string, keyStore *trustmanager.KeyFileStore) (*KeyStoreManager, error) {
|
||||
trustPath := filepath.Join(baseDir, trustDir)
|
||||
|
||||
// Load all CAs that aren't expired and don't use SHA1
|
||||
|
@ -102,25 +82,12 @@ func NewKeyStoreManager(baseDir string, passphraseRetriever passphrase.Retriever
|
|||
}
|
||||
|
||||
return &KeyStoreManager{
|
||||
rootKeyStore: rootKeyStore,
|
||||
nonRootKeyStore: nonRootKeyStore,
|
||||
KeyStore: keyStore,
|
||||
trustedCAStore: trustedCAStore,
|
||||
trustedCertificateStore: trustedCertificateStore,
|
||||
}, nil
|
||||
}
|
||||
|
||||
// RootKeyStore returns the root key store being managed by this
|
||||
// KeyStoreManager
|
||||
func (km *KeyStoreManager) RootKeyStore() *trustmanager.KeyFileStore {
|
||||
return km.rootKeyStore
|
||||
}
|
||||
|
||||
// NonRootKeyStore returns the non-root key store being managed by this
|
||||
// KeyStoreManager
|
||||
func (km *KeyStoreManager) NonRootKeyStore() *trustmanager.KeyFileStore {
|
||||
return km.nonRootKeyStore
|
||||
}
|
||||
|
||||
// TrustedCertificateStore returns the trusted certificate store being managed
|
||||
// by this KeyStoreManager
|
||||
func (km *KeyStoreManager) TrustedCertificateStore() trustmanager.X509Store {
|
||||
|
@ -151,7 +118,7 @@ func (km *KeyStoreManager) GenRootKey(algorithm string) (string, error) {
|
|||
// We don't want external API callers to rely on internal TUF data types, so
|
||||
// the API here should continue to receive a string algorithm, and ensure
|
||||
// that it is downcased
|
||||
switch data.KeyAlgorithm(strings.ToLower(algorithm)) {
|
||||
switch strings.ToLower(algorithm) {
|
||||
case data.RSAKey:
|
||||
privKey, err = trustmanager.GenerateRSAKey(rand.Reader, rsaRootKeySize)
|
||||
case data.ECDSAKey:
|
||||
|
@ -165,24 +132,11 @@ func (km *KeyStoreManager) GenRootKey(algorithm string) (string, error) {
|
|||
}
|
||||
|
||||
// Changing the root
|
||||
km.rootKeyStore.AddKey(privKey.ID(), "root", privKey)
|
||||
km.KeyStore.AddKey(privKey.ID(), "root", privKey)
|
||||
|
||||
return privKey.ID(), nil
|
||||
}
|
||||
|
||||
// GetRootCryptoService retrieves a root key and a cryptoservice to use with it
|
||||
// TODO(mccauley): remove this as its no longer needed once we have key caching in the keystores
|
||||
func (km *KeyStoreManager) GetRootCryptoService(rootKeyID string) (*cryptoservice.UnlockedCryptoService, error) {
|
||||
privKey, _, err := km.rootKeyStore.GetKey(rootKeyID)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("could not get decrypted root key with keyID: %s, %v", rootKeyID, err)
|
||||
}
|
||||
|
||||
cryptoService := cryptoservice.NewCryptoService("", km.rootKeyStore)
|
||||
|
||||
return cryptoservice.NewUnlockedCryptoService(privKey, cryptoService), nil
|
||||
}
|
||||
|
||||
/*
|
||||
ValidateRoot receives a new root, validates its correctness and attempts to
|
||||
do root key rotation if needed.
|
||||
|
|
|
@ -113,20 +113,28 @@ func PromptRetrieverWithInOut(in io.Reader, out io.Writer, aliasMap map[string]s
|
|||
indexOfLastSeparator = 0
|
||||
}
|
||||
|
||||
var shortName string
|
||||
if len(keyName) > indexOfLastSeparator+idBytesToDisplay {
|
||||
if indexOfLastSeparator > 0 {
|
||||
keyNamePrefix := keyName[:indexOfLastSeparator]
|
||||
keyNameID := keyName[indexOfLastSeparator+1 : indexOfLastSeparator+idBytesToDisplay+1]
|
||||
keyName = keyNamePrefix + " (" + keyNameID + ")"
|
||||
shortName = keyNameID + " (" + keyNamePrefix + ")"
|
||||
} else {
|
||||
keyName = keyName[indexOfLastSeparator : indexOfLastSeparator+idBytesToDisplay]
|
||||
shortName = keyName[indexOfLastSeparator : indexOfLastSeparator+idBytesToDisplay]
|
||||
}
|
||||
}
|
||||
|
||||
withID := fmt.Sprintf(" with ID %s", shortName)
|
||||
if shortName == "" {
|
||||
withID = ""
|
||||
}
|
||||
|
||||
if createNew {
|
||||
fmt.Fprintf(out, "Enter passphrase for new %s key with id %s: ", displayAlias, keyName)
|
||||
fmt.Fprintf(out, "Enter passphrase for new %s key%s: ", displayAlias, withID)
|
||||
} else if displayAlias == "yubikey" {
|
||||
fmt.Fprintf(out, "Enter the %s for the attached Yubikey: ", keyName)
|
||||
} else {
|
||||
fmt.Fprintf(out, "Enter key passphrase for %s key with id %s: ", displayAlias, keyName)
|
||||
fmt.Fprintf(out, "Enter passphrase for %s key%s: ", displayAlias, withID)
|
||||
}
|
||||
|
||||
passphrase, err := stdin.ReadBytes('\n')
|
||||
|
@ -154,7 +162,7 @@ func PromptRetrieverWithInOut(in io.Reader, out io.Writer, aliasMap map[string]s
|
|||
return "", false, ErrTooShort
|
||||
}
|
||||
|
||||
fmt.Fprintf(out, "Repeat passphrase for new %s key with id %s: ", displayAlias, keyName)
|
||||
fmt.Fprintf(out, "Repeat passphrase for new %s key%s: ", displayAlias, withID)
|
||||
confirmation, err := stdin.ReadBytes('\n')
|
||||
fmt.Fprintln(out)
|
||||
if err != nil {
|
||||
|
@ -179,3 +187,11 @@ func PromptRetrieverWithInOut(in io.Reader, out io.Writer, aliasMap map[string]s
|
|||
return retPass, false, nil
|
||||
}
|
||||
}
|
||||
|
||||
// ConstantRetriever returns a new Retriever which will return a constant string
|
||||
// as a passphrase.
|
||||
func ConstantRetriever(constantPassphrase string) Retriever {
|
||||
return func(k, a string, c bool, n int) (string, bool, error) {
|
||||
return constantPassphrase, false, nil
|
||||
}
|
||||
}
|
|
@@ -3,6 +3,7 @@ package trustmanager
import (
"errors"
"fmt"
"github.com/docker/notary"
"io/ioutil"
"os"
"path/filepath"

@@ -11,8 +12,8 @@ import (
)

const (
visible os.FileMode = 0755
private os.FileMode = 0700
visible = notary.PubCertPerms
private = notary.PrivKeyPerms
)

var (

@@ -21,13 +22,12 @@ var (
ErrPathOutsideStore = errors.New("path outside file store")
)

// LimitedFileStore implements the bare bones primitives (no symlinks or
// hierarchy)
// LimitedFileStore implements the bare bones primitives (no hierarchy)
type LimitedFileStore interface {
Add(fileName string, data []byte) error
Remove(fileName string) error
Get(fileName string) ([]byte, error)
ListFiles(symlinks bool) []string
ListFiles() []string
}

// FileStore is the interface for full-featured FileStores

@@ -36,8 +36,7 @@ type FileStore interface {

RemoveDir(directoryName string) error
GetPath(fileName string) (string, error)
ListDir(directoryName string, symlinks bool) []string
Link(src, dst string) error
ListDir(directoryName string) []string
BaseDir() string
}

@@ -140,18 +139,18 @@ func (f *SimpleFileStore) GetPath(name string) (string, error) {
}

// ListFiles lists all the files inside of a store
func (f *SimpleFileStore) ListFiles(symlinks bool) []string {
return f.list(f.baseDir, symlinks)
func (f *SimpleFileStore) ListFiles() []string {
return f.list(f.baseDir)
}

// ListDir lists all the files inside of a directory identified by a name
func (f *SimpleFileStore) ListDir(name string, symlinks bool) []string {
func (f *SimpleFileStore) ListDir(name string) []string {
fullPath := filepath.Join(f.baseDir, name)
return f.list(fullPath, symlinks)
return f.list(fullPath)
}

// list lists all the files in a directory given a full path. Ignores symlinks.
func (f *SimpleFileStore) list(path string, symlinks bool) []string {
func (f *SimpleFileStore) list(path string) []string {
files := make([]string, 0, 0)
filepath.Walk(path, func(fp string, fi os.FileInfo, err error) error {
// If there are errors, ignore this particular file

@@ -163,8 +162,8 @@ func (f *SimpleFileStore) list(path string, symlinks bool) []string {
return nil
}

// If this is a symlink, and symlinks is true, ignore it
if !symlinks && fi.Mode()&os.ModeSymlink == os.ModeSymlink {
// If this is a symlink, ignore it
if fi.Mode()&os.ModeSymlink == os.ModeSymlink {
return nil
}

@@ -189,19 +188,6 @@ func (f *SimpleFileStore) genFileName(name string) string {
return fmt.Sprintf("%s.%s", name, f.fileExt)
}

// Link creates a symlink between the ID of the certificate used by a repository
// and the ID of the root key that is being used.
// We use full path for the source and local for the destination to use relative
// path for the symlink
func (f *SimpleFileStore) Link(oldname, newname string) error {
newnamePath, err := f.GetPath(newname)
if err != nil {
return err
}

return os.Symlink(f.genFileName(oldname), newnamePath)
}

// BaseDir returns the base directory of the filestore
func (f *SimpleFileStore) BaseDir() string {
return f.baseDir

@@ -282,7 +268,7 @@ func (f *MemoryFileStore) Get(name string) ([]byte, error) {
}

// ListFiles lists all the files inside of a store
func (f *MemoryFileStore) ListFiles(symlinks bool) []string {
func (f *MemoryFileStore) ListFiles() []string {
var list []string

for name := range f.files {
@@ -1,12 +1,19 @@
package trustmanager

import (
"fmt"
"path/filepath"
"strings"
"sync"

"github.com/docker/notary/pkg/passphrase"
"github.com/endophage/gotuf/data"
"github.com/docker/notary/passphrase"
"github.com/docker/notary/tuf/data"
)

const (
rootKeysSubdir = "root_keys"
nonRootKeysSubdir = "tuf_keys"
privDir = "private"
)

// KeyFileStore persists and manages private keys on disk

@@ -28,6 +35,7 @@ type KeyMemoryStore struct {
// NewKeyFileStore returns a new KeyFileStore creating a private directory to
// hold the keys.
func NewKeyFileStore(baseDir string, passphraseRetriever passphrase.Retriever) (*KeyFileStore, error) {
baseDir = filepath.Join(baseDir, privDir)
fileStore, err := NewPrivateSimpleFileStore(baseDir, keyExtension)
if err != nil {
return nil, err

@@ -39,6 +47,12 @@ func NewKeyFileStore(baseDir string, passphraseRetriever passphrase.Retriever) (
cachedKeys: cachedKeys}, nil
}

// Name returns a user friendly name for the location this store
// keeps its data
func (s *KeyFileStore) Name() string {
return fmt.Sprintf("file (%s)", s.SimpleFileStore.BaseDir())
}

// AddKey stores the contents of a PEM-encoded private key as a PEM block
func (s *KeyFileStore) AddKey(name, alias string, privKey data.PrivateKey) error {
s.Lock()

@@ -54,8 +68,6 @@ func (s *KeyFileStore) GetKey(name string) (data.PrivateKey, string, error) {
}

// ListKeys returns a list of unique PublicKeys present on the KeyFileStore.
// There might be symlinks associating Certificate IDs to Public Keys, so this
// method only returns the IDs that aren't symlinks
func (s *KeyFileStore) ListKeys() map[string]string {
return listKeys(s)
}

@@ -67,6 +79,22 @@ func (s *KeyFileStore) RemoveKey(name string) error {
return removeKey(s, s.cachedKeys, name)
}

// ExportKey exports the encrypted bytes from the keystore and writes it to
// dest.
func (s *KeyFileStore) ExportKey(name string) ([]byte, error) {
keyBytes, _, err := getRawKey(s, name)
if err != nil {
return nil, err
}
return keyBytes, nil
}

// ImportKey imports the private key in the encrypted bytes into the keystore
// with the given key ID and alias.
func (s *KeyFileStore) ImportKey(pemBytes []byte, alias string) error {
return importKey(s, s.Retriever, s.cachedKeys, alias, pemBytes)
}

// NewKeyMemoryStore returns a new KeyMemoryStore which holds keys in memory
func NewKeyMemoryStore(passphraseRetriever passphrase.Retriever) *KeyMemoryStore {
memStore := NewMemoryFileStore()

@@ -77,6 +105,12 @@ func NewKeyMemoryStore(passphraseRetriever passphrase.Retriever) *KeyMemoryStore
cachedKeys: cachedKeys}
}

// Name returns a user friendly name for the location this store
// keeps its data
func (s *KeyMemoryStore) Name() string {
return "memory"
}

// AddKey stores the contents of a PEM-encoded private key as a PEM block
func (s *KeyMemoryStore) AddKey(name, alias string, privKey data.PrivateKey) error {
s.Lock()

@@ -92,8 +126,6 @@ func (s *KeyMemoryStore) GetKey(name string) (data.PrivateKey, string, error) {
}

// ListKeys returns a list of unique PublicKeys present on the KeyFileStore.
// There might be symlinks associating Certificate IDs to Public Keys, so this
// method only returns the IDs that aren't symlinks
func (s *KeyMemoryStore) ListKeys() map[string]string {
return listKeys(s)
}

@@ -105,19 +137,33 @@ func (s *KeyMemoryStore) RemoveKey(name string) error {
return removeKey(s, s.cachedKeys, name)
}

func addKey(s LimitedFileStore, passphraseRetriever passphrase.Retriever, cachedKeys map[string]*cachedKey, name, alias string, privKey data.PrivateKey) error {
pemPrivKey, err := KeyToPEM(privKey)
// ExportKey exports the encrypted bytes from the keystore and writes it to
// dest.
func (s *KeyMemoryStore) ExportKey(name string) ([]byte, error) {
keyBytes, _, err := getRawKey(s, name)
if err != nil {
return err
return nil, err
}
return keyBytes, nil
}

attempts := 0
chosenPassphrase := ""
giveup := false
for {
// ImportKey imports the private key in the encrypted bytes into the keystore
// with the given key ID and alias.
func (s *KeyMemoryStore) ImportKey(pemBytes []byte, alias string) error {
return importKey(s, s.Retriever, s.cachedKeys, alias, pemBytes)
}

func addKey(s LimitedFileStore, passphraseRetriever passphrase.Retriever, cachedKeys map[string]*cachedKey, name, alias string, privKey data.PrivateKey) error {

var (
chosenPassphrase string
giveup bool
err error
)

for attempts := 0; ; attempts++ {
chosenPassphrase, giveup, err = passphraseRetriever(name, alias, true, attempts)
if err != nil {
attempts++
continue
}
if giveup {

@@ -129,19 +175,12 @@ func addKey(s LimitedFileStore, passphraseRetriever passphrase.Retriever, cached
break
}

if chosenPassphrase != "" {
pemPrivKey, err = EncryptPrivateKey(privKey, chosenPassphrase)
if err != nil {
return err
}
}

cachedKeys[name] = &cachedKey{alias: alias, key: privKey}
return s.Add(name+"_"+alias, pemPrivKey)
return encryptAndAddKey(s, chosenPassphrase, cachedKeys, name, alias, privKey)
}

func getKeyAlias(s LimitedFileStore, keyID string) (string, error) {
files := s.ListFiles(true)
files := s.ListFiles()

name := strings.TrimSpace(strings.TrimSuffix(filepath.Base(keyID), filepath.Ext(keyID)))

for _, file := range files {

@@ -163,12 +202,8 @@ func getKey(s LimitedFileStore, passphraseRetriever passphrase.Retriever, cached
if ok {
return cachedKeyEntry.key, cachedKeyEntry.alias, nil
}
keyAlias, err := getKeyAlias(s, name)
if err != nil {
return nil, "", err
}

keyBytes, err := s.Get(name + "_" + keyAlias)
keyBytes, keyAlias, err := getRawKey(s, name)
if err != nil {
return nil, "", err
}

@@ -177,27 +212,7 @@ func getKey(s LimitedFileStore, passphraseRetriever passphrase.Retriever, cached
// See if the key is encrypted. If it's encrypted we'll fail to parse the private key
privKey, err := ParsePEMPrivateKey(keyBytes, "")
if err != nil {
// We need to decrypt the key, let's get a passphrase
for attempts := 0; ; attempts++ {
passphrase, giveup, err := passphraseRetriever(name, string(keyAlias), false, attempts)
// Check if the passphrase retriever got an error or if it is telling us to give up
if giveup || err != nil {
return nil, "", ErrPasswordInvalid{}
}
if attempts > 10 {
return nil, "", ErrAttemptsExceeded{}
}

// Try to convert PEM encoded bytes back to a PrivateKey using the passphrase
privKey, err = ParsePEMPrivateKey(keyBytes, passphrase)
if err != nil {
retErr = ErrPasswordInvalid{}
} else {
// We managed to parse the PrivateKey. We've succeeded!
retErr = nil
break
}
}
privKey, _, retErr = GetPasswdDecryptBytes(passphraseRetriever, keyBytes, name, string(keyAlias))
}
if retErr != nil {
return nil, "", retErr

@@ -208,15 +223,32 @@ func getKey(s LimitedFileStore, passphraseRetriever passphrase.Retriever, cached

// ListKeys returns a map of unique PublicKeys present on the KeyFileStore and
// their corresponding aliases.
// There might be symlinks associating Certificate IDs to Public Keys, so this
// method only returns the IDs that aren't symlinks
func listKeys(s LimitedFileStore) map[string]string {
keyIDMap := make(map[string]string)

for _, f := range s.ListFiles(false) {
for _, f := range s.ListFiles() {
// Remove the prefix of the directory from the filename
if f[:len(rootKeysSubdir)] == rootKeysSubdir {
f = strings.TrimPrefix(f, rootKeysSubdir+"/")
} else {
f = strings.TrimPrefix(f, nonRootKeysSubdir+"/")
}

// Remove the extension from the full filename
// abcde_root.key becomes abcde_root
keyIDFull := strings.TrimSpace(strings.TrimSuffix(f, filepath.Ext(f)))
keyID := keyIDFull[:strings.LastIndex(keyIDFull, "_")]
keyAlias := keyIDFull[strings.LastIndex(keyIDFull, "_")+1:]

// If the key does not have a _, it is malformed
underscoreIndex := strings.LastIndex(keyIDFull, "_")
if underscoreIndex == -1 {
continue
}

// The keyID is the first part of the keyname
// The KeyAlias is the second part of the keyname
// in a key named abcde_root, abcde is the keyID and root is the KeyAlias
keyID := keyIDFull[:underscoreIndex]
keyAlias := keyIDFull[underscoreIndex+1:]
keyIDMap[keyID] = keyAlias
}
return keyIDMap

@@ -231,5 +263,113 @@ func removeKey(s LimitedFileStore, cachedKeys map[string]*cachedKey, name string

delete(cachedKeys, name)

return s.Remove(name + "_" + keyAlias)
// being in a subdirectory is for backwards compatibility
filename := name + "_" + keyAlias
err = s.Remove(filepath.Join(getSubdir(keyAlias), filename))
if err != nil {
return err
}
return nil
}

// Assumes 2 subdirectories, 1 containing root keys and 1 containing tuf keys
func getSubdir(alias string) string {
if alias == "root" {
return rootKeysSubdir
}
return nonRootKeysSubdir
}

// Given a key ID, gets the bytes and alias belonging to that key if the key
// exists
func getRawKey(s LimitedFileStore, name string) ([]byte, string, error) {
keyAlias, err := getKeyAlias(s, name)
if err != nil {
return nil, "", err
}

filename := name + "_" + keyAlias
var keyBytes []byte
keyBytes, err = s.Get(filepath.Join(getSubdir(keyAlias), filename))
if err != nil {
return nil, "", err
}
return keyBytes, keyAlias, nil
}

// GetPasswdDecryptBytes gets the password to decrypt the given pem bytes.
// Returns the password and private key
func GetPasswdDecryptBytes(passphraseRetriever passphrase.Retriever, pemBytes []byte, name, alias string) (data.PrivateKey, string, error) {
var (
passwd string
retErr error
privKey data.PrivateKey
)
for attempts := 0; ; attempts++ {
var (
giveup bool
err error
)
passwd, giveup, err = passphraseRetriever(name, alias, false, attempts)
// Check if the passphrase retriever got an error or if it is telling us to give up
if giveup || err != nil {
return nil, "", ErrPasswordInvalid{}
}
if attempts > 10 {
return nil, "", ErrAttemptsExceeded{}
}

// Try to convert PEM encoded bytes back to a PrivateKey using the passphrase
privKey, err = ParsePEMPrivateKey(pemBytes, passwd)
if err != nil {
retErr = ErrPasswordInvalid{}
} else {
// We managed to parse the PrivateKey. We've succeeded!
retErr = nil
break
}
}
if retErr != nil {
return nil, "", retErr
}
return privKey, passwd, nil
}

func encryptAndAddKey(s LimitedFileStore, passwd string, cachedKeys map[string]*cachedKey, name, alias string, privKey data.PrivateKey) error {

var (
pemPrivKey []byte
err error
)

if passwd != "" {
pemPrivKey, err = EncryptPrivateKey(privKey, passwd)
} else {
pemPrivKey, err = KeyToPEM(privKey)
}

if err != nil {
return err
}

cachedKeys[name] = &cachedKey{alias: alias, key: privKey}
return s.Add(filepath.Join(getSubdir(alias), name+"_"+alias), pemPrivKey)
}

func importKey(s LimitedFileStore, passphraseRetriever passphrase.Retriever, cachedKeys map[string]*cachedKey, alias string, pemBytes []byte) error {

if alias != data.CanonicalRootRole {
return s.Add(alias, pemBytes)
}

privKey, passphrase, err := GetPasswdDecryptBytes(
passphraseRetriever, pemBytes, "", "imported "+alias)

if err != nil {
return err
}

var name string
name = privKey.ID()
return encryptAndAddKey(s, passphrase, cachedKeys, name, alias, privKey)
}
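The reworked `listKeys` above replaces the unguarded `strings.LastIndex` slicing with an explicit malformed-name check. The parsing can be sketched as a standalone helper (the helper name is hypothetical; the diff inlines this logic in the loop):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// splitKeyFilename mirrors the parsing listKeys now performs: a stored key
// file is named "<keyID>_<alias>" plus an extension (e.g. "abcde_root.key");
// names without an underscore are considered malformed and skipped.
func splitKeyFilename(f string) (keyID, alias string, ok bool) {
	// Remove the extension: "abcde_root.key" becomes "abcde_root"
	keyIDFull := strings.TrimSpace(strings.TrimSuffix(f, filepath.Ext(f)))
	i := strings.LastIndex(keyIDFull, "_")
	if i == -1 {
		return "", "", false // malformed: no underscore separator
	}
	// keyID is everything before the last underscore, alias everything after
	return keyIDFull[:i], keyIDFull[i+1:], true
}

func main() {
	id, alias, ok := splitKeyFilename("abcde_root.key")
	fmt.Println(id, alias, ok)
	_, _, ok = splitKeyFilename("malformed.key")
	fmt.Println(ok)
}
```

Guarding the index first avoids the panic the old code would hit on a filename with no underscore, since slicing with `LastIndex` returning `-1` is out of range.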
@@ -3,7 +3,7 @@ package trustmanager
import (
"fmt"

"github.com/endophage/gotuf/data"
"github.com/docker/notary/tuf/data"
)

// ErrAttemptsExceeded is returned when too many attempts have been made to decrypt a key

@@ -40,10 +40,15 @@ const (

// KeyStore is a generic interface for private key storage
type KeyStore interface {
// AddKey adds a key to the KeyStore, and if the key already exists,
// succeeds. Otherwise, returns an error if it cannot add.
AddKey(name, alias string, privKey data.PrivateKey) error
GetKey(name string) (data.PrivateKey, string, error)
ListKeys() map[string]string
RemoveKey(name string) error
ExportKey(name string) ([]byte, error)
ImportKey(pemBytes []byte, alias string) error
Name() string
}

type cachedKey struct {
@@ -260,6 +260,12 @@ func (s *X509FileStore) GetVerifyOptions(dnsName string) (x509.VerifyOptions, er
return opts, nil
}

// Empty returns true if there are no certificates in the X509FileStore, false
// otherwise.
func (s *X509FileStore) Empty() bool {
return len(s.fingerprintMap) == 0
}

func fileName(cert *x509.Certificate) (string, CertID, error) {
certID, err := fingerprintCert(cert)
if err != nil {
@@ -20,7 +20,7 @@ import (

"github.com/Sirupsen/logrus"
"github.com/agl/ed25519"
"github.com/endophage/gotuf/data"
"github.com/docker/notary/tuf/data"
)

// GetCertFromURL tries to get a X509 certificate given a HTTPS URL

@@ -102,25 +102,22 @@ func fingerprintCert(cert *x509.Certificate) (CertID, error) {
block := pem.Block{Type: "CERTIFICATE", Bytes: cert.Raw}
pemdata := pem.EncodeToMemory(&block)

var keyType data.KeyAlgorithm
var tufKey data.PublicKey
switch cert.PublicKeyAlgorithm {
case x509.RSA:
keyType = data.RSAx509Key
tufKey = data.NewRSAx509PublicKey(pemdata)
case x509.ECDSA:
keyType = data.ECDSAx509Key
tufKey = data.NewECDSAx509PublicKey(pemdata)
default:
return "", fmt.Errorf("got Unknown key type while fingerprinting certificate")
}

// Create new TUF Key so we can compute the TUF-compliant CertID
tufKey := data.NewPublicKey(keyType, pemdata)

return CertID(tufKey.ID()), nil
}

// loadCertsFromDir receives a store AddCertFromFile for each certificate found
func loadCertsFromDir(s *X509FileStore) error {
certFiles := s.fileStore.ListFiles(true)
certFiles := s.fileStore.ListFiles()
for _, f := range certFiles {
// ListFiles returns relative paths
fullPath := filepath.Join(s.fileStore.BaseDir(), f)

@@ -327,7 +324,8 @@ func RSAToPrivateKey(rsaPrivKey *rsa.PrivateKey) (data.PrivateKey, error) {
// Get a DER-encoded representation of the PrivateKey
rsaPrivBytes := x509.MarshalPKCS1PrivateKey(rsaPrivKey)

return data.NewPrivateKey(data.RSAKey, rsaPubBytes, rsaPrivBytes), nil
pubKey := data.NewRSAPublicKey(rsaPubBytes)
return data.NewRSAPrivateKey(pubKey, rsaPrivBytes)
}

// GenerateECDSAKey generates an ECDSA Private key and returns a TUF PrivateKey

@@ -370,7 +368,7 @@ func GenerateED25519Key(random io.Reader) (data.PrivateKey, error) {
return tufPrivKey, nil
}

// ECDSAToPrivateKey converts an rsa.Private key to a TUF data.PrivateKey type
// ECDSAToPrivateKey converts an ecdsa.Private key to a TUF data.PrivateKey type
func ECDSAToPrivateKey(ecdsaPrivKey *ecdsa.PrivateKey) (data.PrivateKey, error) {
// Get a DER-encoded representation of the PublicKey
ecdsaPubBytes, err := x509.MarshalPKIXPublicKey(&ecdsaPrivKey.PublicKey)

@@ -384,7 +382,8 @@ func ECDSAToPrivateKey(ecdsaPrivKey *ecdsa.PrivateKey) (data.PrivateKey, error)
return nil, fmt.Errorf("failed to marshal private key: %v", err)
}

return data.NewPrivateKey(data.ECDSAKey, ecdsaPubBytes, ecdsaPrivKeyBytes), nil
pubKey := data.NewECDSAPublicKey(ecdsaPubBytes)
return data.NewECDSAPrivateKey(pubKey, ecdsaPrivKeyBytes)
}

// ED25519ToPrivateKey converts a serialized ED25519 key to a TUF

@@ -394,36 +393,37 @@ func ED25519ToPrivateKey(privKeyBytes []byte) (data.PrivateKey, error) {
return nil, errors.New("malformed ed25519 private key")
}

return data.NewPrivateKey(data.ED25519Key, privKeyBytes[:ed25519.PublicKeySize], privKeyBytes), nil
pubKey := data.NewED25519PublicKey(privKeyBytes[:ed25519.PublicKeySize])
return data.NewED25519PrivateKey(*pubKey, privKeyBytes)
}

func blockType(algorithm data.KeyAlgorithm) (string, error) {
switch algorithm {
case data.RSAKey:
func blockType(k data.PrivateKey) (string, error) {
switch k.Algorithm() {
case data.RSAKey, data.RSAx509Key:
return "RSA PRIVATE KEY", nil
case data.ECDSAKey:
case data.ECDSAKey, data.ECDSAx509Key:
return "EC PRIVATE KEY", nil
case data.ED25519Key:
return "ED25519 PRIVATE KEY", nil
default:
return "", fmt.Errorf("algorithm %s not supported", algorithm)
return "", fmt.Errorf("algorithm %s not supported", k.Algorithm())
}
}

// KeyToPEM returns a PEM encoded key from a Private Key
func KeyToPEM(privKey data.PrivateKey) ([]byte, error) {
blockType, err := blockType(privKey.Algorithm())
bt, err := blockType(privKey)
if err != nil {
return nil, err
}

return pem.EncodeToMemory(&pem.Block{Type: blockType, Bytes: privKey.Private()}), nil
return pem.EncodeToMemory(&pem.Block{Type: bt, Bytes: privKey.Private()}), nil
}

// EncryptPrivateKey returns an encrypted PEM key given a Privatekey
// and a passphrase
func EncryptPrivateKey(key data.PrivateKey, passphrase string) ([]byte, error) {
blockType, err := blockType(key.Algorithm())
bt, err := blockType(key)
if err != nil {
return nil, err
}

@@ -432,7 +432,7 @@ func EncryptPrivateKey(key data.PrivateKey, passphrase string) ([]byte, error) {
cipherType := x509.PEMCipherAES256

encryptedPEMBlock, err := x509.EncryptPEMBlock(rand.Reader,
blockType,
bt,
key.Private(),
password,
cipherType)

@@ -443,29 +443,31 @@ func EncryptPrivateKey(key data.PrivateKey, passphrase string) ([]byte, error) {
return pem.EncodeToMemory(encryptedPEMBlock), nil
}

// CertToKey transforms a single input certificate into its corresponding
// PublicKey
func CertToKey(cert *x509.Certificate) data.PublicKey {
block := pem.Block{Type: "CERTIFICATE", Bytes: cert.Raw}
pemdata := pem.EncodeToMemory(&block)

switch cert.PublicKeyAlgorithm {
case x509.RSA:
return data.NewRSAx509PublicKey(pemdata)
case x509.ECDSA:
return data.NewECDSAx509PublicKey(pemdata)
default:
logrus.Debugf("Unknown key type parsed from certificate: %v", cert.PublicKeyAlgorithm)
return nil
}
}

// CertsToKeys transforms each of the input certificates into its corresponding
// PublicKey
func CertsToKeys(certs []*x509.Certificate) map[string]data.PublicKey {
keys := make(map[string]data.PublicKey)
for _, cert := range certs {
block := pem.Block{Type: "CERTIFICATE", Bytes: cert.Raw}
pemdata := pem.EncodeToMemory(&block)

var keyType data.KeyAlgorithm
switch cert.PublicKeyAlgorithm {
case x509.RSA:
keyType = data.RSAx509Key
case x509.ECDSA:
keyType = data.ECDSAx509Key
default:
logrus.Debugf("unknown certificate type found, ignoring")
}

// Create the appropriate PublicKey
newKey := data.NewPublicKey(keyType, pemdata)
newKey := CertToKey(cert)
keys[newKey.ID()] = newKey
}

return keys
}

@@ -495,3 +497,26 @@ func NewCertificate(gun string) (*x509.Certificate, error) {
BasicConstraintsValid: true,
}, nil
}

// X509PublicKeyID returns a public key ID as a string, given a
// data.PublicKey that contains an X509 Certificate
func X509PublicKeyID(certPubKey data.PublicKey) (string, error) {
cert, err := LoadCertFromPEM(certPubKey.Public())
if err != nil {
return "", err
}
pubKeyBytes, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
if err != nil {
return "", err
}

var key data.PublicKey
switch certPubKey.Algorithm() {
case data.ECDSAx509Key:
key = data.NewECDSAPublicKey(pubKeyBytes)
case data.RSAx509Key:
key = data.NewRSAPublicKey(pubKeyBytes)
}

return key.ID(), nil
}
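The `blockType` change above switches the helper from taking a bare algorithm to taking the whole private key, and maps the x509-wrapped algorithm variants to the same PEM block types as their plain counterparts. A self-contained sketch of that mapping, using plain strings in place of notary's `tuf/data` constants (the string values `"rsa"`, `"ecdsa-x509"`, etc. are assumptions standing in for `data.RSAKey`, `data.ECDSAx509Key`, and friends):

```go
package main

import "fmt"

// pemBlockType sketches the updated blockType helper: x509-wrapped algorithm
// variants now yield the same PEM block type as the plain algorithms, and
// anything else is rejected with an error.
func pemBlockType(algorithm string) (string, error) {
	switch algorithm {
	case "rsa", "rsa-x509":
		return "RSA PRIVATE KEY", nil
	case "ecdsa", "ecdsa-x509":
		return "EC PRIVATE KEY", nil
	case "ed25519":
		return "ED25519 PRIVATE KEY", nil
	default:
		return "", fmt.Errorf("algorithm %s not supported", algorithm)
	}
}

func main() {
	for _, alg := range []string{"ecdsa", "ecdsa-x509", "dsa"} {
		bt, err := pemBlockType(alg)
		fmt.Println(alg, bt, err)
	}
}
```

Accepting the x509 variants matters because keys recovered from certificates (via `CertToKey`) carry the `*-x509` algorithm names, and the old helper would have refused to PEM-encode or encrypt them.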
9
vendor/src/github.com/docker/notary/trustmanager/yubikey/non_pkcs11.go
vendored
Normal file

@@ -0,0 +1,9 @@
// go list ./... and go test ./... will not pick up this package without this
// file, because go ? ./... does not honor build tags.

// e.g. "go list -tags pkcs11 ./..." will not list this package if all the
// files in it have a build tag.

// See https://github.com/golang/go/issues/11246

package yubikey
9
vendor/src/github.com/docker/notary/trustmanager/yubikey/pkcs11_darwin.go
vendored
Normal file

@@ -0,0 +1,9 @@
// +build pkcs11,darwin

package yubikey

var possiblePkcs11Libs = []string{
"/usr/local/lib/libykcs11.dylib",
"/usr/local/docker/lib/libykcs11.dylib",
"/usr/local/docker-experimental/lib/libykcs11.dylib",
}
40
vendor/src/github.com/docker/notary/trustmanager/yubikey/pkcs11_interface.go
vendored
Normal file

@@ -0,0 +1,40 @@
// +build pkcs11

// an interface around the pkcs11 library, so that things can be mocked out
// for testing

package yubikey

import "github.com/miekg/pkcs11"

// IPKCS11 is an interface for wrapping github.com/miekg/pkcs11
type pkcs11LibLoader func(module string) IPKCS11Ctx

func defaultLoader(module string) IPKCS11Ctx {
return pkcs11.New(module)
}

// IPKCS11Ctx is an interface for wrapping the parts of
// github.com/miekg/pkcs11.Ctx that yubikeystore requires
type IPKCS11Ctx interface {
Destroy()
Initialize() error
Finalize() error
GetSlotList(tokenPresent bool) ([]uint, error)
OpenSession(slotID uint, flags uint) (pkcs11.SessionHandle, error)
CloseSession(sh pkcs11.SessionHandle) error
Login(sh pkcs11.SessionHandle, userType uint, pin string) error
Logout(sh pkcs11.SessionHandle) error
CreateObject(sh pkcs11.SessionHandle, temp []*pkcs11.Attribute) (
pkcs11.ObjectHandle, error)
DestroyObject(sh pkcs11.SessionHandle, oh pkcs11.ObjectHandle) error
GetAttributeValue(sh pkcs11.SessionHandle, o pkcs11.ObjectHandle,
a []*pkcs11.Attribute) ([]*pkcs11.Attribute, error)
FindObjectsInit(sh pkcs11.SessionHandle, temp []*pkcs11.Attribute) error
FindObjects(sh pkcs11.SessionHandle, max int) (
[]pkcs11.ObjectHandle, bool, error)
FindObjectsFinal(sh pkcs11.SessionHandle) error
SignInit(sh pkcs11.SessionHandle, m []*pkcs11.Mechanism,
o pkcs11.ObjectHandle) error
Sign(sh pkcs11.SessionHandle, message []byte) ([]byte, error)
}
9
vendor/src/github.com/docker/notary/trustmanager/yubikey/pkcs11_linux.go
vendored
Normal file

@@ -0,0 +1,9 @@
// +build pkcs11,linux

package yubikey

var possiblePkcs11Libs = []string{
"/usr/lib/libykcs11.so",
"/usr/lib/x86_64-linux-gnu/libykcs11.so",
"/usr/local/lib/libykcs11.so",
}
884
vendor/src/github.com/docker/notary/trustmanager/yubikey/yubikeystore.go
vendored
Normal file
884
vendor/src/github.com/docker/notary/trustmanager/yubikey/yubikeystore.go
vendored
Normal file
|
@ -0,0 +1,884 @@
|
|||
// +build pkcs11
|
||||
|
||||
package yubikey
|
||||
|
||||
import (
|
||||
"crypto"
|
||||
"crypto/ecdsa"
|
||||
"crypto/elliptic"
|
||||
"crypto/rand"
|
||||
"crypto/sha256"
|
||||
"crypto/x509"
|
||||
"errors"
|
||||
"fmt"
|
||||
"io"
|
||||
"math/big"
|
||||
"os"
|
||||
|
||||
"github.com/Sirupsen/logrus"
|
||||
"github.com/docker/notary/passphrase"
|
||||
"github.com/docker/notary/trustmanager"
|
||||
"github.com/docker/notary/tuf/data"
|
||||
"github.com/docker/notary/tuf/signed"
|
||||
"github.com/miekg/pkcs11"
|
||||
)
|
||||
|
||||
const (
|
||||
USER_PIN = "123456"
|
||||
SO_USER_PIN = "010203040506070801020304050607080102030405060708"
|
||||
numSlots = 4 // number of slots in the yubikey
|
||||
|
||||
KeymodeNone = 0
|
||||
KeymodeTouch = 1 // touch enabled
|
||||
KeymodePinOnce = 2 // require pin entry once
|
||||
KeymodePinAlways = 4 // require pin entry all the time
|
||||
|
||||
// the key size, when importing a key into yubikey, MUST be 32 bytes
|
||||
ecdsaPrivateKeySize = 32
|
||||
|
||||
sigAttempts = 5
|
||||
)
// what key mode to use when generating keys
var (
	yubikeyKeymode = KeymodeTouch | KeymodePinOnce
	// order in which to prefer token locations on the yubikey.
	// corresponds to: 9c, 9e, 9d, 9a
	slotIDs = []int{2, 1, 3, 0}
)

// SetYubikeyKeyMode - sets the mode when generating yubikey keys.
// This is to be used for testing. It does nothing if not building with tag
// pkcs11.
func SetYubikeyKeyMode(keyMode int) error {
	// technically 7 (1 | 2 | 4) is valid, but KeymodePinOnce +
	// KeymodePinAlways don't really make sense together
	if keyMode < 0 || keyMode > 5 {
		return errors.New("Invalid key mode")
	}
	yubikeyKeymode = keyMode
	return nil
}
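The key-mode flags above form a bitmask, and the range check in `SetYubikeyKeyMode` implicitly rejects combinations that mix `KeymodePinOnce` with `KeymodePinAlways`. A minimal standalone sketch (with the constants mirrored locally for illustration; only `validKeyMode` is a hypothetical helper, the flag values are taken from the file above):

```go
package main

import "fmt"

// Mirrors of the yubikey package's key-mode flags.
const (
	KeymodeNone      = 0
	KeymodeTouch     = 1 // touch enabled
	KeymodePinOnce   = 2 // require pin entry once
	KeymodePinAlways = 4 // require pin entry all the time
)

// validKeyMode reproduces the range check in SetYubikeyKeyMode: values 0
// through 5 are accepted, which rules out any bitmask containing both
// KeymodePinOnce and KeymodePinAlways (2|4 = 6, 1|2|4 = 7).
func validKeyMode(keyMode int) bool {
	return keyMode >= 0 && keyMode <= 5
}

func main() {
	fmt.Println(validKeyMode(KeymodeTouch | KeymodePinOnce))     // 3: accepted
	fmt.Println(validKeyMode(KeymodePinOnce | KeymodePinAlways)) // 6: rejected
}
```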
// SetTouchToSignUI - allows configurable UX for notifying a user that they
// need to touch the yubikey to sign. The callback may be used to provide a
// mechanism for updating a GUI (such as removing a modal) after the touch
// has been made
func SetTouchToSignUI(notifier func(), callback func()) {
	touchToSignUI = notifier
	if callback != nil {
		touchDoneCallback = callback
	}
}

var touchToSignUI = func() {
	fmt.Println("Please touch the attached Yubikey to perform signing.")
}

var touchDoneCallback = func() {
	// noop
}

var pkcs11Lib string

func init() {
	for _, loc := range possiblePkcs11Libs {
		_, err := os.Stat(loc)
		if err == nil {
			p := pkcs11.New(loc)
			if p != nil {
				pkcs11Lib = loc
				return
			}
		}
	}
}
// ErrBackupFailed is returned when a YubiKeyStore fails to write a generated
// key to its backup store
type ErrBackupFailed struct {
	err string
}

func (err ErrBackupFailed) Error() string {
	return fmt.Sprintf("Failed to backup private key to: %s", err.err)
}

type yubiSlot struct {
	role   string
	slotID []byte
}

// YubiPrivateKey represents a private key inside of a yubikey
type YubiPrivateKey struct {
	data.ECDSAPublicKey
	passRetriever passphrase.Retriever
	slot          []byte
	libLoader     pkcs11LibLoader
}

// YubikeySigner wraps a YubiPrivateKey so that it can be used as a crypto.Signer
type YubikeySigner struct {
	YubiPrivateKey
}

// NewYubiPrivateKey returns a YubiPrivateKey for the given slot and public key
func NewYubiPrivateKey(slot []byte, pubKey data.ECDSAPublicKey,
	passRetriever passphrase.Retriever) *YubiPrivateKey {

	return &YubiPrivateKey{
		ECDSAPublicKey: pubKey,
		passRetriever:  passRetriever,
		slot:           slot,
		libLoader:      defaultLoader,
	}
}

// Public returns the public key as a crypto.PublicKey
func (ys *YubikeySigner) Public() crypto.PublicKey {
	publicKey, err := x509.ParsePKIXPublicKey(ys.YubiPrivateKey.Public())
	if err != nil {
		return nil
	}

	return publicKey
}

func (y *YubiPrivateKey) setLibLoader(loader pkcs11LibLoader) {
	y.libLoader = loader
}

// CryptoSigner returns a crypto.Signer that wraps the YubiPrivateKey. Needed
// for certificate generation only
func (y *YubiPrivateKey) CryptoSigner() crypto.Signer {
	return &YubikeySigner{YubiPrivateKey: *y}
}

// Private is not implemented in hardware keys
func (y *YubiPrivateKey) Private() []byte {
	// We cannot return the private material from a Yubikey
	// TODO(david): We probably want to return an error here
	return nil
}

// SignatureAlgorithm returns the signature algorithm used by yubikey keys (ECDSA)
func (y YubiPrivateKey) SignatureAlgorithm() data.SigAlgorithm {
	return data.ECDSASignature
}

// Sign signs msg with the key on the Yubikey, verifying each signature before
// returning it and retrying up to sigAttempts times
func (y *YubiPrivateKey) Sign(rand io.Reader, msg []byte, opts crypto.SignerOpts) ([]byte, error) {
	ctx, session, err := SetupHSMEnv(pkcs11Lib, y.libLoader)
	if err != nil {
		return nil, err
	}
	defer cleanup(ctx, session)

	v := signed.Verifiers[data.ECDSASignature]
	for i := 0; i < sigAttempts; i++ {
		sig, err := sign(ctx, session, y.slot, y.passRetriever, msg)
		if err != nil {
			return nil, fmt.Errorf("failed to sign using Yubikey: %v", err)
		}
		if err := v.Verify(&y.ECDSAPublicKey, sig, msg); err == nil {
			return sig, nil
		}
	}
	return nil, errors.New("Failed to generate signature on Yubikey.")
}

// If a byte array is less than the number of bytes specified by
// ecdsaPrivateKeySize, left-zero-pad the byte array until
// it is the required size.
func ensurePrivateKeySize(payload []byte) []byte {
	final := payload
	if len(payload) < ecdsaPrivateKeySize {
		final = make([]byte, ecdsaPrivateKeySize)
		copy(final[ecdsaPrivateKeySize-len(payload):], payload)
	}
	return final
}
// addECDSAKey adds a key to the yubikey
func addECDSAKey(
	ctx IPKCS11Ctx,
	session pkcs11.SessionHandle,
	privKey data.PrivateKey,
	pkcs11KeyID []byte,
	passRetriever passphrase.Retriever,
	role string,
) error {
	logrus.Debugf("Attempting to add key to yubikey with ID: %s", privKey.ID())

	err := login(ctx, session, passRetriever, pkcs11.CKU_SO, SO_USER_PIN)
	if err != nil {
		return err
	}
	defer ctx.Logout(session)

	// Create an ecdsa.PrivateKey out of the private key bytes
	ecdsaPrivKey, err := x509.ParseECPrivateKey(privKey.Private())
	if err != nil {
		return err
	}

	ecdsaPrivKeyD := ensurePrivateKeySize(ecdsaPrivKey.D.Bytes())

	template, err := trustmanager.NewCertificate(role)
	if err != nil {
		return fmt.Errorf("failed to create the certificate template: %v", err)
	}

	certBytes, err := x509.CreateCertificate(rand.Reader, template, template, ecdsaPrivKey.Public(), ecdsaPrivKey)
	if err != nil {
		return fmt.Errorf("failed to create the certificate: %v", err)
	}

	certTemplate := []*pkcs11.Attribute{
		pkcs11.NewAttribute(pkcs11.CKA_CLASS, pkcs11.CKO_CERTIFICATE),
		pkcs11.NewAttribute(pkcs11.CKA_VALUE, certBytes),
		pkcs11.NewAttribute(pkcs11.CKA_ID, pkcs11KeyID),
	}

	privateKeyTemplate := []*pkcs11.Attribute{
		pkcs11.NewAttribute(pkcs11.CKA_CLASS, pkcs11.CKO_PRIVATE_KEY),
		pkcs11.NewAttribute(pkcs11.CKA_KEY_TYPE, pkcs11.CKK_ECDSA),
		pkcs11.NewAttribute(pkcs11.CKA_ID, pkcs11KeyID),
		pkcs11.NewAttribute(pkcs11.CKA_EC_PARAMS, []byte{0x06, 0x08, 0x2a, 0x86, 0x48, 0xce, 0x3d, 0x03, 0x01, 0x07}),
		pkcs11.NewAttribute(pkcs11.CKA_VALUE, ecdsaPrivKeyD),
		pkcs11.NewAttribute(pkcs11.CKA_VENDOR_DEFINED, yubikeyKeymode),
	}

	_, err = ctx.CreateObject(session, certTemplate)
	if err != nil {
		return fmt.Errorf("error importing: %v", err)
	}

	_, err = ctx.CreateObject(session, privateKeyTemplate)
	if err != nil {
		return fmt.Errorf("error importing: %v", err)
	}

	return nil
}
func getECDSAKey(ctx IPKCS11Ctx, session pkcs11.SessionHandle, pkcs11KeyID []byte) (*data.ECDSAPublicKey, string, error) {
	findTemplate := []*pkcs11.Attribute{
		pkcs11.NewAttribute(pkcs11.CKA_TOKEN, true),
		pkcs11.NewAttribute(pkcs11.CKA_ID, pkcs11KeyID),
		pkcs11.NewAttribute(pkcs11.CKA_CLASS, pkcs11.CKO_PUBLIC_KEY),
	}

	attrTemplate := []*pkcs11.Attribute{
		pkcs11.NewAttribute(pkcs11.CKA_KEY_TYPE, []byte{0}),
		pkcs11.NewAttribute(pkcs11.CKA_EC_POINT, []byte{0}),
		pkcs11.NewAttribute(pkcs11.CKA_EC_PARAMS, []byte{0}),
	}

	if err := ctx.FindObjectsInit(session, findTemplate); err != nil {
		logrus.Debugf("Failed to init: %s", err.Error())
		return nil, "", err
	}
	obj, _, err := ctx.FindObjects(session, 1)
	if err != nil {
		logrus.Debugf("Failed to find objects: %v", err)
		return nil, "", err
	}
	if err := ctx.FindObjectsFinal(session); err != nil {
		logrus.Debugf("Failed to finalize: %s", err.Error())
		return nil, "", err
	}
	if len(obj) != 1 {
		logrus.Debugf("should have found one object")
		return nil, "", errors.New("no matching keys found inside of yubikey")
	}

	// Retrieve the public-key material to be able to create a new ECDSAPublicKey
	attr, err := ctx.GetAttributeValue(session, obj[0], attrTemplate)
	if err != nil {
		logrus.Debugf("Failed to get Attribute for: %v", obj[0])
		return nil, "", err
	}

	// Iterate through all the attributes of this key and save CKA_EC_POINT.
	// Removes ordering specific issues.
	var rawPubKey []byte
	for _, a := range attr {
		if a.Type == pkcs11.CKA_EC_POINT {
			rawPubKey = a.Value
		}
	}

	ecdsaPubKey := ecdsa.PublicKey{Curve: elliptic.P256(), X: new(big.Int).SetBytes(rawPubKey[3:35]), Y: new(big.Int).SetBytes(rawPubKey[35:])}
	pubBytes, err := x509.MarshalPKIXPublicKey(&ecdsaPubKey)
	if err != nil {
		logrus.Debugf("Failed to Marshal public key")
		return nil, "", err
	}

	return data.NewECDSAPublicKey(pubBytes), data.CanonicalRootRole, nil
}
// sign returns a signature for a given signature request
func sign(ctx IPKCS11Ctx, session pkcs11.SessionHandle, pkcs11KeyID []byte, passRetriever passphrase.Retriever, payload []byte) ([]byte, error) {
	err := login(ctx, session, passRetriever, pkcs11.CKU_USER, USER_PIN)
	if err != nil {
		return nil, fmt.Errorf("error logging in: %v", err)
	}
	defer ctx.Logout(session)

	// Define the ECDSA Private key template
	class := pkcs11.CKO_PRIVATE_KEY
	privateKeyTemplate := []*pkcs11.Attribute{
		pkcs11.NewAttribute(pkcs11.CKA_CLASS, class),
		pkcs11.NewAttribute(pkcs11.CKA_KEY_TYPE, pkcs11.CKK_ECDSA),
		pkcs11.NewAttribute(pkcs11.CKA_ID, pkcs11KeyID),
	}

	if err := ctx.FindObjectsInit(session, privateKeyTemplate); err != nil {
		logrus.Debugf("Failed to init find objects: %s", err.Error())
		return nil, err
	}
	obj, _, err := ctx.FindObjects(session, 1)
	if err != nil {
		logrus.Debugf("Failed to find objects: %v", err)
		return nil, err
	}
	if err = ctx.FindObjectsFinal(session); err != nil {
		logrus.Debugf("Failed to finalize find objects: %s", err.Error())
		return nil, err
	}
	if len(obj) != 1 {
		return nil, errors.New("length of objects found not 1")
	}

	var sig []byte
	err = ctx.SignInit(
		session, []*pkcs11.Mechanism{pkcs11.NewMechanism(pkcs11.CKM_ECDSA, nil)}, obj[0])
	if err != nil {
		return nil, err
	}

	// Get the SHA256 of the payload
	digest := sha256.Sum256(payload)

	if (yubikeyKeymode & KeymodeTouch) > 0 {
		touchToSignUI()
		defer touchDoneCallback()
	}
	// a call to Sign, whether or not Sign fails, will clear the SignInit
	sig, err = ctx.Sign(session, digest[:])
	if err != nil {
		logrus.Debugf("Error while signing: %s", err)
		return nil, err
	}

	if sig == nil {
		return nil, errors.New("Failed to create signature")
	}
	return sig[:], nil
}
func yubiRemoveKey(ctx IPKCS11Ctx, session pkcs11.SessionHandle, pkcs11KeyID []byte, passRetriever passphrase.Retriever, keyID string) error {
	err := login(ctx, session, passRetriever, pkcs11.CKU_SO, SO_USER_PIN)
	if err != nil {
		return err
	}
	defer ctx.Logout(session)

	template := []*pkcs11.Attribute{
		pkcs11.NewAttribute(pkcs11.CKA_TOKEN, true),
		pkcs11.NewAttribute(pkcs11.CKA_ID, pkcs11KeyID),
		//pkcs11.NewAttribute(pkcs11.CKA_CLASS, pkcs11.CKO_PRIVATE_KEY),
		pkcs11.NewAttribute(pkcs11.CKA_CLASS, pkcs11.CKO_CERTIFICATE),
	}

	if err := ctx.FindObjectsInit(session, template); err != nil {
		logrus.Debugf("Failed to init find objects: %s", err.Error())
		return err
	}
	obj, b, err := ctx.FindObjects(session, 1)
	if err != nil {
		logrus.Debugf("Failed to find objects: %s %v", err.Error(), b)
		return err
	}
	if err := ctx.FindObjectsFinal(session); err != nil {
		logrus.Debugf("Failed to finalize find objects: %s", err.Error())
		return err
	}
	if len(obj) != 1 {
		logrus.Debugf("should have found exactly one object")
		// err is nil at this point, so report the failure explicitly
		return errors.New("no matching keys found inside of yubikey")
	}

	// Delete the certificate
	err = ctx.DestroyObject(session, obj[0])
	if err != nil {
		logrus.Debugf("Failed to delete cert")
		return err
	}
	return nil
}
func yubiListKeys(ctx IPKCS11Ctx, session pkcs11.SessionHandle) (keys map[string]yubiSlot, err error) {
	keys = make(map[string]yubiSlot)
	findTemplate := []*pkcs11.Attribute{
		pkcs11.NewAttribute(pkcs11.CKA_TOKEN, true),
		//pkcs11.NewAttribute(pkcs11.CKA_ID, pkcs11KeyID),
		pkcs11.NewAttribute(pkcs11.CKA_CLASS, pkcs11.CKO_CERTIFICATE),
	}

	attrTemplate := []*pkcs11.Attribute{
		pkcs11.NewAttribute(pkcs11.CKA_ID, []byte{0}),
		pkcs11.NewAttribute(pkcs11.CKA_VALUE, []byte{0}),
	}

	if err = ctx.FindObjectsInit(session, findTemplate); err != nil {
		logrus.Debugf("Failed to init: %s", err.Error())
		return
	}
	objs, b, err := ctx.FindObjects(session, numSlots)
	for err == nil {
		var o []pkcs11.ObjectHandle
		o, b, err = ctx.FindObjects(session, numSlots)
		if err != nil {
			continue
		}
		if len(o) == 0 {
			break
		}
		objs = append(objs, o...)
	}
	if err != nil {
		logrus.Debugf("Failed to find: %s %v", err.Error(), b)
		if len(objs) == 0 {
			return nil, err
		}
	}
	if err = ctx.FindObjectsFinal(session); err != nil {
		logrus.Debugf("Failed to finalize: %s", err.Error())
		return
	}
	if len(objs) == 0 {
		return nil, errors.New("No keys found in yubikey.")
	}
	logrus.Debugf("Found %d objects matching list filters", len(objs))
	for _, obj := range objs {
		var (
			cert *x509.Certificate
			slot []byte
		)
		// Retrieve the public-key material to be able to create a new ECDSA key
		attr, err := ctx.GetAttributeValue(session, obj, attrTemplate)
		if err != nil {
			logrus.Debugf("Failed to get Attribute for: %v", obj)
			continue
		}

		// Iterate through all the attributes of this key and save CKA_ID and
		// CKA_VALUE. Removes ordering specific issues.
		for _, a := range attr {
			if a.Type == pkcs11.CKA_ID {
				slot = a.Value
			}
			if a.Type == pkcs11.CKA_VALUE {
				cert, err = x509.ParseCertificate(a.Value)
				if err != nil {
					continue
				}
				if !data.ValidRole(cert.Subject.CommonName) {
					continue
				}
			}
		}
		var ecdsaPubKey *ecdsa.PublicKey
		switch cert.PublicKeyAlgorithm {
		case x509.ECDSA:
			ecdsaPubKey = cert.PublicKey.(*ecdsa.PublicKey)
		default:
			logrus.Infof("Unsupported x509 PublicKeyAlgorithm: %d", cert.PublicKeyAlgorithm)
			continue
		}

		pubBytes, err := x509.MarshalPKIXPublicKey(ecdsaPubKey)
		if err != nil {
			logrus.Debugf("Failed to Marshal public key")
			continue
		}

		keys[data.NewECDSAPublicKey(pubBytes).ID()] = yubiSlot{
			role:   cert.Subject.CommonName,
			slotID: slot,
		}
	}
	return
}
func getNextEmptySlot(ctx IPKCS11Ctx, session pkcs11.SessionHandle) ([]byte, error) {
	findTemplate := []*pkcs11.Attribute{
		pkcs11.NewAttribute(pkcs11.CKA_TOKEN, true),
	}
	attrTemplate := []*pkcs11.Attribute{
		pkcs11.NewAttribute(pkcs11.CKA_ID, []byte{0}),
	}

	if err := ctx.FindObjectsInit(session, findTemplate); err != nil {
		logrus.Debugf("Failed to init: %s", err.Error())
		return nil, err
	}
	objs, b, err := ctx.FindObjects(session, numSlots)
	// if there are more objects than `numSlots`, get all of them until
	// there are no more to get
	for err == nil {
		var o []pkcs11.ObjectHandle
		o, b, err = ctx.FindObjects(session, numSlots)
		if err != nil {
			continue
		}
		if len(o) == 0 {
			break
		}
		objs = append(objs, o...)
	}
	taken := make(map[int]bool)
	if err != nil {
		logrus.Debugf("Failed to find: %s %v", err.Error(), b)
		return nil, err
	}
	if err = ctx.FindObjectsFinal(session); err != nil {
		logrus.Debugf("Failed to finalize: %s\n", err.Error())
		return nil, err
	}
	for _, obj := range objs {
		// Retrieve the slot ID
		attr, err := ctx.GetAttributeValue(session, obj, attrTemplate)
		if err != nil {
			continue
		}

		// Iterate through attributes. If an ID attr was found, mark it as taken
		for _, a := range attr {
			if a.Type == pkcs11.CKA_ID {
				if len(a.Value) < 1 {
					continue
				}
				// a byte will always be capable of representing all slot IDs
				// for the Yubikeys
				slotNum := int(a.Value[0])
				if slotNum >= numSlots {
					// defensive
					continue
				}
				taken[slotNum] = true
			}
		}
	}
	// iterate the token locations in our preferred order and use the first
	// available one. Otherwise exit the loop and return an error.
	for _, loc := range slotIDs {
		if !taken[loc] {
			return []byte{byte(loc)}, nil
		}
	}
	return nil, errors.New("Yubikey has no available slots.")
}
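The tail of `getNextEmptySlot` is a simple preference scan: collect occupied slot numbers into a set, then walk `slotIDs` (9c, 9e, 9d, 9a) and take the first free one. A standalone sketch of just that selection step (`nextEmptySlot` is a local mirror, not the package function):

```go
package main

import (
	"errors"
	"fmt"
)

// Preference order from the yubikey package: certificate slots 9c, 9e, 9d, 9a.
var slotIDs = []int{2, 1, 3, 0}

// nextEmptySlot mirrors the selection at the end of getNextEmptySlot: given
// the set of occupied slot numbers, pick the first free one in preference
// order, or error out when the token is full.
func nextEmptySlot(taken map[int]bool) ([]byte, error) {
	for _, loc := range slotIDs {
		if !taken[loc] {
			return []byte{byte(loc)}, nil
		}
	}
	return nil, errors.New("Yubikey has no available slots.")
}

func main() {
	slot, _ := nextEmptySlot(map[int]bool{2: true}) // 9c taken, fall back to 9e
	fmt.Println(slot[0])                            // 1
	_, err := nextEmptySlot(map[int]bool{0: true, 1: true, 2: true, 3: true})
	fmt.Println(err != nil) // true: all four slots occupied
}
```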
// YubiKeyStore is a KeyStore for private keys inside a Yubikey
type YubiKeyStore struct {
	passRetriever passphrase.Retriever
	keys          map[string]yubiSlot
	backupStore   trustmanager.KeyStore
	libLoader     pkcs11LibLoader
}

// NewYubiKeyStore returns a YubiKeyStore, given a backup key store to write any
// generated keys to (usually a KeyFileStore)
func NewYubiKeyStore(backupStore trustmanager.KeyStore, passphraseRetriever passphrase.Retriever) (
	*YubiKeyStore, error) {

	s := &YubiKeyStore{
		passRetriever: passphraseRetriever,
		keys:          make(map[string]yubiSlot),
		backupStore:   backupStore,
		libLoader:     defaultLoader,
	}
	s.ListKeys() // populate keys field
	return s, nil
}

// Name returns a user friendly name for the location this store
// keeps its data
func (s YubiKeyStore) Name() string {
	return "yubikey"
}

func (s *YubiKeyStore) setLibLoader(loader pkcs11LibLoader) {
	s.libLoader = loader
}

// ListKeys returns a map of keyID to role for the keys on the Yubikey
func (s *YubiKeyStore) ListKeys() map[string]string {
	if len(s.keys) > 0 {
		return buildKeyMap(s.keys)
	}
	ctx, session, err := SetupHSMEnv(pkcs11Lib, s.libLoader)
	if err != nil {
		logrus.Debugf("Failed to initialize PKCS11 environment: %s", err.Error())
		return nil
	}
	defer cleanup(ctx, session)

	keys, err := yubiListKeys(ctx, session)
	if err != nil {
		logrus.Debugf("Failed to list keys from the yubikey: %s", err.Error())
		return nil
	}
	s.keys = keys

	return buildKeyMap(keys)
}
// AddKey puts a key inside the Yubikey, as well as writing it to the backup store
func (s *YubiKeyStore) AddKey(keyID, role string, privKey data.PrivateKey) error {
	added, err := s.addKey(keyID, role, privKey)
	if err != nil {
		return err
	}
	if added {
		err = s.backupStore.AddKey(privKey.ID(), role, privKey)
		if err != nil {
			defer s.RemoveKey(keyID)
			return ErrBackupFailed{err: err.Error()}
		}
	}
	return nil
}

// Only add if we haven't seen the key already. Return whether the key was
// added.
func (s *YubiKeyStore) addKey(keyID, role string, privKey data.PrivateKey) (
	bool, error) {

	// We only allow adding root keys for now
	if role != data.CanonicalRootRole {
		return false, fmt.Errorf(
			"yubikey only supports storing root keys, got %s for key: %s", role, keyID)
	}

	ctx, session, err := SetupHSMEnv(pkcs11Lib, s.libLoader)
	if err != nil {
		logrus.Debugf("Failed to initialize PKCS11 environment: %s", err.Error())
		return false, err
	}
	defer cleanup(ctx, session)

	if k, ok := s.keys[keyID]; ok {
		if k.role == role {
			// already have the key and it's associated with the correct role
			return false, nil
		}
	}

	slot, err := getNextEmptySlot(ctx, session)
	if err != nil {
		logrus.Debugf("Failed to get an empty yubikey slot: %s", err.Error())
		return false, err
	}
	logrus.Debugf("Attempting to store key using yubikey slot %v", slot)

	err = addECDSAKey(
		ctx, session, privKey, slot, s.passRetriever, role)
	if err == nil {
		s.keys[privKey.ID()] = yubiSlot{
			role:   role,
			slotID: slot,
		}
		return true, nil
	}
	logrus.Debugf("Failed to add key to yubikey: %v", err)

	return false, err
}
// GetKey retrieves a key from the Yubikey only (it does not look inside the
// backup store)
func (s *YubiKeyStore) GetKey(keyID string) (data.PrivateKey, string, error) {
	ctx, session, err := SetupHSMEnv(pkcs11Lib, s.libLoader)
	if err != nil {
		logrus.Debugf("Failed to initialize PKCS11 environment: %s", err.Error())
		return nil, "", err
	}
	defer cleanup(ctx, session)

	key, ok := s.keys[keyID]
	if !ok {
		return nil, "", errors.New("no matching keys found inside of yubikey")
	}

	pubKey, alias, err := getECDSAKey(ctx, session, key.slotID)
	if err != nil {
		logrus.Debugf("Failed to get key from slot %s: %s", key.slotID, err.Error())
		return nil, "", err
	}
	// Check to see if we're returning the intended keyID
	if pubKey.ID() != keyID {
		return nil, "", fmt.Errorf("expected root key: %s, but found: %s", keyID, pubKey.ID())
	}
	privKey := NewYubiPrivateKey(key.slotID, *pubKey, s.passRetriever)
	if privKey == nil {
		return nil, "", errors.New("could not initialize new YubiPrivateKey")
	}

	return privKey, alias, err
}

// RemoveKey deletes a key from the Yubikey only (it does not remove it from the
// backup store)
func (s *YubiKeyStore) RemoveKey(keyID string) error {
	ctx, session, err := SetupHSMEnv(pkcs11Lib, s.libLoader)
	if err != nil {
		logrus.Debugf("Failed to initialize PKCS11 environment: %s", err.Error())
		return nil
	}
	defer cleanup(ctx, session)

	key, ok := s.keys[keyID]
	if !ok {
		return errors.New("Key not present in yubikey")
	}
	err = yubiRemoveKey(ctx, session, key.slotID, s.passRetriever, keyID)
	if err == nil {
		delete(s.keys, keyID)
	} else {
		logrus.Debugf("Failed to remove from the yubikey KeyID %s: %v", keyID, err)
	}

	return err
}

// ExportKey doesn't work, because you can't export data from a Yubikey
func (s *YubiKeyStore) ExportKey(keyID string) ([]byte, error) {
	logrus.Debugf("Attempting to export: %s key inside of YubiKeyStore", keyID)
	return nil, errors.New("Keys cannot be exported from a Yubikey.")
}

// ImportKey imports a root key into a Yubikey
func (s *YubiKeyStore) ImportKey(pemBytes []byte, keyPath string) error {
	logrus.Debugf("Attempting to import: %s key inside of YubiKeyStore", keyPath)
	privKey, _, err := trustmanager.GetPasswdDecryptBytes(
		s.passRetriever, pemBytes, "", "imported root")
	if err != nil {
		logrus.Debugf("Failed to get and retrieve a key from: %s", keyPath)
		return err
	}
	if keyPath != data.CanonicalRootRole {
		return fmt.Errorf("yubikey only supports storing root keys")
	}
	_, err = s.addKey(privKey.ID(), "root", privKey)
	return err
}
func cleanup(ctx IPKCS11Ctx, session pkcs11.SessionHandle) {
	err := ctx.CloseSession(session)
	if err != nil {
		logrus.Debugf("Error closing session: %s", err.Error())
	}
	finalizeAndDestroy(ctx)
}

func finalizeAndDestroy(ctx IPKCS11Ctx) {
	err := ctx.Finalize()
	if err != nil {
		logrus.Debugf("Error finalizing: %s", err.Error())
	}
	ctx.Destroy()
}

// SetupHSMEnv is a method that depends on the existence of a PKCS11 library
// at libraryPath: it loads and initializes the library, then opens a
// read/write session on the first available slot
func SetupHSMEnv(libraryPath string, libLoader pkcs11LibLoader) (
	IPKCS11Ctx, pkcs11.SessionHandle, error) {

	if libraryPath == "" {
		return nil, 0, errors.New("No library found.")
	}
	p := libLoader(libraryPath)

	if p == nil {
		return nil, 0, errors.New("Failed to init library")
	}

	if err := p.Initialize(); err != nil {
		defer finalizeAndDestroy(p)
		return nil, 0, fmt.Errorf("Initialize error %s", err.Error())
	}

	slots, err := p.GetSlotList(true)
	if err != nil {
		defer finalizeAndDestroy(p)
		return nil, 0, fmt.Errorf("Failed to list HSM slots %s", err)
	}
	// Check to see if we got any slots from the HSM.
	if len(slots) < 1 {
		defer finalizeAndDestroy(p)
		return nil, 0, fmt.Errorf("No HSM Slots found")
	}

	// CKF_SERIAL_SESSION: TRUE if cryptographic functions are performed in serial with the application; FALSE if the functions may be performed in parallel with the application.
	// CKF_RW_SESSION: TRUE if the session is read/write; FALSE if the session is read-only
	session, err := p.OpenSession(slots[0], pkcs11.CKF_SERIAL_SESSION|pkcs11.CKF_RW_SESSION)
	if err != nil {
		defer cleanup(p, session)
		return nil, 0, fmt.Errorf("Failed to Start Session with HSM %s", err)
	}

	return p, session, nil
}

// YubikeyAccessible returns true if a Yubikey can be accessed
func YubikeyAccessible() bool {
	if pkcs11Lib == "" {
		return false
	}
	ctx, session, err := SetupHSMEnv(pkcs11Lib, defaultLoader)
	if err != nil {
		return false
	}
	defer cleanup(ctx, session)
	return true
}

func login(ctx IPKCS11Ctx, session pkcs11.SessionHandle, passRetriever passphrase.Retriever, userFlag uint, defaultPassw string) error {
	// try default password
	err := ctx.Login(session, userFlag, defaultPassw)
	if err == nil {
		return nil
	}

	// default failed, ask user for password
	for attempts := 0; ; attempts++ {
		var (
			giveup bool
			err    error
			user   string
		)
		if userFlag == pkcs11.CKU_SO {
			user = "SO Pin"
		} else {
			user = "User Pin"
		}
		passwd, giveup, err := passRetriever(user, "yubikey", false, attempts)
		// Check if the passphrase retriever got an error or if it is telling us to give up
		if giveup || err != nil {
			return trustmanager.ErrPasswordInvalid{}
		}
		if attempts > 2 {
			return trustmanager.ErrAttemptsExceeded{}
		}

		// Try logging in with the retrieved passphrase
		err = ctx.Login(session, userFlag, passwd)
		if err == nil {
			return nil
		}
	}
}

func buildKeyMap(keys map[string]yubiSlot) map[string]string {
	res := make(map[string]string)
	for k, v := range keys {
		res[k] = v.role
	}
	return res
}
@@ -1,6 +1,6 @@
 # GOTUF

 This is still a work in progress but will shortly be a fully compliant
 Go implementation of [The Update Framework (TUF)](http://theupdateframework.com/).

 ## Where's the CLI

@@ -32,5 +32,5 @@ without the code becoming overly convoluted.
 Some features such as pluggable verifiers have already been merged upstream to flynn/go-tuf
 and we are in discussion with [titanous](https://github.com/titanous) about working to merge the 2 implementations.

 This implementation retains the same 3 Clause BSD license present on
 the original flynn implementation.
@@ -12,24 +12,26 @@ import (
 	"strings"

 	"github.com/Sirupsen/logrus"
-	tuf "github.com/endophage/gotuf"
-	"github.com/endophage/gotuf/data"
-	"github.com/endophage/gotuf/keys"
-	"github.com/endophage/gotuf/signed"
-	"github.com/endophage/gotuf/store"
-	"github.com/endophage/gotuf/utils"
+	tuf "github.com/docker/notary/tuf"
+	"github.com/docker/notary/tuf/data"
+	"github.com/docker/notary/tuf/keys"
+	"github.com/docker/notary/tuf/signed"
+	"github.com/docker/notary/tuf/store"
+	"github.com/docker/notary/tuf/utils"
 )

 const maxSize int64 = 5 << 20

 // Client is a usability wrapper around a raw TUF repo
 type Client struct {
-	local  *tuf.TufRepo
+	local  *tuf.Repo
 	remote store.RemoteStore
 	keysDB *keys.KeyDB
 	cache  store.MetadataStore
 }

-func NewClient(local *tuf.TufRepo, remote store.RemoteStore, keysDB *keys.KeyDB, cache store.MetadataStore) *Client {
+// NewClient initializes a Client with the given repo, remote source of content, key database, and cache
+func NewClient(local *tuf.Repo, remote store.RemoteStore, keysDB *keys.KeyDB, cache store.MetadataStore) *Client {
 	return &Client{
 		local:  local,
 		remote: remote,
@@ -38,6 +40,7 @@ func NewClient(local *tuf.TufRepo, remote store.RemoteStore, keysDB *keys.KeyDB,
 	}
 }

+// Update performs an update to the TUF repo as defined by the TUF spec
 func (c *Client) Update() error {
 	// 1. Get timestamp
 	//   a. If timestamp error (verification, expired, etc...) download new root and return to 1.
@@ -52,7 +55,7 @@ func (c *Client) Update() error {
 	if err != nil {
 		logrus.Debug("Error occurred. Root will be downloaded and another update attempted")
 		if err := c.downloadRoot(); err != nil {
-			logrus.Errorf("client Update (Root):", err)
+			logrus.Error("client Update (Root):", err)
 			return err
 		}
 		// If we error again, we now have the latest root and just want to fail
@@ -129,7 +132,7 @@ func (c Client) checkRoot() error {
 func (c *Client) downloadRoot() error {
 	role := data.RoleName("root")
 	size := maxSize
-	var expectedSha256 []byte = nil
+	var expectedSha256 []byte
 	if c.local.Snapshot != nil {
 		size = c.local.Snapshot.Signed.Meta[role].Length
 		expectedSha256 = c.local.Snapshot.Signed.Meta[role].Hashes["sha256"]
@@ -140,7 +143,7 @@ func (c *Client) downloadRoot() error {
 	// interpreted as 0.
 	var download bool
 	var err error
-	var cachedRoot []byte = nil
+	var cachedRoot []byte
 	old := &data.Signed{}
 	version := 0

@@ -380,7 +383,7 @@ func (c *Client) downloadTargets(role string) error {
 		return fmt.Errorf("Invalid role: %s", role)
 	}
 	keyIDs := r.KeyIDs
-	s, err := c.GetTargetsFile(role, keyIDs, snap.Meta, root.ConsistentSnapshot, r.Threshold)
+	s, err := c.getTargetsFile(role, keyIDs, snap.Meta, root.ConsistentSnapshot, r.Threshold)
 	if err != nil {
 		logrus.Error("Error getting targets file:", err)
 		return err
@@ -402,9 +405,11 @@ func (c *Client) downloadSigned(role string, size int64, expectedSha256 []byte)
 	if err != nil {
 		return nil, nil, err
 	}
-	genHash := sha256.Sum256(raw)
-	if expectedSha256 != nil && !bytes.Equal(genHash[:], expectedSha256) {
-		return nil, nil, ErrChecksumMismatch{role: role}
+	if expectedSha256 != nil {
+		genHash := sha256.Sum256(raw)
+		if !bytes.Equal(genHash[:], expectedSha256) {
+			return nil, nil, ErrChecksumMismatch{role: role}
+		}
 	}
 	s := &data.Signed{}
 	err = json.Unmarshal(raw, s)
@@ -414,7 +419,7 @@
 	return raw, s, nil
 }

-func (c Client) GetTargetsFile(role string, keyIDs []string, snapshotMeta data.Files, consistent bool, threshold int) (*data.Signed, error) {
+func (c Client) getTargetsFile(role string, keyIDs []string, snapshotMeta data.Files, consistent bool, threshold int) (*data.Signed, error) {
 	// require role exists in snapshots
 	roleMeta, ok := snapshotMeta[role]
 	if !ok {
@@ -538,6 +543,7 @@ func (c Client) TargetMeta(path string) (*data.FileMeta, error) {
 	return meta, nil
 }

+// DownloadTarget downloads the target to dst from the remote
 func (c Client) DownloadTarget(dst io.Writer, path string, meta *data.FileMeta) error {
 	reader, err := c.remote.GetTarget(path)
 	if err != nil {
32 vendor/src/github.com/docker/notary/tuf/client/errors.go vendored Normal file
@@ -0,0 +1,32 @@
package client

import (
	"fmt"
)

// ErrChecksumMismatch - a checksum failed verification
type ErrChecksumMismatch struct {
	role string
}

func (e ErrChecksumMismatch) Error() string {
	return fmt.Sprintf("tuf: checksum for %s did not match", e.role)
}

// ErrMissingMeta - couldn't find the FileMeta object for a role or target
type ErrMissingMeta struct {
	role string
}

func (e ErrMissingMeta) Error() string {
	return fmt.Sprintf("tuf: sha256 checksum required for %s", e.role)
}

// ErrCorruptedCache - local data is incorrect
type ErrCorruptedCache struct {
	file string
}

func (e ErrCorruptedCache) Error() string {
	return fmt.Sprintf("cache is corrupted: %s", e.file)
}
519 vendor/src/github.com/docker/notary/tuf/data/keys.go vendored Normal file
@@ -0,0 +1,519 @@
package data

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/rsa"
	"crypto/sha256"
	"crypto/x509"
	"encoding/asn1"
	"encoding/hex"
	"errors"
	"io"
	"math/big"

	"github.com/Sirupsen/logrus"
	"github.com/agl/ed25519"
	"github.com/jfrazelle/go/canonical/json"
)

// PublicKey is the necessary interface for public keys
type PublicKey interface {
	ID() string
	Algorithm() string
	Public() []byte
}

// PrivateKey adds the ability to access the private key
type PrivateKey interface {
	PublicKey
	Sign(rand io.Reader, msg []byte, opts crypto.SignerOpts) (signature []byte, err error)
	Private() []byte
	CryptoSigner() crypto.Signer
	SignatureAlgorithm() SigAlgorithm
}

// KeyPair holds the public and private key bytes
type KeyPair struct {
	Public  []byte `json:"public"`
	Private []byte `json:"private"`
}

// Keys represents a map of key ID to PublicKey object. It's necessary
// to allow us to unmarshal into an interface via the json.Unmarshaller
// interface
type Keys map[string]PublicKey

// UnmarshalJSON implements the json.Unmarshaller interface
func (ks *Keys) UnmarshalJSON(data []byte) error {
	parsed := make(map[string]tufKey)
	err := json.Unmarshal(data, &parsed)
	if err != nil {
		return err
	}
	final := make(map[string]PublicKey)
	for k, tk := range parsed {
		final[k] = typedPublicKey(tk)
	}
	*ks = final
	return nil
}

// KeyList represents a list of keys
type KeyList []PublicKey

// UnmarshalJSON implements the json.Unmarshaller interface
func (ks *KeyList) UnmarshalJSON(data []byte) error {
	parsed := make([]tufKey, 0, 1)
	err := json.Unmarshal(data, &parsed)
	if err != nil {
		return err
	}
	final := make([]PublicKey, 0, len(parsed))
	for _, tk := range parsed {
		final = append(final, typedPublicKey(tk))
	}
	*ks = final
	return nil
}

func typedPublicKey(tk tufKey) PublicKey {
	switch tk.Algorithm() {
	case ECDSAKey:
		return &ECDSAPublicKey{tufKey: tk}
	case ECDSAx509Key:
		return &ECDSAx509PublicKey{tufKey: tk}
	case RSAKey:
		return &RSAPublicKey{tufKey: tk}
	case RSAx509Key:
		return &RSAx509PublicKey{tufKey: tk}
	case ED25519Key:
		return &ED25519PublicKey{tufKey: tk}
	}
	return &UnknownPublicKey{tufKey: tk}
}

func typedPrivateKey(tk tufKey) (PrivateKey, error) {
	private := tk.Value.Private
	tk.Value.Private = nil
	switch tk.Algorithm() {
	case ECDSAKey:
		return NewECDSAPrivateKey(
			&ECDSAPublicKey{
				tufKey: tk,
			},
			private,
		)
	case ECDSAx509Key:
		return NewECDSAPrivateKey(
			&ECDSAx509PublicKey{
				tufKey: tk,
			},
			private,
		)
	case RSAKey:
		return NewRSAPrivateKey(
			&RSAPublicKey{
				tufKey: tk,
			},
			private,
		)
	case RSAx509Key:
		return NewRSAPrivateKey(
			&RSAx509PublicKey{
				tufKey: tk,
			},
			private,
		)
	case ED25519Key:
		return NewED25519PrivateKey(
			ED25519PublicKey{
				tufKey: tk,
			},
			private,
		)
	}
	return &UnknownPrivateKey{
		tufKey:     tk,
		privateKey: privateKey{private: private},
	}, nil
}

// NewPublicKey creates a new, correctly typed PublicKey, using the
// UnknownPublicKey catchall for unsupported ciphers
func NewPublicKey(alg string, public []byte) PublicKey {
	tk := tufKey{
		Type: alg,
		Value: KeyPair{
			Public: public,
		},
	}
	return typedPublicKey(tk)
}

// NewPrivateKey creates a new, correctly typed PrivateKey, using the
// UnknownPrivateKey catchall for unsupported ciphers
func NewPrivateKey(pubKey PublicKey, private []byte) (PrivateKey, error) {
	tk := tufKey{
		Type: pubKey.Algorithm(),
		Value: KeyPair{
			Public:  pubKey.Public(),
			Private: private, // typedPrivateKey moves this value
		},
	}
	return typedPrivateKey(tk)
}

// UnmarshalPublicKey is used to parse individual public keys in JSON
func UnmarshalPublicKey(data []byte) (PublicKey, error) {
	var parsed tufKey
	err := json.Unmarshal(data, &parsed)
	if err != nil {
		return nil, err
	}
	return typedPublicKey(parsed), nil
}

// UnmarshalPrivateKey is used to parse individual private keys in JSON
func UnmarshalPrivateKey(data []byte) (PrivateKey, error) {
	var parsed tufKey
	err := json.Unmarshal(data, &parsed)
	if err != nil {
		return nil, err
	}
	return typedPrivateKey(parsed)
}

// tufKey is the structure used for both public and private keys in TUF.
// Normally it would make sense to use different structures for public and
// private keys, but that would change the key ID algorithm (since the canonical
// JSON would be different). This structure should normally be accessed through
// the PublicKey or PrivateKey interfaces.
type tufKey struct {
	id    string
	Type  string  `json:"keytype"`
	Value KeyPair `json:"keyval"`
}

// Algorithm returns the algorithm of the key
func (k tufKey) Algorithm() string {
	return k.Type
}

// ID returns the ID of the key, generating and caching it if necessary
func (k *tufKey) ID() string {
	if k.id == "" {
		pubK := tufKey{
			Type: k.Algorithm(),
			Value: KeyPair{
				Public:  k.Public(),
				Private: nil,
			},
		}
		data, err := json.MarshalCanonical(&pubK)
		if err != nil {
			logrus.Error("Error generating key ID:", err)
		}
		digest := sha256.Sum256(data)
		k.id = hex.EncodeToString(digest[:])
	}
	return k.id
}
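The `ID()` method above hashes the canonical JSON of the public half of the key. A self-contained sketch of that computation, using the stdlib `encoding/json` as an approximation (notary hashes *canonical* JSON with sorted keys and fixed escaping, so the IDs produced here will not match notary's real ones; the struct below is an illustrative stand-in for `tufKey`):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// pubKeyJSON mirrors tufKey's serialized form with the private half omitted.
// NOTE: this is an approximation for illustration; the real code uses
// canonical JSON marshaling, which serializes differently.
type pubKeyJSON struct {
	Type  string `json:"keytype"`
	Value struct {
		Public []byte `json:"public"`
	} `json:"keyval"`
}

// keyID serializes the public portion of a key and hex-encodes its SHA-256.
func keyID(alg string, public []byte) string {
	k := pubKeyJSON{Type: alg}
	k.Value.Public = public
	data, _ := json.Marshal(k)
	digest := sha256.Sum256(data)
	return hex.EncodeToString(digest[:])
}

func main() {
	id := keyID("ed25519", []byte("example public key bytes"))
	fmt.Println(len(id)) // 64: hex-encoded SHA-256
}
```

Because only the public portion is hashed, a private key and its derived public key share the same ID, which is what makes `PublicKeyFromPrivate` at the bottom of this file work.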
// Public returns the public bytes
func (k tufKey) Public() []byte {
	return k.Value.Public
}

// Public key types

// ECDSAPublicKey represents an ECDSA key using a raw serialization
// of the public key
type ECDSAPublicKey struct {
	tufKey
}

// ECDSAx509PublicKey represents an ECDSA key using an x509 cert
// as the serialized format of the public key
type ECDSAx509PublicKey struct {
	tufKey
}

// RSAPublicKey represents an RSA key using a raw serialization
// of the public key
type RSAPublicKey struct {
	tufKey
}

// RSAx509PublicKey represents an RSA key using an x509 cert
// as the serialized format of the public key
type RSAx509PublicKey struct {
	tufKey
}

// ED25519PublicKey represents an ED25519 key using a raw serialization
// of the public key
type ED25519PublicKey struct {
	tufKey
}

// UnknownPublicKey is a catchall for key types that are not supported
type UnknownPublicKey struct {
	tufKey
}

// NewECDSAPublicKey initializes a new public key with the ECDSAKey type
func NewECDSAPublicKey(public []byte) *ECDSAPublicKey {
	return &ECDSAPublicKey{
		tufKey: tufKey{
			Type: ECDSAKey,
			Value: KeyPair{
				Public:  public,
				Private: nil,
			},
		},
	}
}

// NewECDSAx509PublicKey initializes a new public key with the ECDSAx509Key type
func NewECDSAx509PublicKey(public []byte) *ECDSAx509PublicKey {
	return &ECDSAx509PublicKey{
		tufKey: tufKey{
			Type: ECDSAx509Key,
			Value: KeyPair{
				Public:  public,
				Private: nil,
			},
		},
	}
}

// NewRSAPublicKey initializes a new public key with the RSA type
func NewRSAPublicKey(public []byte) *RSAPublicKey {
	return &RSAPublicKey{
		tufKey: tufKey{
			Type: RSAKey,
			Value: KeyPair{
				Public:  public,
				Private: nil,
			},
		},
	}
}

// NewRSAx509PublicKey initializes a new public key with the RSAx509Key type
func NewRSAx509PublicKey(public []byte) *RSAx509PublicKey {
	return &RSAx509PublicKey{
		tufKey: tufKey{
			Type: RSAx509Key,
			Value: KeyPair{
				Public:  public,
				Private: nil,
			},
		},
	}
}

// NewED25519PublicKey initializes a new public key with the ED25519Key type
func NewED25519PublicKey(public []byte) *ED25519PublicKey {
	return &ED25519PublicKey{
		tufKey: tufKey{
			Type: ED25519Key,
			Value: KeyPair{
				Public:  public,
				Private: nil,
			},
		},
	}
}

// Private key types
type privateKey struct {
	private []byte
}

type signer struct {
	signer crypto.Signer
}

// ECDSAPrivateKey represents a private ECDSA key
type ECDSAPrivateKey struct {
	PublicKey
	privateKey
	signer
}

// RSAPrivateKey represents a private RSA key
type RSAPrivateKey struct {
	PublicKey
	privateKey
	signer
}

// ED25519PrivateKey represents a private ED25519 key
type ED25519PrivateKey struct {
	ED25519PublicKey
	privateKey
}

// UnknownPrivateKey is a catchall for unsupported key types
type UnknownPrivateKey struct {
	tufKey
	privateKey
}

// NewECDSAPrivateKey initializes a new ECDSA private key
func NewECDSAPrivateKey(public PublicKey, private []byte) (*ECDSAPrivateKey, error) {
	switch public.(type) {
	case *ECDSAPublicKey, *ECDSAx509PublicKey:
	default:
		return nil, errors.New("Invalid public key type provided to NewECDSAPrivateKey")
	}
	ecdsaPrivKey, err := x509.ParseECPrivateKey(private)
	if err != nil {
		return nil, err
	}
	return &ECDSAPrivateKey{
		PublicKey:  public,
		privateKey: privateKey{private: private},
		signer:     signer{signer: ecdsaPrivKey},
	}, nil
}

// NewRSAPrivateKey initializes a new RSA private key
func NewRSAPrivateKey(public PublicKey, private []byte) (*RSAPrivateKey, error) {
	switch public.(type) {
	case *RSAPublicKey, *RSAx509PublicKey:
	default:
		return nil, errors.New("Invalid public key type provided to NewRSAPrivateKey")
	}
	rsaPrivKey, err := x509.ParsePKCS1PrivateKey(private)
	if err != nil {
		return nil, err
	}
	return &RSAPrivateKey{
		PublicKey:  public,
		privateKey: privateKey{private: private},
		signer:     signer{signer: rsaPrivKey},
	}, nil
}

// NewED25519PrivateKey initializes a new ED25519 private key
func NewED25519PrivateKey(public ED25519PublicKey, private []byte) (*ED25519PrivateKey, error) {
	return &ED25519PrivateKey{
		ED25519PublicKey: public,
		privateKey:       privateKey{private: private},
	}, nil
}

// Private returns the serialized private bytes of the key
func (k privateKey) Private() []byte {
	return k.private
}

// CryptoSigner returns the underlying crypto.Signer for use cases where we need the default
// signature or public key functionality (like when we generate certificates)
func (s signer) CryptoSigner() crypto.Signer {
	return s.signer
}

// CryptoSigner returns nil for ED25519PrivateKey, which does not wrap a crypto.Signer
func (k ED25519PrivateKey) CryptoSigner() crypto.Signer {
	return nil
}

// CryptoSigner returns nil for UnknownPrivateKey, which cannot sign
func (k UnknownPrivateKey) CryptoSigner() crypto.Signer {
	return nil
}

type ecdsaSig struct {
	R *big.Int
	S *big.Int
}

// Sign creates an ecdsa signature
func (k ECDSAPrivateKey) Sign(rand io.Reader, msg []byte, opts crypto.SignerOpts) (signature []byte, err error) {
	ecdsaPrivKey, ok := k.CryptoSigner().(*ecdsa.PrivateKey)
	if !ok {
		return nil, errors.New("Signer was based on the wrong key type")
	}
	hashed := sha256.Sum256(msg)
	sigASN1, err := ecdsaPrivKey.Sign(rand, hashed[:], opts)
	if err != nil {
		return nil, err
	}

	sig := ecdsaSig{}
	_, err = asn1.Unmarshal(sigASN1, &sig)
	if err != nil {
		return nil, err
	}
	rBytes, sBytes := sig.R.Bytes(), sig.S.Bytes()
	octetLength := (ecdsaPrivKey.Params().BitSize + 7) >> 3

	// MUST include leading zeros in the output
	rBuf := make([]byte, octetLength-len(rBytes), octetLength)
	sBuf := make([]byte, octetLength-len(sBytes), octetLength)

	rBuf = append(rBuf, rBytes...)
	sBuf = append(sBuf, sBytes...)
	return append(rBuf, sBuf...), nil
}
// Sign creates an rsa signature
func (k RSAPrivateKey) Sign(rand io.Reader, msg []byte, opts crypto.SignerOpts) (signature []byte, err error) {
	hashed := sha256.Sum256(msg)
	if opts == nil {
		opts = &rsa.PSSOptions{
			SaltLength: rsa.PSSSaltLengthEqualsHash,
			Hash:       crypto.SHA256,
		}
	}
	return k.CryptoSigner().Sign(rand, hashed[:], opts)
}
// Sign creates an ed25519 signature
func (k ED25519PrivateKey) Sign(rand io.Reader, msg []byte, opts crypto.SignerOpts) (signature []byte, err error) {
	priv := [ed25519.PrivateKeySize]byte{}
	copy(priv[:], k.private[ed25519.PublicKeySize:])
	return ed25519.Sign(&priv, msg)[:], nil
}

// Sign on an UnknownPrivateKey raises an error because the client does not
// know how to sign with this key type.
func (k UnknownPrivateKey) Sign(rand io.Reader, msg []byte, opts crypto.SignerOpts) (signature []byte, err error) {
	return nil, errors.New("Unknown key type, cannot sign.")
}

// SignatureAlgorithm returns the SigAlgorithm for a ECDSAPrivateKey
func (k ECDSAPrivateKey) SignatureAlgorithm() SigAlgorithm {
	return ECDSASignature
}

// SignatureAlgorithm returns the SigAlgorithm for a RSAPrivateKey
func (k RSAPrivateKey) SignatureAlgorithm() SigAlgorithm {
	return RSAPSSSignature
}

// SignatureAlgorithm returns the SigAlgorithm for a ED25519PrivateKey
func (k ED25519PrivateKey) SignatureAlgorithm() SigAlgorithm {
	return EDDSASignature
}

// SignatureAlgorithm returns the SigAlgorithm for an UnknownPrivateKey
func (k UnknownPrivateKey) SignatureAlgorithm() SigAlgorithm {
	return ""
}

// PublicKeyFromPrivate returns a new tufKey based on a private key, with
// the private key bytes guaranteed to be nil.
func PublicKeyFromPrivate(pk PrivateKey) PublicKey {
	return typedPublicKey(tufKey{
		Type: pk.Algorithm(),
		Value: KeyPair{
			Public:  pk.Public(),
			Private: nil,
		},
	})
}
@@ -3,8 +3,6 @@ package data
 import (
 	"fmt"
 	"strings"
-
-	"github.com/endophage/gotuf/errors"
 )
 
 // Canonical base role names
@@ -15,6 +13,10 @@ const (
 	CanonicalTimestampRole = "timestamp"
 )
 
+// ValidRoles holds an overrideable mapping of canonical role names
+// to any custom role names a user wants to make use of. This allows
+// us to be internally consistent while using different roles in the
+// public TUF files.
 var ValidRoles = map[string]string{
 	CanonicalRootRole:    CanonicalRootRole,
 	CanonicalTargetsRole: CanonicalTargetsRole,
@@ -22,6 +24,18 @@ var ValidRoles = map[string]string{
 	CanonicalTimestampRole: CanonicalTimestampRole,
 }
 
+// ErrInvalidRole represents an error regarding a role. Typically
+// something like a role for which some of the public keys were
+// not found in the TUF repo.
+type ErrInvalidRole struct {
+	Role string
+}
+
+func (e ErrInvalidRole) Error() string {
+	return fmt.Sprintf("tuf: invalid role %s", e.Role)
+}
+
+// SetValidRoles is a utility function to override some or all of the roles
 func SetValidRoles(rs map[string]string) {
 	// iterate ValidRoles
 	for k := range ValidRoles {
@@ -31,13 +45,17 @@ func SetValidRoles(rs map[string]string) {
 	}
 }
 
-func RoleName(role string) string {
-	if r, ok := ValidRoles[role]; ok {
+// RoleName returns the (possibly overridden) role name for the provided
+// canonical role name
+func RoleName(canonicalRole string) string {
+	if r, ok := ValidRoles[canonicalRole]; ok {
 		return r
 	}
-	return role
+	return canonicalRole
 }
 
+// CanonicalRole does a reverse lookup to get the canonical role name
+// from the (possibly overridden) role name
 func CanonicalRole(role string) string {
 	name := strings.ToLower(role)
 	if _, ok := ValidRoles[name]; ok {
@@ -79,10 +97,13 @@ func ValidRole(name string) bool {
 	return false
 }
 
+// RootRole is a cut down role as it appears in the root.json
 type RootRole struct {
 	KeyIDs    []string `json:"keyids"`
 	Threshold int      `json:"threshold"`
 }
 
+// Role is a more verbose role as they appear in targets delegations
 type Role struct {
 	RootRole
 	Name string `json:"name"`
@@ -91,15 +112,16 @@ type Role struct {
 	Email string `json:"email,omitempty"`
 }
 
+// NewRole creates a new Role object from the given parameters
 func NewRole(name string, threshold int, keyIDs, paths, pathHashPrefixes []string) (*Role, error) {
 	if len(paths) > 0 && len(pathHashPrefixes) > 0 {
-		return nil, errors.ErrInvalidRole{}
+		return nil, ErrInvalidRole{Role: name}
 	}
 	if threshold < 1 {
-		return nil, errors.ErrInvalidRole{}
+		return nil, ErrInvalidRole{Role: name}
 	}
 	if !ValidRole(name) {
-		return nil, errors.ErrInvalidRole{}
+		return nil, ErrInvalidRole{Role: name}
 	}
 	return &Role{
 		RootRole: RootRole{
@@ -113,10 +135,13 @@ func NewRole(name string, threshold int, keyIDs, paths, pathHashPrefixes []strin
 
 }
 
+// IsValid checks if the role has defined both paths and path hash prefixes,
+// having both is invalid
 func (r Role) IsValid() bool {
 	return !(len(r.Paths) > 0 && len(r.PathHashPrefixes) > 0)
 }
 
+// ValidKey checks if the given id is a recognized signing key for the role
 func (r Role) ValidKey(id string) bool {
 	for _, key := range r.KeyIDs {
 		if key == id {
@@ -126,6 +151,7 @@ func (r Role) ValidKey(id string) bool {
 	return false
 }
 
+// CheckPaths checks if a given path is valid for the role
 func (r Role) CheckPaths(path string) bool {
 	for _, p := range r.Paths {
 		if strings.HasPrefix(path, p) {
@@ -135,6 +161,7 @@ func (r Role) CheckPaths(path string) bool {
 	return false
 }
 
+// CheckPrefixes checks if a given hash matches the prefixes for the role
 func (r Role) CheckPrefixes(hash string) bool {
 	for _, p := range r.PathHashPrefixes {
 		if strings.HasPrefix(hash, p) {
@@ -144,6 +171,7 @@ func (r Role) CheckPrefixes(hash string) bool {
 	return false
 }
 
+// IsDelegation checks if the role is a delegation or a root role
 func (r Role) IsDelegation() bool {
 	targetsBase := fmt.Sprintf("%s/", ValidRoles[CanonicalTargetsRole])
 	return strings.HasPrefix(r.Name, targetsBase)
@@ -6,23 +6,24 @@ import (
 	"github.com/jfrazelle/go/canonical/json"
 )
 
+// SignedRoot is a fully unpacked root.json
 type SignedRoot struct {
 	Signatures []Signature
 	Signed     Root
 	Dirty      bool
 }
 
+// Root is the Signed component of a root.json
 type Root struct {
-	Type    string    `json:"_type"`
-	Version int       `json:"version"`
-	Expires time.Time `json:"expires"`
-	// These keys are public keys. We use TUFKey instead of PublicKey to
-	// support direct JSON unmarshaling.
-	Keys               map[string]*TUFKey   `json:"keys"`
+	Type               string               `json:"_type"`
+	Version            int                  `json:"version"`
+	Expires            time.Time            `json:"expires"`
+	Keys               Keys                 `json:"keys"`
 	Roles              map[string]*RootRole `json:"roles"`
 	ConsistentSnapshot bool                 `json:"consistent_snapshot"`
 }
 
+// NewRoot initializes a new SignedRoot with a set of keys, roles, and the consistent flag
 func NewRoot(keys map[string]PublicKey, roles map[string]*RootRole, consistent bool) (*SignedRoot, error) {
 	signedRoot := &SignedRoot{
 		Signatures: make([]Signature, 0),
@@ -30,32 +31,17 @@ func NewRoot(keys map[string]PublicKey, roles map[string]*RootRole, consistent b
 			Type:    TUFTypes["root"],
 			Version: 0,
 			Expires: DefaultExpires("root"),
-			Keys:               make(map[string]*TUFKey),
+			Keys:               keys,
 			Roles:              roles,
 			ConsistentSnapshot: consistent,
 		},
 		Dirty: true,
 	}
 
-	// Convert PublicKeys to TUFKey structures
-	// The Signed.Keys map needs to have *TUFKey values, since this
-	// structure gets directly unmarshalled from JSON, and it's not
-	// possible to unmarshal into an interface type. But this function
-	// takes a map with PublicKey values to avoid exposing this ugliness.
-	// The loop below converts to the TUFKey type.
-	for k, v := range keys {
-		signedRoot.Signed.Keys[k] = &TUFKey{
-			Type: v.Algorithm(),
-			Value: KeyPair{
-				Public:  v.Public(),
-				Private: nil,
-			},
-		}
-	}
-
 	return signedRoot, nil
 }
 
+// ToSigned partially serializes a SignedRoot for further signing
 func (r SignedRoot) ToSigned() (*Signed, error) {
 	s, err := json.MarshalCanonical(r.Signed)
 	if err != nil {
@@ -74,6 +60,7 @@ func (r SignedRoot) ToSigned() (*Signed, error) {
 	}, nil
 }
 
+// RootFromSigned fully unpacks a Signed object into a SignedRoot
 func RootFromSigned(s *Signed) (*SignedRoot, error) {
 	r := Root{}
 	err := json.Unmarshal(s.Signed, &r)
@@ -8,12 +8,14 @@ import (
 	"github.com/jfrazelle/go/canonical/json"
 )
 
+// SignedSnapshot is a fully unpacked snapshot.json
 type SignedSnapshot struct {
 	Signatures []Signature
 	Signed     Snapshot
 	Dirty      bool
 }
 
+// Snapshot is the Signed component of a snapshot.json
 type Snapshot struct {
 	Type    string `json:"_type"`
 	Version int    `json:"version"`
@@ -21,6 +23,8 @@ type Snapshot struct {
 	Meta Files `json:"meta"`
 }
 
+// NewSnapshot initializes a SignedSnapshot with a given top level root
+// and targets objects
 func NewSnapshot(root *Signed, targets *Signed) (*SignedSnapshot, error) {
 	logrus.Debug("generating new snapshot...")
 	targetsJSON, err := json.Marshal(targets)
@@ -59,6 +63,7 @@ func (sp *SignedSnapshot) hashForRole(role string) []byte {
 	return sp.Signed.Meta[role].Hashes["sha256"]
 }
 
+// ToSigned partially serializes a SignedSnapshot for further signing
 func (sp SignedSnapshot) ToSigned() (*Signed, error) {
 	s, err := json.MarshalCanonical(sp.Signed)
 	if err != nil {
@@ -77,11 +82,13 @@ func (sp SignedSnapshot) ToSigned() (*Signed, error) {
 	}, nil
 }
 
+// AddMeta updates a role in the snapshot with new meta
 func (sp *SignedSnapshot) AddMeta(role string, meta FileMeta) {
 	sp.Signed.Meta[role] = meta
 	sp.Dirty = true
 }
 
+// SnapshotFromSigned fully unpacks a Signed object into a SignedSnapshot
 func SnapshotFromSigned(s *Signed) (*SignedSnapshot, error) {
 	sp := Snapshot{}
 	err := json.Unmarshal(s.Signed, &sp)
@@ -7,18 +7,22 @@ import (
 	"github.com/jfrazelle/go/canonical/json"
 )
 
+// SignedTargets is a fully unpacked targets.json, or target delegation
+// json file
 type SignedTargets struct {
 	Signatures []Signature
 	Signed     Targets
 	Dirty      bool
 }
 
+// Targets is the Signed component of a targets.json or delegation json file
 type Targets struct {
 	SignedCommon
 	Targets     Files       `json:"targets"`
 	Delegations Delegations `json:"delegations,omitempty"`
 }
 
+// NewTargets initializes a new empty SignedTargets object
 func NewTargets() *SignedTargets {
 	return &SignedTargets{
 		Signatures: make([]Signature, 0),
@@ -53,7 +57,7 @@ func (t SignedTargets) GetMeta(path string) *FileMeta {
 // to the role slice on Delegations per TUF spec proposal on using
 // order to determine priority.
 func (t SignedTargets) GetDelegations(path string) []*Role {
-	roles := make([]*Role, 0)
+	var roles []*Role
 	pathHashBytes := sha256.Sum256([]byte(path))
 	pathHash := hex.EncodeToString(pathHashBytes[:])
 	for _, r := range t.Signed.Delegations.Roles {
@@ -74,15 +78,20 @@ func (t SignedTargets) GetDelegations(path string) []*Role {
 	return roles
 }
 
+// AddTarget adds or updates the meta for the given path
 func (t *SignedTargets) AddTarget(path string, meta FileMeta) {
 	t.Signed.Targets[path] = meta
 	t.Dirty = true
 }
 
+// AddDelegation will add a new delegated role with the given keys,
+// ensuring the keys either already exist, or are added to the map
+// of delegation keys
 func (t *SignedTargets) AddDelegation(role *Role, keys []*PublicKey) error {
 	return nil
 }
 
+// ToSigned partially serializes a SignedTargets for further signing
 func (t SignedTargets) ToSigned() (*Signed, error) {
 	s, err := json.MarshalCanonical(t.Signed)
 	if err != nil {
@@ -101,6 +110,7 @@ func (t SignedTargets) ToSigned() (*Signed, error) {
 	}, nil
 }
 
+// TargetsFromSigned fully unpacks a Signed object into a SignedTargets
 func TargetsFromSigned(s *Signed) (*SignedTargets, error) {
 	t := Targets{}
 	err := json.Unmarshal(s.Signed, &t)
@@ -7,12 +7,14 @@ import (
 	"github.com/jfrazelle/go/canonical/json"
 )
 
+// SignedTimestamp is a fully unpacked timestamp.json
 type SignedTimestamp struct {
 	Signatures []Signature
 	Signed     Timestamp
 	Dirty      bool
 }
 
+// Timestamp is the Signed component of a timestamp.json
 type Timestamp struct {
 	Type    string `json:"_type"`
 	Version int    `json:"version"`
@@ -20,6 +22,7 @@ type Timestamp struct {
 	Meta Files `json:"meta"`
 }
 
+// NewTimestamp initializes a timestamp with an existing snapshot
 func NewTimestamp(snapshot *Signed) (*SignedTimestamp, error) {
 	snapshotJSON, err := json.Marshal(snapshot)
 	if err != nil {
@@ -42,6 +45,8 @@ func NewTimestamp(snapshot *Signed) (*SignedTimestamp, error) {
 	}, nil
 }
 
+// ToSigned partially serializes a SignedTimestamp such that it can
+// be signed
 func (ts SignedTimestamp) ToSigned() (*Signed, error) {
 	s, err := json.MarshalCanonical(ts.Signed)
 	if err != nil {
@@ -60,6 +65,8 @@ func (ts SignedTimestamp) ToSigned() (*Signed, error) {
 	}, nil
 }
 
+// TimestampFromSigned parses a Signed object into a fully unpacked
+// SignedTimestamp
 func TimestampFromSigned(s *Signed) (*SignedTimestamp, error) {
 	ts := Timestamp{}
 	err := json.Unmarshal(s.Signed, &ts)
@@ -14,34 +14,34 @@ import (
	"github.com/jfrazelle/go/canonical/json"
)

type KeyAlgorithm string

func (k KeyAlgorithm) String() string {
	return string(k)
}

// SigAlgorithm for types of signatures
type SigAlgorithm string

func (k SigAlgorithm) String() string {
	return string(k)
}

const (
	defaultHashAlgorithm = "sha256"
const defaultHashAlgorithm = "sha256"

// Signature types
const (
	EDDSASignature       SigAlgorithm = "eddsa"
	RSAPSSSignature      SigAlgorithm = "rsapss"
	RSAPKCS1v15Signature SigAlgorithm = "rsapkcs1v15"
	ECDSASignature       SigAlgorithm = "ecdsa"
	PyCryptoSignature    SigAlgorithm = "pycrypto-pkcs#1 pss"

	ED25519Key   KeyAlgorithm = "ed25519"
	RSAKey       KeyAlgorithm = "rsa"
	RSAx509Key   KeyAlgorithm = "rsa-x509"
	ECDSAKey     KeyAlgorithm = "ecdsa"
	ECDSAx509Key KeyAlgorithm = "ecdsa-x509"
)

// Key types
const (
	ED25519Key   = "ed25519"
	RSAKey       = "rsa"
	RSAx509Key   = "rsa-x509"
	ECDSAKey     = "ecdsa"
	ECDSAx509Key = "ecdsa-x509"
)

// TUFTypes is the set of metadata types
var TUFTypes = map[string]string{
	CanonicalRootRole:    "Root",
	CanonicalTargetsRole: "Targets",

@@ -57,6 +57,7 @@ func SetTUFTypes(ts map[string]string) {
	}
}

// ValidTUFType checks if the given type is valid for the role
func ValidTUFType(typ, role string) bool {
	if ValidRole(role) {
		// All targets delegation roles must have

@@ -80,38 +81,54 @@ func ValidTUFType(typ, role string) bool {
	return false
}

// Signed is the high level, partially deserialized metadata object
// used to verify signatures before fully unpacking, or to add signatures
// before fully packing
type Signed struct {
	Signed     json.RawMessage `json:"signed"`
	Signatures []Signature     `json:"signatures"`
}

// SignedCommon contains the fields common to the Signed component of all
// TUF metadata files
type SignedCommon struct {
	Type    string    `json:"_type"`
	Expires time.Time `json:"expires"`
	Version int       `json:"version"`
}

// SignedMeta is used in server validation where we only need signatures
// and common fields
type SignedMeta struct {
	Signed     SignedCommon `json:"signed"`
	Signatures []Signature  `json:"signatures"`
}

// Signature is a signature on a piece of metadata
type Signature struct {
	KeyID     string       `json:"keyid"`
	Method    SigAlgorithm `json:"method"`
	Signature []byte       `json:"sig"`
}

// Files is the map of paths to file meta container in targets and delegations
// metadata files
type Files map[string]FileMeta

// Hashes is the map of hash type to digest created for each metadata
// and target file
type Hashes map[string][]byte

// FileMeta contains the size and hashes for a metadata or target file. Custom
// data can be optionally added.
type FileMeta struct {
	Length int64           `json:"length"`
	Hashes Hashes          `json:"hashes"`
	Custom json.RawMessage `json:"custom,omitempty"`
}

// NewFileMeta generates a FileMeta object from the reader, using the
// hash algorithms provided
func NewFileMeta(r io.Reader, hashAlgorithms ...string) (FileMeta, error) {
	if len(hashAlgorithms) == 0 {
		hashAlgorithms = []string{defaultHashAlgorithm}

@@ -141,11 +158,13 @@ func NewFileMeta(r io.Reader, hashAlgorithms ...string) (FileMeta, error) {
	return m, nil
}

// Delegations holds a tier of targets delegations
type Delegations struct {
	Keys  map[string]PublicKey `json:"keys"`
	Roles []*Role              `json:"roles"`
	Keys  Keys    `json:"keys"`
	Roles []*Role `json:"roles"`
}

// NewDelegations initializes an empty Delegations object
func NewDelegations() *Delegations {
	return &Delegations{
		Keys: make(map[string]PublicKey),

@@ -172,6 +191,7 @@ func SetDefaultExpiryTimes(times map[string]int) {
	}
}

// DefaultExpires gets the default expiry time for the given role
func DefaultExpires(role string) time.Time {
	var t time.Time
	if t, ok := defaultExpiryTimes[role]; ok {

@@ -182,6 +202,7 @@ func DefaultExpires(role string) time.Time {

type unmarshalledSignature Signature

// UnmarshalJSON does a custom unmarshalling of the signature JSON
func (s *Signature) UnmarshalJSON(data []byte) error {
	uSignature := unmarshalledSignature{}
	err := json.Unmarshal(data, &uSignature)
@@ -3,24 +3,28 @@ package keys
import (
	"errors"

	"github.com/endophage/gotuf/data"
	"github.com/docker/notary/tuf/data"
)

// Various basic key database errors
var (
	ErrWrongType        = errors.New("tuf: invalid key type")
	ErrExists           = errors.New("tuf: key already in db")
	ErrWrongID          = errors.New("tuf: key id mismatch")
	ErrInvalidKey       = errors.New("tuf: invalid key")
	ErrInvalidRole      = errors.New("tuf: invalid role")
	ErrInvalidKeyID     = errors.New("tuf: invalid key id")
	ErrInvalidThreshold = errors.New("tuf: invalid role threshold")
)

// KeyDB is an in memory database of public keys and role associations.
// It is populated when parsing TUF files and used during signature
// verification to look up the keys for a given role
type KeyDB struct {
	roles map[string]*data.Role
	keys  map[string]data.PublicKey
}

// NewDB initializes an empty KeyDB
func NewDB() *KeyDB {
	return &KeyDB{
		roles: make(map[string]*data.Role),

@@ -28,13 +32,16 @@ func NewDB() *KeyDB {
	}
}

// AddKey adds a public key to the database
func (db *KeyDB) AddKey(k data.PublicKey) {
	db.keys[k.ID()] = k
}

// AddRole adds a role to the database. Any keys associated with the
// role must have already been added.
func (db *KeyDB) AddRole(r *data.Role) error {
	if !data.ValidRole(r.Name) {
		return ErrInvalidRole
		return data.ErrInvalidRole{Role: r.Name}
	}
	if r.Threshold < 1 {
		return ErrInvalidThreshold

@@ -51,10 +58,12 @@ func (db *KeyDB) AddRole(r *data.Role) error {
	return nil
}

// GetKey pulls a key out of the database by its ID
func (db *KeyDB) GetKey(id string) data.PublicKey {
	return db.keys[id]
}

// GetRole retrieves a role based on its name
func (db *KeyDB) GetRole(name string) *data.Role {
	return db.roles[name]
}
138
vendor/src/github.com/docker/notary/tuf/signed/ed25519.go
vendored
Normal file

@@ -0,0 +1,138 @@
package signed

import (
	"crypto/rand"
	"errors"
	"io"
	"io/ioutil"

	"github.com/agl/ed25519"
	"github.com/docker/notary/trustmanager"
	"github.com/docker/notary/tuf/data"
)

type edCryptoKey struct {
	role    string
	privKey data.PrivateKey
}

// Ed25519 implements a simple in memory cryptosystem for ED25519 keys
type Ed25519 struct {
	keys map[string]edCryptoKey
}

// NewEd25519 initializes a new empty Ed25519 CryptoService that operates
// entirely in memory
func NewEd25519() *Ed25519 {
	return &Ed25519{
		make(map[string]edCryptoKey),
	}
}

// addKey allows you to add a private key
func (e *Ed25519) addKey(role string, k data.PrivateKey) {
	e.keys[k.ID()] = edCryptoKey{
		role:    role,
		privKey: k,
	}
}

// RemoveKey deletes a key from the signer
func (e *Ed25519) RemoveKey(keyID string) error {
	delete(e.keys, keyID)
	return nil
}

// ListKeys returns the list of key IDs for the role
func (e *Ed25519) ListKeys(role string) []string {
	keyIDs := make([]string, 0, len(e.keys))
	for id := range e.keys {
		keyIDs = append(keyIDs, id)
	}
	return keyIDs
}

// ListAllKeys returns the map of key IDs to role
func (e *Ed25519) ListAllKeys() map[string]string {
	keys := make(map[string]string)
	for id, edKey := range e.keys {
		keys[id] = edKey.role
	}
	return keys
}

// Sign generates an Ed25519 signature over the data
func (e *Ed25519) Sign(keyIDs []string, toSign []byte) ([]data.Signature, error) {
	signatures := make([]data.Signature, 0, len(keyIDs))
	for _, keyID := range keyIDs {
		priv := [ed25519.PrivateKeySize]byte{}
		copy(priv[:], e.keys[keyID].privKey.Private())
		sig := ed25519.Sign(&priv, toSign)
		signatures = append(signatures, data.Signature{
			KeyID:     keyID,
			Method:    data.EDDSASignature,
			Signature: sig[:],
		})
	}
	return signatures, nil
}

// Create generates a new key and returns the public part
func (e *Ed25519) Create(role, algorithm string) (data.PublicKey, error) {
	if algorithm != data.ED25519Key {
		return nil, errors.New("only ED25519 supported by this cryptoservice")
	}

	private, err := trustmanager.GenerateED25519Key(rand.Reader)
	if err != nil {
		return nil, err
	}

	e.addKey(role, private)
	return data.PublicKeyFromPrivate(private), nil
}

// PublicKeys returns a map of public keys for the ids provided, when those IDs are found
// in the store.
func (e *Ed25519) PublicKeys(keyIDs ...string) (map[string]data.PublicKey, error) {
	k := make(map[string]data.PublicKey)
	for _, keyID := range keyIDs {
		if edKey, ok := e.keys[keyID]; ok {
			k[keyID] = data.PublicKeyFromPrivate(edKey.privKey)
		}
	}
	return k, nil
}

// GetKey returns a single public key based on the ID
func (e *Ed25519) GetKey(keyID string) data.PublicKey {
	return data.PublicKeyFromPrivate(e.keys[keyID].privKey)
}

// GetPrivateKey returns a single private key based on the ID
func (e *Ed25519) GetPrivateKey(keyID string) (data.PrivateKey, string, error) {
	if k, ok := e.keys[keyID]; ok {
		return k.privKey, k.role, nil
	}
	return nil, "", trustmanager.ErrKeyNotFound{KeyID: keyID}
}

// ImportRootKey adds an Ed25519 key to the store as a root key
func (e *Ed25519) ImportRootKey(r io.Reader) error {
	raw, err := ioutil.ReadAll(r)
	if err != nil {
		return err
	}
	dataSize := ed25519.PublicKeySize + ed25519.PrivateKeySize
	if len(raw) < dataSize || len(raw) > dataSize {
		return errors.New("Wrong length of data for Ed25519 Key Import")
	}
	public := data.NewED25519PublicKey(raw[:ed25519.PublicKeySize])
	private, err := data.NewED25519PrivateKey(*public, raw[ed25519.PublicKeySize:])
	e.keys[private.ID()] = edCryptoKey{
		role:    "root",
		privKey: private,
	}
	return nil
}
72
vendor/src/github.com/docker/notary/tuf/signed/errors.go
vendored
Normal file

@@ -0,0 +1,72 @@
package signed

import (
	"fmt"
	"strings"
)

// ErrInsufficientSignatures - do not have enough signatures on a piece of
// metadata
type ErrInsufficientSignatures struct {
	Name string
}

func (e ErrInsufficientSignatures) Error() string {
	return fmt.Sprintf("tuf: insufficient signatures: %s", e.Name)
}

// ErrExpired indicates a piece of metadata has expired
type ErrExpired struct {
	Role    string
	Expired string
}

func (e ErrExpired) Error() string {
	return fmt.Sprintf("%s expired at %v", e.Role, e.Expired)
}

// ErrLowVersion indicates the piece of metadata has a version number lower than
// a version number we've already seen for this role
type ErrLowVersion struct {
	Actual  int
	Current int
}

func (e ErrLowVersion) Error() string {
	return fmt.Sprintf("version %d is lower than current version %d", e.Actual, e.Current)
}

// ErrRoleThreshold indicates we did not validate enough signatures to meet the threshold
type ErrRoleThreshold struct{}

func (e ErrRoleThreshold) Error() string {
	return "valid signatures did not meet threshold"
}

// ErrInvalidKeyType indicates the types for the key and signature it's associated with are
// mismatched. Probably a sign of malicious behaviour
type ErrInvalidKeyType struct{}

func (e ErrInvalidKeyType) Error() string {
	return "key type is not valid for signature"
}

// ErrInvalidKeyLength indicates that while we may support the cipher, the provided
// key length is not specifically supported, i.e. we support RSA, but not 1024 bit keys
type ErrInvalidKeyLength struct {
	msg string
}

func (e ErrInvalidKeyLength) Error() string {
	return fmt.Sprintf("key length is not supported: %s", e.msg)
}

// ErrNoKeys indicates no signing keys were found when trying to sign
type ErrNoKeys struct {
	keyIDs []string
}

func (e ErrNoKeys) Error() string {
	return fmt.Sprintf("could not find necessary signing keys, at least one of these keys must be available: %s",
		strings.Join(e.keyIDs, ", "))
}
@@ -1,7 +1,8 @@
package signed

import (
	"github.com/endophage/gotuf/data"
	"github.com/docker/notary/tuf/data"
	"io"
)

// SigningService defines the necessary functions to determine

@@ -20,13 +21,27 @@ type KeyService interface {
	// the private key into the appropriate signing service.
	// The role isn't currently used for anything, but it's here to support
	// future features
	Create(role string, algorithm data.KeyAlgorithm) (data.PublicKey, error)
	Create(role, algorithm string) (data.PublicKey, error)

	// GetKey retrieves the public key if present, otherwise it returns nil
	GetKey(keyID string) data.PublicKey

	// GetPrivateKey retrieves the private key and role if present, otherwise
	// it returns nil
	GetPrivateKey(keyID string) (data.PrivateKey, string, error)

	// RemoveKey deletes the specified key
	RemoveKey(keyID string) error

	// ListKeys returns a list of key IDs for the role
	ListKeys(role string) []string

	// ListAllKeys returns a map of all available signing key IDs to role
	ListAllKeys() map[string]string

	// ImportRootKey imports a root key to the highest priority keystore associated with
	// the cryptoservice
	ImportRootKey(source io.Reader) error
}

// CryptoService defines a unified Signing and Key Service as this
85
vendor/src/github.com/docker/notary/tuf/signed/sign.go
vendored
Normal file

@@ -0,0 +1,85 @@
package signed

// The Sign function is a choke point for all code paths that do signing.
// We use this fact to do key ID translation. There are 2 types of key ID:
//   - Scoped: the key ID based purely on the data that appears in the TUF
//     files. This may be wrapped by a certificate that scopes the
//     key to be used in a specific context.
//   - Canonical: the key ID based purely on the public key bytes. This is
//     used by keystores to easily identify keys that may be reused
//     in many scoped locations.
// Currently these types only differ in the context of Root Keys in Notary
// for which the root key is wrapped using an x509 certificate.

import (
	"crypto/rand"
	"fmt"

	"github.com/Sirupsen/logrus"
	"github.com/docker/notary/tuf/data"
	"github.com/docker/notary/tuf/utils"
)

// Sign takes a data.Signed and a key, calculates and adds the signature
// to the data.Signed
func Sign(service CryptoService, s *data.Signed, keys ...data.PublicKey) error {
	logrus.Debugf("sign called with %d keys", len(keys))
	signatures := make([]data.Signature, 0, len(s.Signatures)+1)
	signingKeyIDs := make(map[string]struct{})
	ids := make([]string, 0, len(keys))

	privKeys := make(map[string]data.PrivateKey)

	// Get all the private key objects related to the public keys
	for _, key := range keys {
		canonicalID, err := utils.CanonicalKeyID(key)
		ids = append(ids, canonicalID)
		if err != nil {
			continue
		}
		k, _, err := service.GetPrivateKey(canonicalID)
		if err != nil {
			continue
		}
		privKeys[key.ID()] = k
	}

	// Check to ensure we have at least one signing key
	if len(privKeys) == 0 {
		return ErrNoKeys{keyIDs: ids}
	}

	// Do signing and generate list of signatures
	for keyID, pk := range privKeys {
		sig, err := pk.Sign(rand.Reader, s.Signed, nil)
		if err != nil {
			logrus.Debugf("Failed to sign with key: %s. Reason: %v", keyID, err)
			continue
		}
		signingKeyIDs[keyID] = struct{}{}
		signatures = append(signatures, data.Signature{
			KeyID:     keyID,
			Method:    pk.SignatureAlgorithm(),
			Signature: sig[:],
		})
	}

	// Check we produced at least one signature
	if len(signatures) < 1 {
		return ErrInsufficientSignatures{
			Name: fmt.Sprintf(
				"cryptoservice failed to produce any signatures for keys with IDs: %v",
				ids),
		}
	}

	for _, sig := range s.Signatures {
		if _, ok := signingKeyIDs[sig.KeyID]; ok {
			// key is in the set of key IDs for which a signature has been created
			continue
		}
		signatures = append(signatures, sig)
	}
	s.Signatures = signatures
	return nil
}
@@ -13,7 +13,7 @@ import (
	"github.com/Sirupsen/logrus"
	"github.com/agl/ed25519"
	"github.com/endophage/gotuf/data"
	"github.com/docker/notary/tuf/data"
)

const (

@@ -50,8 +50,10 @@ func RegisterVerifier(algorithm data.SigAlgorithm, v Verifier) {
	Verifiers[algorithm] = v
}

// Ed25519Verifier used to verify Ed25519 signatures
type Ed25519Verifier struct{}

// Verify checks that an ed25519 signature is valid
func (v Ed25519Verifier) Verify(key data.PublicKey, sig []byte, msg []byte) error {
	if key.Algorithm() != data.ED25519Key {
		return ErrInvalidKeyType{}

@@ -156,9 +158,10 @@ func (v RSAPSSVerifier) Verify(key data.PublicKey, sig []byte, msg []byte) error
	return verifyPSS(pubKey, digest[:], sig)
}

// RSAPKCS1v15SVerifier checks RSA PKCS1v15 signatures
// RSAPKCS1v15Verifier checks RSA PKCS1v15 signatures
type RSAPKCS1v15Verifier struct{}

// Verify does the actual verification
func (v RSAPKCS1v15Verifier) Verify(key data.PublicKey, sig []byte, msg []byte) error {
	// will return err if keytype is not a recognized RSA type
	pubKey, err := getRSAPubKey(key)

@@ -190,7 +193,7 @@ func (v RSAPKCS1v15Verifier) Verify(key data.PublicKey, sig []byte, msg []byte)
	return nil
}

// RSAPSSVerifier checks RSASSA-PSS signatures
// RSAPyCryptoVerifier checks RSASSA-PSS signatures
type RSAPyCryptoVerifier struct{}

// Verify does the actual check.
@@ -6,11 +6,12 @@ import (
	"time"

	"github.com/Sirupsen/logrus"
	"github.com/endophage/gotuf/data"
	"github.com/endophage/gotuf/keys"
	"github.com/docker/notary/tuf/data"
	"github.com/docker/notary/tuf/keys"
	"github.com/jfrazelle/go/canonical/json"
)

// Various basic signing errors
var (
	ErrMissingKey   = errors.New("tuf: missing key")
	ErrNoSignatures = errors.New("tuf: data has no signatures")

@@ -61,11 +62,13 @@ func VerifyRoot(s *data.Signed, minVersion int, keys map[string]data.PublicKey)
	return ErrRoleThreshold{}
}

// Verify checks the signatures and metadata (expiry, version) for the signed role
// data
func Verify(s *data.Signed, role string, minVersion int, db *keys.KeyDB) error {
	if err := VerifySignatures(s, role, db); err != nil {
	if err := verifyMeta(s, role, minVersion); err != nil {
		return err
	}
	return verifyMeta(s, role, minVersion)
	return VerifySignatures(s, role, db)
}

func verifyMeta(s *data.Signed, role string, minVersion int) error {

@@ -87,10 +90,12 @@ func verifyMeta(s *data.Signed, role string, minVersion int) error {
	return nil
}

var IsExpired = func(t time.Time) bool {
// IsExpired checks if the given time passed before the present time
func IsExpired(t time.Time) bool {
	return t.Before(time.Now())
}

// VerifySignatures checks that we have sufficient valid signatures for the given role
func VerifySignatures(s *data.Signed, role string, db *keys.KeyDB) error {
	if len(s.Signatures) == 0 {
		return ErrNoSignatures

@@ -149,6 +154,7 @@ func VerifySignatures(s *data.Signed, role string, db *keys.KeyDB) error {
	return nil
}

// Unmarshal unmarshals and verifies the raw bytes for a given role's metadata
func Unmarshal(b []byte, v interface{}, role string, minVersion int, db *keys.KeyDB) error {
	s := &data.Signed{}
	if err := json.Unmarshal(b, s); err != nil {

@@ -160,6 +166,8 @@ func Unmarshal(b []byte, v interface{}, role string, minVersion int, db *keys.Ke
	return json.Unmarshal(s.Signed, v)
}

// UnmarshalTrusted unmarshals and verifies signatures only, not metadata, for a
// given role's metadata
func UnmarshalTrusted(b []byte, v interface{}, role string, db *keys.KeyDB) error {
	s := &data.Signed{}
	if err := json.Unmarshal(b, s); err != nil {
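VerifySignatures only counts signatures made by keys the role actually trusts, and a single key can satisfy the threshold at most once no matter how many signatures it produced. A toy sketch of that counting rule (`meetsThreshold` is a hypothetical helper written for illustration, not the notary function):

```go
package main

import "fmt"

// meetsThreshold reports whether the key IDs behind valid signatures
// satisfy a role's threshold. Keys outside the role are ignored, and a
// set deduplicates repeat signatures from the same key.
// (Illustrative logic only, not the notary implementation.)
func meetsThreshold(validSigKeyIDs, roleKeyIDs []string, threshold int) bool {
	roleKeys := make(map[string]bool, len(roleKeyIDs))
	for _, id := range roleKeyIDs {
		roleKeys[id] = true
	}
	seen := make(map[string]bool)
	for _, id := range validSigKeyIDs {
		if roleKeys[id] {
			seen[id] = true
		}
	}
	return len(seen) >= threshold
}

func main() {
	// Two valid signatures, but both from key "a": a threshold of 2 is not met.
	fmt.Println(meetsThreshold([]string{"a", "a"}, []string{"a", "b"}, 2)) // false
	fmt.Println(meetsThreshold([]string{"a", "b"}, []string{"a", "b"}, 2)) // true
}
```

The deduplication is the security-relevant part: without it, one compromised key could replay its own signature to fake a multi-party threshold.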
@@ -1,5 +1,7 @@
package store

// ErrMetaNotFound indicates we did not find a particular piece
// of metadata in the store
type ErrMetaNotFound struct{}

func (err ErrMetaNotFound) Error() string {
@@ -8,7 +8,8 @@ import (
	"path/filepath"
)

func NewFilesystemStore(baseDir, metaSubDir, metaExtension, targetsSubDir string) (*filesystemStore, error) {
// NewFilesystemStore creates a new store in a directory tree
func NewFilesystemStore(baseDir, metaSubDir, metaExtension, targetsSubDir string) (*FilesystemStore, error) {
	metaDir := path.Join(baseDir, metaSubDir)
	targetsDir := path.Join(baseDir, targetsSubDir)

@@ -22,7 +23,7 @@ func NewFilesystemStore(baseDir, metaSubDir, metaExtension, targetsSubDir string
		return nil, err
	}

	return &filesystemStore{
	return &FilesystemStore{
		baseDir:       baseDir,
		metaDir:       metaDir,
		metaExtension: metaExtension,

@@ -30,14 +31,16 @@ func NewFilesystemStore(baseDir, metaSubDir, metaExtension, targetsSubDir string
	}, nil
}

type filesystemStore struct {
// FilesystemStore is a store in a locally accessible directory
type FilesystemStore struct {
	baseDir       string
	metaDir       string
	metaExtension string
	targetsDir    string
}

func (f *filesystemStore) GetMeta(name string, size int64) ([]byte, error) {
// GetMeta returns the meta for the given name (a role)
func (f *FilesystemStore) GetMeta(name string, size int64) ([]byte, error) {
	fileName := fmt.Sprintf("%s.%s", name, f.metaExtension)
	path := filepath.Join(f.metaDir, fileName)
	meta, err := ioutil.ReadFile(path)

@@ -47,7 +50,8 @@ func (f *filesystemStore) GetMeta(name string, size int64) ([]byte, error) {
	return meta, nil
}

func (f *filesystemStore) SetMultiMeta(metas map[string][]byte) error {
// SetMultiMeta sets the metadata for multiple roles in one operation
func (f *FilesystemStore) SetMultiMeta(metas map[string][]byte) error {
	for role, blob := range metas {
		err := f.SetMeta(role, blob)
		if err != nil {

@@ -57,7 +61,8 @@ func (f *filesystemStore) SetMultiMeta(metas map[string][]byte) error {
	return nil
}

func (f *filesystemStore) SetMeta(name string, meta []byte) error {
// SetMeta sets the meta for a single role
func (f *FilesystemStore) SetMeta(name string, meta []byte) error {
	fileName := fmt.Sprintf("%s.%s", name, f.metaExtension)
	path := filepath.Join(f.metaDir, fileName)
	if err := ioutil.WriteFile(path, meta, 0600); err != nil {
@@ -14,6 +14,8 @@ import (
	"github.com/Sirupsen/logrus"
)

// ErrServerUnavailable indicates an error from the server. code allows us to
// populate the http error we received
type ErrServerUnavailable struct {
	code int
}

@@ -22,12 +24,9 @@ func (err ErrServerUnavailable) Error() string {
	return fmt.Sprintf("Unable to reach trust server at this time: %d.", err.code)
}

type ErrShortRead struct{}

func (err ErrShortRead) Error() string {
	return "Trust server returned incompelete response."
}

// ErrMaliciousServer indicates the server returned a response that is highly suspected
// of being malicious. i.e. it attempted to send us more data than the known size of a
// particular role metadata.
type ErrMaliciousServer struct{}

func (err ErrMaliciousServer) Error() string {

@@ -52,7 +51,8 @@ type HTTPStore struct {
	roundTrip http.RoundTripper
}

func NewHTTPStore(baseURL, metaPrefix, metaExtension, targetsPrefix, keyExtension string, roundTrip http.RoundTripper) (*HTTPStore, error) {
// NewHTTPStore initializes a new store against a URL and a number of configuration options
func NewHTTPStore(baseURL, metaPrefix, metaExtension, targetsPrefix, keyExtension string, roundTrip http.RoundTripper) (RemoteStore, error) {
	base, err := url.Parse(baseURL)
	if err != nil {
		return nil, err

@@ -105,6 +105,7 @@ func (s HTTPStore) GetMeta(name string, size int64) ([]byte, error) {
	return body, nil
}

// SetMeta uploads a piece of TUF metadata to the server
func (s HTTPStore) SetMeta(name string, blob []byte) error {
	url, err := s.buildMetaURL("")
	if err != nil {

@@ -127,6 +128,9 @@ func (s HTTPStore) SetMeta(name string, blob []byte) error {
	return nil
}

// SetMultiMeta does a single batch upload of multiple pieces of TUF metadata.
// This should be preferred for updating a remote server as it enables the server
// to remain consistent, either accepting or rejecting the complete update.
func (s HTTPStore) SetMultiMeta(metas map[string][]byte) error {
	url, err := s.buildMetaURL("")
	if err != nil {

@@ -216,6 +220,7 @@ func (s HTTPStore) GetTarget(path string) (io.ReadCloser, error) {
	return resp.Body, nil
}

// GetKey retrieves a public key from the remote server
func (s HTTPStore) GetKey(role string) ([]byte, error) {
	url, err := s.buildKeyURL(role)
	if err != nil {
@@ -3,31 +3,39 @@ package store
import (
	"io"

	"github.com/endophage/gotuf/data"
	"github.com/docker/notary/tuf/data"
)

type targetsWalkFunc func(path string, meta data.FileMeta) error

// MetadataStore must be implemented by anything that intends to interact
// with a store of TUF files
type MetadataStore interface {
	GetMeta(name string, size int64) ([]byte, error)
	SetMeta(name string, blob []byte) error
	SetMultiMeta(map[string][]byte) error
}

// PublicKeyStore must be implemented by a key service
type PublicKeyStore interface {
	GetKey(role string) ([]byte, error)
}

// [endophage] I'm of the opinion this should go away.
// TargetStore represents a collection of targets that can be walked similarly
// to walking a directory, passing a callback that receives the path and meta
// for each target
type TargetStore interface {
	WalkStagedTargets(paths []string, targetsFn targetsWalkFunc) error
}

// LocalStore represents a local TUF store
type LocalStore interface {
	MetadataStore
	TargetStore
}

// RemoteStore is similar to LocalStore with the added expectation that it should
// provide a way to download targets once located
type RemoteStore interface {
	MetadataStore
	PublicKeyStore
@@ -5,12 +5,13 @@ import (
	"fmt"
	"io"

	"github.com/endophage/gotuf/data"
	"github.com/endophage/gotuf/errors"
	"github.com/endophage/gotuf/utils"
	"github.com/docker/notary/tuf/data"
	"github.com/docker/notary/tuf/utils"
)

func NewMemoryStore(meta map[string][]byte, files map[string][]byte) *memoryStore {
// NewMemoryStore returns a MetadataStore that operates entirely in memory.
// Very useful for testing
func NewMemoryStore(meta map[string][]byte, files map[string][]byte) RemoteStore {
	if meta == nil {
		meta = make(map[string][]byte)
	}

@@ -37,9 +38,8 @@ func (m *memoryStore) GetMeta(name string, size int64) ([]byte, error) {
			return d, nil
		}
		return d[:size], nil
	} else {
		return nil, ErrMetaNotFound{}
	}
	return nil, ErrMetaNotFound{}
}

func (m *memoryStore) SetMeta(name string, meta []byte) error {

@@ -75,7 +75,7 @@ func (m *memoryStore) WalkStagedTargets(paths []string, targetsFn targetsWalkFun
	for _, path := range paths {
		dat, ok := m.files[path]
		if !ok {
			return errors.ErrFileNotFound{path}
			return ErrMetaNotFound{}
		}
		meta, err := data.NewFileMeta(bytes.NewReader(dat), "sha256")
		if err != nil {
@ -1,4 +1,4 @@
|
|||
// tuf defines the core TUF logic around manipulating a repo.
|
||||
// Package tuf defines the core TUF logic around manipulating a repo.
|
||||
package tuf
|
||||
|
||||
import (
|
||||
|
@@ -12,31 +12,35 @@ import (
	"time"

	"github.com/Sirupsen/logrus"
	"github.com/endophage/gotuf/data"
	"github.com/endophage/gotuf/errors"
	"github.com/endophage/gotuf/keys"
	"github.com/endophage/gotuf/signed"
	"github.com/endophage/gotuf/utils"
	"github.com/docker/notary/tuf/data"
	"github.com/docker/notary/tuf/keys"
	"github.com/docker/notary/tuf/signed"
	"github.com/docker/notary/tuf/utils"
)

// ErrSigVerifyFail - signature verification failed
type ErrSigVerifyFail struct{}

func (e ErrSigVerifyFail) Error() string {
	return "Error: Signature verification failed"
}

// ErrMetaExpired - metadata file has expired
type ErrMetaExpired struct{}

func (e ErrMetaExpired) Error() string {
	return "Error: Metadata has expired"
}

// ErrLocalRootExpired - the local root file is out of date
type ErrLocalRootExpired struct{}

func (e ErrLocalRootExpired) Error() string {
	return "Error: Local Root Has Expired"
}

// ErrNotLoaded - attempted to access data that has not been loaded into
// the repo
type ErrNotLoaded struct {
	role string
}
@@ -45,12 +49,12 @@ func (err ErrNotLoaded) Error() string {
	return fmt.Sprintf("%s role has not been loaded", err.role)
}

// TufRepo is an in memory representation of the TUF Repo.
// Repo is an in memory representation of the TUF Repo.
// It operates at the data.Signed level, accepting and producing
// data.Signed objects. Users of a TufRepo are responsible for
// data.Signed objects. Users of a Repo are responsible for
// fetching raw JSON and using the Set* functions to populate
// the TufRepo instance.
type TufRepo struct {
// the Repo instance.
type Repo struct {
	Root     *data.SignedRoot
	Targets  map[string]*data.SignedTargets
	Snapshot *data.SignedSnapshot
@@ -59,10 +63,10 @@ type TufRepo struct {
	cryptoService signed.CryptoService
}

// NewTufRepo initializes a TufRepo instance with a keysDB and a signer.
// If the TufRepo will only be used for reading, the signer should be nil.
func NewTufRepo(keysDB *keys.KeyDB, cryptoService signed.CryptoService) *TufRepo {
	repo := &TufRepo{
// NewRepo initializes a Repo instance with a keysDB and a signer.
// If the Repo will only be used for reading, the signer should be nil.
func NewRepo(keysDB *keys.KeyDB, cryptoService signed.CryptoService) *Repo {
	repo := &Repo{
		Targets:       make(map[string]*data.SignedTargets),
		keysDB:        keysDB,
		cryptoService: cryptoService,
@@ -71,18 +75,17 @@ func NewTufRepo(keysDB *keys.KeyDB, cryptoService signed.CryptoService) *TufRepo
}

// AddBaseKeys is used to add keys to the role in root.json
func (tr *TufRepo) AddBaseKeys(role string, keys ...data.PublicKey) error {
func (tr *Repo) AddBaseKeys(role string, keys ...data.PublicKey) error {
	if tr.Root == nil {
		return ErrNotLoaded{role: "root"}
	}
	ids := []string{}
	for _, k := range keys {
		// Store only the public portion
		pubKey := data.NewPrivateKey(k.Algorithm(), k.Public(), nil)
		tr.Root.Signed.Keys[pubKey.ID()] = pubKey
		tr.Root.Signed.Keys[k.ID()] = k
		tr.keysDB.AddKey(k)
		tr.Root.Signed.Roles[role].KeyIDs = append(tr.Root.Signed.Roles[role].KeyIDs, pubKey.ID())
		ids = append(ids, pubKey.ID())
		tr.Root.Signed.Roles[role].KeyIDs = append(tr.Root.Signed.Roles[role].KeyIDs, k.ID())
		ids = append(ids, k.ID())
	}
	r, err := data.NewRole(
		role,
@@ -101,7 +104,7 @@ func (tr *TufRepo) AddBaseKeys(role string, keys ...data.PublicKey) error {
}

// ReplaceBaseKeys is used to replace all keys for the given role with the new keys
func (tr *TufRepo) ReplaceBaseKeys(role string, keys ...data.PublicKey) error {
func (tr *Repo) ReplaceBaseKeys(role string, keys ...data.PublicKey) error {
	r := tr.keysDB.GetRole(role)
	err := tr.RemoveBaseKeys(role, r.KeyIDs...)
	if err != nil {
@@ -111,11 +114,11 @@ func (tr *TufRepo) ReplaceBaseKeys(role string, keys ...data.PublicKey) error {
}

// RemoveBaseKeys is used to remove keys from the roles in root.json
func (tr *TufRepo) RemoveBaseKeys(role string, keyIDs ...string) error {
func (tr *Repo) RemoveBaseKeys(role string, keyIDs ...string) error {
	if tr.Root == nil {
		return ErrNotLoaded{role: "root"}
	}
	keep := make([]string, 0)
	var keep []string
	toDelete := make(map[string]struct{})
	// remove keys from specified role
	for _, k := range keyIDs {
@@ -143,6 +146,12 @@ func (tr *TufRepo) RemoveBaseKeys(role string, keyIDs ...string) error {
	// remove keys no longer in use by any roles
	for k := range toDelete {
		delete(tr.Root.Signed.Keys, k)
		// remove the signing key from the cryptoservice if it
		// isn't a root key. Root keys must be kept for rotation
		// signing
		if role != data.CanonicalRootRole {
			tr.cryptoService.RemoveKey(k)
		}
	}
	tr.Root.Dirty = true
	return nil
@@ -157,22 +166,21 @@ func (tr *TufRepo) RemoveBaseKeys(role string, keyIDs ...string) error {
// An empty before string indicates to add the role to the end of the
// delegation list.
// A new, empty, targets file will be created for the new role.
func (tr *TufRepo) UpdateDelegations(role *data.Role, keys []data.Key, before string) error {
func (tr *Repo) UpdateDelegations(role *data.Role, keys []data.PublicKey, before string) error {
	if !role.IsDelegation() || !role.IsValid() {
		return errors.ErrInvalidRole{}
		return data.ErrInvalidRole{Role: role.Name}
	}
	parent := filepath.Dir(role.Name)
	p, ok := tr.Targets[parent]
	if !ok {
		return errors.ErrInvalidRole{}
		return data.ErrInvalidRole{Role: role.Name}
	}
	for _, k := range keys {
		key := data.NewPublicKey(k.Algorithm(), k.Public())
		if !utils.StrSliceContains(role.KeyIDs, key.ID()) {
			role.KeyIDs = append(role.KeyIDs, key.ID())
		if !utils.StrSliceContains(role.KeyIDs, k.ID()) {
			role.KeyIDs = append(role.KeyIDs, k.ID())
		}
		p.Signed.Delegations.Keys[key.ID()] = key
		tr.keysDB.AddKey(key)
		p.Signed.Delegations.Keys[k.ID()] = k
		tr.keysDB.AddKey(k)
	}

	i := -1
@@ -201,7 +209,7 @@ func (tr *TufRepo) UpdateDelegations(role *data.Role, keys []data.Key, before st
// data.ValidTypes to determine what the role names and filename should be. It
// also relies on the keysDB having already been populated with the keys and
// roles.
func (tr *TufRepo) InitRepo(consistent bool) error {
func (tr *Repo) InitRepo(consistent bool) error {
	if err := tr.InitRoot(consistent); err != nil {
		return err
	}
@@ -214,22 +222,22 @@ func (tr *TufRepo) InitRepo(consistent bool) error {
	return tr.InitTimestamp()
}

func (tr *TufRepo) InitRoot(consistent bool) error {
// InitRoot initializes an empty root file with the 4 core roles based
// on the current content of the key db
func (tr *Repo) InitRoot(consistent bool) error {
	rootRoles := make(map[string]*data.RootRole)
	rootKeys := make(map[string]data.PublicKey)
	for _, r := range data.ValidRoles {
		role := tr.keysDB.GetRole(r)
		if role == nil {
			return errors.ErrInvalidRole{}
			return data.ErrInvalidRole{Role: data.CanonicalRootRole}
		}
		rootRoles[r] = &role.RootRole
		for _, kid := range role.KeyIDs {
			// don't need to check if GetKey returns nil, Key presence was
			// checked by KeyDB when role was added.
			key := tr.keysDB.GetKey(kid)
			// Create new key object to doubly ensure private key is excluded
			k := data.NewPublicKey(key.Algorithm(), key.Public())
			rootKeys[kid] = k
			rootKeys[kid] = key
		}
	}
	root, err := data.NewRoot(rootKeys, rootRoles, consistent)
@@ -240,13 +248,15 @@ func (tr *TufRepo) InitRoot(consistent bool) error {
	return nil
}

func (tr *TufRepo) InitTargets() error {
// InitTargets initializes an empty targets
func (tr *Repo) InitTargets() error {
	targets := data.NewTargets()
	tr.Targets[data.ValidRoles["targets"]] = targets
	return nil
}

func (tr *TufRepo) InitSnapshot() error {
// InitSnapshot initializes a snapshot based on the current root and targets
func (tr *Repo) InitSnapshot() error {
	root, err := tr.Root.ToSigned()
	if err != nil {
		return err
@@ -263,7 +273,8 @@ func (tr *TufRepo) InitSnapshot() error {
	return nil
}

func (tr *TufRepo) InitTimestamp() error {
// InitTimestamp initializes a timestamp based on the current snapshot
func (tr *Repo) InitTimestamp() error {
	snap, err := tr.Snapshot.ToSigned()
	if err != nil {
		return err
@@ -278,9 +289,9 @@ func (tr *TufRepo) InitTimestamp() error {
}

// SetRoot parses the Signed object into a SignedRoot object, sets
// the keys and roles in the KeyDB, and sets the TufRepo.Root field
// the keys and roles in the KeyDB, and sets the Repo.Root field
// to the SignedRoot object.
func (tr *TufRepo) SetRoot(s *data.SignedRoot) error {
func (tr *Repo) SetRoot(s *data.SignedRoot) error {
	for _, key := range s.Signed.Keys {
		logrus.Debug("Adding key ", key.ID())
		tr.keysDB.AddKey(key)
@@ -307,23 +318,23 @@ func (tr *TufRepo) SetRoot(s *data.SignedRoot) error {
}

// SetTimestamp parses the Signed object into a SignedTimestamp object
// and sets the TufRepo.Timestamp field.
func (tr *TufRepo) SetTimestamp(s *data.SignedTimestamp) error {
// and sets the Repo.Timestamp field.
func (tr *Repo) SetTimestamp(s *data.SignedTimestamp) error {
	tr.Timestamp = s
	return nil
}

// SetSnapshot parses the Signed object into a SignedSnapshots object
// and sets the TufRepo.Snapshot field.
func (tr *TufRepo) SetSnapshot(s *data.SignedSnapshot) error {
// and sets the Repo.Snapshot field.
func (tr *Repo) SetSnapshot(s *data.SignedSnapshot) error {
	tr.Snapshot = s
	return nil
}

// SetTargets parses the Signed object into a SignedTargets object,
// reads the delegated roles and keys into the KeyDB, and sets the
// SignedTargets object against the role in the TufRepo.Targets map.
func (tr *TufRepo) SetTargets(role string, s *data.SignedTargets) error {
// SignedTargets object against the role in the Repo.Targets map.
func (tr *Repo) SetTargets(role string, s *data.SignedTargets) error {
	for _, k := range s.Signed.Delegations.Keys {
		tr.keysDB.AddKey(k)
	}
@@ -337,7 +348,7 @@ func (tr *TufRepo) SetTargets(role string, s *data.SignedTargets) error {
// TargetMeta returns the FileMeta entry for the given path in the
// targets file associated with the given role. This may be nil if
// the target isn't found in the targets file.
func (tr TufRepo) TargetMeta(role, path string) *data.FileMeta {
func (tr Repo) TargetMeta(role, path string) *data.FileMeta {
	if t, ok := tr.Targets[role]; ok {
		if m, ok := t.Signed.Targets[path]; ok {
			return &m
@@ -348,12 +359,12 @@ func (tr TufRepo) TargetMeta(role, path string) *data.FileMeta {

// TargetDelegations returns a slice of Roles that are valid publishers
// for the target path provided.
func (tr TufRepo) TargetDelegations(role, path, pathHex string) []*data.Role {
func (tr Repo) TargetDelegations(role, path, pathHex string) []*data.Role {
	if pathHex == "" {
		pathDigest := sha256.Sum256([]byte(path))
		pathHex = hex.EncodeToString(pathDigest[:])
	}
	roles := make([]*data.Role, 0)
	var roles []*data.Role
	if t, ok := tr.Targets[role]; ok {
		for _, r := range t.Signed.Delegations.Roles {
			if r.CheckPrefixes(pathHex) || r.CheckPaths(path) {
@@ -370,7 +381,7 @@ func (tr TufRepo) TargetDelegations(role, path, pathHex string) []*data.Role {
// runs out of locations to search.
// N.B. Multiple entries may exist in different delegated roles
// for the same target. Only the first one encountered is returned.
func (tr TufRepo) FindTarget(path string) *data.FileMeta {
func (tr Repo) FindTarget(path string) *data.FileMeta {
	pathDigest := sha256.Sum256([]byte(path))
	pathHex := hex.EncodeToString(pathDigest[:])

@@ -395,10 +406,10 @@ func (tr TufRepo) FindTarget(path string) *data.FileMeta {
// AddTargets will attempt to add the given targets specifically to
// the directed role. If the user does not have the signing keys for the role
// the function will return an error and the full slice of targets.
func (tr *TufRepo) AddTargets(role string, targets data.Files) (data.Files, error) {
func (tr *Repo) AddTargets(role string, targets data.Files) (data.Files, error) {
	t, ok := tr.Targets[role]
	if !ok {
		return targets, errors.ErrInvalidRole{role}
		return targets, data.ErrInvalidRole{Role: role}
	}
	invalid := make(data.Files)
	for path, target := range targets {
@@ -418,10 +429,11 @@ func (tr *TufRepo) AddTargets(role string, targets data.Files) (data.Files, erro
	return nil, nil
}

func (tr *TufRepo) RemoveTargets(role string, targets ...string) error {
// RemoveTargets removes the given target (paths) from the given target role (delegation)
func (tr *Repo) RemoveTargets(role string, targets ...string) error {
	t, ok := tr.Targets[role]
	if !ok {
		return errors.ErrInvalidRole{role}
		return data.ErrInvalidRole{Role: role}
	}

	for _, path := range targets {
@@ -431,7 +443,8 @@ func (tr *TufRepo) RemoveTargets(role string, targets ...string) error {
	return nil
}

func (tr *TufRepo) UpdateSnapshot(role string, s *data.Signed) error {
// UpdateSnapshot updates the FileMeta for the given role based on the Signed object
func (tr *Repo) UpdateSnapshot(role string, s *data.Signed) error {
	jsonData, err := json.Marshal(s)
	if err != nil {
		return err
@@ -445,7 +458,8 @@ func (tr *TufRepo) UpdateSnapshot(role string, s *data.Signed) error {
	return nil
}

func (tr *TufRepo) UpdateTimestamp(s *data.Signed) error {
// UpdateTimestamp updates the snapshot meta in the timestamp based on the Signed object
func (tr *Repo) UpdateTimestamp(s *data.Signed) error {
	jsonData, err := json.Marshal(s)
	if err != nil {
		return err
@@ -459,7 +473,8 @@ func (tr *TufRepo) UpdateTimestamp(s *data.Signed) error {
	return nil
}

func (tr *TufRepo) SignRoot(expires time.Time, cryptoService signed.CryptoService) (*data.Signed, error) {
// SignRoot signs the root
func (tr *Repo) SignRoot(expires time.Time) (*data.Signed, error) {
	logrus.Debug("signing root...")
	tr.Root.Signed.Expires = expires
	tr.Root.Signed.Version++
@@ -468,7 +483,7 @@ func (tr *TufRepo) SignRoot(expires time.Time, cryptoService signed.CryptoServic
	if err != nil {
		return nil, err
	}
	signed, err = tr.sign(signed, *root, cryptoService)
	signed, err = tr.sign(signed, *root)
	if err != nil {
		return nil, err
	}
@@ -476,7 +491,8 @@ func (tr *TufRepo) SignRoot(expires time.Time, cryptoService signed.CryptoServic
	return signed, nil
}

func (tr *TufRepo) SignTargets(role string, expires time.Time, cryptoService signed.CryptoService) (*data.Signed, error) {
// SignTargets signs the targets file for the given top level or delegated targets role
func (tr *Repo) SignTargets(role string, expires time.Time) (*data.Signed, error) {
	logrus.Debugf("sign targets called for role %s", role)
	tr.Targets[role].Signed.Expires = expires
	tr.Targets[role].Signed.Version++
@@ -486,7 +502,7 @@ func (tr *TufRepo) SignTargets(role string, expires time.Time, cryptoService sig
		return nil, err
	}
	targets := tr.keysDB.GetRole(role)
	signed, err = tr.sign(signed, *targets, cryptoService)
	signed, err = tr.sign(signed, *targets)
	if err != nil {
		logrus.Debug("errored signing ", role)
		return nil, err
@@ -495,7 +511,8 @@ func (tr *TufRepo) SignTargets(role string, expires time.Time, cryptoService sig
	return signed, nil
}

func (tr *TufRepo) SignSnapshot(expires time.Time, cryptoService signed.CryptoService) (*data.Signed, error) {
// SignSnapshot updates the snapshot based on the current targets and root then signs it
func (tr *Repo) SignSnapshot(expires time.Time) (*data.Signed, error) {
	logrus.Debug("signing snapshot...")
	signedRoot, err := tr.Root.ToSigned()
	if err != nil {
@@ -523,7 +540,7 @@ func (tr *TufRepo) SignSnapshot(expires time.Time, cryptoService signed.CryptoSe
		return nil, err
	}
	snapshot := tr.keysDB.GetRole(data.ValidRoles["snapshot"])
	signed, err = tr.sign(signed, *snapshot, cryptoService)
	signed, err = tr.sign(signed, *snapshot)
	if err != nil {
		return nil, err
	}
@@ -531,7 +548,8 @@ func (tr *TufRepo) SignSnapshot(expires time.Time, cryptoService signed.CryptoSe
	return signed, nil
}

func (tr *TufRepo) SignTimestamp(expires time.Time, cryptoService signed.CryptoService) (*data.Signed, error) {
// SignTimestamp updates the timestamp based on the current snapshot then signs it
func (tr *Repo) SignTimestamp(expires time.Time) (*data.Signed, error) {
	logrus.Debug("SignTimestamp")
	signedSnapshot, err := tr.Snapshot.ToSigned()
	if err != nil {
@@ -548,7 +566,7 @@ func (tr *TufRepo) SignTimestamp(expires time.Time, cryptoService signed.CryptoS
		return nil, err
	}
	timestamp := tr.keysDB.GetRole(data.ValidRoles["timestamp"])
	signed, err = tr.sign(signed, *timestamp, cryptoService)
	signed, err = tr.sign(signed, *timestamp)
	if err != nil {
		return nil, err
	}
@@ -557,7 +575,7 @@ func (tr *TufRepo) SignTimestamp(expires time.Time, cryptoService signed.CryptoS
	return signed, nil
}

func (tr TufRepo) sign(signedData *data.Signed, role data.Role, cryptoService signed.CryptoService) (*data.Signed, error) {
func (tr Repo) sign(signedData *data.Signed, role data.Role) (*data.Signed, error) {
	ks := make([]data.PublicKey, 0, len(role.KeyIDs))
	for _, kid := range role.KeyIDs {
		k := tr.keysDB.GetKey(kid)

@@ -569,10 +587,7 @@ func (tr TufRepo) sign(signedData *data.Signed, role data.Role, cryptoService si
	if len(ks) < 1 {
		return nil, keys.ErrInvalidKey
	}
	if cryptoService == nil {
		cryptoService = tr.cryptoService
	}
	err := signed.Sign(cryptoService, signedData, ks...)
	err := signed.Sign(tr.cryptoService, signedData, ks...)
	if err != nil {
		return nil, err
	}
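A recurring pattern throughout the tuf.go diff above is typed error values that carry context, such as `ErrNotLoaded` holding the role name. A minimal, self-contained sketch of that pattern (the `load` helper is hypothetical, added only to show how callers branch on the concrete error type):

```go
package main

import "fmt"

// ErrNotLoaded mirrors the typed error in the diff: it carries the role
// name so callers can report which part of the repo was missing.
type ErrNotLoaded struct {
	role string
}

func (err ErrNotLoaded) Error() string {
	return fmt.Sprintf("%s role has not been loaded", err.role)
}

// load is a hypothetical helper standing in for the guard clauses in
// AddBaseKeys/RemoveBaseKeys that return ErrNotLoaded{role: "root"}.
func load(haveRoot bool) error {
	if !haveRoot {
		return ErrNotLoaded{role: "root"}
	}
	return nil
}

func main() {
	err := load(false)
	fmt.Println(err)
	// Callers can recover the concrete type to branch on it.
	if _, ok := err.(ErrNotLoaded); ok {
		fmt.Println("repo metadata not loaded yet")
	}
}
```

Struct-typed errors like these let calling code distinguish failure modes with a type assertion instead of string matching, which is why the diff converts several plain sentinel errors into structs with fields.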
@@ -8,26 +8,33 @@ import (
	gopath "path"
	"path/filepath"

	"github.com/endophage/gotuf/data"
	"github.com/docker/notary/trustmanager"
	"github.com/docker/notary/tuf/data"
)

// ErrWrongLength indicates the length was different to that expected
var ErrWrongLength = errors.New("wrong length")

// ErrWrongHash indicates the hash was different to that expected
type ErrWrongHash struct {
	Type     string
	Expected []byte
	Actual   []byte
}

// Error implements error interface
func (e ErrWrongHash) Error() string {
	return fmt.Sprintf("wrong %s hash, expected %#x got %#x", e.Type, e.Expected, e.Actual)
}

// ErrNoCommonHash indicates the metadata did not provide any hashes this
// client recognizes
type ErrNoCommonHash struct {
	Expected data.Hashes
	Actual   data.Hashes
}

// Error implements error interface
func (e ErrNoCommonHash) Error() string {
	types := func(a data.Hashes) []string {
		t := make([]string, 0, len(a))
@@ -39,16 +46,21 @@ func (e ErrNoCommonHash) Error() string {
	return fmt.Sprintf("no common hash function, expected one of %s, got %s", types(e.Expected), types(e.Actual))
}

// ErrUnknownHashAlgorithm - client was asked to use a hash algorithm
// it is not familiar with
type ErrUnknownHashAlgorithm struct {
	Name string
}

// Error implements error interface
func (e ErrUnknownHashAlgorithm) Error() string {
	return fmt.Sprintf("unknown hash algorithm: %s", e.Name)
}

// PassphraseFunc type for func that requests a passphrase
type PassphraseFunc func(role string, confirm bool) ([]byte, error)

// FileMetaEqual checks whether 2 FileMeta objects are consistent with each other
func FileMetaEqual(actual data.FileMeta, expected data.FileMeta) error {
	if actual.Length != expected.Length {
		return ErrWrongLength
@@ -68,10 +80,13 @@ func FileMetaEqual(actual data.FileMeta, expected data.FileMeta) error {
	return nil
}

// NormalizeTarget adds a slash, if required, to the front of a target path
func NormalizeTarget(path string) string {
	return gopath.Join("/", path)
}

// HashedPaths prefixes the filename with the known hashes for the file,
// returning a list of possible consistent paths.
func HashedPaths(path string, hashes data.Hashes) []string {
	paths := make([]string, 0, len(hashes))
	for _, hash := range hashes {
@@ -80,3 +95,15 @@ func HashedPaths(path string, hashes data.Hashes) []string {
	}
	return paths
}

// CanonicalKeyID returns the ID of the public bytes version of a TUF key.
// On regular RSA/ECDSA TUF keys, this is just the key ID. On X509 RSA/ECDSA
// TUF keys, this is the key ID of the public key part of the key.
func CanonicalKeyID(k data.PublicKey) (string, error) {
	switch k.Algorithm() {
	case data.ECDSAx509Key, data.RSAx509Key:
		return trustmanager.X509PublicKeyID(k)
	default:
		return k.ID(), nil
	}
}
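The `NormalizeTarget` helper added in the hunk above leans entirely on `path.Join`: joining against `"/"` guarantees a single leading slash without doubling one that is already present, and also cleans redundant separators. A standalone sketch of exactly that one-liner:

```go
package main

import (
	"fmt"
	gopath "path"
)

// NormalizeTarget adds a slash, if required, to the front of a target
// path, as in the diff above; path.Join also collapses "//" and "..".
func NormalizeTarget(p string) string {
	return gopath.Join("/", p)
}

func main() {
	fmt.Println(NormalizeTarget("foo/bar.txt"))  // slash added
	fmt.Println(NormalizeTarget("/foo/bar.txt")) // already normalized
}
```

Note that `path.Join` (unlike `filepath.Join`) always uses forward slashes, which is the right choice here since TUF target paths are repository paths, not OS file paths.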
@@ -12,9 +12,10 @@ import (
	"os"
	"strings"

	"github.com/endophage/gotuf/data"
	"github.com/docker/notary/tuf/data"
)

// Download does a simple download from a URL
func Download(url url.URL) (*http.Response, error) {
	tr := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
@@ -23,6 +24,7 @@ func Download(url url.URL) (*http.Response, error) {
	return client.Get(url.String())
}

// Upload does a simple JSON upload to a URL
func Upload(url string, body io.Reader) (*http.Response, error) {
	tr := &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
@@ -31,6 +33,8 @@ func Upload(url string, body io.Reader) (*http.Response, error) {
	return client.Post(url, "application/json", body)
}

// ValidateTarget ensures that the data read from reader matches
// the known metadata
func ValidateTarget(r io.Reader, m *data.FileMeta) error {
	h := sha256.New()
	length, err := io.Copy(h, r)
@@ -38,7 +42,7 @@ func ValidateTarget(r io.Reader, m *data.FileMeta) error {
		return err
	}
	if length != m.Length {
		return fmt.Errorf("Size of downloaded target did not match targets entry.\nExpected: %s\nReceived: %s\n", m.Length, length)
		return fmt.Errorf("Size of downloaded target did not match targets entry.\nExpected: %d\nReceived: %d\n", m.Length, length)
	}
	hashDigest := h.Sum(nil)
	if bytes.Compare(m.Hashes["sha256"], hashDigest[:]) != 0 {
@@ -47,6 +51,7 @@ func ValidateTarget(r io.Reader, m *data.FileMeta) error {
	return nil
}

// StrSliceContains checks if the given string appears in the slice
func StrSliceContains(ss []string, s string) bool {
	for _, v := range ss {
		if v == s {
@@ -56,6 +61,8 @@ func StrSliceContains(ss []string, s string) bool {
	return false
}

// StrSliceContainsI checks if the given string appears in the slice
// in a case insensitive manner
func StrSliceContainsI(ss []string, s string) bool {
	s = strings.ToLower(s)
	for _, v := range ss {
@@ -67,19 +74,26 @@ func StrSliceContainsI(ss []string, s string) bool {
	return false
}

// FileExists returns true if a file (or dir) exists at the given path,
// false otherwise
func FileExists(path string) bool {
	_, err := os.Stat(path)
	return os.IsNotExist(err)
}

// NoopCloser is a simple Reader wrapper that does nothing when Close is
// called
type NoopCloser struct {
	io.Reader
}

// Close does nothing for a NoopCloser
func (nc *NoopCloser) Close() error {
	return nil
}

// DoHash returns the digest of d using the hashing algorithm named
// in alg
func DoHash(alg string, d []byte) []byte {
	switch alg {
	case "sha256":
@@ -1,3 +0,0 @@
/db/
*.bkp
*.swp
@@ -1,30 +0,0 @@
language: go
go:
  - 1.4
  - tip

sudo: false

before_install:
  - go get golang.org/x/tools/cmd/cover

script:
  - go test -race -cover ./...

notifications:
  irc:
    channels:
      - "chat.freenode.net#flynn"
    use_notice: true
    skip_join: true
    on_success: change
    on_failure: always
    template:
      - "%{repository}/%{branch} - %{commit}: %{message} %{build_url}"
  email:
    on_success: never
    on_failure: always

matrix:
  allow_failures:
    - go: tip
@@ -1 +0,0 @@
David Lawrence <david.lawrence@docker.com> (github: endophage)
34
vendor/src/github.com/endophage/gotuf/Makefile
vendored
@@ -1,34 +0,0 @@
# Set an output prefix, which is the local directory if not specified
PREFIX?=$(shell pwd)

vet:
	@echo "+ $@"
	@go vet ./...

fmt:
	@echo "+ $@"
	@test -z "$$(gofmt -s -l . | grep -v Godeps/_workspace/src/ | tee /dev/stderr)" || \
		echo "+ please format Go code with 'gofmt -s'"

lint:
	@echo "+ $@"
	@test -z "$$(golint ./... | grep -v Godeps/_workspace/src/ | tee /dev/stderr)"

build:
	@echo "+ $@"
	@go build -v ${GO_LDFLAGS} ./...

test:
	@echo "+ $@"
	@go test -test.short ./...

test-full:
	@echo "+ $@"
	@go test ./...

binaries: ${PREFIX}/bin/registry ${PREFIX}/bin/registry-api-descriptor-template ${PREFIX}/bin/dist
	@echo "+ $@"

clean:
	@echo "+ $@"
	@rm -rf "${PREFIX}/bin/registry" "${PREFIX}/bin/registry-api-descriptor-template"
@@ -1,130 +0,0 @@
package client

import (
	"errors"
	"fmt"
)

var (
	ErrNoRootKeys       = errors.New("tuf: no root keys found in local meta store")
	ErrInsufficientKeys = errors.New("tuf: insufficient keys to meet threshold")
)

type ErrChecksumMismatch struct {
	role string
}

func (e ErrChecksumMismatch) Error() string {
	return fmt.Sprintf("tuf: checksum for %s did not match", e.role)
}

type ErrMissingMeta struct {
	role string
}

func (e ErrMissingMeta) Error() string {
	return fmt.Sprintf("tuf: sha256 checksum required for %s", e.role)
}

type ErrMissingRemoteMetadata struct {
	Name string
}

func (e ErrMissingRemoteMetadata) Error() string {
	return fmt.Sprintf("tuf: missing remote metadata %s", e.Name)
}

type ErrDownloadFailed struct {
	File string
	Err  error
}

func (e ErrDownloadFailed) Error() string {
	return fmt.Sprintf("tuf: failed to download %s: %s", e.File, e.Err)
}

type ErrDecodeFailed struct {
	File string
	Err  error
}

func (e ErrDecodeFailed) Error() string {
	return fmt.Sprintf("tuf: failed to decode %s: %s", e.File, e.Err)
}

func isDecodeFailedWithErr(err, expected error) bool {
	e, ok := err.(ErrDecodeFailed)
	if !ok {
		return false
	}
	return e.Err == expected
}

type ErrNotFound struct {
	File string
}

func (e ErrNotFound) Error() string {
	return fmt.Sprintf("tuf: file not found: %s", e.File)
}

func IsNotFound(err error) bool {
	_, ok := err.(ErrNotFound)
	return ok
}

type ErrWrongSize struct {
	File     string
	Actual   int64
	Expected int64
}

func (e ErrWrongSize) Error() string {
	return fmt.Sprintf("tuf: unexpected file size: %s (expected %d bytes, got %d bytes)", e.File, e.Expected, e.Actual)
}

type ErrLatestSnapshot struct {
	Version int
}

func (e ErrLatestSnapshot) Error() string {
	return fmt.Sprintf("tuf: the local snapshot version (%d) is the latest", e.Version)
}

func IsLatestSnapshot(err error) bool {
	_, ok := err.(ErrLatestSnapshot)
	return ok
}

type ErrUnknownTarget struct {
	Name string
}

func (e ErrUnknownTarget) Error() string {
	return fmt.Sprintf("tuf: unknown target file: %s", e.Name)
}

type ErrMetaTooLarge struct {
	Name string
	Size int64
}

func (e ErrMetaTooLarge) Error() string {
	return fmt.Sprintf("tuf: %s size %d bytes greater than maximum", e.Name, e.Size)
}

type ErrInvalidURL struct {
	URL string
}

func (e ErrInvalidURL) Error() string {
	return fmt.Sprintf("tuf: invalid repository URL %s", e.URL)
}

type ErrCorruptedCache struct {
	file string
}

func (e ErrCorruptedCache) Error() string {
	return fmt.Sprintf("cache is corrupted: %s", e.file)
}
@@ -1,96 +0,0 @@
package data

import (
	"crypto/sha256"
	"encoding/hex"

	"github.com/Sirupsen/logrus"
	"github.com/jfrazelle/go/canonical/json"
)

type Key interface {
	ID() string
	Algorithm() KeyAlgorithm
	Public() []byte
}

type PublicKey interface {
	Key
}

type PrivateKey interface {
	Key

	Private() []byte
}

type KeyPair struct {
	Public  []byte `json:"public"`
	Private []byte `json:"private"`
}

// TUFKey is the structure used for both public and private keys in TUF.
// Normally it would make sense to use different structures for public and
// private keys, but that would change the key ID algorithm (since the canonical
// JSON would be different). This structure should normally be accessed through
// the PublicKey or PrivateKey interfaces.
type TUFKey struct {
	id    string       `json:"-"`
	Type  KeyAlgorithm `json:"keytype"`
	Value KeyPair      `json:"keyval"`
}

func NewPrivateKey(algorithm KeyAlgorithm, public, private []byte) *TUFKey {
	return &TUFKey{
		Type: algorithm,
		Value: KeyPair{
			Public:  public,
			Private: private,
		},
	}
}

func (k TUFKey) Algorithm() KeyAlgorithm {
	return k.Type
}

func (k *TUFKey) ID() string {
	if k.id == "" {
		pubK := NewPublicKey(k.Algorithm(), k.Public())
		data, err := json.MarshalCanonical(&pubK)
		if err != nil {
			logrus.Error("Error generating key ID:", err)
		}
		digest := sha256.Sum256(data)
		k.id = hex.EncodeToString(digest[:])
	}
	return k.id
}

func (k TUFKey) Public() []byte {
	return k.Value.Public
}

func (k TUFKey) Private() []byte {
	return k.Value.Private
}

func NewPublicKey(algorithm KeyAlgorithm, public []byte) PublicKey {
	return &TUFKey{
		Type: algorithm,
		Value: KeyPair{
			Public:  public,
			Private: nil,
		},
	}
}

func PublicKeyFromPrivate(pk PrivateKey) PublicKey {
	return &TUFKey{
		Type: pk.Algorithm(),
		Value: KeyPair{
			Public:  pk.Public(),
			Private: nil,
		},
	}
}
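The TUFKey comment above explains how key IDs are derived: the public half of the key is marshaled to canonical JSON and hashed with SHA-256. A minimal, self-contained sketch of that derivation follows; the `pubKey` struct shape and `examplePub` helper are illustrative, and the standard library's `encoding/json` stands in for the canonical JSON encoder the vendored code imports:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// pubKey mirrors the public fields of TUFKey; the JSON tags match keys.go.
type pubKey struct {
	Type  string `json:"keytype"`
	Value struct {
		Public  []byte `json:"public"`
		Private []byte `json:"private"`
	} `json:"keyval"`
}

// keyID follows TUFKey.ID: hex(SHA-256(JSON of the public key)).
// Stdlib JSON output is deterministic for this fixed struct shape, but the
// real code uses a canonical JSON encoder to guarantee stable bytes.
func keyID(k pubKey) string {
	data, _ := json.Marshal(&k)
	digest := sha256.Sum256(data)
	return hex.EncodeToString(digest[:])
}

func examplePub() pubKey {
	var k pubKey
	k.Type = "ed25519"
	k.Value.Public = []byte("example-public-bytes") // hypothetical key material
	return k
}

func main() {
	fmt.Println(len(keyID(examplePub()))) // a SHA-256 digest is 64 hex characters
}
```

Because the ID is a hash of the serialized public key only, a private key and its corresponding public key share the same ID, which is why `PublicKeyFromPrivate` strips the private material before use.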
@@ -1,85 +0,0 @@
package errors

import (
	"errors"
	"fmt"
	"time"
)

var ErrInitNotAllowed = errors.New("tuf: repository already initialized")

type ErrMissingMetadata struct {
	Name string
}

func (e ErrMissingMetadata) Error() string {
	return fmt.Sprintf("tuf: missing metadata %s", e.Name)
}

type ErrFileNotFound struct {
	Path string
}

func (e ErrFileNotFound) Error() string {
	return fmt.Sprintf("tuf: file not found %s", e.Path)
}

type ErrInsufficientKeys struct {
	Name string
}

func (e ErrInsufficientKeys) Error() string {
	return fmt.Sprintf("tuf: insufficient keys to sign %s", e.Name)
}

type ErrInsufficientSignatures struct {
	Name string
	Err  error
}

func (e ErrInsufficientSignatures) Error() string {
	return fmt.Sprintf("tuf: insufficient signatures for %s: %s", e.Name, e.Err)
}

type ErrInvalidRole struct {
	Role string
}

func (e ErrInvalidRole) Error() string {
	return fmt.Sprintf("tuf: invalid role %s", e.Role)
}

type ErrInvalidExpires struct {
	Expires time.Time
}

func (e ErrInvalidExpires) Error() string {
	return fmt.Sprintf("tuf: invalid expires: %s", e.Expires)
}

type ErrKeyNotFound struct {
	Role  string
	KeyID string
}

func (e ErrKeyNotFound) Error() string {
	return fmt.Sprintf(`tuf: no key with id "%s" exists for the %s role`, e.KeyID, e.Role)
}

type ErrNotEnoughKeys struct {
	Role      string
	Keys      int
	Threshold int
}

func (e ErrNotEnoughKeys) Error() string {
	return fmt.Sprintf("tuf: %s role has insufficient keys for threshold (has %d keys, threshold is %d)", e.Role, e.Keys, e.Threshold)
}

type ErrPassphraseRequired struct {
	Role string
}

func (e ErrPassphraseRequired) Error() string {
	return fmt.Sprintf("tuf: a passphrase is required to access the encrypted %s keys file", e.Role)
}
@@ -1,75 +0,0 @@
package signed

import (
	"crypto/rand"
	"errors"

	"github.com/agl/ed25519"
	"github.com/endophage/gotuf/data"
)

// Ed25519 implements a simple in-memory cryptosystem for ED25519 keys
type Ed25519 struct {
	keys map[string]data.PrivateKey
}

func NewEd25519() *Ed25519 {
	return &Ed25519{
		make(map[string]data.PrivateKey),
	}
}

// addKey allows you to add a private key
func (e *Ed25519) addKey(k data.PrivateKey) {
	e.keys[k.ID()] = k
}

func (e *Ed25519) RemoveKey(keyID string) error {
	delete(e.keys, keyID)
	return nil
}

func (e *Ed25519) Sign(keyIDs []string, toSign []byte) ([]data.Signature, error) {
	signatures := make([]data.Signature, 0, len(keyIDs))
	for _, kID := range keyIDs {
		priv := [ed25519.PrivateKeySize]byte{}
		copy(priv[:], e.keys[kID].Private())
		sig := ed25519.Sign(&priv, toSign)
		signatures = append(signatures, data.Signature{
			KeyID:     kID,
			Method:    data.EDDSASignature,
			Signature: sig[:],
		})
	}
	return signatures, nil
}

func (e *Ed25519) Create(role string, algorithm data.KeyAlgorithm) (data.PublicKey, error) {
	if algorithm != data.ED25519Key {
		return nil, errors.New("only ED25519 supported by this cryptoservice")
	}

	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return nil, err
	}
	public := data.NewPublicKey(data.ED25519Key, pub[:])
	private := data.NewPrivateKey(data.ED25519Key, pub[:], priv[:])
	e.addKey(private)
	return public, nil
}

func (e *Ed25519) PublicKeys(keyIDs ...string) (map[string]data.PublicKey, error) {
	k := make(map[string]data.PublicKey)
	for _, kID := range keyIDs {
		if key, ok := e.keys[kID]; ok {
			k[kID] = data.PublicKeyFromPrivate(key)
		}
	}
	return k, nil
}

func (e *Ed25519) GetKey(keyID string) data.PublicKey {
	return data.PublicKeyFromPrivate(e.keys[keyID])
}
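The in-memory cryptoservice above wraps agl/ed25519. The same generate, sign, and verify flow can be sketched with the standard library's `crypto/ed25519` (used here only so the example is self-contained; the vendored code predates that package):

```go
package main

import (
	"crypto/ed25519"
	"crypto/rand"
	"fmt"
)

// signAndVerify generates a fresh ED25519 key pair, signs a payload, and
// verifies the signature -- the same flow Ed25519.Create and Ed25519.Sign
// perform over TUF metadata bytes.
func signAndVerify() bool {
	pub, priv, err := ed25519.GenerateKey(rand.Reader)
	if err != nil {
		return false
	}
	msg := []byte(`{"signed":"tuf metadata"}`)
	sig := ed25519.Sign(priv, msg)
	return ed25519.Verify(pub, msg, sig)
}

func main() {
	fmt.Println(signAndVerify()) // true
}
```

Note that agl/ed25519 uses fixed-size `[64]byte` private-key arrays (hence the `copy` into `priv` in `Sign` above), while `crypto/ed25519` uses byte slices.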
@@ -1,43 +0,0 @@
package signed

import (
	"fmt"
)

type ErrExpired struct {
	Role    string
	Expired string
}

func (e ErrExpired) Error() string {
	return fmt.Sprintf("%s expired at %v", e.Role, e.Expired)
}

type ErrLowVersion struct {
	Actual  int
	Current int
}

func (e ErrLowVersion) Error() string {
	return fmt.Sprintf("version %d is lower than current version %d", e.Actual, e.Current)
}

type ErrRoleThreshold struct{}

func (e ErrRoleThreshold) Error() string {
	return "valid signatures did not meet threshold"
}

type ErrInvalidKeyType struct{}

func (e ErrInvalidKeyType) Error() string {
	return "key type is not valid for signature"
}

type ErrInvalidKeyLength struct {
	msg string
}

func (e ErrInvalidKeyLength) Error() string {
	return fmt.Sprintf("key length is not supported: %s", e.msg)
}
@@ -1,43 +0,0 @@
package signed

import (
	"fmt"
	"strings"

	"github.com/Sirupsen/logrus"
	"github.com/endophage/gotuf/data"
	"github.com/endophage/gotuf/errors"
)

// Sign takes a data.Signed and a key, calculates and adds the signature
// to the data.Signed
func Sign(service CryptoService, s *data.Signed, keys ...data.PublicKey) error {
	logrus.Debugf("sign called with %d keys", len(keys))
	signatures := make([]data.Signature, 0, len(s.Signatures)+1)
	keyIDMemb := make(map[string]struct{})
	keyIDs := make([]string, 0, len(keys))

	for _, key := range keys {
		keyIDMemb[key.ID()] = struct{}{}
		keyIDs = append(keyIDs, key.ID())
	}
	logrus.Debugf("Generated list of signing IDs: %s", strings.Join(keyIDs, ", "))
	for _, sig := range s.Signatures {
		if _, ok := keyIDMemb[sig.KeyID]; ok {
			continue
		}
		signatures = append(signatures, sig)
	}
	newSigs, err := service.Sign(keyIDs, s.Signed)
	if err != nil {
		return err
	}
	if len(newSigs) < 1 {
		return errors.ErrInsufficientSignatures{
			Name: fmt.Sprintf("cryptoservice failed to produce any signatures for keys with IDs: %s", strings.Join(keyIDs, ", ")),
			Err:  nil,
		}
	}
	logrus.Debugf("appending %d new signatures", len(newSigs))
	s.Signatures = append(signatures, newSigs...)
	return nil
}
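`Sign` above drops any existing signature whose key is about to be re-signed before appending the fresh signatures, using a set keyed by key ID (`keyIDMemb`). A standalone sketch of that pruning step; the `sigEntry` type is illustrative, not part of the vendored API:

```go
package main

import "fmt"

type sigEntry struct{ KeyID string }

// pruneSignatures keeps only signatures whose key ID is not in resigning,
// mirroring the keyIDMemb membership check in Sign.
func pruneSignatures(existing []sigEntry, resigning map[string]struct{}) []sigEntry {
	kept := make([]sigEntry, 0, len(existing))
	for _, s := range existing {
		if _, ok := resigning[s.KeyID]; ok {
			continue // this key will contribute a fresh signature instead
		}
		kept = append(kept, s)
	}
	return kept
}

func main() {
	existing := []sigEntry{{"key-a"}, {"key-b"}, {"key-c"}}
	resigning := map[string]struct{}{"key-b": {}}
	fmt.Println(len(pruneSignatures(existing, resigning))) // 2
}
```

Using a `map[string]struct{}` set keeps the check O(1) per existing signature rather than scanning the key list each time.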
@@ -1,252 +0,0 @@
package store

import (
	"database/sql"
	"encoding/hex"
	"fmt"
	"io/ioutil"
	"os"
	"path"

	logrus "github.com/Sirupsen/logrus"
	"github.com/endophage/gotuf/data"
	"github.com/endophage/gotuf/utils"
	"github.com/jfrazelle/go/canonical/json"
)

const (
	tufLoc         string = "/tmp/tuf"
	metadataSubDir string = "metadata"
)

// dbStore implements LocalStore
type dbStore struct {
	db        sql.DB
	imageName string
}

// DBStore takes a database connection and the FQDN of the image
func DBStore(db *sql.DB, imageName string) *dbStore {
	store := dbStore{
		db:        *db,
		imageName: imageName,
	}

	return &store
}

// GetMeta loads existing TUF metadata files
func (dbs *dbStore) GetMeta(name string) ([]byte, error) {
	data, err := dbs.readFile(name)
	if err != nil {
		return nil, err
	}
	return data, err
}

// SetMeta writes individual TUF metadata files
func (dbs *dbStore) SetMeta(name string, meta []byte) error {
	return dbs.writeFile(name, meta)
}

// WalkStagedTargets walks all targets in scope
func (dbs *dbStore) WalkStagedTargets(paths []string, targetsFn targetsWalkFunc) error {
	if len(paths) == 0 {
		files := dbs.loadTargets("")
		for path, meta := range files {
			if err := targetsFn(path, meta); err != nil {
				return err
			}
		}
		return nil
	}

	for _, path := range paths {
		files := dbs.loadTargets(path)
		meta, ok := files[path]
		if !ok {
			return fmt.Errorf("file not found: %s", path)
		}
		if err := targetsFn(path, meta); err != nil {
			return err
		}
	}
	return nil
}

// Commit writes a set of (possibly consistent) TUF metadata files
func (dbs *dbStore) Commit(metafiles map[string][]byte, consistent bool, hashes map[string]data.Hashes) error {
	// TODO (endophage): write meta files to cache
	return nil
}

// GetKeys returns private keys
func (dbs *dbStore) GetKeys(role string) ([]data.PrivateKey, error) {
	keys := []data.PrivateKey{}
	query := "SELECT `key` FROM `keys` WHERE `role` = ? AND `namespace` = ?;"
	tx, err := dbs.db.Begin()
	if err != nil {
		return nil, err
	}
	defer tx.Rollback()
	r, err := tx.Query(query, role, dbs.imageName)
	if err != nil {
		return nil, err
	}
	defer r.Close()
	for r.Next() {
		var jsonStr string
		key := new(data.TUFKey)
		r.Scan(&jsonStr)
		err := json.Unmarshal([]byte(jsonStr), key)
		if err != nil {
			return nil, err
		}
		keys = append(keys, key)
	}
	return keys, nil
}

// SaveKey saves a new private key
func (dbs *dbStore) SaveKey(role string, key data.PrivateKey) error {
	jsonBytes, err := json.Marshal(key)
	if err != nil {
		return fmt.Errorf("could not JSON marshal key: %v", err)
	}
	tx, err := dbs.db.Begin()
	if err != nil {
		logrus.Error(err)
		return err
	}
	_, err = tx.Exec("INSERT INTO `keys` (`namespace`, `role`, `key`) VALUES (?,?,?);", dbs.imageName, role, string(jsonBytes))
	tx.Commit()
	return err
}

// Clean removes staged targets
func (dbs *dbStore) Clean() error {
	// TODO (endophage): purge stale items from db? May just/also need a remove method
	return nil
}

// AddBlob adds an object to the store
func (dbs *dbStore) AddBlob(path string, meta data.FileMeta) {
	path = utils.NormalizeTarget(path)
	jsonbytes := []byte{}
	if meta.Custom != nil {
		jsonbytes, _ = meta.Custom.MarshalJSON()
	}

	tx, err := dbs.db.Begin()
	if err != nil {
		logrus.Error(err)
		return
	}
	_, err = tx.Exec("INSERT OR REPLACE INTO `filemeta` VALUES (?,?,?,?);", dbs.imageName, path, meta.Length, jsonbytes)
	if err != nil {
		logrus.Error(err)
	}
	tx.Commit()
	dbs.addBlobHashes(path, meta.Hashes)
}

func (dbs *dbStore) addBlobHashes(path string, hashes data.Hashes) {
	tx, err := dbs.db.Begin()
	if err != nil {
		logrus.Error(err)
		return
	}
	for alg, hash := range hashes {
		_, err := tx.Exec("INSERT OR REPLACE INTO `filehashes` VALUES (?,?,?,?);", dbs.imageName, path, alg, hex.EncodeToString(hash))
		if err != nil {
			logrus.Error(err)
		}
	}
	tx.Commit()
}

// RemoveBlob removes an object from the store
func (dbs *dbStore) RemoveBlob(path string) error {
	tx, err := dbs.db.Begin()
	if err != nil {
		logrus.Error(err)
		return err
	}
	_, err = tx.Exec("DELETE FROM `filemeta` WHERE `path`=? AND `namespace`=?", path, dbs.imageName)
	if err == nil {
		tx.Commit()
	} else {
		tx.Rollback()
	}
	return err
}

func (dbs *dbStore) loadTargets(path string) map[string]data.FileMeta {
	files := make(map[string]data.FileMeta)
	tx, err := dbs.db.Begin()
	if err != nil {
		return files
	}
	defer tx.Rollback()
	query := "SELECT `filemeta`.`path`, `size`, `alg`, `hash`, `custom` FROM `filemeta` JOIN `filehashes` ON `filemeta`.`path` = `filehashes`.`path` AND `filemeta`.`namespace` = `filehashes`.`namespace` WHERE `filemeta`.`namespace`=?"
	var r *sql.Rows
	if path != "" {
		query = fmt.Sprintf("%s %s", query, "AND `filemeta`.`path`=?")
		r, err = tx.Query(query, dbs.imageName, path)
	} else {
		r, err = tx.Query(query, dbs.imageName)
	}
	if err != nil {
		return files
	}
	defer r.Close()
	for r.Next() {
		var absPath, alg, hash string
		var size int64
		var custom []byte
		r.Scan(&absPath, &size, &alg, &hash, &custom)
		hashBytes, err := hex.DecodeString(hash)
		if err != nil {
			// Skip items with unparseable hashes as they
			// won't be valid in the targets
			logrus.Debug("Hash was not stored in hex as expected")
			continue
		}
		if file, ok := files[absPath]; ok {
			file.Hashes[alg] = hashBytes
		} else {
			file = data.FileMeta{
				Length: size,
				Hashes: data.Hashes{
					alg: hashBytes,
				},
			}
			if custom != nil {
				file.Custom = json.RawMessage(custom)
			}
			files[absPath] = file
		}
	}
	return files
}

func (dbs *dbStore) writeFile(name string, content []byte) error {
	jsonName := fmt.Sprintf("%s.json", name)
	fullPath := path.Join(tufLoc, metadataSubDir, dbs.imageName, jsonName)
	dirPath := path.Dir(fullPath)
	err := os.MkdirAll(dirPath, 0744)
	if err != nil {
		logrus.Error("error creating directory path to TUF cache")
		return err
	}

	err = ioutil.WriteFile(fullPath, content, 0744)
	if err != nil {
		logrus.Error("Error writing file")
	}
	return err
}

func (dbs *dbStore) readFile(name string) ([]byte, error) {
	jsonName := fmt.Sprintf("%s.json", name)
	fullPath := path.Join(tufLoc, metadataSubDir, dbs.imageName, jsonName)
	content, err := ioutil.ReadFile(fullPath)
	return content, err
}
1 vendor/src/github.com/miekg/pkcs11/.gitignore vendored Normal file

@@ -0,0 +1 @@
tags
27 vendor/src/github.com/miekg/pkcs11/LICENSE vendored Normal file

@@ -0,0 +1,27 @@
Copyright (c) 2013 Miek Gieben. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

    * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
    * Neither the name of Miek Gieben nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
48 vendor/src/github.com/miekg/pkcs11/README.md vendored Normal file

@@ -0,0 +1,48 @@
# PKCS#11

This is a Go implementation of the PKCS#11 API. It wraps the library closely, but uses Go idiom
where it makes sense. It has been tested with SoftHSM.

## SoftHSM

* Make it use a custom configuration file

        export SOFTHSM_CONF=$PWD/softhsm.conf

* Then use `softhsm` to init it

        softhsm --init-token --slot 0 --label test --pin 1234

* Then use `libsofthsm.so` as the pkcs11 module:

        p := pkcs11.New("/usr/lib/softhsm/libsofthsm.so")

## Examples

A skeleton program would look somewhat like this (yes, pkcs#11 is verbose):

        p := pkcs11.New("/usr/lib/softhsm/libsofthsm.so")
        p.Initialize()
        defer p.Destroy()
        defer p.Finalize()
        slots, _ := p.GetSlotList(true)
        session, _ := p.OpenSession(slots[0], pkcs11.CKF_SERIAL_SESSION|pkcs11.CKF_RW_SESSION)
        defer p.CloseSession(session)
        p.Login(session, pkcs11.CKU_USER, "1234")
        defer p.Logout(session)
        p.DigestInit(session, []*pkcs11.Mechanism{pkcs11.NewMechanism(pkcs11.CKM_SHA_1, nil)})
        hash, err := p.Digest(session, []byte("this is a string"))
        for _, d := range hash {
                fmt.Printf("%x", d)
        }
        fmt.Println()

Further examples are included in the tests.

# TODO

* Fix/double check endian stuff, see types.go NewAttribute();
* Kill C.Sizeof in that same function.
* Look at the memory copying in fast functions (sign, hash etc).
* Fix inconsistencies in naming?
* Add tests -- there are way too few
561 vendor/src/github.com/miekg/pkcs11/const.go vendored Normal file

@@ -0,0 +1,561 @@
// Copyright 2013 Miek Gieben. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package pkcs11

const (
	CKU_SO               uint = 0
	CKU_USER             uint = 1
	CKU_CONTEXT_SPECIFIC uint = 2
)

const (
	CKO_DATA              uint = 0x00000000
	CKO_CERTIFICATE       uint = 0x00000001
	CKO_PUBLIC_KEY        uint = 0x00000002
	CKO_PRIVATE_KEY       uint = 0x00000003
	CKO_SECRET_KEY        uint = 0x00000004
	CKO_HW_FEATURE        uint = 0x00000005
	CKO_DOMAIN_PARAMETERS uint = 0x00000006
	CKO_MECHANISM         uint = 0x00000007
	CKO_OTP_KEY           uint = 0x00000008
	CKO_VENDOR_DEFINED    uint = 0x80000000
)

// Generated with: awk '/#define CK[AFKMR]/{ print $2 "=" $3 }' pkcs11t.h

// All the flag (CKF_), attribute (CKA_), error code (CKR_), key type (CKK_) and
// mechanism (CKM_) constants as defined in PKCS#11.
const (
	CKF_TOKEN_PRESENT                 = 0x00000001
	CKF_REMOVABLE_DEVICE              = 0x00000002
	CKF_HW_SLOT                       = 0x00000004
	CKF_RNG                           = 0x00000001
	CKF_WRITE_PROTECTED               = 0x00000002
	CKF_LOGIN_REQUIRED                = 0x00000004
	CKF_USER_PIN_INITIALIZED          = 0x00000008
	CKF_RESTORE_KEY_NOT_NEEDED        = 0x00000020
	CKF_CLOCK_ON_TOKEN                = 0x00000040
	CKF_PROTECTED_AUTHENTICATION_PATH = 0x00000100
	CKF_DUAL_CRYPTO_OPERATIONS        = 0x00000200
	CKF_TOKEN_INITIALIZED             = 0x00000400
	CKF_SECONDARY_AUTHENTICATION      = 0x00000800
	CKF_USER_PIN_COUNT_LOW            = 0x00010000
	CKF_USER_PIN_FINAL_TRY            = 0x00020000
	CKF_USER_PIN_LOCKED               = 0x00040000
	CKF_USER_PIN_TO_BE_CHANGED        = 0x00080000
	CKF_SO_PIN_COUNT_LOW              = 0x00100000
	CKF_SO_PIN_FINAL_TRY              = 0x00200000
	CKF_SO_PIN_LOCKED                 = 0x00400000
	CKF_SO_PIN_TO_BE_CHANGED          = 0x00800000
	CKF_RW_SESSION                    = 0x00000002
	CKF_SERIAL_SESSION                = 0x00000004
	CKK_RSA                           = 0x00000000
	CKK_DSA                           = 0x00000001
	CKK_DH                            = 0x00000002
	CKK_ECDSA                         = 0x00000003
	CKK_EC                            = 0x00000003
	CKK_X9_42_DH                      = 0x00000004
	CKK_KEA                           = 0x00000005
	CKK_GENERIC_SECRET                = 0x00000010
	CKK_RC2                           = 0x00000011
	CKK_RC4                           = 0x00000012
	CKK_DES                           = 0x00000013
	CKK_DES2                          = 0x00000014
	CKK_DES3                          = 0x00000015
	CKK_CAST                          = 0x00000016
	CKK_CAST3                         = 0x00000017
	CKK_CAST5                         = 0x00000018
	CKK_CAST128                       = 0x00000018
	CKK_RC5                           = 0x00000019
	CKK_IDEA                          = 0x0000001A
	CKK_SKIPJACK                      = 0x0000001B
	CKK_BATON                         = 0x0000001C
	CKK_JUNIPER                       = 0x0000001D
	CKK_CDMF                          = 0x0000001E
	CKK_AES                           = 0x0000001F
	CKK_BLOWFISH                      = 0x00000020
	CKK_TWOFISH                       = 0x00000021
	CKK_SECURID                       = 0x00000022
	CKK_HOTP                          = 0x00000023
	CKK_ACTI                          = 0x00000024
	CKK_CAMELLIA                      = 0x00000025
	CKK_ARIA                          = 0x00000026
	CKK_VENDOR_DEFINED                = 0x80000000
	CKF_ARRAY_ATTRIBUTE               = 0x40000000
	CKA_CLASS                         = 0x00000000
	CKA_TOKEN                         = 0x00000001
	CKA_PRIVATE                       = 0x00000002
	CKA_LABEL                         = 0x00000003
	CKA_APPLICATION                   = 0x00000010
	CKA_VALUE                         = 0x00000011
	CKA_OBJECT_ID                     = 0x00000012
	CKA_CERTIFICATE_TYPE              = 0x00000080
	CKA_ISSUER                        = 0x00000081
	CKA_SERIAL_NUMBER                 = 0x00000082
	CKA_AC_ISSUER                     = 0x00000083
	CKA_OWNER                         = 0x00000084
	CKA_ATTR_TYPES                    = 0x00000085
	CKA_TRUSTED                       = 0x00000086
	CKA_CERTIFICATE_CATEGORY          = 0x00000087
	CKA_JAVA_MIDP_SECURITY_DOMAIN     = 0x00000088
	CKA_URL                           = 0x00000089
	CKA_HASH_OF_SUBJECT_PUBLIC_KEY    = 0x0000008A
	CKA_HASH_OF_ISSUER_PUBLIC_KEY     = 0x0000008B
	CKA_CHECK_VALUE                   = 0x00000090
	CKA_KEY_TYPE                      = 0x00000100
	CKA_SUBJECT                       = 0x00000101
	CKA_ID                            = 0x00000102
	CKA_SENSITIVE                     = 0x00000103
	CKA_ENCRYPT                       = 0x00000104
	CKA_DECRYPT                       = 0x00000105
	CKA_WRAP                          = 0x00000106
	CKA_UNWRAP                        = 0x00000107
	CKA_SIGN                          = 0x00000108
	CKA_SIGN_RECOVER                  = 0x00000109
	CKA_VERIFY                        = 0x0000010A
	CKA_VERIFY_RECOVER                = 0x0000010B
	CKA_DERIVE                        = 0x0000010C
	CKA_START_DATE                    = 0x00000110 // Use time.Time as a value.
	CKA_END_DATE                      = 0x00000111 // Use time.Time as a value.
	CKA_MODULUS                       = 0x00000120
	CKA_MODULUS_BITS                  = 0x00000121
	CKA_PUBLIC_EXPONENT               = 0x00000122 // Use []byte slice as a value.
	CKA_PRIVATE_EXPONENT              = 0x00000123
	CKA_PRIME_1                       = 0x00000124
	CKA_PRIME_2                       = 0x00000125
	CKA_EXPONENT_1                    = 0x00000126
	CKA_EXPONENT_2                    = 0x00000127
	CKA_COEFFICIENT                   = 0x00000128
	CKA_PRIME                         = 0x00000130
	CKA_SUBPRIME                      = 0x00000131
	CKA_BASE                          = 0x00000132
	CKA_PRIME_BITS                    = 0x00000133
	CKA_SUBPRIME_BITS                 = 0x00000134
	CKA_SUB_PRIME_BITS                = CKA_SUBPRIME_BITS
	CKA_VALUE_BITS                    = 0x00000160
	CKA_VALUE_LEN                     = 0x00000161
	CKA_EXTRACTABLE                   = 0x00000162
	CKA_LOCAL                         = 0x00000163
	CKA_NEVER_EXTRACTABLE             = 0x00000164
	CKA_ALWAYS_SENSITIVE              = 0x00000165
	CKA_KEY_GEN_MECHANISM             = 0x00000166
	CKA_MODIFIABLE                    = 0x00000170
	CKA_ECDSA_PARAMS                  = 0x00000180
	CKA_EC_PARAMS                     = 0x00000180
	CKA_EC_POINT                      = 0x00000181
	CKA_SECONDARY_AUTH                = 0x00000200
	CKA_AUTH_PIN_FLAGS                = 0x00000201
	CKA_ALWAYS_AUTHENTICATE           = 0x00000202
	CKA_WRAP_WITH_TRUSTED             = 0x00000210
	CKA_WRAP_TEMPLATE                 = (CKF_ARRAY_ATTRIBUTE | 0x00000211)
	CKA_UNWRAP_TEMPLATE               = (CKF_ARRAY_ATTRIBUTE | 0x00000212)
	CKA_OTP_FORMAT                    = 0x00000220
	CKA_OTP_LENGTH                    = 0x00000221
	CKA_OTP_TIME_INTERVAL             = 0x00000222
	CKA_OTP_USER_FRIENDLY_MODE        = 0x00000223
	CKA_OTP_CHALLENGE_REQUIREMENT     = 0x00000224
	CKA_OTP_TIME_REQUIREMENT          = 0x00000225
	CKA_OTP_COUNTER_REQUIREMENT       = 0x00000226
	CKA_OTP_PIN_REQUIREMENT           = 0x00000227
	CKA_OTP_COUNTER                   = 0x0000022E
	CKA_OTP_TIME                      = 0x0000022F
	CKA_OTP_USER_IDENTIFIER           = 0x0000022A
	CKA_OTP_SERVICE_IDENTIFIER        = 0x0000022B
	CKA_OTP_SERVICE_LOGO              = 0x0000022C
	CKA_OTP_SERVICE_LOGO_TYPE         = 0x0000022D
	CKA_HW_FEATURE_TYPE               = 0x00000300
	CKA_RESET_ON_INIT                 = 0x00000301
	CKA_HAS_RESET                     = 0x00000302
	CKA_PIXEL_X                       = 0x00000400
	CKA_PIXEL_Y                       = 0x00000401
	CKA_RESOLUTION                    = 0x00000402
	CKA_CHAR_ROWS                     = 0x00000403
	CKA_CHAR_COLUMNS                  = 0x00000404
	CKA_COLOR                         = 0x00000405
	CKA_BITS_PER_PIXEL                = 0x00000406
	CKA_CHAR_SETS                     = 0x00000480
	CKA_ENCODING_METHODS              = 0x00000481
	CKA_MIME_TYPES                    = 0x00000482
	CKA_MECHANISM_TYPE                = 0x00000500
	CKA_REQUIRED_CMS_ATTRIBUTES       = 0x00000501
	CKA_DEFAULT_CMS_ATTRIBUTES        = 0x00000502
	CKA_SUPPORTED_CMS_ATTRIBUTES      = 0x00000503
	CKA_ALLOWED_MECHANISMS            = (CKF_ARRAY_ATTRIBUTE | 0x00000600)
	CKA_VENDOR_DEFINED                = 0x80000000
	CKM_RSA_PKCS_KEY_PAIR_GEN         = 0x00000000
	CKM_RSA_PKCS                      = 0x00000001
	CKM_RSA_9796                      = 0x00000002
	CKM_RSA_X_509                     = 0x00000003
	CKM_MD2_RSA_PKCS                  = 0x00000004
	CKM_MD5_RSA_PKCS                  = 0x00000005
	CKM_SHA1_RSA_PKCS                 = 0x00000006
	CKM_RIPEMD128_RSA_PKCS            = 0x00000007
	CKM_RIPEMD160_RSA_PKCS            = 0x00000008
	CKM_RSA_PKCS_OAEP                 = 0x00000009
	CKM_RSA_X9_31_KEY_PAIR_GEN        = 0x0000000A
	CKM_RSA_X9_31                     = 0x0000000B
	CKM_SHA1_RSA_X9_31                = 0x0000000C
	CKM_RSA_PKCS_PSS                  = 0x0000000D
	CKM_SHA1_RSA_PKCS_PSS             = 0x0000000E
	CKM_DSA_KEY_PAIR_GEN              = 0x00000010
	CKM_DSA                           = 0x00000011
	CKM_DSA_SHA1                      = 0x00000012
	CKM_DH_PKCS_KEY_PAIR_GEN          = 0x00000020
	CKM_DH_PKCS_DERIVE                = 0x00000021
	CKM_X9_42_DH_KEY_PAIR_GEN         = 0x00000030
	CKM_X9_42_DH_DERIVE               = 0x00000031
	CKM_X9_42_DH_HYBRID_DERIVE        = 0x00000032
	CKM_X9_42_MQV_DERIVE              = 0x00000033
	CKM_SHA256_RSA_PKCS               = 0x00000040
	CKM_SHA384_RSA_PKCS               = 0x00000041
	CKM_SHA512_RSA_PKCS               = 0x00000042
	CKM_SHA256_RSA_PKCS_PSS           = 0x00000043
	CKM_SHA384_RSA_PKCS_PSS           = 0x00000044
	CKM_SHA512_RSA_PKCS_PSS           = 0x00000045
	CKM_SHA224_RSA_PKCS               = 0x00000046
	CKM_SHA224_RSA_PKCS_PSS           = 0x00000047
	CKM_RC2_KEY_GEN                   = 0x00000100
	CKM_RC2_ECB                       = 0x00000101
	CKM_RC2_CBC                       = 0x00000102
	CKM_RC2_MAC                       = 0x00000103
	CKM_RC2_MAC_GENERAL               = 0x00000104
	CKM_RC2_CBC_PAD                   = 0x00000105
	CKM_RC4_KEY_GEN                   = 0x00000110
	CKM_RC4                           = 0x00000111
	CKM_DES_KEY_GEN                   = 0x00000120
	CKM_DES_ECB                       = 0x00000121
	CKM_DES_CBC                       = 0x00000122
	CKM_DES_MAC                       = 0x00000123
	CKM_DES_MAC_GENERAL               = 0x00000124
	CKM_DES_CBC_PAD                   = 0x00000125
	CKM_DES2_KEY_GEN                  = 0x00000130
	CKM_DES3_KEY_GEN                  = 0x00000131
	CKM_DES3_ECB                      = 0x00000132
	CKM_DES3_CBC                      = 0x00000133
	CKM_DES3_MAC                      = 0x00000134
	CKM_DES3_MAC_GENERAL              = 0x00000135
	CKM_DES3_CBC_PAD                  = 0x00000136
	CKM_CDMF_KEY_GEN                  = 0x00000140
	CKM_CDMF_ECB                      = 0x00000141
	CKM_CDMF_CBC                      = 0x00000142
	CKM_CDMF_MAC                      = 0x00000143
	CKM_CDMF_MAC_GENERAL              = 0x00000144
	CKM_CDMF_CBC_PAD                  = 0x00000145
	CKM_DES_OFB64                     = 0x00000150
	CKM_DES_OFB8                      = 0x00000151
	CKM_DES_CFB64                     = 0x00000152
	CKM_DES_CFB8                      = 0x00000153
	CKM_MD2                           = 0x00000200
	CKM_MD2_HMAC                      = 0x00000201
	CKM_MD2_HMAC_GENERAL              = 0x00000202
	CKM_MD5                           = 0x00000210
	CKM_MD5_HMAC                      = 0x00000211
	CKM_MD5_HMAC_GENERAL              = 0x00000212
	CKM_SHA_1                         = 0x00000220
	CKM_SHA_1_HMAC                    = 0x00000221
	CKM_SHA_1_HMAC_GENERAL            = 0x00000222
	CKM_RIPEMD128                     = 0x00000230
	CKM_RIPEMD128_HMAC                = 0x00000231
	CKM_RIPEMD128_HMAC_GENERAL        = 0x00000232
	CKM_RIPEMD160                     = 0x00000240
	CKM_RIPEMD160_HMAC                = 0x00000241
	CKM_RIPEMD160_HMAC_GENERAL        = 0x00000242
	CKM_SHA256                        = 0x00000250
	CKM_SHA256_HMAC                   = 0x00000251
	CKM_SHA256_HMAC_GENERAL           = 0x00000252
	CKM_SHA224                        = 0x00000255
	CKM_SHA224_HMAC                   = 0x00000256
	CKM_SHA224_HMAC_GENERAL           = 0x00000257
	CKM_SHA384                        = 0x00000260
	CKM_SHA384_HMAC                   = 0x00000261
	CKM_SHA384_HMAC_GENERAL           = 0x00000262
	CKM_SHA512                        = 0x00000270
	CKM_SHA512_HMAC                   = 0x00000271
	CKM_SHA512_HMAC_GENERAL           = 0x00000272
	CKM_SECURID_KEY_GEN               = 0x00000280
	CKM_SECURID                       = 0x00000282
	CKM_HOTP_KEY_GEN                  = 0x00000290
	CKM_HOTP                          = 0x00000291
	CKM_ACTI                          = 0x000002A0
	CKM_ACTI_KEY_GEN                  = 0x000002A1
	CKM_CAST_KEY_GEN                  = 0x00000300
	CKM_CAST_ECB                      = 0x00000301
	CKM_CAST_CBC                      = 0x00000302
	CKM_CAST_MAC                      = 0x00000303
	CKM_CAST_MAC_GENERAL              = 0x00000304
	CKM_CAST_CBC_PAD                  = 0x00000305
	CKM_CAST3_KEY_GEN                 = 0x00000310
	CKM_CAST3_ECB                     = 0x00000311
	CKM_CAST3_CBC                     = 0x00000312
	CKM_CAST3_MAC                     = 0x00000313
	CKM_CAST3_MAC_GENERAL             = 0x00000314
	CKM_CAST3_CBC_PAD                 = 0x00000315
	CKM_CAST5_KEY_GEN                 = 0x00000320
	CKM_CAST128_KEY_GEN               = 0x00000320
	CKM_CAST5_ECB                     = 0x00000321
	CKM_CAST128_ECB                   = 0x00000321
	CKM_CAST5_CBC                     = 0x00000322
	CKM_CAST128_CBC                   = 0x00000322
	CKM_CAST5_MAC                     = 0x00000323
	CKM_CAST128_MAC                   = 0x00000323
	CKM_CAST5_MAC_GENERAL             = 0x00000324
	CKM_CAST128_MAC_GENERAL           = 0x00000324
	CKM_CAST5_CBC_PAD                 = 0x00000325
	CKM_CAST128_CBC_PAD               = 0x00000325
	CKM_RC5_KEY_GEN                   = 0x00000330
	CKM_RC5_ECB                       = 0x00000331
	CKM_RC5_CBC                       = 0x00000332
	CKM_RC5_MAC                       = 0x00000333
	CKM_RC5_MAC_GENERAL               = 0x00000334
	CKM_RC5_CBC_PAD                   = 0x00000335
	CKM_IDEA_KEY_GEN                  = 0x00000340
	CKM_IDEA_ECB                      = 0x00000341
	CKM_IDEA_CBC                      = 0x00000342
	CKM_IDEA_MAC                      = 0x00000343
	CKM_IDEA_MAC_GENERAL              = 0x00000344
	CKM_IDEA_CBC_PAD                  = 0x00000345
	CKM_GENERIC_SECRET_KEY_GEN        = 0x00000350
	CKM_CONCATENATE_BASE_AND_KEY      = 0x00000360
	CKM_CONCATENATE_BASE_AND_DATA     = 0x00000362
	CKM_CONCATENATE_DATA_AND_BASE     = 0x00000363
	CKM_XOR_BASE_AND_DATA             = 0x00000364
	CKM_EXTRACT_KEY_FROM_KEY          = 0x00000365
	CKM_SSL3_PRE_MASTER_KEY_GEN       = 0x00000370
	CKM_SSL3_MASTER_KEY_DERIVE        = 0x00000371
	CKM_SSL3_KEY_AND_MAC_DERIVE       = 0x00000372
	CKM_SSL3_MASTER_KEY_DERIVE_DH     = 0x00000373
	CKM_TLS_PRE_MASTER_KEY_GEN        = 0x00000374
	CKM_TLS_MASTER_KEY_DERIVE         = 0x00000375
	CKM_TLS_KEY_AND_MAC_DERIVE        = 0x00000376
	CKM_TLS_MASTER_KEY_DERIVE_DH      = 0x00000377
	CKM_TLS_PRF                       = 0x00000378
	CKM_SSL3_MD5_MAC                  = 0x00000380
	CKM_SSL3_SHA1_MAC                 = 0x00000381
	CKM_MD5_KEY_DERIVATION            = 0x00000390
	CKM_MD2_KEY_DERIVATION            = 0x00000391
	CKM_SHA1_KEY_DERIVATION           = 0x00000392
	CKM_SHA256_KEY_DERIVATION         = 0x00000393
	CKM_SHA384_KEY_DERIVATION         = 0x00000394
	CKM_SHA512_KEY_DERIVATION         = 0x00000395
	CKM_SHA224_KEY_DERIVATION         = 0x00000396
	CKM_PBE_MD2_DES_CBC               = 0x000003A0
	CKM_PBE_MD5_DES_CBC               = 0x000003A1
	CKM_PBE_MD5_CAST_CBC              = 0x000003A2
	CKM_PBE_MD5_CAST3_CBC             = 0x000003A3
	CKM_PBE_MD5_CAST5_CBC             = 0x000003A4
	CKM_PBE_MD5_CAST128_CBC           = 0x000003A4
	CKM_PBE_SHA1_CAST5_CBC            = 0x000003A5
	CKM_PBE_SHA1_CAST128_CBC          = 0x000003A5
	CKM_PBE_SHA1_RC4_128              = 0x000003A6
	CKM_PBE_SHA1_RC4_40               = 0x000003A7
	CKM_PBE_SHA1_DES3_EDE_CBC         = 0x000003A8
	CKM_PBE_SHA1_DES2_EDE_CBC         = 0x000003A9
	CKM_PBE_SHA1_RC2_128_CBC          = 0x000003AA
	CKM_PBE_SHA1_RC2_40_CBC           = 0x000003AB
	CKM_PKCS5_PBKD2                   = 0x000003B0
	CKM_PBA_SHA1_WITH_SHA1_HMAC       = 0x000003C0
	CKM_WTLS_PRE_MASTER_KEY_GEN       = 0x000003D0
	CKM_WTLS_MASTER_KEY_DERIVE        = 0x000003D1
	CKM_WTLS_MASTER_KEY_DERIVE_DH_ECC = 0x000003D2
CKM_WTLS_PRF = 0x000003D3
|
||||
CKM_WTLS_SERVER_KEY_AND_MAC_DERIVE = 0x000003D4
|
||||
CKM_WTLS_CLIENT_KEY_AND_MAC_DERIVE = 0x000003D5
|
||||
CKM_KEY_WRAP_LYNKS = 0x00000400
|
||||
CKM_KEY_WRAP_SET_OAEP = 0x00000401
|
||||
CKM_CMS_SIG = 0x00000500
|
||||
CKM_KIP_DERIVE = 0x00000510
|
||||
CKM_KIP_WRAP = 0x00000511
|
||||
CKM_KIP_MAC = 0x00000512
|
||||
CKM_CAMELLIA_KEY_GEN = 0x00000550
|
||||
CKM_CAMELLIA_ECB = 0x00000551
|
||||
CKM_CAMELLIA_CBC = 0x00000552
|
||||
CKM_CAMELLIA_MAC = 0x00000553
|
||||
CKM_CAMELLIA_MAC_GENERAL = 0x00000554
|
||||
CKM_CAMELLIA_CBC_PAD = 0x00000555
|
||||
CKM_CAMELLIA_ECB_ENCRYPT_DATA = 0x00000556
|
||||
CKM_CAMELLIA_CBC_ENCRYPT_DATA = 0x00000557
|
||||
CKM_CAMELLIA_CTR = 0x00000558
|
||||
CKM_ARIA_KEY_GEN = 0x00000560
|
||||
CKM_ARIA_ECB = 0x00000561
|
||||
CKM_ARIA_CBC = 0x00000562
|
||||
CKM_ARIA_MAC = 0x00000563
|
||||
CKM_ARIA_MAC_GENERAL = 0x00000564
|
||||
CKM_ARIA_CBC_PAD = 0x00000565
|
||||
CKM_ARIA_ECB_ENCRYPT_DATA = 0x00000566
|
||||
CKM_ARIA_CBC_ENCRYPT_DATA = 0x00000567
|
||||
CKM_SKIPJACK_KEY_GEN = 0x00001000
|
||||
CKM_SKIPJACK_ECB64 = 0x00001001
|
||||
CKM_SKIPJACK_CBC64 = 0x00001002
|
||||
CKM_SKIPJACK_OFB64 = 0x00001003
|
||||
CKM_SKIPJACK_CFB64 = 0x00001004
|
||||
CKM_SKIPJACK_CFB32 = 0x00001005
|
||||
CKM_SKIPJACK_CFB16 = 0x00001006
|
||||
CKM_SKIPJACK_CFB8 = 0x00001007
|
||||
CKM_SKIPJACK_WRAP = 0x00001008
|
||||
CKM_SKIPJACK_PRIVATE_WRAP = 0x00001009
|
||||
CKM_SKIPJACK_RELAYX = 0x0000100a
|
||||
CKM_KEA_KEY_PAIR_GEN = 0x00001010
|
||||
CKM_KEA_KEY_DERIVE = 0x00001011
|
||||
CKM_FORTEZZA_TIMESTAMP = 0x00001020
|
||||
CKM_BATON_KEY_GEN = 0x00001030
|
||||
CKM_BATON_ECB128 = 0x00001031
|
||||
CKM_BATON_ECB96 = 0x00001032
|
||||
CKM_BATON_CBC128 = 0x00001033
|
||||
CKM_BATON_COUNTER = 0x00001034
|
||||
CKM_BATON_SHUFFLE = 0x00001035
|
||||
CKM_BATON_WRAP = 0x00001036
|
||||
CKM_ECDSA_KEY_PAIR_GEN = 0x00001040
|
||||
CKM_EC_KEY_PAIR_GEN = 0x00001040
|
||||
CKM_ECDSA = 0x00001041
|
||||
CKM_ECDSA_SHA1 = 0x00001042
|
||||
CKM_ECDH1_DERIVE = 0x00001050
|
||||
CKM_ECDH1_COFACTOR_DERIVE = 0x00001051
|
||||
CKM_ECMQV_DERIVE = 0x00001052
|
||||
CKM_JUNIPER_KEY_GEN = 0x00001060
|
||||
CKM_JUNIPER_ECB128 = 0x00001061
|
||||
CKM_JUNIPER_CBC128 = 0x00001062
|
||||
CKM_JUNIPER_COUNTER = 0x00001063
|
||||
CKM_JUNIPER_SHUFFLE = 0x00001064
|
||||
CKM_JUNIPER_WRAP = 0x00001065
|
||||
CKM_FASTHASH = 0x00001070
|
||||
CKM_AES_KEY_GEN = 0x00001080
|
||||
CKM_AES_ECB = 0x00001081
|
||||
CKM_AES_CBC = 0x00001082
|
||||
CKM_AES_MAC = 0x00001083
|
||||
CKM_AES_MAC_GENERAL = 0x00001084
|
||||
CKM_AES_CBC_PAD = 0x00001085
|
||||
CKM_AES_CTR = 0x00001086
|
||||
CKM_BLOWFISH_KEY_GEN = 0x00001090
|
||||
CKM_BLOWFISH_CBC = 0x00001091
|
||||
CKM_TWOFISH_KEY_GEN = 0x00001092
|
||||
CKM_TWOFISH_CBC = 0x00001093
|
||||
CKM_DES_ECB_ENCRYPT_DATA = 0x00001100
|
||||
CKM_DES_CBC_ENCRYPT_DATA = 0x00001101
|
||||
CKM_DES3_ECB_ENCRYPT_DATA = 0x00001102
|
||||
CKM_DES3_CBC_ENCRYPT_DATA = 0x00001103
|
||||
CKM_AES_ECB_ENCRYPT_DATA = 0x00001104
|
||||
CKM_AES_CBC_ENCRYPT_DATA = 0x00001105
|
||||
CKM_DSA_PARAMETER_GEN = 0x00002000
|
||||
CKM_DH_PKCS_PARAMETER_GEN = 0x00002001
|
||||
CKM_X9_42_DH_PARAMETER_GEN = 0x00002002
|
||||
CKM_VENDOR_DEFINED = 0x80000000
|
||||
CKF_HW = 0x00000001
|
||||
CKF_ENCRYPT = 0x00000100
|
||||
CKF_DECRYPT = 0x00000200
|
||||
CKF_DIGEST = 0x00000400
|
||||
CKF_SIGN = 0x00000800
|
||||
CKF_SIGN_RECOVER = 0x00001000
|
||||
CKF_VERIFY = 0x00002000
|
||||
CKF_VERIFY_RECOVER = 0x00004000
|
||||
CKF_GENERATE = 0x00008000
|
||||
CKF_GENERATE_KEY_PAIR = 0x00010000
|
||||
CKF_WRAP = 0x00020000
|
||||
CKF_UNWRAP = 0x00040000
|
||||
CKF_DERIVE = 0x00080000
|
||||
CKF_EC_F_P = 0x00100000
|
||||
CKF_EC_F_2M = 0x00200000
|
||||
CKF_EC_ECPARAMETERS = 0x00400000
|
||||
CKF_EC_NAMEDCURVE = 0x00800000
|
||||
CKF_EC_UNCOMPRESS = 0x01000000
|
||||
CKF_EC_COMPRESS = 0x02000000
|
||||
CKF_EXTENSION = 0x80000000
|
||||
CKR_OK = 0x00000000
|
||||
CKR_CANCEL = 0x00000001
|
||||
CKR_HOST_MEMORY = 0x00000002
|
||||
CKR_SLOT_ID_INVALID = 0x00000003
|
||||
CKR_GENERAL_ERROR = 0x00000005
|
||||
CKR_FUNCTION_FAILED = 0x00000006
|
||||
CKR_ARGUMENTS_BAD = 0x00000007
|
||||
CKR_NO_EVENT = 0x00000008
|
||||
CKR_NEED_TO_CREATE_THREADS = 0x00000009
|
||||
CKR_CANT_LOCK = 0x0000000A
|
||||
CKR_ATTRIBUTE_READ_ONLY = 0x00000010
|
||||
CKR_ATTRIBUTE_SENSITIVE = 0x00000011
|
||||
CKR_ATTRIBUTE_TYPE_INVALID = 0x00000012
|
||||
CKR_ATTRIBUTE_VALUE_INVALID = 0x00000013
|
||||
CKR_DATA_INVALID = 0x00000020
|
||||
CKR_DATA_LEN_RANGE = 0x00000021
|
||||
CKR_DEVICE_ERROR = 0x00000030
|
||||
CKR_DEVICE_MEMORY = 0x00000031
|
||||
CKR_DEVICE_REMOVED = 0x00000032
|
||||
CKR_ENCRYPTED_DATA_INVALID = 0x00000040
|
||||
CKR_ENCRYPTED_DATA_LEN_RANGE = 0x00000041
|
||||
CKR_FUNCTION_CANCELED = 0x00000050
|
||||
CKR_FUNCTION_NOT_PARALLEL = 0x00000051
|
||||
CKR_FUNCTION_NOT_SUPPORTED = 0x00000054
|
||||
CKR_KEY_HANDLE_INVALID = 0x00000060
|
||||
CKR_KEY_SIZE_RANGE = 0x00000062
|
||||
CKR_KEY_TYPE_INCONSISTENT = 0x00000063
|
||||
CKR_KEY_NOT_NEEDED = 0x00000064
|
||||
CKR_KEY_CHANGED = 0x00000065
|
||||
CKR_KEY_NEEDED = 0x00000066
|
||||
CKR_KEY_INDIGESTIBLE = 0x00000067
|
||||
CKR_KEY_FUNCTION_NOT_PERMITTED = 0x00000068
|
||||
CKR_KEY_NOT_WRAPPABLE = 0x00000069
|
||||
CKR_KEY_UNEXTRACTABLE = 0x0000006A
|
||||
CKR_MECHANISM_INVALID = 0x00000070
|
||||
CKR_MECHANISM_PARAM_INVALID = 0x00000071
|
||||
CKR_OBJECT_HANDLE_INVALID = 0x00000082
|
||||
CKR_OPERATION_ACTIVE = 0x00000090
|
||||
CKR_OPERATION_NOT_INITIALIZED = 0x00000091
|
||||
CKR_PIN_INCORRECT = 0x000000A0
|
||||
CKR_PIN_INVALID = 0x000000A1
|
||||
CKR_PIN_LEN_RANGE = 0x000000A2
|
||||
CKR_PIN_EXPIRED = 0x000000A3
|
||||
CKR_PIN_LOCKED = 0x000000A4
|
||||
CKR_SESSION_CLOSED = 0x000000B0
|
||||
CKR_SESSION_COUNT = 0x000000B1
|
||||
CKR_SESSION_HANDLE_INVALID = 0x000000B3
|
||||
CKR_SESSION_PARALLEL_NOT_SUPPORTED = 0x000000B4
|
||||
CKR_SESSION_READ_ONLY = 0x000000B5
|
||||
CKR_SESSION_EXISTS = 0x000000B6
|
||||
CKR_SESSION_READ_ONLY_EXISTS = 0x000000B7
|
||||
CKR_SESSION_READ_WRITE_SO_EXISTS = 0x000000B8
|
||||
CKR_SIGNATURE_INVALID = 0x000000C0
|
||||
CKR_SIGNATURE_LEN_RANGE = 0x000000C1
|
||||
CKR_TEMPLATE_INCOMPLETE = 0x000000D0
|
||||
CKR_TEMPLATE_INCONSISTENT = 0x000000D1
|
||||
CKR_TOKEN_NOT_PRESENT = 0x000000E0
|
||||
CKR_TOKEN_NOT_RECOGNIZED = 0x000000E1
|
||||
CKR_TOKEN_WRITE_PROTECTED = 0x000000E2
|
||||
CKR_UNWRAPPING_KEY_HANDLE_INVALID = 0x000000F0
|
||||
CKR_UNWRAPPING_KEY_SIZE_RANGE = 0x000000F1
|
||||
CKR_UNWRAPPING_KEY_TYPE_INCONSISTENT = 0x000000F2
|
||||
CKR_USER_ALREADY_LOGGED_IN = 0x00000100
|
||||
CKR_USER_NOT_LOGGED_IN = 0x00000101
|
||||
CKR_USER_PIN_NOT_INITIALIZED = 0x00000102
|
||||
CKR_USER_TYPE_INVALID = 0x00000103
|
||||
CKR_USER_ANOTHER_ALREADY_LOGGED_IN = 0x00000104
|
||||
CKR_USER_TOO_MANY_TYPES = 0x00000105
|
||||
CKR_WRAPPED_KEY_INVALID = 0x00000110
|
||||
CKR_WRAPPED_KEY_LEN_RANGE = 0x00000112
|
||||
CKR_WRAPPING_KEY_HANDLE_INVALID = 0x00000113
|
||||
CKR_WRAPPING_KEY_SIZE_RANGE = 0x00000114
|
||||
CKR_WRAPPING_KEY_TYPE_INCONSISTENT = 0x00000115
|
||||
CKR_RANDOM_SEED_NOT_SUPPORTED = 0x00000120
|
||||
CKR_RANDOM_NO_RNG = 0x00000121
|
||||
CKR_DOMAIN_PARAMS_INVALID = 0x00000130
|
||||
CKR_BUFFER_TOO_SMALL = 0x00000150
|
||||
CKR_SAVED_STATE_INVALID = 0x00000160
|
||||
CKR_INFORMATION_SENSITIVE = 0x00000170
|
||||
CKR_STATE_UNSAVEABLE = 0x00000180
|
||||
CKR_CRYPTOKI_NOT_INITIALIZED = 0x00000190
|
||||
CKR_CRYPTOKI_ALREADY_INITIALIZED = 0x00000191
|
||||
CKR_MUTEX_BAD = 0x000001A0
|
||||
CKR_MUTEX_NOT_LOCKED = 0x000001A1
|
||||
CKR_NEW_PIN_MODE = 0x000001B0
|
||||
CKR_NEXT_OTP = 0x000001B1
|
||||
CKR_FUNCTION_REJECTED = 0x00000200
|
||||
CKR_VENDOR_DEFINED = 0x80000000
|
||||
CKF_LIBRARY_CANT_CREATE_OS_THREADS = 0x00000001
|
||||
CKF_OS_LOCKING_OK = 0x00000002
|
||||
CKF_DONT_BLOCK = 1
|
||||
CKF_NEXT_OTP = 0x00000001
|
||||
CKF_EXCLUDE_TIME = 0x00000002
|
||||
CKF_EXCLUDE_COUNTER = 0x00000004
|
||||
CKF_EXCLUDE_CHALLENGE = 0x00000008
|
||||
CKF_EXCLUDE_PIN = 0x00000010
|
||||
CKF_USER_FRIENDLY_OTP = 0x00000020
|
||||
)
|
98
vendor/src/github.com/miekg/pkcs11/error.go
vendored
Normal file
@@ -0,0 +1,98 @@
// Copyright 2013 Miek Gieben. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package pkcs11

// awk '/#define CKR_/{ print $3":\""$2"\"," }' pkcs11t.h

var strerror = map[uint]string{
	0x00000000: "CKR_OK",
	0x00000001: "CKR_CANCEL",
	0x00000002: "CKR_HOST_MEMORY",
	0x00000003: "CKR_SLOT_ID_INVALID",
	0x00000005: "CKR_GENERAL_ERROR",
	0x00000006: "CKR_FUNCTION_FAILED",
	0x00000007: "CKR_ARGUMENTS_BAD",
	0x00000008: "CKR_NO_EVENT",
	0x00000009: "CKR_NEED_TO_CREATE_THREADS",
	0x0000000A: "CKR_CANT_LOCK",
	0x00000010: "CKR_ATTRIBUTE_READ_ONLY",
	0x00000011: "CKR_ATTRIBUTE_SENSITIVE",
	0x00000012: "CKR_ATTRIBUTE_TYPE_INVALID",
	0x00000013: "CKR_ATTRIBUTE_VALUE_INVALID",
	0x00000020: "CKR_DATA_INVALID",
	0x00000021: "CKR_DATA_LEN_RANGE",
	0x00000030: "CKR_DEVICE_ERROR",
	0x00000031: "CKR_DEVICE_MEMORY",
	0x00000032: "CKR_DEVICE_REMOVED",
	0x00000040: "CKR_ENCRYPTED_DATA_INVALID",
	0x00000041: "CKR_ENCRYPTED_DATA_LEN_RANGE",
	0x00000050: "CKR_FUNCTION_CANCELED",
	0x00000051: "CKR_FUNCTION_NOT_PARALLEL",
	0x00000054: "CKR_FUNCTION_NOT_SUPPORTED",
	0x00000060: "CKR_KEY_HANDLE_INVALID",
	0x00000062: "CKR_KEY_SIZE_RANGE",
	0x00000063: "CKR_KEY_TYPE_INCONSISTENT",
	0x00000064: "CKR_KEY_NOT_NEEDED",
	0x00000065: "CKR_KEY_CHANGED",
	0x00000066: "CKR_KEY_NEEDED",
	0x00000067: "CKR_KEY_INDIGESTIBLE",
	0x00000068: "CKR_KEY_FUNCTION_NOT_PERMITTED",
	0x00000069: "CKR_KEY_NOT_WRAPPABLE",
	0x0000006A: "CKR_KEY_UNEXTRACTABLE",
	0x00000070: "CKR_MECHANISM_INVALID",
	0x00000071: "CKR_MECHANISM_PARAM_INVALID",
	0x00000082: "CKR_OBJECT_HANDLE_INVALID",
	0x00000090: "CKR_OPERATION_ACTIVE",
	0x00000091: "CKR_OPERATION_NOT_INITIALIZED",
	0x000000A0: "CKR_PIN_INCORRECT",
	0x000000A1: "CKR_PIN_INVALID",
	0x000000A2: "CKR_PIN_LEN_RANGE",
	0x000000A3: "CKR_PIN_EXPIRED",
	0x000000A4: "CKR_PIN_LOCKED",
	0x000000B0: "CKR_SESSION_CLOSED",
	0x000000B1: "CKR_SESSION_COUNT",
	0x000000B3: "CKR_SESSION_HANDLE_INVALID",
	0x000000B4: "CKR_SESSION_PARALLEL_NOT_SUPPORTED",
	0x000000B5: "CKR_SESSION_READ_ONLY",
	0x000000B6: "CKR_SESSION_EXISTS",
	0x000000B7: "CKR_SESSION_READ_ONLY_EXISTS",
	0x000000B8: "CKR_SESSION_READ_WRITE_SO_EXISTS",
	0x000000C0: "CKR_SIGNATURE_INVALID",
	0x000000C1: "CKR_SIGNATURE_LEN_RANGE",
	0x000000D0: "CKR_TEMPLATE_INCOMPLETE",
	0x000000D1: "CKR_TEMPLATE_INCONSISTENT",
	0x000000E0: "CKR_TOKEN_NOT_PRESENT",
	0x000000E1: "CKR_TOKEN_NOT_RECOGNIZED",
	0x000000E2: "CKR_TOKEN_WRITE_PROTECTED",
	0x000000F0: "CKR_UNWRAPPING_KEY_HANDLE_INVALID",
	0x000000F1: "CKR_UNWRAPPING_KEY_SIZE_RANGE",
	0x000000F2: "CKR_UNWRAPPING_KEY_TYPE_INCONSISTENT",
	0x00000100: "CKR_USER_ALREADY_LOGGED_IN",
	0x00000101: "CKR_USER_NOT_LOGGED_IN",
	0x00000102: "CKR_USER_PIN_NOT_INITIALIZED",
	0x00000103: "CKR_USER_TYPE_INVALID",
	0x00000104: "CKR_USER_ANOTHER_ALREADY_LOGGED_IN",
	0x00000105: "CKR_USER_TOO_MANY_TYPES",
	0x00000110: "CKR_WRAPPED_KEY_INVALID",
	0x00000112: "CKR_WRAPPED_KEY_LEN_RANGE",
	0x00000113: "CKR_WRAPPING_KEY_HANDLE_INVALID",
	0x00000114: "CKR_WRAPPING_KEY_SIZE_RANGE",
	0x00000115: "CKR_WRAPPING_KEY_TYPE_INCONSISTENT",
	0x00000120: "CKR_RANDOM_SEED_NOT_SUPPORTED",
	0x00000121: "CKR_RANDOM_NO_RNG",
	0x00000130: "CKR_DOMAIN_PARAMS_INVALID",
	0x00000150: "CKR_BUFFER_TOO_SMALL",
	0x00000160: "CKR_SAVED_STATE_INVALID",
	0x00000170: "CKR_INFORMATION_SENSITIVE",
	0x00000180: "CKR_STATE_UNSAVEABLE",
	0x00000190: "CKR_CRYPTOKI_NOT_INITIALIZED",
	0x00000191: "CKR_CRYPTOKI_ALREADY_INITIALIZED",
	0x000001A0: "CKR_MUTEX_BAD",
	0x000001A1: "CKR_MUTEX_NOT_LOCKED",
	0x000001B0: "CKR_NEW_PIN_MODE",
	0x000001B1: "CKR_NEXT_OTP",
	0x00000200: "CKR_FUNCTION_REJECTED",
	0x80000000: "CKR_VENDOR_DEFINED",
}
BIN
vendor/src/github.com/miekg/pkcs11/hsm.db
vendored
Normal file
Binary file not shown.
1554
vendor/src/github.com/miekg/pkcs11/pkcs11.go
vendored
Normal file
File diff suppressed because it is too large
299
vendor/src/github.com/miekg/pkcs11/pkcs11.h
vendored
Normal file
@@ -0,0 +1,299 @@
/* pkcs11.h include file for PKCS #11. */
/* $Revision: 1.2 $ */

/* License to copy and use this software is granted provided that it is
 * identified as "RSA Security Inc. PKCS #11 Cryptographic Token Interface
 * (Cryptoki)" in all material mentioning or referencing this software.

 * License is also granted to make and use derivative works provided that
 * such works are identified as "derived from the RSA Security Inc. PKCS #11
 * Cryptographic Token Interface (Cryptoki)" in all material mentioning or
 * referencing the derived work.

 * RSA Security Inc. makes no representations concerning either the
 * merchantability of this software or the suitability of this software for
 * any particular purpose. It is provided "as is" without express or implied
 * warranty of any kind.
 */

#ifndef _PKCS11_H_
#define _PKCS11_H_ 1

#ifdef __cplusplus
extern "C" {
#endif

/* Before including this file (pkcs11.h) (or pkcs11t.h by
 * itself), 6 platform-specific macros must be defined. These
 * macros are described below, and typical definitions for them
 * are also given. Be advised that these definitions can depend
 * on both the platform and the compiler used (and possibly also
 * on whether a Cryptoki library is linked statically or
 * dynamically).
 *
 * In addition to defining these 6 macros, the packing convention
 * for Cryptoki structures should be set. The Cryptoki
 * convention on packing is that structures should be 1-byte
 * aligned.
 *
 * If you're using Microsoft Developer Studio 5.0 to produce
 * Win32 stuff, this might be done by using the following
 * preprocessor directive before including pkcs11.h or pkcs11t.h:
 *
 *   #pragma pack(push, cryptoki, 1)
 *
 * and using the following preprocessor directive after including
 * pkcs11.h or pkcs11t.h:
 *
 *   #pragma pack(pop, cryptoki)
 *
 * If you're using an earlier version of Microsoft Developer
 * Studio to produce Win16 stuff, this might be done by using
 * the following preprocessor directive before including
 * pkcs11.h or pkcs11t.h:
 *
 *   #pragma pack(1)
 *
 * In a UNIX environment, you're on your own for this. You might
 * not need to do (or be able to do!) anything.
 *
 *
 * Now for the macros:
 *
 *
 * 1. CK_PTR: The indirection string for making a pointer to an
 * object. It can be used like this:
 *
 *   typedef CK_BYTE CK_PTR CK_BYTE_PTR;
 *
 * If you're using Microsoft Developer Studio 5.0 to produce
 * Win32 stuff, it might be defined by:
 *
 *   #define CK_PTR *
 *
 * If you're using an earlier version of Microsoft Developer
 * Studio to produce Win16 stuff, it might be defined by:
 *
 *   #define CK_PTR far *
 *
 * In a typical UNIX environment, it might be defined by:
 *
 *   #define CK_PTR *
 *
 *
 * 2. CK_DEFINE_FUNCTION(returnType, name): A macro which makes
 * an exportable Cryptoki library function definition out of a
 * return type and a function name. It should be used in the
 * following fashion to define the exposed Cryptoki functions in
 * a Cryptoki library:
 *
 *   CK_DEFINE_FUNCTION(CK_RV, C_Initialize)(
 *     CK_VOID_PTR pReserved
 *   )
 *   {
 *     ...
 *   }
 *
 * If you're using Microsoft Developer Studio 5.0 to define a
 * function in a Win32 Cryptoki .dll, it might be defined by:
 *
 *   #define CK_DEFINE_FUNCTION(returnType, name) \
 *     returnType __declspec(dllexport) name
 *
 * If you're using an earlier version of Microsoft Developer
 * Studio to define a function in a Win16 Cryptoki .dll, it
 * might be defined by:
 *
 *   #define CK_DEFINE_FUNCTION(returnType, name) \
 *     returnType __export _far _pascal name
 *
 * In a UNIX environment, it might be defined by:
 *
 *   #define CK_DEFINE_FUNCTION(returnType, name) \
 *     returnType name
 *
 *
 * 3. CK_DECLARE_FUNCTION(returnType, name): A macro which makes
 * an importable Cryptoki library function declaration out of a
 * return type and a function name. It should be used in the
 * following fashion:
 *
 *   extern CK_DECLARE_FUNCTION(CK_RV, C_Initialize)(
 *     CK_VOID_PTR pReserved
 *   );
 *
 * If you're using Microsoft Developer Studio 5.0 to declare a
 * function in a Win32 Cryptoki .dll, it might be defined by:
 *
 *   #define CK_DECLARE_FUNCTION(returnType, name) \
 *     returnType __declspec(dllimport) name
 *
 * If you're using an earlier version of Microsoft Developer
 * Studio to declare a function in a Win16 Cryptoki .dll, it
 * might be defined by:
 *
 *   #define CK_DECLARE_FUNCTION(returnType, name) \
 *     returnType __export _far _pascal name
 *
 * In a UNIX environment, it might be defined by:
 *
 *   #define CK_DECLARE_FUNCTION(returnType, name) \
 *     returnType name
 *
 *
 * 4. CK_DECLARE_FUNCTION_POINTER(returnType, name): A macro
 * which makes a Cryptoki API function pointer declaration or
 * function pointer type declaration out of a return type and a
 * function name. It should be used in the following fashion:
 *
 *   // Define funcPtr to be a pointer to a Cryptoki API function
 *   // taking arguments args and returning CK_RV.
 *   CK_DECLARE_FUNCTION_POINTER(CK_RV, funcPtr)(args);
 *
 * or
 *
 *   // Define funcPtrType to be the type of a pointer to a
 *   // Cryptoki API function taking arguments args and returning
 *   // CK_RV, and then define funcPtr to be a variable of type
 *   // funcPtrType.
 *   typedef CK_DECLARE_FUNCTION_POINTER(CK_RV, funcPtrType)(args);
 *   funcPtrType funcPtr;
 *
 * If you're using Microsoft Developer Studio 5.0 to access
 * functions in a Win32 Cryptoki .dll, it might be defined by:
 *
 *   #define CK_DECLARE_FUNCTION_POINTER(returnType, name) \
 *     returnType __declspec(dllimport) (* name)
 *
 * If you're using an earlier version of Microsoft Developer
 * Studio to access functions in a Win16 Cryptoki .dll, it might
 * be defined by:
 *
 *   #define CK_DECLARE_FUNCTION_POINTER(returnType, name) \
 *     returnType __export _far _pascal (* name)
 *
 * In a UNIX environment, it might be defined by:
 *
 *   #define CK_DECLARE_FUNCTION_POINTER(returnType, name) \
 *     returnType (* name)
 *
 *
 * 5. CK_CALLBACK_FUNCTION(returnType, name): A macro which makes
 * a function pointer type for an application callback out of
 * a return type for the callback and a name for the callback.
 * It should be used in the following fashion:
 *
 *   CK_CALLBACK_FUNCTION(CK_RV, myCallback)(args);
 *
 * to declare a function pointer, myCallback, to a callback
 * which takes arguments args and returns a CK_RV. It can also
 * be used like this:
 *
 *   typedef CK_CALLBACK_FUNCTION(CK_RV, myCallbackType)(args);
 *   myCallbackType myCallback;
 *
 * If you're using Microsoft Developer Studio 5.0 to do Win32
 * Cryptoki development, it might be defined by:
 *
 *   #define CK_CALLBACK_FUNCTION(returnType, name) \
 *     returnType (* name)
 *
 * If you're using an earlier version of Microsoft Developer
 * Studio to do Win16 development, it might be defined by:
 *
 *   #define CK_CALLBACK_FUNCTION(returnType, name) \
 *     returnType _far _pascal (* name)
 *
 * In a UNIX environment, it might be defined by:
 *
 *   #define CK_CALLBACK_FUNCTION(returnType, name) \
 *     returnType (* name)
 *
 *
 * 6. NULL_PTR: This macro is the value of a NULL pointer.
 *
 * In any ANSI/ISO C environment (and in many others as well),
 * this should best be defined by
 *
 *   #ifndef NULL_PTR
 *   #define NULL_PTR 0
 *   #endif
 */


/* All the various Cryptoki types and #define'd values are in the
 * file pkcs11t.h. */
#include "pkcs11t.h"

#define __PASTE(x,y) x##y


/* ==============================================================
 * Define the "extern" form of all the entry points.
 * ==============================================================
 */

#define CK_NEED_ARG_LIST 1
#define CK_PKCS11_FUNCTION_INFO(name) \
  extern CK_DECLARE_FUNCTION(CK_RV, name)

/* pkcs11f.h has all the information about the Cryptoki
 * function prototypes. */
#include "pkcs11f.h"

#undef CK_NEED_ARG_LIST
#undef CK_PKCS11_FUNCTION_INFO


/* ==============================================================
 * Define the typedef form of all the entry points. That is, for
 * each Cryptoki function C_XXX, define a type CK_C_XXX which is
 * a pointer to that kind of function.
 * ==============================================================
 */

#define CK_NEED_ARG_LIST 1
#define CK_PKCS11_FUNCTION_INFO(name) \
  typedef CK_DECLARE_FUNCTION_POINTER(CK_RV, __PASTE(CK_,name))

/* pkcs11f.h has all the information about the Cryptoki
 * function prototypes. */
#include "pkcs11f.h"

#undef CK_NEED_ARG_LIST
#undef CK_PKCS11_FUNCTION_INFO


/* ==============================================================
 * Define structured vector of entry points. A CK_FUNCTION_LIST
 * contains a CK_VERSION indicating a library's Cryptoki version
 * and then a whole slew of function pointers to the routines in
 * the library. This type was declared, but not defined, in
 * pkcs11t.h.
 * ==============================================================
 */

#define CK_PKCS11_FUNCTION_INFO(name) \
  __PASTE(CK_,name) name;

struct CK_FUNCTION_LIST {

  CK_VERSION version; /* Cryptoki version */

  /* Pile all the function pointers into the CK_FUNCTION_LIST. */
  /* pkcs11f.h has all the information about the Cryptoki
   * function prototypes. */
#include "pkcs11f.h"

};

#undef CK_PKCS11_FUNCTION_INFO


#undef __PASTE

#ifdef __cplusplus
}
#endif

#endif
910
vendor/src/github.com/miekg/pkcs11/pkcs11f.h
vendored
Normal file
@@ -0,0 +1,910 @@
/* pkcs11f.h include file for PKCS #11. */
|
||||
/* $Revision: 1.2 $ */
|
||||
|
||||
/* License to copy and use this software is granted provided that it is
|
||||
* identified as "RSA Security Inc. PKCS #11 Cryptographic Token Interface
|
||||
* (Cryptoki)" in all material mentioning or referencing this software.
|
||||
|
||||
* License is also granted to make and use derivative works provided that
|
||||
* such works are identified as "derived from the RSA Security Inc. PKCS #11
|
||||
* Cryptographic Token Interface (Cryptoki)" in all material mentioning or
|
||||
* referencing the derived work.
|
||||
|
||||
* RSA Security Inc. makes no representations concerning either the
|
||||
* merchantability of this software or the suitability of this software for
|
||||
* any particular purpose. It is provided "as is" without express or implied
|
||||
* warranty of any kind.
|
||||
*/
|
||||
|
||||
/* This header file contains pretty much everything about all the */
|
||||
/* Cryptoki function prototypes. Because this information is */
|
||||
/* used for more than just declaring function prototypes, the */
|
||||
/* order of the functions appearing herein is important, and */
|
||||
/* should not be altered. */
|
||||
|
||||
/* General-purpose */
|
||||
|
||||
/* C_Initialize initializes the Cryptoki library. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_Initialize)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
(
|
||||
CK_VOID_PTR pInitArgs /* if this is not NULL_PTR, it gets
|
||||
* cast to CK_C_INITIALIZE_ARGS_PTR
|
||||
* and dereferenced */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
/* C_Finalize indicates that an application is done with the
|
||||
* Cryptoki library. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_Finalize)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
(
|
||||
CK_VOID_PTR pReserved /* reserved. Should be NULL_PTR */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
/* C_GetInfo returns general information about Cryptoki. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_GetInfo)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
(
|
||||
CK_INFO_PTR pInfo /* location that receives information */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
/* C_GetFunctionList returns the function list. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_GetFunctionList)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
(
|
||||
CK_FUNCTION_LIST_PTR_PTR ppFunctionList /* receives pointer to
|
||||
* function list */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
|
||||
/* Slot and token management */
|
||||
|
||||
/* C_GetSlotList obtains a list of slots in the system. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_GetSlotList)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
(
|
||||
CK_BBOOL tokenPresent, /* only slots with tokens? */
|
||||
CK_SLOT_ID_PTR pSlotList, /* receives array of slot IDs */
|
||||
CK_ULONG_PTR pulCount /* receives number of slots */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
/* C_GetSlotInfo obtains information about a particular slot in
|
||||
* the system. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_GetSlotInfo)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
(
|
||||
CK_SLOT_ID slotID, /* the ID of the slot */
|
||||
CK_SLOT_INFO_PTR pInfo /* receives the slot information */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
/* C_GetTokenInfo obtains information about a particular token
|
||||
* in the system. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_GetTokenInfo)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
(
|
||||
CK_SLOT_ID slotID, /* ID of the token's slot */
|
||||
CK_TOKEN_INFO_PTR pInfo /* receives the token information */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
/* C_GetMechanismList obtains a list of mechanism types
|
||||
* supported by a token. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_GetMechanismList)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
(
|
||||
CK_SLOT_ID slotID, /* ID of token's slot */
|
||||
CK_MECHANISM_TYPE_PTR pMechanismList, /* gets mech. array */
|
||||
CK_ULONG_PTR pulCount /* gets # of mechs. */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
/* C_GetMechanismInfo obtains information about a particular
|
||||
* mechanism possibly supported by a token. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_GetMechanismInfo)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
(
|
||||
CK_SLOT_ID slotID, /* ID of the token's slot */
|
||||
CK_MECHANISM_TYPE type, /* type of mechanism */
|
||||
CK_MECHANISM_INFO_PTR pInfo /* receives mechanism info */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
/* C_InitToken initializes a token. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_InitToken)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
/* pLabel changed from CK_CHAR_PTR to CK_UTF8CHAR_PTR for v2.10 */
|
||||
(
|
||||
CK_SLOT_ID slotID, /* ID of the token's slot */
|
||||
CK_UTF8CHAR_PTR pPin, /* the SO's initial PIN */
|
||||
CK_ULONG ulPinLen, /* length in bytes of the PIN */
|
||||
CK_UTF8CHAR_PTR pLabel /* 32-byte token label (blank padded) */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
/* C_InitPIN initializes the normal user's PIN. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_InitPIN)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
(
|
||||
CK_SESSION_HANDLE hSession, /* the session's handle */
|
||||
CK_UTF8CHAR_PTR pPin, /* the normal user's PIN */
|
||||
CK_ULONG ulPinLen /* length in bytes of the PIN */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
/* C_SetPIN modifies the PIN of the user who is logged in. */
|
||||
CK_PKCS11_FUNCTION_INFO(C_SetPIN)
|
||||
#ifdef CK_NEED_ARG_LIST
|
||||
(
|
||||
CK_SESSION_HANDLE hSession, /* the session's handle */
|
||||
CK_UTF8CHAR_PTR pOldPin, /* the old PIN */
|
||||
CK_ULONG ulOldLen, /* length of the old PIN */
|
||||
CK_UTF8CHAR_PTR pNewPin, /* the new PIN */
|
||||
CK_ULONG ulNewLen /* length of the new PIN */
|
||||
);
|
||||
#endif
|
||||
|
||||
|
||||
|
||||
/* Session management */

/* C_OpenSession opens a session between an application and a
 * token. */
CK_PKCS11_FUNCTION_INFO(C_OpenSession)
#ifdef CK_NEED_ARG_LIST
(
  CK_SLOT_ID            slotID,        /* the slot's ID */
  CK_FLAGS              flags,         /* from CK_SESSION_INFO */
  CK_VOID_PTR           pApplication,  /* passed to callback */
  CK_NOTIFY             Notify,        /* callback function */
  CK_SESSION_HANDLE_PTR phSession      /* gets session handle */
);
#endif


/* C_CloseSession closes a session between an application and a
 * token. */
CK_PKCS11_FUNCTION_INFO(C_CloseSession)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession  /* the session's handle */
);
#endif

/* C_CloseAllSessions closes all sessions with a token. */
CK_PKCS11_FUNCTION_INFO(C_CloseAllSessions)
#ifdef CK_NEED_ARG_LIST
(
  CK_SLOT_ID slotID  /* the token's slot */
);
#endif


/* C_GetSessionInfo obtains information about the session. */
CK_PKCS11_FUNCTION_INFO(C_GetSessionInfo)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE   hSession,  /* the session's handle */
  CK_SESSION_INFO_PTR pInfo      /* receives session info */
);
#endif


/* C_GetOperationState obtains the state of the cryptographic operation
 * in a session. */
CK_PKCS11_FUNCTION_INFO(C_GetOperationState)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,             /* session's handle */
  CK_BYTE_PTR       pOperationState,      /* gets state */
  CK_ULONG_PTR      pulOperationStateLen  /* gets state length */
);
#endif


/* C_SetOperationState restores the state of the cryptographic
 * operation in a session. */
CK_PKCS11_FUNCTION_INFO(C_SetOperationState)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,             /* session's handle */
  CK_BYTE_PTR       pOperationState,      /* holds state */
  CK_ULONG          ulOperationStateLen,  /* holds state length */
  CK_OBJECT_HANDLE  hEncryptionKey,       /* en/decryption key */
  CK_OBJECT_HANDLE  hAuthenticationKey    /* sign/verify key */
);
#endif


/* C_Login logs a user into a token. */
CK_PKCS11_FUNCTION_INFO(C_Login)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,  /* the session's handle */
  CK_USER_TYPE      userType,  /* the user type */
  CK_UTF8CHAR_PTR   pPin,      /* the user's PIN */
  CK_ULONG          ulPinLen   /* the length of the PIN */
);
#endif


/* C_Logout logs a user out from a token. */
CK_PKCS11_FUNCTION_INFO(C_Logout)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession  /* the session's handle */
);
#endif

/* Object management */

/* C_CreateObject creates a new object. */
CK_PKCS11_FUNCTION_INFO(C_CreateObject)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE    hSession,   /* the session's handle */
  CK_ATTRIBUTE_PTR     pTemplate,  /* the object's template */
  CK_ULONG             ulCount,    /* attributes in template */
  CK_OBJECT_HANDLE_PTR phObject    /* gets new object's handle. */
);
#endif

/* C_CopyObject copies an object, creating a new object for the
 * copy. */
CK_PKCS11_FUNCTION_INFO(C_CopyObject)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE    hSession,     /* the session's handle */
  CK_OBJECT_HANDLE     hObject,      /* the object's handle */
  CK_ATTRIBUTE_PTR     pTemplate,    /* template for new object */
  CK_ULONG             ulCount,      /* attributes in template */
  CK_OBJECT_HANDLE_PTR phNewObject   /* receives handle of copy */
);
#endif


/* C_DestroyObject destroys an object. */
CK_PKCS11_FUNCTION_INFO(C_DestroyObject)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,  /* the session's handle */
  CK_OBJECT_HANDLE  hObject    /* the object's handle */
);
#endif


/* C_GetObjectSize gets the size of an object in bytes. */
CK_PKCS11_FUNCTION_INFO(C_GetObjectSize)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,  /* the session's handle */
  CK_OBJECT_HANDLE  hObject,   /* the object's handle */
  CK_ULONG_PTR      pulSize    /* receives size of object */
);
#endif


/* C_GetAttributeValue obtains the value of one or more object
 * attributes. */
CK_PKCS11_FUNCTION_INFO(C_GetAttributeValue)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,   /* the session's handle */
  CK_OBJECT_HANDLE  hObject,    /* the object's handle */
  CK_ATTRIBUTE_PTR  pTemplate,  /* specifies attrs; gets vals */
  CK_ULONG          ulCount     /* attributes in template */
);
#endif


/* C_SetAttributeValue modifies the value of one or more object
 * attributes. */
CK_PKCS11_FUNCTION_INFO(C_SetAttributeValue)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,   /* the session's handle */
  CK_OBJECT_HANDLE  hObject,    /* the object's handle */
  CK_ATTRIBUTE_PTR  pTemplate,  /* specifies attrs and values */
  CK_ULONG          ulCount     /* attributes in template */
);
#endif


/* C_FindObjectsInit initializes a search for token and session
 * objects that match a template. */
CK_PKCS11_FUNCTION_INFO(C_FindObjectsInit)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,   /* the session's handle */
  CK_ATTRIBUTE_PTR  pTemplate,  /* attribute values to match */
  CK_ULONG          ulCount     /* attrs in search template */
);
#endif


/* C_FindObjects continues a search for token and session
 * objects that match a template, obtaining additional object
 * handles. */
CK_PKCS11_FUNCTION_INFO(C_FindObjects)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE    hSession,          /* session's handle */
  CK_OBJECT_HANDLE_PTR phObject,          /* gets obj. handles */
  CK_ULONG             ulMaxObjectCount,  /* max handles to get */
  CK_ULONG_PTR         pulObjectCount     /* actual # returned */
);
#endif


/* C_FindObjectsFinal finishes a search for token and session
 * objects. */
CK_PKCS11_FUNCTION_INFO(C_FindObjectsFinal)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession  /* the session's handle */
);
#endif

/* Encryption and decryption */

/* C_EncryptInit initializes an encryption operation. */
CK_PKCS11_FUNCTION_INFO(C_EncryptInit)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,    /* the session's handle */
  CK_MECHANISM_PTR  pMechanism,  /* the encryption mechanism */
  CK_OBJECT_HANDLE  hKey         /* handle of encryption key */
);
#endif


/* C_Encrypt encrypts single-part data. */
CK_PKCS11_FUNCTION_INFO(C_Encrypt)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,            /* session's handle */
  CK_BYTE_PTR       pData,               /* the plaintext data */
  CK_ULONG          ulDataLen,           /* bytes of plaintext */
  CK_BYTE_PTR       pEncryptedData,      /* gets ciphertext */
  CK_ULONG_PTR      pulEncryptedDataLen  /* gets c-text size */
);
#endif


/* C_EncryptUpdate continues a multiple-part encryption
 * operation. */
CK_PKCS11_FUNCTION_INFO(C_EncryptUpdate)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,            /* session's handle */
  CK_BYTE_PTR       pPart,               /* the plaintext data */
  CK_ULONG          ulPartLen,           /* plaintext data len */
  CK_BYTE_PTR       pEncryptedPart,      /* gets ciphertext */
  CK_ULONG_PTR      pulEncryptedPartLen  /* gets c-text size */
);
#endif


/* C_EncryptFinal finishes a multiple-part encryption
 * operation. */
CK_PKCS11_FUNCTION_INFO(C_EncryptFinal)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,                /* session handle */
  CK_BYTE_PTR       pLastEncryptedPart,      /* last c-text */
  CK_ULONG_PTR      pulLastEncryptedPartLen  /* gets last size */
);
#endif


/* C_DecryptInit initializes a decryption operation. */
CK_PKCS11_FUNCTION_INFO(C_DecryptInit)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,    /* the session's handle */
  CK_MECHANISM_PTR  pMechanism,  /* the decryption mechanism */
  CK_OBJECT_HANDLE  hKey         /* handle of decryption key */
);
#endif


/* C_Decrypt decrypts encrypted data in a single part. */
CK_PKCS11_FUNCTION_INFO(C_Decrypt)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,            /* session's handle */
  CK_BYTE_PTR       pEncryptedData,      /* ciphertext */
  CK_ULONG          ulEncryptedDataLen,  /* ciphertext length */
  CK_BYTE_PTR       pData,               /* gets plaintext */
  CK_ULONG_PTR      pulDataLen           /* gets p-text size */
);
#endif


/* C_DecryptUpdate continues a multiple-part decryption
 * operation. */
CK_PKCS11_FUNCTION_INFO(C_DecryptUpdate)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,            /* session's handle */
  CK_BYTE_PTR       pEncryptedPart,      /* encrypted data */
  CK_ULONG          ulEncryptedPartLen,  /* input length */
  CK_BYTE_PTR       pPart,               /* gets plaintext */
  CK_ULONG_PTR      pulPartLen           /* p-text size */
);
#endif


/* C_DecryptFinal finishes a multiple-part decryption
 * operation. */
CK_PKCS11_FUNCTION_INFO(C_DecryptFinal)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,       /* the session's handle */
  CK_BYTE_PTR       pLastPart,      /* gets plaintext */
  CK_ULONG_PTR      pulLastPartLen  /* p-text size */
);
#endif

/* Message digesting */

/* C_DigestInit initializes a message-digesting operation. */
CK_PKCS11_FUNCTION_INFO(C_DigestInit)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,   /* the session's handle */
  CK_MECHANISM_PTR  pMechanism  /* the digesting mechanism */
);
#endif


/* C_Digest digests data in a single part. */
CK_PKCS11_FUNCTION_INFO(C_Digest)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,     /* the session's handle */
  CK_BYTE_PTR       pData,        /* data to be digested */
  CK_ULONG          ulDataLen,    /* bytes of data to digest */
  CK_BYTE_PTR       pDigest,      /* gets the message digest */
  CK_ULONG_PTR      pulDigestLen  /* gets digest length */
);
#endif


/* C_DigestUpdate continues a multiple-part message-digesting
 * operation. */
CK_PKCS11_FUNCTION_INFO(C_DigestUpdate)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,  /* the session's handle */
  CK_BYTE_PTR       pPart,     /* data to be digested */
  CK_ULONG          ulPartLen  /* bytes of data to be digested */
);
#endif


/* C_DigestKey continues a multi-part message-digesting
 * operation, by digesting the value of a secret key as part of
 * the data already digested. */
CK_PKCS11_FUNCTION_INFO(C_DigestKey)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,  /* the session's handle */
  CK_OBJECT_HANDLE  hKey       /* secret key to digest */
);
#endif


/* C_DigestFinal finishes a multiple-part message-digesting
 * operation. */
CK_PKCS11_FUNCTION_INFO(C_DigestFinal)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,     /* the session's handle */
  CK_BYTE_PTR       pDigest,      /* gets the message digest */
  CK_ULONG_PTR      pulDigestLen  /* gets byte count of digest */
);
#endif

/* Signing and MACing */

/* C_SignInit initializes a signature (private key encryption)
 * operation, where the signature is (will be) an appendix to
 * the data, and plaintext cannot be recovered from the
 * signature. */
CK_PKCS11_FUNCTION_INFO(C_SignInit)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,    /* the session's handle */
  CK_MECHANISM_PTR  pMechanism,  /* the signature mechanism */
  CK_OBJECT_HANDLE  hKey         /* handle of signature key */
);
#endif


/* C_Sign signs (encrypts with private key) data in a single
 * part, where the signature is (will be) an appendix to the
 * data, and plaintext cannot be recovered from the signature. */
CK_PKCS11_FUNCTION_INFO(C_Sign)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,        /* the session's handle */
  CK_BYTE_PTR       pData,           /* the data to sign */
  CK_ULONG          ulDataLen,       /* count of bytes to sign */
  CK_BYTE_PTR       pSignature,      /* gets the signature */
  CK_ULONG_PTR      pulSignatureLen  /* gets signature length */
);
#endif


/* C_SignUpdate continues a multiple-part signature operation,
 * where the signature is (will be) an appendix to the data,
 * and plaintext cannot be recovered from the signature. */
CK_PKCS11_FUNCTION_INFO(C_SignUpdate)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,  /* the session's handle */
  CK_BYTE_PTR       pPart,     /* the data to sign */
  CK_ULONG          ulPartLen  /* count of bytes to sign */
);
#endif


/* C_SignFinal finishes a multiple-part signature operation,
 * returning the signature. */
CK_PKCS11_FUNCTION_INFO(C_SignFinal)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,        /* the session's handle */
  CK_BYTE_PTR       pSignature,      /* gets the signature */
  CK_ULONG_PTR      pulSignatureLen  /* gets signature length */
);
#endif


/* C_SignRecoverInit initializes a signature operation, where
 * the data can be recovered from the signature. */
CK_PKCS11_FUNCTION_INFO(C_SignRecoverInit)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,    /* the session's handle */
  CK_MECHANISM_PTR  pMechanism,  /* the signature mechanism */
  CK_OBJECT_HANDLE  hKey         /* handle of the signature key */
);
#endif


/* C_SignRecover signs data in a single operation, where the
 * data can be recovered from the signature. */
CK_PKCS11_FUNCTION_INFO(C_SignRecover)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,        /* the session's handle */
  CK_BYTE_PTR       pData,           /* the data to sign */
  CK_ULONG          ulDataLen,       /* count of bytes to sign */
  CK_BYTE_PTR       pSignature,      /* gets the signature */
  CK_ULONG_PTR      pulSignatureLen  /* gets signature length */
);
#endif

/* Verifying signatures and MACs */

/* C_VerifyInit initializes a verification operation, where the
 * signature is an appendix to the data, and plaintext cannot
 * be recovered from the signature (e.g. DSA). */
CK_PKCS11_FUNCTION_INFO(C_VerifyInit)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,    /* the session's handle */
  CK_MECHANISM_PTR  pMechanism,  /* the verification mechanism */
  CK_OBJECT_HANDLE  hKey         /* verification key */
);
#endif


/* C_Verify verifies a signature in a single-part operation,
 * where the signature is an appendix to the data, and plaintext
 * cannot be recovered from the signature. */
CK_PKCS11_FUNCTION_INFO(C_Verify)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,       /* the session's handle */
  CK_BYTE_PTR       pData,          /* signed data */
  CK_ULONG          ulDataLen,      /* length of signed data */
  CK_BYTE_PTR       pSignature,     /* signature */
  CK_ULONG          ulSignatureLen  /* signature length */
);
#endif


/* C_VerifyUpdate continues a multiple-part verification
 * operation, where the signature is an appendix to the data,
 * and plaintext cannot be recovered from the signature. */
CK_PKCS11_FUNCTION_INFO(C_VerifyUpdate)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,  /* the session's handle */
  CK_BYTE_PTR       pPart,     /* signed data */
  CK_ULONG          ulPartLen  /* length of signed data */
);
#endif


/* C_VerifyFinal finishes a multiple-part verification
 * operation, checking the signature. */
CK_PKCS11_FUNCTION_INFO(C_VerifyFinal)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,       /* the session's handle */
  CK_BYTE_PTR       pSignature,     /* signature to verify */
  CK_ULONG          ulSignatureLen  /* signature length */
);
#endif


/* C_VerifyRecoverInit initializes a signature verification
 * operation, where the data is recovered from the signature. */
CK_PKCS11_FUNCTION_INFO(C_VerifyRecoverInit)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,    /* the session's handle */
  CK_MECHANISM_PTR  pMechanism,  /* the verification mechanism */
  CK_OBJECT_HANDLE  hKey         /* verification key */
);
#endif


/* C_VerifyRecover verifies a signature in a single-part
 * operation, where the data is recovered from the signature. */
CK_PKCS11_FUNCTION_INFO(C_VerifyRecover)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,        /* the session's handle */
  CK_BYTE_PTR       pSignature,      /* signature to verify */
  CK_ULONG          ulSignatureLen,  /* signature length */
  CK_BYTE_PTR       pData,           /* gets signed data */
  CK_ULONG_PTR      pulDataLen       /* gets signed data len */
);
#endif

/* Dual-function cryptographic operations */

/* C_DigestEncryptUpdate continues a multiple-part digesting
 * and encryption operation. */
CK_PKCS11_FUNCTION_INFO(C_DigestEncryptUpdate)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,            /* session's handle */
  CK_BYTE_PTR       pPart,               /* the plaintext data */
  CK_ULONG          ulPartLen,           /* plaintext length */
  CK_BYTE_PTR       pEncryptedPart,      /* gets ciphertext */
  CK_ULONG_PTR      pulEncryptedPartLen  /* gets c-text length */
);
#endif


/* C_DecryptDigestUpdate continues a multiple-part decryption and
 * digesting operation. */
CK_PKCS11_FUNCTION_INFO(C_DecryptDigestUpdate)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,            /* session's handle */
  CK_BYTE_PTR       pEncryptedPart,      /* ciphertext */
  CK_ULONG          ulEncryptedPartLen,  /* ciphertext length */
  CK_BYTE_PTR       pPart,               /* gets plaintext */
  CK_ULONG_PTR      pulPartLen           /* gets plaintext len */
);
#endif


/* C_SignEncryptUpdate continues a multiple-part signing and
 * encryption operation. */
CK_PKCS11_FUNCTION_INFO(C_SignEncryptUpdate)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,            /* session's handle */
  CK_BYTE_PTR       pPart,               /* the plaintext data */
  CK_ULONG          ulPartLen,           /* plaintext length */
  CK_BYTE_PTR       pEncryptedPart,      /* gets ciphertext */
  CK_ULONG_PTR      pulEncryptedPartLen  /* gets c-text length */
);
#endif


/* C_DecryptVerifyUpdate continues a multiple-part decryption and
 * verify operation. */
CK_PKCS11_FUNCTION_INFO(C_DecryptVerifyUpdate)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,            /* session's handle */
  CK_BYTE_PTR       pEncryptedPart,      /* ciphertext */
  CK_ULONG          ulEncryptedPartLen,  /* ciphertext length */
  CK_BYTE_PTR       pPart,               /* gets plaintext */
  CK_ULONG_PTR      pulPartLen           /* gets p-text length */
);
#endif

/* Key management */

/* C_GenerateKey generates a secret key, creating a new key
 * object. */
CK_PKCS11_FUNCTION_INFO(C_GenerateKey)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE    hSession,    /* the session's handle */
  CK_MECHANISM_PTR     pMechanism,  /* key generation mech. */
  CK_ATTRIBUTE_PTR     pTemplate,   /* template for new key */
  CK_ULONG             ulCount,     /* # of attrs in template */
  CK_OBJECT_HANDLE_PTR phKey        /* gets handle of new key */
);
#endif


/* C_GenerateKeyPair generates a public-key/private-key pair,
 * creating new key objects. */
CK_PKCS11_FUNCTION_INFO(C_GenerateKeyPair)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE    hSession,                    /* session handle */
  CK_MECHANISM_PTR     pMechanism,                  /* key-gen mech. */
  CK_ATTRIBUTE_PTR     pPublicKeyTemplate,          /* template for pub. key */
  CK_ULONG             ulPublicKeyAttributeCount,   /* # pub. attrs. */
  CK_ATTRIBUTE_PTR     pPrivateKeyTemplate,         /* template for priv. key */
  CK_ULONG             ulPrivateKeyAttributeCount,  /* # priv. attrs. */
  CK_OBJECT_HANDLE_PTR phPublicKey,                 /* gets pub. key handle */
  CK_OBJECT_HANDLE_PTR phPrivateKey                 /* gets priv. key handle */
);
#endif


/* C_WrapKey wraps (i.e., encrypts) a key. */
CK_PKCS11_FUNCTION_INFO(C_WrapKey)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,         /* the session's handle */
  CK_MECHANISM_PTR  pMechanism,       /* the wrapping mechanism */
  CK_OBJECT_HANDLE  hWrappingKey,     /* wrapping key */
  CK_OBJECT_HANDLE  hKey,             /* key to be wrapped */
  CK_BYTE_PTR       pWrappedKey,      /* gets wrapped key */
  CK_ULONG_PTR      pulWrappedKeyLen  /* gets wrapped key size */
);
#endif


/* C_UnwrapKey unwraps (decrypts) a wrapped key, creating a new
 * key object. */
CK_PKCS11_FUNCTION_INFO(C_UnwrapKey)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE    hSession,          /* session's handle */
  CK_MECHANISM_PTR     pMechanism,        /* unwrapping mech. */
  CK_OBJECT_HANDLE     hUnwrappingKey,    /* unwrapping key */
  CK_BYTE_PTR          pWrappedKey,       /* the wrapped key */
  CK_ULONG             ulWrappedKeyLen,   /* wrapped key len */
  CK_ATTRIBUTE_PTR     pTemplate,         /* new key template */
  CK_ULONG             ulAttributeCount,  /* template length */
  CK_OBJECT_HANDLE_PTR phKey              /* gets new handle */
);
#endif


/* C_DeriveKey derives a key from a base key, creating a new key
 * object. */
CK_PKCS11_FUNCTION_INFO(C_DeriveKey)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE    hSession,          /* session's handle */
  CK_MECHANISM_PTR     pMechanism,        /* key deriv. mech. */
  CK_OBJECT_HANDLE     hBaseKey,          /* base key */
  CK_ATTRIBUTE_PTR     pTemplate,         /* new key template */
  CK_ULONG             ulAttributeCount,  /* template length */
  CK_OBJECT_HANDLE_PTR phKey              /* gets new handle */
);
#endif

/* Random number generation */

/* C_SeedRandom mixes additional seed material into the token's
 * random number generator. */
CK_PKCS11_FUNCTION_INFO(C_SeedRandom)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,  /* the session's handle */
  CK_BYTE_PTR       pSeed,     /* the seed material */
  CK_ULONG          ulSeedLen  /* length of seed material */
);
#endif


/* C_GenerateRandom generates random data. */
CK_PKCS11_FUNCTION_INFO(C_GenerateRandom)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession,    /* the session's handle */
  CK_BYTE_PTR       RandomData,  /* receives the random data */
  CK_ULONG          ulRandomLen  /* # of bytes to generate */
);
#endif


/* Parallel function management */

/* C_GetFunctionStatus is a legacy function; it obtains an
 * updated status of a function running in parallel with an
 * application. */
CK_PKCS11_FUNCTION_INFO(C_GetFunctionStatus)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession  /* the session's handle */
);
#endif


/* C_CancelFunction is a legacy function; it cancels a function
 * running in parallel. */
CK_PKCS11_FUNCTION_INFO(C_CancelFunction)
#ifdef CK_NEED_ARG_LIST
(
  CK_SESSION_HANDLE hSession  /* the session's handle */
);
#endif


/* Functions added in for Cryptoki Version 2.01 or later */

/* C_WaitForSlotEvent waits for a slot event (token insertion,
 * removal, etc.) to occur. */
CK_PKCS11_FUNCTION_INFO(C_WaitForSlotEvent)
#ifdef CK_NEED_ARG_LIST
(
  CK_FLAGS       flags,    /* blocking/nonblocking flag */
  CK_SLOT_ID_PTR pSlot,    /* location that receives the slot ID */
  CK_VOID_PTR    pRserved  /* reserved. Should be NULL_PTR */
);
#endif
1885 vendor/src/github.com/miekg/pkcs11/pkcs11t.h vendored Normal file
File diff suppressed because it is too large
1 vendor/src/github.com/miekg/pkcs11/softhsm.conf vendored Normal file
@@ -0,0 +1 @@
0:hsm.db
274 vendor/src/github.com/miekg/pkcs11/types.go vendored Normal file
@@ -0,0 +1,274 @@
// Copyright 2013 Miek Gieben. All rights reserved.
|
||||
// Use of this source code is governed by a BSD-style
|
||||
// license that can be found in the LICENSE file.
|
||||
|
||||
package pkcs11
|
||||
|
||||
/*
|
||||
#define CK_PTR *
|
||||
#ifndef NULL_PTR
|
||||
#define NULL_PTR 0
|
||||
#endif
|
||||
#define CK_DEFINE_FUNCTION(returnType, name) returnType name
|
||||
#define CK_DECLARE_FUNCTION(returnType, name) returnType name
|
||||
#define CK_DECLARE_FUNCTION_POINTER(returnType, name) returnType (* name)
|
||||
#define CK_CALLBACK_FUNCTION(returnType, name) returnType (* name)
|
||||
|
||||
#include <stdlib.h>
|
||||
#include "pkcs11.h"
|
||||
|
||||
CK_ULONG Index(CK_ULONG_PTR array, CK_ULONG i)
|
||||
{
|
||||
return array[i];
|
||||
}
|
||||
|
||||
CK_ULONG Sizeof()
|
||||
{
|
||||
return sizeof(CK_ULONG);
|
||||
}
|
||||
*/
|
||||
import "C"
|
||||
|
||||
import (
|
||||
"fmt"
|
||||
"time"
|
||||
"unsafe"
|
||||
)
|
||||
|
||||
// toList converts from a C style array to a []uint.
|
||||
func toList(clist C.CK_ULONG_PTR, size C.CK_ULONG) []uint {
|
||||
l := make([]uint, int(size))
|
||||
for i := 0; i < len(l); i++ {
|
||||
l[i] = uint(C.Index(clist, C.CK_ULONG(i)))
|
||||
}
|
||||
defer C.free(unsafe.Pointer(clist))
|
||||
return l
|
||||
}
|
||||
|
||||
// cBBool converts a bool to a CK_BBOOL.
|
||||
func cBBool(x bool) C.CK_BBOOL {
|
||||
if x {
|
||||
return C.CK_BBOOL(C.CK_TRUE)
|
||||
}
|
||||
return C.CK_BBOOL(C.CK_FALSE)
|
||||
}
|
||||
|
||||
// Error represents an PKCS#11 error.
|
||||
type Error uint
|
||||
|
||||
func (e Error) Error() string {
|
||||
return fmt.Sprintf("pkcs11: 0x%X: %s", uint(e), strerror[uint(e)])
|
||||
}
|
||||
|
||||
func toError(e C.CK_RV) error {
|
||||
if e == C.CKR_OK {
|
||||
return nil
|
||||
}
|
||||
return Error(e)
|
||||
}
|
||||
|
||||
/* SessionHandle is a Cryptoki-assigned value that identifies a session. */
|
||||
type SessionHandle uint
|
||||
|
||||
/* ObjectHandle is a token-specific identifier for an object. */
|
||||
type ObjectHandle uint
|
||||
|
||||
// Version represents any version information from the library.
|
||||
type Version struct {
|
||||
Major byte
|
||||
Minor byte
|
||||
}
|
||||
|
||||
func toVersion(version C.CK_VERSION) Version {
|
||||
return Version{byte(version.major), byte(version.minor)}
|
||||
}
|
||||
|
||||
// SlotEvent holds the SlotID which for which an slot event (token insertion,
|
||||
// removal, etc.) occurred.
|
||||
type SlotEvent struct {
|
||||
SlotID uint
|
||||
}
|
||||
|
||||
// Info provides information about the library and hardware used.
|
||||
type Info struct {
|
||||
CryptokiVersion Version
|
||||
ManufacturerID string
|
||||
Flags uint
|
||||
LibraryDescription string
|
||||
LibraryVersion Version
|
||||
}
|
||||
|
||||
/* SlotInfo provides information about a slot. */
|
||||
type SlotInfo struct {
|
||||
SlotDescription string // 64 bytes.
|
||||
ManufacturerID string // 32 bytes.
|
||||
Flags uint
|
||||
HardwareVersion Version
|
||||
FirmwareVersion Version
|
||||
}
|
||||
|
||||
/* TokenInfo provides information about a token. */
|
||||
type TokenInfo struct {
|
||||
Label string
|
||||
ManufacturerID string
|
||||
Model string
|
||||
SerialNumber string
|
||||
Flags uint
|
||||
MaxSessionCount uint
|
||||
SessionCount uint
|
||||
MaxRwSessionCount uint
|
||||
RwSessionCount uint
|
||||
MaxPinLen uint
|
||||
MinPinLen uint
|
||||
TotalPublicMemory uint
|
||||
FreePublicMemory uint
|
||||
TotalPrivateMemory uint
|
||||
FreePrivateMemory uint
|
||||
HardwareVersion Version
|
||||
FirmwareVersion Version
|
||||
UTCTime string
|
||||
}
|
||||
|
||||
/* SesionInfo provides information about a session. */
|
||||
type SessionInfo struct {
|
||||
SlotID uint
|
||||
State uint
|
||||
Flags uint
|
||||
DeviceError uint
|
||||
}
|
||||
|
||||
// Attribute holds an attribute type/value combination.
|
||||
type Attribute struct {
|
||||
Type uint
|
||||
Value []byte
|
||||
}
|
||||
|
||||
// NewAttribute allocates a Attribute and returns a pointer to it.
|
||||
// Note that this is merely a convenience function, as values returned
// from the HSM are not converted back to Go values, those are just raw
// byte slices.
func NewAttribute(typ uint, x interface{}) *Attribute {
	// This function nicely transforms *to* an attribute, but there is
	// no corresponding function that transforms back *from* an attribute,
	// which in PKCS#11 is just a byte array.
	a := new(Attribute)
	a.Type = typ
	if x == nil {
		return a
	}
	switch x.(type) {
	case bool: // create bbool
		if x.(bool) {
			a.Value = []byte{1}
			break
		}
		a.Value = []byte{0}
	case uint, int:
		var y uint
		if _, ok := x.(int); ok {
			y = uint(x.(int))
		}
		if _, ok := x.(uint); ok {
			y = x.(uint)
		}
		// TODO(miek): ugly!
		switch int(C.Sizeof()) {
		case 4:
			a.Value = make([]byte, 4)
			a.Value[0] = byte(y)
			a.Value[1] = byte(y >> 8)
			a.Value[2] = byte(y >> 16)
			a.Value[3] = byte(y >> 24)
		case 8:
			a.Value = make([]byte, 8)
			a.Value[0] = byte(y)
			a.Value[1] = byte(y >> 8)
			a.Value[2] = byte(y >> 16)
			a.Value[3] = byte(y >> 24)
			a.Value[4] = byte(y >> 32)
			a.Value[5] = byte(y >> 40)
			a.Value[6] = byte(y >> 48)
			a.Value[7] = byte(y >> 56)
		}
	case string:
		a.Value = []byte(x.(string))
	case []byte: // just copy
		a.Value = x.([]byte)
	case time.Time: // for CKA_DATE
		a.Value = cDate(x.(time.Time))
	default:
		panic("pkcs11: unhandled attribute type")
	}
	return a
}

// cAttributeList returns the start address and the length of an attribute list.
func cAttributeList(a []*Attribute) (C.CK_ATTRIBUTE_PTR, C.CK_ULONG) {
	if len(a) == 0 {
		return nil, 0
	}
	pa := make([]C.CK_ATTRIBUTE, len(a))
	for i := 0; i < len(a); i++ {
		pa[i]._type = C.CK_ATTRIBUTE_TYPE(a[i].Type)
		if a[i].Value == nil {
			continue
		}
		pa[i].pValue = C.CK_VOID_PTR((&a[i].Value[0]))
		pa[i].ulValueLen = C.CK_ULONG(len(a[i].Value))
	}
	return C.CK_ATTRIBUTE_PTR(&pa[0]), C.CK_ULONG(len(a))
}

func cDate(t time.Time) []byte {
	b := make([]byte, 8)
	year, month, day := t.Date()
	y := fmt.Sprintf("%4d", year)
	m := fmt.Sprintf("%02d", month)
	d1 := fmt.Sprintf("%02d", day)
	b[0], b[1], b[2], b[3] = y[0], y[1], y[2], y[3]
	b[4], b[5] = m[0], m[1]
	b[6], b[7] = d1[0], d1[1]
	return b
}

// Mechanism holds a mechanism type/value combination.
type Mechanism struct {
	Mechanism uint
	Parameter []byte
}

func NewMechanism(mech uint, x interface{}) *Mechanism {
	m := new(Mechanism)
	m.Mechanism = mech
	if x == nil {
		return m
	}

	// Add any parameters passed (for now, assume bytes were always passed in; is there another case?)
	m.Parameter = x.([]byte)

	return m
}

func cMechanismList(m []*Mechanism) (C.CK_MECHANISM_PTR, C.CK_ULONG) {
	if len(m) == 0 {
		return nil, 0
	}
	pm := make([]C.CK_MECHANISM, len(m))
	for i := 0; i < len(m); i++ {
		pm[i].mechanism = C.CK_MECHANISM_TYPE(m[i].Mechanism)
		if m[i].Parameter == nil {
			continue
		}
		pm[i].pParameter = C.CK_VOID_PTR(&(m[i].Parameter[0]))
		pm[i].ulParameterLen = C.CK_ULONG(len(m[i].Parameter))
	}
	return C.CK_MECHANISM_PTR(&pm[0]), C.CK_ULONG(len(m))
}

// MechanismInfo provides information about a particular mechanism.
type MechanismInfo struct {
	MinKeySize uint
	MaxKeySize uint
	Flags      uint
}
50
vendor/src/golang.org/x/net/websocket/hybi.go
vendored
@@ -157,6 +157,9 @@ func (buf hybiFrameReaderFactory) NewFrameReader() (frame frameReader, err error) {
 		if err != nil {
 			return
 		}
+		if lengthFields == 8 && i == 0 { // MSB must be zero when 7+64 bits
+			b &= 0x7f
+		}
 		header = append(header, b)
 		hybiFrame.header.Length = hybiFrame.header.Length*256 + int64(b)
 	}
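The hunk above masks the first byte of the 8-byte extended payload length, since RFC 6455 requires the most significant bit to be zero when the 7+64-bit form is used. A hedged standalone sketch of that big-endian accumulation (`decodeLen64` is a hypothetical name for illustration; the vendored code inlines this loop):

```go
package main

import "fmt"

// decodeLen64 accumulates an extended payload length big-endian,
// clearing the MSB of the first byte as RFC 6455 requires for the
// 7+64-bit length form.
func decodeLen64(fields []byte) int64 {
	var length int64
	for i, b := range fields {
		if i == 0 { // MSB must be zero when 7+64 bits
			b &= 0x7f
		}
		length = length*256 + int64(b)
	}
	return length
}

func main() {
	// The invalid MSB (0x80) in the top byte is dropped by the mask.
	fmt.Println(decodeLen64([]byte{0x80, 0, 0, 0, 0, 0, 0x01, 0x00})) // 256
}
```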
@@ -264,7 +267,7 @@ type hybiFrameHandler struct {
 	payloadType byte
 }
 
-func (handler *hybiFrameHandler) HandleFrame(frame frameReader) (r frameReader, err error) {
+func (handler *hybiFrameHandler) HandleFrame(frame frameReader) (frameReader, error) {
 	if handler.conn.IsServerConn() {
 		// The client MUST mask all frames sent to the server.
 		if frame.(*hybiFrameReader).header.MaskingKey == nil {
@@ -288,20 +291,19 @@ func (handler *hybiFrameHandler) HandleFrame(frame frameReader) (r frameReader,
 		handler.payloadType = frame.PayloadType()
 	case CloseFrame:
 		return nil, io.EOF
-	case PingFrame:
-		pingMsg := make([]byte, maxControlFramePayloadLength)
-		n, err := io.ReadFull(frame, pingMsg)
-		if err != nil && err != io.ErrUnexpectedEOF {
+	case PingFrame, PongFrame:
+		b := make([]byte, maxControlFramePayloadLength)
+		n, err := io.ReadFull(frame, b)
+		if err != nil && err != io.EOF && err != io.ErrUnexpectedEOF {
 			return nil, err
 		}
 		io.Copy(ioutil.Discard, frame)
-		n, err = handler.WritePong(pingMsg[:n])
-		if err != nil {
-			return nil, err
+		if frame.PayloadType() == PingFrame {
+			if _, err := handler.WritePong(b[:n]); err != nil {
+				return nil, err
+			}
 		}
 		return nil, nil
-	case PongFrame:
-		return nil, ErrNotImplemented
 	}
 	return frame, nil
 }
@@ -370,6 +372,23 @@ func generateNonce() (nonce []byte) {
 	return
 }
 
+// removeZone removes IPv6 zone identifer from host.
+// E.g., "[fe80::1%en0]:8080" to "[fe80::1]:8080"
+func removeZone(host string) string {
+	if !strings.HasPrefix(host, "[") {
+		return host
+	}
+	i := strings.LastIndex(host, "]")
+	if i < 0 {
+		return host
+	}
+	j := strings.LastIndex(host[:i], "%")
+	if j < 0 {
+		return host
+	}
+	return host[:j] + host[i:]
+}
+
 // getNonceAccept computes the base64-encoded SHA-1 of the concatenation of
 // the nonce ("Sec-WebSocket-Key" value) with the websocket GUID string.
 func getNonceAccept(nonce []byte) (expected []byte, err error) {
@@ -389,7 +408,10 @@ func getNonceAccept(nonce []byte) (expected []byte, err error) {
 func hybiClientHandshake(config *Config, br *bufio.Reader, bw *bufio.Writer) (err error) {
 	bw.WriteString("GET " + config.Location.RequestURI() + " HTTP/1.1\r\n")
 
-	bw.WriteString("Host: " + config.Location.Host + "\r\n")
+	// According to RFC 6874, an HTTP client, proxy, or other
+	// intermediary must remove any IPv6 zone identifier attached
+	// to an outgoing URI.
+	bw.WriteString("Host: " + removeZone(config.Location.Host) + "\r\n")
 	bw.WriteString("Upgrade: websocket\r\n")
 	bw.WriteString("Connection: Upgrade\r\n")
 	nonce := generateNonce()
@@ -515,15 +537,15 @@ func (c *hybiServerHandshaker) ReadHandshake(buf *bufio.Reader, req *http.Request
 	return http.StatusSwitchingProtocols, nil
 }
 
-// Origin parses Origin header in "req".
-// If origin is "null", returns (nil, nil).
+// Origin parses the Origin header in req.
+// If the Origin header is not set, it returns nil and nil.
 func Origin(config *Config, req *http.Request) (*url.URL, error) {
 	var origin string
 	switch config.Version {
 	case ProtocolVersionHybi13:
 		origin = req.Header.Get("Origin")
 	}
-	if origin == "null" {
+	if origin == "" {
 		return nil, nil
 	}
 	return url.ParseRequestURI(origin)
vendor/src/golang.org/x/net/websocket/server.go
vendored

@@ -74,7 +74,6 @@ func (s Server) serveWebSocket(w http.ResponseWriter, req *http.Request) {
 	rwc, buf, err := w.(http.Hijacker).Hijack()
 	if err != nil {
 		panic("Hijack failed: " + err.Error())
-		return
 	}
 	// The server should abort the WebSocket connection if it finds
 	// the client did not send a handshake that matches with protocol
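The removeZone helper introduced in the hybi.go diff above strips an IPv6 zone identifier from a bracketed host before it is written into the Host header, per RFC 6874. A hedged standalone sketch reproducing that logic outside the websocket package:

```go
package main

import (
	"fmt"
	"strings"
)

// removeZone strips an IPv6 zone identifier from a bracketed host,
// e.g. "[fe80::1%en0]:8080" -> "[fe80::1]:8080"; hosts without a
// bracket or without a zone are returned unchanged.
func removeZone(host string) string {
	if !strings.HasPrefix(host, "[") {
		return host
	}
	i := strings.LastIndex(host, "]")
	if i < 0 {
		return host
	}
	j := strings.LastIndex(host[:i], "%")
	if j < 0 {
		return host
	}
	return host[:j] + host[i:]
}

func main() {
	fmt.Println(removeZone("[fe80::1%en0]:8080")) // [fe80::1]:8080
	fmt.Println(removeZone("example.com:8080"))   // example.com:8080
}
```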
28
vendor/src/google.golang.org/grpc/LICENSE
vendored
Normal file
@@ -0,0 +1,28 @@
Copyright 2014, Google Inc.
All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

    * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
    * Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.