Update cloudflare/cfssl to 1.3.2

Matching the version that is used in SwarmKit

Signed-off-by: Sebastiaan van Stijn <github@gone.nl>
Sebastiaan van Stijn 2018-07-04 14:19:55 +02:00
parent b711437bbd
commit 7084487fdc
No known key found for this signature in database
GPG key ID: 76698F39D527CE8C
83 changed files with 17107 additions and 6357 deletions

View file

@ -127,9 +127,9 @@ github.com/gogo/googleapis 08a7655d27152912db7aaf4f983275eaf8d128ef
# cluster
github.com/docker/swarmkit edd5641391926a50bc5f7040e20b7efc05003c26
github.com/gogo/protobuf v1.0.0
github.com/cloudflare/cfssl 7fb22c8cba7ecaf98e4082d22d65800cf45e042a
github.com/cloudflare/cfssl 1.3.2
github.com/fernet/fernet-go 1b2437bc582b3cfbb341ee5a29f8ef5b42912ff2
github.com/google/certificate-transparency d90e65c3a07988180c5b1ece71791c0b6506826e
github.com/google/certificate-transparency-go 5ab67e519c93568ac3ee50fd6772a5bcf8aa460d
golang.org/x/crypto 1a580b3eff7814fc9b40602fd35256c63b50f491
golang.org/x/time a4bde12657593d5e90d0533a3e4fd95e635124cb
github.com/hashicorp/go-memdb cb9a474f84cc5e41b273b20c6927680b2a8776ad

View file

@ -8,7 +8,7 @@
CFSSL is CloudFlare's PKI/TLS swiss army knife. It is both a command line
tool and an HTTP API server for signing, verifying, and bundling TLS
certificates. It requires Go 1.5+ to build.
certificates. It requires Go 1.8+ to build.
Note that certain linux distributions have certain algorithms removed
(RHEL-based distributions in particular), so the golang from the
@ -34,7 +34,7 @@ See [BUILDING](BUILDING.md)
### Installation
Installation requires a
[working Go 1.5+ installation](http://golang.org/doc/install) and a
[working Go 1.8+ installation](http://golang.org/doc/install) and a
properly set `GOPATH`.
```
@ -42,32 +42,27 @@ $ go get -u github.com/cloudflare/cfssl/cmd/cfssl
```
will download and build the CFSSL tool, installing it in
`$GOPATH/bin/cfssl`. To install the other utility programs that are in
this repo:
`$GOPATH/bin/cfssl`.
To install any of the other utility programs that are
in this repo (for instance `cfssljson` in this case):
```
$ go get -u github.com/cloudflare/cfssl/cmd/cfssljson
```
This will download and build the CFSSLJSON tool, installing it in
`$GOPATH/bin/`.
And to simply install __all__ of the programs in this repo:
```
$ go get -u github.com/cloudflare/cfssl/cmd/...
```
This will download, build, and install `cfssl`, `cfssljson`, and
`mkbundle` into `$GOPATH/bin/`.
Note that CFSSL makes use of vendored packages; in Go 1.5, the
`GO15VENDOREXPERIMENT` environment variable will need to be set, e.g.
```
export GO15VENDOREXPERIMENT=1
```
In Go 1.6, this works out of the box.
#### Installing pre-Go 1.5
With a Go 1.4 or earlier installation, you won't be able to install the latest version of CFSSL. However, you can checkout the `1.1.0` release and build that.
```
git clone -b 1.1.0 https://github.com/cloudflare/cfssl.git $GOPATH/src/github.com/cloudflare/cfssl
go get github.com/cloudflare/cfssl/cmd/cfssl
```
This will download, build, and install all of the utility programs
(including `cfssl`, `cfssljson`, and `mkbundle` among others) into the
`$GOPATH/bin/` directory.
### Using the Command Line Tool
@ -81,10 +76,10 @@ operation it should carry out:
serve start the API server
version prints out the current version
selfsign generates a self-signed certificate
print-defaults print default configurations
print-defaults print default configurations
Use "cfssl [command] -help" to find out more about a command.
The version command takes no arguments.
Use `cfssl [command] -help` to find out more about a command.
The `version` command takes no arguments.
#### Signing
@ -92,9 +87,9 @@ The version command takes no arguments.
cfssl sign [-ca cert] [-ca-key key] [-hostname comma,separated,hostnames] csr [subject]
```
The csr is the client's certificate request. The `-ca` and `-ca-key`
The `csr` is the client's certificate request. The `-ca` and `-ca-key`
flags are the CA's certificate and private key, respectively. By
default, they are "ca.pem" and "ca_key.pem". The `-hostname` is
default, they are `ca.pem` and `ca_key.pem`. The `-hostname` is
a comma separated hostname list that overrides the DNS names and
IP address in the certificate SAN extension.
For example, assuming the CA's private key is in
@ -103,26 +98,27 @@ For example, assuming the CA's private key is in
for cloudflare.com:
```
cfssl sign -ca /etc/ssl/certs/cfssl.pem \
cfssl sign -ca /etc/ssl/certs/cfssl.pem \
-ca-key /etc/ssl/private/cfssl_key.pem \
-hostname cloudflare.com ./cloudflare.pem
-hostname cloudflare.com \
./cloudflare.pem
```
It is also possible to specify csr through '-csr' flag. By doing so,
It is also possible to specify the CSR with the `-csr` flag. By doing so,
flag values take precedence and will overwrite the argument.
The subject is an optional file that contains subject information that
should be used in place of the information from the CSR. It should be
a JSON file with the type:
a JSON file as follows:
```json
{
"CN": "example.com",
"names": [
{
"C": "US",
"L": "San Francisco",
"O": "Internet Widgets, Inc.",
"C": "US",
"L": "San Francisco",
"O": "Internet Widgets, Inc.",
"OU": "WWW",
"ST": "California"
}
@ -130,6 +126,9 @@ a JSON file with the type:
}
```
**N.B.** As of Go 1.7, self-signed certificates will not include
[the AKI](https://go.googlesource.com/go/+/b623b71509b2d24df915d5bc68602e1c6edf38ca).
#### Bundling
```
@ -139,28 +138,29 @@ cfssl bundle [-ca-bundle bundle] [-int-bundle bundle] \
```
The bundles are used for the root and intermediate certificate
pools. In addition, platform metadata is specified through '-metadata'
pools. In addition, platform metadata is specified through `-metadata`.
The bundle files, metadata file (and auxiliary files) can be
found at [cfssl_trust](https://github.com/cloudflare/cfssl_trust)
found at:
https://github.com/cloudflare/cfssl_trust
Specify PEM-encoded client certificate and key through '-cert' and
'-key' respectively. If key is specified, the bundle will be built
Specify PEM-encoded client certificate and key through `-cert` and
`-key` respectively. If key is specified, the bundle will be built
and verified with the key. Otherwise the bundle will be built
without a private key. Instead of file path, use '-' for reading
certificate PEM from stdin. It is also acceptable the certificate
file contains a (partial) certificate bundle.
without a private key. Instead of file path, use `-` for reading
certificate PEM from stdin. It is also acceptable for the certificate
file to contain a (partial) certificate bundle.
Specify bundling flavor through '-flavor'. There are three flavors:
'optimal' to generate a bundle of shortest chain and most advanced
cryptographic algorithms, 'ubiquitous' to generate a bundle of most
Specify bundling flavor through `-flavor`. There are three flavors:
`optimal` to generate a bundle of shortest chain and most advanced
cryptographic algorithms, `ubiquitous` to generate a bundle that is most
widely accepted across different browsers and OS platforms, and
'force' to find an acceptable bundle which is identical to the
`force` to find an acceptable bundle which is identical to the
content of the input certificate file.
Alternatively, the client certificate can be pulled directly from
a domain. It is also possible to connect to the remote address
through '-ip'.
through `-ip`.
```
cfssl bundle [-ca-bundle bundle] [-int-bundle bundle] \
@ -168,7 +168,7 @@ cfssl bundle [-ca-bundle bundle] [-int-bundle bundle] \
-domain domain_name [-ip ip_address]
```
The bundle output form should follow the example
The bundle output form should follow the example:
```json
{
@ -204,7 +204,7 @@ cfssl genkey csr.json
```
To generate a private key and corresponding certificate request, specify
the key request as a JSON file. This file should follow the form
the key request as a JSON file. This file should follow the form:
```json
{
@ -218,9 +218,9 @@ the key request as a JSON file. This file should follow the form
},
"names": [
{
"C": "US",
"L": "San Francisco",
"O": "Internet Widgets, Inc.",
"C": "US",
"L": "San Francisco",
"O": "Internet Widgets, Inc.",
"OU": "WWW",
"ST": "California"
}
@ -235,7 +235,7 @@ cfssl genkey -initca csr.json | cfssljson -bare ca
```
To generate a self-signed root CA certificate, specify the key request as
the JSON file in the same format as in 'genkey'. Three PEM-encoded entities
a JSON file in the same format as in 'genkey'. Three PEM-encoded entities
will appear in the output: the private key, the csr, and the self-signed
certificate.
@ -245,8 +245,8 @@ certificate.
cfssl gencert -remote=remote_server [-hostname=comma,separated,hostnames] csr.json
```
This is calls genkey, but has a remote CFSSL server sign and issue
a certificate. You may use `-hostname` to override certificate SANs.
This calls `genkey` but has a remote CFSSL server sign and issue
the certificate. You may use `-hostname` to override certificate SANs.
#### Generating a local-issued certificate and private key.
@ -254,25 +254,25 @@ a certificate. You may use `-hostname` to override certificate SANs.
cfssl gencert -ca cert -ca-key key [-hostname=comma,separated,hostnames] csr.json
```
This is generates and issues a certificate and private key from a local CA
This generates and issues a certificate and private key from a local CA
via a JSON request. You may use `-hostname` to override certificate SANs.
#### Updating a OCSP responses file with a newly issued certificate
#### Updating an OCSP responses file with a newly issued certificate
```
cfssl ocspsign -ca cert -responder key -responder-key key -cert cert \
| cfssljson -bare -stdout >> responses
```
This will generate a OCSP response for the `cert` and add it to the
`responses` file. You can then pass `responses` to `ocspserve` to start a
This will generate an OCSP response for the `cert` and add it to the
`responses` file. You can then pass `responses` to `ocspserve` to start an
OCSP server.
### Starting the API Server
CFSSL comes with an HTTP-based API server; the endpoints are
documented in `doc/api.txt`. The server is started with the "serve"
documented in `doc/api/intro.txt`. The server is started with the `serve`
command:
```
@ -284,18 +284,19 @@ cfssl serve [-address address] [-ca cert] [-ca-bundle bundle] \
Address and port default to "127.0.0.1:8888". The `-ca` and `-ca-key`
arguments should be the PEM-encoded certificate and private key to use
for signing; by default, they are "ca.pem" and "ca_key.pem". The
for signing; by default, they are `ca.pem` and `ca_key.pem`. The
`-ca-bundle` and `-int-bundle` should be the certificate bundles used
for the root and intermediate certificate pools, respectively. These
default to "ca-bundle.crt" and "int-bundle." If the "remote" option is
provided, all signature operations will be forwarded to the remote CFSSL.
default to `ca-bundle.crt` and `int-bundle.crt` respectively. If the
`-remote` option is specified, all signature operations will be forwarded
to the remote CFSSL.
'-int-dir' specifies intermediates directory. '-metadata' is a file for
`-int-dir` specifies an intermediates directory. `-metadata` is a file for
root certificate presence. The content of the file is a json dictionary
(k,v): each key k is SHA-1 digest of a root certificate while value v
is a list of key store filenames. '-config' specifies path to configuration
file. '-responder' and '-responder-key' are Certificate for OCSP responder
and private key for OCSP responder certificate, respectively.
(k,v) such that each key k is an SHA-1 digest of a root certificate while value v
is a list of key store filenames. `-config` specifies a path to a configuration
file. `-responder` and `-responder-key` are the certificate and the
private key for the OCSP responder, respectively.
The amount of logging can be controlled with the `-loglevel` option. This
comes *after* the serve command:
@ -306,18 +307,18 @@ cfssl serve -loglevel 2
The levels are:
* 0. DEBUG
* 1. INFO (this is the default level)
* 2. WARNING
* 3. ERROR
* 4. CRITICAL
* 0 - DEBUG
* 1 - INFO (this is the default level)
* 2 - WARNING
* 3 - ERROR
* 4 - CRITICAL
### The multirootca
The `cfssl` program can act as an online certificate authority, but it
only uses a single key. If multiple signing keys are needed, the
`multirootca` program can be used. It only provides the sign,
authsign, and info endpoints. The documentation contains instructions
`multirootca` program can be used. It only provides the `sign`,
`authsign` and `info` endpoints. The documentation contains instructions
for configuring and running the CA.
### The mkbundle Utility
@ -334,49 +335,44 @@ support is planned for the next release) and expired certificates, and
bundles them into one file. It takes directories of certificates and
certificate files (which may contain multiple certificates). For example,
if the directory `intermediates` contains a number of intermediate
certificates,
certificates:
```
mkbundle -f int-bundle.crt intermediates
```
will check those certificates and combine valid ones into a single
will check those certificates and combine valid certificates into a single
`int-bundle.crt` file.
The `-f` flag specifies an output name; `-loglevel` specifies the verbosity
of the logging (using the same loglevels above), and `-nw` controls the
of the logging (using the same loglevels as above), and `-nw` controls the
number of revocation-checking workers.
### The cfssljson Utility
Most of the output from `cfssl` is in JSON. The `cfssljson` will take
this output and split it out into separate key, certificate, CSR, and
bundle files as appropriate. The tool takes a single flag, `-f`, that
Most of the output from `cfssl` is in JSON. The `cfssljson` utility can take
this output and split it out into separate `key`, `certificate`, `CSR`, and
`bundle` files as appropriate. The tool takes a single flag, `-f`, that
specifies the input file, and an argument that specifies the base name for
the files produced. If the input filename is "-" (which is the default),
`cfssljson` reads from standard input. It maps keys in the JSON file to
the files produced. If the input filename is `-` (which is the default),
cfssljson reads from standard input. It maps keys in the JSON file to
filenames in the following way:
* if there is a "cert" (or if not, if there's a "certificate") field, the
file "basename.pem" will be produced.
* if there is a "key" (or if not, if there's a "private_key") field, the
file "basename-key.pem" will be produced.
* if there is a "csr" (or if not, if there's a "certificate_request") field,
the file "basename.csr" will be produced.
* if there is a "bundle" field, the file "basename-bundle.pem" will
be produced.
* if there is a "ocspResponse" field, the file "basename-response.der" will
be produced.
* if __cert__ or __certificate__ is specified, __basename.pem__ will be produced.
* if __key__ or __private_key__ is specified, __basename-key.pem__ will be produced.
* if __csr__ or __certificate_request__ is specified, __basename.csr__ will be produced.
* if __bundle__ is specified, __basename-bundle.pem__ will be produced.
* if __ocspResponse__ is specified, __basename-response.der__ will be produced.
Instead of saving to a file, you can pass `-stdout` to output the encoded
contents.
contents to standard output.
### Static Builds
By default, the web assets are accessed from disk, based on their
relative locations. If you're wishing to distribute a single,
statically-linked, cfssl binary, you'll want to embed these resources
before building. This can be done with the
relative locations. If you wish to distribute a single,
statically-linked, `cfssl` binary, you'll want to embed these resources
before building. This can be done with the
[go.rice](https://github.com/GeertJohan/go.rice) tool.
```
@ -387,16 +383,18 @@ Then building with `go build` will use the embedded resources.
### Using a PKCS#11 hardware token / HSM
For better security, you may want to store your private key in an HSM or
For better security, you may wish to store your private key in an HSM or
smartcard. The interface to both of these categories of device is described by
the PKCS#11 spec. If you need to do approximately one signing operation per
second or fewer, the Yubikey NEO and NEO-n are inexpensive smartcard options:
https://www.yubico.com/products/yubikey-hardware/yubikey-neo/. In general you
are looking for a product that supports PIV (personal identity verification). If
https://www.yubico.com/products/yubikey-hardware/yubikey-neo/
In general you should look for a product that supports PIV (personal identity verification). If
your signing needs are in the hundreds of signatures per second, you will need
to purchase an expensive HSM (in the thousands to many thousands of USD).
If you want to try out the PKCS#11 signing modes without a hardware token, you
If you wish to try out the PKCS#11 signing modes without a hardware token, you
can use the [SoftHSM](https://github.com/opendnssec/SoftHSMv1#softhsm)
implementation. Please note that using SoftHSM simply stores your private key in
a file on disk and does not increase security.
@ -404,14 +402,14 @@ a file on disk and does not increase security.
To get started with your PKCS#11 token you will need to initialize it with a
private key, PIN, and token label. The instructions to do this will be specific
to each hardware device, and you should follow the instructions provided by your
vendor. You will also need to find the path to your 'module', a shared object
vendor. You will also need to find the path to your `module`, a shared object
file (.so). Having initialized your device, you can query it to check your token
label with:
pkcs11-tool --module <module path> --list-token-slots
You'll also want to check the label of the private key you imported (or
generated). Run the following command and look for a 'Private Key Object':
generated). Run the following command and look for a `Private Key Object`:
pkcs11-tool --module <module path> --pin <pin> \
--list-token-slots --login --list-objects
@ -421,7 +419,7 @@ CFSSL supports PKCS#11 for certificate signing and OCSP signing. To create a
Signer (for certificate signing), import `signer/universal` and call NewSigner
with a Root object containing the module, pin, token label and private label
from above, plus a path to your certificate. The structure of the Root object is
documented in universal.go.
documented in `universal.go`.
Alternately, you can construct a pkcs11key.Key or pkcs11key.Pool yourself, and
pass it to ocsp.NewSigner (for OCSP) or local.NewSigner (for certificate
@ -431,8 +429,8 @@ same time.
### Additional Documentation
Additional documentation can be found in the "doc/" directory:
Additional documentation can be found in the "doc" directory:
* `api.txt`: documents the API endpoints
* `api/intro.txt`: documents the API endpoints
* `bootstrap.txt`: a walkthrough from building the package to getting
up and running

View file

@ -34,8 +34,8 @@ func (f HandlerFunc) Handle(w http.ResponseWriter, r *http.Request) error {
return f(w, r)
}
// handleError is the centralised error handling and reporting.
func handleError(w http.ResponseWriter, err error) (code int) {
// HandleError is the centralised error handling and reporting.
func HandleError(w http.ResponseWriter, err error) (code int) {
if err == nil {
return http.StatusOK
}
@ -82,7 +82,7 @@ func (h HTTPHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
} else {
err = errors.NewMethodNotAllowed(r.Method)
}
status := handleError(w, err)
status := HandleError(w, err)
log.Infof("%s - \"%s %s\" %d", r.RemoteAddr, r.Method, r.URL, status)
}
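This hunk exports the previously unexported `handleError`, so other packages can reuse CFSSL's centralised error-to-HTTP mapping. A minimal sketch of what that enables, assuming the exported signature shown above (`api.HandleError(w, err) int`); the `profileHandler` name is illustrative, and `errors.NewMethodNotAllowed` is the constructor already used in this file:

```go
package main

import (
	"log"
	"net/http"

	"github.com/cloudflare/cfssl/api"
	cferr "github.com/cloudflare/cfssl/errors"
)

// profileHandler is a hypothetical handler that reuses the exported
// HandleError to turn a CFSSL error into an HTTP error response.
func profileHandler(w http.ResponseWriter, r *http.Request) {
	if r.Method != http.MethodPost {
		// HandleError writes the error response and returns the status code.
		api.HandleError(w, cferr.NewMethodNotAllowed(r.Method))
		return
	}
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/profile", profileHandler)
	log.Fatal(http.ListenAndServe("127.0.0.1:8888", nil))
}
```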

View file

@ -31,6 +31,8 @@ type Accessor interface {
InsertCertificate(cr CertificateRecord) error
GetCertificate(serial, aki string) ([]CertificateRecord, error)
GetUnexpiredCertificates() ([]CertificateRecord, error)
GetRevokedAndUnexpiredCertificates() ([]CertificateRecord, error)
GetRevokedAndUnexpiredCertificatesByLabel(label string) ([]CertificateRecord, error)
RevokeCertificate(serial, aki string, reasonCode int) error
InsertOCSP(rr OCSPRecord) error
GetOCSP(serial, aki string) ([]OCSPRecord, error)
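The `Accessor` interface gains `GetRevokedAndUnexpiredCertificates` (and a by-label variant), which is exactly what a CRL or OCSP refresher needs to enumerate. A small sketch of a caller, assuming the `CertificateRecord` fields shown elsewhere in this diff (`Serial`, `AKI`, `Expiry`); how the accessor is constructed is left abstract:

```go
package example

import (
	"fmt"

	"github.com/cloudflare/cfssl/certdb"
)

// ListRevoked prints the revoked-but-unexpired certificates known to the
// certificate store, e.g. as input for building a CRL.
func ListRevoked(db certdb.Accessor) error {
	records, err := db.GetRevokedAndUnexpiredCertificates()
	if err != nil {
		return err
	}
	for _, rec := range records {
		fmt.Printf("serial=%s aki=%s expires=%s\n", rec.Serial, rec.AKI, rec.Expiry)
	}
	return nil
}
```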

View file

@ -551,16 +551,16 @@ func (p *Signing) Valid() bool {
// KeyUsage contains a mapping of string names to key usages.
var KeyUsage = map[string]x509.KeyUsage{
"signing": x509.KeyUsageDigitalSignature,
"digital signature": x509.KeyUsageDigitalSignature,
"content committment": x509.KeyUsageContentCommitment,
"key encipherment": x509.KeyUsageKeyEncipherment,
"key agreement": x509.KeyUsageKeyAgreement,
"data encipherment": x509.KeyUsageDataEncipherment,
"cert sign": x509.KeyUsageCertSign,
"crl sign": x509.KeyUsageCRLSign,
"encipher only": x509.KeyUsageEncipherOnly,
"decipher only": x509.KeyUsageDecipherOnly,
"signing": x509.KeyUsageDigitalSignature,
"digital signature": x509.KeyUsageDigitalSignature,
"content commitment": x509.KeyUsageContentCommitment,
"key encipherment": x509.KeyUsageKeyEncipherment,
"key agreement": x509.KeyUsageKeyAgreement,
"data encipherment": x509.KeyUsageDataEncipherment,
"cert sign": x509.KeyUsageCertSign,
"crl sign": x509.KeyUsageCRLSign,
"encipher only": x509.KeyUsageEncipherOnly,
"decipher only": x509.KeyUsageDecipherOnly,
}
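Besides the alignment change, this hunk fixes the misspelled `"content committment"` key. A short sketch of how such a map is typically used to fold usage strings from a signing profile into an `x509.KeyUsage` bitmask; the `combineKeyUsages` helper is illustrative, only `config.KeyUsage` comes from the package:

```go
package main

import (
	"crypto/x509"
	"fmt"

	"github.com/cloudflare/cfssl/config"
)

// combineKeyUsages folds profile usage strings into a single bitmask and
// reports the names that do not map to a known key usage.
func combineKeyUsages(names []string) (x509.KeyUsage, []string) {
	var ku x509.KeyUsage
	var unknown []string
	for _, name := range names {
		if u, ok := config.KeyUsage[name]; ok {
			ku |= u
		} else {
			unknown = append(unknown, name)
		}
	}
	return ku, unknown
}

func main() {
	ku, unknown := combineKeyUsages([]string{"digital signature", "key encipherment", "content commitment"})
	fmt.Printf("usage bits: %b, unknown: %v\n", ku, unknown)
}
```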
// ExtKeyUsage contains a mapping of string names to extended key

View file

@ -1,7 +1,7 @@
// Package pkcs7 implements the subset of the CMS PKCS #7 datatype that is typically
// used to package certificates and CRLs. Using openssl, every certificate converted
// to PKCS #7 format from another encoding such as PEM conforms to this implementation.
// reference: https://www.openssl.org/docs/apps/crl2pkcs7.html
// reference: https://www.openssl.org/docs/man1.1.0/apps/crl2pkcs7.html
//
// PKCS #7 Data type, reference: https://tools.ietf.org/html/rfc2315
//

View file

@ -47,8 +47,8 @@ type KeyRequest interface {
// A BasicKeyRequest contains the algorithm and key size for a new private key.
type BasicKeyRequest struct {
A string `json:"algo"`
S int `json:"size"`
A string `json:"algo" yaml:"algo"`
S int `json:"size" yaml:"size"`
}
// NewBasicKeyRequest returns a default BasicKeyRequest.
@ -130,20 +130,21 @@ func (kr *BasicKeyRequest) SigAlgo() x509.SignatureAlgorithm {
// CAConfig is a section used in the requests initialising a new CA.
type CAConfig struct {
PathLength int `json:"pathlen"`
PathLenZero bool `json:"pathlenzero"`
Expiry string `json:"expiry"`
PathLength int `json:"pathlen" yaml:"pathlen"`
PathLenZero bool `json:"pathlenzero" yaml:"pathlenzero"`
Expiry string `json:"expiry" yaml:"expiry"`
Backdate string `json:"backdate" yaml:"backdate"`
}
// A CertificateRequest encapsulates the API interface to the
// certificate request functionality.
type CertificateRequest struct {
CN string
Names []Name `json:"names"`
Hosts []string `json:"hosts"`
KeyRequest KeyRequest `json:"key,omitempty"`
CA *CAConfig `json:"ca,omitempty"`
SerialNumber string `json:"serialnumber,omitempty"`
Names []Name `json:"names" yaml:"names"`
Hosts []string `json:"hosts" yaml:"hosts"`
KeyRequest KeyRequest `json:"key,omitempty" yaml:"key,omitempty"`
CA *CAConfig `json:"ca,omitempty" yaml:"ca,omitempty"`
SerialNumber string `json:"serialnumber,omitempty" yaml:"serialnumber,omitempty"`
}
// New returns a new, empty CertificateRequest with a
@ -327,7 +328,7 @@ func (g *Generator) ProcessRequest(req *CertificateRequest) (csr, key []byte, er
err = g.Validator(req)
if err != nil {
log.Warningf("invalid request: %v", err)
return
return nil, nil, err
}
csr, key, err = ParseRequest(req)
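The added `yaml` tags mirror the existing JSON tags, so the same request structure can be described in YAML, and `ProcessRequest` now returns its error explicitly. A minimal sketch of building the request in Go and generating a key and CSR with `ParseRequest` (the call shown at the end of this hunk); field values are illustrative and mirror the README's example subject:

```go
package main

import (
	"fmt"
	"log"

	"github.com/cloudflare/cfssl/csr"
)

func main() {
	req := &csr.CertificateRequest{
		CN:         "example.com",
		Hosts:      []string{"example.com", "www.example.com"},
		KeyRequest: csr.NewBasicKeyRequest(), // default algorithm and size
		Names: []csr.Name{
			{C: "US", ST: "California", L: "San Francisco", O: "Internet Widgets, Inc.", OU: "WWW"},
		},
	}

	// ParseRequest generates a new private key and a PEM-encoded CSR.
	csrPEM, keyPEM, err := csr.ParseRequest(req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s%s", keyPEM, csrPEM)
}
```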

View file

@ -191,6 +191,16 @@ const (
// PrecertSubmissionFailed occurs when submitting a precertificate to
// a log server fails
PrecertSubmissionFailed = 100 * (iota + 1)
// CTClientConstructionFailed occurs when the construction of a new
// github.com/google/certificate-transparency client fails.
CTClientConstructionFailed
// PrecertMissingPoison occurs when a precert is passed to SignFromPrecert
// and is missing the CT poison extension.
PrecertMissingPoison
// PrecertInvalidPoison occurs when a precert is passed to SignFromPrecert
// and has an invalid CT poison extension value or the extension is not
// critical.
PrecertInvalidPoison
)
// Certificate persistence related errors specified with CertStoreError
@ -366,6 +376,10 @@ func New(category Category, reason Reason) *Error {
msg = "Certificate transparency parsing failed due to unknown error"
case PrecertSubmissionFailed:
msg = "Certificate transparency precertificate submission failed"
case PrecertMissingPoison:
msg = "Precertificate is missing CT poison extension"
case PrecertInvalidPoison:
msg = "Precertificate contains an invalid CT poison extension"
default:
panic(fmt.Sprintf("Unsupported CF-SSL error reason %d under category CTError.", reason))
}
@ -412,7 +426,7 @@ func Wrap(category Category, reason Reason, err error) *Error {
}
}
case PrivateKeyError, IntermediatesError, RootError, PolicyError, DialError,
APIClientError, CSRError, CTError, CertStoreError:
APIClientError, CSRError, CTError, CertStoreError, OCSPError:
// no-op, just use the error
default:
panic(fmt.Sprintf("Unsupported CFSSL error type: %d.",

View file

@ -10,11 +10,18 @@ import (
"crypto/rsa"
"crypto/tls"
"crypto/x509"
"crypto/x509/pkix"
"encoding/asn1"
"encoding/pem"
"errors"
"fmt"
"io/ioutil"
"math/big"
"os"
"github.com/google/certificate-transparency-go"
cttls "github.com/google/certificate-transparency-go/tls"
ctx509 "github.com/google/certificate-transparency-go/x509"
"golang.org/x/crypto/ocsp"
"strings"
"time"
@ -177,7 +184,7 @@ func HashAlgoString(alg x509.SignatureAlgorithm) string {
}
}
// EncodeCertificatesPEM encodes a number of x509 certficates to PEM
// EncodeCertificatesPEM encodes a number of x509 certificates to PEM
func EncodeCertificatesPEM(certs []*x509.Certificate) []byte {
var buffer bytes.Buffer
for _, cert := range certs {
@ -190,7 +197,7 @@ func EncodeCertificatesPEM(certs []*x509.Certificate) []byte {
return buffer.Bytes()
}
// EncodeCertificatePEM encodes a single x509 certficates to PEM
// EncodeCertificatePEM encodes a single x509 certificate to PEM
func EncodeCertificatePEM(cert *x509.Certificate) []byte {
return EncodeCertificatesPEM([]*x509.Certificate{cert})
}
@ -374,51 +381,6 @@ func GetKeyDERFromPEM(in []byte, password []byte) ([]byte, error) {
return nil, cferr.New(cferr.PrivateKeyError, cferr.DecodeFailed)
}
// CheckSignature verifies a signature made by the key on a CSR, such
// as on the CSR itself.
func CheckSignature(csr *x509.CertificateRequest, algo x509.SignatureAlgorithm, signed, signature []byte) error {
var hashType crypto.Hash
switch algo {
case x509.SHA1WithRSA, x509.ECDSAWithSHA1:
hashType = crypto.SHA1
case x509.SHA256WithRSA, x509.ECDSAWithSHA256:
hashType = crypto.SHA256
case x509.SHA384WithRSA, x509.ECDSAWithSHA384:
hashType = crypto.SHA384
case x509.SHA512WithRSA, x509.ECDSAWithSHA512:
hashType = crypto.SHA512
default:
return x509.ErrUnsupportedAlgorithm
}
if !hashType.Available() {
return x509.ErrUnsupportedAlgorithm
}
h := hashType.New()
h.Write(signed)
digest := h.Sum(nil)
switch pub := csr.PublicKey.(type) {
case *rsa.PublicKey:
return rsa.VerifyPKCS1v15(pub, hashType, digest, signature)
case *ecdsa.PublicKey:
ecdsaSig := new(struct{ R, S *big.Int })
if _, err := asn1.Unmarshal(signature, ecdsaSig); err != nil {
return err
}
if ecdsaSig.R.Sign() <= 0 || ecdsaSig.S.Sign() <= 0 {
return errors.New("x509: ECDSA signature contained zero or negative values")
}
if !ecdsa.Verify(pub, digest, ecdsaSig.R, ecdsaSig.S) {
return errors.New("x509: ECDSA verification failure")
}
return nil
}
return x509.ErrUnsupportedAlgorithm
}
// ParseCSR parses a PEM- or DER-encoded PKCS #10 certificate signing request.
func ParseCSR(in []byte) (csr *x509.CertificateRequest, rest []byte, err error) {
in = bytes.TrimSpace(in)
@ -437,7 +399,7 @@ func ParseCSR(in []byte) (csr *x509.CertificateRequest, rest []byte, err error)
return nil, rest, err
}
err = CheckSignature(csr, csr.SignatureAlgorithm, csr.RawTBSCertificateRequest, csr.Signature)
err = csr.CheckSignature()
if err != nil {
return nil, rest, err
}
@ -445,13 +407,15 @@ func ParseCSR(in []byte) (csr *x509.CertificateRequest, rest []byte, err error)
return csr, rest, nil
}
// ParseCSRPEM parses a PEM-encoded certificiate signing request.
// ParseCSRPEM parses a PEM-encoded certificate signing request.
// It does not check the signature. This is useful for dumping data from a CSR
// locally.
func ParseCSRPEM(csrPEM []byte) (*x509.CertificateRequest, error) {
block, _ := pem.Decode([]byte(csrPEM))
der := block.Bytes
csrObject, err := x509.ParseCertificateRequest(der)
if block == nil {
return nil, cferr.New(cferr.CSRError, cferr.DecodeFailed)
}
csrObject, err := x509.ParseCertificateRequest(block.Bytes)
if err != nil {
return nil, err
@ -516,3 +480,98 @@ func CreateTLSConfig(remoteCAs *x509.CertPool, cert *tls.Certificate) *tls.Confi
RootCAs: remoteCAs,
}
}
// SerializeSCTList serializes a list of SCTs.
func SerializeSCTList(sctList []ct.SignedCertificateTimestamp) ([]byte, error) {
list := ctx509.SignedCertificateTimestampList{}
for _, sct := range sctList {
sctBytes, err := cttls.Marshal(sct)
if err != nil {
return nil, err
}
list.SCTList = append(list.SCTList, ctx509.SerializedSCT{Val: sctBytes})
}
return cttls.Marshal(list)
}
// DeserializeSCTList deserializes a list of SCTs.
func DeserializeSCTList(serializedSCTList []byte) ([]ct.SignedCertificateTimestamp, error) {
var sctList ctx509.SignedCertificateTimestampList
rest, err := cttls.Unmarshal(serializedSCTList, &sctList)
if err != nil {
return nil, err
}
if len(rest) != 0 {
return nil, cferr.Wrap(cferr.CTError, cferr.Unknown, errors.New("serialized SCT list contained trailing garbage"))
}
list := make([]ct.SignedCertificateTimestamp, len(sctList.SCTList))
for i, serializedSCT := range sctList.SCTList {
var sct ct.SignedCertificateTimestamp
rest, err := cttls.Unmarshal(serializedSCT.Val, &sct)
if err != nil {
return nil, err
}
if len(rest) != 0 {
return nil, cferr.Wrap(cferr.CTError, cferr.Unknown, errors.New("serialized SCT contained trailing garbage"))
}
list[i] = sct
}
return list, nil
}
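`SerializeSCTList` and `DeserializeSCTList` wrap the TLS encoding of RFC 6962 SCT lists. A hedged round-trip sketch using only the two helpers above; the zero-valued SCT is purely illustrative:

```go
package main

import (
	"fmt"
	"log"

	ct "github.com/google/certificate-transparency-go"

	"github.com/cloudflare/cfssl/helpers"
)

func main() {
	// A zero-valued SCT is enough to exercise the encode/decode round trip.
	in := []ct.SignedCertificateTimestamp{{}}

	serialized, err := helpers.SerializeSCTList(in)
	if err != nil {
		log.Fatal(err)
	}

	out, err := helpers.DeserializeSCTList(serialized)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("round-tripped %d SCT(s)\n", len(out))
}
```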
// SCTListFromOCSPResponse extracts the SCTList from an ocsp.Response,
// returning an empty list if the SCT extension was not found or could not be
// unmarshalled.
func SCTListFromOCSPResponse(response *ocsp.Response) ([]ct.SignedCertificateTimestamp, error) {
// This loop finds the SCTListExtension in the OCSP response.
var SCTListExtension, ext pkix.Extension
for _, ext = range response.Extensions {
// sctExtOid is the ObjectIdentifier of a Signed Certificate Timestamp.
sctExtOid := asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 11129, 2, 4, 5}
if ext.Id.Equal(sctExtOid) {
SCTListExtension = ext
break
}
}
// This code block extracts the sctList from the SCT extension.
var sctList []ct.SignedCertificateTimestamp
var err error
if numBytes := len(SCTListExtension.Value); numBytes != 0 {
var serializedSCTList []byte
rest := make([]byte, numBytes)
copy(rest, SCTListExtension.Value)
for len(rest) != 0 {
rest, err = asn1.Unmarshal(rest, &serializedSCTList)
if err != nil {
return nil, cferr.Wrap(cferr.CTError, cferr.Unknown, err)
}
}
sctList, err = DeserializeSCTList(serializedSCTList)
}
return sctList, err
}
// ReadBytes reads a []byte either from a file or an environment variable.
// If valFile has a prefix of 'env:', the []byte is read from the environment
// using the subsequent name. If the prefix is 'file:' the []byte is read from
// the subsequent file. If no prefix is provided, valFile is assumed to be a
// file path.
func ReadBytes(valFile string) ([]byte, error) {
switch splitVal := strings.SplitN(valFile, ":", 2); len(splitVal) {
case 1:
return ioutil.ReadFile(valFile)
case 2:
switch splitVal[0] {
case "env":
return []byte(os.Getenv(splitVal[1])), nil
case "file":
return ioutil.ReadFile(splitVal[1])
default:
return nil, fmt.Errorf("unknown prefix: %s", splitVal[0])
}
default:
return nil, fmt.Errorf("multiple prefixes: %s",
strings.Join(splitVal[:len(splitVal)-1], ", "))
}
}
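`ReadBytes` lets CA material be supplied from the environment as well as from disk, which is why `NewFromPEM` and `NewSignerFromFile` elsewhere in this commit switch to it. A short sketch of the three accepted forms; the file path and environment variable name are examples only:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"github.com/cloudflare/cfssl/helpers"
)

func main() {
	// 1. Plain path: read the file directly.
	// 2. "file:" prefix: same as above, with the prefix stripped.
	// 3. "env:" prefix: take the bytes from an environment variable.
	os.Setenv("CA_KEY_PEM", "-----BEGIN EC PRIVATE KEY-----\n...\n-----END EC PRIVATE KEY-----\n")

	for _, src := range []string{"ca-key.pem", "file:ca-key.pem", "env:CA_KEY_PEM"} {
		data, err := helpers.ReadBytes(src)
		if err != nil {
			log.Printf("%s: %v", src, err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", src, len(data))
	}
}
```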

View file

@ -5,10 +5,11 @@ package initca
import (
"crypto"
"crypto/ecdsa"
"crypto/rand"
"crypto/rsa"
"crypto/x509"
"encoding/pem"
"errors"
"io/ioutil"
"time"
"github.com/cloudflare/cfssl/config"
@ -53,8 +54,15 @@ func New(req *csr.CertificateRequest) (cert, csrPEM, key []byte, err error) {
}
}
if req.CA.Backdate != "" {
policy.Default.Backdate, err = time.ParseDuration(req.CA.Backdate)
if err != nil {
return
}
}
policy.Default.CAConstraint.MaxPathLen = req.CA.PathLength
if req.CA.PathLength != 0 && req.CA.PathLenZero == true {
if req.CA.PathLength != 0 && req.CA.PathLenZero {
log.Infof("ignore invalid 'pathlenzero' value")
} else {
policy.Default.CAConstraint.MaxPathLenZero = req.CA.PathLenZero
@ -90,7 +98,7 @@ func New(req *csr.CertificateRequest) (cert, csrPEM, key []byte, err error) {
// NewFromPEM creates a new root certificate from the key file passed in.
func NewFromPEM(req *csr.CertificateRequest, keyFile string) (cert, csrPEM []byte, err error) {
privData, err := ioutil.ReadFile(keyFile)
privData, err := helpers.ReadBytes(keyFile)
if err != nil {
return nil, nil, err
}
@ -105,11 +113,11 @@ func NewFromPEM(req *csr.CertificateRequest, keyFile string) (cert, csrPEM []byt
// RenewFromPEM re-creates a root certificate from the CA cert and key
// files. The resulting root certificate will have the input CA certificate
// as the template and have the same expiry length. E.g. the exsiting CA
// as the template and have the same expiry length. E.g. the existing CA
// is valid for a year from Jan 01 2015 to Jan 01 2016, the renewed certificate
// will be valid from now and expire in one year as well.
func RenewFromPEM(caFile, keyFile string) ([]byte, error) {
caBytes, err := ioutil.ReadFile(caFile)
caBytes, err := helpers.ReadBytes(caFile)
if err != nil {
return nil, err
}
@ -119,7 +127,7 @@ func RenewFromPEM(caFile, keyFile string) ([]byte, error) {
return nil, err
}
keyBytes, err := ioutil.ReadFile(keyFile)
keyBytes, err := helpers.ReadBytes(keyFile)
if err != nil {
return nil, err
}
@ -130,7 +138,6 @@ func RenewFromPEM(caFile, keyFile string) ([]byte, error) {
}
return RenewFromSigner(ca, key)
}
// NewFromSigner creates a new root certificate from a crypto.Signer.
@ -171,7 +178,7 @@ func NewFromSigner(req *csr.CertificateRequest, priv crypto.Signer) (cert, csrPE
// RenewFromSigner re-creates a root certificate from the CA cert and crypto.Signer.
// The resulting root certificate will have ca certificate
// as the template and have the same expiry length. E.g. the exsiting CA
// as the template and have the same expiry length. E.g. the existing CA
// is valid for a year from Jan 01 2015 to Jan 01 2016, the renewed certificate
// will be valid from now and expire in one year as well.
func RenewFromSigner(ca *x509.Certificate, priv crypto.Signer) ([]byte, error) {
@ -182,7 +189,6 @@ func RenewFromSigner(ca *x509.Certificate, priv crypto.Signer) ([]byte, error) {
// matching certificate public key vs private key
switch {
case ca.PublicKeyAlgorithm == x509.RSA:
var rsaPublicKey *rsa.PublicKey
var ok bool
if rsaPublicKey, ok = priv.Public().(*rsa.PublicKey); !ok {
@ -205,7 +211,6 @@ func RenewFromSigner(ca *x509.Certificate, priv crypto.Signer) ([]byte, error) {
}
req := csr.ExtractCertificateRequest(ca)
cert, _, err := NewFromSigner(req, priv)
return cert, err
@ -222,3 +227,23 @@ var CAPolicy = func() *config.Signing {
},
}
}
// Update copies the CA certificate, updates the NotBefore and
// NotAfter fields, and then re-signs the certificate.
func Update(ca *x509.Certificate, priv crypto.Signer) (cert []byte, err error) {
copy, err := x509.ParseCertificate(ca.Raw)
if err != nil {
return
}
validity := ca.NotAfter.Sub(ca.NotBefore)
copy.NotBefore = time.Now().Round(time.Minute).Add(-5 * time.Minute)
copy.NotAfter = copy.NotBefore.Add(validity)
cert, err = x509.CreateCertificate(rand.Reader, copy, copy, priv.Public(), priv)
if err != nil {
return
}
cert = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: cert})
return
}
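`Update` gives an existing CA certificate a fresh validity window (backdated five minutes, same total lifetime) and re-signs it with the same key. A hedged sketch of wiring it up from PEM files on disk; `helpers.ParseCertificatePEM` is used elsewhere in this diff, `helpers.ParsePrivateKeyPEM` is assumed to be the matching key parser, and the file names are illustrative:

```go
package main

import (
	"io/ioutil"
	"log"

	"github.com/cloudflare/cfssl/helpers"
	"github.com/cloudflare/cfssl/initca"
)

func main() {
	caPEM, err := ioutil.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	keyPEM, err := ioutil.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}

	ca, err := helpers.ParseCertificatePEM(caPEM)
	if err != nil {
		log.Fatal(err)
	}
	// Assumed helper: parse the PEM private key into a crypto.Signer.
	key, err := helpers.ParsePrivateKeyPEM(keyPEM)
	if err != nil {
		log.Fatal(err)
	}

	// Re-sign the CA certificate with a refreshed NotBefore/NotAfter.
	newCertPEM, err := initca.Update(ca, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := ioutil.WriteFile("ca.pem", newCertPEM, 0644); err != nil {
		log.Fatal(err)
	}
}
```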

View file

@ -8,14 +8,13 @@ import (
"crypto/x509"
"crypto/x509/pkix"
"encoding/asn1"
"encoding/binary"
"encoding/hex"
"encoding/pem"
"errors"
"io"
"io/ioutil"
"math/big"
"net"
"net/http"
"net/mail"
"os"
@ -26,8 +25,10 @@ import (
"github.com/cloudflare/cfssl/info"
"github.com/cloudflare/cfssl/log"
"github.com/cloudflare/cfssl/signer"
"github.com/google/certificate-transparency/go"
"github.com/google/certificate-transparency/go/client"
"github.com/google/certificate-transparency-go"
"github.com/google/certificate-transparency-go/client"
"github.com/google/certificate-transparency-go/jsonclient"
"golang.org/x/net/context"
)
// Signer contains a signer that uses the standard library to
@ -65,12 +66,12 @@ func NewSigner(priv crypto.Signer, cert *x509.Certificate, sigAlgo x509.Signatur
// and a caKey file, both PEM encoded.
func NewSignerFromFile(caFile, caKeyFile string, policy *config.Signing) (*Signer, error) {
log.Debug("Loading CA: ", caFile)
ca, err := ioutil.ReadFile(caFile)
ca, err := helpers.ReadBytes(caFile)
if err != nil {
return nil, err
}
log.Debug("Loading CA key: ", caKeyFile)
cakey, err := ioutil.ReadFile(caKeyFile)
cakey, err := helpers.ReadBytes(caKeyFile)
if err != nil {
return nil, cferr.Wrap(cferr.CertificateError, cferr.ReadFailed, err)
}
@ -95,16 +96,7 @@ func NewSignerFromFile(caFile, caKeyFile string, policy *config.Signing) (*Signe
return NewSigner(priv, parsedCa, signer.DefaultSigAlgo(priv), policy)
}
func (s *Signer) sign(template *x509.Certificate, profile *config.SigningProfile) (cert []byte, err error) {
var distPoints = template.CRLDistributionPoints
err = signer.FillTemplate(template, s.policy.Default, profile)
if distPoints != nil && len(distPoints) > 0 {
template.CRLDistributionPoints = distPoints
}
if err != nil {
return
}
func (s *Signer) sign(template *x509.Certificate) (cert []byte, err error) {
var initRoot bool
if s.ca == nil {
if !template.IsCA {
@ -204,7 +196,7 @@ func (s *Signer) Sign(req signer.SignRequest) (cert []byte, err error) {
if block.Type != "NEW CERTIFICATE REQUEST" && block.Type != "CERTIFICATE REQUEST" {
return nil, cferr.Wrap(cferr.CSRError,
cferr.BadRequest, errors.New("not a certificate or csr"))
cferr.BadRequest, errors.New("not a csr"))
}
csrTemplate, err := signer.ParseCertificateRequest(s, block.Bytes)
@ -334,27 +326,44 @@ func (s *Signer) Sign(req signer.SignRequest) (cert []byte, err error) {
}
}
var distPoints = safeTemplate.CRLDistributionPoints
err = signer.FillTemplate(&safeTemplate, s.policy.Default, profile, req.NotBefore, req.NotAfter)
if err != nil {
return nil, err
}
if distPoints != nil && len(distPoints) > 0 {
safeTemplate.CRLDistributionPoints = distPoints
}
var certTBS = safeTemplate
if len(profile.CTLogServers) > 0 {
if len(profile.CTLogServers) > 0 || req.ReturnPrecert {
// Add a poison extension which prevents validation
var poisonExtension = pkix.Extension{Id: signer.CTPoisonOID, Critical: true, Value: []byte{0x05, 0x00}}
var poisonedPreCert = certTBS
poisonedPreCert.ExtraExtensions = append(safeTemplate.ExtraExtensions, poisonExtension)
cert, err = s.sign(&poisonedPreCert, profile)
cert, err = s.sign(&poisonedPreCert)
if err != nil {
return
}
if req.ReturnPrecert {
return cert, nil
}
derCert, _ := pem.Decode(cert)
prechain := []ct.ASN1Cert{derCert.Bytes, s.ca.Raw}
prechain := []ct.ASN1Cert{{Data: derCert.Bytes}, {Data: s.ca.Raw}}
var sctList []ct.SignedCertificateTimestamp
for _, server := range profile.CTLogServers {
log.Infof("submitting poisoned precertificate to %s", server)
var ctclient = client.New(server, nil)
ctclient, err := client.New(server, nil, jsonclient.Options{})
if err != nil {
return nil, cferr.Wrap(cferr.CTError, cferr.PrecertSubmissionFailed, err)
}
var resp *ct.SignedCertificateTimestamp
resp, err = ctclient.AddPreChain(prechain)
ctx := context.Background()
resp, err = ctclient.AddPreChain(ctx, prechain)
if err != nil {
return nil, cferr.Wrap(cferr.CTError, cferr.PrecertSubmissionFailed, err)
}
@ -362,7 +371,7 @@ func (s *Signer) Sign(req signer.SignRequest) (cert []byte, err error) {
}
var serializedSCTList []byte
serializedSCTList, err = serializeSCTList(sctList)
serializedSCTList, err = helpers.SerializeSCTList(sctList)
if err != nil {
return nil, cferr.Wrap(cferr.CTError, cferr.Unknown, err)
}
@ -377,17 +386,22 @@ func (s *Signer) Sign(req signer.SignRequest) (cert []byte, err error) {
certTBS.ExtraExtensions = append(certTBS.ExtraExtensions, SCTListExtension)
}
var signedCert []byte
signedCert, err = s.sign(&certTBS, profile)
signedCert, err = s.sign(&certTBS)
if err != nil {
return nil, err
}
// Get the AKI from signedCert. This is required to support Go 1.9+.
// In prior versions of Go, x509.CreateCertificate updated the
// AuthorityKeyId of certTBS.
parsedCert, _ := helpers.ParseCertificatePEM(signedCert)
if s.dbAccessor != nil {
var certRecord = certdb.CertificateRecord{
Serial: certTBS.SerialNumber.String(),
// this relies on the specific behavior of x509.CreateCertificate
// which updates certTBS AuthorityKeyId from the signer's SubjectKeyId
AKI: hex.EncodeToString(certTBS.AuthorityKeyId),
// which sets the AuthorityKeyId from the signer's SubjectKeyId
AKI: hex.EncodeToString(parsedCert.AuthorityKeyId),
CALabel: req.Label,
Status: "good",
Expiry: certTBS.NotAfter,
@ -404,20 +418,83 @@ func (s *Signer) Sign(req signer.SignRequest) (cert []byte, err error) {
return signedCert, nil
}
func serializeSCTList(sctList []ct.SignedCertificateTimestamp) ([]byte, error) {
var buf bytes.Buffer
for _, sct := range sctList {
sct, err := ct.SerializeSCT(sct)
if err != nil {
return nil, err
}
binary.Write(&buf, binary.BigEndian, uint16(len(sct)))
buf.Write(sct)
// SignFromPrecert creates and signs a certificate from an existing precertificate
// that was previously signed by Signer.ca and inserts the provided SCTs into the
// new certificate. The resulting certificate will be an exact copy of the precert
// except for the removal of the poison extension and the addition of the SCT list
// extension. SignFromPrecert does not verify that the contents of the certificate
// still match the signing profile of the signer; it only requires that the precert
// was previously signed by the Signer's CA.
func (s *Signer) SignFromPrecert(precert *x509.Certificate, scts []ct.SignedCertificateTimestamp) ([]byte, error) {
// Verify certificate was signed by s.ca
if err := precert.CheckSignatureFrom(s.ca); err != nil {
return nil, err
}
var sctListLengthField = make([]byte, 2)
binary.BigEndian.PutUint16(sctListLengthField, uint16(buf.Len()))
return bytes.Join([][]byte{sctListLengthField, buf.Bytes()}, nil), nil
// Verify certificate is a precert
isPrecert := false
poisonIndex := 0
for i, ext := range precert.Extensions {
if ext.Id.Equal(signer.CTPoisonOID) {
if !ext.Critical {
return nil, cferr.New(cferr.CTError, cferr.PrecertInvalidPoison)
}
// Check extension contains ASN.1 NULL
if bytes.Compare(ext.Value, []byte{0x05, 0x00}) != 0 {
return nil, cferr.New(cferr.CTError, cferr.PrecertInvalidPoison)
}
isPrecert = true
poisonIndex = i
break
}
}
if !isPrecert {
return nil, cferr.New(cferr.CTError, cferr.PrecertMissingPoison)
}
// Serialize SCTs into list format and create extension
serializedList, err := helpers.SerializeSCTList(scts)
if err != nil {
return nil, err
}
// Serialize again as an octet string before embedding
serializedList, err = asn1.Marshal(serializedList)
if err != nil {
return nil, cferr.Wrap(cferr.CTError, cferr.Unknown, err)
}
sctExt := pkix.Extension{Id: signer.SCTListOID, Critical: false, Value: serializedList}
// Create the new tbsCert from precert. Do explicit copies of any slices so that we don't
// use memory that may be altered by us or the caller at a later stage.
tbsCert := x509.Certificate{
SignatureAlgorithm: precert.SignatureAlgorithm,
PublicKeyAlgorithm: precert.PublicKeyAlgorithm,
PublicKey: precert.PublicKey,
Version: precert.Version,
SerialNumber: precert.SerialNumber,
Issuer: precert.Issuer,
Subject: precert.Subject,
NotBefore: precert.NotBefore,
NotAfter: precert.NotAfter,
KeyUsage: precert.KeyUsage,
BasicConstraintsValid: precert.BasicConstraintsValid,
IsCA: precert.IsCA,
MaxPathLen: precert.MaxPathLen,
MaxPathLenZero: precert.MaxPathLenZero,
PermittedDNSDomainsCritical: precert.PermittedDNSDomainsCritical,
}
if len(precert.Extensions) > 0 {
tbsCert.ExtraExtensions = make([]pkix.Extension, len(precert.Extensions))
copy(tbsCert.ExtraExtensions, precert.Extensions)
}
// Remove the poison extension from ExtraExtensions
tbsCert.ExtraExtensions = append(tbsCert.ExtraExtensions[:poisonIndex], tbsCert.ExtraExtensions[poisonIndex+1:]...)
// Insert the SCT list extension
tbsCert.ExtraExtensions = append(tbsCert.ExtraExtensions, sctExt)
// Sign the tbsCert
return s.sign(&tbsCert)
}
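With `ReturnPrecert` and `SignFromPrecert`, embedding SCTs can now happen outside the signer: request a poisoned precertificate, collect SCTs for it from CT logs, then have the same signer produce the final certificate. A hedged outline of that flow; `fetchSCTs` is a placeholder for the log submission step (e.g. `AddPreChain` as above), the file names are illustrative, and the nil policy falls back to the signer's defaults:

```go
package main

import (
	"io/ioutil"
	"log"

	ct "github.com/google/certificate-transparency-go"

	"github.com/cloudflare/cfssl/helpers"
	"github.com/cloudflare/cfssl/signer"
	"github.com/cloudflare/cfssl/signer/local"
)

// fetchSCTs stands in for submitting the precert chain to one or more CT logs
// and collecting the returned SCTs. Implementation elided.
func fetchSCTs(precertPEM []byte) ([]ct.SignedCertificateTimestamp, error) {
	return nil, nil // placeholder
}

func main() {
	s, err := local.NewSignerFromFile("ca.pem", "ca-key.pem", nil)
	if err != nil {
		log.Fatal(err)
	}

	csrPEM, err := ioutil.ReadFile("server.csr")
	if err != nil {
		log.Fatal(err)
	}

	// Step 1: ask for a precertificate carrying the CT poison extension.
	precertPEM, err := s.Sign(signer.SignRequest{
		Request:       string(csrPEM),
		ReturnPrecert: true,
	})
	if err != nil {
		log.Fatal(err)
	}

	// Step 2: obtain SCTs for the precertificate from the logs of interest.
	scts, err := fetchSCTs(precertPEM)
	if err != nil {
		log.Fatal(err)
	}

	// Step 3: produce the final certificate with the SCT list embedded.
	precert, err := helpers.ParseCertificatePEM(precertPEM)
	if err != nil {
		log.Fatal(err)
	}
	finalPEM, err := s.SignFromPrecert(precert, scts)
	if err != nil {
		log.Fatal(err)
	}
	if err := ioutil.WriteFile("server.pem", finalPEM, 0644); err != nil {
		log.Fatal(err)
	}
}
```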
// Info return a populated info.Resp struct or an error.
@ -463,6 +540,16 @@ func (s *Signer) SetDBAccessor(dba certdb.Accessor) {
s.dbAccessor = dba
}
// GetDBAccessor returns the signer's cert db accessor
func (s *Signer) GetDBAccessor() certdb.Accessor {
return s.dbAccessor
}
// SetReqModifier does nothing for local
func (s *Signer) SetReqModifier(func(*http.Request, []byte)) {
// noop
}
// Policy returns the signer's policy.
func (s *Signer) Policy() *config.Signing {
return s.policy

View file

@ -12,6 +12,7 @@ import (
"encoding/asn1"
"errors"
"math/big"
"net/http"
"strings"
"time"
@ -19,7 +20,6 @@ import (
"github.com/cloudflare/cfssl/config"
"github.com/cloudflare/cfssl/csr"
cferr "github.com/cloudflare/cfssl/errors"
"github.com/cloudflare/cfssl/helpers"
"github.com/cloudflare/cfssl/info"
)
@ -55,6 +55,21 @@ type SignRequest struct {
Label string `json:"label"`
Serial *big.Int `json:"serial,omitempty"`
Extensions []Extension `json:"extensions,omitempty"`
// If provided, NotBefore will be used without modification (except
// for canonicalization) as the value of the notBefore field of the
// certificate. In particular no backdating adjustment will be made
// when NotBefore is provided.
NotBefore time.Time
// If provided, NotAfter will be used without modification (except
// for canonicalization) as the value of the notAfter field of the
// certificate.
NotAfter time.Time
// If ReturnPrecert is true a certificate with the CT poison extension
// will be returned from the Signer instead of attempting to retrieve
// SCTs and populate the tbsCert with them itself. This precert can then
// be passed to SignFromPrecert with the SCTs in order to create a
// valid certificate.
ReturnPrecert bool
}
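The new `NotBefore` and `NotAfter` fields let a caller pin the validity window exactly; the `FillTemplate` change further down only falls back to the profile's expiry and backdate when they are zero. A small hedged sketch of constructing such a request; the CSR path, profile name, and dates are illustrative:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"
	"time"

	"github.com/cloudflare/cfssl/signer"
)

func main() {
	csrPEM, err := ioutil.ReadFile("server.csr") // illustrative path
	if err != nil {
		log.Fatal(err)
	}

	// Pin the validity window explicitly: these values are used verbatim
	// (canonicalized to UTC) instead of the profile's expiry and backdate.
	notBefore := time.Date(2018, time.July, 4, 0, 0, 0, 0, time.UTC)
	req := signer.SignRequest{
		Request:   string(csrPEM),
		Profile:   "server", // assumes such a profile exists in the signing policy
		NotBefore: notBefore,
		NotAfter:  notBefore.AddDate(1, 0, 0),
	}
	fmt.Println(req.NotBefore, req.NotAfter)
}
```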
// appendIf appends to a if s is not an empty string.
@ -96,9 +111,11 @@ type Signer interface {
Info(info.Req) (*info.Resp, error)
Policy() *config.Signing
SetDBAccessor(certdb.Accessor)
GetDBAccessor() certdb.Accessor
SetPolicy(*config.Signing)
SigAlgo() x509.SignatureAlgorithm
Sign(req SignRequest) (cert []byte, err error)
SetReqModifier(func(*http.Request, []byte))
}
// Profile gets the specific profile from the signer
@ -161,7 +178,7 @@ func ParseCertificateRequest(s Signer, csrBytes []byte) (template *x509.Certific
return
}
err = helpers.CheckSignature(csrv, csrv.SignatureAlgorithm, csrv.RawTBSCertificateRequest, csrv.Signature)
err = csrv.CheckSignature()
if err != nil {
err = cferr.Wrap(cferr.CSRError, cferr.KeyMismatch, err)
return
@ -229,16 +246,17 @@ func ComputeSKI(template *x509.Certificate) ([]byte, error) {
// the certificate template as possible from the profiles and current
// template. It fills in the key uses, expiration, revocation URLs
// and SKI.
func FillTemplate(template *x509.Certificate, defaultProfile, profile *config.SigningProfile) error {
func FillTemplate(template *x509.Certificate, defaultProfile, profile *config.SigningProfile, notBefore time.Time, notAfter time.Time) error {
ski, err := ComputeSKI(template)
if err != nil {
return err
}
var (
eku []x509.ExtKeyUsage
ku x509.KeyUsage
backdate time.Duration
expiry time.Duration
notBefore time.Time
notAfter time.Time
crlURL, ocspURL string
issuerURL = profile.IssuerURL
)
@ -265,23 +283,29 @@ func FillTemplate(template *x509.Certificate, defaultProfile, profile *config.Si
if ocspURL = profile.OCSP; ocspURL == "" {
ocspURL = defaultProfile.OCSP
}
if backdate = profile.Backdate; backdate == 0 {
backdate = -5 * time.Minute
} else {
backdate = -1 * profile.Backdate
}
if !profile.NotBefore.IsZero() {
notBefore = profile.NotBefore.UTC()
} else {
notBefore = time.Now().Round(time.Minute).Add(backdate).UTC()
if notBefore.IsZero() {
if !profile.NotBefore.IsZero() {
notBefore = profile.NotBefore
} else {
if backdate = profile.Backdate; backdate == 0 {
backdate = -5 * time.Minute
} else {
backdate = -1 * profile.Backdate
}
notBefore = time.Now().Round(time.Minute).Add(backdate)
}
}
notBefore = notBefore.UTC()
if !profile.NotAfter.IsZero() {
notAfter = profile.NotAfter.UTC()
} else {
notAfter = notBefore.Add(expiry).UTC()
if notAfter.IsZero() {
if !profile.NotAfter.IsZero() {
notAfter = profile.NotAfter
} else {
notAfter = notBefore.Add(expiry)
}
}
notAfter = notAfter.UTC()
template.NotBefore = notBefore
template.NotAfter = notAfter

View file

@ -0,0 +1,144 @@
# Certificate Transparency: Go Code
[![Build Status](https://travis-ci.org/google/certificate-transparency-go.svg?branch=master)](https://travis-ci.org/google/certificate-transparency-go)
[![Go Report Card](https://goreportcard.com/badge/github.com/google/certificate-transparency-go)](https://goreportcard.com/report/github.com/google/certificate-transparency-go)
[![GoDoc](https://godoc.org/github.com/google/certificate-transparency-go?status.svg)](https://godoc.org/github.com/google/certificate-transparency-go)
This repository holds Go code related to
[Certificate Transparency](https://www.certificate-transparency.org/) (CT). The
repository requires Go version 1.9.
- [Repository Structure](#repository-structure)
- [Trillian CT Personality](#trillian-ct-personality)
- [Working on the Code](#working-on-the-code)
- [Rebuilding Generated Code](#rebuilding-generated-code)
- [Updating Vendor Code](#updating-vendor-code)
- [Running Codebase Checks](#running-codebase-checks)
## Repository Structure
The main parts of the repository are:
- Encoding libraries:
- `asn1/` and `x509/` are forks of the upstream Go `encoding/asn1` and
`crypto/x509` libraries. We maintain separate forks of these packages
because CT is intended to act as an observatory of certificates across the
ecosystem; as such, we need to be able to process somewhat-malformed
certificates that the stricter upstream code would (correctly) reject.
Our `x509` fork also includes code for working with the
[pre-certificates defined in RFC 6962](https://tools.ietf.org/html/rfc6962#section-3.1).
- `tls` holds a library for processing TLS-encoded data as described in
[RFC 5246](https://tools.ietf.org/html/rfc5246).
- `x509util` provides additional utilities for dealing with
`x509.Certificate`s.
- CT client libraries:
- The top-level `ct` package (in `.`) holds types and utilities for working
with CT data structures defined in
[RFC 6962](https://tools.ietf.org/html/rfc6962).
- `client/` and `jsonclient/` hold libraries that allow access to CT Logs
via entrypoints described in
[section 4 of RFC 6962](https://tools.ietf.org/html/rfc6962#section-4).
- `scanner/` holds a library for scanning the entire contents of an existing
CT Log.
- Command line tools:
- `./client/ctclient` allows interaction with a CT Log
- `./scanner/scanlog` allows an existing CT Log to be scanned for certificates
of interest; please be polite when running this tool against a Log.
- `./x509util/certcheck` allows display and verification of certificates
- `./x509util/crlcheck` allows display and verification of certificate
revocation lists (CRLs).
- CT Personality for [Trillian](https://github.com/google/trillian):
- `trillian/` holds code that allows a Certificate Transparency Log to be
run using a Trillian Log as its back-end -- see
[below](#trillian-ct-personality).
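A minimal sketch of what using the `client/` and `jsonclient/` packages listed above looks like, fetching a log's Signed Tree Head; the log URL is illustrative and the client options are left at their defaults:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/google/certificate-transparency-go/client"
	"github.com/google/certificate-transparency-go/jsonclient"
)

func main() {
	// Illustrative log URL; any RFC 6962 log endpoint would do.
	c, err := client.New("https://ct.googleapis.com/pilot", nil, jsonclient.Options{})
	if err != nil {
		log.Fatal(err)
	}

	// Fetch the log's current Signed Tree Head (RFC 6962, section 4.3).
	sth, err := c.GetSTH(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("tree size: %d\n", sth.TreeSize)
}
```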
## Trillian CT Personality
The `trillian/` subdirectory holds code and scripts for running a CT Log based
on the [Trillian](https://github.com/google/trillian) general transparency Log.
The main code for the CT personality is held in `trillian/ctfe`; this code
responds to HTTP requests on the
[CT API paths](https://tools.ietf.org/html/rfc6962#section-4) and translates
them to the equivalent gRPC API requests to the Trillian Log.
This obviously relies on the gRPC API definitions at
`github.com/google/trillian`; the code also uses common libraries from the
Trillian project for:
- exposing monitoring and statistics via an `interface` and corresponding
Prometheus implementation (`github.com/google/trillian/monitoring/...`)
- dealing with cryptographic keys (`github.com/google/trillian/crypto/...`).
The `trillian/integration/` directory holds scripts and tests for running the whole
system locally. In particular:
- `trillian/integration/ct_integration_test.sh` brings up local processes
running a Trillian Log server, signer and a CT personality, and exercises the
complete set of RFC 6962 API entrypoints.
- `trillian/integration/ct_hammer_test.sh` brings up a complete system and runs
a continuous randomized test of the CT entrypoints.
These scripts require a local database instance to be configured as described
in the [Trillian instructions](https://github.com/google/trillian#mysql-setup).
## Working on the Code
Developers who want to make changes to the codebase need some additional
dependencies and tools, described in the following sections. The
[Travis configuration](.travis.yml) for the codebase is also useful reference
for the required tools and scripts, as it may be more up-to-date than this
document.
### Rebuilding Generated Code
Some of the CT Go code is autogenerated from other files:
- [Protocol buffer](https://developers.google.com/protocol-buffers/) message
definitions are converted to `.pb.go` implementations.
- A mock implementation of the Trillian gRPC API (in `trillian/mockclient`) is
created with [GoMock](https://github.com/golang/mock).
Re-generating mock or protobuffer files is only needed if you're changing
the original files; if you do, you'll need to install the prerequisites:
- `mockgen` tool from https://github.com/golang/mock
- `protoc`, [Go support for protoc](https://github.com/golang/protobuf) (see
documentation linked from the
[protobuf site](https://github.com/google/protobuf))
and run the following:
```bash
go generate -x ./... # hunts for //go:generate comments and runs them
```
### Updating Vendor Code
The codebase includes a couple of external projects under the `vendor/`
subdirectory, to ensure that builds use a fixed version (typically because the
upstream repository does not guarantee back-compatibility between the tip
`master` branch and the current stable release). See
[instructions in the Trillian repo](https://github.com/google/trillian#updating-vendor-code)
for how to update vendored subtrees.
### Running Codebase Checks
The [`scripts/presubmit.sh`](scripts/presubmit.sh) script runs various tools
and tests over the codebase.
```bash
# Install gometalinter and all linters
go get -u github.com/alecthomas/gometalinter
gometalinter --install
# Run code generation, build, test and linters
./scripts/presubmit.sh
# Run build, test and linters but skip code generation
./scripts/presubmit.sh --no-generate
# Or just run the linters alone:
gometalinter --config=gometalinter.json ./...
```

View file

@ -8,12 +8,10 @@
// See also ``A Layman's Guide to a Subset of ASN.1, BER, and DER,''
// http://luca.ntop.org/Teaching/Appunti/asn1.html.
//
// START CT CHANGES
// This is a fork of the Go standard library ASN.1 implementation
// (encoding/asn1). The main difference is that this version tries to correct
// for errors (e.g. use of tagPrintableString when the string data is really
// ISO8859-1 - a common error present in many x509 certificates in the wild.)
// END CT CHANGES
package asn1
// ASN.1 is a syntax for specifying abstract objects and BER, DER, PER, XER etc
@ -27,40 +25,53 @@ package asn1
// everything by any means.
import (
// START CT CHANGES
"errors"
"fmt"
// END CT CHANGES
"math"
"math/big"
"reflect"
// START CT CHANGES
"strconv"
"strings"
// END CT CHANGES
"time"
"unicode/utf8"
)
// A StructuralError suggests that the ASN.1 data is valid, but the Go type
// which is receiving it doesn't match.
type StructuralError struct {
Msg string
Msg string
Field string
}
func (e StructuralError) Error() string { return "asn1: structure error: " + e.Msg }
func (e StructuralError) Error() string {
var prefix string
if e.Field != "" {
prefix = e.Field + ": "
}
return "asn1: structure error: " + prefix + e.Msg
}
// A SyntaxError suggests that the ASN.1 data is invalid.
type SyntaxError struct {
Msg string
Msg string
Field string
}
func (e SyntaxError) Error() string { return "asn1: syntax error: " + e.Msg }
func (e SyntaxError) Error() string {
var prefix string
if e.Field != "" {
prefix = e.Field + ": "
}
return "asn1: syntax error: " + prefix + e.Msg
}
// We start by dealing with each of the primitive types in turn.
// BOOLEAN
func parseBool(bytes []byte) (ret bool, err error) {
func parseBool(bytes []byte, fieldName string) (ret bool, err error) {
if len(bytes) != 1 {
err = SyntaxError{"invalid boolean"}
err = SyntaxError{"invalid boolean", fieldName}
return
}
@ -73,7 +84,7 @@ func parseBool(bytes []byte) (ret bool, err error) {
case 0xff:
ret = true
default:
err = SyntaxError{"invalid boolean"}
err = SyntaxError{"invalid boolean", fieldName}
}
return
@ -81,12 +92,31 @@ func parseBool(bytes []byte) (ret bool, err error) {
// INTEGER
// checkInteger returns nil if the given bytes are a valid DER-encoded
// INTEGER and an error otherwise.
func checkInteger(bytes []byte, fieldName string) error {
if len(bytes) == 0 {
return StructuralError{"empty integer", fieldName}
}
if len(bytes) == 1 {
return nil
}
if (bytes[0] == 0 && bytes[1]&0x80 == 0) || (bytes[0] == 0xff && bytes[1]&0x80 == 0x80) {
return StructuralError{"integer not minimally-encoded", fieldName}
}
return nil
}
// parseInt64 treats the given bytes as a big-endian, signed integer and
// returns the result.
func parseInt64(bytes []byte) (ret int64, err error) {
func parseInt64(bytes []byte, fieldName string) (ret int64, err error) {
err = checkInteger(bytes, fieldName)
if err != nil {
return
}
if len(bytes) > 8 {
// We'll overflow an int64 in this case.
err = StructuralError{"integer too large"}
err = StructuralError{"integer too large", fieldName}
return
}
for bytesRead := 0; bytesRead < len(bytes); bytesRead++ {
@ -102,13 +132,16 @@ func parseInt64(bytes []byte) (ret int64, err error) {
// parseInt treats the given bytes as a big-endian, signed integer and returns
// the result.
func parseInt32(bytes []byte) (int32, error) {
ret64, err := parseInt64(bytes)
func parseInt32(bytes []byte, fieldName string) (int32, error) {
if err := checkInteger(bytes, fieldName); err != nil {
return 0, err
}
ret64, err := parseInt64(bytes, fieldName)
if err != nil {
return 0, err
}
if ret64 != int64(int32(ret64)) {
return 0, StructuralError{"integer too large"}
return 0, StructuralError{"integer too large", fieldName}
}
return int32(ret64), nil
}
@ -117,7 +150,10 @@ var bigOne = big.NewInt(1)
// parseBigInt treats the given bytes as a big-endian, signed integer and returns
// the result.
func parseBigInt(bytes []byte) *big.Int {
func parseBigInt(bytes []byte, fieldName string) (*big.Int, error) {
if err := checkInteger(bytes, fieldName); err != nil {
return nil, err
}
ret := new(big.Int)
if len(bytes) > 0 && bytes[0]&0x80 == 0x80 {
// This is a negative number.
@ -128,10 +164,10 @@ func parseBigInt(bytes []byte) *big.Int {
ret.SetBytes(notBytes)
ret.Add(ret, bigOne)
ret.Neg(ret)
return ret
return ret, nil
}
ret.SetBytes(bytes)
return ret
return ret, nil
}
// BIT STRING
@ -174,16 +210,16 @@ func (b BitString) RightAlign() []byte {
}
// parseBitString parses an ASN.1 bit string from the given byte slice and returns it.
func parseBitString(bytes []byte) (ret BitString, err error) {
func parseBitString(bytes []byte, fieldName string) (ret BitString, err error) {
if len(bytes) == 0 {
err = SyntaxError{"zero length BIT STRING"}
err = SyntaxError{"zero length BIT STRING", fieldName}
return
}
paddingBits := int(bytes[0])
if paddingBits > 7 ||
len(bytes) == 1 && paddingBits > 0 ||
bytes[len(bytes)-1]&((1<<bytes[0])-1) != 0 {
err = SyntaxError{"invalid padding bits in BIT STRING"}
err = SyntaxError{"invalid padding bits in BIT STRING", fieldName}
return
}
ret.BitLength = (len(bytes)-1)*8 - paddingBits
@ -191,6 +227,14 @@ func parseBitString(bytes []byte) (ret BitString, err error) {
return
}
// NULL
// NullRawValue is a RawValue with its Tag set to the ASN.1 NULL type tag (5).
var NullRawValue = RawValue{Tag: TagNull}
// NullBytes contains bytes representing the DER-encoded ASN.1 NULL type.
var NullBytes = []byte{TagNull, 0}
// OBJECT IDENTIFIER
// An ObjectIdentifier represents an ASN.1 OBJECT IDENTIFIER.
@ -210,12 +254,25 @@ func (oi ObjectIdentifier) Equal(other ObjectIdentifier) bool {
return true
}
func (oi ObjectIdentifier) String() string {
var s string
for i, v := range oi {
if i > 0 {
s += "."
}
s += strconv.Itoa(v)
}
return s
}
// parseObjectIdentifier parses an OBJECT IDENTIFIER from the given bytes and
// returns it. An object identifier is a sequence of variable length integers
// that are assigned in a hierarchy.
func parseObjectIdentifier(bytes []byte) (s []int, err error) {
func parseObjectIdentifier(bytes []byte, fieldName string) (s []int, err error) {
if len(bytes) == 0 {
err = SyntaxError{"zero length OBJECT IDENTIFIER"}
err = SyntaxError{"zero length OBJECT IDENTIFIER", fieldName}
return
}
@ -227,7 +284,7 @@ func parseObjectIdentifier(bytes []byte) (s []int, err error) {
// According to this packing, value1 can take the values 0, 1 and 2 only.
// When value1 = 0 or value1 = 1, then value2 is <= 39. When value1 = 2,
// then there are no restrictions on value2.
v, offset, err := parseBase128Int(bytes, 0)
v, offset, err := parseBase128Int(bytes, 0, fieldName)
if err != nil {
return
}
@ -241,7 +298,7 @@ func parseObjectIdentifier(bytes []byte) (s []int, err error) {
i := 2
for ; offset < len(bytes); i++ {
v, offset, err = parseBase128Int(bytes, offset)
v, offset, err = parseBase128Int(bytes, offset, fieldName)
if err != nil {
return
}
@ -263,22 +320,30 @@ type Flag bool
// parseBase128Int parses a base-128 encoded int from the given offset in the
// given byte slice. It returns the value and the new offset.
func parseBase128Int(bytes []byte, initOffset int) (ret, offset int, err error) {
func parseBase128Int(bytes []byte, initOffset int, fieldName string) (ret, offset int, err error) {
offset = initOffset
var ret64 int64
for shifted := 0; offset < len(bytes); shifted++ {
if shifted > 4 {
err = StructuralError{"base 128 integer too large"}
// 5 * 7 bits per byte == 35 bits of data
// Thus the representation is either non-minimal or too large for an int32
if shifted == 5 {
err = StructuralError{"base 128 integer too large", fieldName}
return
}
ret <<= 7
ret64 <<= 7
b := bytes[offset]
ret |= int(b & 0x7f)
ret64 |= int64(b & 0x7f)
offset++
if b&0x80 == 0 {
ret = int(ret64)
// Ensure that the returned value fits in an int on all platforms
if ret64 > math.MaxInt32 {
err = StructuralError{"base 128 integer too large", fieldName}
}
return
}
}
err = SyntaxError{"truncated base 128 integer"}
err = SyntaxError{"truncated base 128 integer", fieldName}
return
}
@ -286,11 +351,23 @@ func parseBase128Int(bytes []byte, initOffset int) (ret, offset int, err error)
func parseUTCTime(bytes []byte) (ret time.Time, err error) {
s := string(bytes)
ret, err = time.Parse("0601021504Z0700", s)
formatStr := "0601021504Z0700"
ret, err = time.Parse(formatStr, s)
if err != nil {
ret, err = time.Parse("060102150405Z0700", s)
formatStr = "060102150405Z0700"
ret, err = time.Parse(formatStr, s)
}
if err == nil && ret.Year() >= 2050 {
if err != nil {
return
}
if serialized := ret.Format(formatStr); serialized != s {
err = fmt.Errorf("asn1: time did not serialize back to the original value and may be invalid: given %q, but serialized as %q", s, serialized)
return
}
if ret.Year() >= 2050 {
// UTCTime only encodes times prior to 2050. See https://tools.ietf.org/html/rfc5280#section-4.1.2.5.1
ret = ret.AddDate(-100, 0, 0)
}
@ -301,17 +378,47 @@ func parseUTCTime(bytes []byte) (ret time.Time, err error) {
// parseGeneralizedTime parses the GeneralizedTime from the given byte slice
// and returns the resulting time.
func parseGeneralizedTime(bytes []byte) (ret time.Time, err error) {
return time.Parse("20060102150405Z0700", string(bytes))
const formatStr = "20060102150405Z0700"
s := string(bytes)
if ret, err = time.Parse(formatStr, s); err != nil {
return
}
if serialized := ret.Format(formatStr); serialized != s {
err = fmt.Errorf("asn1: time did not serialize back to the original value and may be invalid: given %q, but serialized as %q", s, serialized)
}
return
}
// NumericString
// parseNumericString parses an ASN.1 NumericString from the given byte array
// and returns it.
func parseNumericString(bytes []byte, fieldName string) (ret string, err error) {
for _, b := range bytes {
if !isNumeric(b) {
return "", SyntaxError{"NumericString contains invalid character", fieldName}
}
}
return string(bytes), nil
}
// isNumeric reports whether the given b is in the ASN.1 NumericString set.
func isNumeric(b byte) bool {
return '0' <= b && b <= '9' ||
b == ' '
}
// PrintableString
// parsePrintableString parses a ASN.1 PrintableString from the given byte
// parsePrintableString parses an ASN.1 PrintableString from the given byte
// array and returns it.
func parsePrintableString(bytes []byte) (ret string, err error) {
func parsePrintableString(bytes []byte, fieldName string) (ret string, err error) {
for _, b := range bytes {
if !isPrintable(b) {
err = SyntaxError{"PrintableString contains invalid character"}
if !isPrintable(b, allowAsterisk, allowAmpersand) {
err = SyntaxError{"PrintableString contains invalid character", fieldName}
return
}
}
@ -319,8 +426,21 @@ func parsePrintableString(bytes []byte) (ret string, err error) {
return
}
// isPrintable returns true iff the given b is in the ASN.1 PrintableString set.
func isPrintable(b byte) bool {
type asteriskFlag bool
type ampersandFlag bool
const (
allowAsterisk asteriskFlag = true
rejectAsterisk asteriskFlag = false
allowAmpersand ampersandFlag = true
rejectAmpersand ampersandFlag = false
)
// isPrintable reports whether the given b is in the ASN.1 PrintableString set.
// If asterisk is allowAsterisk then '*' is also allowed, reflecting existing
// practice. If ampersand is allowAmpersand then '&' is allowed as well.
func isPrintable(b byte, asterisk asteriskFlag, ampersand ampersandFlag) bool {
return 'a' <= b && b <= 'z' ||
'A' <= b && b <= 'Z' ||
'0' <= b && b <= '9' ||
@ -333,17 +453,22 @@ func isPrintable(b byte) bool {
// This is technically not allowed in a PrintableString.
// However, x509 certificates with wildcard strings don't
// always use the correct string type so we permit it.
b == '*'
(bool(asterisk) && b == '*') ||
// This is not technically allowed either. However, not
// only is it relatively common, but there are also a
// handful of CA certificates that contain it. At least
// one of which will not expire until 2027.
(bool(ampersand) && b == '&')
}
// IA5String
// parseIA5String parses a ASN.1 IA5String (ASCII string) from the given
// parseIA5String parses an ASN.1 IA5String (ASCII string) from the given
// byte slice and returns it.
func parseIA5String(bytes []byte) (ret string, err error) {
func parseIA5String(bytes []byte, fieldName string) (ret string, err error) {
for _, b := range bytes {
if b >= 0x80 {
err = SyntaxError{"IA5String contains invalid character"}
if b >= utf8.RuneSelf {
err = SyntaxError{"IA5String contains invalid character", fieldName}
return
}
}
@ -353,7 +478,7 @@ func parseIA5String(bytes []byte) (ret string, err error) {
// T61String
// parseT61String parses a ASN.1 T61String (8-bit clean string) from the given
// parseT61String parses an ASN.1 T61String (8-bit clean string) from the given
// byte slice and returns it.
func parseT61String(bytes []byte) (ret string, err error) {
return string(bytes), nil
@ -361,9 +486,12 @@ func parseT61String(bytes []byte) (ret string, err error) {
// UTF8String
// parseUTF8String parses a ASN.1 UTF8String (raw UTF-8) from the given byte
// parseUTF8String parses an ASN.1 UTF8String (raw UTF-8) from the given byte
// array and returns it.
func parseUTF8String(bytes []byte) (ret string, err error) {
if !utf8.Valid(bytes) {
return "", errors.New("asn1: invalid UTF-8 string")
}
return string(bytes), nil
}
@ -386,8 +514,14 @@ type RawContent []byte
// into a byte slice. It returns the parsed data and the new offset. SET and
// SET OF (tag 17) are mapped to SEQUENCE and SEQUENCE OF (tag 16) since we
// don't distinguish between ordered and unordered objects in this code.
func parseTagAndLength(bytes []byte, initOffset int) (ret tagAndLength, offset int, err error) {
func parseTagAndLength(bytes []byte, initOffset int, fieldName string) (ret tagAndLength, offset int, err error) {
offset = initOffset
// parseTagAndLength should not be called without at least a single
// byte to read. Thus this check is for robustness:
if offset >= len(bytes) {
err = errors.New("asn1: internal error in parseTagAndLength")
return
}
b := bytes[offset]
offset++
ret.class = int(b >> 6)
@ -397,13 +531,18 @@ func parseTagAndLength(bytes []byte, initOffset int) (ret tagAndLength, offset i
// If the bottom five bits are set, then the tag number is actually base 128
// encoded afterwards
if ret.tag == 0x1f {
ret.tag, offset, err = parseBase128Int(bytes, offset)
ret.tag, offset, err = parseBase128Int(bytes, offset, fieldName)
if err != nil {
return
}
// Tags should be encoded in minimal form.
if ret.tag < 0x1f {
err = SyntaxError{"non-minimal tag", fieldName}
return
}
}
if offset >= len(bytes) {
err = SyntaxError{"truncated tag or length"}
err = SyntaxError{"truncated tag or length", fieldName}
return
}
b = bytes[offset]
@ -415,13 +554,13 @@ func parseTagAndLength(bytes []byte, initOffset int) (ret tagAndLength, offset i
// Bottom 7 bits give the number of length bytes to follow.
numBytes := int(b & 0x7f)
if numBytes == 0 {
err = SyntaxError{"indefinite length found (not DER)"}
err = SyntaxError{"indefinite length found (not DER)", fieldName}
return
}
ret.length = 0
for i := 0; i < numBytes; i++ {
if offset >= len(bytes) {
err = SyntaxError{"truncated tag or length"}
err = SyntaxError{"truncated tag or length", fieldName}
return
}
b = bytes[offset]
@ -429,17 +568,22 @@ func parseTagAndLength(bytes []byte, initOffset int) (ret tagAndLength, offset i
if ret.length >= 1<<23 {
// We can't shift ret.length up without
// overflowing.
err = StructuralError{"length too large"}
err = StructuralError{"length too large", fieldName}
return
}
ret.length <<= 8
ret.length |= int(b)
if ret.length == 0 {
// DER requires that lengths be minimal.
err = StructuralError{"superfluous leading zeros in length"}
err = StructuralError{"superfluous leading zeros in length", fieldName}
return
}
}
// Short lengths must be encoded in short form.
if ret.length < 0x80 {
err = StructuralError{"non-minimal length", fieldName}
return
}
}
return
@ -448,10 +592,10 @@ func parseTagAndLength(bytes []byte, initOffset int) (ret tagAndLength, offset i
// parseSequenceOf is used for SEQUENCE OF and SET OF values. It tries to parse
// a number of ASN.1 values from the given byte slice and returns them as a
// slice of Go values of the given type.
func parseSequenceOf(bytes []byte, sliceType reflect.Type, elemType reflect.Type) (ret reflect.Value, err error) {
expectedTag, compoundType, ok := getUniversalType(elemType)
func parseSequenceOf(bytes []byte, sliceType reflect.Type, elemType reflect.Type, fieldName string) (ret reflect.Value, err error) {
matchAny, expectedTag, compoundType, ok := getUniversalType(elemType)
if !ok {
err = StructuralError{"unknown Go type for slice"}
err = StructuralError{"unknown Go type for slice", fieldName}
return
}
@ -460,21 +604,27 @@ func parseSequenceOf(bytes []byte, sliceType reflect.Type, elemType reflect.Type
numElements := 0
for offset := 0; offset < len(bytes); {
var t tagAndLength
t, offset, err = parseTagAndLength(bytes, offset)
t, offset, err = parseTagAndLength(bytes, offset, fieldName)
if err != nil {
return
}
// We pretend that GENERAL STRINGs are PRINTABLE STRINGs so
// that a sequence of them can be parsed into a []string.
if t.tag == tagGeneralString {
t.tag = tagPrintableString
switch t.tag {
case TagIA5String, TagGeneralString, TagT61String, TagUTF8String, TagNumericString:
// We pretend that various other string types are
// PRINTABLE STRINGs so that a sequence of them can be
// parsed into a []string.
t.tag = TagPrintableString
case TagGeneralizedTime, TagUTCTime:
// Likewise, both time types are treated the same.
t.tag = TagUTCTime
}
if t.class != classUniversal || t.isCompound != compoundType || t.tag != expectedTag {
err = StructuralError{"sequence tag mismatch"}
if !matchAny && (t.class != ClassUniversal || t.isCompound != compoundType || t.tag != expectedTag) {
err = StructuralError{fmt.Sprintf("sequence tag mismatch (got:%+v, want:0/%d/%t)", t, expectedTag, compoundType), fieldName}
return
}
if invalidLength(offset, t.length, len(bytes)) {
err = SyntaxError{"truncated sequence"}
err = SyntaxError{"truncated sequence", fieldName}
return
}
offset += t.length
@ -509,8 +659,6 @@ func invalidLength(offset, length, sliceLength int) bool {
return offset+length < offset || offset+length > sliceLength
}
// START CT CHANGES
// Tests whether the data in |bytes| would be a valid ISO8859-1 string.
// Clearly, a sequence of bytes comprised solely of valid ISO8859-1
// codepoints does not imply that the encoding MUST be ISO8859-1, rather that
@ -556,8 +704,6 @@ func iso8859_1ToUTF8(bytes []byte) string {
return string(buf)
}
// END CT CHANGES
// parseField is the main parsing function. Given a byte slice and an offset
// into the array, it will try to parse a suitable ASN.1 value out and store it
// in the given Value.
@ -568,46 +714,28 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
// If we have run out of data, it may be that there are optional elements at the end.
if offset == len(bytes) {
if !setDefaultValue(v, params) {
err = SyntaxError{"sequence truncated"}
err = SyntaxError{"sequence truncated", params.name}
}
return
}
// Deal with raw values.
if fieldType == rawValueType {
var t tagAndLength
t, offset, err = parseTagAndLength(bytes, offset)
if err != nil {
return
}
if invalidLength(offset, t.length, len(bytes)) {
err = SyntaxError{"data truncated"}
return
}
result := RawValue{t.class, t.tag, t.isCompound, bytes[offset : offset+t.length], bytes[initOffset : offset+t.length]}
offset += t.length
v.Set(reflect.ValueOf(result))
return
}
// Deal with the ANY type.
if ifaceType := fieldType; ifaceType.Kind() == reflect.Interface && ifaceType.NumMethod() == 0 {
var t tagAndLength
t, offset, err = parseTagAndLength(bytes, offset)
t, offset, err = parseTagAndLength(bytes, offset, params.name)
if err != nil {
return
}
if invalidLength(offset, t.length, len(bytes)) {
err = SyntaxError{"data truncated"}
err = SyntaxError{"data truncated", params.name}
return
}
var result interface{}
if !t.isCompound && t.class == classUniversal {
if !t.isCompound && t.class == ClassUniversal {
innerBytes := bytes[offset : offset+t.length]
switch t.tag {
case tagPrintableString:
result, err = parsePrintableString(innerBytes)
// START CT CHANGES
case TagPrintableString:
result, err = parsePrintableString(innerBytes, params.name)
if err != nil && strings.Contains(err.Error(), "PrintableString contains invalid character") {
// Probably an ISO8859-1 string stuffed in, check if it
// would be valid and assume that's what's happened if so,
@ -623,22 +751,25 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
err = errors.New("PrintableString contains invalid character, but couldn't determine correct String type.")
}
}
// END CT CHANGES
case tagIA5String:
result, err = parseIA5String(innerBytes)
case tagT61String:
case TagNumericString:
result, err = parseNumericString(innerBytes, params.name)
case TagIA5String:
result, err = parseIA5String(innerBytes, params.name)
case TagT61String:
result, err = parseT61String(innerBytes)
case tagUTF8String:
case TagUTF8String:
result, err = parseUTF8String(innerBytes)
case tagInteger:
result, err = parseInt64(innerBytes)
case tagBitString:
result, err = parseBitString(innerBytes)
case tagOID:
result, err = parseObjectIdentifier(innerBytes)
case tagUTCTime:
case TagInteger:
result, err = parseInt64(innerBytes, params.name)
case TagBitString:
result, err = parseBitString(innerBytes, params.name)
case TagOID:
result, err = parseObjectIdentifier(innerBytes, params.name)
case TagUTCTime:
result, err = parseUTCTime(innerBytes)
case tagOctetString:
case TagGeneralizedTime:
result, err = parseGeneralizedTime(innerBytes)
case TagOctetString:
result = innerBytes
default:
// If we don't know how to handle the type, we just leave Value as nil.
@ -653,30 +784,31 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
}
return
}
universalTag, compoundType, ok1 := getUniversalType(fieldType)
if !ok1 {
err = StructuralError{fmt.Sprintf("unknown Go type: %v", fieldType)}
return
}
t, offset, err := parseTagAndLength(bytes, offset)
t, offset, err := parseTagAndLength(bytes, offset, params.name)
if err != nil {
return
}
if params.explicit {
expectedClass := classContextSpecific
expectedClass := ClassContextSpecific
if params.application {
expectedClass = classApplication
expectedClass = ClassApplication
}
if offset == len(bytes) {
err = StructuralError{"explicit tag has no child", params.name}
return
}
if t.class == expectedClass && t.tag == *params.tag && (t.length == 0 || t.isCompound) {
if t.length > 0 {
t, offset, err = parseTagAndLength(bytes, offset)
if fieldType == rawValueType {
// The inner element should not be parsed for RawValues.
} else if t.length > 0 {
t, offset, err = parseTagAndLength(bytes, offset, params.name)
if err != nil {
return
}
} else {
if fieldType != flagType {
err = StructuralError{"zero length explicit tag was not an asn1.Flag"}
err = StructuralError{"zero length explicit tag was not an asn1.Flag", params.name}
return
}
v.SetBool(true)
@ -688,55 +820,73 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
if ok {
offset = initOffset
} else {
err = StructuralError{"explicitly tagged member didn't match"}
err = StructuralError{"explicitly tagged member didn't match", params.name}
}
return
}
}
matchAny, universalTag, compoundType, ok1 := getUniversalType(fieldType)
if !ok1 {
err = StructuralError{fmt.Sprintf("unknown Go type: %v", fieldType), params.name}
return
}
// Special case for strings: all the ASN.1 string types map to the Go
// type string. getUniversalType returns the tag for PrintableString
// when it sees a string, so if we see a different string type on the
// wire, we change the universal type to match.
if universalTag == tagPrintableString {
switch t.tag {
case tagIA5String, tagGeneralString, tagT61String, tagUTF8String:
universalTag = t.tag
if universalTag == TagPrintableString {
if t.class == ClassUniversal {
switch t.tag {
case TagIA5String, TagGeneralString, TagT61String, TagUTF8String, TagNumericString:
universalTag = t.tag
}
} else if params.stringType != 0 {
universalTag = params.stringType
}
}
// Special case for time: UTCTime and GeneralizedTime both map to the
// Go type time.Time.
if universalTag == tagUTCTime && t.tag == tagGeneralizedTime {
universalTag = tagGeneralizedTime
if universalTag == TagUTCTime && t.tag == TagGeneralizedTime && t.class == ClassUniversal {
universalTag = TagGeneralizedTime
}
expectedClass := classUniversal
if params.set {
universalTag = TagSet
}
matchAnyClassAndTag := matchAny
expectedClass := ClassUniversal
expectedTag := universalTag
if !params.explicit && params.tag != nil {
expectedClass = classContextSpecific
expectedClass = ClassContextSpecific
expectedTag = *params.tag
matchAnyClassAndTag = false
}
if !params.explicit && params.application && params.tag != nil {
expectedClass = classApplication
expectedClass = ClassApplication
expectedTag = *params.tag
matchAnyClassAndTag = false
}
// We have unwrapped any explicit tagging at this point.
if t.class != expectedClass || t.tag != expectedTag || t.isCompound != compoundType {
if !matchAnyClassAndTag && (t.class != expectedClass || t.tag != expectedTag) ||
(!matchAny && t.isCompound != compoundType) {
// Tags don't match. Again, it could be an optional element.
ok := setDefaultValue(v, params)
if ok {
offset = initOffset
} else {
err = StructuralError{fmt.Sprintf("tags don't match (%d vs %+v) %+v %s @%d", expectedTag, t, params, fieldType.Name(), offset)}
err = StructuralError{fmt.Sprintf("tags don't match (%d vs %+v) %+v %s @%d", expectedTag, t, params, fieldType.Name(), offset), params.name}
}
return
}
if invalidLength(offset, t.length, len(bytes)) {
err = SyntaxError{"data truncated"}
err = SyntaxError{"data truncated", params.name}
return
}
innerBytes := bytes[offset : offset+t.length]
@ -744,8 +894,12 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
// We deal with the structures defined in this package first.
switch fieldType {
case rawValueType:
result := RawValue{t.class, t.tag, t.isCompound, innerBytes, bytes[initOffset:offset]}
v.Set(reflect.ValueOf(result))
return
case objectIdentifierType:
newSlice, err1 := parseObjectIdentifier(innerBytes)
newSlice, err1 := parseObjectIdentifier(innerBytes, params.name)
v.Set(reflect.MakeSlice(v.Type(), len(newSlice), len(newSlice)))
if err1 == nil {
reflect.Copy(v, reflect.ValueOf(newSlice))
@ -753,7 +907,7 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
err = err1
return
case bitStringType:
bs, err1 := parseBitString(innerBytes)
bs, err1 := parseBitString(innerBytes, params.name)
if err1 == nil {
v.Set(reflect.ValueOf(bs))
}
@ -762,7 +916,7 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
case timeType:
var time time.Time
var err1 error
if universalTag == tagUTCTime {
if universalTag == TagUTCTime {
time, err1 = parseUTCTime(innerBytes)
} else {
time, err1 = parseGeneralizedTime(innerBytes)
@ -773,7 +927,7 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
err = err1
return
case enumeratedType:
parsedInt, err1 := parseInt32(innerBytes)
parsedInt, err1 := parseInt32(innerBytes, params.name)
if err1 == nil {
v.SetInt(int64(parsedInt))
}
@ -783,13 +937,16 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
v.SetBool(true)
return
case bigIntType:
parsedInt := parseBigInt(innerBytes)
v.Set(reflect.ValueOf(parsedInt))
parsedInt, err1 := parseBigInt(innerBytes, params.name)
if err1 == nil {
v.Set(reflect.ValueOf(parsedInt))
}
err = err1
return
}
switch val := v; val.Kind() {
case reflect.Bool:
parsedBool, err1 := parseBool(innerBytes)
parsedBool, err1 := parseBool(innerBytes, params.name)
if err1 == nil {
val.SetBool(parsedBool)
}
@ -797,13 +954,13 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
return
case reflect.Int, reflect.Int32, reflect.Int64:
if val.Type().Size() == 4 {
parsedInt, err1 := parseInt32(innerBytes)
parsedInt, err1 := parseInt32(innerBytes, params.name)
if err1 == nil {
val.SetInt(int64(parsedInt))
}
err = err1
} else {
parsedInt, err1 := parseInt64(innerBytes)
parsedInt, err1 := parseInt64(innerBytes, params.name)
if err1 == nil {
val.SetInt(parsedInt)
}
@ -814,6 +971,13 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
case reflect.Struct:
structType := fieldType
for i := 0; i < structType.NumField(); i++ {
if structType.Field(i).PkgPath != "" {
err = StructuralError{"struct contains unexported fields", structType.Field(i).Name}
return
}
}
if structType.NumField() > 0 &&
structType.Field(0).Type == rawContentsType {
bytes := bytes[initOffset:offset]
@ -826,7 +990,9 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
if i == 0 && field.Type == rawContentsType {
continue
}
innerOffset, err = parseField(val.Field(i), innerBytes, innerOffset, parseFieldParameters(field.Tag.Get("asn1")))
innerParams := parseFieldParameters(field.Tag.Get("asn1"))
innerParams.name = field.Name
innerOffset, err = parseField(val.Field(i), innerBytes, innerOffset, innerParams)
if err != nil {
return
}
@ -842,7 +1008,7 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
reflect.Copy(val, reflect.ValueOf(innerBytes))
return
}
newSlice, err1 := parseSequenceOf(innerBytes, sliceType, sliceType.Elem())
newSlice, err1 := parseSequenceOf(innerBytes, sliceType, sliceType.Elem(), params.name)
if err1 == nil {
val.Set(newSlice)
}
@ -851,34 +1017,47 @@ func parseField(v reflect.Value, bytes []byte, initOffset int, params fieldParam
case reflect.String:
var v string
switch universalTag {
case tagPrintableString:
v, err = parsePrintableString(innerBytes)
case tagIA5String:
v, err = parseIA5String(innerBytes)
case tagT61String:
case TagPrintableString:
v, err = parsePrintableString(innerBytes, params.name)
case TagNumericString:
v, err = parseNumericString(innerBytes, params.name)
case TagIA5String:
v, err = parseIA5String(innerBytes, params.name)
case TagT61String:
v, err = parseT61String(innerBytes)
case tagUTF8String:
case TagUTF8String:
v, err = parseUTF8String(innerBytes)
case tagGeneralString:
case TagGeneralString:
// GeneralString is specified in ISO-2022/ECMA-35,
// A brief review suggests that it includes structures
// that allow the encoding to change midstring and
// such. We give up and pass it as an 8-bit string.
v, err = parseT61String(innerBytes)
default:
err = SyntaxError{fmt.Sprintf("internal error: unknown string type %d", universalTag)}
err = SyntaxError{fmt.Sprintf("internal error: unknown string type %d", universalTag), params.name}
}
if err == nil {
val.SetString(v)
}
return
}
err = StructuralError{"unsupported: " + v.Type().String()}
err = StructuralError{"unsupported: " + v.Type().String(), params.name}
return
}
// canHaveDefaultValue reports whether k is a Kind that we will set a default
// value for. (A signed integer, essentially.)
func canHaveDefaultValue(k reflect.Kind) bool {
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return true
}
return false
}
// setDefaultValue is used to install a default value, from a tag string, into
// a Value. It is successful is the field was optional, even if a default value
// a Value. It is successful if the field was optional, even if a default value
// wasn't provided or it failed to install it into the Value.
func setDefaultValue(v reflect.Value, params fieldParameters) (ok bool) {
if !params.optional {
@ -888,9 +1067,8 @@ func setDefaultValue(v reflect.Value, params fieldParameters) (ok bool) {
if params.defaultValue == nil {
return
}
switch val := v; val.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
val.SetInt(*params.defaultValue)
if canHaveDefaultValue(v.Kind()) {
v.SetInt(*params.defaultValue)
}
return
}
@ -916,7 +1094,7 @@ func setDefaultValue(v reflect.Value, params fieldParameters) (ok bool) {
//
// An ASN.1 UTCTIME or GENERALIZEDTIME can be written to a time.Time.
//
// An ASN.1 PrintableString or IA5String can be written to a string.
// An ASN.1 PrintableString, IA5String, or NumericString can be written to a string.
//
// Any of the above ASN.1 values can be written to an interface{}.
// The value stored in the interface has the corresponding Go type.
@ -931,13 +1109,20 @@ func setDefaultValue(v reflect.Value, params fieldParameters) (ok bool) {
//
// The following tags on struct fields have special meaning to Unmarshal:
//
// optional marks the field as ASN.1 OPTIONAL
// [explicit] tag:x specifies the ASN.1 tag number; implies ASN.1 CONTEXT SPECIFIC
// default:x sets the default value for optional integer fields
// application specifies that an APPLICATION tag is used
// default:x sets the default value for optional integer fields (only used if optional is also present)
// explicit specifies that an additional, explicit tag wraps the implicit one
// optional marks the field as ASN.1 OPTIONAL
// set causes a SET, rather than a SEQUENCE type to be expected
// tag:x specifies the ASN.1 tag number; implies ASN.1 CONTEXT SPECIFIC
//
// If the type of the first field of a structure is RawContent then the raw
// ASN1 contents of the struct will be stored in it.
//
// If the type name of a slice element ends with "SET" then it's treated as if
// the "set" tag was set on it. This can be used with nested slices where a
// struct tag cannot be given.
//
// Other ASN.1 types are not supported; if it encounters them,
// Unmarshal returns a parse error.
func Unmarshal(b []byte, val interface{}) (rest []byte, err error) {

View file

@ -18,29 +18,33 @@ import (
// Here are some standard tags and classes
// ASN.1 tags represent the type of the following object.
const (
tagBoolean = 1
tagInteger = 2
tagBitString = 3
tagOctetString = 4
tagOID = 6
tagEnum = 10
tagUTF8String = 12
tagSequence = 16
tagSet = 17
tagPrintableString = 19
tagT61String = 20
tagIA5String = 22
tagUTCTime = 23
tagGeneralizedTime = 24
tagGeneralString = 27
TagBoolean = 1
TagInteger = 2
TagBitString = 3
TagOctetString = 4
TagNull = 5
TagOID = 6
TagEnum = 10
TagUTF8String = 12
TagSequence = 16
TagSet = 17
TagNumericString = 18
TagPrintableString = 19
TagT61String = 20
TagIA5String = 22
TagUTCTime = 23
TagGeneralizedTime = 24
TagGeneralString = 27
)
// ASN.1 class types represent the namespace of the tag.
const (
classUniversal = 0
classApplication = 1
classContextSpecific = 2
classPrivate = 3
ClassUniversal = 0
ClassApplication = 1
ClassContextSpecific = 2
ClassPrivate = 3
)
type tagAndLength struct {
@ -74,8 +78,10 @@ type fieldParameters struct {
defaultValue *int64 // a default value for INTEGER typed fields (maybe nil).
tag *int // the EXPLICIT or IMPLICIT tag (maybe nil).
stringType int // the string tag to use when marshaling.
timeType int // the time tag to use when marshaling.
set bool // true iff this should be encoded as a SET
omitEmpty bool // true iff this should be omitted if empty when marshaling.
name string // name of field for better diagnostics
// Invariants:
// if explicit is set, tag is non-nil.
@ -94,12 +100,18 @@ func parseFieldParameters(str string) (ret fieldParameters) {
if ret.tag == nil {
ret.tag = new(int)
}
case part == "generalized":
ret.timeType = TagGeneralizedTime
case part == "utc":
ret.timeType = TagUTCTime
case part == "ia5":
ret.stringType = tagIA5String
ret.stringType = TagIA5String
case part == "printable":
ret.stringType = tagPrintableString
ret.stringType = TagPrintableString
case part == "numeric":
ret.stringType = TagNumericString
case part == "utf8":
ret.stringType = tagUTF8String
ret.stringType = TagUTF8String
case strings.HasPrefix(part, "default:"):
i, err := strconv.ParseInt(part[8:], 10, 64)
if err == nil {
@ -128,36 +140,38 @@ func parseFieldParameters(str string) (ret fieldParameters) {
// Given a reflected Go type, getUniversalType returns the default tag number
// and expected compound flag.
func getUniversalType(t reflect.Type) (tagNumber int, isCompound, ok bool) {
func getUniversalType(t reflect.Type) (matchAny bool, tagNumber int, isCompound, ok bool) {
switch t {
case rawValueType:
return true, -1, false, true
case objectIdentifierType:
return tagOID, false, true
return false, TagOID, false, true
case bitStringType:
return tagBitString, false, true
return false, TagBitString, false, true
case timeType:
return tagUTCTime, false, true
return false, TagUTCTime, false, true
case enumeratedType:
return tagEnum, false, true
return false, TagEnum, false, true
case bigIntType:
return tagInteger, false, true
return false, TagInteger, false, true
}
switch t.Kind() {
case reflect.Bool:
return tagBoolean, false, true
return false, TagBoolean, false, true
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return tagInteger, false, true
return false, TagInteger, false, true
case reflect.Struct:
return tagSequence, true, true
return false, TagSequence, true, true
case reflect.Slice:
if t.Elem().Kind() == reflect.Uint8 {
return tagOctetString, false, true
return false, TagOctetString, false, true
}
if strings.HasSuffix(t.Name(), "SET") {
return tagSet, true, true
return false, TagSet, true, true
}
return tagSequence, true, true
return false, TagSequence, true, true
case reflect.String:
return tagPrintableString, false, true
return false, TagPrintableString, false, true
}
return 0, false, false
return false, 0, false, false
}

View file

@ -0,0 +1,689 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package asn1
import (
"errors"
"fmt"
"math/big"
"reflect"
"time"
"unicode/utf8"
)
var (
byte00Encoder encoder = byteEncoder(0x00)
byteFFEncoder encoder = byteEncoder(0xff)
)
// encoder represents an ASN.1 element that is waiting to be marshaled.
type encoder interface {
// Len returns the number of bytes needed to marshal this element.
Len() int
// Encode encodes this element by writing Len() bytes to dst.
Encode(dst []byte)
}
type byteEncoder byte
func (c byteEncoder) Len() int {
return 1
}
func (c byteEncoder) Encode(dst []byte) {
dst[0] = byte(c)
}
type bytesEncoder []byte
func (b bytesEncoder) Len() int {
return len(b)
}
func (b bytesEncoder) Encode(dst []byte) {
if copy(dst, b) != len(b) {
panic("internal error")
}
}
type stringEncoder string
func (s stringEncoder) Len() int {
return len(s)
}
func (s stringEncoder) Encode(dst []byte) {
if copy(dst, s) != len(s) {
panic("internal error")
}
}
type multiEncoder []encoder
func (m multiEncoder) Len() int {
var size int
for _, e := range m {
size += e.Len()
}
return size
}
func (m multiEncoder) Encode(dst []byte) {
var off int
for _, e := range m {
e.Encode(dst[off:])
off += e.Len()
}
}
type taggedEncoder struct {
// scratch contains temporary space for encoding the tag and length of
// an element in order to avoid extra allocations.
scratch [8]byte
tag encoder
body encoder
}
func (t *taggedEncoder) Len() int {
return t.tag.Len() + t.body.Len()
}
func (t *taggedEncoder) Encode(dst []byte) {
t.tag.Encode(dst)
t.body.Encode(dst[t.tag.Len():])
}
type int64Encoder int64
func (i int64Encoder) Len() int {
n := 1
for i > 127 {
n++
i >>= 8
}
for i < -128 {
n++
i >>= 8
}
return n
}
func (i int64Encoder) Encode(dst []byte) {
n := i.Len()
for j := 0; j < n; j++ {
dst[j] = byte(i >> uint((n-1-j)*8))
}
}
func base128IntLength(n int64) int {
if n == 0 {
return 1
}
l := 0
for i := n; i > 0; i >>= 7 {
l++
}
return l
}
func appendBase128Int(dst []byte, n int64) []byte {
l := base128IntLength(n)
for i := l - 1; i >= 0; i-- {
o := byte(n >> uint(i*7))
o &= 0x7f
if i != 0 {
o |= 0x80
}
dst = append(dst, o)
}
return dst
}
func makeBigInt(n *big.Int, fieldName string) (encoder, error) {
if n == nil {
return nil, StructuralError{"empty integer", fieldName}
}
if n.Sign() < 0 {
// A negative number has to be converted to two's-complement
// form. So we'll invert and subtract 1. If the
// most-significant-bit isn't set then we'll need to pad the
// beginning with 0xff in order to keep the number negative.
nMinus1 := new(big.Int).Neg(n)
nMinus1.Sub(nMinus1, bigOne)
bytes := nMinus1.Bytes()
for i := range bytes {
bytes[i] ^= 0xff
}
if len(bytes) == 0 || bytes[0]&0x80 == 0 {
return multiEncoder([]encoder{byteFFEncoder, bytesEncoder(bytes)}), nil
}
return bytesEncoder(bytes), nil
} else if n.Sign() == 0 {
// Zero is written as a single 0 zero rather than no bytes.
return byte00Encoder, nil
} else {
bytes := n.Bytes()
if len(bytes) > 0 && bytes[0]&0x80 != 0 {
// We'll have to pad this with 0x00 in order to stop it
// looking like a negative number.
return multiEncoder([]encoder{byte00Encoder, bytesEncoder(bytes)}), nil
}
return bytesEncoder(bytes), nil
}
}
func appendLength(dst []byte, i int) []byte {
n := lengthLength(i)
for ; n > 0; n-- {
dst = append(dst, byte(i>>uint((n-1)*8)))
}
return dst
}
func lengthLength(i int) (numBytes int) {
numBytes = 1
for i > 255 {
numBytes++
i >>= 8
}
return
}
func appendTagAndLength(dst []byte, t tagAndLength) []byte {
b := uint8(t.class) << 6
if t.isCompound {
b |= 0x20
}
if t.tag >= 31 {
b |= 0x1f
dst = append(dst, b)
dst = appendBase128Int(dst, int64(t.tag))
} else {
b |= uint8(t.tag)
dst = append(dst, b)
}
if t.length >= 128 {
l := lengthLength(t.length)
dst = append(dst, 0x80|byte(l))
dst = appendLength(dst, t.length)
} else {
dst = append(dst, byte(t.length))
}
return dst
}
type bitStringEncoder BitString
func (b bitStringEncoder) Len() int {
return len(b.Bytes) + 1
}
func (b bitStringEncoder) Encode(dst []byte) {
dst[0] = byte((8 - b.BitLength%8) % 8)
if copy(dst[1:], b.Bytes) != len(b.Bytes) {
panic("internal error")
}
}
type oidEncoder []int
func (oid oidEncoder) Len() int {
l := base128IntLength(int64(oid[0]*40 + oid[1]))
for i := 2; i < len(oid); i++ {
l += base128IntLength(int64(oid[i]))
}
return l
}
func (oid oidEncoder) Encode(dst []byte) {
dst = appendBase128Int(dst[:0], int64(oid[0]*40+oid[1]))
for i := 2; i < len(oid); i++ {
dst = appendBase128Int(dst, int64(oid[i]))
}
}
func makeObjectIdentifier(oid []int, fieldName string) (e encoder, err error) {
if len(oid) < 2 || oid[0] > 2 || (oid[0] < 2 && oid[1] >= 40) {
return nil, StructuralError{"invalid object identifier", fieldName}
}
return oidEncoder(oid), nil
}
func makePrintableString(s, fieldName string) (e encoder, err error) {
for i := 0; i < len(s); i++ {
// The asterisk is often used in PrintableString, even though
// it is invalid. If a PrintableString was specifically
// requested then the asterisk is permitted by this code.
// Ampersand is allowed in parsing due a handful of CA
// certificates, however when making new certificates
// it is rejected.
if !isPrintable(s[i], allowAsterisk, rejectAmpersand) {
return nil, StructuralError{"PrintableString contains invalid character", fieldName}
}
}
return stringEncoder(s), nil
}
func makeIA5String(s, fieldName string) (e encoder, err error) {
for i := 0; i < len(s); i++ {
if s[i] > 127 {
return nil, StructuralError{"IA5String contains invalid character", fieldName}
}
}
return stringEncoder(s), nil
}
func makeNumericString(s string, fieldName string) (e encoder, err error) {
for i := 0; i < len(s); i++ {
if !isNumeric(s[i]) {
return nil, StructuralError{"NumericString contains invalid character", fieldName}
}
}
return stringEncoder(s), nil
}
func makeUTF8String(s string) encoder {
return stringEncoder(s)
}
func appendTwoDigits(dst []byte, v int) []byte {
return append(dst, byte('0'+(v/10)%10), byte('0'+v%10))
}
func appendFourDigits(dst []byte, v int) []byte {
var bytes [4]byte
for i := range bytes {
bytes[3-i] = '0' + byte(v%10)
v /= 10
}
return append(dst, bytes[:]...)
}
func outsideUTCRange(t time.Time) bool {
year := t.Year()
return year < 1950 || year >= 2050
}
func makeUTCTime(t time.Time, fieldName string) (e encoder, err error) {
dst := make([]byte, 0, 18)
dst, err = appendUTCTime(dst, t, fieldName)
if err != nil {
return nil, err
}
return bytesEncoder(dst), nil
}
func makeGeneralizedTime(t time.Time, fieldName string) (e encoder, err error) {
dst := make([]byte, 0, 20)
dst, err = appendGeneralizedTime(dst, t, fieldName)
if err != nil {
return nil, err
}
return bytesEncoder(dst), nil
}
func appendUTCTime(dst []byte, t time.Time, fieldName string) (ret []byte, err error) {
year := t.Year()
switch {
case 1950 <= year && year < 2000:
dst = appendTwoDigits(dst, year-1900)
case 2000 <= year && year < 2050:
dst = appendTwoDigits(dst, year-2000)
default:
return nil, StructuralError{"cannot represent time as UTCTime", fieldName}
}
return appendTimeCommon(dst, t), nil
}
func appendGeneralizedTime(dst []byte, t time.Time, fieldName string) (ret []byte, err error) {
year := t.Year()
if year < 0 || year > 9999 {
return nil, StructuralError{"cannot represent time as GeneralizedTime", fieldName}
}
dst = appendFourDigits(dst, year)
return appendTimeCommon(dst, t), nil
}
func appendTimeCommon(dst []byte, t time.Time) []byte {
_, month, day := t.Date()
dst = appendTwoDigits(dst, int(month))
dst = appendTwoDigits(dst, day)
hour, min, sec := t.Clock()
dst = appendTwoDigits(dst, hour)
dst = appendTwoDigits(dst, min)
dst = appendTwoDigits(dst, sec)
_, offset := t.Zone()
switch {
case offset/60 == 0:
return append(dst, 'Z')
case offset > 0:
dst = append(dst, '+')
case offset < 0:
dst = append(dst, '-')
}
offsetMinutes := offset / 60
if offsetMinutes < 0 {
offsetMinutes = -offsetMinutes
}
dst = appendTwoDigits(dst, offsetMinutes/60)
dst = appendTwoDigits(dst, offsetMinutes%60)
return dst
}
func stripTagAndLength(in []byte) []byte {
_, offset, err := parseTagAndLength(in, 0, "")
if err != nil {
return in
}
return in[offset:]
}
func makeBody(value reflect.Value, params fieldParameters) (e encoder, err error) {
switch value.Type() {
case flagType:
return bytesEncoder(nil), nil
case timeType:
t := value.Interface().(time.Time)
if params.timeType == TagGeneralizedTime || outsideUTCRange(t) {
return makeGeneralizedTime(t, params.name)
}
return makeUTCTime(t, params.name)
case bitStringType:
return bitStringEncoder(value.Interface().(BitString)), nil
case objectIdentifierType:
return makeObjectIdentifier(value.Interface().(ObjectIdentifier), params.name)
case bigIntType:
return makeBigInt(value.Interface().(*big.Int), params.name)
}
switch v := value; v.Kind() {
case reflect.Bool:
if v.Bool() {
return byteFFEncoder, nil
}
return byte00Encoder, nil
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return int64Encoder(v.Int()), nil
case reflect.Struct:
t := v.Type()
for i := 0; i < t.NumField(); i++ {
if t.Field(i).PkgPath != "" {
return nil, StructuralError{"struct contains unexported fields", t.Field(i).Name}
}
}
startingField := 0
n := t.NumField()
if n == 0 {
return bytesEncoder(nil), nil
}
// If the first element of the structure is a non-empty
// RawContents, then we don't bother serializing the rest.
if t.Field(0).Type == rawContentsType {
s := v.Field(0)
if s.Len() > 0 {
bytes := s.Bytes()
/* The RawContents will contain the tag and
* length fields but we'll also be writing
* those ourselves, so we strip them out of
* bytes */
return bytesEncoder(stripTagAndLength(bytes)), nil
}
startingField = 1
}
switch n1 := n - startingField; n1 {
case 0:
return bytesEncoder(nil), nil
case 1:
return makeField(v.Field(startingField), parseFieldParameters(t.Field(startingField).Tag.Get("asn1")))
default:
m := make([]encoder, n1)
for i := 0; i < n1; i++ {
m[i], err = makeField(v.Field(i+startingField), parseFieldParameters(t.Field(i+startingField).Tag.Get("asn1")))
if err != nil {
return nil, err
}
}
return multiEncoder(m), nil
}
case reflect.Slice:
sliceType := v.Type()
if sliceType.Elem().Kind() == reflect.Uint8 {
return bytesEncoder(v.Bytes()), nil
}
var fp fieldParameters
switch l := v.Len(); l {
case 0:
return bytesEncoder(nil), nil
case 1:
return makeField(v.Index(0), fp)
default:
m := make([]encoder, l)
for i := 0; i < l; i++ {
m[i], err = makeField(v.Index(i), fp)
if err != nil {
return nil, err
}
}
return multiEncoder(m), nil
}
case reflect.String:
switch params.stringType {
case TagIA5String:
return makeIA5String(v.String(), params.name)
case TagPrintableString:
return makePrintableString(v.String(), params.name)
case TagNumericString:
return makeNumericString(v.String(), params.name)
default:
return makeUTF8String(v.String()), nil
}
}
return nil, StructuralError{"unknown Go type", params.name}
}
func makeField(v reflect.Value, params fieldParameters) (e encoder, err error) {
if !v.IsValid() {
return nil, fmt.Errorf("asn1: cannot marshal nil value")
}
// If the field is an interface{} then recurse into it.
if v.Kind() == reflect.Interface && v.Type().NumMethod() == 0 {
return makeField(v.Elem(), params)
}
if v.Kind() == reflect.Slice && v.Len() == 0 && params.omitEmpty {
return bytesEncoder(nil), nil
}
if params.optional && params.defaultValue != nil && canHaveDefaultValue(v.Kind()) {
defaultValue := reflect.New(v.Type()).Elem()
defaultValue.SetInt(*params.defaultValue)
if reflect.DeepEqual(v.Interface(), defaultValue.Interface()) {
return bytesEncoder(nil), nil
}
}
// If no default value is given then the zero value for the type is
// assumed to be the default value. This isn't obviously the correct
// behavior, but it's what Go has traditionally done.
if params.optional && params.defaultValue == nil {
if reflect.DeepEqual(v.Interface(), reflect.Zero(v.Type()).Interface()) {
return bytesEncoder(nil), nil
}
}
if v.Type() == rawValueType {
rv := v.Interface().(RawValue)
if len(rv.FullBytes) != 0 {
return bytesEncoder(rv.FullBytes), nil
}
t := new(taggedEncoder)
t.tag = bytesEncoder(appendTagAndLength(t.scratch[:0], tagAndLength{rv.Class, rv.Tag, len(rv.Bytes), rv.IsCompound}))
t.body = bytesEncoder(rv.Bytes)
return t, nil
}
matchAny, tag, isCompound, ok := getUniversalType(v.Type())
if !ok || matchAny {
return nil, StructuralError{fmt.Sprintf("unknown Go type: %v", v.Type()), params.name}
}
if params.timeType != 0 && tag != TagUTCTime {
return nil, StructuralError{"explicit time type given to non-time member", params.name}
}
if params.stringType != 0 && tag != TagPrintableString {
return nil, StructuralError{"explicit string type given to non-string member", params.name}
}
switch tag {
case TagPrintableString:
if params.stringType == 0 {
// This is a string without an explicit string type. We'll use
// a PrintableString if the character set in the string is
// sufficiently limited, otherwise we'll use a UTF8String.
for _, r := range v.String() {
if r >= utf8.RuneSelf || !isPrintable(byte(r), rejectAsterisk, rejectAmpersand) {
if !utf8.ValidString(v.String()) {
return nil, errors.New("asn1: string not valid UTF-8")
}
tag = TagUTF8String
break
}
}
} else {
tag = params.stringType
}
case TagUTCTime:
if params.timeType == TagGeneralizedTime || outsideUTCRange(v.Interface().(time.Time)) {
tag = TagGeneralizedTime
}
}
if params.set {
if tag != TagSequence {
return nil, StructuralError{"non sequence tagged as set", params.name}
}
tag = TagSet
}
t := new(taggedEncoder)
t.body, err = makeBody(v, params)
if err != nil {
return nil, err
}
bodyLen := t.body.Len()
class := ClassUniversal
if params.tag != nil {
if params.application {
class = ClassApplication
} else {
class = ClassContextSpecific
}
if params.explicit {
t.tag = bytesEncoder(appendTagAndLength(t.scratch[:0], tagAndLength{ClassUniversal, tag, bodyLen, isCompound}))
tt := new(taggedEncoder)
tt.body = t
tt.tag = bytesEncoder(appendTagAndLength(tt.scratch[:0], tagAndLength{
class: class,
tag: *params.tag,
length: bodyLen + t.tag.Len(),
isCompound: true,
}))
return tt, nil
}
// implicit tag.
tag = *params.tag
}
t.tag = bytesEncoder(appendTagAndLength(t.scratch[:0], tagAndLength{class, tag, bodyLen, isCompound}))
return t, nil
}
// Marshal returns the ASN.1 encoding of val.
//
// In addition to the struct tags recognised by Unmarshal, the following can be
// used:
//
// ia5: causes strings to be marshaled as ASN.1, IA5String values
// omitempty: causes empty slices to be skipped
// printable: causes strings to be marshaled as ASN.1, PrintableString values
// utf8: causes strings to be marshaled as ASN.1, UTF8String values
// utc: causes time.Time to be marshaled as ASN.1, UTCTime values
// generalized: causes time.Time to be marshaled as ASN.1, GeneralizedTime values
func Marshal(val interface{}) ([]byte, error) {
return MarshalWithParams(val, "")
}
// MarshalWithParams allows field parameters to be specified for the
// top-level element. The form of the params is the same as the field tags.
func MarshalWithParams(val interface{}, params string) ([]byte, error) {
e, err := makeField(reflect.ValueOf(val), parseFieldParameters(params))
if err != nil {
return nil, err
}
b := make([]byte, e.Len())
e.Encode(b)
return b, nil
}

View file

@ -0,0 +1,17 @@
// Copyright 2017 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package configpb
//go:generate protoc -I=. -I=$GOPATH/src --go_out=:. multilog.proto

View file

@ -0,0 +1,124 @@
// Code generated by protoc-gen-go. DO NOT EDIT.
// source: multilog.proto
/*
Package configpb is a generated protocol buffer package.
It is generated from these files:
multilog.proto
It has these top-level messages:
TemporalLogConfig
LogShardConfig
*/
package configpb
import proto "github.com/golang/protobuf/proto"
import fmt "fmt"
import math "math"
import google_protobuf "github.com/golang/protobuf/ptypes/timestamp"
// Reference imports to suppress errors if they are not otherwise used.
var _ = proto.Marshal
var _ = fmt.Errorf
var _ = math.Inf
// This is a compile-time assertion to ensure that this generated file
// is compatible with the proto package it is being compiled against.
// A compilation error at this line likely means your copy of the
// proto package needs to be updated.
const _ = proto.ProtoPackageIsVersion2 // please upgrade the proto package
// TemporalLogConfig is a set of LogShardConfig messages, whose
// time limits should be contiguous.
type TemporalLogConfig struct {
Shard []*LogShardConfig `protobuf:"bytes,1,rep,name=shard" json:"shard,omitempty"`
}
func (m *TemporalLogConfig) Reset() { *m = TemporalLogConfig{} }
func (m *TemporalLogConfig) String() string { return proto.CompactTextString(m) }
func (*TemporalLogConfig) ProtoMessage() {}
func (*TemporalLogConfig) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{0} }
func (m *TemporalLogConfig) GetShard() []*LogShardConfig {
if m != nil {
return m.Shard
}
return nil
}
// LogShardConfig describes the acceptable date range for a single shard of a temporal
// log.
type LogShardConfig struct {
Uri string `protobuf:"bytes,1,opt,name=uri" json:"uri,omitempty"`
// The log's public key in DER-encoded PKIX form.
PublicKeyDer []byte `protobuf:"bytes,2,opt,name=public_key_der,json=publicKeyDer,proto3" json:"public_key_der,omitempty"`
// not_after_start defines the start of the range of acceptable NotAfter
// values, inclusive.
// Leaving this unset implies no lower bound to the range.
NotAfterStart *google_protobuf.Timestamp `protobuf:"bytes,3,opt,name=not_after_start,json=notAfterStart" json:"not_after_start,omitempty"`
// not_after_limit defines the end of the range of acceptable NotAfter values,
// exclusive.
// Leaving this unset implies no upper bound to the range.
NotAfterLimit *google_protobuf.Timestamp `protobuf:"bytes,4,opt,name=not_after_limit,json=notAfterLimit" json:"not_after_limit,omitempty"`
}
func (m *LogShardConfig) Reset() { *m = LogShardConfig{} }
func (m *LogShardConfig) String() string { return proto.CompactTextString(m) }
func (*LogShardConfig) ProtoMessage() {}
func (*LogShardConfig) Descriptor() ([]byte, []int) { return fileDescriptor0, []int{1} }
func (m *LogShardConfig) GetUri() string {
if m != nil {
return m.Uri
}
return ""
}
func (m *LogShardConfig) GetPublicKeyDer() []byte {
if m != nil {
return m.PublicKeyDer
}
return nil
}
func (m *LogShardConfig) GetNotAfterStart() *google_protobuf.Timestamp {
if m != nil {
return m.NotAfterStart
}
return nil
}
func (m *LogShardConfig) GetNotAfterLimit() *google_protobuf.Timestamp {
if m != nil {
return m.NotAfterLimit
}
return nil
}
func init() {
proto.RegisterType((*TemporalLogConfig)(nil), "configpb.TemporalLogConfig")
proto.RegisterType((*LogShardConfig)(nil), "configpb.LogShardConfig")
}
func init() { proto.RegisterFile("multilog.proto", fileDescriptor0) }
var fileDescriptor0 = []byte{
// 241 bytes of a gzipped FileDescriptorProto
0x1f, 0x8b, 0x08, 0x00, 0x00, 0x00, 0x00, 0x00, 0x02, 0xff, 0x8c, 0x8f, 0xb1, 0x4e, 0xc3, 0x30,
0x14, 0x45, 0x65, 0x02, 0x08, 0xdc, 0x12, 0xc0, 0x93, 0xd5, 0x85, 0xa8, 0x62, 0xc8, 0xe4, 0x4a,
0xe5, 0x0b, 0xa0, 0x6c, 0x64, 0x4a, 0xbb, 0x47, 0x4e, 0xeb, 0x18, 0x0b, 0x3b, 0xcf, 0x72, 0x5e,
0x86, 0xfe, 0x25, 0x9f, 0x84, 0x1c, 0x2b, 0x43, 0x37, 0xb6, 0xa7, 0x77, 0xcf, 0xb9, 0xd2, 0xa5,
0xb9, 0x1b, 0x2d, 0x1a, 0x0b, 0x5a, 0xf8, 0x00, 0x08, 0xec, 0xee, 0x08, 0x7d, 0x67, 0xb4, 0x6f,
0x57, 0x2f, 0x1a, 0x40, 0x5b, 0xb5, 0x99, 0xfe, 0xed, 0xd8, 0x6d, 0xd0, 0x38, 0x35, 0xa0, 0x74,
0x3e, 0xa1, 0xeb, 0x1d, 0x7d, 0x3e, 0x28, 0xe7, 0x21, 0x48, 0x5b, 0x81, 0xde, 0x4d, 0x1e, 0x13,
0xf4, 0x66, 0xf8, 0x96, 0xe1, 0xc4, 0x49, 0x91, 0x95, 0x8b, 0x2d, 0x17, 0x73, 0x9f, 0xa8, 0x40,
0xef, 0x63, 0x92, 0xc0, 0x3a, 0x61, 0xeb, 0x5f, 0x42, 0xf3, 0xcb, 0x84, 0x3d, 0xd1, 0x6c, 0x0c,
0x86, 0x93, 0x82, 0x94, 0xf7, 0x75, 0x3c, 0xd9, 0x2b, 0xcd, 0xfd, 0xd8, 0x5a, 0x73, 0x6c, 0x7e,
0xd4, 0xb9, 0x39, 0xa9, 0xc0, 0xaf, 0x0a, 0x52, 0x2e, 0xeb, 0x65, 0xfa, 0x7e, 0xa9, 0xf3, 0xa7,
0x0a, 0xec, 0x83, 0x3e, 0xf6, 0x80, 0x8d, 0xec, 0x50, 0x85, 0x66, 0x40, 0x19, 0x90, 0x67, 0x05,
0x29, 0x17, 0xdb, 0x95, 0x48, 0x53, 0xc4, 0x3c, 0x45, 0x1c, 0xe6, 0x29, 0xf5, 0x43, 0x0f, 0xf8,
0x1e, 0x8d, 0x7d, 0x14, 0x2e, 0x3b, 0xac, 0x71, 0x06, 0xf9, 0xf5, 0xff, 0x3b, 0xaa, 0x28, 0xb4,
0xb7, 0x13, 0xf2, 0xf6, 0x17, 0x00, 0x00, 0xff, 0xff, 0xf8, 0xd9, 0x50, 0x5b, 0x5b, 0x01, 0x00,
0x00,
}

View file

@ -0,0 +1,43 @@
// Copyright 2017 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
syntax = "proto3";
package configpb;
import "google/protobuf/timestamp.proto";
// TemporalLogConfig is a set of LogShardConfig messages, whose
// time limits should be contiguous.
message TemporalLogConfig {
repeated LogShardConfig shard = 1;
}
// LogShardConfig describes the acceptable date range for a single shard of a temporal
// log.
message LogShardConfig {
string uri = 1;
// The log's public key in DER-encoded PKIX form.
bytes public_key_der = 2;
// not_after_start defines the start of the range of acceptable NotAfter
// values, inclusive.
// Leaving this unset implies no lower bound to the range.
google.protobuf.Timestamp not_after_start = 3;
// not_after_limit defines the end of the range of acceptable NotAfter values,
// exclusive.
// Leaving this unset implies no upper bound to the range.
google.protobuf.Timestamp not_after_limit = 4;
}

View file

@ -0,0 +1,75 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
import (
"context"
"errors"
"strconv"
ct "github.com/google/certificate-transparency-go"
"github.com/google/certificate-transparency-go/x509"
)
// GetRawEntries exposes the /ct/v1/get-entries result with only the JSON parsing done.
func (c *LogClient) GetRawEntries(ctx context.Context, start, end int64) (*ct.GetEntriesResponse, error) {
if end < 0 {
return nil, errors.New("end should be >= 0")
}
if end < start {
return nil, errors.New("start should be <= end")
}
params := map[string]string{
"start": strconv.FormatInt(start, 10),
"end": strconv.FormatInt(end, 10),
}
if ctx == nil {
ctx = context.TODO()
}
var resp ct.GetEntriesResponse
httpRsp, body, err := c.GetAndParse(ctx, ct.GetEntriesPath, params, &resp)
if err != nil {
if httpRsp != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
}
return nil, err
}
return &resp, nil
}
// GetEntries attempts to retrieve the entries in the sequence [start, end] from the CT log server
// (RFC6962 s4.6) as parsed [pre-]certificates for convenience, held in a slice of ct.LogEntry structures.
// However, this does mean that any certificate parsing failures will cause a failure of the whole
// retrieval operation; for more robust retrieval of parsed certificates, use GetRawEntries() and invoke
// ct.LogEntryFromLeaf() on each individual entry.
func (c *LogClient) GetEntries(ctx context.Context, start, end int64) ([]ct.LogEntry, error) {
resp, err := c.GetRawEntries(ctx, start, end)
if err != nil {
return nil, err
}
entries := make([]ct.LogEntry, len(resp.Entries))
for i, entry := range resp.Entries {
index := start + int64(i)
logEntry, err := ct.LogEntryFromLeaf(index, &entry)
if _, ok := err.(x509.NonFatalErrors); !ok && err != nil {
return nil, err
}
entries[i] = *logEntry
}
return entries, nil
}
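
As the comment above suggests, a caller that wants to tolerate individual parse failures can combine GetRawEntries with ct.LogEntryFromLeaf. The sketch below is illustrative only; the ctutil package name and parsedEntries helper are hypothetical and not part of the vendored code.

```
// Hypothetical helper, not part of certificate-transparency-go.
package ctutil

import (
	"context"

	ct "github.com/google/certificate-transparency-go"
	"github.com/google/certificate-transparency-go/client"
	"github.com/google/certificate-transparency-go/x509"
)

// parsedEntries fetches [start, end] and drops only the entries whose
// certificates fail to parse fatally, instead of failing the whole batch.
func parsedEntries(ctx context.Context, c *client.LogClient, start, end int64) ([]ct.LogEntry, error) {
	rsp, err := c.GetRawEntries(ctx, start, end)
	if err != nil {
		return nil, err
	}
	var out []ct.LogEntry
	for i := range rsp.Entries {
		entry, err := ct.LogEntryFromLeaf(start+int64(i), &rsp.Entries[i])
		if _, nonFatal := err.(x509.NonFatalErrors); err != nil && !nonFatal {
			continue // fatal parse failure: skip this entry only
		}
		out = append(out, *entry)
	}
	return out, nil
}
```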

View file

@ -0,0 +1,283 @@
// Copyright 2014 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package client is a CT log client implementation and contains types and code
// for interacting with RFC6962-compliant CT Log instances.
// See http://tools.ietf.org/html/rfc6962 for details
package client
import (
"context"
"crypto/sha256"
"encoding/base64"
"fmt"
"net/http"
"strconv"
ct "github.com/google/certificate-transparency-go"
"github.com/google/certificate-transparency-go/jsonclient"
"github.com/google/certificate-transparency-go/tls"
)
// LogClient represents a client for a given CT Log instance
type LogClient struct {
jsonclient.JSONClient
}
// New constructs a new LogClient instance.
// |uri| is the base URI of the CT log instance to interact with, e.g.
// http://ct.googleapis.com/pilot
// |hc| is the underlying client to be used for HTTP requests to the CT log.
// |opts| can be used to provide a custom logger interface and a public key
// for signature verification.
func New(uri string, hc *http.Client, opts jsonclient.Options) (*LogClient, error) {
logClient, err := jsonclient.New(uri, hc, opts)
if err != nil {
return nil, err
}
return &LogClient{*logClient}, err
}
// RspError represents an error that occurred when processing a response from a server,
// and also includes key details from the http.Response that triggered the error.
type RspError struct {
Err error
StatusCode int
Body []byte
}
// Error formats the RspError instance, focusing on the error.
func (e RspError) Error() string {
return e.Err.Error()
}
// Attempts to add |chain| to the log, using the API endpoint specified by
// |path|. If the provided context expires before submission is complete, an
// error will be returned.
func (c *LogClient) addChainWithRetry(ctx context.Context, ctype ct.LogEntryType, path string, chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error) {
var resp ct.AddChainResponse
var req ct.AddChainRequest
for _, link := range chain {
req.Chain = append(req.Chain, link.Data)
}
httpRsp, body, err := c.PostAndParseWithRetry(ctx, path, &req, &resp)
if err != nil {
if httpRsp != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
}
return nil, err
}
var ds ct.DigitallySigned
if rest, err := tls.Unmarshal(resp.Signature, &ds); err != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
} else if len(rest) > 0 {
return nil, RspError{
Err: fmt.Errorf("trailing data (%d bytes) after DigitallySigned", len(rest)),
StatusCode: httpRsp.StatusCode,
Body: body,
}
}
exts, err := base64.StdEncoding.DecodeString(resp.Extensions)
if err != nil {
return nil, RspError{
Err: fmt.Errorf("invalid base64 data in Extensions (%q): %v", resp.Extensions, err),
StatusCode: httpRsp.StatusCode,
Body: body,
}
}
var logID ct.LogID
copy(logID.KeyID[:], resp.ID)
sct := &ct.SignedCertificateTimestamp{
SCTVersion: resp.SCTVersion,
LogID: logID,
Timestamp: resp.Timestamp,
Extensions: ct.CTExtensions(exts),
Signature: ds,
}
if err := c.VerifySCTSignature(*sct, ctype, chain); err != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
}
return sct, nil
}
// AddChain adds the (DER represented) X509 |chain| to the log.
func (c *LogClient) AddChain(ctx context.Context, chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error) {
return c.addChainWithRetry(ctx, ct.X509LogEntryType, ct.AddChainPath, chain)
}
// AddPreChain adds the (DER represented) Precertificate |chain| to the log.
func (c *LogClient) AddPreChain(ctx context.Context, chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error) {
return c.addChainWithRetry(ctx, ct.PrecertLogEntryType, ct.AddPreChainPath, chain)
}
// AddJSON submits arbitrary data to the XJSON server.
func (c *LogClient) AddJSON(ctx context.Context, data interface{}) (*ct.SignedCertificateTimestamp, error) {
req := ct.AddJSONRequest{Data: data}
var resp ct.AddChainResponse
httpRsp, body, err := c.PostAndParse(ctx, ct.AddJSONPath, &req, &resp)
if err != nil {
if httpRsp != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
}
return nil, err
}
var ds ct.DigitallySigned
if rest, err := tls.Unmarshal(resp.Signature, &ds); err != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
} else if len(rest) > 0 {
return nil, RspError{
Err: fmt.Errorf("trailing data (%d bytes) after DigitallySigned", len(rest)),
StatusCode: httpRsp.StatusCode,
Body: body,
}
}
var logID ct.LogID
copy(logID.KeyID[:], resp.ID)
return &ct.SignedCertificateTimestamp{
SCTVersion: resp.SCTVersion,
LogID: logID,
Timestamp: resp.Timestamp,
Extensions: ct.CTExtensions(resp.Extensions),
Signature: ds,
}, nil
}
// GetSTH retrieves the current STH from the log.
// Returns a populated SignedTreeHead, or a non-nil error (which may be of type
// RspError if a raw http.Response is available).
func (c *LogClient) GetSTH(ctx context.Context) (*ct.SignedTreeHead, error) {
var resp ct.GetSTHResponse
httpRsp, body, err := c.GetAndParse(ctx, ct.GetSTHPath, nil, &resp)
if err != nil {
if httpRsp != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
}
return nil, err
}
sth := ct.SignedTreeHead{
TreeSize: resp.TreeSize,
Timestamp: resp.Timestamp,
}
if len(resp.SHA256RootHash) != sha256.Size {
return nil, RspError{
Err: fmt.Errorf("sha256_root_hash is invalid length, expected %d got %d", sha256.Size, len(resp.SHA256RootHash)),
StatusCode: httpRsp.StatusCode,
Body: body,
}
}
copy(sth.SHA256RootHash[:], resp.SHA256RootHash)
var ds ct.DigitallySigned
if rest, err := tls.Unmarshal(resp.TreeHeadSignature, &ds); err != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
} else if len(rest) > 0 {
return nil, RspError{
Err: fmt.Errorf("trailing data (%d bytes) after DigitallySigned", len(rest)),
StatusCode: httpRsp.StatusCode,
Body: body,
}
}
sth.TreeHeadSignature = ds
if err := c.VerifySTHSignature(sth); err != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
}
return &sth, nil
}
// VerifySTHSignature checks the signature in sth, returning any error encountered or nil if verification is
// successful.
func (c *LogClient) VerifySTHSignature(sth ct.SignedTreeHead) error {
if c.Verifier == nil {
// Can't verify signatures without a verifier
return nil
}
return c.Verifier.VerifySTHSignature(sth)
}
// VerifySCTSignature checks the signature in sct for the given LogEntryType, with associated certificate chain.
func (c *LogClient) VerifySCTSignature(sct ct.SignedCertificateTimestamp, ctype ct.LogEntryType, certData []ct.ASN1Cert) error {
if c.Verifier == nil {
// Can't verify signatures without a verifier
return nil
}
leaf, err := ct.MerkleTreeLeafFromRawChain(certData, ctype, sct.Timestamp)
if err != nil {
return fmt.Errorf("failed to build MerkleTreeLeaf: %v", err)
}
entry := ct.LogEntry{Leaf: *leaf}
return c.Verifier.VerifySCTSignature(sct, entry)
}
// GetSTHConsistency retrieves the consistency proof between two snapshots.
func (c *LogClient) GetSTHConsistency(ctx context.Context, first, second uint64) ([][]byte, error) {
base10 := 10
params := map[string]string{
"first": strconv.FormatUint(first, base10),
"second": strconv.FormatUint(second, base10),
}
var resp ct.GetSTHConsistencyResponse
httpRsp, body, err := c.GetAndParse(ctx, ct.GetSTHConsistencyPath, params, &resp)
if err != nil {
if httpRsp != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
}
return nil, err
}
return resp.Consistency, nil
}
// GetProofByHash returns an audit path for the hash of an SCT.
func (c *LogClient) GetProofByHash(ctx context.Context, hash []byte, treeSize uint64) (*ct.GetProofByHashResponse, error) {
b64Hash := base64.StdEncoding.EncodeToString(hash)
base10 := 10
params := map[string]string{
"tree_size": strconv.FormatUint(treeSize, base10),
"hash": b64Hash,
}
var resp ct.GetProofByHashResponse
httpRsp, body, err := c.GetAndParse(ctx, ct.GetProofByHashPath, params, &resp)
if err != nil {
if httpRsp != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
}
return nil, err
}
return &resp, nil
}
// GetAcceptedRoots retrieves the set of acceptable root certificates for a log.
func (c *LogClient) GetAcceptedRoots(ctx context.Context) ([]ct.ASN1Cert, error) {
var resp ct.GetRootsResponse
httpRsp, body, err := c.GetAndParse(ctx, ct.GetRootsPath, nil, &resp)
if err != nil {
if httpRsp != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
}
return nil, err
}
var roots []ct.ASN1Cert
for _, cert64 := range resp.Certificates {
cert, err := base64.StdEncoding.DecodeString(cert64)
if err != nil {
return nil, RspError{Err: err, StatusCode: httpRsp.StatusCode, Body: body}
}
roots = append(roots, ct.ASN1Cert{Data: cert})
}
return roots, nil
}
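
A minimal usage sketch for the client above, assuming a hypothetical log at https://ct.example.com/log; when the log's PEM public key is supplied via jsonclient.Options, GetSTH also verifies the tree head signature before returning.

```
package main

import (
	"context"
	"fmt"
	"log"
	"net/http"
	"time"

	"github.com/google/certificate-transparency-go/client"
	"github.com/google/certificate-transparency-go/jsonclient"
)

// logPubKeyPEM is a placeholder; substitute the target log's real PEM key,
// or leave Options empty to skip signature verification.
const logPubKeyPEM = `-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----`

func main() {
	lc, err := client.New("https://ct.example.com/log",
		&http.Client{Timeout: 10 * time.Second},
		jsonclient.Options{PublicKey: logPubKeyPEM})
	if err != nil {
		log.Fatal(err)
	}
	sth, err := lc.GetSTH(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("tree size %d, timestamp %d\n", sth.TreeSize, sth.Timestamp)

	// Consistency proof between an older tree size and the current one.
	proof, err := lc.GetSTHConsistency(context.Background(), 1000, sth.TreeSize)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("proof has %d nodes\n", len(proof))
}
```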

View file

@ -0,0 +1,221 @@
// Copyright 2017 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
import (
"context"
"crypto/sha256"
"errors"
"fmt"
"io/ioutil"
"net/http"
"time"
"github.com/gogo/protobuf/proto"
"github.com/golang/protobuf/ptypes"
ct "github.com/google/certificate-transparency-go"
"github.com/google/certificate-transparency-go/client/configpb"
"github.com/google/certificate-transparency-go/jsonclient"
"github.com/google/certificate-transparency-go/x509"
)
type interval struct {
lower *time.Time // nil => no lower bound
upper *time.Time // nil => no upper bound
}
// TemporalLogConfigFromFile creates a TemporalLogConfig object from the given
// filename, which should contain text-protobuf encoded configuration data.
func TemporalLogConfigFromFile(filename string) (*configpb.TemporalLogConfig, error) {
if len(filename) == 0 {
return nil, errors.New("log config filename empty")
}
cfgText, err := ioutil.ReadFile(filename)
if err != nil {
return nil, fmt.Errorf("failed to read log config: %v", err)
}
var cfg configpb.TemporalLogConfig
if err := proto.UnmarshalText(string(cfgText), &cfg); err != nil {
return nil, fmt.Errorf("failed to parse log config: %v", err)
}
if len(cfg.Shard) == 0 {
return nil, errors.New("empty log config found")
}
return &cfg, nil
}
// AddLogClient is an interface that allows adding certificates and pre-certificates to a log.
// Both LogClient and TemporalLogClient implement this interface, which allows users to
// share code for adding certs to normal and temporal logs.
type AddLogClient interface {
AddChain(ctx context.Context, chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error)
AddPreChain(ctx context.Context, chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error)
GetAcceptedRoots(ctx context.Context) ([]ct.ASN1Cert, error)
}
// TemporalLogClient allows [pre-]certificates to be uploaded to a temporal log.
type TemporalLogClient struct {
Clients []*LogClient
intervals []interval
}
// NewTemporalLogClient builds a new client for interacting with a temporal log.
// The provided config should be contiguous and chronological.
func NewTemporalLogClient(cfg configpb.TemporalLogConfig, hc *http.Client) (*TemporalLogClient, error) {
if len(cfg.Shard) == 0 {
return nil, errors.New("empty config")
}
overall, err := shardInterval(cfg.Shard[0])
if err != nil {
return nil, fmt.Errorf("cfg.Shard[0] invalid: %v", err)
}
intervals := make([]interval, 0, len(cfg.Shard))
intervals = append(intervals, overall)
for i := 1; i < len(cfg.Shard); i++ {
interval, err := shardInterval(cfg.Shard[i])
if err != nil {
return nil, fmt.Errorf("cfg.Shard[%d] invalid: %v", i, err)
}
if overall.upper == nil {
return nil, fmt.Errorf("cfg.Shard[%d] extends an interval with no upper bound", i)
}
if interval.lower == nil {
return nil, fmt.Errorf("cfg.Shard[%d] has no lower bound but extends an interval", i)
}
if !interval.lower.Equal(*overall.upper) {
return nil, fmt.Errorf("cfg.Shard[%d] starts at %v but previous interval ended at %v", i, interval.lower, overall.upper)
}
overall.upper = interval.upper
intervals = append(intervals, interval)
}
clients := make([]*LogClient, 0, len(cfg.Shard))
for i, shard := range cfg.Shard {
opts := jsonclient.Options{}
opts.PublicKeyDER = shard.GetPublicKeyDer()
c, err := New(shard.Uri, hc, opts)
if err != nil {
return nil, fmt.Errorf("failed to create client for cfg.Shard[%d]: %v", i, err)
}
clients = append(clients, c)
}
tlc := TemporalLogClient{
Clients: clients,
intervals: intervals,
}
return &tlc, nil
}
// GetAcceptedRoots retrieves the set of acceptable root certificates for all
// of the shards of a temporal log (i.e. the union).
func (tlc *TemporalLogClient) GetAcceptedRoots(ctx context.Context) ([]ct.ASN1Cert, error) {
type result struct {
roots []ct.ASN1Cert
err error
}
results := make(chan result, len(tlc.Clients))
for _, c := range tlc.Clients {
go func(c *LogClient) {
var r result
r.roots, r.err = c.GetAcceptedRoots(ctx)
results <- r
}(c)
}
var allRoots []ct.ASN1Cert
seen := make(map[[sha256.Size]byte]bool)
for range tlc.Clients {
r := <-results
if r.err != nil {
return nil, r.err
}
for _, root := range r.roots {
h := sha256.Sum256(root.Data)
if seen[h] {
continue
}
seen[h] = true
allRoots = append(allRoots, root)
}
}
return allRoots, nil
}
// AddChain adds the (DER represented) X509 chain to the appropriate log.
func (tlc *TemporalLogClient) AddChain(ctx context.Context, chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error) {
return tlc.addChain(ctx, ct.X509LogEntryType, ct.AddChainPath, chain)
}
// AddPreChain adds the (DER represented) Precertificate chain to the appropriate log.
func (tlc *TemporalLogClient) AddPreChain(ctx context.Context, chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error) {
return tlc.addChain(ctx, ct.PrecertLogEntryType, ct.AddPreChainPath, chain)
}
func (tlc *TemporalLogClient) addChain(ctx context.Context, ctype ct.LogEntryType, path string, chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error) {
// Parse the first entry in the chain
if len(chain) == 0 {
return nil, errors.New("missing chain")
}
cert, err := x509.ParseCertificate(chain[0].Data)
if err != nil {
return nil, fmt.Errorf("failed to parse initial chain entry: %v", err)
}
cidx, err := tlc.IndexByDate(cert.NotAfter)
if err != nil {
return nil, fmt.Errorf("failed to find log to process cert: %v", err)
}
return tlc.Clients[cidx].addChainWithRetry(ctx, ctype, path, chain)
}
// IndexByDate returns the index of the Clients entry that is appropriate for the given
// date.
func (tlc *TemporalLogClient) IndexByDate(when time.Time) (int, error) {
for i, interval := range tlc.intervals {
if (interval.lower != nil) && when.Before(*interval.lower) {
continue
}
if (interval.upper != nil) && !when.Before(*interval.upper) {
continue
}
return i, nil
}
return -1, fmt.Errorf("no log found encompassing date %v", when)
}
func shardInterval(cfg *configpb.LogShardConfig) (interval, error) {
var interval interval
if cfg.NotAfterStart != nil {
t, err := ptypes.Timestamp(cfg.NotAfterStart)
if err != nil {
return interval, fmt.Errorf("failed to parse NotAfterStart: %v", err)
}
interval.lower = &t
}
if cfg.NotAfterLimit != nil {
t, err := ptypes.Timestamp(cfg.NotAfterLimit)
if err != nil {
return interval, fmt.Errorf("failed to parse NotAfterLimit: %v", err)
}
interval.upper = &t
}
if interval.lower != nil && interval.upper != nil && !(*interval.lower).Before(*interval.upper) {
return interval, errors.New("inverted interval")
}
return interval, nil
}
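
A rough sketch of wiring this up, using a hypothetical two-shard log at ct.example.com; the inline config text mirrors what TemporalLogConfigFromFile expects to find on disk, and is parsed here with the same gogo proto call the vendored code uses.

```
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/gogo/protobuf/proto"
	"github.com/google/certificate-transparency-go/client"
	"github.com/google/certificate-transparency-go/client/configpb"
)

// Two shards whose NotAfter ranges meet end-to-start, as NewTemporalLogClient requires.
const cfgText = `
shard {
  uri: "https://ct.example.com/2018"
  not_after_start { seconds: 1514764800 }  # 2018-01-01
  not_after_limit { seconds: 1546300800 }  # 2019-01-01
}
shard {
  uri: "https://ct.example.com/2019"
  not_after_start { seconds: 1546300800 }  # 2019-01-01
}
`

func main() {
	var cfg configpb.TemporalLogConfig
	if err := proto.UnmarshalText(cfgText, &cfg); err != nil {
		log.Fatal(err)
	}
	tlc, err := client.NewTemporalLogClient(cfg, nil)
	if err != nil {
		log.Fatal(err)
	}
	// A certificate expiring mid-2018 is routed to the first shard.
	idx, err := tlc.IndexByDate(time.Date(2018, time.June, 1, 0, 0, 0, 0, time.UTC))
	fmt.Println(idx, err) // 0 <nil>
}
```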

View file

@ -0,0 +1,72 @@
// Copyright 2017 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package jsonclient
import (
"sync"
"time"
)
type backoff struct {
mu sync.RWMutex
multiplier uint
notBefore time.Time
}
const (
// maximum backoff is 2^(maxMultiplier-1) = 128 seconds
maxMultiplier = 8
)
func (b *backoff) set(override *time.Duration) time.Duration {
b.mu.Lock()
defer b.mu.Unlock()
if b.notBefore.After(time.Now()) {
if override != nil {
// If a backoff is already in force but the override would extend
// beyond it, extend the backoff to match the override.
notBefore := time.Now().Add(*override)
if notBefore.After(b.notBefore) {
b.notBefore = notBefore
}
}
return time.Until(b.notBefore)
}
var wait time.Duration
if override != nil {
wait = *override
} else {
if b.multiplier < maxMultiplier {
b.multiplier++
}
wait = time.Second * time.Duration(1<<(b.multiplier-1))
}
b.notBefore = time.Now().Add(wait)
return wait
}
func (b *backoff) decreaseMultiplier() {
b.mu.Lock()
defer b.mu.Unlock()
if b.multiplier > 0 {
b.multiplier--
}
}
func (b *backoff) until() time.Time {
b.mu.RLock()
defer b.mu.RUnlock()
return b.notBefore
}
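
A hypothetical test sketch (written as if it lived in package jsonclient, since backoff is unexported) showing the growth implied by maxMultiplier: successive failures double the wait from 1s up to the 128s cap.

```
package jsonclient

import (
	"testing"
	"time"
)

func TestBackoffDoubling(t *testing.T) {
	var b backoff
	wantSecs := []int{1, 2, 4, 8, 16, 32, 64, 128, 128}
	for i, w := range wantSecs {
		b.notBefore = time.Time{} // pretend the previous wait has fully elapsed
		if got, want := b.set(nil), time.Duration(w)*time.Second; got != want {
			t.Errorf("set #%d: got %v, want %v", i, got, want)
		}
	}
}
```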

View file

@ -0,0 +1,289 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package jsonclient
import (
"bytes"
"context"
"crypto"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"log"
"math/rand"
"net/http"
"net/url"
"strconv"
"strings"
"time"
ct "github.com/google/certificate-transparency-go"
"github.com/google/certificate-transparency-go/x509"
"golang.org/x/net/context/ctxhttp"
)
const maxJitter = 250 * time.Millisecond
type backoffer interface {
// set adjusts/increases the current backoff interval (typically on retryable failure);
// if the optional parameter is provided, this will be used as the interval if it is greater
// than the currently set interval. Returns the current wait period so that it can be
// logged along with any error message.
set(*time.Duration) time.Duration
// decreaseMultiplier reduces the current backoff multiplier, typically on success.
decreaseMultiplier()
// until returns the time until which the client should wait before making a request;
// it may be in the past, in which case it should be ignored.
until() time.Time
}
// JSONClient provides common functionality for interacting with a JSON server
// that uses cryptographic signatures.
type JSONClient struct {
uri string // the base URI of the server, e.g. http://ct.googleapis.com/pilot
httpClient *http.Client // used to interact with the server via HTTP
Verifier *ct.SignatureVerifier // nil for no verification (e.g. no public key available)
logger Logger // interface to use for logging warnings and errors
backoff backoffer // object used to store and calculate backoff information
}
// Logger is a simple logging interface used to log internal errors and warnings
type Logger interface {
// Printf formats and logs a message
Printf(string, ...interface{})
}
// Options are the options for creating a new JSONClient.
type Options struct {
// Interface to use for logging warnings and errors, if nil the
// standard library log package will be used.
Logger Logger
// PEM format public key to use for signature verification.
PublicKey string
// DER format public key to use for signature verification.
PublicKeyDER []byte
}
// ParsePublicKey parses and returns the public key contained in opts.
// If both opts.PublicKey and opts.PublicKeyDER are set, PublicKeyDER is used.
// If neither is set, nil will be returned.
func (opts *Options) ParsePublicKey() (crypto.PublicKey, error) {
if len(opts.PublicKeyDER) > 0 {
return x509.ParsePKIXPublicKey(opts.PublicKeyDER)
}
if opts.PublicKey != "" {
pubkey, _ /* keyhash */, rest, err := ct.PublicKeyFromPEM([]byte(opts.PublicKey))
if err != nil {
return nil, err
}
if len(rest) > 0 {
return nil, errors.New("extra data found after PEM key decoded")
}
return pubkey, nil
}
return nil, nil
}
type basicLogger struct{}
func (bl *basicLogger) Printf(msg string, args ...interface{}) {
log.Printf(msg, args...)
}
// New constructs a new JSONClient instance, for the given base URI, using the
// given http.Client object (if provided) and the Options object.
// If opts does not specify a public key, signatures will not be verified.
func New(uri string, hc *http.Client, opts Options) (*JSONClient, error) {
pubkey, err := opts.ParsePublicKey()
if err != nil {
return nil, fmt.Errorf("invalid public key: %v", err)
}
var verifier *ct.SignatureVerifier
if pubkey != nil {
var err error
verifier, err = ct.NewSignatureVerifier(pubkey)
if err != nil {
return nil, err
}
}
if hc == nil {
hc = new(http.Client)
}
logger := opts.Logger
if logger == nil {
logger = &basicLogger{}
}
return &JSONClient{
uri: strings.TrimRight(uri, "/"),
httpClient: hc,
Verifier: verifier,
logger: logger,
backoff: &backoff{},
}, nil
}
// GetAndParse makes an HTTP GET call to the given path, and attempts to parse
// the response as a JSON representation of the rsp structure. Returns the
// http.Response, the body of the response, and an error. Note that the
// returned http.Response can be non-nil even when an error is returned,
// in particular when the HTTP status is not OK or when the JSON parsing fails.
func (c *JSONClient) GetAndParse(ctx context.Context, path string, params map[string]string, rsp interface{}) (*http.Response, []byte, error) {
if ctx == nil {
return nil, nil, errors.New("context.Context required")
}
// Build a GET request with URL-encoded parameters.
vals := url.Values{}
for k, v := range params {
vals.Add(k, v)
}
fullURI := fmt.Sprintf("%s%s?%s", c.uri, path, vals.Encode())
httpReq, err := http.NewRequest(http.MethodGet, fullURI, nil)
if err != nil {
return nil, nil, err
}
httpRsp, err := ctxhttp.Do(ctx, c.httpClient, httpReq)
if err != nil {
return nil, nil, err
}
// Read everything now so http.Client can reuse the connection.
body, err := ioutil.ReadAll(httpRsp.Body)
httpRsp.Body.Close()
if err != nil {
return httpRsp, body, fmt.Errorf("failed to read response body: %v", err)
}
if httpRsp.StatusCode != http.StatusOK {
return httpRsp, body, fmt.Errorf("got HTTP Status %q", httpRsp.Status)
}
if err := json.NewDecoder(bytes.NewReader(body)).Decode(rsp); err != nil {
return httpRsp, body, err
}
return httpRsp, body, nil
}
// PostAndParse makes an HTTP POST call to the given path, including the request
// parameters, and attempts to parse the response as a JSON representation of
// the rsp structure. Returns the http.Response, the body of the response, and
// an error. Note that the returned http.Response can be non-nil even when an
// error is returned, in particular when the HTTP status is not OK or when the
// JSON parsing fails.
func (c *JSONClient) PostAndParse(ctx context.Context, path string, req, rsp interface{}) (*http.Response, []byte, error) {
if ctx == nil {
return nil, nil, errors.New("context.Context required")
}
// Build a POST request with JSON body.
postBody, err := json.Marshal(req)
if err != nil {
return nil, nil, err
}
fullURI := fmt.Sprintf("%s%s", c.uri, path)
httpReq, err := http.NewRequest(http.MethodPost, fullURI, bytes.NewReader(postBody))
if err != nil {
return nil, nil, err
}
httpReq.Header.Set("Content-Type", "application/json")
httpRsp, err := ctxhttp.Do(ctx, c.httpClient, httpReq)
// Read all of the body, if there is one, so that the http.Client can do Keep-Alive.
var body []byte
if httpRsp != nil {
body, err = ioutil.ReadAll(httpRsp.Body)
httpRsp.Body.Close()
}
if err != nil {
return httpRsp, body, err
}
if httpRsp.StatusCode == http.StatusOK {
if err = json.Unmarshal(body, &rsp); err != nil {
return httpRsp, body, err
}
}
return httpRsp, body, nil
}
// waitForBackoff blocks until the defined backoff period has elapsed or the context has expired;
// if the backoff's not-before time is already in the past it returns immediately.
func (c *JSONClient) waitForBackoff(ctx context.Context) error {
dur := time.Until(c.backoff.until().Add(time.Millisecond * time.Duration(rand.Intn(int(maxJitter.Seconds()*1000)))))
if dur < 0 {
dur = 0
}
backoffTimer := time.NewTimer(dur)
select {
case <-ctx.Done():
return ctx.Err()
case <-backoffTimer.C:
}
return nil
}
// PostAndParseWithRetry makes an HTTP POST call, but retries (with backoff) on
// retriable errors; the caller should set a deadline on the provided context
// to prevent infinite retries. Return values are as for PostAndParse.
func (c *JSONClient) PostAndParseWithRetry(ctx context.Context, path string, req, rsp interface{}) (*http.Response, []byte, error) {
if ctx == nil {
return nil, nil, errors.New("context.Context required")
}
for {
httpRsp, body, err := c.PostAndParse(ctx, path, req, rsp)
if err != nil {
// Don't retry context errors.
if err == context.Canceled || err == context.DeadlineExceeded {
return nil, nil, err
}
wait := c.backoff.set(nil)
c.logger.Printf("Request failed, backing-off for %s: %s", wait, err)
} else {
switch {
case httpRsp.StatusCode == http.StatusOK:
return httpRsp, body, nil
case httpRsp.StatusCode == http.StatusRequestTimeout:
// Request timeout, retry immediately
c.logger.Printf("Request timed out, retrying immediately")
case httpRsp.StatusCode == http.StatusServiceUnavailable:
var backoff *time.Duration
// Retry-After may be either a number of seconds as an int or an RFC 1123
// date string (RFC 7231 Section 7.1.3)
if retryAfter := httpRsp.Header.Get("Retry-After"); retryAfter != "" {
if seconds, err := strconv.Atoi(retryAfter); err == nil {
b := time.Duration(seconds) * time.Second
backoff = &b
} else if date, err := time.Parse(time.RFC1123, retryAfter); err == nil {
b := date.Sub(time.Now())
backoff = &b
}
}
wait := c.backoff.set(backoff)
c.logger.Printf("Request failed, backing-off for %s: got HTTP status %s", wait, httpRsp.Status)
default:
return httpRsp, body, fmt.Errorf("got HTTP Status %q", httpRsp.Status)
}
}
if err := c.waitForBackoff(ctx); err != nil {
return nil, nil, err
}
}
}
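
Because PostAndParseWithRetry loops until it succeeds or the context ends, callers normally bound it with a deadline. A rough sketch, assuming a hypothetical log base URI (the empty chain is a placeholder):

```
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	ct "github.com/google/certificate-transparency-go"
	"github.com/google/certificate-transparency-go/jsonclient"
)

func main() {
	jc, err := jsonclient.New("https://ct.example.com/log", nil, jsonclient.Options{})
	if err != nil {
		log.Fatal(err)
	}
	// Without this deadline, a permanently unavailable server would be retried forever.
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	req := ct.AddChainRequest{Chain: [][]byte{ /* DER certificates elided */ }}
	var rsp ct.AddChainResponse
	httpRsp, body, err := jc.PostAndParseWithRetry(ctx, ct.AddChainPath, &req, &rsp)
	if err != nil {
		log.Fatalf("add-chain failed: %v", err)
	}
	fmt.Println(httpRsp.Status, len(body))
}
```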

View file

@ -0,0 +1,255 @@
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package ct
import (
"crypto"
"crypto/sha256"
"encoding/json"
"fmt"
"strings"
"github.com/google/certificate-transparency-go/tls"
"github.com/google/certificate-transparency-go/x509"
)
// SerializeSCTSignatureInput serializes the passed in sct and log entry into
// the correct format for signing.
func SerializeSCTSignatureInput(sct SignedCertificateTimestamp, entry LogEntry) ([]byte, error) {
switch sct.SCTVersion {
case V1:
input := CertificateTimestamp{
SCTVersion: sct.SCTVersion,
SignatureType: CertificateTimestampSignatureType,
Timestamp: sct.Timestamp,
EntryType: entry.Leaf.TimestampedEntry.EntryType,
Extensions: sct.Extensions,
}
switch entry.Leaf.TimestampedEntry.EntryType {
case X509LogEntryType:
input.X509Entry = entry.Leaf.TimestampedEntry.X509Entry
case PrecertLogEntryType:
input.PrecertEntry = &PreCert{
IssuerKeyHash: entry.Leaf.TimestampedEntry.PrecertEntry.IssuerKeyHash,
TBSCertificate: entry.Leaf.TimestampedEntry.PrecertEntry.TBSCertificate,
}
case XJSONLogEntryType:
input.JSONEntry = entry.Leaf.TimestampedEntry.JSONEntry
default:
return nil, fmt.Errorf("unsupported entry type %s", entry.Leaf.TimestampedEntry.EntryType)
}
return tls.Marshal(input)
default:
return nil, fmt.Errorf("unknown SCT version %d", sct.SCTVersion)
}
}
// SerializeSTHSignatureInput serializes the passed in STH into the correct
// format for signing.
func SerializeSTHSignatureInput(sth SignedTreeHead) ([]byte, error) {
switch sth.Version {
case V1:
if len(sth.SHA256RootHash) != crypto.SHA256.Size() {
return nil, fmt.Errorf("invalid TreeHash length, got %d expected %d", len(sth.SHA256RootHash), crypto.SHA256.Size())
}
input := TreeHeadSignature{
Version: sth.Version,
SignatureType: TreeHashSignatureType,
Timestamp: sth.Timestamp,
TreeSize: sth.TreeSize,
SHA256RootHash: sth.SHA256RootHash,
}
return tls.Marshal(input)
default:
return nil, fmt.Errorf("unsupported STH version %d", sth.Version)
}
}
// CreateX509MerkleTreeLeaf generates a MerkleTreeLeaf for an X509 cert
func CreateX509MerkleTreeLeaf(cert ASN1Cert, timestamp uint64) *MerkleTreeLeaf {
return &MerkleTreeLeaf{
Version: V1,
LeafType: TimestampedEntryLeafType,
TimestampedEntry: &TimestampedEntry{
Timestamp: timestamp,
EntryType: X509LogEntryType,
X509Entry: &cert,
},
}
}
// CreateJSONMerkleTreeLeaf creates the merkle tree leaf for json data.
func CreateJSONMerkleTreeLeaf(data interface{}, timestamp uint64) *MerkleTreeLeaf {
jsonData, err := json.Marshal(AddJSONRequest{Data: data})
if err != nil {
return nil
}
// Match the JSON serialization implemented by json-c
jsonStr := strings.Replace(string(jsonData), ":", ": ", -1)
jsonStr = strings.Replace(jsonStr, ",", ", ", -1)
jsonStr = strings.Replace(jsonStr, "{", "{ ", -1)
jsonStr = strings.Replace(jsonStr, "}", " }", -1)
jsonStr = strings.Replace(jsonStr, "/", `\/`, -1)
// TODO: Pending google/certificate-transparency#1243, replace with
// ObjectHash once supported by CT server.
return &MerkleTreeLeaf{
Version: V1,
LeafType: TimestampedEntryLeafType,
TimestampedEntry: &TimestampedEntry{
Timestamp: timestamp,
EntryType: XJSONLogEntryType,
JSONEntry: &JSONDataEntry{Data: []byte(jsonStr)},
},
}
}
// MerkleTreeLeafFromRawChain generates a MerkleTreeLeaf from a chain (in DER-encoded form) and timestamp.
func MerkleTreeLeafFromRawChain(rawChain []ASN1Cert, etype LogEntryType, timestamp uint64) (*MerkleTreeLeaf, error) {
// Need at most the first 3 certificates of the chain
count := 3
if count > len(rawChain) {
count = len(rawChain)
}
chain := make([]*x509.Certificate, count)
for i := range chain {
cert, err := x509.ParseCertificate(rawChain[i].Data)
if err != nil {
return nil, fmt.Errorf("failed to parse chain[%d] cert: %v", i, err)
}
chain[i] = cert
}
return MerkleTreeLeafFromChain(chain, etype, timestamp)
}
// MerkleTreeLeafFromChain generates a MerkleTreeLeaf from a chain and timestamp.
func MerkleTreeLeafFromChain(chain []*x509.Certificate, etype LogEntryType, timestamp uint64) (*MerkleTreeLeaf, error) {
leaf := MerkleTreeLeaf{
Version: V1,
LeafType: TimestampedEntryLeafType,
TimestampedEntry: &TimestampedEntry{
EntryType: etype,
Timestamp: timestamp,
},
}
if etype == X509LogEntryType {
leaf.TimestampedEntry.X509Entry = &ASN1Cert{Data: chain[0].Raw}
return &leaf, nil
}
if etype != PrecertLogEntryType {
return nil, fmt.Errorf("unknown LogEntryType %d", etype)
}
// Pre-certs are more complicated. First, parse the leaf pre-cert and its
// putative issuer.
if len(chain) < 2 {
return nil, fmt.Errorf("no issuer cert available for precert leaf building")
}
issuer := chain[1]
cert := chain[0]
var preIssuer *x509.Certificate
if IsPreIssuer(issuer) {
// Replace the cert's issuance information with details from the pre-issuer.
preIssuer = issuer
// The issuer of the pre-cert is not going to be the issuer of the final
// cert. Change to use the final issuer's key hash.
if len(chain) < 3 {
return nil, fmt.Errorf("no issuer cert available for pre-issuer")
}
issuer = chain[2]
}
// Next, post-process the DER-encoded TBSCertificate, to remove the CT poison
// extension and possibly update the issuer field.
defangedTBS, err := x509.BuildPrecertTBS(cert.RawTBSCertificate, preIssuer)
if err != nil {
return nil, fmt.Errorf("failed to remove poison extension: %v", err)
}
leaf.TimestampedEntry.EntryType = PrecertLogEntryType
leaf.TimestampedEntry.PrecertEntry = &PreCert{
IssuerKeyHash: sha256.Sum256(issuer.RawSubjectPublicKeyInfo),
TBSCertificate: defangedTBS,
}
return &leaf, nil
}
// IsPreIssuer indicates whether a certificate is a pre-cert issuer with the specific
// certificate transparency extended key usage.
func IsPreIssuer(issuer *x509.Certificate) bool {
for _, eku := range issuer.ExtKeyUsage {
if eku == x509.ExtKeyUsageCertificateTransparency {
return true
}
}
return false
}
// LogEntryFromLeaf converts a LeafEntry object (which has the raw leaf data after JSON parsing)
// into a LogEntry object (which includes x509.Certificate objects, after TLS and ASN.1 parsing).
// Note that this function may return a valid LogEntry object and a non-nil error value, when
// the error indicates a non-fatal parsing error (of type x509.NonFatalErrors).
func LogEntryFromLeaf(index int64, leafEntry *LeafEntry) (*LogEntry, error) {
var leaf MerkleTreeLeaf
if rest, err := tls.Unmarshal(leafEntry.LeafInput, &leaf); err != nil {
return nil, fmt.Errorf("failed to unmarshal MerkleTreeLeaf for index %d: %v", index, err)
} else if len(rest) > 0 {
return nil, fmt.Errorf("trailing data (%d bytes) after MerkleTreeLeaf for index %d", len(rest), index)
}
var err error
entry := LogEntry{Index: index, Leaf: leaf}
switch leaf.TimestampedEntry.EntryType {
case X509LogEntryType:
var certChain CertificateChain
if rest, err := tls.Unmarshal(leafEntry.ExtraData, &certChain); err != nil {
return nil, fmt.Errorf("failed to unmarshal ExtraData for index %d: %v", index, err)
} else if len(rest) > 0 {
return nil, fmt.Errorf("trailing data (%d bytes) after CertificateChain for index %d", len(rest), index)
}
entry.Chain = certChain.Entries
entry.X509Cert, err = leaf.X509Certificate()
if _, ok := err.(x509.NonFatalErrors); !ok && err != nil {
return nil, fmt.Errorf("failed to parse certificate in MerkleTreeLeaf for index %d: %v", index, err)
}
case PrecertLogEntryType:
var precertChain PrecertChainEntry
if rest, err := tls.Unmarshal(leafEntry.ExtraData, &precertChain); err != nil {
return nil, fmt.Errorf("failed to unmarshal PrecertChainEntry for index %d: %v", index, err)
} else if len(rest) > 0 {
return nil, fmt.Errorf("trailing data (%d bytes) after PrecertChainEntry for index %d", len(rest), index)
}
entry.Chain = precertChain.CertificateChain
var tbsCert *x509.Certificate
tbsCert, err = leaf.Precertificate()
if _, ok := err.(x509.NonFatalErrors); !ok && err != nil {
return nil, fmt.Errorf("failed to parse precertificate in MerkleTreeLeaf for index %d: %v", index, err)
}
entry.Precert = &Precertificate{
Submitted: precertChain.PreCertificate,
IssuerKeyHash: leaf.TimestampedEntry.PrecertEntry.IssuerKeyHash,
TBSCertificate: tbsCert,
}
default:
return nil, fmt.Errorf("saw unknown entry type at index %d: %v", index, leaf.TimestampedEntry.EntryType)
}
// err may hold a x509.NonFatalErrors object.
return &entry, err
}
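
A small sketch of the X.509 path through MerkleTreeLeafFromRawChain, using a throwaway self-signed certificate generated with the standard library; the pre-certificate path additionally needs the issuer (and possibly a pre-issuer) in the chain.

```
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	stdx509 "crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"log"
	"math/big"
	"time"

	ct "github.com/google/certificate-transparency-go"
)

func main() {
	// Generate a throwaway self-signed certificate with the standard library.
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := stdx509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "example.test"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
	}
	der, err := stdx509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	// Build the leaf; the timestamp is milliseconds since the Unix epoch.
	ts := uint64(time.Now().UnixNano() / int64(time.Millisecond))
	leaf, err := ct.MerkleTreeLeafFromRawChain([]ct.ASN1Cert{{Data: der}}, ct.X509LogEntryType, ts)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(leaf.TimestampedEntry.EntryType)
}
```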

View file

@ -1,3 +1,17 @@
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package ct
import (
@ -6,14 +20,13 @@ import (
"crypto/elliptic"
"crypto/rsa"
"crypto/sha256"
"crypto/x509"
"encoding/asn1"
"encoding/pem"
"errors"
"flag"
"fmt"
"log"
"math/big"
"github.com/google/certificate-transparency-go/tls"
"github.com/google/certificate-transparency-go/x509"
)
var allowVerificationWithNonCompliantKeys = flag.Bool("allow_verification_with_non_compliant_keys", false,
@ -64,61 +77,18 @@ func NewSignatureVerifier(pk crypto.PublicKey) (*SignatureVerifier, error) {
}, nil
}
// verifySignature verifies that the passed in signature over data was created by our PublicKey.
// Currently, only SHA256 is supported as a HashAlgorithm, and only ECDSA and RSA signatures are supported.
func (s SignatureVerifier) verifySignature(data []byte, sig DigitallySigned) error {
if sig.HashAlgorithm != SHA256 {
return fmt.Errorf("unsupported HashAlgorithm in signature: %v", sig.HashAlgorithm)
}
hasherType := crypto.SHA256
hasher := hasherType.New()
if _, err := hasher.Write(data); err != nil {
return fmt.Errorf("failed to write to hasher: %v", err)
}
hash := hasher.Sum([]byte{})
switch sig.SignatureAlgorithm {
case RSA:
rsaKey, ok := s.pubKey.(*rsa.PublicKey)
if !ok {
return fmt.Errorf("cannot verify RSA signature with %T key", s.pubKey)
}
if err := rsa.VerifyPKCS1v15(rsaKey, hasherType, hash, sig.Signature); err != nil {
return fmt.Errorf("failed to verify rsa signature: %v", err)
}
case ECDSA:
ecdsaKey, ok := s.pubKey.(*ecdsa.PublicKey)
if !ok {
return fmt.Errorf("cannot verify ECDSA signature with %T key", s.pubKey)
}
var ecdsaSig struct {
R, S *big.Int
}
rest, err := asn1.Unmarshal(sig.Signature, &ecdsaSig)
if err != nil {
return fmt.Errorf("failed to unmarshal ECDSA signature: %v", err)
}
if len(rest) != 0 {
log.Printf("Garbage following signature %v", rest)
}
if !ecdsa.Verify(ecdsaKey, hash, ecdsaSig.R, ecdsaSig.S) {
return errors.New("failed to verify ecdsa signature")
}
default:
return fmt.Errorf("unsupported signature type %v", sig.SignatureAlgorithm)
}
return nil
// VerifySignature verifies the given signature sig matches the data.
func (s SignatureVerifier) VerifySignature(data []byte, sig tls.DigitallySigned) error {
return tls.VerifySignature(s.pubKey, data, sig)
}
// VerifySCTSignature verifies that the SCT's signature is valid for the given LogEntry
// VerifySCTSignature verifies that the SCT's signature is valid for the given LogEntry.
func (s SignatureVerifier) VerifySCTSignature(sct SignedCertificateTimestamp, entry LogEntry) error {
sctData, err := SerializeSCTSignatureInput(sct, entry)
if err != nil {
return err
}
return s.verifySignature(sctData, sct.Signature)
return s.VerifySignature(sctData, tls.DigitallySigned(sct.Signature))
}
// VerifySTHSignature verifies that the STH's signature is valid.
@ -127,5 +97,5 @@ func (s SignatureVerifier) VerifySTHSignature(sth SignedTreeHead) error {
if err != nil {
return err
}
return s.verifySignature(sthData, sth.TreeHeadSignature)
return s.VerifySignature(sthData, tls.DigitallySigned(sth.TreeHeadSignature))
}

View file

@ -0,0 +1,152 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package tls
import (
"crypto"
"crypto/dsa"
"crypto/ecdsa"
_ "crypto/md5" // For registration side-effect
"crypto/rand"
"crypto/rsa"
_ "crypto/sha1" // For registration side-effect
_ "crypto/sha256" // For registration side-effect
_ "crypto/sha512" // For registration side-effect
"errors"
"fmt"
"log"
"math/big"
"github.com/google/certificate-transparency-go/asn1"
)
type dsaSig struct {
R, S *big.Int
}
func generateHash(algo HashAlgorithm, data []byte) ([]byte, crypto.Hash, error) {
var hashType crypto.Hash
switch algo {
case MD5:
hashType = crypto.MD5
case SHA1:
hashType = crypto.SHA1
case SHA224:
hashType = crypto.SHA224
case SHA256:
hashType = crypto.SHA256
case SHA384:
hashType = crypto.SHA384
case SHA512:
hashType = crypto.SHA512
default:
return nil, hashType, fmt.Errorf("unsupported Algorithm.Hash in signature: %v", algo)
}
hasher := hashType.New()
if _, err := hasher.Write(data); err != nil {
return nil, hashType, fmt.Errorf("failed to write to hasher: %v", err)
}
return hasher.Sum([]byte{}), hashType, nil
}
// VerifySignature verifies that the passed in signature over data was created by the given PublicKey.
func VerifySignature(pubKey crypto.PublicKey, data []byte, sig DigitallySigned) error {
hash, hashType, err := generateHash(sig.Algorithm.Hash, data)
if err != nil {
return err
}
switch sig.Algorithm.Signature {
case RSA:
rsaKey, ok := pubKey.(*rsa.PublicKey)
if !ok {
return fmt.Errorf("cannot verify RSA signature with %T key", pubKey)
}
if err := rsa.VerifyPKCS1v15(rsaKey, hashType, hash, sig.Signature); err != nil {
return fmt.Errorf("failed to verify rsa signature: %v", err)
}
case DSA:
dsaKey, ok := pubKey.(*dsa.PublicKey)
if !ok {
return fmt.Errorf("cannot verify DSA signature with %T key", pubKey)
}
var dsaSig dsaSig
rest, err := asn1.Unmarshal(sig.Signature, &dsaSig)
if err != nil {
return fmt.Errorf("failed to unmarshal DSA signature: %v", err)
}
if len(rest) != 0 {
log.Printf("Garbage following signature %v", rest)
}
if dsaSig.R.Sign() <= 0 || dsaSig.S.Sign() <= 0 {
return errors.New("DSA signature contained zero or negative values")
}
if !dsa.Verify(dsaKey, hash, dsaSig.R, dsaSig.S) {
return errors.New("failed to verify DSA signature")
}
case ECDSA:
ecdsaKey, ok := pubKey.(*ecdsa.PublicKey)
if !ok {
return fmt.Errorf("cannot verify ECDSA signature with %T key", pubKey)
}
var ecdsaSig dsaSig
rest, err := asn1.Unmarshal(sig.Signature, &ecdsaSig)
if err != nil {
return fmt.Errorf("failed to unmarshal ECDSA signature: %v", err)
}
if len(rest) != 0 {
log.Printf("Garbage following signature %v", rest)
}
if ecdsaSig.R.Sign() <= 0 || ecdsaSig.S.Sign() <= 0 {
return errors.New("ECDSA signature contained zero or negative values")
}
if !ecdsa.Verify(ecdsaKey, hash, ecdsaSig.R, ecdsaSig.S) {
return errors.New("failed to verify ECDSA signature")
}
default:
return fmt.Errorf("unsupported Algorithm.Signature in signature: %v", sig.Algorithm.Hash)
}
return nil
}
// CreateSignature builds a signature over the given data using the specified hash algorithm and private key.
func CreateSignature(privKey crypto.PrivateKey, hashAlgo HashAlgorithm, data []byte) (DigitallySigned, error) {
var sig DigitallySigned
sig.Algorithm.Hash = hashAlgo
hash, hashType, err := generateHash(sig.Algorithm.Hash, data)
if err != nil {
return sig, err
}
switch privKey := privKey.(type) {
case rsa.PrivateKey:
sig.Algorithm.Signature = RSA
sig.Signature, err = rsa.SignPKCS1v15(rand.Reader, &privKey, hashType, hash)
return sig, err
case ecdsa.PrivateKey:
sig.Algorithm.Signature = ECDSA
var ecdsaSig dsaSig
ecdsaSig.R, ecdsaSig.S, err = ecdsa.Sign(rand.Reader, &privKey, hash)
if err != nil {
return sig, err
}
sig.Signature, err = asn1.Marshal(ecdsaSig)
return sig, err
default:
return sig, fmt.Errorf("unsupported private key type %T", privKey)
}
}
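
A minimal round trip with the two helpers above: sign a message with a fresh ECDSA key (note that CreateSignature takes the private key by value) and verify the resulting DigitallySigned structure.

```
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"
	"log"

	"github.com/google/certificate-transparency-go/tls"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}
	msg := []byte("hello, tls")
	ds, err := tls.CreateSignature(*key, tls.SHA256, msg)
	if err != nil {
		log.Fatal(err)
	}
	if err := tls.VerifySignature(key.Public(), msg, ds); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("verified %v/%v signature\n", ds.Algorithm.Hash, ds.Algorithm.Signature)
}
```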

View file

@ -0,0 +1,711 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package tls implements functionality for dealing with TLS-encoded data,
// as defined in RFC 5246. This includes parsing and generation of TLS-encoded
// data, together with utility functions for dealing with the DigitallySigned
// TLS type.
package tls
import (
"bytes"
"encoding/binary"
"fmt"
"reflect"
"strconv"
"strings"
)
// This file holds utility functions for TLS encoding/decoding data
// as per RFC 5246 section 4.
// A structuralError suggests that the TLS data is valid, but the Go type
// which is receiving it doesn't match.
type structuralError struct {
field string
msg string
}
func (e structuralError) Error() string {
var prefix string
if e.field != "" {
prefix = e.field + ": "
}
return "tls: structure error: " + prefix + e.msg
}
// A syntaxError suggests that the TLS data is invalid.
type syntaxError struct {
field string
msg string
}
func (e syntaxError) Error() string {
var prefix string
if e.field != "" {
prefix = e.field + ": "
}
return "tls: syntax error: " + prefix + e.msg
}
// Uint24 is an unsigned 3-byte integer.
type Uint24 uint32
// Enum is an unsigned integer.
type Enum uint64
var (
uint8Type = reflect.TypeOf(uint8(0))
uint16Type = reflect.TypeOf(uint16(0))
uint24Type = reflect.TypeOf(Uint24(0))
uint32Type = reflect.TypeOf(uint32(0))
uint64Type = reflect.TypeOf(uint64(0))
enumType = reflect.TypeOf(Enum(0))
)
// Unmarshal parses the TLS-encoded data in b and uses the reflect package to
// fill in an arbitrary value pointed at by val. Because Unmarshal uses the
// reflect package, the structs being written to must use exported fields
// (upper case names).
//
// The mappings between TLS types and Go types is as follows; some fields
// must have tags (to indicate their encoded size).
//
// TLS Go Required Tags
// opaque byte / uint8
// uint8 byte / uint8
// uint16 uint16
// uint24 tls.Uint24
// uint32 uint32
// uint64 uint64
// enum tls.Enum size:S or maxval:N
// Type<N,M> []Type minlen:N,maxlen:M
// opaque[N] [N]byte / [N]uint8
// uint8[N] [N]byte / [N]uint8
// struct { } struct { }
// select(T) {
// case e1: Type *T selector:Field,val:e1
// }
//
// TLS variants (RFC 5246 s4.6.1) are only supported when the value of the
// associated enumeration type is available earlier in the same enclosing
// struct, and each possible variant is marked with a selector tag (to
// indicate which field selects the variants) and a val tag (to indicate
// what value of the selector picks this particular field).
//
// For example, a TLS structure:
//
// enum { e1(1), e2(2) } EnumType;
// struct {
// EnumType sel;
// select(sel) {
// case e1: uint16
// case e2: uint32
// } data;
// } VariantItem;
//
// would have a corresponding Go type:
//
// type VariantItem struct {
// Sel tls.Enum `tls:"maxval:2"`
// Data16 *uint16 `tls:"selector:Sel,val:1"`
// Data32 *uint32 `tls:"selector:Sel,val:2"`
// }
//
// TLS fixed-length vectors of types other than opaque or uint8 are not supported.
//
// For TLS variable-length vectors that are themselves used in other vectors,
// create a single-field structure to represent the inner type. For example, for:
//
// opaque InnerType<1..65535>;
// struct {
// InnerType inners<1,65535>;
// } Something;
//
// convert to:
//
// type InnerType struct {
// Val []byte `tls:"minlen:1,maxlen:65535"`
// }
// type Something struct {
// Inners []InnerType `tls:"minlen:1,maxlen:65535"`
// }
//
// If the encoded value does not fit in the Go type, Unmarshal returns a parse error.
func Unmarshal(b []byte, val interface{}) ([]byte, error) {
return UnmarshalWithParams(b, val, "")
}
// UnmarshalWithParams allows field parameters to be specified for the
// top-level element. The form of the params is the same as the field tags.
func UnmarshalWithParams(b []byte, val interface{}, params string) ([]byte, error) {
info, err := fieldTagToFieldInfo(params, "")
if err != nil {
return nil, err
}
// The passed in interface{} is a pointer (to allow the value to be written
// to); extract the pointed-to object as a reflect.Value, so parseField
// can do various introspection things.
v := reflect.ValueOf(val).Elem()
offset, err := parseField(v, b, 0, info)
if err != nil {
return nil, err
}
return b[offset:], nil
}
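
To make the mapping above concrete, here is a tiny sketch decoding a TLS opaque vector with a one-byte length prefix into a tagged Go struct; the Blob type is illustrative and not part of this package.

```
package main

import (
	"fmt"
	"log"

	"github.com/google/certificate-transparency-go/tls"
)

// Blob corresponds to the TLS definition:  opaque data<0..255>;
type Blob struct {
	Data []byte `tls:"minlen:0,maxlen:255"`
}

func main() {
	var b Blob
	// 0x02 is the one-byte length prefix, "hi" is the payload, 0xff is left over.
	rest, err := tls.Unmarshal([]byte{0x02, 'h', 'i', 0xff}, &b)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("data=%q rest=%v\n", b.Data, rest) // data="hi" rest=[255]
}
```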
// Return the number of bytes needed to encode values up to (and including) x.
func byteCount(x uint64) uint {
switch {
case x < 0x100:
return 1
case x < 0x10000:
return 2
case x < 0x1000000:
return 3
case x < 0x100000000:
return 4
case x < 0x10000000000:
return 5
case x < 0x1000000000000:
return 6
case x < 0x100000000000000:
return 7
default:
return 8
}
}
type fieldInfo struct {
count uint // Number of bytes
countSet bool
minlen uint64 // Only relevant for slices
maxlen uint64 // Only relevant for slices
selector string // Only relevant for select sub-values
val uint64 // Only relevant for select sub-values
name string // Used for better error messages
}
func (i *fieldInfo) fieldName() string {
if i == nil {
return ""
}
return i.name
}
// Given a tag string, return a fieldInfo describing the field.
func fieldTagToFieldInfo(str string, name string) (*fieldInfo, error) {
var info *fieldInfo
// Iterate over clauses in the tag, ignoring any that don't parse properly.
for _, part := range strings.Split(str, ",") {
switch {
case strings.HasPrefix(part, "maxval:"):
if v, err := strconv.ParseUint(part[7:], 10, 64); err == nil {
info = &fieldInfo{count: byteCount(v), countSet: true}
}
case strings.HasPrefix(part, "size:"):
if sz, err := strconv.ParseUint(part[5:], 10, 32); err == nil {
info = &fieldInfo{count: uint(sz), countSet: true}
}
case strings.HasPrefix(part, "maxlen:"):
v, err := strconv.ParseUint(part[7:], 10, 64)
if err != nil {
continue
}
if info == nil {
info = &fieldInfo{}
}
info.count = byteCount(v)
info.countSet = true
info.maxlen = v
case strings.HasPrefix(part, "minlen:"):
v, err := strconv.ParseUint(part[7:], 10, 64)
if err != nil {
continue
}
if info == nil {
info = &fieldInfo{}
}
info.minlen = v
case strings.HasPrefix(part, "selector:"):
if info == nil {
info = &fieldInfo{}
}
info.selector = part[9:]
case strings.HasPrefix(part, "val:"):
v, err := strconv.ParseUint(part[4:], 10, 64)
if err != nil {
continue
}
if info == nil {
info = &fieldInfo{}
}
info.val = v
}
}
if info != nil {
info.name = name
if info.selector == "" {
if info.count < 1 {
return nil, structuralError{name, "field of unknown size in " + str}
} else if info.count > 8 {
return nil, structuralError{name, "specified size too large in " + str}
} else if info.minlen > info.maxlen {
return nil, structuralError{name, "specified length range inverted in " + str}
} else if info.val > 0 {
return nil, structuralError{name, "specified selector value but not field in " + str}
}
}
} else if name != "" {
info = &fieldInfo{name: name}
}
return info, nil
}
// Check that a value fits into a field described by a fieldInfo structure.
func (i fieldInfo) check(val uint64, fldName string) error {
if val >= (1 << (8 * i.count)) {
return structuralError{fldName, fmt.Sprintf("value %d too large for size", val)}
}
if i.maxlen != 0 {
if val < i.minlen {
return structuralError{fldName, fmt.Sprintf("value %d too small for minimum %d", val, i.minlen)}
}
if val > i.maxlen {
return structuralError{fldName, fmt.Sprintf("value %d too large for maximum %d", val, i.maxlen)}
}
}
return nil
}
// readVarUint reads a big-endian unsigned integer of the given size in
// bytes.
func readVarUint(data []byte, info *fieldInfo) (uint64, error) {
if info == nil || !info.countSet {
return 0, structuralError{info.fieldName(), "no field size information available"}
}
if len(data) < int(info.count) {
return 0, syntaxError{info.fieldName(), "truncated variable-length integer"}
}
var result uint64
for i := uint(0); i < info.count; i++ {
result = (result << 8) | uint64(data[i])
}
if err := info.check(result, info.name); err != nil {
return 0, err
}
return result, nil
}
// parseField is the main parsing function. Given a byte slice and an offset
// (in bytes) into the data, it will try to parse a suitable TLS-encoded value out
// and store it in the given Value.
func parseField(v reflect.Value, data []byte, initOffset int, info *fieldInfo) (int, error) {
offset := initOffset
rest := data[offset:]
fieldType := v.Type()
// First look for known fixed types.
switch fieldType {
case uint8Type:
if len(rest) < 1 {
return offset, syntaxError{info.fieldName(), "truncated uint8"}
}
v.SetUint(uint64(rest[0]))
offset++
return offset, nil
case uint16Type:
if len(rest) < 2 {
return offset, syntaxError{info.fieldName(), "truncated uint16"}
}
v.SetUint(uint64(binary.BigEndian.Uint16(rest)))
offset += 2
return offset, nil
case uint24Type:
if len(rest) < 3 {
return offset, syntaxError{info.fieldName(), "truncated uint24"}
}
		v.SetUint(uint64(rest[0])<<16 | uint64(rest[1])<<8 | uint64(rest[2]))
offset += 3
return offset, nil
case uint32Type:
if len(rest) < 4 {
return offset, syntaxError{info.fieldName(), "truncated uint32"}
}
v.SetUint(uint64(binary.BigEndian.Uint32(rest)))
offset += 4
return offset, nil
case uint64Type:
if len(rest) < 8 {
return offset, syntaxError{info.fieldName(), "truncated uint64"}
}
v.SetUint(uint64(binary.BigEndian.Uint64(rest)))
offset += 8
return offset, nil
}
// Now deal with user-defined types.
switch v.Kind() {
case enumType.Kind():
// Assume that anything of the same kind as Enum is an Enum, so that
// users can alias types of their own to Enum.
val, err := readVarUint(rest, info)
if err != nil {
return offset, err
}
v.SetUint(val)
offset += int(info.count)
return offset, nil
case reflect.Struct:
structType := fieldType
// TLS includes a select(Enum) {..} construct, where the value of an enum
// indicates which variant field is present (like a C union). We require
// that the enum value be an earlier field in the same structure (the selector),
// and that each of the possible variant destination fields be pointers.
// So the Go mapping looks like:
// type variantType struct {
// Which tls.Enum `tls:"size:1"` // this is the selector
// Val1 *type1 `tls:"selector:Which,val:1"` // this is a destination
		// Val2 *type2 `tls:"selector:Which,val:2"` // this is a destination
// }
// To deal with this, we track any enum-like fields and their values...
enums := make(map[string]uint64)
// .. and we track which selector names we've seen (in the destination field tags),
// and whether a destination for that selector has been chosen.
selectorSeen := make(map[string]bool)
for i := 0; i < structType.NumField(); i++ {
// Find information about this field.
tag := structType.Field(i).Tag.Get("tls")
fieldInfo, err := fieldTagToFieldInfo(tag, structType.Field(i).Name)
if err != nil {
return offset, err
}
destination := v.Field(i)
if fieldInfo.selector != "" {
// This is a possible select(Enum) destination, so first check that the referenced
// selector field has already been seen earlier in the struct.
choice, ok := enums[fieldInfo.selector]
if !ok {
return offset, structuralError{fieldInfo.name, "selector not seen: " + fieldInfo.selector}
}
if structType.Field(i).Type.Kind() != reflect.Ptr {
return offset, structuralError{fieldInfo.name, "choice field not a pointer type"}
}
// Is this the first mention of the selector field name? If so, remember it.
seen, ok := selectorSeen[fieldInfo.selector]
if !ok {
selectorSeen[fieldInfo.selector] = false
}
if choice != fieldInfo.val {
// This destination field was not the chosen one, so make it nil (we checked
// it was a pointer above).
v.Field(i).Set(reflect.Zero(structType.Field(i).Type))
continue
}
if seen {
// We already saw a different destination field receive the value for this
// selector value, which indicates a badly annotated structure.
return offset, structuralError{fieldInfo.name, "duplicate selector value for " + fieldInfo.selector}
}
selectorSeen[fieldInfo.selector] = true
// Make an object of the pointed-to type and parse into that.
v.Field(i).Set(reflect.New(structType.Field(i).Type.Elem()))
destination = v.Field(i).Elem()
}
offset, err = parseField(destination, data, offset, fieldInfo)
if err != nil {
return offset, err
}
// Remember any possible tls.Enum values encountered in case they are selectors.
if structType.Field(i).Type.Kind() == enumType.Kind() {
enums[structType.Field(i).Name] = v.Field(i).Uint()
}
}
		// Now that we have seen all fields in the structure, check that every select(Enum) {..}
		// selector field found a destination to put its data in.
for selector, seen := range selectorSeen {
if !seen {
return offset, syntaxError{info.fieldName(), selector + ": unhandled value for selector"}
}
}
return offset, nil
case reflect.Array:
datalen := v.Len()
if datalen > len(rest) {
return offset, syntaxError{info.fieldName(), "truncated array"}
}
inner := rest[:datalen]
offset += datalen
if fieldType.Elem().Kind() != reflect.Uint8 {
// Only byte/uint8 arrays are supported
return offset, structuralError{info.fieldName(), "unsupported array type: " + v.Type().String()}
}
reflect.Copy(v, reflect.ValueOf(inner))
return offset, nil
case reflect.Slice:
sliceType := fieldType
// Slices represent variable-length vectors, which are prefixed by a length field.
// The fieldInfo indicates the size of that length field.
varlen, err := readVarUint(rest, info)
if err != nil {
return offset, err
}
datalen := int(varlen)
offset += int(info.count)
rest = rest[info.count:]
if datalen > len(rest) {
return offset, syntaxError{info.fieldName(), "truncated slice"}
}
inner := rest[:datalen]
offset += datalen
if fieldType.Elem().Kind() == reflect.Uint8 {
// Fast version for []byte
v.Set(reflect.MakeSlice(sliceType, datalen, datalen))
reflect.Copy(v, reflect.ValueOf(inner))
return offset, nil
}
v.Set(reflect.MakeSlice(sliceType, 0, datalen))
single := reflect.New(sliceType.Elem())
for innerOffset := 0; innerOffset < len(inner); {
var err error
innerOffset, err = parseField(single.Elem(), inner, innerOffset, nil)
if err != nil {
return offset, err
}
v.Set(reflect.Append(v, single.Elem()))
}
return offset, nil
default:
return offset, structuralError{info.fieldName(), fmt.Sprintf("unsupported type: %s of kind %s", fieldType, v.Kind())}
}
}
// Marshal returns the TLS encoding of val.
func Marshal(val interface{}) ([]byte, error) {
return MarshalWithParams(val, "")
}
// MarshalWithParams returns the TLS encoding of val, and allows field
// parameters to be specified for the top-level element. The form
// of the params is the same as the field tags.
func MarshalWithParams(val interface{}, params string) ([]byte, error) {
info, err := fieldTagToFieldInfo(params, "")
if err != nil {
return nil, err
}
var out bytes.Buffer
v := reflect.ValueOf(val)
if err := marshalField(&out, v, info); err != nil {
return nil, err
}
return out.Bytes(), err
}
func marshalField(out *bytes.Buffer, v reflect.Value, info *fieldInfo) error {
var prefix string
if info != nil && len(info.name) > 0 {
prefix = info.name + ": "
}
fieldType := v.Type()
// First look for known fixed types.
switch fieldType {
case uint8Type:
out.WriteByte(byte(v.Uint()))
return nil
case uint16Type:
scratch := make([]byte, 2)
binary.BigEndian.PutUint16(scratch, uint16(v.Uint()))
out.Write(scratch)
return nil
case uint24Type:
i := v.Uint()
if i > 0xffffff {
return structuralError{info.fieldName(), fmt.Sprintf("uint24 overflow %d", i)}
}
scratch := make([]byte, 4)
binary.BigEndian.PutUint32(scratch, uint32(i))
out.Write(scratch[1:])
return nil
case uint32Type:
scratch := make([]byte, 4)
binary.BigEndian.PutUint32(scratch, uint32(v.Uint()))
out.Write(scratch)
return nil
case uint64Type:
scratch := make([]byte, 8)
binary.BigEndian.PutUint64(scratch, uint64(v.Uint()))
out.Write(scratch)
return nil
}
// Now deal with user-defined types.
switch v.Kind() {
case enumType.Kind():
i := v.Uint()
if info == nil {
return structuralError{info.fieldName(), "enum field tag missing"}
}
if err := info.check(i, prefix); err != nil {
return err
}
scratch := make([]byte, 8)
binary.BigEndian.PutUint64(scratch, uint64(i))
out.Write(scratch[(8 - info.count):])
return nil
case reflect.Struct:
structType := fieldType
enums := make(map[string]uint64) // Values of any Enum fields
		// The comment on parseField() describes the mapping of the TLS select(Enum) {..} construct;
// here we have selector and source (rather than destination) fields.
// Track which selector names we've seen (in the source field tags), and whether a source
// value for that selector has been processed.
selectorSeen := make(map[string]bool)
for i := 0; i < structType.NumField(); i++ {
// Find information about this field.
tag := structType.Field(i).Tag.Get("tls")
fieldInfo, err := fieldTagToFieldInfo(tag, structType.Field(i).Name)
if err != nil {
return err
}
source := v.Field(i)
if fieldInfo.selector != "" {
// This field is a possible source for a select(Enum) {..}. First check
// the selector field name has been seen.
choice, ok := enums[fieldInfo.selector]
if !ok {
return structuralError{fieldInfo.name, "selector not seen: " + fieldInfo.selector}
}
if structType.Field(i).Type.Kind() != reflect.Ptr {
return structuralError{fieldInfo.name, "choice field not a pointer type"}
}
// Is this the first mention of the selector field name? If so, remember it.
seen, ok := selectorSeen[fieldInfo.selector]
if !ok {
selectorSeen[fieldInfo.selector] = false
}
if choice != fieldInfo.val {
// This source was not chosen; police that it should be nil.
if v.Field(i).Pointer() != uintptr(0) {
return structuralError{fieldInfo.name, "unchosen field is non-nil"}
}
continue
}
if seen {
// We already saw a different source field generate the value for this
// selector value, which indicates a badly annotated structure.
return structuralError{fieldInfo.name, "duplicate selector value for " + fieldInfo.selector}
}
selectorSeen[fieldInfo.selector] = true
if v.Field(i).Pointer() == uintptr(0) {
return structuralError{fieldInfo.name, "chosen field is nil"}
}
// Marshal from the pointed-to source object.
source = v.Field(i).Elem()
}
var fieldData bytes.Buffer
if err := marshalField(&fieldData, source, fieldInfo); err != nil {
return err
}
out.Write(fieldData.Bytes())
// Remember any tls.Enum values encountered in case they are selectors.
if structType.Field(i).Type.Kind() == enumType.Kind() {
enums[structType.Field(i).Name] = v.Field(i).Uint()
}
}
		// Now that we have seen all fields in the structure, check that every select(Enum) {..}
		// selector field found a source field to get its data from.
for selector, seen := range selectorSeen {
if !seen {
return syntaxError{info.fieldName(), selector + ": unhandled value for selector"}
}
}
return nil
case reflect.Array:
datalen := v.Len()
arrayType := fieldType
if arrayType.Elem().Kind() != reflect.Uint8 {
// Only byte/uint8 arrays are supported
return structuralError{info.fieldName(), "unsupported array type"}
}
bytes := make([]byte, datalen)
for i := 0; i < datalen; i++ {
bytes[i] = uint8(v.Index(i).Uint())
}
_, err := out.Write(bytes)
return err
case reflect.Slice:
if info == nil {
return structuralError{info.fieldName(), "slice field tag missing"}
}
sliceType := fieldType
if sliceType.Elem().Kind() == reflect.Uint8 {
// Fast version for []byte: first write the length as info.count bytes.
datalen := v.Len()
scratch := make([]byte, 8)
binary.BigEndian.PutUint64(scratch, uint64(datalen))
out.Write(scratch[(8 - info.count):])
if err := info.check(uint64(datalen), prefix); err != nil {
return err
}
// Then just write the data.
bytes := make([]byte, datalen)
for i := 0; i < datalen; i++ {
bytes[i] = uint8(v.Index(i).Uint())
}
_, err := out.Write(bytes)
return err
}
// General version: use a separate Buffer to write the slice entries into.
var innerBuf bytes.Buffer
for i := 0; i < v.Len(); i++ {
if err := marshalField(&innerBuf, v.Index(i), nil); err != nil {
return err
}
}
// Now insert (and check) the size.
size := uint64(innerBuf.Len())
if err := info.check(size, prefix); err != nil {
return err
}
scratch := make([]byte, 8)
binary.BigEndian.PutUint64(scratch, size)
out.Write(scratch[(8 - info.count):])
// Then copy the data.
_, err := out.Write(innerBuf.Bytes())
return err
default:
return structuralError{info.fieldName(), fmt.Sprintf("unsupported type: %s of kind %s", fieldType, v.Kind())}
}
}
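// exampleMessage and exampleRoundTrip are an illustrative sketch, not part of
// the upstream package: they show how the tls field tags described above drive
// the encoding. Data gets a one-byte length prefix because maxlen:255 fits in
// a single byte, so the encoded form of the value below is
// 0x00 0x07 (Count), 0x02 (length), 'h', 'i'.
type exampleMessage struct {
	Count uint16
	Data  []byte `tls:"minlen:0,maxlen:255"`
}

func exampleRoundTrip() error {
	in := exampleMessage{Count: 7, Data: []byte("hi")}
	b, err := Marshal(in)
	if err != nil {
		return err
	}
	var out exampleMessage
	_, err = Unmarshal(b, &out)
	return err
}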

View file

@ -0,0 +1,96 @@
// Copyright 2016 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package tls
import "fmt"
// DigitallySigned gives information about a signature, including the algorithm used
// and the signature value. Defined in RFC 5246 s4.7.
type DigitallySigned struct {
Algorithm SignatureAndHashAlgorithm
Signature []byte `tls:"minlen:0,maxlen:65535"`
}
func (d DigitallySigned) String() string {
return fmt.Sprintf("Signature: HashAlgo=%v SignAlgo=%v Value=%x", d.Algorithm.Hash, d.Algorithm.Signature, d.Signature)
}
// SignatureAndHashAlgorithm gives information about the algorithms used for a
// signature. Defined in RFC 5246 s7.4.1.4.1.
type SignatureAndHashAlgorithm struct {
Hash HashAlgorithm `tls:"maxval:255"`
Signature SignatureAlgorithm `tls:"maxval:255"`
}
// HashAlgorithm enum from RFC 5246 s7.4.1.4.1.
type HashAlgorithm Enum
// HashAlgorithm constants from RFC 5246 s7.4.1.4.1.
const (
None HashAlgorithm = 0
MD5 HashAlgorithm = 1
SHA1 HashAlgorithm = 2
SHA224 HashAlgorithm = 3
SHA256 HashAlgorithm = 4
SHA384 HashAlgorithm = 5
SHA512 HashAlgorithm = 6
)
func (h HashAlgorithm) String() string {
switch h {
case None:
return "None"
case MD5:
return "MD5"
case SHA1:
return "SHA1"
case SHA224:
return "SHA224"
case SHA256:
return "SHA256"
case SHA384:
return "SHA384"
case SHA512:
return "SHA512"
default:
return fmt.Sprintf("UNKNOWN(%d)", h)
}
}
// SignatureAlgorithm enum from RFC 5246 s7.4.1.4.1.
type SignatureAlgorithm Enum
// SignatureAlgorithm constants from RFC 5246 s7.4.1.4.1.
const (
Anonymous SignatureAlgorithm = 0
RSA SignatureAlgorithm = 1
DSA SignatureAlgorithm = 2
ECDSA SignatureAlgorithm = 3
)
func (s SignatureAlgorithm) String() string {
switch s {
case Anonymous:
return "Anonymous"
case RSA:
return "RSA"
case DSA:
return "DSA"
case ECDSA:
return "ECDSA"
default:
return fmt.Sprintf("UNKNOWN(%d)", s)
}
}
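// exampleSHA256ECDSA is an illustrative sketch, not part of the upstream file:
// the algorithm pair describing an ECDSA signature computed over SHA-256.
var exampleSHA256ECDSA = SignatureAndHashAlgorithm{Hash: SHA256, Signature: ECDSA}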

View file

@ -0,0 +1,460 @@
// Copyright 2015 Google Inc. All Rights Reserved.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// Package ct holds core types and utilities for Certificate Transparency.
package ct
import (
"crypto/sha256"
"encoding/base64"
"encoding/json"
"fmt"
"github.com/google/certificate-transparency-go/tls"
"github.com/google/certificate-transparency-go/x509"
)
///////////////////////////////////////////////////////////////////////////////
// The following structures represent those outlined in RFC6962; any section
// numbers mentioned refer to that RFC.
///////////////////////////////////////////////////////////////////////////////
// LogEntryType represents the LogEntryType enum from section 3.1:
// enum { x509_entry(0), precert_entry(1), (65535) } LogEntryType;
type LogEntryType tls.Enum // tls:"maxval:65535"
// LogEntryType constants from section 3.1.
const (
X509LogEntryType LogEntryType = 0
PrecertLogEntryType LogEntryType = 1
XJSONLogEntryType LogEntryType = 0x8000 // Experimental. Don't rely on this!
)
func (e LogEntryType) String() string {
switch e {
case X509LogEntryType:
return "X509LogEntryType"
case PrecertLogEntryType:
return "PrecertLogEntryType"
case XJSONLogEntryType:
return "XJSONLogEntryType"
default:
return fmt.Sprintf("UnknownEntryType(%d)", e)
}
}
// MerkleLeafType represents the MerkleLeafType enum from section 3.4:
// enum { timestamped_entry(0), (255) } MerkleLeafType;
type MerkleLeafType tls.Enum // tls:"maxval:255"
// TimestampedEntryLeafType is the only defined MerkleLeafType constant from section 3.4.
const TimestampedEntryLeafType MerkleLeafType = 0 // Entry type for an SCT
func (m MerkleLeafType) String() string {
switch m {
case TimestampedEntryLeafType:
return "TimestampedEntryLeafType"
default:
return fmt.Sprintf("UnknownLeafType(%d)", m)
}
}
// Version represents the Version enum from section 3.2:
// enum { v1(0), (255) } Version;
type Version tls.Enum // tls:"maxval:255"
// CT Version constants from section 3.2.
const (
V1 Version = 0
)
func (v Version) String() string {
switch v {
case V1:
return "V1"
default:
return fmt.Sprintf("UnknownVersion(%d)", v)
}
}
// SignatureType differentiates STH signatures from SCT signatures, see section 3.2.
// enum { certificate_timestamp(0), tree_hash(1), (255) } SignatureType;
type SignatureType tls.Enum // tls:"maxval:255"
// SignatureType constants from section 3.2.
const (
CertificateTimestampSignatureType SignatureType = 0
TreeHashSignatureType SignatureType = 1
)
func (st SignatureType) String() string {
switch st {
case CertificateTimestampSignatureType:
return "CertificateTimestamp"
case TreeHashSignatureType:
return "TreeHash"
default:
return fmt.Sprintf("UnknownSignatureType(%d)", st)
}
}
// ASN1Cert type for holding the raw DER bytes of an ASN.1 Certificate
// (section 3.1).
type ASN1Cert struct {
Data []byte `tls:"minlen:1,maxlen:16777215"`
}
// LogID holds the hash of the Log's public key (section 3.2).
// TODO(pphaneuf): Users should be migrated to the one in the logid package.
type LogID struct {
KeyID [sha256.Size]byte
}
// PreCert represents a Precertificate (section 3.2).
type PreCert struct {
IssuerKeyHash [sha256.Size]byte
TBSCertificate []byte `tls:"minlen:1,maxlen:16777215"` // DER-encoded TBSCertificate
}
// CTExtensions is a representation of the raw bytes of any CtExtension
// structure (see section 3.2).
// nolint: golint
type CTExtensions []byte // tls:"minlen:0,maxlen:65535"`
// MerkleTreeNode represents an internal node in the CT tree.
type MerkleTreeNode []byte
// ConsistencyProof represents a CT consistency proof (see sections 2.1.2 and
// 4.4).
type ConsistencyProof []MerkleTreeNode
// AuditPath represents a CT inclusion proof (see sections 2.1.1 and 4.5).
type AuditPath []MerkleTreeNode
// LeafInput represents a serialized MerkleTreeLeaf structure.
type LeafInput []byte
// DigitallySigned is a local alias for tls.DigitallySigned so that we can
// attach a MarshalJSON method.
type DigitallySigned tls.DigitallySigned
// FromBase64String populates the DigitallySigned structure from the base64 data passed in.
// Returns an error if the base64 data is invalid.
func (d *DigitallySigned) FromBase64String(b64 string) error {
raw, err := base64.StdEncoding.DecodeString(b64)
if err != nil {
return fmt.Errorf("failed to unbase64 DigitallySigned: %v", err)
}
var ds tls.DigitallySigned
if rest, err := tls.Unmarshal(raw, &ds); err != nil {
return fmt.Errorf("failed to unmarshal DigitallySigned: %v", err)
} else if len(rest) > 0 {
return fmt.Errorf("trailing data (%d bytes) after DigitallySigned", len(rest))
}
*d = DigitallySigned(ds)
return nil
}
// Base64String returns the base64 representation of the DigitallySigned struct.
func (d DigitallySigned) Base64String() (string, error) {
b, err := tls.Marshal(d)
if err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(b), nil
}
// MarshalJSON implements the json.Marshaller interface.
func (d DigitallySigned) MarshalJSON() ([]byte, error) {
b64, err := d.Base64String()
if err != nil {
return []byte{}, err
}
return []byte(`"` + b64 + `"`), nil
}
// UnmarshalJSON implements the json.Unmarshaler interface.
func (d *DigitallySigned) UnmarshalJSON(b []byte) error {
var content string
if err := json.Unmarshal(b, &content); err != nil {
return fmt.Errorf("failed to unmarshal DigitallySigned: %v", err)
}
return d.FromBase64String(content)
}
// LogEntry represents the (parsed) contents of an entry in a CT log. This is described
// in section 3.1, but note that this structure does *not* match the TLS structure
// defined there (the TLS structure is never used directly in RFC6962).
type LogEntry struct {
Index int64
Leaf MerkleTreeLeaf
// Exactly one of the following three fields should be non-empty.
X509Cert *x509.Certificate // Parsed X.509 certificate
Precert *Precertificate // Extracted precertificate
JSONData []byte
// Chain holds the issuing certificate chain, starting with the
// issuer of the leaf certificate / pre-certificate.
Chain []ASN1Cert
}
// PrecertChainEntry holds a precertificate together with a validation chain
// for it; see section 3.1.
type PrecertChainEntry struct {
PreCertificate ASN1Cert `tls:"minlen:1,maxlen:16777215"`
CertificateChain []ASN1Cert `tls:"minlen:0,maxlen:16777215"`
}
// CertificateChain holds a chain of certificates, as returned as extra data
// for get-entries (section 4.6).
type CertificateChain struct {
Entries []ASN1Cert `tls:"minlen:0,maxlen:16777215"`
}
// JSONDataEntry holds arbitrary data.
type JSONDataEntry struct {
Data []byte `tls:"minlen:0,maxlen:1677215"`
}
// SHA256Hash represents the output from the SHA256 hash function.
type SHA256Hash [sha256.Size]byte
// FromBase64String populates the SHA256 struct with the contents of the base64 data passed in.
func (s *SHA256Hash) FromBase64String(b64 string) error {
bs, err := base64.StdEncoding.DecodeString(b64)
if err != nil {
return fmt.Errorf("failed to unbase64 LogID: %v", err)
}
if len(bs) != sha256.Size {
return fmt.Errorf("invalid SHA256 length, expected 32 but got %d", len(bs))
}
copy(s[:], bs)
return nil
}
// Base64String returns the base64 representation of this SHA256Hash.
func (s SHA256Hash) Base64String() string {
return base64.StdEncoding.EncodeToString(s[:])
}
// MarshalJSON implements the json.Marshaller interface for SHA256Hash.
func (s SHA256Hash) MarshalJSON() ([]byte, error) {
return []byte(`"` + s.Base64String() + `"`), nil
}
// UnmarshalJSON implements the json.Unmarshaller interface.
func (s *SHA256Hash) UnmarshalJSON(b []byte) error {
var content string
if err := json.Unmarshal(b, &content); err != nil {
return fmt.Errorf("failed to unmarshal SHA256Hash: %v", err)
}
return s.FromBase64String(content)
}
// SignedTreeHead represents the structure returned by the get-sth CT method
// after base64 decoding; see sections 3.5 and 4.3.
type SignedTreeHead struct {
Version Version `json:"sth_version"` // The version of the protocol to which the STH conforms
TreeSize uint64 `json:"tree_size"` // The number of entries in the new tree
Timestamp uint64 `json:"timestamp"` // The time at which the STH was created
SHA256RootHash SHA256Hash `json:"sha256_root_hash"` // The root hash of the log's Merkle tree
TreeHeadSignature DigitallySigned `json:"tree_head_signature"` // Log's signature over a TLS-encoded TreeHeadSignature
LogID SHA256Hash `json:"log_id"` // The SHA256 hash of the log's public key
}
// TreeHeadSignature holds the data over which the signature in an STH is
// generated; see section 3.5
type TreeHeadSignature struct {
Version Version `tls:"maxval:255"`
SignatureType SignatureType `tls:"maxval:255"` // == TreeHashSignatureType
Timestamp uint64
TreeSize uint64
SHA256RootHash SHA256Hash
}
// SignedCertificateTimestamp represents the structure returned by the
// add-chain and add-pre-chain methods after base64 decoding; see sections
// 3.2, 4.1 and 4.2.
type SignedCertificateTimestamp struct {
SCTVersion Version `tls:"maxval:255"`
LogID LogID
Timestamp uint64
Extensions CTExtensions `tls:"minlen:0,maxlen:65535"`
Signature DigitallySigned // Signature over TLS-encoded CertificateTimestamp
}
// CertificateTimestamp is the collection of data that the signature in an
// SCT is over; see section 3.2.
type CertificateTimestamp struct {
SCTVersion Version `tls:"maxval:255"`
SignatureType SignatureType `tls:"maxval:255"`
Timestamp uint64
EntryType LogEntryType `tls:"maxval:65535"`
X509Entry *ASN1Cert `tls:"selector:EntryType,val:0"`
PrecertEntry *PreCert `tls:"selector:EntryType,val:1"`
JSONEntry *JSONDataEntry `tls:"selector:EntryType,val:32768"`
Extensions CTExtensions `tls:"minlen:0,maxlen:65535"`
}
func (s SignedCertificateTimestamp) String() string {
return fmt.Sprintf("{Version:%d LogId:%s Timestamp:%d Extensions:'%s' Signature:%v}", s.SCTVersion,
base64.StdEncoding.EncodeToString(s.LogID.KeyID[:]),
s.Timestamp,
s.Extensions,
s.Signature)
}
// TimestampedEntry is part of the MerkleTreeLeaf structure; see section 3.4.
type TimestampedEntry struct {
Timestamp uint64
EntryType LogEntryType `tls:"maxval:65535"`
X509Entry *ASN1Cert `tls:"selector:EntryType,val:0"`
PrecertEntry *PreCert `tls:"selector:EntryType,val:1"`
JSONEntry *JSONDataEntry `tls:"selector:EntryType,val:32768"`
Extensions CTExtensions `tls:"minlen:0,maxlen:65535"`
}
// MerkleTreeLeaf represents the deserialized structure of the hash input for the
// leaves of a log's Merkle tree; see section 3.4.
type MerkleTreeLeaf struct {
Version Version `tls:"maxval:255"`
LeafType MerkleLeafType `tls:"maxval:255"`
TimestampedEntry *TimestampedEntry `tls:"selector:LeafType,val:0"`
}
// Precertificate represents the parsed CT Precertificate structure.
type Precertificate struct {
// DER-encoded pre-certificate as originally added, which includes a
// poison extension and a signature generated over the pre-cert by
// the pre-cert issuer (which might differ from the issuer of the final
// cert, see RFC6962 s3.1).
Submitted ASN1Cert
// SHA256 hash of the issuing key
IssuerKeyHash [sha256.Size]byte
// Parsed TBSCertificate structure, held in an x509.Certificate for convenience.
TBSCertificate *x509.Certificate
}
// X509Certificate returns the X.509 Certificate contained within the
// MerkleTreeLeaf.
func (m *MerkleTreeLeaf) X509Certificate() (*x509.Certificate, error) {
if m.TimestampedEntry.EntryType != X509LogEntryType {
return nil, fmt.Errorf("cannot call X509Certificate on a MerkleTreeLeaf that is not an X509 entry")
}
return x509.ParseCertificate(m.TimestampedEntry.X509Entry.Data)
}
// Precertificate returns the X.509 Precertificate contained within the MerkleTreeLeaf.
//
// The returned precertificate is embedded in an x509.Certificate, but is in the
// form stored internally in the log rather than the original submitted form
// (i.e. it does not include the poison extension, and any changes to reflect the
// final certificate's issuer have been made; see x509.BuildPrecertTBS).
func (m *MerkleTreeLeaf) Precertificate() (*x509.Certificate, error) {
if m.TimestampedEntry.EntryType != PrecertLogEntryType {
return nil, fmt.Errorf("cannot call Precertificate on a MerkleTreeLeaf that is not a precert entry")
}
return x509.ParseTBSCertificate(m.TimestampedEntry.PrecertEntry.TBSCertificate)
}
// URI paths for Log requests; see section 4.
const (
AddChainPath = "/ct/v1/add-chain"
AddPreChainPath = "/ct/v1/add-pre-chain"
GetSTHPath = "/ct/v1/get-sth"
GetEntriesPath = "/ct/v1/get-entries"
GetProofByHashPath = "/ct/v1/get-proof-by-hash"
GetSTHConsistencyPath = "/ct/v1/get-sth-consistency"
GetRootsPath = "/ct/v1/get-roots"
GetEntryAndProofPath = "/ct/v1/get-entry-and-proof"
AddJSONPath = "/ct/v1/add-json" // Experimental addition
)
// AddChainRequest represents the JSON request body sent to the add-chain and
// add-pre-chain POST methods from sections 4.1 and 4.2.
type AddChainRequest struct {
Chain [][]byte `json:"chain"`
}
// AddChainResponse represents the JSON response to the add-chain and
// add-pre-chain POST methods.
// An SCT represents a Log's promise to integrate a [pre-]certificate into the
// log within a defined period of time.
type AddChainResponse struct {
SCTVersion Version `json:"sct_version"` // SCT structure version
ID []byte `json:"id"` // Log ID
Timestamp uint64 `json:"timestamp"` // Timestamp of issuance
Extensions string `json:"extensions"` // Holder for any CT extensions
Signature []byte `json:"signature"` // Log signature for this SCT
}
// AddJSONRequest represents the JSON request body sent to the add-json POST method.
// The corresponding response re-uses AddChainResponse.
// This is an experimental addition not covered by RFC6962.
type AddJSONRequest struct {
Data interface{} `json:"data"`
}
// GetSTHResponse represents the JSON response to the get-sth GET method from section 4.3.
type GetSTHResponse struct {
TreeSize uint64 `json:"tree_size"` // Number of certs in the current tree
Timestamp uint64 `json:"timestamp"` // Time that the tree was created
SHA256RootHash []byte `json:"sha256_root_hash"` // Root hash of the tree
TreeHeadSignature []byte `json:"tree_head_signature"` // Log signature for this STH
}
// GetSTHConsistencyResponse represents the JSON response to the get-sth-consistency
// GET method from section 4.4. (The corresponding GET request has parameters 'first' and
// 'second'.)
type GetSTHConsistencyResponse struct {
Consistency [][]byte `json:"consistency"`
}
// GetProofByHashResponse represents the JSON response to the get-proof-by-hash GET
// method from section 4.5. (The corresponding GET request has parameters 'hash'
// and 'tree_size'.)
type GetProofByHashResponse struct {
LeafIndex int64 `json:"leaf_index"` // The 0-based index of the end entity corresponding to the "hash" parameter.
AuditPath [][]byte `json:"audit_path"` // An array of base64-encoded Merkle Tree nodes proving the inclusion of the chosen certificate.
}
// LeafEntry represents a leaf in the Log's Merkle tree, as returned by the get-entries
// GET method from section 4.6.
type LeafEntry struct {
// LeafInput is a TLS-encoded MerkleTreeLeaf
LeafInput []byte `json:"leaf_input"`
// ExtraData holds (unsigned) extra data, normally the cert validation chain.
ExtraData []byte `json:"extra_data"`
}
// GetEntriesResponse represents the JSON response to the get-entries GET method
// from section 4.6.
type GetEntriesResponse struct {
Entries []LeafEntry `json:"entries"` // the list of returned entries
}
// GetRootsResponse represents the JSON response to the get-roots GET method from section 4.7.
type GetRootsResponse struct {
Certificates []string `json:"certificates"`
}
// GetEntryAndProofResponse represents the JSON response to the get-entry-and-proof
// GET method from section 4.8. (The corresponding GET request has parameters 'leaf_index'
// and 'tree_size'.)
type GetEntryAndProofResponse struct {
LeafInput []byte `json:"leaf_input"` // the entry itself
ExtraData []byte `json:"extra_data"` // any chain provided when the entry was added to the log
AuditPath [][]byte `json:"audit_path"` // the corresponding proof
}
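// exampleParseLeaf is an illustrative sketch, not part of the upstream file:
// it decodes the TLS-encoded MerkleTreeLeaf carried in a get-entries LeafEntry
// and, for X.509 entries, extracts the end-entity certificate.
func exampleParseLeaf(entry LeafEntry) (*x509.Certificate, error) {
	var leaf MerkleTreeLeaf
	if rest, err := tls.Unmarshal(entry.LeafInput, &leaf); err != nil {
		return nil, fmt.Errorf("failed to unmarshal MerkleTreeLeaf: %v", err)
	} else if len(rest) > 0 {
		return nil, fmt.Errorf("trailing data (%d bytes) after MerkleTreeLeaf", len(rest))
	}
	if leaf.TimestampedEntry == nil || leaf.TimestampedEntry.EntryType != X509LogEntryType {
		return nil, fmt.Errorf("leaf is not an X.509 entry")
	}
	return leaf.X509Certificate()
}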

View file

@ -6,6 +6,8 @@ package x509
import (
"encoding/pem"
"errors"
"runtime"
)
// CertPool is a set of certificates.
@ -18,12 +20,24 @@ type CertPool struct {
// NewCertPool returns a new, empty CertPool.
func NewCertPool() *CertPool {
return &CertPool{
make(map[string][]int),
make(map[string][]int),
nil,
bySubjectKeyId: make(map[string][]int),
byName: make(map[string][]int),
}
}
// SystemCertPool returns a copy of the system cert pool.
//
// Any mutations to the returned pool are not written to disk and do
// not affect any other pool.
func SystemCertPool() (*CertPool, error) {
if runtime.GOOS == "windows" {
// Issue 16736, 18609:
return nil, errors.New("crypto/x509: system root pool is not available on Windows")
}
return loadSystemRoots()
}
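// exampleRootPool is an illustrative sketch, not part of this change: callers
// typically start from the system roots and fall back to an empty pool on
// platforms where none are available.
func exampleRootPool() *CertPool {
	roots, err := SystemCertPool()
	if err != nil || roots == nil {
		return NewCertPool()
	}
	return roots
}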
// findVerifiedParents attempts to find certificates in s which have signed the
// given certificate. If any candidates were rejected then errCert will be set
// to one of them, arbitrarily, and err will contain the reason that it was
@ -52,6 +66,21 @@ func (s *CertPool) findVerifiedParents(cert *Certificate) (parents []int, errCer
return
}
func (s *CertPool) contains(cert *Certificate) bool {
if s == nil {
return false
}
candidates := s.byName[string(cert.RawSubject)]
for _, c := range candidates {
if s.certs[c].Equal(cert) {
return true
}
}
return false
}
// AddCert adds a certificate to a pool.
func (s *CertPool) AddCert(cert *Certificate) {
if cert == nil {
@ -59,10 +88,8 @@ func (s *CertPool) AddCert(cert *Certificate) {
}
// Check that the certificate isn't being added twice.
for _, c := range s.certs {
if c.Equal(cert) {
return
}
if s.contains(cert) {
return
}
n := len(s.certs)
@ -77,7 +104,7 @@ func (s *CertPool) AddCert(cert *Certificate) {
}
// AppendCertsFromPEM attempts to parse a series of PEM encoded certificates.
// It appends any certificates found to s and returns true if any certificates
// It appends any certificates found to s and reports whether any certificates
// were successfully parsed.
//
// On many Linux systems, /etc/ssl/cert.pem will contain the system wide set
@ -107,10 +134,10 @@ func (s *CertPool) AppendCertsFromPEM(pemCerts []byte) (ok bool) {
// Subjects returns a list of the DER-encoded subjects of
// all of the certificates in the pool.
func (s *CertPool) Subjects() (res [][]byte) {
res = make([][]byte, len(s.certs))
func (s *CertPool) Subjects() [][]byte {
res := make([][]byte, len(s.certs))
for i, c := range s.certs {
res[i] = c.RawSubject
}
return
return res
}

View file

@ -0,0 +1,230 @@
package x509
import (
"bytes"
"fmt"
"strconv"
"strings"
)
// Error implements the error interface and describes a single error in an X.509 certificate or CRL.
type Error struct {
ID ErrorID
Category ErrCategory
Summary string
Field string
SpecRef string
SpecText string
// Fatal indicates that parsing has been aborted.
Fatal bool
}
func (err Error) Error() string {
var msg bytes.Buffer
if err.ID != ErrInvalidID {
if err.Fatal {
msg.WriteRune('E')
} else {
msg.WriteRune('W')
}
msg.WriteString(fmt.Sprintf("%03d: ", err.ID))
}
msg.WriteString(err.Summary)
return msg.String()
}
// VerboseError creates a more verbose error string, including spec details.
func (err Error) VerboseError() string {
var msg bytes.Buffer
msg.WriteString(err.Error())
if len(err.Field) > 0 || err.Category != UnknownCategory || len(err.SpecRef) > 0 || len(err.SpecText) > 0 {
msg.WriteString(" (")
needSep := false
if len(err.Field) > 0 {
msg.WriteString(err.Field)
needSep = true
}
if err.Category != UnknownCategory {
if needSep {
msg.WriteString(": ")
}
msg.WriteString(err.Category.String())
needSep = true
}
if len(err.SpecRef) > 0 {
if needSep {
msg.WriteString(": ")
}
msg.WriteString(err.SpecRef)
needSep = true
}
if len(err.SpecText) > 0 {
if needSep {
if len(err.SpecRef) > 0 {
msg.WriteString(", ")
} else {
msg.WriteString(": ")
}
}
msg.WriteString("'")
msg.WriteString(err.SpecText)
msg.WriteString("'")
}
msg.WriteString(")")
}
return msg.String()
}
// ErrCategory indicates the category of an x509.Error.
type ErrCategory int
// ErrCategory values.
const (
UnknownCategory ErrCategory = iota
// Errors in ASN.1 encoding
InvalidASN1Encoding
InvalidASN1Content
InvalidASN1DER
// Errors in ASN.1 relative to schema
InvalidValueRange
InvalidASN1Type
UnexpectedAdditionalData
// Errors in X.509
PoorlyFormedCertificate // Fails a SHOULD clause
MalformedCertificate // Fails a MUST clause
PoorlyFormedCRL // Fails a SHOULD clause
MalformedCRL // Fails a MUST clause
// Errors relative to CA/Browser Forum guidelines
BaselineRequirementsFailure
EVRequirementsFailure
// Other errors
InsecureAlgorithm
UnrecognizedValue
)
func (category ErrCategory) String() string {
switch category {
case InvalidASN1Encoding:
return "Invalid ASN.1 encoding"
case InvalidASN1Content:
return "Invalid ASN.1 content"
case InvalidASN1DER:
return "Invalid ASN.1 distinguished encoding"
case InvalidValueRange:
return "Invalid value for range given in schema"
case InvalidASN1Type:
return "Invalid ASN.1 type for schema"
case UnexpectedAdditionalData:
return "Unexpected additional data present"
case PoorlyFormedCertificate:
return "Certificate does not comply with SHOULD clause in spec"
case MalformedCertificate:
return "Certificate does not comply with MUST clause in spec"
case PoorlyFormedCRL:
return "Certificate Revocation List does not comply with SHOULD clause in spec"
case MalformedCRL:
return "Certificate Revocation List does not comply with MUST clause in spec"
case BaselineRequirementsFailure:
return "Certificate does not comply with CA/BF baseline requirements"
case EVRequirementsFailure:
return "Certificate does not comply with CA/BF EV requirements"
case InsecureAlgorithm:
return "Certificate uses an insecure algorithm"
case UnrecognizedValue:
return "Certificate uses an unrecognized value"
default:
return fmt.Sprintf("Unknown (%d)", category)
}
}
// ErrorID is an identifier for an x509.Error, to allow filtering.
type ErrorID int
// Errors implements the error interface and holds a collection of errors found in a certificate or CRL.
type Errors struct {
Errs []Error
}
// Error converts to a string.
func (e *Errors) Error() string {
return e.combineErrors(Error.Error)
}
// VerboseError creates a more verbose error string, including spec details.
func (e *Errors) VerboseError() string {
return e.combineErrors(Error.VerboseError)
}
// Fatal indicates whether e includes a fatal error
func (e *Errors) Fatal() bool {
return (e.FirstFatal() != nil)
}
// Empty indicates whether e has no errors.
func (e *Errors) Empty() bool {
return len(e.Errs) == 0
}
// FirstFatal returns the first fatal error in e, or nil
// if there is no fatal error.
func (e *Errors) FirstFatal() error {
for _, err := range e.Errs {
if err.Fatal {
return err
}
}
return nil
}
// AddID adds the Error identified by the given id to an x509.Errors.
func (e *Errors) AddID(id ErrorID, args ...interface{}) {
e.Errs = append(e.Errs, NewError(id, args...))
}
func (e Errors) combineErrors(errfn func(Error) string) string {
if len(e.Errs) == 0 {
return ""
}
if len(e.Errs) == 1 {
return errfn((e.Errs)[0])
}
var msg bytes.Buffer
msg.WriteString("Errors:")
for _, err := range e.Errs {
msg.WriteString("\n ")
msg.WriteString(errfn(err))
}
return msg.String()
}
// Filter creates a new Errors object with any entries from the filtered
// list of IDs removed.
func (e Errors) Filter(filtered []ErrorID) Errors {
var results Errors
eloop:
for _, v := range e.Errs {
for _, f := range filtered {
if v.ID == f {
				continue eloop
}
}
results.Errs = append(results.Errs, v)
}
return results
}
// ErrorFilter builds a list of error IDs (suitable for use with Errors.Filter) from a comma-separated string.
func ErrorFilter(ignore string) []ErrorID {
var ids []ErrorID
filters := strings.Split(ignore, ",")
for _, f := range filters {
v, err := strconv.Atoi(f)
if err != nil {
continue
}
ids = append(ids, ErrorID(v))
}
return ids
}
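// exampleReport is an illustrative sketch, not part of the upstream file: it
// drops any error IDs named in a comma-separated ignore list (e.g. "3,7") and
// renders whatever remains.
func exampleReport(errs Errors, ignore string) string {
	remaining := errs.Filter(ErrorFilter(ignore))
	if remaining.Empty() {
		return ""
	}
	return remaining.VerboseError()
}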

View file

@ -0,0 +1,302 @@
package x509
import "fmt"
// To preserve error IDs, only append to this list, never insert.
const (
ErrInvalidID ErrorID = iota
ErrInvalidCertList
ErrTrailingCertList
ErrUnexpectedlyCriticalCertListExtension
ErrUnexpectedlyNonCriticalCertListExtension
ErrInvalidCertListAuthKeyID
ErrTrailingCertListAuthKeyID
ErrInvalidCertListIssuerAltName
ErrInvalidCertListCRLNumber
ErrTrailingCertListCRLNumber
ErrNegativeCertListCRLNumber
ErrInvalidCertListDeltaCRL
ErrTrailingCertListDeltaCRL
ErrNegativeCertListDeltaCRL
ErrInvalidCertListIssuingDP
ErrTrailingCertListIssuingDP
ErrCertListIssuingDPMultipleTypes
ErrCertListIssuingDPInvalidFullName
ErrInvalidCertListFreshestCRL
ErrInvalidCertListAuthInfoAccess
ErrTrailingCertListAuthInfoAccess
ErrUnhandledCriticalCertListExtension
ErrUnexpectedlyCriticalRevokedCertExtension
ErrUnexpectedlyNonCriticalRevokedCertExtension
ErrInvalidRevocationReason
ErrTrailingRevocationReason
ErrInvalidRevocationInvalidityDate
ErrTrailingRevocationInvalidityDate
ErrInvalidRevocationIssuer
ErrUnhandledCriticalRevokedCertExtension
ErrMaxID
)
// idToError gives a template x509.Error for each defined ErrorID; the Summary
// field may hold format specifiers that take per-error parameters.
var idToError map[ErrorID]Error
var errorInfo = []Error{
{
ID: ErrInvalidCertList,
Summary: "x509: failed to parse CertificateList: %v",
Field: "CertificateList",
SpecRef: "RFC 5280 s5.1",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrTrailingCertList,
Summary: "x509: trailing data after CertificateList",
Field: "CertificateList",
SpecRef: "RFC 5280 s5.1",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrUnexpectedlyCriticalCertListExtension,
Summary: "x509: certificate list extension %v marked critical but expected to be non-critical",
Field: "tbsCertList.crlExtensions.*.critical",
SpecRef: "RFC 5280 s5.2",
Category: MalformedCRL,
},
{
ID: ErrUnexpectedlyNonCriticalCertListExtension,
Summary: "x509: certificate list extension %v marked non-critical but expected to be critical",
Field: "tbsCertList.crlExtensions.*.critical",
SpecRef: "RFC 5280 s5.2",
Category: MalformedCRL,
},
{
ID: ErrInvalidCertListAuthKeyID,
Summary: "x509: failed to unmarshal certificate-list authority key-id: %v",
Field: "tbsCertList.crlExtensions.*.AuthorityKeyIdentifier",
SpecRef: "RFC 5280 s5.2.1",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrTrailingCertListAuthKeyID,
Summary: "x509: trailing data after certificate list auth key ID",
Field: "tbsCertList.crlExtensions.*.AuthorityKeyIdentifier",
SpecRef: "RFC 5280 s5.2.1",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrInvalidCertListIssuerAltName,
Summary: "x509: failed to parse CRL issuer alt name: %v",
Field: "tbsCertList.crlExtensions.*.IssuerAltName",
SpecRef: "RFC 5280 s5.2.2",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrInvalidCertListCRLNumber,
Summary: "x509: failed to unmarshal certificate-list crl-number: %v",
Field: "tbsCertList.crlExtensions.*.CRLNumber",
SpecRef: "RFC 5280 s5.2.3",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrTrailingCertListCRLNumber,
Summary: "x509: trailing data after certificate list crl-number",
Field: "tbsCertList.crlExtensions.*.CRLNumber",
SpecRef: "RFC 5280 s5.2.3",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrNegativeCertListCRLNumber,
Summary: "x509: negative certificate list crl-number: %d",
Field: "tbsCertList.crlExtensions.*.CRLNumber",
SpecRef: "RFC 5280 s5.2.3",
Category: MalformedCRL,
Fatal: true,
},
{
ID: ErrInvalidCertListDeltaCRL,
Summary: "x509: failed to unmarshal certificate-list delta-crl: %v",
Field: "tbsCertList.crlExtensions.*.BaseCRLNumber",
SpecRef: "RFC 5280 s5.2.4",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrTrailingCertListDeltaCRL,
Summary: "x509: trailing data after certificate list delta-crl",
Field: "tbsCertList.crlExtensions.*.BaseCRLNumber",
SpecRef: "RFC 5280 s5.2.4",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrNegativeCertListDeltaCRL,
Summary: "x509: negative certificate list base-crl-number: %d",
Field: "tbsCertList.crlExtensions.*.BaseCRLNumber",
SpecRef: "RFC 5280 s5.2.4",
Category: MalformedCRL,
Fatal: true,
},
{
ID: ErrInvalidCertListIssuingDP,
Summary: "x509: failed to unmarshal certificate list issuing distribution point: %v",
Field: "tbsCertList.crlExtensions.*.IssuingDistributionPoint",
SpecRef: "RFC 5280 s5.2.5",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrTrailingCertListIssuingDP,
Summary: "x509: trailing data after certificate list issuing distribution point",
Field: "tbsCertList.crlExtensions.*.IssuingDistributionPoint",
SpecRef: "RFC 5280 s5.2.5",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrCertListIssuingDPMultipleTypes,
Summary: "x509: multiple cert types set in issuing-distribution-point: user:%v CA:%v attr:%v",
Field: "tbsCertList.crlExtensions.*.IssuingDistributionPoint",
SpecRef: "RFC 5280 s5.2.5",
SpecText: "at most one of onlyContainsUserCerts, onlyContainsCACerts, and onlyContainsAttributeCerts may be set to TRUE.",
Category: MalformedCRL,
Fatal: true,
},
{
ID: ErrCertListIssuingDPInvalidFullName,
Summary: "x509: failed to parse CRL issuing-distribution-point fullName: %v",
Field: "tbsCertList.crlExtensions.*.IssuingDistributionPoint.distributionPoint",
SpecRef: "RFC 5280 s5.2.5",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrInvalidCertListFreshestCRL,
Summary: "x509: failed to unmarshal certificate list freshestCRL: %v",
Field: "tbsCertList.crlExtensions.*.FreshestCRL",
SpecRef: "RFC 5280 s5.2.6",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrInvalidCertListAuthInfoAccess,
Summary: "x509: failed to unmarshal certificate list authority info access: %v",
Field: "tbsCertList.crlExtensions.*.AuthorityInfoAccess",
SpecRef: "RFC 5280 s5.2.7",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrTrailingCertListAuthInfoAccess,
Summary: "x509: trailing data after certificate list authority info access",
Field: "tbsCertList.crlExtensions.*.AuthorityInfoAccess",
SpecRef: "RFC 5280 s5.2.7",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrUnhandledCriticalCertListExtension,
Summary: "x509: unhandled critical extension in certificate list: %v",
Field: "tbsCertList.revokedCertificates.crlExtensions.*",
SpecRef: "RFC 5280 s5.2",
SpecText: "If a CRL contains a critical extension that the application cannot process, then the application MUST NOT use that CRL to determine the status of certificates.",
Category: MalformedCRL,
Fatal: true,
},
{
ID: ErrUnexpectedlyCriticalRevokedCertExtension,
Summary: "x509: revoked certificate extension %v marked critical but expected to be non-critical",
Field: "tbsCertList.revokedCertificates.crlEntryExtensions.*.critical",
SpecRef: "RFC 5280 s5.3",
Category: MalformedCRL,
},
{
ID: ErrUnexpectedlyNonCriticalRevokedCertExtension,
Summary: "x509: revoked certificate extension %v marked non-critical but expected to be critical",
Field: "tbsCertList.revokedCertificates.crlEntryExtensions.*.critical",
SpecRef: "RFC 5280 s5.3",
Category: MalformedCRL,
},
{
ID: ErrInvalidRevocationReason,
Summary: "x509: failed to parse revocation reason: %v",
Field: "tbsCertList.revokedCertificates.crlEntryExtensions.*.CRLReason",
SpecRef: "RFC 5280 s5.3.1",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrTrailingRevocationReason,
Summary: "x509: trailing data after revoked certificate reason",
Field: "tbsCertList.revokedCertificates.crlEntryExtensions.*.CRLReason",
SpecRef: "RFC 5280 s5.3.1",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrInvalidRevocationInvalidityDate,
Summary: "x509: failed to parse revoked certificate invalidity date: %v",
Field: "tbsCertList.revokedCertificates.crlEntryExtensions.*.InvalidityDate",
SpecRef: "RFC 5280 s5.3.2",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrTrailingRevocationInvalidityDate,
Summary: "x509: trailing data after revoked certificate invalidity date",
Field: "tbsCertList.revokedCertificates.crlEntryExtensions.*.InvalidityDate",
SpecRef: "RFC 5280 s5.3.2",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrInvalidRevocationIssuer,
Summary: "x509: failed to parse revocation issuer %v",
Field: "tbsCertList.revokedCertificates.crlEntryExtensions.*.CertificateIssuer",
SpecRef: "RFC 5280 s5.3.3",
Category: InvalidASN1Content,
Fatal: true,
},
{
ID: ErrUnhandledCriticalRevokedCertExtension,
Summary: "x509: unhandled critical extension in revoked certificate: %v",
Field: "tbsCertList.revokedCertificates.crlEntryExtensions.*",
SpecRef: "RFC 5280 s5.3",
SpecText: "If a CRL contains a critical CRL entry extension that the application cannot process, then the application MUST NOT use that CRL to determine the status of any certificates.",
Category: MalformedCRL,
Fatal: true,
},
}
func init() {
idToError = make(map[ErrorID]Error, len(errorInfo))
for _, info := range errorInfo {
idToError[info.ID] = info
}
}
// NewError builds a new x509.Error based on the template for the given id.
func NewError(id ErrorID, args ...interface{}) Error {
var err Error
if id >= ErrMaxID {
err.ID = id
err.Summary = fmt.Sprintf("Unknown error ID %v: args %+v", id, args)
err.Fatal = true
} else {
err = idToError[id]
err.Summary = fmt.Sprintf(err.Summary, args...)
}
return err
}
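// exampleCheckCRLNumber is an illustrative sketch, not part of the upstream
// file: parsing code accumulates errors by ID, with arguments filling the
// %-style placeholders in the template Summary.
func exampleCheckCRLNumber(crlNumber int64) *Errors {
	var errs Errors
	if crlNumber < 0 {
		errs.AddID(ErrNegativeCertListCRLNumber, crlNumber)
	}
	return &errs
}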

View file

@ -0,0 +1,164 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package x509
import (
"fmt"
"net"
"github.com/google/certificate-transparency-go/asn1"
"github.com/google/certificate-transparency-go/x509/pkix"
)
const (
// GeneralName tag values from RFC 5280, 4.2.1.6
tagOtherName = 0
tagRFC822Name = 1
tagDNSName = 2
tagX400Address = 3
tagDirectoryName = 4
tagEDIPartyName = 5
tagURI = 6
tagIPAddress = 7
tagRegisteredID = 8
)
// OtherName describes a name related to a certificate which is not in one
// of the standard name formats. RFC 5280, 4.2.1.6:
// OtherName ::= SEQUENCE {
// type-id OBJECT IDENTIFIER,
// value [0] EXPLICIT ANY DEFINED BY type-id }
type OtherName struct {
TypeID asn1.ObjectIdentifier
Value asn1.RawValue
}
// GeneralNames holds a collection of names related to a certificate.
type GeneralNames struct {
DNSNames []string
EmailAddresses []string
DirectoryNames []pkix.Name
URIs []string
IPNets []net.IPNet
RegisteredIDs []asn1.ObjectIdentifier
OtherNames []OtherName
}
// Len returns the total number of names in a GeneralNames object.
func (gn GeneralNames) Len() int {
return (len(gn.DNSNames) + len(gn.EmailAddresses) + len(gn.DirectoryNames) +
len(gn.URIs) + len(gn.IPNets) + len(gn.RegisteredIDs) + len(gn.OtherNames))
}
// Empty indicates whether a GeneralNames object is empty.
func (gn GeneralNames) Empty() bool {
return gn.Len() == 0
}
func parseGeneralNames(value []byte, gname *GeneralNames) error {
// RFC 5280, 4.2.1.6
// GeneralNames ::= SEQUENCE SIZE (1..MAX) OF GeneralName
//
// GeneralName ::= CHOICE {
// otherName [0] OtherName,
// rfc822Name [1] IA5String,
// dNSName [2] IA5String,
// x400Address [3] ORAddress,
// directoryName [4] Name,
// ediPartyName [5] EDIPartyName,
// uniformResourceIdentifier [6] IA5String,
// iPAddress [7] OCTET STRING,
// registeredID [8] OBJECT IDENTIFIER }
var seq asn1.RawValue
var rest []byte
if rest, err := asn1.Unmarshal(value, &seq); err != nil {
return fmt.Errorf("x509: failed to parse GeneralNames: %v", err)
} else if len(rest) != 0 {
return fmt.Errorf("x509: trailing data after GeneralNames")
}
if !seq.IsCompound || seq.Tag != asn1.TagSequence || seq.Class != asn1.ClassUniversal {
return fmt.Errorf("x509: failed to parse GeneralNames sequence, tag %+v", seq)
}
rest = seq.Bytes
for len(rest) > 0 {
var err error
rest, err = parseGeneralName(rest, gname, false)
if err != nil {
return fmt.Errorf("x509: failed to parse GeneralName: %v", err)
}
}
return nil
}
func parseGeneralName(data []byte, gname *GeneralNames, withMask bool) ([]byte, error) {
var v asn1.RawValue
var rest []byte
var err error
rest, err = asn1.Unmarshal(data, &v)
if err != nil {
return nil, fmt.Errorf("x509: failed to unmarshal GeneralNames: %v", err)
}
switch v.Tag {
case tagOtherName:
if !v.IsCompound {
return nil, fmt.Errorf("x509: failed to unmarshal GeneralNames.otherName: not compound")
}
var other OtherName
v.FullBytes = append([]byte{}, v.FullBytes...)
v.FullBytes[0] = asn1.TagSequence | 0x20
_, err = asn1.Unmarshal(v.FullBytes, &other)
if err != nil {
return nil, fmt.Errorf("x509: failed to unmarshal GeneralNames.otherName: %v", err)
}
gname.OtherNames = append(gname.OtherNames, other)
case tagRFC822Name:
gname.EmailAddresses = append(gname.EmailAddresses, string(v.Bytes))
case tagDNSName:
dns := string(v.Bytes)
gname.DNSNames = append(gname.DNSNames, dns)
case tagDirectoryName:
var rdnSeq pkix.RDNSequence
if _, err := asn1.Unmarshal(v.Bytes, &rdnSeq); err != nil {
return nil, fmt.Errorf("x509: failed to unmarshal GeneralNames.directoryName: %v", err)
}
var dirName pkix.Name
dirName.FillFromRDNSequence(&rdnSeq)
gname.DirectoryNames = append(gname.DirectoryNames, dirName)
case tagURI:
gname.URIs = append(gname.URIs, string(v.Bytes))
case tagIPAddress:
vlen := len(v.Bytes)
if withMask {
switch vlen {
case (2 * net.IPv4len), (2 * net.IPv6len):
ipNet := net.IPNet{IP: v.Bytes[0 : vlen/2], Mask: v.Bytes[vlen/2:]}
gname.IPNets = append(gname.IPNets, ipNet)
default:
return nil, fmt.Errorf("x509: invalid IP/mask length %d in GeneralNames.iPAddress", vlen)
}
} else {
switch vlen {
case net.IPv4len, net.IPv6len:
ipNet := net.IPNet{IP: v.Bytes}
gname.IPNets = append(gname.IPNets, ipNet)
default:
return nil, fmt.Errorf("x509: invalid IP length %d in GeneralNames.iPAddress", vlen)
}
}
case tagRegisteredID:
var oid asn1.ObjectIdentifier
v.FullBytes = append([]byte{}, v.FullBytes...)
v.FullBytes[0] = asn1.TagOID
_, err = asn1.Unmarshal(v.FullBytes, &oid)
if err != nil {
return nil, fmt.Errorf("x509: failed to unmarshal GeneralNames.registeredID: %v", err)
}
gname.RegisteredIDs = append(gname.RegisteredIDs, oid)
default:
return nil, fmt.Errorf("x509: failed to unmarshal GeneralName: unknown tag %d", v.Tag)
}
return rest, nil
}
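// exampleParseSAN is an illustrative sketch, not part of the upstream file:
// decoding the raw value of a SubjectAltName-style extension into the
// GeneralNames collection defined above.
func exampleParseSAN(extnValue []byte) (GeneralNames, error) {
	var gn GeneralNames
	if err := parseGeneralNames(extnValue, &gn); err != nil {
		return GeneralNames{}, err
	}
	return gn, nil
}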

View file

@ -0,0 +1,26 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build cgo,!arm,!arm64,!ios,!go1.10
package x509
/*
#cgo CFLAGS: -mmacosx-version-min=10.6 -D__MAC_OS_X_VERSION_MAX_ALLOWED=1080
#cgo LDFLAGS: -framework CoreFoundation -framework Security
#include <CoreFoundation/CoreFoundation.h>
*/
import "C"
// For Go versions before 1.10, nil values for Apple's CoreFoundation
// CF*Ref types were represented by nil. See:
// https://github.com/golang/go/commit/b868616b63a8
func setNilCFRef(v *C.CFDataRef) {
*v = nil
}
func isNilCFRef(v C.CFDataRef) bool {
return v == nil
}

View file

@ -0,0 +1,26 @@
// Copyright 2018 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build cgo,!arm,!arm64,!ios,go1.10
package x509
/*
#cgo CFLAGS: -mmacosx-version-min=10.6 -D__MAC_OS_X_VERSION_MAX_ALLOWED=1080
#cgo LDFLAGS: -framework CoreFoundation -framework Security
#include <CoreFoundation/CoreFoundation.h>
*/
import "C"
// For Go versions >= 1.10, nil values for Apple's CoreFoundation
// CF*Ref types are represented by zero. See:
// https://github.com/golang/go/commit/b868616b63a8
func setNilCFRef(v *C.CFDataRef) {
*v = 0
}
func isNilCFRef(v C.CFDataRef) bool {
return v == 0
}

View file

@ -42,7 +42,7 @@ type rfc1423Algo struct {
}
// rfc1423Algos holds a slice of the possible ways to encrypt a PEM
// block. The ivSize numbers were taken from the OpenSSL source.
// block. The ivSize numbers were taken from the OpenSSL source.
var rfc1423Algos = []rfc1423Algo{{
cipher: PEMCipherDES,
name: "DES-CBC",
@ -108,7 +108,10 @@ var IncorrectPasswordError = errors.New("x509: decryption password incorrect")
// encrypt it and returns a slice of decrypted DER encoded bytes. It inspects
// the DEK-Info header to determine the algorithm used for decryption. If no
// DEK-Info header is present, an error is returned. If an incorrect password
// is detected an IncorrectPasswordError is returned.
// is detected an IncorrectPasswordError is returned. Because of deficiencies
// in the encrypted-PEM format, it's not always possible to detect an incorrect
// password. In these cases no error will be returned but the decrypted DER
// bytes will be random noise.
func DecryptPEMBlock(b *pem.Block, password []byte) ([]byte, error) {
dek, ok := b.Headers["DEK-Info"]
if !ok {
@ -141,6 +144,10 @@ func DecryptPEMBlock(b *pem.Block, password []byte) ([]byte, error) {
return nil, err
}
if len(b.Bytes)%block.BlockSize() != 0 {
return nil, errors.New("x509: encrypted PEM data is not a multiple of the block size")
}
data := make([]byte, len(b.Bytes))
dec := cipher.NewCBCDecrypter(block, iv)
dec.CryptBlocks(data, b.Bytes)

View file

@ -6,11 +6,10 @@ package x509
import (
"crypto/rsa"
// START CT CHANGES
"github.com/google/certificate-transparency/go/asn1"
// END CT CHANGES
"errors"
"math/big"
"github.com/google/certificate-transparency-go/asn1"
)
// pkcs1PrivateKey is a structure which mirrors the PKCS#1 ASN.1 for an RSA private key.
@ -37,16 +36,21 @@ type pkcs1AdditionalRSAPrime struct {
Coeff *big.Int
}
// pkcs1PublicKey reflects the ASN.1 structure of a PKCS#1 public key.
type pkcs1PublicKey struct {
N *big.Int
E int
}
// ParsePKCS1PrivateKey returns an RSA private key from its ASN.1 PKCS#1 DER encoded form.
func ParsePKCS1PrivateKey(der []byte) (key *rsa.PrivateKey, err error) {
func ParsePKCS1PrivateKey(der []byte) (*rsa.PrivateKey, error) {
var priv pkcs1PrivateKey
rest, err := asn1.Unmarshal(der, &priv)
if len(rest) > 0 {
err = asn1.SyntaxError{Msg: "trailing data"}
return
return nil, asn1.SyntaxError{Msg: "trailing data"}
}
if err != nil {
return
return nil, err
}
if priv.Version > 1 {
@ -57,7 +61,7 @@ func ParsePKCS1PrivateKey(der []byte) (key *rsa.PrivateKey, err error) {
return nil, errors.New("x509: private key contains zero or negative value")
}
key = new(rsa.PrivateKey)
key := new(rsa.PrivateKey)
key.PublicKey = rsa.PublicKey{
E: priv.E,
N: priv.N,
@ -82,7 +86,7 @@ func ParsePKCS1PrivateKey(der []byte) (key *rsa.PrivateKey, err error) {
}
key.Precompute()
return
return key, nil
}
// MarshalPKCS1PrivateKey converts a private key to ASN.1 DER encoded form.
@ -117,8 +121,35 @@ func MarshalPKCS1PrivateKey(key *rsa.PrivateKey) []byte {
return b
}
// rsaPublicKey reflects the ASN.1 structure of a PKCS#1 public key.
type rsaPublicKey struct {
N *big.Int
E int
// ParsePKCS1PublicKey parses a PKCS#1 public key in ASN.1 DER form.
func ParsePKCS1PublicKey(der []byte) (*rsa.PublicKey, error) {
var pub pkcs1PublicKey
rest, err := asn1.Unmarshal(der, &pub)
if err != nil {
return nil, err
}
if len(rest) > 0 {
return nil, asn1.SyntaxError{Msg: "trailing data"}
}
if pub.N.Sign() <= 0 || pub.E <= 0 {
return nil, errors.New("x509: public key contains zero or negative value")
}
if pub.E > 1<<31-1 {
return nil, errors.New("x509: public key contains large public exponent")
}
return &rsa.PublicKey{
E: pub.E,
N: pub.N,
}, nil
}
// MarshalPKCS1PublicKey converts an RSA public key to PKCS#1, ASN.1 DER form.
func MarshalPKCS1PublicKey(key *rsa.PublicKey) []byte {
derBytes, _ := asn1.Marshal(pkcs1PublicKey{
N: key.N,
E: key.E,
})
return derBytes
}
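The hunk above adds `ParsePKCS1PublicKey`/`MarshalPKCS1PublicKey`, mirroring the Go 1.10 standard library. A round-trip sketch, assuming the vendored import path `github.com/google/certificate-transparency-go/x509`; illustrative only, not part of the vendored code:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
	"log"

	"github.com/google/certificate-transparency-go/x509"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}

	// Marshal the public half to PKCS#1 DER and parse it back.
	der := x509.MarshalPKCS1PublicKey(&priv.PublicKey)
	pub, err := x509.ParsePKCS1PublicKey(der)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("round-tripped exponent:", pub.E)
}
```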

View file

@ -0,0 +1,102 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package x509
import (
"crypto/ecdsa"
"crypto/rsa"
"errors"
"fmt"
"github.com/google/certificate-transparency-go/asn1"
"github.com/google/certificate-transparency-go/x509/pkix"
)
// pkcs8 reflects an ASN.1, PKCS#8 PrivateKey. See
// ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-8/pkcs-8v1_2.asn
// and RFC 5208.
type pkcs8 struct {
Version int
Algo pkix.AlgorithmIdentifier
PrivateKey []byte
// optional attributes omitted.
}
// ParsePKCS8PrivateKey parses an unencrypted, PKCS#8 private key.
// See RFC 5208.
func ParsePKCS8PrivateKey(der []byte) (key interface{}, err error) {
var privKey pkcs8
if _, err := asn1.Unmarshal(der, &privKey); err != nil {
return nil, err
}
switch {
case privKey.Algo.Algorithm.Equal(OIDPublicKeyRSA):
key, err = ParsePKCS1PrivateKey(privKey.PrivateKey)
if err != nil {
return nil, errors.New("x509: failed to parse RSA private key embedded in PKCS#8: " + err.Error())
}
return key, nil
case privKey.Algo.Algorithm.Equal(OIDPublicKeyECDSA):
bytes := privKey.Algo.Parameters.FullBytes
namedCurveOID := new(asn1.ObjectIdentifier)
if _, err := asn1.Unmarshal(bytes, namedCurveOID); err != nil {
namedCurveOID = nil
}
key, err = parseECPrivateKey(namedCurveOID, privKey.PrivateKey)
if err != nil {
return nil, errors.New("x509: failed to parse EC private key embedded in PKCS#8: " + err.Error())
}
return key, nil
default:
return nil, fmt.Errorf("x509: PKCS#8 wrapping contained private key with unknown algorithm: %v", privKey.Algo.Algorithm)
}
}
// MarshalPKCS8PrivateKey converts a private key to PKCS#8 encoded form.
// The following key types are supported: *rsa.PrivateKey, *ecdsa.PrivateKey.
// Unsupported key types result in an error.
//
// See RFC 5208.
func MarshalPKCS8PrivateKey(key interface{}) ([]byte, error) {
var privKey pkcs8
switch k := key.(type) {
case *rsa.PrivateKey:
privKey.Algo = pkix.AlgorithmIdentifier{
Algorithm: OIDPublicKeyRSA,
Parameters: asn1.NullRawValue,
}
privKey.PrivateKey = MarshalPKCS1PrivateKey(k)
case *ecdsa.PrivateKey:
oid, ok := OIDFromNamedCurve(k.Curve)
if !ok {
return nil, errors.New("x509: unknown curve while marshalling to PKCS#8")
}
oidBytes, err := asn1.Marshal(oid)
if err != nil {
return nil, errors.New("x509: failed to marshal curve OID: " + err.Error())
}
privKey.Algo = pkix.AlgorithmIdentifier{
Algorithm: OIDPublicKeyECDSA,
Parameters: asn1.RawValue{
FullBytes: oidBytes,
},
}
if privKey.PrivateKey, err = marshalECPrivateKeyWithOID(k, nil); err != nil {
return nil, errors.New("x509: failed to marshal EC private key while building PKCS#8: " + err.Error())
}
default:
return nil, fmt.Errorf("x509: unknown key type while marshalling PKCS#8: %T", key)
}
return asn1.Marshal(privKey)
}
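A short sketch of the PKCS#8 helpers added in this file, wrapping and unwrapping an ECDSA key. It assumes the vendored import path `github.com/google/certificate-transparency-go/x509` and is illustrative only:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"
	"log"

	"github.com/google/certificate-transparency-go/x509"
)

func main() {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	// Wrap the EC key in a PKCS#8 envelope and unwrap it again.
	der, err := x509.MarshalPKCS8PrivateKey(priv)
	if err != nil {
		log.Fatal(err)
	}
	key, err := x509.ParsePKCS8PrivateKey(der)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("parsed key type: %T\n", key) // *ecdsa.PrivateKey
}
```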

View file

@ -0,0 +1,288 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package pkix contains shared, low level structures used for ASN.1 parsing
// and serialization of X.509 certificates, CRL and OCSP.
package pkix
import (
// START CT CHANGES
"encoding/hex"
"fmt"
"github.com/google/certificate-transparency-go/asn1"
// END CT CHANGES
"math/big"
"time"
)
// AlgorithmIdentifier represents the ASN.1 structure of the same name. See RFC
// 5280, section 4.1.1.2.
type AlgorithmIdentifier struct {
Algorithm asn1.ObjectIdentifier
Parameters asn1.RawValue `asn1:"optional"`
}
type RDNSequence []RelativeDistinguishedNameSET
var attributeTypeNames = map[string]string{
"2.5.4.6": "C",
"2.5.4.10": "O",
"2.5.4.11": "OU",
"2.5.4.3": "CN",
"2.5.4.5": "SERIALNUMBER",
"2.5.4.7": "L",
"2.5.4.8": "ST",
"2.5.4.9": "STREET",
"2.5.4.17": "POSTALCODE",
}
// String returns a string representation of the sequence r,
// roughly following the RFC 2253 Distinguished Names syntax.
func (r RDNSequence) String() string {
s := ""
for i := 0; i < len(r); i++ {
rdn := r[len(r)-1-i]
if i > 0 {
s += ","
}
for j, tv := range rdn {
if j > 0 {
s += "+"
}
oidString := tv.Type.String()
typeName, ok := attributeTypeNames[oidString]
if !ok {
derBytes, err := asn1.Marshal(tv.Value)
if err == nil {
s += oidString + "=#" + hex.EncodeToString(derBytes)
continue // No value escaping necessary.
}
typeName = oidString
}
valueString := fmt.Sprint(tv.Value)
escaped := make([]rune, 0, len(valueString))
for k, c := range valueString {
escape := false
switch c {
case ',', '+', '"', '\\', '<', '>', ';':
escape = true
case ' ':
escape = k == 0 || k == len(valueString)-1
case '#':
escape = k == 0
}
if escape {
escaped = append(escaped, '\\', c)
} else {
escaped = append(escaped, c)
}
}
s += typeName + "=" + string(escaped)
}
}
return s
}
type RelativeDistinguishedNameSET []AttributeTypeAndValue
// AttributeTypeAndValue mirrors the ASN.1 structure of the same name in
// http://tools.ietf.org/html/rfc5280#section-4.1.2.4
type AttributeTypeAndValue struct {
Type asn1.ObjectIdentifier
Value interface{}
}
// AttributeTypeAndValueSET represents a set of ASN.1 sequences of
// AttributeTypeAndValue sequences from RFC 2986 (PKCS #10).
type AttributeTypeAndValueSET struct {
Type asn1.ObjectIdentifier
Value [][]AttributeTypeAndValue `asn1:"set"`
}
// Extension represents the ASN.1 structure of the same name. See RFC
// 5280, section 4.2.
type Extension struct {
Id asn1.ObjectIdentifier
Critical bool `asn1:"optional"`
Value []byte
}
// Name represents an X.509 distinguished name. This only includes the common
// elements of a DN. When parsing, all elements are stored in Names and
// non-standard elements can be extracted from there. When marshaling, elements
// in ExtraNames are appended and override other values with the same OID.
type Name struct {
Country, Organization, OrganizationalUnit []string
Locality, Province []string
StreetAddress, PostalCode []string
SerialNumber, CommonName string
Names []AttributeTypeAndValue
ExtraNames []AttributeTypeAndValue
}
func (n *Name) FillFromRDNSequence(rdns *RDNSequence) {
for _, rdn := range *rdns {
if len(rdn) == 0 {
continue
}
for _, atv := range rdn {
n.Names = append(n.Names, atv)
value, ok := atv.Value.(string)
if !ok {
continue
}
t := atv.Type
if len(t) == 4 && t[0] == OIDAttribute[0] && t[1] == OIDAttribute[1] && t[2] == OIDAttribute[2] {
switch t[3] {
case OIDCommonName[3]:
n.CommonName = value
case OIDSerialNumber[3]:
n.SerialNumber = value
case OIDCountry[3]:
n.Country = append(n.Country, value)
case OIDLocality[3]:
n.Locality = append(n.Locality, value)
case OIDProvince[3]:
n.Province = append(n.Province, value)
case OIDStreetAddress[3]:
n.StreetAddress = append(n.StreetAddress, value)
case OIDOrganization[3]:
n.Organization = append(n.Organization, value)
case OIDOrganizationalUnit[3]:
n.OrganizationalUnit = append(n.OrganizationalUnit, value)
case OIDPostalCode[3]:
n.PostalCode = append(n.PostalCode, value)
}
}
}
}
}
var (
OIDAttribute = asn1.ObjectIdentifier{2, 5, 4}
OIDCountry = asn1.ObjectIdentifier{2, 5, 4, 6}
OIDOrganization = asn1.ObjectIdentifier{2, 5, 4, 10}
OIDOrganizationalUnit = asn1.ObjectIdentifier{2, 5, 4, 11}
OIDCommonName = asn1.ObjectIdentifier{2, 5, 4, 3}
OIDSerialNumber = asn1.ObjectIdentifier{2, 5, 4, 5}
OIDLocality = asn1.ObjectIdentifier{2, 5, 4, 7}
OIDProvince = asn1.ObjectIdentifier{2, 5, 4, 8}
OIDStreetAddress = asn1.ObjectIdentifier{2, 5, 4, 9}
OIDPostalCode = asn1.ObjectIdentifier{2, 5, 4, 17}
OIDPseudonym = asn1.ObjectIdentifier{2, 5, 4, 65}
OIDTitle = asn1.ObjectIdentifier{2, 5, 4, 12}
OIDDnQualifier = asn1.ObjectIdentifier{2, 5, 4, 46}
OIDName = asn1.ObjectIdentifier{2, 5, 4, 41}
OIDSurname = asn1.ObjectIdentifier{2, 5, 4, 4}
OIDGivenName = asn1.ObjectIdentifier{2, 5, 4, 42}
OIDInitials = asn1.ObjectIdentifier{2, 5, 4, 43}
OIDGenerationQualifier = asn1.ObjectIdentifier{2, 5, 4, 44}
)
// appendRDNs appends a relativeDistinguishedNameSET to the given RDNSequence
// and returns the new value. The relativeDistinguishedNameSET contains an
// attributeTypeAndValue for each of the given values. See RFC 5280, A.1, and
// search for AttributeTypeAndValue.
func (n Name) appendRDNs(in RDNSequence, values []string, oid asn1.ObjectIdentifier) RDNSequence {
if len(values) == 0 || oidInAttributeTypeAndValue(oid, n.ExtraNames) {
return in
}
s := make([]AttributeTypeAndValue, len(values))
for i, value := range values {
s[i].Type = oid
s[i].Value = value
}
return append(in, s)
}
func (n Name) ToRDNSequence() (ret RDNSequence) {
ret = n.appendRDNs(ret, n.Country, OIDCountry)
ret = n.appendRDNs(ret, n.Province, OIDProvince)
ret = n.appendRDNs(ret, n.Locality, OIDLocality)
ret = n.appendRDNs(ret, n.StreetAddress, OIDStreetAddress)
ret = n.appendRDNs(ret, n.PostalCode, OIDPostalCode)
ret = n.appendRDNs(ret, n.Organization, OIDOrganization)
ret = n.appendRDNs(ret, n.OrganizationalUnit, OIDOrganizationalUnit)
if len(n.CommonName) > 0 {
ret = n.appendRDNs(ret, []string{n.CommonName}, OIDCommonName)
}
if len(n.SerialNumber) > 0 {
ret = n.appendRDNs(ret, []string{n.SerialNumber}, OIDSerialNumber)
}
for _, atv := range n.ExtraNames {
ret = append(ret, []AttributeTypeAndValue{atv})
}
return ret
}
// String returns the string form of n, roughly following
// the RFC 2253 Distinguished Names syntax.
func (n Name) String() string {
return n.ToRDNSequence().String()
}
// oidInAttributeTypeAndValue returns whether a type with the given OID exists
// in atv.
func oidInAttributeTypeAndValue(oid asn1.ObjectIdentifier, atv []AttributeTypeAndValue) bool {
for _, a := range atv {
if a.Type.Equal(oid) {
return true
}
}
return false
}
// CertificateList represents the ASN.1 structure of the same name. See RFC
// 5280, section 5.1. Use Certificate.CheckCRLSignature to verify the
// signature.
type CertificateList struct {
TBSCertList TBSCertificateList
SignatureAlgorithm AlgorithmIdentifier
SignatureValue asn1.BitString
}
// HasExpired reports whether certList should have been updated by now.
func (certList *CertificateList) HasExpired(now time.Time) bool {
return !now.Before(certList.TBSCertList.NextUpdate)
}
// TBSCertificateList represents the ASN.1 structure TBSCertList. See RFC
// 5280, section 5.1.
type TBSCertificateList struct {
Raw asn1.RawContent
Version int `asn1:"optional,default:0"`
Signature AlgorithmIdentifier
Issuer RDNSequence
ThisUpdate time.Time
NextUpdate time.Time `asn1:"optional"`
RevokedCertificates []RevokedCertificate `asn1:"optional"`
Extensions []Extension `asn1:"tag:0,optional,explicit"`
}
// RevokedCertificate represents the unnamed ASN.1 structure that makes up the
// revokedCertificates member of the TBSCertList structure. See RFC
// 5280, section 5.1.
type RevokedCertificate struct {
SerialNumber *big.Int
RevocationTime time.Time
Extensions []Extension `asn1:"optional"`
}
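For reviewers, a small illustration of how the `Name`/`RDNSequence` types above render a distinguished name; it assumes the vendored import path `github.com/google/certificate-transparency-go/x509/pkix`:

```go
package main

import (
	"fmt"

	"github.com/google/certificate-transparency-go/x509/pkix"
)

func main() {
	name := pkix.Name{
		Country:      []string{"US"},
		Organization: []string{"Example Corp"},
		CommonName:   "example.com",
	}

	// String() marshals via ToRDNSequence and escapes special characters
	// roughly as RFC 2253 requires; RDNs are printed in reverse order.
	fmt.Println(name.String()) // CN=example.com,O=Example Corp,C=US
}
```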

View file

@ -0,0 +1,362 @@
// Copyright 2017 Google Inc. All Rights Reserved.
//
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package x509
import (
"bytes"
"encoding/pem"
"time"
"github.com/google/certificate-transparency-go/asn1"
"github.com/google/certificate-transparency-go/x509/pkix"
)
var (
// OID values for CRL extensions (TBSCertList.Extensions), RFC 5280 s5.2.
OIDExtensionCRLNumber = asn1.ObjectIdentifier{2, 5, 29, 20}
OIDExtensionDeltaCRLIndicator = asn1.ObjectIdentifier{2, 5, 29, 27}
OIDExtensionIssuingDistributionPoint = asn1.ObjectIdentifier{2, 5, 29, 28}
// OID values for CRL entry extensions (RevokedCertificate.Extensions), RFC 5280 s5.3
OIDExtensionCRLReasons = asn1.ObjectIdentifier{2, 5, 29, 21}
OIDExtensionInvalidityDate = asn1.ObjectIdentifier{2, 5, 29, 24}
OIDExtensionCertificateIssuer = asn1.ObjectIdentifier{2, 5, 29, 29}
)
// RevocationReasonCode represents the reason for a certificate revocation; see RFC 5280 s5.3.1.
type RevocationReasonCode asn1.Enumerated
// RevocationReasonCode values.
var (
Unspecified = RevocationReasonCode(0)
KeyCompromise = RevocationReasonCode(1)
CACompromise = RevocationReasonCode(2)
AffiliationChanged = RevocationReasonCode(3)
Superseded = RevocationReasonCode(4)
CessationOfOperation = RevocationReasonCode(5)
CertificateHold = RevocationReasonCode(6)
RemoveFromCRL = RevocationReasonCode(8)
PrivilegeWithdrawn = RevocationReasonCode(9)
AACompromise = RevocationReasonCode(10)
)
// ReasonFlag holds a bitmask of applicable revocation reasons, from RFC 5280 s4.2.1.13
type ReasonFlag int
// ReasonFlag values.
const (
UnusedFlag ReasonFlag = 1 << iota
KeyCompromiseFlag
CACompromiseFlag
AffiliationChangedFlag
SupersededFlag
CessationOfOperationFlag
CertificateHoldFlag
PrivilegeWithdrawnFlag
AACompromiseFlag
)
// CertificateList represents the ASN.1 structure of the same name from RFC 5280, s5.1.
// It has the same content as pkix.CertificateList, but the contents include parsed versions
// of any extensions.
type CertificateList struct {
Raw asn1.RawContent
TBSCertList TBSCertList
SignatureAlgorithm pkix.AlgorithmIdentifier
SignatureValue asn1.BitString
}
// ExpiredAt reports whether now is past the expiry time of certList.
func (certList *CertificateList) ExpiredAt(now time.Time) bool {
return now.After(certList.TBSCertList.NextUpdate)
}
// Indication of whether extensions need to be critical or non-critical. Extensions that
// can be either are omitted from the map.
var listExtCritical = map[string]bool{
// From RFC 5280...
OIDExtensionAuthorityKeyId.String(): false, // s5.2.1
OIDExtensionIssuerAltName.String(): false, // s5.2.2
OIDExtensionCRLNumber.String(): false, // s5.2.3
OIDExtensionDeltaCRLIndicator.String(): true, // s5.2.4
OIDExtensionIssuingDistributionPoint.String(): true, // s5.2.5
OIDExtensionFreshestCRL.String(): false, // s5.2.6
OIDExtensionAuthorityInfoAccess.String(): false, // s5.2.7
}
var certExtCritical = map[string]bool{
// From RFC 5280...
OIDExtensionCRLReasons.String(): false, // s5.3.1
OIDExtensionInvalidityDate.String(): false, // s5.3.2
OIDExtensionCertificateIssuer.String(): true, // s5.3.3
}
// IssuingDistributionPoint represents the ASN.1 structure of the same
// name
type IssuingDistributionPoint struct {
DistributionPoint distributionPointName `asn1:"optional,tag:0"`
OnlyContainsUserCerts bool `asn1:"optional,tag:1"`
OnlyContainsCACerts bool `asn1:"optional,tag:2"`
OnlySomeReasons asn1.BitString `asn1:"optional,tag:3"`
IndirectCRL bool `asn1:"optional,tag:4"`
OnlyContainsAttributeCerts bool `asn1:"optional,tag:5"`
}
// TBSCertList represents the ASN.1 structure of the same name from RFC
// 5280, section 5.1. It has the same content as pkix.TBSCertificateList
// but the extensions are included in a parsed format.
type TBSCertList struct {
Raw asn1.RawContent
Version int
Signature pkix.AlgorithmIdentifier
Issuer pkix.RDNSequence
ThisUpdate time.Time
NextUpdate time.Time
RevokedCertificates []*RevokedCertificate
Extensions []pkix.Extension
// Cracked out extensions:
AuthorityKeyID []byte
IssuerAltNames GeneralNames
CRLNumber int
BaseCRLNumber int // -1 if no delta CRL present
IssuingDistributionPoint IssuingDistributionPoint
IssuingDPFullNames GeneralNames
FreshestCRLDistributionPoint []string
OCSPServer []string
IssuingCertificateURL []string
}
// ParseCertificateList parses a CertificateList (e.g. a CRL) from the given
// bytes. It's often the case that PEM encoded CRLs will appear where they
// should be DER encoded, so this function will transparently handle PEM
// encoding as long as there isn't any leading garbage.
func ParseCertificateList(clBytes []byte) (*CertificateList, error) {
if bytes.HasPrefix(clBytes, pemCRLPrefix) {
block, _ := pem.Decode(clBytes)
if block != nil && block.Type == pemType {
clBytes = block.Bytes
}
}
return ParseCertificateListDER(clBytes)
}
// ParseCertificateListDER parses a DER encoded CertificateList from the given bytes.
// For non-fatal errors, this function returns both an error and a CertificateList
// object.
func ParseCertificateListDER(derBytes []byte) (*CertificateList, error) {
var errs Errors
// First parse the DER into the pkix structures.
pkixList := new(pkix.CertificateList)
if rest, err := asn1.Unmarshal(derBytes, pkixList); err != nil {
errs.AddID(ErrInvalidCertList, err)
return nil, &errs
} else if len(rest) != 0 {
errs.AddID(ErrTrailingCertList)
return nil, &errs
}
// Transcribe the revoked certs but crack out extensions.
revokedCerts := make([]*RevokedCertificate, len(pkixList.TBSCertList.RevokedCertificates))
for i, pkixRevoked := range pkixList.TBSCertList.RevokedCertificates {
revokedCerts[i] = parseRevokedCertificate(pkixRevoked, &errs)
if revokedCerts[i] == nil {
return nil, &errs
}
}
certList := CertificateList{
Raw: derBytes,
TBSCertList: TBSCertList{
Raw: pkixList.TBSCertList.Raw,
Version: pkixList.TBSCertList.Version,
Signature: pkixList.TBSCertList.Signature,
Issuer: pkixList.TBSCertList.Issuer,
ThisUpdate: pkixList.TBSCertList.ThisUpdate,
NextUpdate: pkixList.TBSCertList.NextUpdate,
RevokedCertificates: revokedCerts,
Extensions: pkixList.TBSCertList.Extensions,
CRLNumber: -1,
BaseCRLNumber: -1,
},
SignatureAlgorithm: pkixList.SignatureAlgorithm,
SignatureValue: pkixList.SignatureValue,
}
// Now crack out extensions.
for _, e := range certList.TBSCertList.Extensions {
if expectCritical, present := listExtCritical[e.Id.String()]; present {
if e.Critical && !expectCritical {
errs.AddID(ErrUnexpectedlyCriticalCertListExtension, e.Id)
} else if !e.Critical && expectCritical {
errs.AddID(ErrUnexpectedlyNonCriticalCertListExtension, e.Id)
}
}
switch {
case e.Id.Equal(OIDExtensionAuthorityKeyId):
// RFC 5280 s5.2.1
var a authKeyId
if rest, err := asn1.Unmarshal(e.Value, &a); err != nil {
errs.AddID(ErrInvalidCertListAuthKeyID, err)
} else if len(rest) != 0 {
errs.AddID(ErrTrailingCertListAuthKeyID)
}
certList.TBSCertList.AuthorityKeyID = a.Id
case e.Id.Equal(OIDExtensionIssuerAltName):
// RFC 5280 s5.2.2
if err := parseGeneralNames(e.Value, &certList.TBSCertList.IssuerAltNames); err != nil {
errs.AddID(ErrInvalidCertListIssuerAltName, err)
}
case e.Id.Equal(OIDExtensionCRLNumber):
// RFC 5280 s5.2.3
if rest, err := asn1.Unmarshal(e.Value, &certList.TBSCertList.CRLNumber); err != nil {
errs.AddID(ErrInvalidCertListCRLNumber, err)
} else if len(rest) != 0 {
errs.AddID(ErrTrailingCertListCRLNumber)
}
if certList.TBSCertList.CRLNumber < 0 {
errs.AddID(ErrNegativeCertListCRLNumber, certList.TBSCertList.CRLNumber)
}
case e.Id.Equal(OIDExtensionDeltaCRLIndicator):
// RFC 5280 s5.2.4
if rest, err := asn1.Unmarshal(e.Value, &certList.TBSCertList.BaseCRLNumber); err != nil {
errs.AddID(ErrInvalidCertListDeltaCRL, err)
} else if len(rest) != 0 {
errs.AddID(ErrTrailingCertListDeltaCRL)
}
if certList.TBSCertList.BaseCRLNumber < 0 {
errs.AddID(ErrNegativeCertListDeltaCRL, certList.TBSCertList.BaseCRLNumber)
}
case e.Id.Equal(OIDExtensionIssuingDistributionPoint):
parseIssuingDistributionPoint(e.Value, &certList.TBSCertList.IssuingDistributionPoint, &certList.TBSCertList.IssuingDPFullNames, &errs)
case e.Id.Equal(OIDExtensionFreshestCRL):
// RFC 5280 s5.2.6
if err := parseDistributionPoints(e.Value, &certList.TBSCertList.FreshestCRLDistributionPoint); err != nil {
errs.AddID(ErrInvalidCertListFreshestCRL, err)
return nil, err
}
case e.Id.Equal(OIDExtensionAuthorityInfoAccess):
// RFC 5280 s5.2.7
var aia []authorityInfoAccess
if rest, err := asn1.Unmarshal(e.Value, &aia); err != nil {
errs.AddID(ErrInvalidCertListAuthInfoAccess, err)
} else if len(rest) != 0 {
errs.AddID(ErrTrailingCertListAuthInfoAccess)
}
for _, v := range aia {
// GeneralName: uniformResourceIdentifier [6] IA5String
if v.Location.Tag != tagURI {
continue
}
switch {
case v.Method.Equal(OIDAuthorityInfoAccessOCSP):
certList.TBSCertList.OCSPServer = append(certList.TBSCertList.OCSPServer, string(v.Location.Bytes))
case v.Method.Equal(OIDAuthorityInfoAccessIssuers):
certList.TBSCertList.IssuingCertificateURL = append(certList.TBSCertList.IssuingCertificateURL, string(v.Location.Bytes))
}
// TODO(drysdale): cope with more possibilities
}
default:
if e.Critical {
errs.AddID(ErrUnhandledCriticalCertListExtension, e.Id)
}
}
}
if errs.Fatal() {
return nil, &errs
}
if errs.Empty() {
return &certList, nil
}
return &certList, &errs
}
func parseIssuingDistributionPoint(data []byte, idp *IssuingDistributionPoint, name *GeneralNames, errs *Errors) {
// RFC 5280 s5.2.5
if rest, err := asn1.Unmarshal(data, idp); err != nil {
errs.AddID(ErrInvalidCertListIssuingDP, err)
} else if len(rest) != 0 {
errs.AddID(ErrTrailingCertListIssuingDP)
}
typeCount := 0
if idp.OnlyContainsUserCerts {
typeCount++
}
if idp.OnlyContainsCACerts {
typeCount++
}
if idp.OnlyContainsAttributeCerts {
typeCount++
}
if typeCount > 1 {
errs.AddID(ErrCertListIssuingDPMultipleTypes, idp.OnlyContainsUserCerts, idp.OnlyContainsCACerts, idp.OnlyContainsAttributeCerts)
}
for _, fn := range idp.DistributionPoint.FullName {
if _, err := parseGeneralName(fn.FullBytes, name, false); err != nil {
errs.AddID(ErrCertListIssuingDPInvalidFullName, err)
}
}
}
// RevokedCertificate represents the unnamed ASN.1 structure that makes up the
// revokedCertificates member of the TBSCertList structure from RFC 5280, s5.1.
// It has the same content as pkix.RevokedCertificate but the extensions are
// included in a parsed format.
type RevokedCertificate struct {
pkix.RevokedCertificate
// Cracked out extensions:
RevocationReason RevocationReasonCode
InvalidityDate time.Time
Issuer GeneralNames
}
func parseRevokedCertificate(pkixRevoked pkix.RevokedCertificate, errs *Errors) *RevokedCertificate {
result := RevokedCertificate{RevokedCertificate: pkixRevoked}
for _, e := range pkixRevoked.Extensions {
if expectCritical, present := certExtCritical[e.Id.String()]; present {
if e.Critical && !expectCritical {
errs.AddID(ErrUnexpectedlyCriticalRevokedCertExtension, e.Id)
} else if !e.Critical && expectCritical {
errs.AddID(ErrUnexpectedlyNonCriticalRevokedCertExtension, e.Id)
}
}
switch {
case e.Id.Equal(OIDExtensionCRLReasons):
// RFC 5280, s5.3.1
var reason asn1.Enumerated
if rest, err := asn1.Unmarshal(e.Value, &reason); err != nil {
errs.AddID(ErrInvalidRevocationReason, err)
} else if len(rest) != 0 {
errs.AddID(ErrTrailingRevocationReason)
}
result.RevocationReason = RevocationReasonCode(reason)
case e.Id.Equal(OIDExtensionInvalidityDate):
// RFC 5280, s5.3.2
if rest, err := asn1.Unmarshal(e.Value, &result.InvalidityDate); err != nil {
errs.AddID(ErrInvalidRevocationInvalidityDate, err)
} else if len(rest) != 0 {
errs.AddID(ErrTrailingRevocationInvalidityDate)
}
case e.Id.Equal(OIDExtensionCertificateIssuer):
// RFC 5280, s5.3.3
if err := parseGeneralNames(e.Value, &result.Issuer); err != nil {
errs.AddID(ErrInvalidRevocationIssuer, err)
}
default:
if e.Critical {
errs.AddID(ErrUnhandledCriticalRevokedCertExtension, e.Id)
}
}
}
return &result
}
// CheckCertificateListSignature checks that the signature in crl is from c.
func (c *Certificate) CheckCertificateListSignature(crl *CertificateList) error {
algo := SignatureAlgorithmFromAI(crl.SignatureAlgorithm)
return c.CheckSignature(algo, crl.TBSCertList.Raw, crl.SignatureValue.RightAlign())
}
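A hedged sketch of how the fatal vs. non-fatal error contract of `ParseCertificateList`/`ParseCertificateListDER` plays out for a caller: a nil list means a fatal parse error, while a non-nil list may still come with an `*Errors` value describing recoverable issues. The CRL file path and the vendored import path are assumptions:

```go
package main

import (
	"fmt"
	"io/ioutil"
	"log"

	"github.com/google/certificate-transparency-go/x509"
)

func main() {
	// Placeholder path; ParseCertificateList accepts PEM or DER input.
	crlBytes, err := ioutil.ReadFile("crl.pem")
	if err != nil {
		log.Fatal(err)
	}

	certList, err := x509.ParseCertificateList(crlBytes)
	if certList == nil {
		log.Fatal(err) // fatal: nothing usable was parsed
	}
	if err != nil {
		log.Printf("non-fatal parse issues: %v", err)
	}
	for _, rc := range certList.TBSCertList.RevokedCertificates {
		fmt.Printf("revoked serial %v (reason %d)\n", rc.SerialNumber, rc.RevocationReason)
	}
}
```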

View file

@ -7,11 +7,16 @@ package x509
import "sync"
var (
once sync.Once
systemRoots *CertPool
once sync.Once
systemRoots *CertPool
systemRootsErr error
)
func systemRootsPool() *CertPool {
once.Do(initSystemRoots)
return systemRoots
}
func initSystemRoots() {
systemRoots, systemRootsErr = loadSystemRoots()
}

View file

@ -0,0 +1,15 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build dragonfly freebsd netbsd openbsd
package x509
// Possible certificate files; stop after finding one.
var certFiles = []string{
"/usr/local/etc/ssl/cert.pem", // FreeBSD
"/etc/ssl/cert.pem", // OpenBSD
"/usr/local/share/certs/ca-root-nss.crt", // DragonFly
"/etc/openssl/certs/ca-certificates.crt", // NetBSD
}

View file

@ -0,0 +1,252 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build cgo,!arm,!arm64,!ios
package x509
/*
#cgo CFLAGS: -mmacosx-version-min=10.6 -D__MAC_OS_X_VERSION_MAX_ALLOWED=1080
#cgo LDFLAGS: -framework CoreFoundation -framework Security
#include <errno.h>
#include <sys/sysctl.h>
#include <CoreFoundation/CoreFoundation.h>
#include <Security/Security.h>
// FetchPEMRootsCTX509_MountainLion is the version of FetchPEMRoots from Go 1.6
// which still works on OS X 10.8 (Mountain Lion).
// It lacks support for admin & user cert domains.
// See golang.org/issue/16473
int FetchPEMRootsCTX509_MountainLion(CFDataRef *pemRoots) {
if (pemRoots == NULL) {
return -1;
}
CFArrayRef certs = NULL;
OSStatus err = SecTrustCopyAnchorCertificates(&certs);
if (err != noErr) {
return -1;
}
CFMutableDataRef combinedData = CFDataCreateMutable(kCFAllocatorDefault, 0);
int i, ncerts = CFArrayGetCount(certs);
for (i = 0; i < ncerts; i++) {
CFDataRef data = NULL;
SecCertificateRef cert = (SecCertificateRef)CFArrayGetValueAtIndex(certs, i);
if (cert == NULL) {
continue;
}
// Note: SecKeychainItemExport is deprecated as of 10.7 in favor of SecItemExport.
// Once we support weak imports via cgo we should prefer that, and fall back to this
// for older systems.
err = SecKeychainItemExport(cert, kSecFormatX509Cert, kSecItemPemArmour, NULL, &data);
if (err != noErr) {
continue;
}
if (data != NULL) {
CFDataAppendBytes(combinedData, CFDataGetBytePtr(data), CFDataGetLength(data));
CFRelease(data);
}
}
CFRelease(certs);
*pemRoots = combinedData;
return 0;
}
// useOldCodeCTX509 reports whether the running machine is OS X 10.8 Mountain Lion
// or older. We only support Mountain Lion and higher, but we'll at least try our
// best on older machines and continue to use the old code path.
//
// See golang.org/issue/16473
int useOldCodeCTX509() {
char str[256];
size_t size = sizeof(str);
memset(str, 0, size);
sysctlbyname("kern.osrelease", str, &size, NULL, 0);
// OS X 10.8 is osrelease "12.*", 10.7 is 11.*, 10.6 is 10.*.
// We never supported things before that.
return memcmp(str, "12.", 3) == 0 || memcmp(str, "11.", 3) == 0 || memcmp(str, "10.", 3) == 0;
}
// FetchPEMRootsCTX509 fetches the system's list of trusted X.509 root certificates.
//
// On success it returns 0 and fills pemRoots with a CFDataRef that contains the extracted root
// certificates of the system. On failure, the function returns -1.
// Additionally, it fills untrustedPemRoots with certs that must be removed from pemRoots.
//
// Note: The CFDataRef returned in pemRoots and untrustedPemRoots must
// be released (using CFRelease) after we've consumed its content.
int FetchPEMRootsCTX509(CFDataRef *pemRoots, CFDataRef *untrustedPemRoots) {
if (useOldCodeCTX509()) {
return FetchPEMRootsCTX509_MountainLion(pemRoots);
}
// Get certificates from all domains, not just System, this lets
// the user add CAs to their "login" keychain, and Admins to add
// to the "System" keychain
SecTrustSettingsDomain domains[] = { kSecTrustSettingsDomainSystem,
kSecTrustSettingsDomainAdmin,
kSecTrustSettingsDomainUser };
int numDomains = sizeof(domains)/sizeof(SecTrustSettingsDomain);
if (pemRoots == NULL) {
return -1;
}
// kSecTrustSettingsResult is defined as CFSTR("kSecTrustSettingsResult"),
// but the Go linker's internal linking mode can't handle CFSTR relocations.
// Create our own dynamic string instead and release it below.
CFStringRef policy = CFStringCreateWithCString(NULL, "kSecTrustSettingsResult", kCFStringEncodingUTF8);
CFMutableDataRef combinedData = CFDataCreateMutable(kCFAllocatorDefault, 0);
CFMutableDataRef combinedUntrustedData = CFDataCreateMutable(kCFAllocatorDefault, 0);
for (int i = 0; i < numDomains; i++) {
CFArrayRef certs = NULL;
OSStatus err = SecTrustSettingsCopyCertificates(domains[i], &certs);
if (err != noErr) {
continue;
}
CFIndex numCerts = CFArrayGetCount(certs);
for (int j = 0; j < numCerts; j++) {
CFDataRef data = NULL;
CFErrorRef errRef = NULL;
CFArrayRef trustSettings = NULL;
SecCertificateRef cert = (SecCertificateRef)CFArrayGetValueAtIndex(certs, j);
if (cert == NULL) {
continue;
}
// We only want trusted certs.
int untrusted = 0;
int trustAsRoot = 0;
int trustRoot = 0;
if (i == 0) {
trustAsRoot = 1;
} else {
// Certs found in the system domain are always trusted. If the user
// configures "Never Trust" on such a cert, it will also be found in the
// admin or user domain, causing it to be added to untrustedPemRoots. The
// Go code will then clean this up.
// Trust may be stored in any of the domains. According to Apple's
// SecTrustServer.c, "user trust settings overrule admin trust settings",
// so take the last trust settings array we find.
// Skip the system domain since it is always trusted.
for (int k = i; k < numDomains; k++) {
CFArrayRef domainTrustSettings = NULL;
err = SecTrustSettingsCopyTrustSettings(cert, domains[k], &domainTrustSettings);
if (err == errSecSuccess && domainTrustSettings != NULL) {
if (trustSettings) {
CFRelease(trustSettings);
}
trustSettings = domainTrustSettings;
}
}
if (trustSettings == NULL) {
// "this certificate must be verified to a known trusted certificate"; aka not a root.
continue;
}
for (CFIndex k = 0; k < CFArrayGetCount(trustSettings); k++) {
CFNumberRef cfNum;
CFDictionaryRef tSetting = (CFDictionaryRef)CFArrayGetValueAtIndex(trustSettings, k);
if (CFDictionaryGetValueIfPresent(tSetting, policy, (const void**)&cfNum)){
SInt32 result = 0;
CFNumberGetValue(cfNum, kCFNumberSInt32Type, &result);
// TODO: The rest of the dictionary specifies conditions for evaluation.
if (result == kSecTrustSettingsResultDeny) {
untrusted = 1;
} else if (result == kSecTrustSettingsResultTrustAsRoot) {
trustAsRoot = 1;
} else if (result == kSecTrustSettingsResultTrustRoot) {
trustRoot = 1;
}
}
}
CFRelease(trustSettings);
}
if (trustRoot) {
// We only want to add Root CAs, so make sure Subject and Issuer Name match
CFDataRef subjectName = SecCertificateCopyNormalizedSubjectContent(cert, &errRef);
if (errRef != NULL) {
CFRelease(errRef);
continue;
}
CFDataRef issuerName = SecCertificateCopyNormalizedIssuerContent(cert, &errRef);
if (errRef != NULL) {
CFRelease(subjectName);
CFRelease(errRef);
continue;
}
Boolean equal = CFEqual(subjectName, issuerName);
CFRelease(subjectName);
CFRelease(issuerName);
if (!equal) {
continue;
}
}
// Note: SecKeychainItemExport is deprecated as of 10.7 in favor of SecItemExport.
// Once we support weak imports via cgo we should prefer that, and fall back to this
// for older systems.
err = SecKeychainItemExport(cert, kSecFormatX509Cert, kSecItemPemArmour, NULL, &data);
if (err != noErr) {
continue;
}
if (data != NULL) {
if (!trustRoot && !trustAsRoot) {
untrusted = 1;
}
CFMutableDataRef appendTo = untrusted ? combinedUntrustedData : combinedData;
CFDataAppendBytes(appendTo, CFDataGetBytePtr(data), CFDataGetLength(data));
CFRelease(data);
}
}
CFRelease(certs);
}
CFRelease(policy);
*pemRoots = combinedData;
*untrustedPemRoots = combinedUntrustedData;
return 0;
}
*/
import "C"
import (
"errors"
"unsafe"
)
func loadSystemRoots() (*CertPool, error) {
roots := NewCertPool()
var data C.CFDataRef
setNilCFRef(&data)
var untrustedData C.CFDataRef
setNilCFRef(&untrustedData)
err := C.FetchPEMRootsCTX509(&data, &untrustedData)
if err == -1 {
// TODO: better error message
return nil, errors.New("crypto/x509: failed to load darwin system roots with cgo")
}
defer C.CFRelease(C.CFTypeRef(data))
buf := C.GoBytes(unsafe.Pointer(C.CFDataGetBytePtr(data)), C.int(C.CFDataGetLength(data)))
roots.AppendCertsFromPEM(buf)
if isNilCFRef(untrustedData) {
return roots, nil
}
defer C.CFRelease(C.CFTypeRef(untrustedData))
buf = C.GoBytes(unsafe.Pointer(C.CFDataGetBytePtr(untrustedData)), C.int(C.CFDataGetLength(untrustedData)))
untrustedRoots := NewCertPool()
untrustedRoots.AppendCertsFromPEM(buf)
trustedRoots := NewCertPool()
for _, c := range roots.certs {
if !untrustedRoots.contains(c) {
trustedRoots.AddCert(c)
}
}
return trustedRoots, nil
}

View file

@ -0,0 +1,264 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
//go:generate go run root_darwin_arm_gen.go -output root_darwin_armx.go
package x509
import (
"bufio"
"bytes"
"crypto/sha1"
"encoding/pem"
"fmt"
"io"
"io/ioutil"
"os"
"os/exec"
"os/user"
"path/filepath"
"strings"
"sync"
)
var debugExecDarwinRoots = strings.Contains(os.Getenv("GODEBUG"), "x509roots=1")
func (c *Certificate) systemVerify(opts *VerifyOptions) (chains [][]*Certificate, err error) {
return nil, nil
}
// This code is only used when compiling without cgo.
// It is here, instead of root_nocgo_darwin.go, so that tests can check it
// even if the tests are run with cgo enabled.
// The linker will not include these unused functions in binaries built with cgo enabled.
// execSecurityRoots finds the macOS list of trusted root certificates
// using only command-line tools. This is our fallback path when cgo isn't available.
//
// The strategy is as follows:
//
// 1. Run "security trust-settings-export" and "security
// trust-settings-export -d" to discover the set of certs with some
// user-tweaked trust policy. We're too lazy to parse the XML (at
// least at this stage of Go 1.8) to understand what the trust
// policy actually is. We just learn that there is _some_ policy.
//
// 2. Run "security find-certificate" to dump the list of system root
// CAs in PEM format.
//
// 3. For each dumped cert, conditionally verify it with "security
// verify-cert" if that cert was in the set discovered in Step 1.
// Without the Step 1 optimization, running "security verify-cert"
// 150-200 times takes 3.5 seconds. With the optimization, the
// whole process takes about 180 milliseconds with 1 untrusted root
// CA. (Compared to 110ms in the cgo path)
func execSecurityRoots() (*CertPool, error) {
hasPolicy, err := getCertsWithTrustPolicy()
if err != nil {
return nil, err
}
if debugExecDarwinRoots {
println(fmt.Sprintf("crypto/x509: %d certs have a trust policy", len(hasPolicy)))
}
args := []string{"find-certificate", "-a", "-p",
"/System/Library/Keychains/SystemRootCertificates.keychain",
"/Library/Keychains/System.keychain",
}
u, err := user.Current()
if err != nil {
if debugExecDarwinRoots {
println(fmt.Sprintf("crypto/x509: get current user: %v", err))
}
} else {
args = append(args,
filepath.Join(u.HomeDir, "/Library/Keychains/login.keychain"),
// Fresh installs of Sierra use a slightly different path for the login keychain
filepath.Join(u.HomeDir, "/Library/Keychains/login.keychain-db"),
)
}
cmd := exec.Command("/usr/bin/security", args...)
data, err := cmd.Output()
if err != nil {
return nil, err
}
var (
mu sync.Mutex
roots = NewCertPool()
numVerified int // number of execs of 'security verify-cert', for debug stats
)
blockCh := make(chan *pem.Block)
var wg sync.WaitGroup
// Using 4 goroutines to pipe into verify-cert seems to be
// about the best we can do. The verify-cert binary seems to
// just RPC to another server with coarse locking anyway, so
// running 16 at a time for instance doesn't help at all. Due
// to the "if hasPolicy" check below, though, we will rarely
// (or never) call verify-cert on stock macOS systems.
// The hope is that we only call verify-cert when the user has
// tweaked their trust policy. These 4 goroutines are only
// defensive in the pathological case of many trust edits.
for i := 0; i < 4; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for block := range blockCh {
cert, err := ParseCertificate(block.Bytes)
if err != nil {
continue
}
sha1CapHex := fmt.Sprintf("%X", sha1.Sum(block.Bytes))
valid := true
verifyChecks := 0
if hasPolicy[sha1CapHex] {
verifyChecks++
if !verifyCertWithSystem(block, cert) {
valid = false
}
}
mu.Lock()
numVerified += verifyChecks
if valid {
roots.AddCert(cert)
}
mu.Unlock()
}
}()
}
for len(data) > 0 {
var block *pem.Block
block, data = pem.Decode(data)
if block == nil {
break
}
if block.Type != "CERTIFICATE" || len(block.Headers) != 0 {
continue
}
blockCh <- block
}
close(blockCh)
wg.Wait()
if debugExecDarwinRoots {
mu.Lock()
defer mu.Unlock()
println(fmt.Sprintf("crypto/x509: ran security verify-cert %d times", numVerified))
}
return roots, nil
}
func verifyCertWithSystem(block *pem.Block, cert *Certificate) bool {
data := pem.EncodeToMemory(block)
f, err := ioutil.TempFile("", "cert")
if err != nil {
fmt.Fprintf(os.Stderr, "can't create temporary file for cert: %v", err)
return false
}
defer os.Remove(f.Name())
if _, err := f.Write(data); err != nil {
fmt.Fprintf(os.Stderr, "can't write temporary file for cert: %v", err)
return false
}
if err := f.Close(); err != nil {
fmt.Fprintf(os.Stderr, "can't write temporary file for cert: %v", err)
return false
}
cmd := exec.Command("/usr/bin/security", "verify-cert", "-c", f.Name(), "-l", "-L")
var stderr bytes.Buffer
if debugExecDarwinRoots {
cmd.Stderr = &stderr
}
if err := cmd.Run(); err != nil {
if debugExecDarwinRoots {
println(fmt.Sprintf("crypto/x509: verify-cert rejected %s: %q", cert.Subject.CommonName, bytes.TrimSpace(stderr.Bytes())))
}
return false
}
if debugExecDarwinRoots {
println(fmt.Sprintf("crypto/x509: verify-cert approved %s", cert.Subject.CommonName))
}
return true
}
// getCertsWithTrustPolicy returns the set of certs that have a
// possibly-altered trust policy. The keys of the map are capitalized
// sha1 hex of the raw cert.
// They are the certs that should be checked against `security
// verify-cert` to see whether the user altered the default trust
// settings. This code is only used for cgo-disabled builds.
func getCertsWithTrustPolicy() (map[string]bool, error) {
set := map[string]bool{}
td, err := ioutil.TempDir("", "x509trustpolicy")
if err != nil {
return nil, err
}
defer os.RemoveAll(td)
run := func(file string, args ...string) error {
file = filepath.Join(td, file)
args = append(args, file)
cmd := exec.Command("/usr/bin/security", args...)
var stderr bytes.Buffer
cmd.Stderr = &stderr
if err := cmd.Run(); err != nil {
// If there are no trust settings, the
// `security trust-settings-export` command
// fails with:
// exit status 1, SecTrustSettingsCreateExternalRepresentation: No Trust Settings were found.
// Rather than match on English substrings that are probably
// localized on macOS, just interpret any failure to mean that
// there are no trust settings.
if debugExecDarwinRoots {
println(fmt.Sprintf("crypto/x509: exec %q: %v, %s", cmd.Args, err, stderr.Bytes()))
}
return nil
}
f, err := os.Open(file)
if err != nil {
return err
}
defer f.Close()
// Gather all the runs of 40 capitalized hex characters.
br := bufio.NewReader(f)
var hexBuf bytes.Buffer
for {
b, err := br.ReadByte()
isHex := ('A' <= b && b <= 'F') || ('0' <= b && b <= '9')
if isHex {
hexBuf.WriteByte(b)
} else {
if hexBuf.Len() == 40 {
set[hexBuf.String()] = true
}
hexBuf.Reset()
}
if err == io.EOF {
break
}
if err != nil {
return err
}
}
return nil
}
if err := run("user", "trust-settings-export"); err != nil {
return nil, fmt.Errorf("dump-trust-settings (user): %v", err)
}
if err := run("admin", "trust-settings-export", "-d"); err != nil {
return nil, fmt.Errorf("dump-trust-settings (admin): %v", err)
}
return set, nil
}

File diff suppressed because it is too large

View file

@ -0,0 +1,14 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package x509
// Possible certificate files; stop after finding one.
var certFiles = []string{
"/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
"/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL 6
"/etc/ssl/ca-bundle.pem", // OpenSUSE
"/etc/pki/tls/cacert.pem", // OpenELEC
"/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem", // CentOS/RHEL 7
}

View file

@ -0,0 +1,8 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package x509
// Possible certificate files; stop after finding one.
var certFiles = []string{}

View file

@ -0,0 +1,11 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !cgo
package x509
func loadSystemRoots() (*CertPool, error) {
return execSecurityRoots()
}

View file

@ -6,7 +6,10 @@
package x509
import "io/ioutil"
import (
"io/ioutil"
"os"
)
// Possible certificate files; stop after finding one.
var certFiles = []string{
@ -17,17 +20,18 @@ func (c *Certificate) systemVerify(opts *VerifyOptions) (chains [][]*Certificate
return nil, nil
}
func initSystemRoots() {
func loadSystemRoots() (*CertPool, error) {
roots := NewCertPool()
var bestErr error
for _, file := range certFiles {
data, err := ioutil.ReadFile(file)
if err == nil {
roots.AppendCertsFromPEM(data)
systemRoots = roots
return
return roots, nil
}
if bestErr == nil || (os.IsNotExist(bestErr) && !os.IsNotExist(err)) {
bestErr = err
}
}
// All of the files failed to load. systemRoots will be nil which will
// trigger a specific error at verification time.
return nil, bestErr
}

View file

@ -0,0 +1,12 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package x509
// Possible certificate files; stop after finding one.
var certFiles = []string{
"/etc/certs/ca-certificates.crt", // Solaris 11.2+
"/etc/ssl/certs/ca-certificates.crt", // Joyent SmartOS
"/etc/ssl/cacert.pem", // OmniOS
}

View file

@ -0,0 +1,88 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build dragonfly freebsd linux nacl netbsd openbsd solaris
package x509
import (
"io/ioutil"
"os"
)
// Possible directories with certificate files; stop after successfully
// reading at least one file from a directory.
var certDirectories = []string{
"/etc/ssl/certs", // SLES10/SLES11, https://golang.org/issue/12139
"/system/etc/security/cacerts", // Android
"/usr/local/share/certs", // FreeBSD
"/etc/pki/tls/certs", // Fedora/RHEL
"/etc/openssl/certs", // NetBSD
}
const (
// certFileEnv is the environment variable which identifies where to locate
// the SSL certificate file. If set this overrides the system default.
certFileEnv = "SSL_CERT_FILE"
// certDirEnv is the environment variable which identifies which directory
// to check for SSL certificate files. If set this overrides the system default.
certDirEnv = "SSL_CERT_DIR"
)
func (c *Certificate) systemVerify(opts *VerifyOptions) (chains [][]*Certificate, err error) {
return nil, nil
}
func loadSystemRoots() (*CertPool, error) {
roots := NewCertPool()
files := certFiles
if f := os.Getenv(certFileEnv); f != "" {
files = []string{f}
}
var firstErr error
for _, file := range files {
data, err := ioutil.ReadFile(file)
if err == nil {
roots.AppendCertsFromPEM(data)
break
}
if firstErr == nil && !os.IsNotExist(err) {
firstErr = err
}
}
dirs := certDirectories
if d := os.Getenv(certDirEnv); d != "" {
dirs = []string{d}
}
for _, directory := range dirs {
fis, err := ioutil.ReadDir(directory)
if err != nil {
if firstErr == nil && !os.IsNotExist(err) {
firstErr = err
}
continue
}
rootsAdded := false
for _, fi := range fis {
data, err := ioutil.ReadFile(directory + "/" + fi.Name())
if err == nil && roots.AppendCertsFromPEM(data) {
rootsAdded = true
}
}
if rootsAdded {
return roots, nil
}
}
if len(roots.certs) > 0 {
return roots, nil
}
return nil, firstErr
}
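The `SSL_CERT_FILE`/`SSL_CERT_DIR` overrides above mirror the standard library's Unix behaviour. A minimal sketch using the stdlib `crypto/x509.SystemCertPool` to illustrate the override; the bundle path is a placeholder and this is not part of the vendored code:

```go
package main

import (
	"crypto/x509"
	"fmt"
	"log"
	"os"
)

func main() {
	// Placeholder path: point the loader at a single CA bundle instead of
	// the per-distro defaults. Must be set before the pool is first built.
	os.Setenv("SSL_CERT_FILE", "/tmp/my-ca-bundle.pem")

	pool, err := x509.SystemCertPool()
	if err != nil || pool == nil {
		log.Fatalf("loading system roots: %v", err)
	}
	fmt.Println("roots loaded:", len(pool.Subjects()))
}
```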

View file

@ -87,7 +87,7 @@ func checkChainTrustStatus(c *Certificate, chainCtx *syscall.CertChainContext) e
status := chainCtx.TrustStatus.ErrorStatus
switch status {
case syscall.CERT_TRUST_IS_NOT_TIME_VALID:
return CertificateInvalidError{c, Expired}
return CertificateInvalidError{c, Expired, ""}
default:
return UnknownAuthorityError{c, nil, nil}
}
@ -125,7 +125,7 @@ func checkChainSSLServerPolicy(c *Certificate, chainCtx *syscall.CertChainContex
if status.Error != 0 {
switch status.Error {
case syscall.CERT_E_EXPIRED:
return CertificateInvalidError{c, Expired}
return CertificateInvalidError{c, Expired, ""}
case syscall.CERT_E_CN_NO_MATCH:
return HostnameError{c, opts.DNSName}
case syscall.CERT_E_UNTRUSTEDROOT:
@ -179,7 +179,7 @@ func (c *Certificate) systemVerify(opts *VerifyOptions) (chains [][]*Certificate
}
// CertGetCertificateChain will traverse Windows's root stores
// in an attempt to build a verified certificate chain. Once
// in an attempt to build a verified certificate chain. Once
// it has found a verified chain, it stops. MSDN docs on
// CERT_CHAIN_CONTEXT:
//
@ -225,5 +225,42 @@ func (c *Certificate) systemVerify(opts *VerifyOptions) (chains [][]*Certificate
return chains, nil
}
func initSystemRoots() {
func loadSystemRoots() (*CertPool, error) {
// TODO: restore this functionality on Windows. We tried to do
// it in Go 1.8 but had to revert it. See Issue 18609.
// Returning (nil, nil) was the old behavior, prior to CL 30578.
return nil, nil
const CRYPT_E_NOT_FOUND = 0x80092004
store, err := syscall.CertOpenSystemStore(0, syscall.StringToUTF16Ptr("ROOT"))
if err != nil {
return nil, err
}
defer syscall.CertCloseStore(store, 0)
roots := NewCertPool()
var cert *syscall.CertContext
for {
cert, err = syscall.CertEnumCertificatesInStore(store, cert)
if err != nil {
if errno, ok := err.(syscall.Errno); ok {
if errno == CRYPT_E_NOT_FOUND {
break
}
}
return nil, err
}
if cert == nil {
break
}
// Copy the buf, since ParseCertificate does not create its own copy.
buf := (*[1 << 20]byte)(unsafe.Pointer(cert.EncodedCert))[:]
buf2 := make([]byte, cert.Length)
copy(buf2, buf)
if c, err := ParseCertificate(buf2); err == nil {
roots.AddCert(c)
}
}
return roots, nil
}

View file

@ -7,21 +7,20 @@ package x509
import (
"crypto/ecdsa"
"crypto/elliptic"
// START CT CHANGES
"github.com/google/certificate-transparency/go/asn1"
// START CT CHANGES
"errors"
"fmt"
"math/big"
"github.com/google/certificate-transparency-go/asn1"
)
const ecPrivKeyVersion = 1
// ecPrivateKey reflects an ASN.1 Elliptic Curve Private Key Structure.
// References:
// RFC5915
// SEC1 - http://www.secg.org/download/aid-780/sec1-v2.pdf
// Per RFC5915 the NamedCurveOID is marked as ASN.1 OPTIONAL, however in
// RFC 5915
// SEC1 - http://www.secg.org/sec1-v2.pdf
// Per RFC 5915 the NamedCurveOID is marked as ASN.1 OPTIONAL, however in
// most cases it is not.
type ecPrivateKey struct {
Version int
@ -31,19 +30,30 @@ type ecPrivateKey struct {
}
// ParseECPrivateKey parses an ASN.1 Elliptic Curve Private Key Structure.
func ParseECPrivateKey(der []byte) (key *ecdsa.PrivateKey, err error) {
func ParseECPrivateKey(der []byte) (*ecdsa.PrivateKey, error) {
return parseECPrivateKey(nil, der)
}
// MarshalECPrivateKey marshals an EC private key into ASN.1, DER format.
func MarshalECPrivateKey(key *ecdsa.PrivateKey) ([]byte, error) {
oid, ok := oidFromNamedCurve(key.Curve)
oid, ok := OIDFromNamedCurve(key.Curve)
if !ok {
return nil, errors.New("x509: unknown elliptic curve")
}
return marshalECPrivateKeyWithOID(key, oid)
}
// marshalECPrivateKey marshals an EC private key into ASN.1, DER format and
// sets the curve ID to the given OID, or omits it if OID is nil.
func marshalECPrivateKeyWithOID(key *ecdsa.PrivateKey, oid asn1.ObjectIdentifier) ([]byte, error) {
privateKeyBytes := key.D.Bytes()
paddedPrivateKey := make([]byte, (key.Curve.Params().N.BitLen()+7)/8)
copy(paddedPrivateKey[len(paddedPrivateKey)-len(privateKeyBytes):], privateKeyBytes)
return asn1.Marshal(ecPrivateKey{
Version: 1,
PrivateKey: key.D.Bytes(),
PrivateKey: paddedPrivateKey,
NamedCurveOID: oid,
PublicKey: asn1.BitString{Bytes: elliptic.Marshal(key.Curve, key.X, key.Y)},
})
@ -73,13 +83,30 @@ func parseECPrivateKey(namedCurveOID *asn1.ObjectIdentifier, der []byte) (key *e
}
k := new(big.Int).SetBytes(privKey.PrivateKey)
if k.Cmp(curve.Params().N) >= 0 {
curveOrder := curve.Params().N
if k.Cmp(curveOrder) >= 0 {
return nil, errors.New("x509: invalid elliptic curve private key value")
}
priv := new(ecdsa.PrivateKey)
priv.Curve = curve
priv.D = k
priv.X, priv.Y = curve.ScalarBaseMult(privKey.PrivateKey)
privateKey := make([]byte, (curveOrder.BitLen()+7)/8)
// Some private keys have leading zero padding. This is invalid
// according to [SEC1], but this code will ignore it.
for len(privKey.PrivateKey) > len(privateKey) {
if privKey.PrivateKey[0] != 0 {
return nil, errors.New("x509: invalid private key length")
}
privKey.PrivateKey = privKey.PrivateKey[1:]
}
// Some private keys remove all leading zeros, this is also invalid
// according to [SEC1] but since OpenSSL used to do this, we ignore
// this too.
copy(privateKey[len(privateKey)-len(privKey.PrivateKey):], privKey.PrivateKey)
priv.X, priv.Y = curve.ScalarBaseMult(privateKey)
return priv, nil
}
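A round-trip sketch for the SEC 1 helpers above, exercising the new fixed-length private-key padding; the vendored import path is an assumption and the snippet is illustrative only:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"fmt"
	"log"

	"github.com/google/certificate-transparency-go/x509"
)

func main() {
	priv, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		log.Fatal(err)
	}

	// MarshalECPrivateKey left-pads D to the curve size; ParseECPrivateKey
	// tolerates the legacy padding variants described in the comments above.
	der, err := x509.MarshalECPrivateKey(priv)
	if err != nil {
		log.Fatal(err)
	}
	back, err := x509.ParseECPrivateKey(der)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("D preserved:", back.D.Cmp(priv.D) == 0)
}
```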

File diff suppressed because it is too large

File diff suppressed because it is too large

View file

@ -1,55 +0,0 @@
OS X Specific Instructions
==========================
Builds
------
We recommend that you use GClient to build on OSX. Please follow the
instructions in the [main readme](README.md) file.
Trusted root certificates
-------------------------
The CT code requires a set of trusted root certificates in order to:
1. Validate outbound HTTPS connections
2. (In the case of the log-server) decide whether to accept a certificate
chain for inclusion.
On OSX, the system version of OpenSSL (0.9.8gz at time of writing) contains
Apple-provided patches which intercept failed chain validations and re-attempt
them using roots obtained from the system keychain. Since we use a much more
recent (and unpatched) version of OpenSSL this behaviour is unsupported and so
a PEM file containing the trusted root certs must be used.
To use a certificate PEM bundle file with the CT C++ code, the following
methods may be used.
### Incoming inclusion requests (ct-server only)
Set the `--trusted_cert_file` flag to point to the location of the PEM file
containing the set of root certificates whose chains should be accepted for
inclusion into the log.
### For verifying outbound HTTPS connections (ct-mirror)
Either set the `--trusted_roots_certs` flag, or the `SSL_CERT_FILE`
environment variable, to point to the location of the PEM file containing the
root certificates to be used to verify the outbound HTTPS connection.
Sources of trusted roots
------------------------
Obviously the choice of root certificates to trust for outbound HTTPS
connections and incoming inclusion requests are a matter of operating policy,
but it is often useful to have a set of common roots for testing and
development at the very least.
While OSX ships with a set of common trusted roots, they are not directly
available to OpenSSL and must be exported from the keychain first. This can be
achieved with the following command:
```bash
security find-certificate -a -p /Library/Keychains/System.keychain > certs.pem
security find-certificate -a -p /System/Library/Keychains/SystemRootCertificates.keychain >> certs.pem
```

View file

@ -1,303 +0,0 @@
certificate-transparency: Auditing for TLS certificates
=======================================================
[![Build Status](https://travis-ci.org/google/certificate-transparency.svg?branch=master)](https://travis-ci.org/google/certificate-transparency)
- [Introduction](#introduction)
- [Build Quick Start](#build-quick-start)
- [Code Layout](#code-layout)
- [Building the code](#building-the-code)
- [Build Dependencies](#build-dependencies)
- [Software Dependencies](#software-dependencies)
- [Build Troubleshooting](#build-troubleshooting)
- [Compiler Warnings/Errors](#compiler-warnings-errors)
- [Working on a Branch](#working-on-a-branch)
- [Using BoringSSL](#using-boringssl)
- [Testing the code](#testing-the-code)
- [Unit Tests](#unit-tests)
- [Testing and Logging Options](#testing-and-logging-options)
- [Deploying a Log](#deploying-a-log)
- [Operating a Log](#operating-a-log)
Introduction
------------
This repository holds open-source code for functionality related
to [certificate transparency](https://www.certificate-transparency.org/) (CT).
The main areas covered are:
- An open-source, distributed, implementation of a CT Log server, also including:
- An implementation of a read-only ["mirror" server](docs/MirrorLog.md)
that mimics a remote Log.
- Ancillary tools needed for managing and maintaining the Log.
- A collection of client tools and libraries for interacting with a CT Log, in
various programming languages.
- An **experimental** implementation of a [DNS server](docs/DnsServer.md) that
returns CT proofs in the form of DNS records.
- An **experimental** implementation of a [general Log](docs/XjsonServer.md)
that allows arbitrary data (not just TLS certificates) to be logged.
The supported platforms are:
- **Linux**: tested on Ubuntu 14.04; other variants (Fedora 22, CentOS 7) may
require tweaking of [compiler options](#build-troubleshooting).
- **OS X**: version 10.10
- **FreeBSD**: version 10.*
Build Quick Start
-----------------
First, ensure that the build machine has all of the required [build dependencies](#build-dependencies).
Then use
[gclient](https://www.chromium.org/developers/how-tos/depottools#TOC-gclient) to
retrieve and build the [other software](#software-dependencies) needed by the Log,
and then use (GNU) `make` to build and test the CT code:
```bash
export CXX=clang++ CC=clang
mkdir ct # or whatever directory you prefer
cd ct
gclient config --name="certificate-transparency" https://github.com/google/certificate-transparency.git
gclient sync # retrieve and build dependencies
# substitute gmake or gnumake below if that's what your platform calls it:
make -C certificate-transparency check # build the CT software & self-test
```
Code Layout
-----------
The source code is generally arranged according to implementation language, in
the `cpp`, `go`, `java` and `python` subdirectories. The key subdirectories
are:
- For the main distributed CT Log itself:
- `cpp/log`: Main distributed CT Log implementation.
- `cpp/merkletree`: Merkle tree implementation.
- `cpp/server`: Top-level code for server implementations.
- `cpp/monitoring`: Code to export operation statistics from CT Log.
- The [CT mirror Log](docs/MirrorLog.md) implementation also uses:
- `cpp/fetcher`: Code to fetch entries from another Log
- Client code for accessing a CT Log instance:
- `cpp/client`: CT Log client code in C++
- `go/client`: CT Log client code in Go
- `python/ct`: CT Log client code in Python
- `java/src/org/certificatetransparency/ctlog`: CT Log client code in Java
- Other tools:
- `go/fixchain`: Tool to fix up certificate chains
- `go/gossip`: Code to allow gossip-based synchronization of cert info
- `go/scanner`: CT Log scanner tool
- `go/merkletree`: Merkle tree implementation in Go.
Building the Code
-----------------
The CT software in this repository relies on a number of other
[open-source projects](#software-dependencies), and we recommend that:
- The CT software should be built using local copies of these dependencies
rather than installed packages, to prevent version incompatibilities.
- The dependent libraries should be statically linked into the CT binaries,
rather than relying on dynamically linked libraries that may be different in
the deployed environment.
The supported build system uses the
[gclient](https://www.chromium.org/developers/how-tos/depottools#TOC-gclient)
tool from the Chromium project to handle these requirements and to ensure a
reliable, reproducible build. Older build instructions for using
[Ubuntu](docs/archive/BuildUbuntu.md) or
[Fedora](docs/archive/BuildFedora.md) packages and for
[manually building dependencies from source](docs/archive/BuildSrc.md) are no
longer supported.
Within a main top-level directory, gclient handles the process of:
- generating subdirectories for each dependency
- generating a subdirectory for the CT Log code itself
- building all of the dependencies
- installing the built dependencies into an `install/` subdirectory
- configuring the CT build to reference the built dependencies.
Under the covers, this gclient build process is controlled by:
- The master [DEPS](DEPS) file, which configures the locations and versions
of the source code needed for the dependencies, and which hooks onto ...
- The makefiles in the [build/](build) subdirectory, which govern the build
process for each dependency, ensuring that:
- Static libraries are built.
- Built code is installed into the local `install/` directory, where it
is available for the build of the CT code itself.
### Build Dependencies
The following tools are needed to build the CT software and its dependencies.
- [depot_tools](https://www.chromium.org/developers/how-tos/install-depot-tools)
- autoconf/automake etc.
- libtool
- shtool
- clang++ (>=3.4)
- cmake (>=v3.1.2)
- git
- GNU make
- Tcl
- pkg-config
- Python 2.7
The exact packages required to install these tools depend on the platform.
For a Debian-based system, the relevant packages are:
`autoconf automake libtool shtool cmake clang git make tcl pkg-config python2.7`
### Software Dependencies
The following collections of additional software are used by the main CT
Log codebase.
- Google utility libraries:
- [gflags](https://github.com/gflags/gflags): command-line flag handling
- [glog](https://github.com/google/glog): logging infrastructure, which
also requires libunwind.
- [Google Mock](https://github.com/google/googlemock.git): C++ mocking framework
- [Google Test](https://github.com/google/googletest.git): C++ test framework
- [Protocol Buffers](https://developers.google.com/protocol-buffers/):
language-neutral data serialization library
- [tcmalloc](http://goog-perftools.sourceforge.net/doc/tcmalloc.html):
efficient `malloc` replacement optimized for multi-threaded use
- Other utility libraries:
- [libevent](http://libevent.org/): event-processing library
- [libevhtp](https://github.com/ellzey/libevhtp): HTTP server
plug-in/replacement for libevent
- [json-c](https://github.com/json-c/json-c): JSON processing library
- [libunwind](http://www.nongnu.org/libunwind/): library for generating
stack traces
- Cryptographic library: one of the following, selected via the `SSL` build
variable.
- [OpenSSL](https://www.openssl.org/): default cryptography library.
- [BoringSSL](https://boringssl.googlesource.com/boringssl/): Google's
fork of OpenSSL
- Data storage functionality: one of the following, defaulting to (and highly recommended to stick with) LevelDB.
- [LevelDB](https://github.com/google/leveldb): fast key-value store,
which uses:
- [Snappy](http://google.github.io/snappy/): compression library
- [SQLite](https://www.sqlite.org/): file-based SQL library
The extra (experimental) CT projects in this repo involve additional
dependencies:
- The experimental CT [DNS server](docs/DnsServer.md) uses:
- [ldns](http://www.nlnetlabs.nl/projects/ldns/): DNS library, including DNSSEC support (which relies on OpenSSL for crypto functionality)
- The experimental [general Log](docs/XjsonServer.md) uses:
- [objecthash](https://github.com/benlaurie/objecthash): tools for
hashing objects in a language/encoding-agnostic manner
- [ICU](http://site.icu-project.org/): Unicode libraries (needed to
normalize international text in objects)
Build Troubleshooting
---------------------
### Compiler Warnings/Errors
The CT C++ codebase is built with the Clang `-Werror` flag so that the
codebase stays warning-free. However, this can cause build errors when
newer/different versions of the C++ compiler are used, as any newly created
warnings are treated as errors. To fix this, add the appropriate
`-Wno-error=<warning-name>` option to `CXXFLAGS`.
For example, on errors involving unused variables try using:
```bash
CXXFLAGS="-O2 -Wno-error=unused-variable" gclient sync
```
If an error about an unused typedef in a `glog` header file occurs, try this:
```bash
CXXFLAGS="-O2 -Wno-error=unused-variable -Wno-error=unused-local-typedefs" gclient sync
```
When changing `CXXFLAGS` it's safer to remove the existing build directories
in case not all dependencies are properly accounted for and rebuilt. If
problems persist, check that the Makefile in `certificate-transparency`
contains the options that were passed in `CXXFLAGS`.
### Working on a Branch
If you're trying to clone from a branch on the CT repository then you'll need
to substitute the following command for the `gclient config` command
[above](#build-quick-start), replacing `branch` as appropriate:
```bash
gclient config --name="certificate-transparency" https://github.com/google/certificate-transparency.git@branch
```
### Using BoringSSL
The BoringSSL fork of OpenSSL can be used in place of OpenSSL (but note that
the experimental [CT DNS server](docs/DnsServer.md) does not support this
configuration). To enable this, after the first step (`gclient config ...`)
in the gclient [build process](#build-quick-start), modify the top-level
`.gclient` to add:
```python
"custom_vars": { "ssl_impl": "boringssl" } },
```
Then continue the [build process](#build-quick-start) with the `gclient sync` step.
Testing the Code
----------------
### Unit Tests
The unit tests for the CT code can be run with the `make check` target of
`certificate-transparency/Makefile`.
### Testing and Logging Options
Note that several tests write files on disk. The default directory for
storing temporary testdata is `/tmp`. You can change this by setting
`TMPDIR=<tmpdir>` for make.
End-to-end tests also create temporary certificate and server files in
`test/tmp`. All these files are cleaned up after a successful test
run.
For logging options, see the
[glog documentation](http://htmlpreview.github.io/?https://github.com/google/glog/blob/master/doc/glog.html).
By default, unit tests log to `stderr`, and log only messages with a FATAL
level (i.e., those that result in abnormal program termination). You can
override the defaults with command-line flags.
Deploying a Log
---------------
The build process described so far generates a set of executables; however,
other components and configuration are needed to set up a running CT Log.
In particular, as shown in the following diagram:
- A set of web servers that act as HTTPS terminators and load
balancers is needed in front of the CT Log instances.
- A cluster of [etcd](https://github.com/coreos/etcd) instances is needed to
provide replication and synchronization services for the CT Log instances.
<img src="docs/images/SystemDiagram.png" width="650">
Configuring and setting up a distributed production Log is covered in a
[separate document](docs/Deployment.md).
Operating a Log
---------------
Running a successful, trusted certificate transparency Log involves more than
just deploying a set of binaries. Information and advice on operating a
running CT Log is covered in a [separate document](docs/Operation.md).

View file

@@ -1,214 +0,0 @@
/***************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2012, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
* are also available at http://curl.haxx.se/docs/copyright.html.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the COPYING file.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
***************************************************************************/
/* This file is an amalgamation of hostcheck.c and most of rawstr.c
from cURL. The contents of the COPYING file mentioned above are:
COPYRIGHT AND PERMISSION NOTICE
Copyright (c) 1996 - 2013, Daniel Stenberg, <daniel@haxx.se>.
All rights reserved.
Permission to use, copy, modify, and distribute this software for any purpose
with or without fee is hereby granted, provided that the above copyright
notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT OF THIRD PARTY RIGHTS. IN
NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE
OR OTHER DEALINGS IN THE SOFTWARE.
Except as contained in this notice, the name of a copyright holder shall not
be used in advertising or otherwise to promote the sale, use or other dealings
in this Software without prior written authorization of the copyright holder.
*/
#include "hostcheck.h"
#include <string.h>
/* Portable, consistent toupper (remember EBCDIC). Do not use toupper() because
its behavior is altered by the current locale. */
static char Curl_raw_toupper(char in) {
switch (in) {
case 'a':
return 'A';
case 'b':
return 'B';
case 'c':
return 'C';
case 'd':
return 'D';
case 'e':
return 'E';
case 'f':
return 'F';
case 'g':
return 'G';
case 'h':
return 'H';
case 'i':
return 'I';
case 'j':
return 'J';
case 'k':
return 'K';
case 'l':
return 'L';
case 'm':
return 'M';
case 'n':
return 'N';
case 'o':
return 'O';
case 'p':
return 'P';
case 'q':
return 'Q';
case 'r':
return 'R';
case 's':
return 'S';
case 't':
return 'T';
case 'u':
return 'U';
case 'v':
return 'V';
case 'w':
return 'W';
case 'x':
return 'X';
case 'y':
return 'Y';
case 'z':
return 'Z';
}
return in;
}
/*
* Curl_raw_equal() is for doing "raw" case insensitive strings. This is meant
* to be locale independent and only compare strings we know are safe for
* this. See http://daniel.haxx.se/blog/2008/10/15/strcasecmp-in-turkish/ for
* some further explanation to why this function is necessary.
*
* The function is capable of comparing a-z case insensitively even for
* non-ascii.
*/
static int Curl_raw_equal(const char *first, const char *second) {
while (*first && *second) {
if (Curl_raw_toupper(*first) != Curl_raw_toupper(*second))
/* get out of the loop as soon as they don't match */
break;
first++;
second++;
}
/* we do the comparison here (possibly again), just to make sure that if the
loop above is skipped because one of the strings reached zero, we must not
return this as a successful match */
return (Curl_raw_toupper(*first) == Curl_raw_toupper(*second));
}
static int Curl_raw_nequal(const char *first, const char *second, size_t max) {
while (*first && *second && max) {
if (Curl_raw_toupper(*first) != Curl_raw_toupper(*second)) {
break;
}
max--;
first++;
second++;
}
if (0 == max)
return 1; /* they are equal this far */
return Curl_raw_toupper(*first) == Curl_raw_toupper(*second);
}
/*
* Match a hostname against a wildcard pattern.
* E.g.
* "foo.host.com" matches "*.host.com".
*
* We use the matching rule described in RFC6125, section 6.4.3.
* http://tools.ietf.org/html/rfc6125#section-6.4.3
*/
static int hostmatch(const char *hostname, const char *pattern) {
const char *pattern_label_end, *pattern_wildcard, *hostname_label_end;
int wildcard_enabled;
size_t prefixlen, suffixlen;
pattern_wildcard = strchr(pattern, '*');
if (pattern_wildcard == NULL)
return Curl_raw_equal(pattern, hostname) ? CURL_HOST_MATCH
: CURL_HOST_NOMATCH;
/* We require at least 2 dots in pattern to avoid too wide wildcard
match. */
wildcard_enabled = 1;
pattern_label_end = strchr(pattern, '.');
if (pattern_label_end == NULL ||
strchr(pattern_label_end + 1, '.') == NULL ||
pattern_wildcard > pattern_label_end ||
Curl_raw_nequal(pattern, "xn--", 4)) {
wildcard_enabled = 0;
}
if (!wildcard_enabled)
return Curl_raw_equal(pattern, hostname) ? CURL_HOST_MATCH
: CURL_HOST_NOMATCH;
hostname_label_end = strchr(hostname, '.');
if (hostname_label_end == NULL ||
!Curl_raw_equal(pattern_label_end, hostname_label_end))
return CURL_HOST_NOMATCH;
/* The wildcard must match at least one character, so the left-most
label of the hostname is at least as large as the left-most label
of the pattern. */
if (hostname_label_end - hostname < pattern_label_end - pattern)
return CURL_HOST_NOMATCH;
prefixlen = pattern_wildcard - pattern;
suffixlen = pattern_label_end - (pattern_wildcard + 1);
return Curl_raw_nequal(pattern, hostname, prefixlen) &&
Curl_raw_nequal(pattern_wildcard + 1,
hostname_label_end - suffixlen, suffixlen)
? CURL_HOST_MATCH
: CURL_HOST_NOMATCH;
}
int Curl_cert_hostcheck(const char *match_pattern, const char *hostname) {
if (!match_pattern || !*match_pattern || !hostname ||
!*hostname) /* sanity check */
return 0;
if (Curl_raw_equal(hostname, match_pattern)) /* trivial case */
return 1;
if (hostmatch(hostname, match_pattern) == CURL_HOST_MATCH)
return 1;
return 0;
}

View file

@@ -1,29 +0,0 @@
#ifndef HEADER_CURL_HOSTCHECK_H
#define HEADER_CURL_HOSTCHECK_H
/***************************************************************************
* _ _ ____ _
* Project ___| | | | _ \| |
* / __| | | | |_) | |
* | (__| |_| | _ <| |___
* \___|\___/|_| \_\_____|
*
* Copyright (C) 1998 - 2012, Daniel Stenberg, <daniel@haxx.se>, et al.
*
* This software is licensed as described in the file COPYING, which
* you should have received as part of this distribution. The terms
* are also available at http://curl.haxx.se/docs/copyright.html.
*
* You may opt to use, copy, modify, merge, publish, distribute and/or sell
* copies of the Software, and permit persons to whom the Software is
* furnished to do so, under the terms of the COPYING file.
*
* This software is distributed on an "AS IS" basis, WITHOUT WARRANTY OF ANY
* KIND, either express or implied.
*
***************************************************************************/
#define CURL_HOST_NOMATCH 0
#define CURL_HOST_MATCH 1
int Curl_cert_hostcheck(const char* match_pattern, const char* hostname);
#endif /* HEADER_CURL_HOSTCHECK_H */

View file

@@ -1,180 +0,0 @@
/* Obtained from: https://github.com/iSECPartners/ssl-conservatory */
/*
Copyright (C) 2012, iSEC Partners.
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
/*
* Helper functions to perform basic hostname validation using OpenSSL.
*
* Please read "everything-you-wanted-to-know-about-openssl.pdf" before
* attempting to use this code. This whitepaper describes how the code works,
* how it should be used, and what its limitations are.
*
* Author: Alban Diquet
* License: See LICENSE
*
*/
#include <openssl/x509v3.h>
#include <openssl/ssl.h>
#include "third_party/curl/hostcheck.h"
#include "third_party/isec_partners/openssl_hostname_validation.h"
#define HOSTNAME_MAX_SIZE 255
/**
* Tries to find a match for hostname in the certificate's Common Name field.
*
* Returns MatchFound if a match was found.
* Returns MatchNotFound if no matches were found.
* Returns MalformedCertificate if the Common Name had a NUL character embedded
* in it.
* Returns Error if the Common Name could not be extracted.
*/
static HostnameValidationResult matches_common_name(const char *hostname,
const X509 *server_cert) {
int common_name_loc = -1;
X509_NAME_ENTRY *common_name_entry = NULL;
ASN1_STRING *common_name_asn1 = NULL;
char *common_name_str = NULL;
// Find the position of the CN field in the Subject field of the certificate
common_name_loc =
X509_NAME_get_index_by_NID(X509_get_subject_name((X509 *)server_cert),
NID_commonName, -1);
if (common_name_loc < 0) {
return Error;
}
// Extract the CN field
common_name_entry =
X509_NAME_get_entry(X509_get_subject_name((X509 *)server_cert),
common_name_loc);
if (common_name_entry == NULL) {
return Error;
}
// Convert the CN field to a C string
common_name_asn1 = X509_NAME_ENTRY_get_data(common_name_entry);
if (common_name_asn1 == NULL) {
return Error;
}
common_name_str = (char *)ASN1_STRING_data(common_name_asn1);
// Make sure there isn't an embedded NUL character in the CN
if ((size_t)ASN1_STRING_length(common_name_asn1) !=
strlen(common_name_str)) {
return MalformedCertificate;
}
// Compare expected hostname with the CN
if (Curl_cert_hostcheck(common_name_str, hostname) == CURL_HOST_MATCH) {
return MatchFound;
} else {
return MatchNotFound;
}
}
/**
* Tries to find a match for hostname in the certificate's Subject Alternative
* Name extension.
*
* Returns MatchFound if a match was found.
* Returns MatchNotFound if no matches were found.
* Returns MalformedCertificate if any of the hostnames had a NUL character
* embedded in it.
* Returns NoSANPresent if the SAN extension was not present in the certificate.
*/
static HostnameValidationResult matches_subject_alternative_name(
const char *hostname, const X509 *server_cert) {
HostnameValidationResult result = MatchNotFound;
int i;
int san_names_nb = -1;
STACK_OF(GENERAL_NAME) *san_names = NULL;
// Try to extract the names within the SAN extension from the certificate
san_names =
X509_get_ext_d2i((X509 *)server_cert, NID_subject_alt_name, NULL, NULL);
if (san_names == NULL) {
return NoSANPresent;
}
san_names_nb = sk_GENERAL_NAME_num(san_names);
// Check each name within the extension
for (i = 0; i < san_names_nb; i++) {
const GENERAL_NAME *current_name = sk_GENERAL_NAME_value(san_names, i);
if (current_name->type == GEN_DNS) {
// Current name is a DNS name, let's check it
char *dns_name = (char *)ASN1_STRING_data(current_name->d.dNSName);
// Make sure there isn't an embedded NUL character in the DNS name
if ((size_t)ASN1_STRING_length(current_name->d.dNSName) !=
strlen(dns_name)) {
result = MalformedCertificate;
break;
} else { // Compare expected hostname with the DNS name
if (Curl_cert_hostcheck(dns_name, hostname) == CURL_HOST_MATCH) {
result = MatchFound;
break;
}
}
}
}
sk_GENERAL_NAME_pop_free(san_names, GENERAL_NAME_free);
return result;
}
/**
* Validates the server's identity by looking for the expected hostname in the
* server's certificate. As described in RFC 6125, it first tries to find a
* match
* in the Subject Alternative Name extension. If the extension is not present in
* the certificate, it checks the Common Name instead.
*
* Returns MatchFound if a match was found.
* Returns MatchNotFound if no matches were found.
* Returns MalformedCertificate if any of the hostnames had a NUL character
* embedded in it.
* Returns Error if there was an error.
*/
HostnameValidationResult validate_hostname(const char *hostname,
const X509 *server_cert) {
HostnameValidationResult result;
if ((hostname == NULL) || (server_cert == NULL))
return Error;
// First try the Subject Alternative Names extension
result = matches_subject_alternative_name(hostname, server_cert);
if (result == NoSANPresent) {
// Extension was not found: try the Common Name
result = matches_common_name(hostname, server_cert);
}
return result;
}

View file

@@ -1,59 +0,0 @@
/* Obtained from: https://github.com/iSECPartners/ssl-conservatory */
/*
Copyright (C) 2012, iSEC Partners.
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
*/
/*
* Helper functions to perform basic hostname validation using OpenSSL.
*
* Please read "everything-you-wanted-to-know-about-openssl.pdf" before
* attempting to use this code. This whitepaper describes how the code works,
* how it should be used, and what its limitations are.
*
* Author: Alban Diquet
* License: See LICENSE
*
*/
typedef enum {
MatchFound,
MatchNotFound,
NoSANPresent,
MalformedCertificate,
Error
} HostnameValidationResult;
/**
* Validates the server's identity by looking for the expected hostname in the
* server's certificate. As described in RFC 6125, it first tries to find a
* match
* in the Subject Alternative Name extension. If the extension is not present in
* the certificate, it checks the Common Name instead.
*
* Returns MatchFound if a match was found.
* Returns MatchNotFound if no matches were found.
* Returns MalformedCertificate if any of the hostnames had a NUL character
* embedded in it.
* Returns Error if there was an error.
*/
HostnameValidationResult validate_hostname(const char* hostname,
const X509* server_cert);

View file

@@ -1,12 +0,0 @@
#ifndef CERT_TRANS_VERSION_H_
#define CERT_TRANS_VERSION_H_
namespace cert_trans {
extern const char kBuildVersion[];
} // namespace cert_trans
#endif // CERT_TRANS_VERSION_H_

View file

@@ -1,25 +0,0 @@
This is the really early beginnings of a certificate transparency log
client written in Go, along with a log scanner tool.
You'll need go v1.1 or higher to compile.
# Installation
This go code must be imported into your go workspace before you can
use it, which can be done with:
go get github.com/google/certificate-transparency/go/client
go get github.com/google/certificate-transparency/go/scanner
etc.
# Building the binaries
To compile the log scanner run:
go build github.com/google/certificate-transparency/go/scanner/main/scanner.go
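For a flavour of using the client library directly, here is a minimal sketch (the log URL below is only an example) that fetches a small batch of entries from a log:

```go
package main

import (
	"fmt"
	"log"

	"github.com/google/certificate-transparency/go/client"
)

func main() {
	// Example log URL; point this at the log you want to read from.
	logClient := client.New("https://ct.googleapis.com/pilot", nil)

	// Retrieve entries 0..9 via the log's /ct/v1/get-entries endpoint.
	entries, err := logClient.GetEntries(0, 9)
	if err != nil {
		log.Fatalf("get-entries failed: %v", err)
	}
	for _, e := range entries {
		fmt.Printf("index %d: entry type %v, chain of %d certs\n",
			e.Index, e.Leaf.TimestampedEntry.EntryType, len(e.Chain))
	}
}
```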
# Contributing
When sending pull requests, please ensure that everything's been run
through ```gofmt``` beforehand so we can keep everything nice and
tidy.

View file

@@ -1,581 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package asn1
import (
"bytes"
"errors"
"fmt"
"io"
"math/big"
"reflect"
"time"
"unicode/utf8"
)
// A forkableWriter is an in-memory buffer that can be
// 'forked' to create new forkableWriters that bracket the
// original. After
// pre, post := w.fork();
// the overall sequence of bytes represented is logically w+pre+post.
type forkableWriter struct {
*bytes.Buffer
pre, post *forkableWriter
}
func newForkableWriter() *forkableWriter {
return &forkableWriter{new(bytes.Buffer), nil, nil}
}
func (f *forkableWriter) fork() (pre, post *forkableWriter) {
if f.pre != nil || f.post != nil {
panic("have already forked")
}
f.pre = newForkableWriter()
f.post = newForkableWriter()
return f.pre, f.post
}
func (f *forkableWriter) Len() (l int) {
l += f.Buffer.Len()
if f.pre != nil {
l += f.pre.Len()
}
if f.post != nil {
l += f.post.Len()
}
return
}
func (f *forkableWriter) writeTo(out io.Writer) (n int, err error) {
n, err = out.Write(f.Bytes())
if err != nil {
return
}
var nn int
if f.pre != nil {
nn, err = f.pre.writeTo(out)
n += nn
if err != nil {
return
}
}
if f.post != nil {
nn, err = f.post.writeTo(out)
n += nn
}
return
}
func marshalBase128Int(out *forkableWriter, n int64) (err error) {
if n == 0 {
err = out.WriteByte(0)
return
}
l := 0
for i := n; i > 0; i >>= 7 {
l++
}
for i := l - 1; i >= 0; i-- {
o := byte(n >> uint(i*7))
o &= 0x7f
if i != 0 {
o |= 0x80
}
err = out.WriteByte(o)
if err != nil {
return
}
}
return nil
}
func marshalInt64(out *forkableWriter, i int64) (err error) {
n := int64Length(i)
for ; n > 0; n-- {
err = out.WriteByte(byte(i >> uint((n-1)*8)))
if err != nil {
return
}
}
return nil
}
func int64Length(i int64) (numBytes int) {
numBytes = 1
for i > 127 {
numBytes++
i >>= 8
}
for i < -128 {
numBytes++
i >>= 8
}
return
}
func marshalBigInt(out *forkableWriter, n *big.Int) (err error) {
if n.Sign() < 0 {
// A negative number has to be converted to two's-complement
// form. So we'll subtract 1 and invert. If the
// most-significant-bit isn't set then we'll need to pad the
// beginning with 0xff in order to keep the number negative.
nMinus1 := new(big.Int).Neg(n)
nMinus1.Sub(nMinus1, bigOne)
bytes := nMinus1.Bytes()
for i := range bytes {
bytes[i] ^= 0xff
}
if len(bytes) == 0 || bytes[0]&0x80 == 0 {
err = out.WriteByte(0xff)
if err != nil {
return
}
}
_, err = out.Write(bytes)
} else if n.Sign() == 0 {
// Zero is written as a single 0 zero rather than no bytes.
err = out.WriteByte(0x00)
} else {
bytes := n.Bytes()
if len(bytes) > 0 && bytes[0]&0x80 != 0 {
// We'll have to pad this with 0x00 in order to stop it
// looking like a negative number.
err = out.WriteByte(0)
if err != nil {
return
}
}
_, err = out.Write(bytes)
}
return
}
func marshalLength(out *forkableWriter, i int) (err error) {
n := lengthLength(i)
for ; n > 0; n-- {
err = out.WriteByte(byte(i >> uint((n-1)*8)))
if err != nil {
return
}
}
return nil
}
func lengthLength(i int) (numBytes int) {
numBytes = 1
for i > 255 {
numBytes++
i >>= 8
}
return
}
func marshalTagAndLength(out *forkableWriter, t tagAndLength) (err error) {
b := uint8(t.class) << 6
if t.isCompound {
b |= 0x20
}
if t.tag >= 31 {
b |= 0x1f
err = out.WriteByte(b)
if err != nil {
return
}
err = marshalBase128Int(out, int64(t.tag))
if err != nil {
return
}
} else {
b |= uint8(t.tag)
err = out.WriteByte(b)
if err != nil {
return
}
}
if t.length >= 128 {
l := lengthLength(t.length)
err = out.WriteByte(0x80 | byte(l))
if err != nil {
return
}
err = marshalLength(out, t.length)
if err != nil {
return
}
} else {
err = out.WriteByte(byte(t.length))
if err != nil {
return
}
}
return nil
}
func marshalBitString(out *forkableWriter, b BitString) (err error) {
paddingBits := byte((8 - b.BitLength%8) % 8)
err = out.WriteByte(paddingBits)
if err != nil {
return
}
_, err = out.Write(b.Bytes)
return
}
func marshalObjectIdentifier(out *forkableWriter, oid []int) (err error) {
if len(oid) < 2 || oid[0] > 2 || (oid[0] < 2 && oid[1] >= 40) {
return StructuralError{"invalid object identifier"}
}
err = marshalBase128Int(out, int64(oid[0]*40+oid[1]))
if err != nil {
return
}
for i := 2; i < len(oid); i++ {
err = marshalBase128Int(out, int64(oid[i]))
if err != nil {
return
}
}
return
}
func marshalPrintableString(out *forkableWriter, s string) (err error) {
b := []byte(s)
for _, c := range b {
if !isPrintable(c) {
return StructuralError{"PrintableString contains invalid character"}
}
}
_, err = out.Write(b)
return
}
func marshalIA5String(out *forkableWriter, s string) (err error) {
b := []byte(s)
for _, c := range b {
if c > 127 {
return StructuralError{"IA5String contains invalid character"}
}
}
_, err = out.Write(b)
return
}
func marshalUTF8String(out *forkableWriter, s string) (err error) {
_, err = out.Write([]byte(s))
return
}
func marshalTwoDigits(out *forkableWriter, v int) (err error) {
err = out.WriteByte(byte('0' + (v/10)%10))
if err != nil {
return
}
return out.WriteByte(byte('0' + v%10))
}
func marshalUTCTime(out *forkableWriter, t time.Time) (err error) {
year, month, day := t.Date()
switch {
case 1950 <= year && year < 2000:
err = marshalTwoDigits(out, int(year-1900))
case 2000 <= year && year < 2050:
err = marshalTwoDigits(out, int(year-2000))
default:
return StructuralError{"cannot represent time as UTCTime"}
}
if err != nil {
return
}
err = marshalTwoDigits(out, int(month))
if err != nil {
return
}
err = marshalTwoDigits(out, day)
if err != nil {
return
}
hour, min, sec := t.Clock()
err = marshalTwoDigits(out, hour)
if err != nil {
return
}
err = marshalTwoDigits(out, min)
if err != nil {
return
}
err = marshalTwoDigits(out, sec)
if err != nil {
return
}
_, offset := t.Zone()
switch {
case offset/60 == 0:
err = out.WriteByte('Z')
return
case offset > 0:
err = out.WriteByte('+')
case offset < 0:
err = out.WriteByte('-')
}
if err != nil {
return
}
offsetMinutes := offset / 60
if offsetMinutes < 0 {
offsetMinutes = -offsetMinutes
}
err = marshalTwoDigits(out, offsetMinutes/60)
if err != nil {
return
}
err = marshalTwoDigits(out, offsetMinutes%60)
return
}
func stripTagAndLength(in []byte) []byte {
_, offset, err := parseTagAndLength(in, 0)
if err != nil {
return in
}
return in[offset:]
}
func marshalBody(out *forkableWriter, value reflect.Value, params fieldParameters) (err error) {
switch value.Type() {
case timeType:
return marshalUTCTime(out, value.Interface().(time.Time))
case bitStringType:
return marshalBitString(out, value.Interface().(BitString))
case objectIdentifierType:
return marshalObjectIdentifier(out, value.Interface().(ObjectIdentifier))
case bigIntType:
return marshalBigInt(out, value.Interface().(*big.Int))
}
switch v := value; v.Kind() {
case reflect.Bool:
if v.Bool() {
return out.WriteByte(255)
} else {
return out.WriteByte(0)
}
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return marshalInt64(out, int64(v.Int()))
case reflect.Struct:
t := v.Type()
startingField := 0
// If the first element of the structure is a non-empty
// RawContents, then we don't bother serializing the rest.
if t.NumField() > 0 && t.Field(0).Type == rawContentsType {
s := v.Field(0)
if s.Len() > 0 {
bytes := make([]byte, s.Len())
for i := 0; i < s.Len(); i++ {
bytes[i] = uint8(s.Index(i).Uint())
}
/* The RawContents will contain the tag and
* length fields but we'll also be writing
* those ourselves, so we strip them out of
* bytes */
_, err = out.Write(stripTagAndLength(bytes))
return
} else {
startingField = 1
}
}
for i := startingField; i < t.NumField(); i++ {
var pre *forkableWriter
pre, out = out.fork()
err = marshalField(pre, v.Field(i), parseFieldParameters(t.Field(i).Tag.Get("asn1")))
if err != nil {
return
}
}
return
case reflect.Slice:
sliceType := v.Type()
if sliceType.Elem().Kind() == reflect.Uint8 {
bytes := make([]byte, v.Len())
for i := 0; i < v.Len(); i++ {
bytes[i] = uint8(v.Index(i).Uint())
}
_, err = out.Write(bytes)
return
}
var fp fieldParameters
for i := 0; i < v.Len(); i++ {
var pre *forkableWriter
pre, out = out.fork()
err = marshalField(pre, v.Index(i), fp)
if err != nil {
return
}
}
return
case reflect.String:
switch params.stringType {
case tagIA5String:
return marshalIA5String(out, v.String())
case tagPrintableString:
return marshalPrintableString(out, v.String())
default:
return marshalUTF8String(out, v.String())
}
}
return StructuralError{"unknown Go type"}
}
func marshalField(out *forkableWriter, v reflect.Value, params fieldParameters) (err error) {
// If the field is an interface{} then recurse into it.
if v.Kind() == reflect.Interface && v.Type().NumMethod() == 0 {
return marshalField(out, v.Elem(), params)
}
if v.Kind() == reflect.Slice && v.Len() == 0 && params.omitEmpty {
return
}
if params.optional && reflect.DeepEqual(v.Interface(), reflect.Zero(v.Type()).Interface()) {
return
}
if v.Type() == rawValueType {
rv := v.Interface().(RawValue)
if len(rv.FullBytes) != 0 {
_, err = out.Write(rv.FullBytes)
} else {
err = marshalTagAndLength(out, tagAndLength{rv.Class, rv.Tag, len(rv.Bytes), rv.IsCompound})
if err != nil {
return
}
_, err = out.Write(rv.Bytes)
}
return
}
tag, isCompound, ok := getUniversalType(v.Type())
if !ok {
err = StructuralError{fmt.Sprintf("unknown Go type: %v", v.Type())}
return
}
class := classUniversal
if params.stringType != 0 && tag != tagPrintableString {
return StructuralError{"explicit string type given to non-string member"}
}
if tag == tagPrintableString {
if params.stringType == 0 {
// This is a string without an explicit string type. We'll use
// a PrintableString if the character set in the string is
// sufficiently limited, otherwise we'll use a UTF8String.
for _, r := range v.String() {
if r >= utf8.RuneSelf || !isPrintable(byte(r)) {
if !utf8.ValidString(v.String()) {
return errors.New("asn1: string not valid UTF-8")
}
tag = tagUTF8String
break
}
}
} else {
tag = params.stringType
}
}
if params.set {
if tag != tagSequence {
return StructuralError{"non sequence tagged as set"}
}
tag = tagSet
}
tags, body := out.fork()
err = marshalBody(body, v, params)
if err != nil {
return
}
bodyLen := body.Len()
var explicitTag *forkableWriter
if params.explicit {
explicitTag, tags = tags.fork()
}
if !params.explicit && params.tag != nil {
// implicit tag.
tag = *params.tag
class = classContextSpecific
}
err = marshalTagAndLength(tags, tagAndLength{class, tag, bodyLen, isCompound})
if err != nil {
return
}
if params.explicit {
err = marshalTagAndLength(explicitTag, tagAndLength{
class: classContextSpecific,
tag: *params.tag,
length: bodyLen + tags.Len(),
isCompound: true,
})
}
return nil
}
// Marshal returns the ASN.1 encoding of val.
func Marshal(val interface{}) ([]byte, error) {
var out bytes.Buffer
v := reflect.ValueOf(val)
f := newForkableWriter()
err := marshalField(f, v, fieldParameters{})
if err != nil {
return nil, err
}
_, err = f.writeTo(&out)
return out.Bytes(), nil
}

View file

@@ -1,88 +0,0 @@
package client
import (
"bytes"
"errors"
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
ct "github.com/google/certificate-transparency/go"
"golang.org/x/net/context"
)
// LeafEntry represents a JSON leaf entry.
type LeafEntry struct {
LeafInput []byte `json:"leaf_input"`
ExtraData []byte `json:"extra_data"`
}
// GetEntriesResponse represents the JSON response to the CT get-entries method.
type GetEntriesResponse struct {
Entries []LeafEntry `json:"entries"` // the list of returned entries
}
// GetRawEntries exposes the /ct/v1/get-entries result with only the JSON parsing done.
func GetRawEntries(ctx context.Context, httpClient *http.Client, logURL string, start, end int64) (*GetEntriesResponse, error) {
if end < 0 {
return nil, errors.New("end should be >= 0")
}
if end < start {
return nil, errors.New("start should be <= end")
}
baseURL, err := url.Parse(strings.TrimRight(logURL, "/") + GetEntriesPath)
if err != nil {
return nil, err
}
baseURL.RawQuery = url.Values{
"start": []string{strconv.FormatInt(start, 10)},
"end": []string{strconv.FormatInt(end, 10)},
}.Encode()
var resp GetEntriesResponse
err = fetchAndParse(context.TODO(), httpClient, baseURL.String(), &resp)
if err != nil {
return nil, err
}
return &resp, nil
}
// GetEntries attempts to retrieve the entries in the sequence [|start|, |end|] from the CT log server. (see section 4.6.)
// Returns a slice of LeafInputs or a non-nil error.
func (c *LogClient) GetEntries(start, end int64) ([]ct.LogEntry, error) {
resp, err := GetRawEntries(context.TODO(), c.httpClient, c.uri, start, end)
if err != nil {
return nil, err
}
entries := make([]ct.LogEntry, len(resp.Entries))
for index, entry := range resp.Entries {
leaf, err := ct.ReadMerkleTreeLeaf(bytes.NewBuffer(entry.LeafInput))
if err != nil {
return nil, err
}
entries[index].Leaf = *leaf
var chain []ct.ASN1Cert
switch leaf.TimestampedEntry.EntryType {
case ct.X509LogEntryType:
chain, err = ct.UnmarshalX509ChainArray(entry.ExtraData)
case ct.PrecertLogEntryType:
chain, err = ct.UnmarshalPrecertChainArray(entry.ExtraData)
default:
return nil, fmt.Errorf("saw unknown entry type: %v", leaf.TimestampedEntry.EntryType)
}
if err != nil {
return nil, err
}
entries[index].Chain = chain
entries[index].Index = start + int64(index)
}
return entries, nil
}

View file

@@ -1,412 +0,0 @@
// Package client is a CT log client implementation and contains types and code
// for interacting with RFC6962-compliant CT Log instances.
// See http://tools.ietf.org/html/rfc6962 for details
package client
import (
"bytes"
"crypto/sha256"
"encoding/base64"
"encoding/json"
"errors"
"fmt"
"io/ioutil"
"log"
"net/http"
"net/url"
"strconv"
"time"
ct "github.com/google/certificate-transparency/go"
"golang.org/x/net/context"
)
// URI paths for CT Log endpoints
const (
AddChainPath = "/ct/v1/add-chain"
AddPreChainPath = "/ct/v1/add-pre-chain"
AddJSONPath = "/ct/v1/add-json"
GetSTHPath = "/ct/v1/get-sth"
GetEntriesPath = "/ct/v1/get-entries"
GetProofByHashPath = "/ct/v1/get-proof-by-hash"
GetSTHConsistencyPath = "/ct/v1/get-sth-consistency"
)
// LogClient represents a client for a given CT Log instance
type LogClient struct {
uri string // the base URI of the log. e.g. http://ct.googleapis/pilot
httpClient *http.Client // used to interact with the log via HTTP
verifier *ct.SignatureVerifier // nil if no public key for log available
}
//////////////////////////////////////////////////////////////////////////////////
// JSON structures follow.
// These represent the structures returned by the CT Log server.
//////////////////////////////////////////////////////////////////////////////////
// addChainRequest represents the JSON request body sent to the add-chain CT
// method.
type addChainRequest struct {
Chain [][]byte `json:"chain"`
}
// addChainResponse represents the JSON response to the add-chain CT method.
// An SCT represents a Log's promise to integrate a [pre-]certificate into the
// log within a defined period of time.
type addChainResponse struct {
SCTVersion ct.Version `json:"sct_version"` // SCT structure version
ID []byte `json:"id"` // Log ID
Timestamp uint64 `json:"timestamp"` // Timestamp of issuance
Extensions string `json:"extensions"` // Holder for any CT extensions
Signature []byte `json:"signature"` // Log signature for this SCT
}
// addJSONRequest represents the JSON request body sent to the add-json CT
// method.
type addJSONRequest struct {
Data interface{} `json:"data"`
}
// getSTHResponse represents the JSON response to the get-sth CT method
type getSTHResponse struct {
TreeSize uint64 `json:"tree_size"` // Number of certs in the current tree
Timestamp uint64 `json:"timestamp"` // Time that the tree was created
SHA256RootHash []byte `json:"sha256_root_hash"` // Root hash of the tree
TreeHeadSignature []byte `json:"tree_head_signature"` // Log signature for this STH
}
// getConsistencyProofResponse represents the JSON response to the get-consistency-proof CT method
type getConsistencyProofResponse struct {
Consistency [][]byte `json:"consistency"`
}
// getAuditProofResponse represents the JSON response to the CT get-audit-proof method
type getAuditProofResponse struct {
Hash []string `json:"hash"` // the hashes which make up the proof
TreeSize uint64 `json:"tree_size"` // the tree size against which this proof is constructed
}
// getAcceptedRootsResponse represents the JSON response to the CT get-roots method.
type getAcceptedRootsResponse struct {
Certificates []string `json:"certificates"`
}
// getEntryAndProofResponse represents the JSON response to the CT get-entry-and-proof method
type getEntryAndProofResponse struct {
LeafInput string `json:"leaf_input"` // the entry itself
ExtraData string `json:"extra_data"` // any chain provided when the entry was added to the log
AuditPath []string `json:"audit_path"` // the corresponding proof
}
// GetProofByHashResponse represents the JSON response to the CT get-proof-by-hash method.
type GetProofByHashResponse struct {
LeafIndex int64 `json:"leaf_index"` // The 0-based index of the end entity corresponding to the "hash" parameter.
AuditPath [][]byte `json:"audit_path"` // An array of base64-encoded Merkle Tree nodes proving the inclusion of the chosen certificate.
}
// New constructs a new LogClient instance.
// |uri| is the base URI of the CT log instance to interact with, e.g.
// http://ct.googleapis.com/pilot
// |hc| is the underlying client to be used for HTTP requests to the CT log.
func New(uri string, hc *http.Client) *LogClient {
if hc == nil {
hc = new(http.Client)
}
return &LogClient{uri: uri, httpClient: hc}
}
// NewWithPubKey constructs a new LogClient instance that includes public
// key information for the log; this instance will check signatures on
// responses from the log.
func NewWithPubKey(uri string, hc *http.Client, pemEncodedKey string) (*LogClient, error) {
pubkey, _, rest, err := ct.PublicKeyFromPEM([]byte(pemEncodedKey))
if err != nil {
return nil, err
}
if len(rest) > 0 {
return nil, errors.New("extra data found after PEM key decoded")
}
verifier, err := ct.NewSignatureVerifier(pubkey)
if err != nil {
return nil, err
}
if hc == nil {
hc = new(http.Client)
}
return &LogClient{uri: uri, httpClient: hc, verifier: verifier}, nil
}
// Makes a HTTP call to |uri|, and attempts to parse the response as a
// JSON representation of the structure in |res|. Uses |ctx| to
// control the HTTP call (so it can have a timeout or be cancelled by
// the caller), and |httpClient| to make the actual HTTP call.
// Returns a non-nil |error| if there was a problem.
func fetchAndParse(ctx context.Context, httpClient *http.Client, uri string, res interface{}) error {
req, err := http.NewRequest(http.MethodGet, uri, nil)
if err != nil {
return err
}
req.Cancel = ctx.Done()
resp, err := httpClient.Do(req)
if err != nil {
return err
}
defer resp.Body.Close()
// Make sure everything is read, so http.Client can reuse the connection.
defer ioutil.ReadAll(resp.Body)
if resp.StatusCode != 200 {
return fmt.Errorf("got HTTP Status %s", resp.Status)
}
if err := json.NewDecoder(resp.Body).Decode(res); err != nil {
return err
}
return nil
}
// Makes a HTTP POST call to |uri|, and attempts to parse the response as a JSON
// representation of the structure in |res|.
// Returns a non-nil |error| if there was a problem.
func (c *LogClient) postAndParse(uri string, req interface{}, res interface{}) (*http.Response, string, error) {
postBody, err := json.Marshal(req)
if err != nil {
return nil, "", err
}
httpReq, err := http.NewRequest(http.MethodPost, uri, bytes.NewReader(postBody))
if err != nil {
return nil, "", err
}
httpReq.Header.Set("Content-Type", "application/json")
resp, err := c.httpClient.Do(httpReq)
// Read all of the body, if there is one, so that the http.Client can do
// Keep-Alive:
var body []byte
if resp != nil {
body, err = ioutil.ReadAll(resp.Body)
resp.Body.Close()
}
if err != nil {
return resp, string(body), err
}
if resp.StatusCode == 200 {
if err != nil {
return resp, string(body), err
}
if err = json.Unmarshal(body, &res); err != nil {
return resp, string(body), err
}
}
return resp, string(body), nil
}
func backoffForRetry(ctx context.Context, d time.Duration) error {
backoffTimer := time.NewTimer(d)
if ctx != nil {
select {
case <-ctx.Done():
return ctx.Err()
case <-backoffTimer.C:
}
} else {
<-backoffTimer.C
}
return nil
}
// Attempts to add |chain| to the log, using the api end-point specified by
// |path|. If provided context expires before submission is complete an
// error will be returned.
func (c *LogClient) addChainWithRetry(ctx context.Context, ctype ct.LogEntryType, path string, chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error) {
var resp addChainResponse
var req addChainRequest
for _, link := range chain {
req.Chain = append(req.Chain, link)
}
httpStatus := "Unknown"
backoffSeconds := 0
done := false
for !done {
if backoffSeconds > 0 {
log.Printf("Got %s, backing-off %d seconds", httpStatus, backoffSeconds)
}
err := backoffForRetry(ctx, time.Second*time.Duration(backoffSeconds))
if err != nil {
return nil, err
}
if backoffSeconds > 0 {
backoffSeconds = 0
}
httpResp, _, err := c.postAndParse(c.uri+path, &req, &resp)
if err != nil {
backoffSeconds = 10
continue
}
switch {
case httpResp.StatusCode == 200:
done = true
case httpResp.StatusCode == 408:
// request timeout, retry immediately
case httpResp.StatusCode == 503:
// Retry
backoffSeconds = 10
if retryAfter := httpResp.Header.Get("Retry-After"); retryAfter != "" {
if seconds, err := strconv.Atoi(retryAfter); err == nil {
backoffSeconds = seconds
}
}
default:
return nil, fmt.Errorf("got HTTP Status %s", httpResp.Status)
}
httpStatus = httpResp.Status
}
ds, err := ct.UnmarshalDigitallySigned(bytes.NewReader(resp.Signature))
if err != nil {
return nil, err
}
var logID ct.SHA256Hash
copy(logID[:], resp.ID)
sct := &ct.SignedCertificateTimestamp{
SCTVersion: resp.SCTVersion,
LogID: logID,
Timestamp: resp.Timestamp,
Extensions: ct.CTExtensions(resp.Extensions),
Signature: *ds}
err = c.VerifySCTSignature(*sct, ctype, chain)
if err != nil {
return nil, err
}
return sct, nil
}
// AddChain adds the (DER represented) X509 |chain| to the log.
func (c *LogClient) AddChain(chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error) {
return c.addChainWithRetry(nil, ct.X509LogEntryType, AddChainPath, chain)
}
// AddPreChain adds the (DER represented) Precertificate |chain| to the log.
func (c *LogClient) AddPreChain(chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error) {
return c.addChainWithRetry(nil, ct.PrecertLogEntryType, AddPreChainPath, chain)
}
// AddChainWithContext adds the (DER represented) X509 |chain| to the log and
// fails if the provided context expires before the chain is submitted.
func (c *LogClient) AddChainWithContext(ctx context.Context, chain []ct.ASN1Cert) (*ct.SignedCertificateTimestamp, error) {
return c.addChainWithRetry(ctx, ct.X509LogEntryType, AddChainPath, chain)
}
// AddJSON submits arbitrary data to the XJSON server.
func (c *LogClient) AddJSON(data interface{}) (*ct.SignedCertificateTimestamp, error) {
req := addJSONRequest{
Data: data,
}
var resp addChainResponse
_, _, err := c.postAndParse(c.uri+AddJSONPath, &req, &resp)
if err != nil {
return nil, err
}
ds, err := ct.UnmarshalDigitallySigned(bytes.NewReader(resp.Signature))
if err != nil {
return nil, err
}
var logID ct.SHA256Hash
copy(logID[:], resp.ID)
return &ct.SignedCertificateTimestamp{
SCTVersion: resp.SCTVersion,
LogID: logID,
Timestamp: resp.Timestamp,
Extensions: ct.CTExtensions(resp.Extensions),
Signature: *ds}, nil
}
// GetSTH retrieves the current STH from the log.
// Returns a populated SignedTreeHead, or a non-nil error.
func (c *LogClient) GetSTH() (sth *ct.SignedTreeHead, err error) {
var resp getSTHResponse
if err = fetchAndParse(context.TODO(), c.httpClient, c.uri+GetSTHPath, &resp); err != nil {
return
}
sth = &ct.SignedTreeHead{
TreeSize: resp.TreeSize,
Timestamp: resp.Timestamp,
}
if len(resp.SHA256RootHash) != sha256.Size {
return nil, fmt.Errorf("sha256_root_hash is invalid length, expected %d got %d", sha256.Size, len(resp.SHA256RootHash))
}
copy(sth.SHA256RootHash[:], resp.SHA256RootHash)
ds, err := ct.UnmarshalDigitallySigned(bytes.NewReader(resp.TreeHeadSignature))
if err != nil {
return nil, err
}
sth.TreeHeadSignature = *ds
err = c.VerifySTHSignature(*sth)
if err != nil {
return nil, err
}
return
}
// VerifySTHSignature checks the signature in sth, returning any error encountered or nil if verification is
// successful.
func (c *LogClient) VerifySTHSignature(sth ct.SignedTreeHead) error {
if c.verifier == nil {
// Can't verify signatures without a verifier
return nil
}
return c.verifier.VerifySTHSignature(sth)
}
// VerifySCTSignature checks the signature in sct for the given LogEntryType, with associated certificate chain.
func (c *LogClient) VerifySCTSignature(sct ct.SignedCertificateTimestamp, ctype ct.LogEntryType, certData []ct.ASN1Cert) error {
if c.verifier == nil {
// Can't verify signatures without a verifier
return nil
}
if ctype == ct.PrecertLogEntryType {
// TODO(drysdale): cope with pre-certs, which need to have the
// following fields set:
// leaf.PrecertEntry.TBSCertificate
// leaf.PrecertEntry.IssuerKeyHash (SHA-256 of issuer's public key)
return errors.New("SCT verification for pre-certificates unimplemented")
}
// Build enough of a Merkle tree leaf for the verifier to work on.
leaf := ct.MerkleTreeLeaf{
Version: sct.SCTVersion,
LeafType: ct.TimestampedEntryLeafType,
TimestampedEntry: ct.TimestampedEntry{
Timestamp: sct.Timestamp,
EntryType: ctype,
X509Entry: certData[0],
Extensions: sct.Extensions}}
entry := ct.LogEntry{Leaf: leaf}
return c.verifier.VerifySCTSignature(sct, entry)
}
// GetSTHConsistency retrieves the consistency proof between two snapshots.
func (c *LogClient) GetSTHConsistency(ctx context.Context, first, second uint64) ([][]byte, error) {
u := fmt.Sprintf("%s%s?first=%d&second=%d", c.uri, GetSTHConsistencyPath, first, second)
var resp getConsistencyProofResponse
if err := fetchAndParse(ctx, c.httpClient, u, &resp); err != nil {
return nil, err
}
return resp.Consistency, nil
}
// GetProofByHash returns an audit path for the hash of an SCT.
func (c *LogClient) GetProofByHash(ctx context.Context, hash []byte, treeSize uint64) (*GetProofByHashResponse, error) {
b64Hash := url.QueryEscape(base64.StdEncoding.EncodeToString(hash))
u := fmt.Sprintf("%s%s?tree_size=%d&hash=%v", c.uri, GetProofByHashPath, treeSize, b64Hash)
var resp GetProofByHashResponse
if err := fetchAndParse(ctx, c.httpClient, u, &resp); err != nil {
return nil, err
}
return &resp, nil
}

View file

@@ -1,691 +0,0 @@
package ct
import (
"bytes"
"container/list"
"crypto"
"encoding/asn1"
"encoding/binary"
"encoding/json"
"errors"
"fmt"
"io"
"strings"
)
// Variable size structure prefix-header byte lengths
const (
CertificateLengthBytes = 3
PreCertificateLengthBytes = 3
ExtensionsLengthBytes = 2
CertificateChainLengthBytes = 3
SignatureLengthBytes = 2
JSONLengthBytes = 3
)
// Max lengths
const (
MaxCertificateLength = (1 << 24) - 1
MaxExtensionsLength = (1 << 16) - 1
MaxSCTInListLength = (1 << 16) - 1
MaxSCTListLength = (1 << 16) - 1
)
func writeUint(w io.Writer, value uint64, numBytes int) error {
buf := make([]uint8, numBytes)
for i := 0; i < numBytes; i++ {
buf[numBytes-i-1] = uint8(value & 0xff)
value >>= 8
}
if value != 0 {
return errors.New("numBytes was insufficiently large to represent value")
}
if _, err := w.Write(buf); err != nil {
return err
}
return nil
}
func writeVarBytes(w io.Writer, value []byte, numLenBytes int) error {
if err := writeUint(w, uint64(len(value)), numLenBytes); err != nil {
return err
}
if _, err := w.Write(value); err != nil {
return err
}
return nil
}
func readUint(r io.Reader, numBytes int) (uint64, error) {
var l uint64
for i := 0; i < numBytes; i++ {
l <<= 8
var t uint8
if err := binary.Read(r, binary.BigEndian, &t); err != nil {
return 0, err
}
l |= uint64(t)
}
return l, nil
}
// Reads a variable length array of bytes from |r|. |numLenBytes| specifies the
// number of (BigEndian) prefix-bytes which contain the length of the actual
// array data bytes that follow.
// Allocates an array to hold the contents and returns a slice view into it if
// the read was successful, or an error otherwise.
func readVarBytes(r io.Reader, numLenBytes int) ([]byte, error) {
switch {
case numLenBytes > 8:
return nil, fmt.Errorf("numLenBytes too large (%d)", numLenBytes)
case numLenBytes == 0:
return nil, errors.New("numLenBytes should be > 0")
}
l, err := readUint(r, numLenBytes)
if err != nil {
return nil, err
}
data := make([]byte, l)
if n, err := io.ReadFull(r, data); err != nil {
if err == io.EOF || err == io.ErrUnexpectedEOF {
return nil, fmt.Errorf("short read: expected %d but got %d", l, n)
}
return nil, err
}
return data, nil
}
// Reads a list of ASN1Cert types from |r|
func readASN1CertList(r io.Reader, totalLenBytes int, elementLenBytes int) ([]ASN1Cert, error) {
listBytes, err := readVarBytes(r, totalLenBytes)
if err != nil {
return []ASN1Cert{}, err
}
list := list.New()
listReader := bytes.NewReader(listBytes)
var entry []byte
for err == nil {
entry, err = readVarBytes(listReader, elementLenBytes)
if err != nil {
if err != io.EOF {
return []ASN1Cert{}, err
}
} else {
list.PushBack(entry)
}
}
ret := make([]ASN1Cert, list.Len())
i := 0
for e := list.Front(); e != nil; e = e.Next() {
ret[i] = e.Value.([]byte)
i++
}
return ret, nil
}
// ReadTimestampedEntryInto parses the byte-stream representation of a
// TimestampedEntry from |r| and populates the struct |t| with the data. See
// RFC section 3.4 for details on the format.
// Returns a non-nil error if there was a problem.
func ReadTimestampedEntryInto(r io.Reader, t *TimestampedEntry) error {
var err error
if err = binary.Read(r, binary.BigEndian, &t.Timestamp); err != nil {
return err
}
if err = binary.Read(r, binary.BigEndian, &t.EntryType); err != nil {
return err
}
switch t.EntryType {
case X509LogEntryType:
if t.X509Entry, err = readVarBytes(r, CertificateLengthBytes); err != nil {
return err
}
case PrecertLogEntryType:
if err := binary.Read(r, binary.BigEndian, &t.PrecertEntry.IssuerKeyHash); err != nil {
return err
}
if t.PrecertEntry.TBSCertificate, err = readVarBytes(r, PreCertificateLengthBytes); err != nil {
return err
}
case XJSONLogEntryType:
if t.JSONData, err = readVarBytes(r, JSONLengthBytes); err != nil {
return err
}
default:
return fmt.Errorf("unknown EntryType: %d", t.EntryType)
}
t.Extensions, err = readVarBytes(r, ExtensionsLengthBytes)
return nil
}
// SerializeTimestampedEntry writes timestamped entry to Writer.
// In case of error, w may contain garbage.
func SerializeTimestampedEntry(w io.Writer, t *TimestampedEntry) error {
if err := binary.Write(w, binary.BigEndian, t.Timestamp); err != nil {
return err
}
if err := binary.Write(w, binary.BigEndian, t.EntryType); err != nil {
return err
}
switch t.EntryType {
case X509LogEntryType:
if err := writeVarBytes(w, t.X509Entry, CertificateLengthBytes); err != nil {
return err
}
case PrecertLogEntryType:
if err := binary.Write(w, binary.BigEndian, t.PrecertEntry.IssuerKeyHash); err != nil {
return err
}
if err := writeVarBytes(w, t.PrecertEntry.TBSCertificate, PreCertificateLengthBytes); err != nil {
return err
}
case XJSONLogEntryType:
// TODO: Pending google/certificate-transparency#1243, replace
// with ObjectHash once supported by CT server.
//jsonhash := objecthash.CommonJSONHash(string(t.JSONData))
if err := writeVarBytes(w, []byte(t.JSONData), JSONLengthBytes); err != nil {
return err
}
default:
return fmt.Errorf("unknown EntryType: %d", t.EntryType)
}
writeVarBytes(w, t.Extensions, ExtensionsLengthBytes)
return nil
}
// ReadMerkleTreeLeaf parses the byte-stream representation of a MerkleTreeLeaf
// and returns a pointer to a new MerkleTreeLeaf structure containing the
// parsed data.
// See RFC section 3.4 for details on the format.
// Returns a pointer to a new MerkleTreeLeaf or non-nil error if there was a
// problem
func ReadMerkleTreeLeaf(r io.Reader) (*MerkleTreeLeaf, error) {
var m MerkleTreeLeaf
if err := binary.Read(r, binary.BigEndian, &m.Version); err != nil {
return nil, err
}
if m.Version != V1 {
return nil, fmt.Errorf("unknown Version %d", m.Version)
}
if err := binary.Read(r, binary.BigEndian, &m.LeafType); err != nil {
return nil, err
}
if m.LeafType != TimestampedEntryLeafType {
return nil, fmt.Errorf("unknown LeafType %d", m.LeafType)
}
if err := ReadTimestampedEntryInto(r, &m.TimestampedEntry); err != nil {
return nil, err
}
return &m, nil
}
// UnmarshalX509ChainArray unmarshalls the contents of the "chain:" entry in a
// GetEntries response in the case where the entry refers to an X509 leaf.
func UnmarshalX509ChainArray(b []byte) ([]ASN1Cert, error) {
return readASN1CertList(bytes.NewReader(b), CertificateChainLengthBytes, CertificateLengthBytes)
}
// UnmarshalPrecertChainArray unmarshalls the contents of the "chain:" entry in
// a GetEntries response in the case where the entry refers to a Precertificate
// leaf.
func UnmarshalPrecertChainArray(b []byte) ([]ASN1Cert, error) {
var chain []ASN1Cert
reader := bytes.NewReader(b)
// read the pre-cert entry:
precert, err := readVarBytes(reader, CertificateLengthBytes)
if err != nil {
return chain, err
}
chain = append(chain, precert)
// and then read and return the chain up to the root:
remainingChain, err := readASN1CertList(reader, CertificateChainLengthBytes, CertificateLengthBytes)
if err != nil {
return chain, err
}
chain = append(chain, remainingChain...)
return chain, nil
}
// UnmarshalDigitallySigned reconstructs a DigitallySigned structure from a Reader
func UnmarshalDigitallySigned(r io.Reader) (*DigitallySigned, error) {
var h byte
if err := binary.Read(r, binary.BigEndian, &h); err != nil {
return nil, fmt.Errorf("failed to read HashAlgorithm: %v", err)
}
var s byte
if err := binary.Read(r, binary.BigEndian, &s); err != nil {
return nil, fmt.Errorf("failed to read SignatureAlgorithm: %v", err)
}
sig, err := readVarBytes(r, SignatureLengthBytes)
if err != nil {
return nil, fmt.Errorf("failed to read Signature bytes: %v", err)
}
return &DigitallySigned{
HashAlgorithm: HashAlgorithm(h),
SignatureAlgorithm: SignatureAlgorithm(s),
Signature: sig,
}, nil
}
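// marshalDigitallySignedHere writes ds as a one-byte hash algorithm, a
// one-byte signature algorithm and a length-prefixed signature, reusing the
// provided buffer when it is large enough.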
func marshalDigitallySignedHere(ds DigitallySigned, here []byte) ([]byte, error) {
sigLen := len(ds.Signature)
dsOutLen := 2 + SignatureLengthBytes + sigLen
if here == nil {
here = make([]byte, dsOutLen)
}
if len(here) < dsOutLen {
return nil, ErrNotEnoughBuffer
}
here = here[0:dsOutLen]
here[0] = byte(ds.HashAlgorithm)
here[1] = byte(ds.SignatureAlgorithm)
binary.BigEndian.PutUint16(here[2:4], uint16(sigLen))
copy(here[4:], ds.Signature)
return here, nil
}
// MarshalDigitallySigned marshalls a DigitallySigned structure into a byte array
func MarshalDigitallySigned(ds DigitallySigned) ([]byte, error) {
return marshalDigitallySignedHere(ds, nil)
}
func checkCertificateFormat(cert ASN1Cert) error {
if len(cert) == 0 {
return errors.New("certificate is zero length")
}
if len(cert) > MaxCertificateLength {
return errors.New("certificate too large")
}
return nil
}
func checkExtensionsFormat(ext CTExtensions) error {
if len(ext) > MaxExtensionsLength {
return errors.New("extensions too large")
}
return nil
}
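// serializeV1CertSCTSignatureInput builds the digitally-signed input for an
// X.509 entry SCT, as defined in RFC6962 section 3.2.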
func serializeV1CertSCTSignatureInput(timestamp uint64, cert ASN1Cert, ext CTExtensions) ([]byte, error) {
if err := checkCertificateFormat(cert); err != nil {
return nil, err
}
if err := checkExtensionsFormat(ext); err != nil {
return nil, err
}
var buf bytes.Buffer
if err := binary.Write(&buf, binary.BigEndian, V1); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, CertificateTimestampSignatureType); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, timestamp); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, X509LogEntryType); err != nil {
return nil, err
}
if err := writeVarBytes(&buf, cert, CertificateLengthBytes); err != nil {
return nil, err
}
if err := writeVarBytes(&buf, ext, ExtensionsLengthBytes); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
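// serializeV1JSONSCTSignatureInput builds the digitally-signed input for an
// experimental XJSON entry SCT; the extensions field is always empty.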
func serializeV1JSONSCTSignatureInput(timestamp uint64, j []byte) ([]byte, error) {
var buf bytes.Buffer
if err := binary.Write(&buf, binary.BigEndian, V1); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, CertificateTimestampSignatureType); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, timestamp); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, XJSONLogEntryType); err != nil {
return nil, err
}
if err := writeVarBytes(&buf, j, JSONLengthBytes); err != nil {
return nil, err
}
if err := writeVarBytes(&buf, nil, ExtensionsLengthBytes); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
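// serializeV1PrecertSCTSignatureInput builds the digitally-signed input for a
// precertificate entry SCT: the issuer key hash followed by the
// TBSCertificate, as defined in RFC6962 section 3.2.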
func serializeV1PrecertSCTSignatureInput(timestamp uint64, issuerKeyHash [issuerKeyHashLength]byte, tbs []byte, ext CTExtensions) ([]byte, error) {
if err := checkCertificateFormat(tbs); err != nil {
return nil, err
}
if err := checkExtensionsFormat(ext); err != nil {
return nil, err
}
var buf bytes.Buffer
if err := binary.Write(&buf, binary.BigEndian, V1); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, CertificateTimestampSignatureType); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, timestamp); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, PrecertLogEntryType); err != nil {
return nil, err
}
if _, err := buf.Write(issuerKeyHash[:]); err != nil {
return nil, err
}
if err := writeVarBytes(&buf, tbs, CertificateLengthBytes); err != nil {
return nil, err
}
if err := writeVarBytes(&buf, ext, ExtensionsLengthBytes); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
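// serializeV1SCTSignatureInput checks that sct and entry describe a V1
// timestamped entry and dispatches on the entry type to build the
// corresponding signature input.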
func serializeV1SCTSignatureInput(sct SignedCertificateTimestamp, entry LogEntry) ([]byte, error) {
if sct.SCTVersion != V1 {
return nil, fmt.Errorf("unsupported SCT version, expected V1, but got %s", sct.SCTVersion)
}
if entry.Leaf.LeafType != TimestampedEntryLeafType {
return nil, fmt.Errorf("Unsupported leaf type %s", entry.Leaf.LeafType)
}
switch entry.Leaf.TimestampedEntry.EntryType {
case X509LogEntryType:
return serializeV1CertSCTSignatureInput(sct.Timestamp, entry.Leaf.TimestampedEntry.X509Entry, entry.Leaf.TimestampedEntry.Extensions)
case PrecertLogEntryType:
return serializeV1PrecertSCTSignatureInput(sct.Timestamp, entry.Leaf.TimestampedEntry.PrecertEntry.IssuerKeyHash,
entry.Leaf.TimestampedEntry.PrecertEntry.TBSCertificate,
entry.Leaf.TimestampedEntry.Extensions)
case XJSONLogEntryType:
return serializeV1JSONSCTSignatureInput(sct.Timestamp, entry.Leaf.TimestampedEntry.JSONData)
default:
return nil, fmt.Errorf("unknown TimestampedEntryLeafType %s", entry.Leaf.TimestampedEntry.EntryType)
}
}
// SerializeSCTSignatureInput serializes the passed in sct and log entry into
// the correct format for signing.
func SerializeSCTSignatureInput(sct SignedCertificateTimestamp, entry LogEntry) ([]byte, error) {
switch sct.SCTVersion {
case V1:
return serializeV1SCTSignatureInput(sct, entry)
default:
return nil, fmt.Errorf("unknown SCT version %d", sct.SCTVersion)
}
}
// SerializedLength will return the space (in bytes) needed to serialize the SCT.
func (sct SignedCertificateTimestamp) SerializedLength() (int, error) {
switch sct.SCTVersion {
case V1:
extLen := len(sct.Extensions)
sigLen := len(sct.Signature.Signature)
return 1 + 32 + 8 + 2 + extLen + 2 + 2 + sigLen, nil
default:
return 0, ErrInvalidVersion
}
}
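// serializeV1SCTHere writes the V1 SCT fields (version, LogID, timestamp,
// extensions and signature) into here, allocating a buffer when here is nil.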
func serializeV1SCTHere(sct SignedCertificateTimestamp, here []byte) ([]byte, error) {
if sct.SCTVersion != V1 {
return nil, ErrInvalidVersion
}
sctLen, err := sct.SerializedLength()
if err != nil {
return nil, err
}
if here == nil {
here = make([]byte, sctLen)
}
if len(here) < sctLen {
return nil, ErrNotEnoughBuffer
}
if err := checkExtensionsFormat(sct.Extensions); err != nil {
return nil, err
}
here = here[0:sctLen]
// Write Version
here[0] = byte(sct.SCTVersion)
// Write LogID
copy(here[1:33], sct.LogID[:])
// Write Timestamp
binary.BigEndian.PutUint64(here[33:41], sct.Timestamp)
// Write Extensions
extLen := len(sct.Extensions)
binary.BigEndian.PutUint16(here[41:43], uint16(extLen))
n := 43 + extLen
copy(here[43:n], sct.Extensions)
// Write Signature
_, err = marshalDigitallySignedHere(sct.Signature, here[n:])
if err != nil {
return nil, err
}
return here, nil
}
// SerializeSCTHere serializes the passed in sct into the format specified
// by RFC6962 section 3.2.
// If a byte slice here is provided then it will attempt to serialize into the
// provided byte slice; ErrNotEnoughBuffer will be returned if the buffer is
// too small.
// If a nil byte slice is provided, a buffer will be allocated for you.
// The returned slice will be sliced to the correct length.
func SerializeSCTHere(sct SignedCertificateTimestamp, here []byte) ([]byte, error) {
switch sct.SCTVersion {
case V1:
return serializeV1SCTHere(sct, here)
default:
return nil, fmt.Errorf("unknown SCT version %d", sct.SCTVersion)
}
}
// SerializeSCT serializes the passed in sct into the format specified
// by RFC6962 section 3.2
// Equivalent to SerializeSCTHere(sct, nil)
func SerializeSCT(sct SignedCertificateTimestamp) ([]byte, error) {
return SerializeSCTHere(sct, nil)
}
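// deserializeSCTV1 reads the remaining V1 SCT fields (LogID, timestamp,
// extensions and signature) from r; the version byte has already been
// consumed by the caller.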
func deserializeSCTV1(r io.Reader, sct *SignedCertificateTimestamp) error {
if err := binary.Read(r, binary.BigEndian, &sct.LogID); err != nil {
return err
}
if err := binary.Read(r, binary.BigEndian, &sct.Timestamp); err != nil {
return err
}
ext, err := readVarBytes(r, ExtensionsLengthBytes)
if err != nil {
return err
}
sct.Extensions = ext
ds, err := UnmarshalDigitallySigned(r)
if err != nil {
return err
}
sct.Signature = *ds
return nil
}
// DeserializeSCT reads an SCT from Reader.
func DeserializeSCT(r io.Reader) (*SignedCertificateTimestamp, error) {
var sct SignedCertificateTimestamp
if err := binary.Read(r, binary.BigEndian, &sct.SCTVersion); err != nil {
return nil, err
}
switch sct.SCTVersion {
case V1:
return &sct, deserializeSCTV1(r, &sct)
default:
return nil, fmt.Errorf("unknown SCT version %d", sct.SCTVersion)
}
}
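// serializeV1STHSignatureInput builds the TreeHeadSignature input (version,
// signature type, timestamp, tree size and root hash) defined in RFC
// section 3.5.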
func serializeV1STHSignatureInput(sth SignedTreeHead) ([]byte, error) {
if sth.Version != V1 {
return nil, fmt.Errorf("invalid STH version %d", sth.Version)
}
if sth.TreeSize < 0 {
return nil, fmt.Errorf("invalid tree size %d", sth.TreeSize)
}
if len(sth.SHA256RootHash) != crypto.SHA256.Size() {
return nil, fmt.Errorf("invalid TreeHash length, got %d expected %d", len(sth.SHA256RootHash), crypto.SHA256.Size())
}
var buf bytes.Buffer
if err := binary.Write(&buf, binary.BigEndian, V1); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, TreeHashSignatureType); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, sth.Timestamp); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, sth.TreeSize); err != nil {
return nil, err
}
if err := binary.Write(&buf, binary.BigEndian, sth.SHA256RootHash); err != nil {
return nil, err
}
return buf.Bytes(), nil
}
// SerializeSTHSignatureInput serializes the passed in sth into the correct
// format for signing.
func SerializeSTHSignatureInput(sth SignedTreeHead) ([]byte, error) {
switch sth.Version {
case V1:
return serializeV1STHSignatureInput(sth)
default:
return nil, fmt.Errorf("unsupported STH version %d", sth.Version)
}
}
// SCTListSerializedLength determines the length of the required buffer should a SCT List need to be serialized
func SCTListSerializedLength(scts []SignedCertificateTimestamp) (int, error) {
if len(scts) == 0 {
return 0, fmt.Errorf("SCT List empty")
}
sctListLen := 0
for i, sct := range scts {
n, err := sct.SerializedLength()
if err != nil {
return 0, fmt.Errorf("unable to determine length of SCT in position %d: %v", i, err)
}
if n > MaxSCTInListLength {
return 0, fmt.Errorf("SCT in position %d too large: %d", i, n)
}
sctListLen += 2 + n
}
return sctListLen, nil
}
// SerializeSCTList serializes the passed-in slice of SignedCertificateTimestamp into a
// byte slice as a SignedCertificateTimestampList (see RFC6962 Section 3.3)
func SerializeSCTList(scts []SignedCertificateTimestamp) ([]byte, error) {
size, err := SCTListSerializedLength(scts)
if err != nil {
return nil, err
}
fullSize := 2 + size // 2 bytes for length + size of SCT list
if fullSize > MaxSCTListLength {
return nil, fmt.Errorf("SCT List too large to serialize: %d", fullSize)
}
buf := new(bytes.Buffer)
buf.Grow(fullSize)
if err = writeUint(buf, uint64(size), 2); err != nil {
return nil, err
}
for _, sct := range scts {
serialized, err := SerializeSCT(sct)
if err != nil {
return nil, err
}
if err = writeVarBytes(buf, serialized, 2); err != nil {
return nil, err
}
}
return asn1.Marshal(buf.Bytes()) // transform to Octet String
}
// SerializeMerkleTreeLeaf writes MerkleTreeLeaf to Writer.
// In case of error, w may contain garbage.
func SerializeMerkleTreeLeaf(w io.Writer, m *MerkleTreeLeaf) error {
if m.Version != V1 {
return fmt.Errorf("unknown Version %d", m.Version)
}
if err := binary.Write(w, binary.BigEndian, m.Version); err != nil {
return err
}
if m.LeafType != TimestampedEntryLeafType {
return fmt.Errorf("unknown LeafType %d", m.LeafType)
}
if err := binary.Write(w, binary.BigEndian, m.LeafType); err != nil {
return err
}
if err := SerializeTimestampedEntry(w, &m.TimestampedEntry); err != nil {
return err
}
return nil
}
// CreateX509MerkleTreeLeaf generates a MerkleTreeLeaf for an X509 cert
func CreateX509MerkleTreeLeaf(cert ASN1Cert, timestamp uint64) *MerkleTreeLeaf {
return &MerkleTreeLeaf{
Version: V1,
LeafType: TimestampedEntryLeafType,
TimestampedEntry: TimestampedEntry{
Timestamp: timestamp,
EntryType: X509LogEntryType,
X509Entry: cert,
},
}
}
// CreateJSONMerkleTreeLeaf creates the merkle tree leaf for json data.
func CreateJSONMerkleTreeLeaf(data interface{}, timestamp uint64) *MerkleTreeLeaf {
jsonData, err := json.Marshal(AddJSONRequest{Data: data})
if err != nil {
return nil
}
// Match the JSON serialization implemented by json-c
jsonStr := strings.Replace(string(jsonData), ":", ": ", -1)
jsonStr = strings.Replace(jsonStr, ",", ", ", -1)
jsonStr = strings.Replace(jsonStr, "{", "{ ", -1)
jsonStr = strings.Replace(jsonStr, "}", " }", -1)
jsonStr = strings.Replace(jsonStr, "/", `\/`, -1)
// TODO: Pending google/certificate-transparency#1243, replace with
// ObjectHash once supported by CT server.
return &MerkleTreeLeaf{
Version: V1,
LeafType: TimestampedEntryLeafType,
TimestampedEntry: TimestampedEntry{
Timestamp: timestamp,
EntryType: XJSONLogEntryType,
JSONData: []byte(jsonStr),
},
}
}

View file

@ -1,374 +0,0 @@
package ct
import (
"bytes"
"crypto/sha256"
"encoding/base64"
"encoding/json"
"fmt"
"github.com/google/certificate-transparency/go/x509"
)
const (
issuerKeyHashLength = 32
)
///////////////////////////////////////////////////////////////////////////////
// The following structures represent those outlined in the RFC6962 document:
///////////////////////////////////////////////////////////////////////////////
// LogEntryType represents the LogEntryType enum from section 3.1 of the RFC:
// enum { x509_entry(0), precert_entry(1), (65535) } LogEntryType;
type LogEntryType uint16
func (e LogEntryType) String() string {
switch e {
case X509LogEntryType:
return "X509LogEntryType"
case PrecertLogEntryType:
return "PrecertLogEntryType"
case XJSONLogEntryType:
return "XJSONLogEntryType"
}
panic(fmt.Sprintf("No string defined for LogEntryType constant value %d", e))
}
// LogEntryType constants, see section 3.1 of RFC6962.
const (
X509LogEntryType LogEntryType = 0
PrecertLogEntryType LogEntryType = 1
XJSONLogEntryType LogEntryType = 0x8000 // Experimental. Don't rely on this!
)
// MerkleLeafType represents the MerkleLeafType enum from section 3.4 of the
// RFC: enum { timestamped_entry(0), (255) } MerkleLeafType;
type MerkleLeafType uint8
func (m MerkleLeafType) String() string {
switch m {
case TimestampedEntryLeafType:
return "TimestampedEntryLeafType"
default:
return fmt.Sprintf("UnknownLeafType(%d)", m)
}
}
// MerkleLeafType constants, see section 3.4 of the RFC.
const (
TimestampedEntryLeafType MerkleLeafType = 0 // Entry type for an SCT
)
// Version represents the Version enum from section 3.2 of the RFC:
// enum { v1(0), (255) } Version;
type Version uint8
func (v Version) String() string {
switch v {
case V1:
return "V1"
default:
return fmt.Sprintf("UnknownVersion(%d)", v)
}
}
// CT Version constants, see section 3.2 of the RFC.
const (
V1 Version = 0
)
// SignatureType differentiates STH signatures from SCT signatures, see RFC
// section 3.2
type SignatureType uint8
func (st SignatureType) String() string {
switch st {
case CertificateTimestampSignatureType:
return "CertificateTimestamp"
case TreeHashSignatureType:
return "TreeHash"
default:
return fmt.Sprintf("UnknownSignatureType(%d)", st)
}
}
// SignatureType constants, see RFC section 3.2
const (
CertificateTimestampSignatureType SignatureType = 0
TreeHashSignatureType SignatureType = 1
)
// ASN1Cert type for holding the raw DER bytes of an ASN.1 Certificate
// (section 3.1)
type ASN1Cert []byte
// PreCert represents a Precertificate (section 3.2)
type PreCert struct {
IssuerKeyHash [issuerKeyHashLength]byte
TBSCertificate []byte
}
// CTExtensions is a representation of the raw bytes of any CtExtension
// structure (see section 3.2)
type CTExtensions []byte
// MerkleTreeNode represents an internal node in the CT tree
type MerkleTreeNode []byte
// ConsistencyProof represents a CT consistency proof (see sections 2.1.2 and
// 4.4)
type ConsistencyProof []MerkleTreeNode
// AuditPath represents a CT inclusion proof (see sections 2.1.1 and 4.5)
type AuditPath []MerkleTreeNode
// LeafInput represents a serialized MerkleTreeLeaf structure
type LeafInput []byte
// HashAlgorithm from the DigitallySigned struct
type HashAlgorithm byte
// HashAlgorithm constants
const (
None HashAlgorithm = 0
MD5 HashAlgorithm = 1
SHA1 HashAlgorithm = 2
SHA224 HashAlgorithm = 3
SHA256 HashAlgorithm = 4
SHA384 HashAlgorithm = 5
SHA512 HashAlgorithm = 6
)
func (h HashAlgorithm) String() string {
switch h {
case None:
return "None"
case MD5:
return "MD5"
case SHA1:
return "SHA1"
case SHA224:
return "SHA224"
case SHA256:
return "SHA256"
case SHA384:
return "SHA384"
case SHA512:
return "SHA512"
default:
return fmt.Sprintf("UNKNOWN(%d)", h)
}
}
// SignatureAlgorithm from the DigitallySigned struct
type SignatureAlgorithm byte
// SignatureAlgorithm constants
const (
Anonymous SignatureAlgorithm = 0
RSA SignatureAlgorithm = 1
DSA SignatureAlgorithm = 2
ECDSA SignatureAlgorithm = 3
)
func (s SignatureAlgorithm) String() string {
switch s {
case Anonymous:
return "Anonymous"
case RSA:
return "RSA"
case DSA:
return "DSA"
case ECDSA:
return "ECDSA"
default:
return fmt.Sprintf("UNKNOWN(%d)", s)
}
}
// DigitallySigned represents an RFC5246 DigitallySigned structure
type DigitallySigned struct {
HashAlgorithm HashAlgorithm
SignatureAlgorithm SignatureAlgorithm
Signature []byte
}
// FromBase64String populates the DigitallySigned structure from the base64 data passed in.
// Returns an error if the base64 data is invalid.
func (d *DigitallySigned) FromBase64String(b64 string) error {
raw, err := base64.StdEncoding.DecodeString(b64)
if err != nil {
return fmt.Errorf("failed to unbase64 DigitallySigned: %v", err)
}
ds, err := UnmarshalDigitallySigned(bytes.NewReader(raw))
if err != nil {
return fmt.Errorf("failed to unmarshal DigitallySigned: %v", err)
}
*d = *ds
return nil
}
// Base64String returns the base64 representation of the DigitallySigned struct.
func (d DigitallySigned) Base64String() (string, error) {
b, err := MarshalDigitallySigned(d)
if err != nil {
return "", err
}
return base64.StdEncoding.EncodeToString(b), nil
}
// MarshalJSON implements the json.Marshaller interface.
func (d DigitallySigned) MarshalJSON() ([]byte, error) {
b64, err := d.Base64String()
if err != nil {
return []byte{}, err
}
return []byte(`"` + b64 + `"`), nil
}
// UnmarshalJSON implements the json.Unmarshaler interface.
func (d *DigitallySigned) UnmarshalJSON(b []byte) error {
var content string
if err := json.Unmarshal(b, &content); err != nil {
return fmt.Errorf("failed to unmarshal DigitallySigned: %v", err)
}
return d.FromBase64String(content)
}
// LogEntry represents the contents of an entry in a CT log, see section 3.1.
type LogEntry struct {
Index int64
Leaf MerkleTreeLeaf
X509Cert *x509.Certificate
Precert *Precertificate
JSONData []byte
Chain []ASN1Cert
}
// SHA256Hash represents the output from the SHA256 hash function.
type SHA256Hash [sha256.Size]byte
// FromBase64String populates the SHA256Hash struct with the contents of the base64 data passed in.
func (s *SHA256Hash) FromBase64String(b64 string) error {
bs, err := base64.StdEncoding.DecodeString(b64)
if err != nil {
return fmt.Errorf("failed to unbase64 LogID: %v", err)
}
if len(bs) != sha256.Size {
return fmt.Errorf("invalid SHA256 length, expected 32 but got %d", len(bs))
}
copy(s[:], bs)
return nil
}
// Base64String returns the base64 representation of this SHA256Hash.
func (s SHA256Hash) Base64String() string {
return base64.StdEncoding.EncodeToString(s[:])
}
// MarshalJSON implements the json.Marshaller interface for SHA256Hash.
func (s SHA256Hash) MarshalJSON() ([]byte, error) {
return []byte(`"` + s.Base64String() + `"`), nil
}
// UnmarshalJSON implements the json.Unmarshaller interface.
func (s *SHA256Hash) UnmarshalJSON(b []byte) error {
var content string
if err := json.Unmarshal(b, &content); err != nil {
return fmt.Errorf("failed to unmarshal SHA256Hash: %v", err)
}
return s.FromBase64String(content)
}
// SignedTreeHead represents the structure returned by the get-sth CT method
// after base64 decoding. (See sections 3.5 and 4.3 in the RFC.)
type SignedTreeHead struct {
Version Version `json:"sth_version"` // The version of the protocol to which the STH conforms
TreeSize uint64 `json:"tree_size"` // The number of entries in the new tree
Timestamp uint64 `json:"timestamp"` // The time at which the STH was created
SHA256RootHash SHA256Hash `json:"sha256_root_hash"` // The root hash of the log's Merkle tree
TreeHeadSignature DigitallySigned `json:"tree_head_signature"` // The Log's signature for this STH (see RFC section 3.5)
LogID SHA256Hash `json:"log_id"` // The SHA256 hash of the log's public key
}
// SignedCertificateTimestamp represents the structure returned by the
// add-chain and add-pre-chain methods after base64 decoding. (see RFC sections
// 3.2, 4.1 and 4.2)
type SignedCertificateTimestamp struct {
SCTVersion Version // The version of the protocol to which the SCT conforms
LogID SHA256Hash // the SHA-256 hash of the log's public key, calculated over
// the DER encoding of the key represented as SubjectPublicKeyInfo.
Timestamp uint64 // Timestamp (in ms since unix epoch) at which the SCT was issued
Extensions CTExtensions // For future extensions to the protocol
Signature DigitallySigned // The Log's signature for this SCT
}
func (s SignedCertificateTimestamp) String() string {
return fmt.Sprintf("{Version:%d LogId:%s Timestamp:%d Extensions:'%s' Signature:%v}", s.SCTVersion,
base64.StdEncoding.EncodeToString(s.LogID[:]),
s.Timestamp,
s.Extensions,
s.Signature)
}
// TimestampedEntry is part of the MerkleTreeLeaf structure.
// See RFC section 3.4
type TimestampedEntry struct {
Timestamp uint64
EntryType LogEntryType
X509Entry ASN1Cert
JSONData []byte
PrecertEntry PreCert
Extensions CTExtensions
}
// MerkleTreeLeaf represents the deserialized structure of the hash input for the
// leaves of a log's Merkle tree. See RFC section 3.4
type MerkleTreeLeaf struct {
Version Version // the version of the protocol to which the MerkleTreeLeaf corresponds
LeafType MerkleLeafType // The type of the leaf input, currently only TimestampedEntry can exist
TimestampedEntry TimestampedEntry // The entry data itself
}
// Precertificate represents the parsed CT Precertificate structure.
type Precertificate struct {
// Raw DER bytes of the precert
Raw []byte
// SHA256 hash of the issuing key
IssuerKeyHash [issuerKeyHashLength]byte
// Parsed TBSCertificate structure (held in an x509.Certificate for ease of
// access).
TBSCertificate x509.Certificate
}
// X509Certificate returns the X.509 Certificate contained within the
// MerkleTreeLeaf.
// Returns a pointer to an x509.Certificate or a non-nil error.
func (m *MerkleTreeLeaf) X509Certificate() (*x509.Certificate, error) {
return x509.ParseCertificate(m.TimestampedEntry.X509Entry)
}
type sctError int
// Preallocate errors for performance
var (
ErrInvalidVersion error = sctError(1)
ErrNotEnoughBuffer error = sctError(2)
)
func (e sctError) Error() string {
switch e {
case ErrInvalidVersion:
return "invalid SCT version detected"
case ErrNotEnoughBuffer:
return "provided buffer was too small"
default:
return "unknown error"
}
}
// AddJSONRequest represents the JSON request body sent to the add-json CT
// method.
type AddJSONRequest struct {
Data interface{} `json:"data"`
}

View file

@ -1,56 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package x509
import (
// START CT CHANGES
"github.com/google/certificate-transparency/go/asn1"
"github.com/google/certificate-transparency/go/x509/pkix"
// END CT CHANGES
"errors"
"fmt"
)
// pkcs8 reflects an ASN.1, PKCS#8 PrivateKey. See
// ftp://ftp.rsasecurity.com/pub/pkcs/pkcs-8/pkcs-8v1_2.asn
// and RFC5208.
type pkcs8 struct {
Version int
Algo pkix.AlgorithmIdentifier
PrivateKey []byte
// optional attributes omitted.
}
// ParsePKCS8PrivateKey parses an unencrypted, PKCS#8 private key. See
// http://www.rsa.com/rsalabs/node.asp?id=2130 and RFC5208.
func ParsePKCS8PrivateKey(der []byte) (key interface{}, err error) {
var privKey pkcs8
if _, err := asn1.Unmarshal(der, &privKey); err != nil {
return nil, err
}
switch {
case privKey.Algo.Algorithm.Equal(oidPublicKeyRSA):
key, err = ParsePKCS1PrivateKey(privKey.PrivateKey)
if err != nil {
return nil, errors.New("x509: failed to parse RSA private key embedded in PKCS#8: " + err.Error())
}
return key, nil
case privKey.Algo.Algorithm.Equal(oidPublicKeyECDSA):
bytes := privKey.Algo.Parameters.FullBytes
namedCurveOID := new(asn1.ObjectIdentifier)
if _, err := asn1.Unmarshal(bytes, namedCurveOID); err != nil {
namedCurveOID = nil
}
key, err = parseECPrivateKey(namedCurveOID, privKey.PrivateKey)
if err != nil {
return nil, errors.New("x509: failed to parse EC private key embedded in PKCS#8: " + err.Error())
}
return key, nil
default:
return nil, fmt.Errorf("x509: PKCS#8 wrapping contained private key with unknown algorithm: %v", privKey.Algo.Algorithm)
}
}

View file

@ -1,173 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package pkix contains shared, low level structures used for ASN.1 parsing
// and serialization of X.509 certificates, CRL and OCSP.
package pkix
import (
// START CT CHANGES
"github.com/google/certificate-transparency/go/asn1"
// END CT CHANGES
"math/big"
"time"
)
// AlgorithmIdentifier represents the ASN.1 structure of the same name. See RFC
// 5280, section 4.1.1.2.
type AlgorithmIdentifier struct {
Algorithm asn1.ObjectIdentifier
Parameters asn1.RawValue `asn1:"optional"`
}
type RDNSequence []RelativeDistinguishedNameSET
type RelativeDistinguishedNameSET []AttributeTypeAndValue
// AttributeTypeAndValue mirrors the ASN.1 structure of the same name in
// http://tools.ietf.org/html/rfc5280#section-4.1.2.4
type AttributeTypeAndValue struct {
Type asn1.ObjectIdentifier
Value interface{}
}
// Extension represents the ASN.1 structure of the same name. See RFC
// 5280, section 4.2.
type Extension struct {
Id asn1.ObjectIdentifier
Critical bool `asn1:"optional"`
Value []byte
}
// Name represents an X.509 distinguished name. This only includes the common
// elements of a DN. Additional elements in the name are ignored.
type Name struct {
Country, Organization, OrganizationalUnit []string
Locality, Province []string
StreetAddress, PostalCode []string
SerialNumber, CommonName string
Names []AttributeTypeAndValue
}
func (n *Name) FillFromRDNSequence(rdns *RDNSequence) {
for _, rdn := range *rdns {
if len(rdn) == 0 {
continue
}
atv := rdn[0]
n.Names = append(n.Names, atv)
value, ok := atv.Value.(string)
if !ok {
continue
}
t := atv.Type
if len(t) == 4 && t[0] == 2 && t[1] == 5 && t[2] == 4 {
switch t[3] {
case 3:
n.CommonName = value
case 5:
n.SerialNumber = value
case 6:
n.Country = append(n.Country, value)
case 7:
n.Locality = append(n.Locality, value)
case 8:
n.Province = append(n.Province, value)
case 9:
n.StreetAddress = append(n.StreetAddress, value)
case 10:
n.Organization = append(n.Organization, value)
case 11:
n.OrganizationalUnit = append(n.OrganizationalUnit, value)
case 17:
n.PostalCode = append(n.PostalCode, value)
}
}
}
}
var (
oidCountry = []int{2, 5, 4, 6}
oidOrganization = []int{2, 5, 4, 10}
oidOrganizationalUnit = []int{2, 5, 4, 11}
oidCommonName = []int{2, 5, 4, 3}
oidSerialNumber = []int{2, 5, 4, 5}
oidLocality = []int{2, 5, 4, 7}
oidProvince = []int{2, 5, 4, 8}
oidStreetAddress = []int{2, 5, 4, 9}
oidPostalCode = []int{2, 5, 4, 17}
)
// appendRDNs appends a relativeDistinguishedNameSET to the given RDNSequence
// and returns the new value. The relativeDistinguishedNameSET contains an
// attributeTypeAndValue for each of the given values. See RFC 5280, A.1, and
// search for AttributeTypeAndValue.
func appendRDNs(in RDNSequence, values []string, oid asn1.ObjectIdentifier) RDNSequence {
if len(values) == 0 {
return in
}
s := make([]AttributeTypeAndValue, len(values))
for i, value := range values {
s[i].Type = oid
s[i].Value = value
}
return append(in, s)
}
func (n Name) ToRDNSequence() (ret RDNSequence) {
ret = appendRDNs(ret, n.Country, oidCountry)
ret = appendRDNs(ret, n.Organization, oidOrganization)
ret = appendRDNs(ret, n.OrganizationalUnit, oidOrganizationalUnit)
ret = appendRDNs(ret, n.Locality, oidLocality)
ret = appendRDNs(ret, n.Province, oidProvince)
ret = appendRDNs(ret, n.StreetAddress, oidStreetAddress)
ret = appendRDNs(ret, n.PostalCode, oidPostalCode)
if len(n.CommonName) > 0 {
ret = appendRDNs(ret, []string{n.CommonName}, oidCommonName)
}
if len(n.SerialNumber) > 0 {
ret = appendRDNs(ret, []string{n.SerialNumber}, oidSerialNumber)
}
return ret
}
// CertificateList represents the ASN.1 structure of the same name. See RFC
// 5280, section 5.1. Use Certificate.CheckCRLSignature to verify the
// signature.
type CertificateList struct {
TBSCertList TBSCertificateList
SignatureAlgorithm AlgorithmIdentifier
SignatureValue asn1.BitString
}
// HasExpired reports whether now is past the expiry time of certList.
func (certList *CertificateList) HasExpired(now time.Time) bool {
return now.After(certList.TBSCertList.NextUpdate)
}
// TBSCertificateList represents the ASN.1 structure of the same name. See RFC
// 5280, section 5.1.
type TBSCertificateList struct {
Raw asn1.RawContent
Version int `asn1:"optional,default:2"`
Signature AlgorithmIdentifier
Issuer RDNSequence
ThisUpdate time.Time
NextUpdate time.Time
RevokedCertificates []RevokedCertificate `asn1:"optional"`
Extensions []Extension `asn1:"tag:0,optional,explicit"`
}
// RevokedCertificate represents the ASN.1 structure of the same name. See RFC
// 5280, section 5.1.
type RevokedCertificate struct {
SerialNumber *big.Int
RevocationTime time.Time
Extensions []Extension `asn1:"optional"`
}

View file

@ -1,83 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin,cgo
package x509
/*
#cgo CFLAGS: -mmacosx-version-min=10.6 -D__MAC_OS_X_VERSION_MAX_ALLOWED=1060
#cgo LDFLAGS: -framework CoreFoundation -framework Security
#include <CoreFoundation/CoreFoundation.h>
#include <Security/Security.h>
// FetchPEMRootsCTX509 fetches the system's list of trusted X.509 root certificates.
//
// On success it returns 0 and fills pemRoots with a CFDataRef that contains the extracted root
// certificates of the system. On failure, the function returns -1.
//
// Note: The CFDataRef returned in pemRoots must be released (using CFRelease) after
// we've consumed its content.
int FetchPEMRootsCTX509(CFDataRef *pemRoots) {
if (pemRoots == NULL) {
return -1;
}
CFArrayRef certs = NULL;
OSStatus err = SecTrustCopyAnchorCertificates(&certs);
if (err != noErr) {
return -1;
}
CFMutableDataRef combinedData = CFDataCreateMutable(kCFAllocatorDefault, 0);
int i, ncerts = CFArrayGetCount(certs);
for (i = 0; i < ncerts; i++) {
CFDataRef data = NULL;
SecCertificateRef cert = (SecCertificateRef)CFArrayGetValueAtIndex(certs, i);
if (cert == NULL) {
continue;
}
// Note: SecKeychainItemExport is deprecated as of 10.7 in favor of SecItemExport.
// Once we support weak imports via cgo we should prefer that, and fall back to this
// for older systems.
err = SecKeychainItemExport(cert, kSecFormatX509Cert, kSecItemPemArmour, NULL, &data);
if (err != noErr) {
continue;
}
if (data != NULL) {
CFDataAppendBytes(combinedData, CFDataGetBytePtr(data), CFDataGetLength(data));
CFRelease(data);
}
}
CFRelease(certs);
*pemRoots = combinedData;
return 0;
}
*/
import "C"
import "unsafe"
func (c *Certificate) systemVerify(opts *VerifyOptions) (chains [][]*Certificate, err error) {
return nil, nil
}
func initSystemRoots() {
roots := NewCertPool()
var data C.CFDataRef = nil
err := C.FetchPEMRootsCTX509(&data)
if err == -1 {
return
}
defer C.CFRelease(C.CFTypeRef(data))
buf := C.GoBytes(unsafe.Pointer(C.CFDataGetBytePtr(data)), C.int(C.CFDataGetLength(data)))
roots.AppendCertsFromPEM(buf)
systemRoots = roots
}

View file

@ -1,14 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build darwin,!cgo
package x509
func (c *Certificate) systemVerify(opts *VerifyOptions) (chains [][]*Certificate, err error) {
return nil, nil
}
func initSystemRoots() {
}

View file

@ -1,37 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build dragonfly freebsd linux openbsd netbsd solaris
package x509
import "io/ioutil"
// Possible certificate files; stop after finding one.
var certFiles = []string{
"/etc/ssl/certs/ca-certificates.crt", // Debian/Ubuntu/Gentoo etc.
"/etc/pki/tls/certs/ca-bundle.crt", // Fedora/RHEL
"/etc/ssl/ca-bundle.pem", // OpenSUSE
"/etc/ssl/cert.pem", // OpenBSD
"/usr/local/share/certs/ca-root-nss.crt", // FreeBSD/DragonFly
}
func (c *Certificate) systemVerify(opts *VerifyOptions) (chains [][]*Certificate, err error) {
return nil, nil
}
func initSystemRoots() {
roots := NewCertPool()
for _, file := range certFiles {
data, err := ioutil.ReadFile(file)
if err == nil {
roots.AppendCertsFromPEM(data)
systemRoots = roots
return
}
}
// All of the files failed to load. systemRoots will be nil which will
// trigger a specific error at verification time.
}

View file

@ -1,476 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package x509
import (
"fmt"
"net"
"runtime"
"strings"
"time"
"unicode/utf8"
)
type InvalidReason int
const (
// NotAuthorizedToSign results when a certificate is signed by another
// which isn't marked as a CA certificate.
NotAuthorizedToSign InvalidReason = iota
// Expired results when a certificate has expired, based on the time
// given in the VerifyOptions.
Expired
// CANotAuthorizedForThisName results when an intermediate or root
// certificate has a name constraint which doesn't include the name
// being checked.
CANotAuthorizedForThisName
// TooManyIntermediates results when a path length constraint is
// violated.
TooManyIntermediates
// IncompatibleUsage results when the certificate's key usage indicates
// that it may only be used for a different purpose.
IncompatibleUsage
)
// CertificateInvalidError results when an odd error occurs. Users of this
// library probably want to handle all these errors uniformly.
type CertificateInvalidError struct {
Cert *Certificate
Reason InvalidReason
}
func (e CertificateInvalidError) Error() string {
switch e.Reason {
case NotAuthorizedToSign:
return "x509: certificate is not authorized to sign other certificates"
case Expired:
return "x509: certificate has expired or is not yet valid"
case CANotAuthorizedForThisName:
return "x509: a root or intermediate certificate is not authorized to sign in this domain"
case TooManyIntermediates:
return "x509: too many intermediates for path length constraint"
case IncompatibleUsage:
return "x509: certificate specifies an incompatible key usage"
}
return "x509: unknown error"
}
// HostnameError results when the set of authorized names doesn't match the
// requested name.
type HostnameError struct {
Certificate *Certificate
Host string
}
func (h HostnameError) Error() string {
c := h.Certificate
var valid string
if ip := net.ParseIP(h.Host); ip != nil {
// Trying to validate an IP
if len(c.IPAddresses) == 0 {
return "x509: cannot validate certificate for " + h.Host + " because it doesn't contain any IP SANs"
}
for _, san := range c.IPAddresses {
if len(valid) > 0 {
valid += ", "
}
valid += san.String()
}
} else {
if len(c.DNSNames) > 0 {
valid = strings.Join(c.DNSNames, ", ")
} else {
valid = c.Subject.CommonName
}
}
return "x509: certificate is valid for " + valid + ", not " + h.Host
}
// UnknownAuthorityError results when the certificate issuer is unknown
type UnknownAuthorityError struct {
cert *Certificate
// hintErr contains an error that may be helpful in determining why an
// authority wasn't found.
hintErr error
// hintCert contains a possible authority certificate that was rejected
// because of the error in hintErr.
hintCert *Certificate
}
func (e UnknownAuthorityError) Error() string {
s := "x509: certificate signed by unknown authority"
if e.hintErr != nil {
certName := e.hintCert.Subject.CommonName
if len(certName) == 0 {
if len(e.hintCert.Subject.Organization) > 0 {
certName = e.hintCert.Subject.Organization[0]
} else {
certName = "serial:" + e.hintCert.SerialNumber.String()
}
}
s += fmt.Sprintf(" (possibly because of %q while trying to verify candidate authority certificate %q)", e.hintErr, certName)
}
return s
}
// SystemRootsError results when we fail to load the system root certificates.
type SystemRootsError struct {
}
func (e SystemRootsError) Error() string {
return "x509: failed to load system roots and no roots provided"
}
// VerifyOptions contains parameters for Certificate.Verify. It's a structure
// because other PKIX verification APIs have ended up needing many options.
type VerifyOptions struct {
DNSName string
Intermediates *CertPool
Roots *CertPool // if nil, the system roots are used
CurrentTime time.Time // if zero, the current time is used
DisableTimeChecks bool
// KeyUsage specifies which Extended Key Usage values are acceptable.
// An empty list means ExtKeyUsageServerAuth. Key usage is considered a
// constraint down the chain which mirrors Windows CryptoAPI behaviour,
// but not the spec. To accept any key usage, include ExtKeyUsageAny.
KeyUsages []ExtKeyUsage
}
const (
leafCertificate = iota
intermediateCertificate
rootCertificate
)
// isValid performs validity checks on the c.
func (c *Certificate) isValid(certType int, currentChain []*Certificate, opts *VerifyOptions) error {
if !opts.DisableTimeChecks {
now := opts.CurrentTime
if now.IsZero() {
now = time.Now()
}
if now.Before(c.NotBefore) || now.After(c.NotAfter) {
return CertificateInvalidError{c, Expired}
}
}
if len(c.PermittedDNSDomains) > 0 {
ok := false
for _, domain := range c.PermittedDNSDomains {
if opts.DNSName == domain ||
(strings.HasSuffix(opts.DNSName, domain) &&
len(opts.DNSName) >= 1+len(domain) &&
opts.DNSName[len(opts.DNSName)-len(domain)-1] == '.') {
ok = true
break
}
}
if !ok {
return CertificateInvalidError{c, CANotAuthorizedForThisName}
}
}
// KeyUsage status flags are ignored. From Engineering Security, Peter
// Gutmann: A European government CA marked its signing certificates as
// being valid for encryption only, but no-one noticed. Another
// European CA marked its signature keys as not being valid for
// signatures. A different CA marked its own trusted root certificate
// as being invalid for certificate signing. Another national CA
// distributed a certificate to be used to encrypt data for the
// country's tax authority that was marked as only being usable for
// digital signatures but not for encryption. Yet another CA reversed
// the order of the bit flags in the keyUsage due to confusion over
// encoding endianness, essentially setting a random keyUsage in
// certificates that it issued. Another CA created a self-invalidating
// certificate by adding a certificate policy statement stipulating
// that the certificate had to be used strictly as specified in the
// keyUsage, and a keyUsage containing a flag indicating that the RSA
// encryption key could only be used for Diffie-Hellman key agreement.
if certType == intermediateCertificate && (!c.BasicConstraintsValid || !c.IsCA) {
return CertificateInvalidError{c, NotAuthorizedToSign}
}
if c.BasicConstraintsValid && c.MaxPathLen >= 0 {
numIntermediates := len(currentChain) - 1
if numIntermediates > c.MaxPathLen {
return CertificateInvalidError{c, TooManyIntermediates}
}
}
return nil
}
// Verify attempts to verify c by building one or more chains from c to a
// certificate in opts.Roots, using certificates in opts.Intermediates if
// needed. If successful, it returns one or more chains where the first
// element of the chain is c and the last element is from opts.Roots.
//
// WARNING: this doesn't do any revocation checking.
func (c *Certificate) Verify(opts VerifyOptions) (chains [][]*Certificate, err error) {
// Use Windows's own verification and chain building.
if opts.Roots == nil && runtime.GOOS == "windows" {
return c.systemVerify(&opts)
}
if opts.Roots == nil {
opts.Roots = systemRootsPool()
if opts.Roots == nil {
return nil, SystemRootsError{}
}
}
err = c.isValid(leafCertificate, nil, &opts)
if err != nil {
return
}
if len(opts.DNSName) > 0 {
err = c.VerifyHostname(opts.DNSName)
if err != nil {
return
}
}
candidateChains, err := c.buildChains(make(map[int][][]*Certificate), []*Certificate{c}, &opts)
if err != nil {
return
}
keyUsages := opts.KeyUsages
if len(keyUsages) == 0 {
keyUsages = []ExtKeyUsage{ExtKeyUsageServerAuth}
}
// If any key usage is acceptable then we're done.
for _, usage := range keyUsages {
if usage == ExtKeyUsageAny {
chains = candidateChains
return
}
}
for _, candidate := range candidateChains {
if checkChainForKeyUsage(candidate, keyUsages) {
chains = append(chains, candidate)
}
}
if len(chains) == 0 {
err = CertificateInvalidError{c, IncompatibleUsage}
}
return
}
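// appendToFreshChain returns a new slice containing chain followed by cert,
// leaving the original chain unmodified.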
func appendToFreshChain(chain []*Certificate, cert *Certificate) []*Certificate {
n := make([]*Certificate, len(chain)+1)
copy(n, chain)
n[len(chain)] = cert
return n
}
func (c *Certificate) buildChains(cache map[int][][]*Certificate, currentChain []*Certificate, opts *VerifyOptions) (chains [][]*Certificate, err error) {
possibleRoots, failedRoot, rootErr := opts.Roots.findVerifiedParents(c)
for _, rootNum := range possibleRoots {
root := opts.Roots.certs[rootNum]
err = root.isValid(rootCertificate, currentChain, opts)
if err != nil {
continue
}
chains = append(chains, appendToFreshChain(currentChain, root))
}
possibleIntermediates, failedIntermediate, intermediateErr := opts.Intermediates.findVerifiedParents(c)
nextIntermediate:
for _, intermediateNum := range possibleIntermediates {
intermediate := opts.Intermediates.certs[intermediateNum]
for _, cert := range currentChain {
if cert == intermediate {
continue nextIntermediate
}
}
err = intermediate.isValid(intermediateCertificate, currentChain, opts)
if err != nil {
continue
}
var childChains [][]*Certificate
childChains, ok := cache[intermediateNum]
if !ok {
childChains, err = intermediate.buildChains(cache, appendToFreshChain(currentChain, intermediate), opts)
cache[intermediateNum] = childChains
}
chains = append(chains, childChains...)
}
if len(chains) > 0 {
err = nil
}
if len(chains) == 0 && err == nil {
hintErr := rootErr
hintCert := failedRoot
if hintErr == nil {
hintErr = intermediateErr
hintCert = failedIntermediate
}
err = UnknownAuthorityError{c, hintErr, hintCert}
}
return
}
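// matchHostnames reports whether host matches pattern label by label, with a
// "*" label in pattern matching any single label in host.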
func matchHostnames(pattern, host string) bool {
if len(pattern) == 0 || len(host) == 0 {
return false
}
patternParts := strings.Split(pattern, ".")
hostParts := strings.Split(host, ".")
if len(patternParts) != len(hostParts) {
return false
}
for i, patternPart := range patternParts {
if patternPart == "*" {
continue
}
if patternPart != hostParts[i] {
return false
}
}
return true
}
// toLowerCaseASCII returns a lower-case version of in. See RFC 6125 6.4.1. We use
// an explicitly ASCII function to avoid any sharp corners resulting from
// performing Unicode operations on DNS labels.
func toLowerCaseASCII(in string) string {
// If the string is already lower-case then there's nothing to do.
isAlreadyLowerCase := true
for _, c := range in {
if c == utf8.RuneError {
// If we get a UTF-8 error then there might be
// upper-case ASCII bytes in the invalid sequence.
isAlreadyLowerCase = false
break
}
if 'A' <= c && c <= 'Z' {
isAlreadyLowerCase = false
break
}
}
if isAlreadyLowerCase {
return in
}
out := []byte(in)
for i, c := range out {
if 'A' <= c && c <= 'Z' {
out[i] += 'a' - 'A'
}
}
return string(out)
}
// VerifyHostname returns nil if c is a valid certificate for the named host.
// Otherwise it returns an error describing the mismatch.
func (c *Certificate) VerifyHostname(h string) error {
// IP addresses may be written in [ ].
candidateIP := h
if len(h) >= 3 && h[0] == '[' && h[len(h)-1] == ']' {
candidateIP = h[1 : len(h)-1]
}
if ip := net.ParseIP(candidateIP); ip != nil {
// We only match IP addresses against IP SANs.
// https://tools.ietf.org/html/rfc6125#appendix-B.2
for _, candidate := range c.IPAddresses {
if ip.Equal(candidate) {
return nil
}
}
return HostnameError{c, candidateIP}
}
lowered := toLowerCaseASCII(h)
if len(c.DNSNames) > 0 {
for _, match := range c.DNSNames {
if matchHostnames(toLowerCaseASCII(match), lowered) {
return nil
}
}
// If Subject Alt Name is given, we ignore the common name.
} else if matchHostnames(toLowerCaseASCII(c.Subject.CommonName), lowered) {
return nil
}
return HostnameError{c, h}
}
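// checkChainForKeyUsage reports whether the given chain is compatible with at
// least one of the requested extended key usages.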
func checkChainForKeyUsage(chain []*Certificate, keyUsages []ExtKeyUsage) bool {
usages := make([]ExtKeyUsage, len(keyUsages))
copy(usages, keyUsages)
if len(chain) == 0 {
return false
}
usagesRemaining := len(usages)
// We walk down the list and cross out any usages that aren't supported
// by each certificate. If we cross out all the usages, then the chain
// is unacceptable.
for i := len(chain) - 1; i >= 0; i-- {
cert := chain[i]
if len(cert.ExtKeyUsage) == 0 && len(cert.UnknownExtKeyUsage) == 0 {
// The certificate doesn't have any extended key usage specified.
continue
}
for _, usage := range cert.ExtKeyUsage {
if usage == ExtKeyUsageAny {
// The certificate is explicitly good for any usage.
continue
}
}
const invalidUsage ExtKeyUsage = -1
NextRequestedUsage:
for i, requestedUsage := range usages {
if requestedUsage == invalidUsage {
continue
}
for _, usage := range cert.ExtKeyUsage {
if requestedUsage == usage {
continue NextRequestedUsage
} else if requestedUsage == ExtKeyUsageServerAuth &&
(usage == ExtKeyUsageNetscapeServerGatedCrypto ||
usage == ExtKeyUsageMicrosoftServerGatedCrypto) {
// In order to support COMODO
// certificate chains, we have to
// accept Netscape or Microsoft SGC
// usages as equal to ServerAuth.
continue NextRequestedUsage
}
}
usages[i] = invalidUsage
usagesRemaining--
if usagesRemaining == 0 {
return false
}
}
}
return true
}

File diff suppressed because it is too large.

View file

@ -1,320 +0,0 @@
syntax = "proto2";
package ct;
////////////////////////////////////////////////////////////////////////////////
// These protocol buffers should be kept aligned with the I-D. //
////////////////////////////////////////////////////////////////////////////////
// RFC 5246
message DigitallySigned {
enum HashAlgorithm {
NONE = 0;
MD5 = 1;
SHA1 = 2;
SHA224 = 3;
SHA256 = 4;
SHA384 = 5;
SHA512 = 6;
}
enum SignatureAlgorithm {
ANONYMOUS = 0;
RSA = 1;
DSA = 2;
ECDSA = 3;
}
// 1 byte
optional HashAlgorithm hash_algorithm = 1 [ default = NONE ];
// 1 byte
optional SignatureAlgorithm sig_algorithm = 2 [ default = ANONYMOUS ];
// 0..2^16-1 bytes
optional bytes signature = 3;
}
enum LogEntryType {
X509_ENTRY = 0;
PRECERT_ENTRY = 1;
PRECERT_ENTRY_V2 = 2;
// Not part of the I-D, and outside the valid range.
X_JSON_ENTRY = 32768; // Experimental, don't rely on this!
UNKNOWN_ENTRY_TYPE = 65536;
}
message X509ChainEntry {
// For V1 this entry just includes the certificate in the leaf_certificate
// field
// <1..2^24-1>
optional bytes leaf_certificate = 1;
// For V2 it includes the cert and key hash using CertInfo. The
// leaf_certificate field is not used
optional CertInfo cert_info = 3;
// <0..2^24-1>
// A chain from the leaf to a trusted root
// (excluding leaf and possibly root).
repeated bytes certificate_chain = 2;
}
// opaque TBSCertificate<1..2^16-1>;
// struct {
// opaque issuer_key_hash[32];
// TBSCertificate tbs_certificate;
// } PreCert;
// Retained for V1 API compatibility. May be removed in a future release.
message PreCert {
optional bytes issuer_key_hash = 1;
optional bytes tbs_certificate = 2;
}
// In V2 this is used for both certificates and precertificates in SCTs. It
// replaces PreCert and has the same structure. The older message remains for
// compatibility with existing code that depends on this proto.
message CertInfo {
optional bytes issuer_key_hash = 1;
optional bytes tbs_certificate = 2;
}
message PrecertChainEntry {
// <1..2^24-1>
optional bytes pre_certificate = 1;
// <0..2^24-1>
// The chain certifying the precertificate, as submitted by the CA.
repeated bytes precertificate_chain = 2;
// PreCert input to the SCT. Can be computed from the above.
// Store it alongside the entry data so that the signers don't have to
// parse certificates to recompute it.
optional PreCert pre_cert = 3;
// As above for V2 messages. Only one of these fields will be set in a
// valid message
optional CertInfo cert_info = 4;
}
message XJSONEntry {
optional string json = 1;
}
// TODO(alcutter): Consider using extensions here instead.
message LogEntry {
optional LogEntryType type = 1 [ default = UNKNOWN_ENTRY_TYPE ];
optional X509ChainEntry x509_entry = 2;
optional PrecertChainEntry precert_entry = 3;
optional XJSONEntry x_json_entry = 4;
}
enum SignatureType {
CERTIFICATE_TIMESTAMP = 0;
// TODO(ekasper): called tree_hash in I-D.
TREE_HEAD = 1;
}
enum Version {
V1 = 0;
V2 = 1;
// Not part of the I-D, and outside the valid range.
UNKNOWN_VERSION = 256;
}
message LogID {
// 32 bytes
optional bytes key_id = 1;
}
message SctExtension {
// Valid range is 0-65534
optional uint32 sct_extension_type = 1;
// Data is opaque and type specific. <0..2^16-1> bytes
optional bytes sct_extension_data = 2;
}
// TODO(ekasper): implement support for id.
message SignedCertificateTimestamp {
optional Version version = 1 [ default = UNKNOWN_VERSION ];
optional LogID id = 2;
// UTC time in milliseconds, since January 1, 1970, 00:00.
optional uint64 timestamp = 3;
optional DigitallySigned signature = 4;
// V1 extensions
optional bytes extensions = 5;
// V2 extensions <0..2^16-1>. Must be ordered by type (lowest first)
repeated SctExtension sct_extension = 6;
}
message SignedCertificateTimestampList {
// One or more SCTs, <1..2^16-1> bytes each
repeated bytes sct_list = 1;
}
enum MerkleLeafType {
TIMESTAMPED_ENTRY = 0;
UNKNOWN_LEAF_TYPE = 256;
}
message SignedEntry {
// For V1 signed entries either the x509 or precert field will be set
optional bytes x509 = 1;
optional PreCert precert = 2;
optional bytes json = 3;
// For V2 all entries use the CertInfo field and the above fields are
// not set
optional CertInfo cert_info = 4;
}
message TimestampedEntry {
optional uint64 timestamp = 1;
optional LogEntryType entry_type = 2;
optional SignedEntry signed_entry = 3;
// V1 extensions
optional bytes extensions = 4;
// V2 extensions <0..2^16-1>. Must be ordered by type (lowest first)
repeated SctExtension sct_extension = 5;
}
// Stuff that's hashed into a Merkle leaf.
message MerkleTreeLeaf {
// The version of the corresponding SCT.
optional Version version = 1 [ default = UNKNOWN_VERSION ];
optional MerkleLeafType type = 2 [ default = UNKNOWN_LEAF_TYPE ];
optional TimestampedEntry timestamped_entry = 3;
}
// TODO(benl): No longer needed?
//
// Used by cpp/client/ct: it assembles the one from the I-D JSON
// protocol.
//
// Used by cpp/server/blob-server: it uses one to call a variant of
// LogLookup::AuditProof.
message MerkleAuditProof {
optional Version version = 1 [ default = UNKNOWN_VERSION ];
optional LogID id = 2;
optional int64 tree_size = 3;
optional uint64 timestamp = 4;
optional int64 leaf_index = 5;
repeated bytes path_node = 6;
optional DigitallySigned tree_head_signature = 7;
}
message ShortMerkleAuditProof {
required int64 leaf_index = 1;
repeated bytes path_node = 2;
}
////////////////////////////////////////////////////////////////////////////////
// Finally, stuff that's not in the I-D but that we use internally //
// for logging entries and tree head state. //
////////////////////////////////////////////////////////////////////////////////
// TODO(alcutter): Come up with a better name :/
message LoggedEntryPB {
optional int64 sequence_number = 1;
optional bytes merkle_leaf_hash = 2;
message Contents {
optional SignedCertificateTimestamp sct = 1;
optional LogEntry entry = 2;
}
required Contents contents = 3;
}
message SthExtension {
// Valid range is 0-65534
optional uint32 sth_extension_type = 1;
// Data is opaque and type specific <0..2^16-1> bytes
optional bytes sth_extension_data = 2;
}
message SignedTreeHead {
// The version of the tree head signature.
// (Note that each leaf has its own version, so a V2 tree
// can contain V1 leaves, too.)
optional Version version = 1 [ default = UNKNOWN_VERSION ];
optional LogID id = 2;
optional uint64 timestamp = 3;
optional int64 tree_size = 4;
optional bytes sha256_root_hash = 5;
optional DigitallySigned signature = 6;
// Only supported in V2. <0..2^16-1>
repeated SthExtension sth_extension = 7;
}
// Stuff the SSL client spits out from a connection.
message SSLClientCTData {
optional LogEntry reconstructed_entry = 1;
optional bytes certificate_sha256_hash = 2;
message SCTInfo {
// There is an entry + sct -> leaf hash mapping.
optional SignedCertificateTimestamp sct = 1;
optional bytes merkle_leaf_hash = 2;
}
repeated SCTInfo attached_sct_info = 3;
}
message ClusterNodeState {
optional string node_id = 1;
optional int64 contiguous_tree_size = 2 [deprecated = true];
optional SignedTreeHead newest_sth = 3;
optional SignedTreeHead current_serving_sth = 4;
// The following host_name/log_port pair are used to allow a log node to
// contact other nodes in the cluster, primarily for the purposes of
// replication.
// hostname/ip which can be used to contact [just] this log node
optional string hostname = 5;
// port on which this log node is listening.
optional int32 log_port = 6;
}
message ClusterControl {
optional bool accept_new_entries = 1 [ default = true ];
}
message ClusterConfig {
/////////////////////////////////
// This section of the config affects the selection of the cluster's current
// serving STH.
// The cluster will always attempt to determine the newest (and
// largest) possible STH which meets the constraints defined below from the
// set of STHs available at the individual cluster nodes.
// (Note that nodes with newer/larger STHs can, of course, serve
// earlier/smaller STHs.)
// The minimum number of nodes which must be able to serve a given STH.
// This setting allows you to configure the level of cluster resiliency
// against data (in the form of node/node database) loss.
// i.e.: Once an STH has been created, it must have been replicated to
// at least this many nodes before being considered as a candidate for
// the overall cluster serving STH.
optional int32 minimum_serving_nodes = 1;
// The minimum fraction of nodes which must be able to serve a given STH.
// This setting allows you to configure the serving capacity redundancy of
// your cluster.
// e.g. you determine you need 3 nodes to serve your expected peak traffic
// levels, but want to be over-provisioned by 25% to ensure the cluster will
// continue to be able to handle the traffic in the case of a single node
// failure, you might set this to 0.75 to ensure that any cluster-wide
// serving STH candidate must be servable from at least 3 of your 4 nodes.
optional double minimum_serving_fraction = 2;
/////////////////////////////////
// When the number of entries in the EtcdConsistentStore exceeds this value,
// the log server will reject all calls to add-[pre-]chain to protect itself
// and etcd.
optional double etcd_reject_add_pending_threshold = 3 [default = 30000];
}
message SequenceMapping {
message Mapping {
optional bytes entry_hash = 1;
optional int64 sequence_number = 2;
}
repeated Mapping mapping = 1;
}

751 vendor/golang.org/x/crypto/cryptobyte/asn1.go generated vendored Normal file
View file

@ -0,0 +1,751 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cryptobyte
import (
encoding_asn1 "encoding/asn1"
"fmt"
"math/big"
"reflect"
"time"
"golang.org/x/crypto/cryptobyte/asn1"
)
// This file contains ASN.1-related methods for String and Builder.
// Builder
// AddASN1Int64 appends a DER-encoded ASN.1 INTEGER.
func (b *Builder) AddASN1Int64(v int64) {
b.addASN1Signed(asn1.INTEGER, v)
}
// AddASN1Int64WithTag appends a DER-encoded ASN.1 INTEGER with the
// given tag.
func (b *Builder) AddASN1Int64WithTag(v int64, tag asn1.Tag) {
b.addASN1Signed(tag, v)
}
// AddASN1Enum appends a DER-encoded ASN.1 ENUMERATION.
func (b *Builder) AddASN1Enum(v int64) {
b.addASN1Signed(asn1.ENUM, v)
}
func (b *Builder) addASN1Signed(tag asn1.Tag, v int64) {
b.AddASN1(tag, func(c *Builder) {
length := 1
for i := v; i >= 0x80 || i < -0x80; i >>= 8 {
length++
}
for ; length > 0; length-- {
i := v >> uint((length-1)*8) & 0xff
c.AddUint8(uint8(i))
}
})
}
// AddASN1Uint64 appends a DER-encoded ASN.1 INTEGER.
func (b *Builder) AddASN1Uint64(v uint64) {
b.AddASN1(asn1.INTEGER, func(c *Builder) {
length := 1
for i := v; i >= 0x80; i >>= 8 {
length++
}
for ; length > 0; length-- {
i := v >> uint((length-1)*8) & 0xff
c.AddUint8(uint8(i))
}
})
}
// AddASN1BigInt appends a DER-encoded ASN.1 INTEGER.
func (b *Builder) AddASN1BigInt(n *big.Int) {
if b.err != nil {
return
}
b.AddASN1(asn1.INTEGER, func(c *Builder) {
if n.Sign() < 0 {
// A negative number has to be converted to two's-complement form. So we
// invert and subtract 1. If the most-significant-bit isn't set then
// we'll need to pad the beginning with 0xff in order to keep the number
// negative.
nMinus1 := new(big.Int).Neg(n)
nMinus1.Sub(nMinus1, bigOne)
bytes := nMinus1.Bytes()
for i := range bytes {
bytes[i] ^= 0xff
}
if bytes[0]&0x80 == 0 {
c.add(0xff)
}
c.add(bytes...)
} else if n.Sign() == 0 {
c.add(0)
} else {
bytes := n.Bytes()
if bytes[0]&0x80 != 0 {
c.add(0)
}
c.add(bytes...)
}
})
}
// AddASN1OctetString appends a DER-encoded ASN.1 OCTET STRING.
func (b *Builder) AddASN1OctetString(bytes []byte) {
b.AddASN1(asn1.OCTET_STRING, func(c *Builder) {
c.AddBytes(bytes)
})
}
const generalizedTimeFormatStr = "20060102150405Z0700"
// AddASN1GeneralizedTime appends a DER-encoded ASN.1 GENERALIZEDTIME.
func (b *Builder) AddASN1GeneralizedTime(t time.Time) {
if t.Year() < 0 || t.Year() > 9999 {
b.err = fmt.Errorf("cryptobyte: cannot represent %v as a GeneralizedTime", t)
return
}
b.AddASN1(asn1.GeneralizedTime, func(c *Builder) {
c.AddBytes([]byte(t.Format(generalizedTimeFormatStr)))
})
}
// AddASN1BitString appends a DER-encoded ASN.1 BIT STRING. This does not
// support BIT STRINGs that are not a whole number of bytes.
func (b *Builder) AddASN1BitString(data []byte) {
b.AddASN1(asn1.BIT_STRING, func(b *Builder) {
b.AddUint8(0)
b.AddBytes(data)
})
}
func (b *Builder) addBase128Int(n int64) {
var length int
if n == 0 {
length = 1
} else {
for i := n; i > 0; i >>= 7 {
length++
}
}
for i := length - 1; i >= 0; i-- {
o := byte(n >> uint(i*7))
o &= 0x7f
if i != 0 {
o |= 0x80
}
b.add(o)
}
}
func isValidOID(oid encoding_asn1.ObjectIdentifier) bool {
if len(oid) < 2 {
return false
}
if oid[0] > 2 || (oid[0] <= 1 && oid[1] >= 40) {
return false
}
for _, v := range oid {
if v < 0 {
return false
}
}
return true
}
// AddASN1ObjectIdentifier appends a DER-encoded ASN.1 OBJECT IDENTIFIER.
func (b *Builder) AddASN1ObjectIdentifier(oid encoding_asn1.ObjectIdentifier) {
b.AddASN1(asn1.OBJECT_IDENTIFIER, func(b *Builder) {
if !isValidOID(oid) {
b.err = fmt.Errorf("cryptobyte: invalid OID: %v", oid)
return
}
b.addBase128Int(int64(oid[0])*40 + int64(oid[1]))
for _, v := range oid[2:] {
b.addBase128Int(int64(v))
}
})
}
// AddASN1Boolean appends a DER-encoded ASN.1 BOOLEAN.
func (b *Builder) AddASN1Boolean(v bool) {
b.AddASN1(asn1.BOOLEAN, func(b *Builder) {
if v {
b.AddUint8(0xff)
} else {
b.AddUint8(0)
}
})
}
// AddASN1NULL appends a DER-encoded ASN.1 NULL.
func (b *Builder) AddASN1NULL() {
b.add(uint8(asn1.NULL), 0)
}
// MarshalASN1 calls encoding_asn1.Marshal on its input and appends the result if
// successful or records an error if one occurred.
func (b *Builder) MarshalASN1(v interface{}) {
// NOTE(martinkr): This is somewhat of a hack to allow propagation of
// encoding_asn1.Marshal errors into Builder.err. N.B. if you call MarshalASN1 with a
// value embedded into a struct, its tag information is lost.
if b.err != nil {
return
}
bytes, err := encoding_asn1.Marshal(v)
if err != nil {
b.err = err
return
}
b.AddBytes(bytes)
}
// AddASN1 appends an ASN.1 object. The object is prefixed with the given tag.
// Tags greater than 30 are not supported and result in an error (i.e.
// low-tag-number form only). The child builder passed to the
// BuilderContinuation can be used to build the content of the ASN.1 object.
func (b *Builder) AddASN1(tag asn1.Tag, f BuilderContinuation) {
if b.err != nil {
return
}
// Identifiers with the low five bits set indicate high-tag-number format
// (two or more octets), which we don't support.
if tag&0x1f == 0x1f {
b.err = fmt.Errorf("cryptobyte: high-tag number identifier octects not supported: 0x%x", tag)
return
}
b.AddUint8(uint8(tag))
b.addLengthPrefixed(1, true, f)
}
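
As a quick illustration of the Builder half of this file (an editorial sketch, not part of the vendored code), the snippet below builds a DER SEQUENCE containing an INTEGER and an OCTET STRING using only the functions defined above:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/cryptobyte"
	"golang.org/x/crypto/cryptobyte/asn1"
)

func main() {
	var b cryptobyte.Builder // the zero value allocates as needed
	// SEQUENCE { INTEGER 42, OCTET STRING "hi" }
	b.AddASN1(asn1.SEQUENCE, func(c *cryptobyte.Builder) {
		c.AddASN1Int64(42)
		c.AddASN1OctetString([]byte("hi"))
	})
	der, err := b.Bytes()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%x\n", der) // 300702012a04026869
}
```
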
// String
// ReadASN1Boolean decodes an ASN.1 INTEGER, converts it to a boolean, and
// stores the result in out, advancing over it. It reports whether the read
// was successful.
func (s *String) ReadASN1Boolean(out *bool) bool {
var bytes String
if !s.ReadASN1(&bytes, asn1.INTEGER) || len(bytes) != 1 {
return false
}
switch bytes[0] {
case 0:
*out = false
case 0xff:
*out = true
default:
return false
}
return true
}
var bigIntType = reflect.TypeOf((*big.Int)(nil)).Elem()
// ReadASN1Integer decodes an ASN.1 INTEGER into out and advances. If out does
// not point to an integer or to a big.Int, it panics. It reports whether the
// read was successful.
func (s *String) ReadASN1Integer(out interface{}) bool {
if reflect.TypeOf(out).Kind() != reflect.Ptr {
panic("out is not a pointer")
}
switch reflect.ValueOf(out).Elem().Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
var i int64
if !s.readASN1Int64(&i) || reflect.ValueOf(out).Elem().OverflowInt(i) {
return false
}
reflect.ValueOf(out).Elem().SetInt(i)
return true
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
var u uint64
if !s.readASN1Uint64(&u) || reflect.ValueOf(out).Elem().OverflowUint(u) {
return false
}
reflect.ValueOf(out).Elem().SetUint(u)
return true
case reflect.Struct:
if reflect.TypeOf(out).Elem() == bigIntType {
return s.readASN1BigInt(out.(*big.Int))
}
}
panic("out does not point to an integer type")
}
func checkASN1Integer(bytes []byte) bool {
if len(bytes) == 0 {
// An INTEGER is encoded with at least one octet.
return false
}
if len(bytes) == 1 {
return true
}
if bytes[0] == 0 && bytes[1]&0x80 == 0 || bytes[0] == 0xff && bytes[1]&0x80 == 0x80 {
// Value is not minimally encoded.
return false
}
return true
}
var bigOne = big.NewInt(1)
func (s *String) readASN1BigInt(out *big.Int) bool {
var bytes String
if !s.ReadASN1(&bytes, asn1.INTEGER) || !checkASN1Integer(bytes) {
return false
}
if bytes[0]&0x80 == 0x80 {
// Negative number.
neg := make([]byte, len(bytes))
for i, b := range bytes {
neg[i] = ^b
}
out.SetBytes(neg)
out.Add(out, bigOne)
out.Neg(out)
} else {
out.SetBytes(bytes)
}
return true
}
func (s *String) readASN1Int64(out *int64) bool {
var bytes String
if !s.ReadASN1(&bytes, asn1.INTEGER) || !checkASN1Integer(bytes) || !asn1Signed(out, bytes) {
return false
}
return true
}
func asn1Signed(out *int64, n []byte) bool {
length := len(n)
if length > 8 {
return false
}
for i := 0; i < length; i++ {
*out <<= 8
*out |= int64(n[i])
}
// Shift up and down in order to sign extend the result.
*out <<= 64 - uint8(length)*8
*out >>= 64 - uint8(length)*8
return true
}
func (s *String) readASN1Uint64(out *uint64) bool {
var bytes String
if !s.ReadASN1(&bytes, asn1.INTEGER) || !checkASN1Integer(bytes) || !asn1Unsigned(out, bytes) {
return false
}
return true
}
func asn1Unsigned(out *uint64, n []byte) bool {
length := len(n)
if length > 9 || length == 9 && n[0] != 0 {
// Too large for uint64.
return false
}
if n[0]&0x80 != 0 {
// Negative number.
return false
}
for i := 0; i < length; i++ {
*out <<= 8
*out |= uint64(n[i])
}
return true
}
// ReadASN1Int64WithTag decodes an ASN.1 INTEGER with the given tag into out
// and advances. It reports whether the read was successful and resulted in a
// value that can be represented in an int64.
func (s *String) ReadASN1Int64WithTag(out *int64, tag asn1.Tag) bool {
var bytes String
return s.ReadASN1(&bytes, tag) && checkASN1Integer(bytes) && asn1Signed(out, bytes)
}
// ReadASN1Enum decodes an ASN.1 ENUMERATION into out and advances. It reports
// whether the read was successful.
func (s *String) ReadASN1Enum(out *int) bool {
var bytes String
var i int64
if !s.ReadASN1(&bytes, asn1.ENUM) || !checkASN1Integer(bytes) || !asn1Signed(&i, bytes) {
return false
}
if int64(int(i)) != i {
return false
}
*out = int(i)
return true
}
func (s *String) readBase128Int(out *int) bool {
ret := 0
for i := 0; len(*s) > 0; i++ {
if i == 4 {
return false
}
ret <<= 7
b := s.read(1)[0]
ret |= int(b & 0x7f)
if b&0x80 == 0 {
*out = ret
return true
}
}
return false // truncated
}
// ReadASN1ObjectIdentifier decodes an ASN.1 OBJECT IDENTIFIER into out and
// advances. It reports whether the read was successful.
func (s *String) ReadASN1ObjectIdentifier(out *encoding_asn1.ObjectIdentifier) bool {
var bytes String
if !s.ReadASN1(&bytes, asn1.OBJECT_IDENTIFIER) || len(bytes) == 0 {
return false
}
// In the worst case, we get two elements from the first byte (which is
// encoded differently) and then every varint is a single byte long.
components := make([]int, len(bytes)+1)
// The first varint is 40*value1 + value2:
// According to this packing, value1 can take the values 0, 1 and 2 only.
// When value1 = 0 or value1 = 1, then value2 is <= 39. When value1 = 2,
// then there are no restrictions on value2.
var v int
if !bytes.readBase128Int(&v) {
return false
}
if v < 80 {
components[0] = v / 40
components[1] = v % 40
} else {
components[0] = 2
components[1] = v - 80
}
i := 2
for ; len(bytes) > 0; i++ {
if !bytes.readBase128Int(&v) {
return false
}
components[i] = v
}
*out = components[:i]
return true
}
// ReadASN1GeneralizedTime decodes an ASN.1 GENERALIZEDTIME into out and
// advances. It reports whether the read was successful.
func (s *String) ReadASN1GeneralizedTime(out *time.Time) bool {
var bytes String
if !s.ReadASN1(&bytes, asn1.GeneralizedTime) {
return false
}
t := string(bytes)
res, err := time.Parse(generalizedTimeFormatStr, t)
if err != nil {
return false
}
if serialized := res.Format(generalizedTimeFormatStr); serialized != t {
return false
}
*out = res
return true
}
// ReadASN1BitString decodes an ASN.1 BIT STRING into out and advances.
// It reports whether the read was successful.
func (s *String) ReadASN1BitString(out *encoding_asn1.BitString) bool {
var bytes String
if !s.ReadASN1(&bytes, asn1.BIT_STRING) || len(bytes) == 0 {
return false
}
paddingBits := uint8(bytes[0])
bytes = bytes[1:]
if paddingBits > 7 ||
len(bytes) == 0 && paddingBits != 0 ||
len(bytes) > 0 && bytes[len(bytes)-1]&(1<<paddingBits-1) != 0 {
return false
}
out.BitLength = len(bytes)*8 - int(paddingBits)
out.Bytes = bytes
return true
}
// ReadASN1BitStringAsBytes decodes an ASN.1 BIT STRING into out and advances.
// It is an error if the BIT STRING is not a whole number of bytes. It reports
// whether the read was successful.
func (s *String) ReadASN1BitStringAsBytes(out *[]byte) bool {
var bytes String
if !s.ReadASN1(&bytes, asn1.BIT_STRING) || len(bytes) == 0 {
return false
}
paddingBits := uint8(bytes[0])
if paddingBits != 0 {
return false
}
*out = bytes[1:]
return true
}
// ReadASN1Bytes reads the contents of a DER-encoded ASN.1 element (not including
// tag and length bytes) into out, and advances. The element must match the
// given tag. It reports whether the read was successful.
func (s *String) ReadASN1Bytes(out *[]byte, tag asn1.Tag) bool {
return s.ReadASN1((*String)(out), tag)
}
// ReadASN1 reads the contents of a DER-encoded ASN.1 element (not including
// tag and length bytes) into out, and advances. The element must match the
// given tag. It reports whether the read was successful.
//
// Tags greater than 30 are not supported (i.e. low-tag-number format only).
func (s *String) ReadASN1(out *String, tag asn1.Tag) bool {
var t asn1.Tag
if !s.ReadAnyASN1(out, &t) || t != tag {
return false
}
return true
}
// ReadASN1Element reads the contents of a DER-encoded ASN.1 element (including
// tag and length bytes) into out, and advances. The element must match the
// given tag. It reports whether the read was successful.
//
// Tags greater than 30 are not supported (i.e. low-tag-number format only).
func (s *String) ReadASN1Element(out *String, tag asn1.Tag) bool {
var t asn1.Tag
if !s.ReadAnyASN1Element(out, &t) || t != tag {
return false
}
return true
}
// ReadAnyASN1 reads the contents of a DER-encoded ASN.1 element (not including
// tag and length bytes) into out, sets outTag to its tag, and advances.
// It reports whether the read was successful.
//
// Tags greater than 30 are not supported (i.e. low-tag-number format only).
func (s *String) ReadAnyASN1(out *String, outTag *asn1.Tag) bool {
return s.readASN1(out, outTag, true /* skip header */)
}
// ReadAnyASN1Element reads the contents of a DER-encoded ASN.1 element
// (including tag and length bytes) into out, sets outTag to its tag, and
// advances. It reports whether the read was successful.
//
// Tags greater than 30 are not supported (i.e. low-tag-number format only).
func (s *String) ReadAnyASN1Element(out *String, outTag *asn1.Tag) bool {
return s.readASN1(out, outTag, false /* include header */)
}
// PeekASN1Tag reports whether the next ASN.1 value on the string starts with
// the given tag.
func (s String) PeekASN1Tag(tag asn1.Tag) bool {
if len(s) == 0 {
return false
}
return asn1.Tag(s[0]) == tag
}
// SkipASN1 reads and discards an ASN.1 element with the given tag. It
// reports whether the operation was successful.
func (s *String) SkipASN1(tag asn1.Tag) bool {
var unused String
return s.ReadASN1(&unused, tag)
}
// ReadOptionalASN1 attempts to read the contents of a DER-encoded ASN.1
// element (not including tag and length bytes) tagged with the given tag into
// out. It stores whether an element with the tag was found in outPresent,
// unless outPresent is nil. It reports whether the read was successful.
func (s *String) ReadOptionalASN1(out *String, outPresent *bool, tag asn1.Tag) bool {
present := s.PeekASN1Tag(tag)
if outPresent != nil {
*outPresent = present
}
if present && !s.ReadASN1(out, tag) {
return false
}
return true
}
// SkipOptionalASN1 advances s over an ASN.1 element with the given tag, or
// else leaves s unchanged. It reports whether the operation was successful.
func (s *String) SkipOptionalASN1(tag asn1.Tag) bool {
if !s.PeekASN1Tag(tag) {
return true
}
var unused String
return s.ReadASN1(&unused, tag)
}
// ReadOptionalASN1Integer attempts to read an optional ASN.1 INTEGER
// explicitly tagged with tag into out and advances. If no element with a
// matching tag is present, it writes defaultValue into out instead. If out
// does not point to an integer or to a big.Int, it panics. It reports
// whether the read was successful.
func (s *String) ReadOptionalASN1Integer(out interface{}, tag asn1.Tag, defaultValue interface{}) bool {
if reflect.TypeOf(out).Kind() != reflect.Ptr {
panic("out is not a pointer")
}
var present bool
var i String
if !s.ReadOptionalASN1(&i, &present, tag) {
return false
}
if !present {
switch reflect.ValueOf(out).Elem().Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
reflect.ValueOf(out).Elem().Set(reflect.ValueOf(defaultValue))
case reflect.Struct:
if reflect.TypeOf(out).Elem() != bigIntType {
panic("invalid integer type")
}
if reflect.TypeOf(defaultValue).Kind() != reflect.Ptr ||
reflect.TypeOf(defaultValue).Elem() != bigIntType {
panic("out points to big.Int, but defaultValue does not")
}
out.(*big.Int).Set(defaultValue.(*big.Int))
default:
panic("invalid integer type")
}
return true
}
if !i.ReadASN1Integer(out) || !i.Empty() {
return false
}
return true
}
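
For example (an illustrative sketch, not part of the vendored file), ReadOptionalASN1Integer can parse an X.509-style `[0] EXPLICIT INTEGER` version field, falling back to a default when the tag is absent:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/cryptobyte"
	"golang.org/x/crypto/cryptobyte/asn1"
)

func main() {
	// SEQUENCE { [0] EXPLICIT INTEGER 2 } -- the shape of an X.509 version field.
	der := []byte{0x30, 0x05, 0xa0, 0x03, 0x02, 0x01, 0x02}

	input := cryptobyte.String(der)
	var seq cryptobyte.String
	if !input.ReadASN1(&seq, asn1.SEQUENCE) {
		panic("not a SEQUENCE")
	}

	version := int64(1) // value used when the optional tag is absent
	tag0 := asn1.Tag(0).Constructed().ContextSpecific()
	if !seq.ReadOptionalASN1Integer(&version, tag0, int64(1)) {
		panic("bad version")
	}
	fmt.Println(version) // 2
}
```
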
// ReadOptionalASN1OctetString attempts to read an optional ASN.1 OCTET STRING
// explicitly tagged with tag into out and advances. If no element with a
// matching tag is present, it sets "out" to nil instead. It reports
// whether the read was successful.
func (s *String) ReadOptionalASN1OctetString(out *[]byte, outPresent *bool, tag asn1.Tag) bool {
var present bool
var child String
if !s.ReadOptionalASN1(&child, &present, tag) {
return false
}
if outPresent != nil {
*outPresent = present
}
if present {
var oct String
if !child.ReadASN1(&oct, asn1.OCTET_STRING) || !child.Empty() {
return false
}
*out = oct
} else {
*out = nil
}
return true
}
// ReadOptionalASN1Boolean sets *out to the value of the next ASN.1 BOOLEAN or,
// if the next bytes are not an ASN.1 BOOLEAN, to the value of defaultValue.
// It reports whether the operation was successful.
func (s *String) ReadOptionalASN1Boolean(out *bool, defaultValue bool) bool {
var present bool
var child String
if !s.ReadOptionalASN1(&child, &present, asn1.BOOLEAN) {
return false
}
if !present {
*out = defaultValue
return true
}
return s.ReadASN1Boolean(out)
}
func (s *String) readASN1(out *String, outTag *asn1.Tag, skipHeader bool) bool {
if len(*s) < 2 {
return false
}
tag, lenByte := (*s)[0], (*s)[1]
if tag&0x1f == 0x1f {
// ITU-T X.690 section 8.1.2
//
// An identifier octet with a tag part of 0x1f indicates a high-tag-number
// form identifier with two or more octets. We only support tags less than
// 31 (i.e. low-tag-number form, single octet identifier).
return false
}
if outTag != nil {
*outTag = asn1.Tag(tag)
}
// ITU-T X.690 section 8.1.3
//
// Bit 8 of the first length byte indicates whether the length is short- or
// long-form.
var length, headerLen uint32 // length includes headerLen
if lenByte&0x80 == 0 {
// Short-form length (section 8.1.3.4), encoded in bits 1-7.
length = uint32(lenByte) + 2
headerLen = 2
} else {
// Long-form length (section 8.1.3.5). Bits 1-7 encode the number of octets
// used to encode the length.
lenLen := lenByte & 0x7f
var len32 uint32
if lenLen == 0 || lenLen > 4 || len(*s) < int(2+lenLen) {
return false
}
lenBytes := String((*s)[2 : 2+lenLen])
if !lenBytes.readUnsigned(&len32, int(lenLen)) {
return false
}
// ITU-T X.690 section 10.1 (DER length forms) requires encoding the length
// with the minimum number of octets.
if len32 < 128 {
// Length should have used short-form encoding.
return false
}
if len32>>((lenLen-1)*8) == 0 {
// Leading octet is 0. Length should have been at least one byte shorter.
return false
}
headerLen = 2 + uint32(lenLen)
if headerLen+len32 < len32 {
// Overflow.
return false
}
length = headerLen + len32
}
if uint32(int(length)) != length || !s.ReadBytes((*[]byte)(out), int(length)) {
return false
}
if skipHeader && !out.Skip(int(headerLen)) {
panic("cryptobyte: internal error")
}
return true
}
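
Putting the String half of the file together (again an editorial sketch), the parser below reads back the SEQUENCE built in the Builder example earlier:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/cryptobyte"
	"golang.org/x/crypto/cryptobyte/asn1"
)

func main() {
	// DER for SEQUENCE { INTEGER 42, OCTET STRING "hi" }
	der := []byte{0x30, 0x07, 0x02, 0x01, 0x2a, 0x04, 0x02, 'h', 'i'}

	input := cryptobyte.String(der)
	var seq cryptobyte.String
	if !input.ReadASN1(&seq, asn1.SEQUENCE) {
		panic("not a SEQUENCE")
	}

	var n int64
	var oct []byte
	if !seq.ReadASN1Integer(&n) || !seq.ReadASN1Bytes(&oct, asn1.OCTET_STRING) || !seq.Empty() {
		panic("malformed")
	}
	fmt.Println(n, string(oct)) // 42 hi
}
```
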

46
vendor/golang.org/x/crypto/cryptobyte/asn1/asn1.go generated vendored Normal file

@ -0,0 +1,46 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package asn1 contains supporting types for parsing and building ASN.1
// messages with the cryptobyte package.
package asn1 // import "golang.org/x/crypto/cryptobyte/asn1"
// Tag represents an ASN.1 identifier octet, consisting of a tag number
// (indicating a type) and class (such as context-specific or constructed).
//
// Methods in the cryptobyte package only support the low-tag-number form, i.e.
// a single identifier octet with bits 7-8 encoding the class and bits 1-6
// encoding the tag number.
type Tag uint8
const (
classConstructed = 0x20
classContextSpecific = 0x80
)
// Constructed returns t with the constructed class bit set.
func (t Tag) Constructed() Tag { return t | classConstructed }
// ContextSpecific returns t with the context-specific class bit set.
func (t Tag) ContextSpecific() Tag { return t | classContextSpecific }
// The following is a list of standard tag and class combinations.
const (
BOOLEAN = Tag(1)
INTEGER = Tag(2)
BIT_STRING = Tag(3)
OCTET_STRING = Tag(4)
NULL = Tag(5)
OBJECT_IDENTIFIER = Tag(6)
ENUM = Tag(10)
UTF8String = Tag(12)
SEQUENCE = Tag(16 | classConstructed)
SET = Tag(17 | classConstructed)
PrintableString = Tag(19)
T61String = Tag(20)
IA5String = Tag(22)
UTCTime = Tag(23)
GeneralizedTime = Tag(24)
GeneralString = Tag(27)
)
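
The class-bit helpers above are typically combined with a tag number to form the context-specific identifiers used for EXPLICIT and IMPLICIT fields. A small illustrative snippet (not part of the package):

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/cryptobyte/asn1"
)

func main() {
	// [0] EXPLICIT-style identifier: context-specific and constructed.
	explicit0 := asn1.Tag(0).ContextSpecific().Constructed()
	// [1] IMPLICIT primitive identifier: context-specific only.
	implicit1 := asn1.Tag(1).ContextSpecific()
	fmt.Printf("0x%02x 0x%02x\n", uint8(explicit0), uint8(implicit1)) // 0xa0 0x81
}
```
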

309
vendor/golang.org/x/crypto/cryptobyte/builder.go generated vendored Normal file

@ -0,0 +1,309 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package cryptobyte
import (
"errors"
"fmt"
)
// A Builder builds byte strings from fixed-length and length-prefixed values.
// Builders either allocate space as needed, or are fixed, which means that
// they write into a given buffer and produce an error if it's exhausted.
//
// The zero value is a usable Builder that allocates space as needed.
//
// Simple values are marshaled and appended to a Builder using methods on the
// Builder. Length-prefixed values are marshaled by providing a
// BuilderContinuation, which is a function that writes the inner contents of
// the value to a given Builder. See the documentation for BuilderContinuation
// for details.
type Builder struct {
err error
result []byte
fixedSize bool
child *Builder
offset int
pendingLenLen int
pendingIsASN1 bool
inContinuation *bool
}
// NewBuilder creates a Builder that appends its output to the given buffer.
// Like append(), the slice will be reallocated if its capacity is exceeded.
// Use Bytes to get the final buffer.
func NewBuilder(buffer []byte) *Builder {
return &Builder{
result: buffer,
}
}
// NewFixedBuilder creates a Builder that appends its output into the given
// buffer. This builder does not reallocate the output buffer. Writes that
// would exceed the buffer's capacity are treated as an error.
func NewFixedBuilder(buffer []byte) *Builder {
return &Builder{
result: buffer,
fixedSize: true,
}
}
// Bytes returns the bytes written by the builder or an error if one has
// occurred during building.
func (b *Builder) Bytes() ([]byte, error) {
if b.err != nil {
return nil, b.err
}
return b.result[b.offset:], nil
}
// BytesOrPanic returns the bytes written by the builder or panics if an error
// has occurred during building.
func (b *Builder) BytesOrPanic() []byte {
if b.err != nil {
panic(b.err)
}
return b.result[b.offset:]
}
// AddUint8 appends an 8-bit value to the byte string.
func (b *Builder) AddUint8(v uint8) {
b.add(byte(v))
}
// AddUint16 appends a big-endian, 16-bit value to the byte string.
func (b *Builder) AddUint16(v uint16) {
b.add(byte(v>>8), byte(v))
}
// AddUint24 appends a big-endian, 24-bit value to the byte string. The highest
// byte of the 32-bit input value is silently truncated.
func (b *Builder) AddUint24(v uint32) {
b.add(byte(v>>16), byte(v>>8), byte(v))
}
// AddUint32 appends a big-endian, 32-bit value to the byte string.
func (b *Builder) AddUint32(v uint32) {
b.add(byte(v>>24), byte(v>>16), byte(v>>8), byte(v))
}
// AddBytes appends a sequence of bytes to the byte string.
func (b *Builder) AddBytes(v []byte) {
b.add(v...)
}
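
To illustrate the difference between the two constructors above (an editorial sketch), a fixed-size Builder records an error instead of growing its buffer:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/cryptobyte"
)

func main() {
	b := cryptobyte.NewFixedBuilder(make([]byte, 0, 4))
	b.AddUint32(0xdeadbeef) // exactly fills the 4-byte capacity
	b.AddUint8(1)           // exceeds it; the error is recorded on the Builder

	if _, err := b.Bytes(); err != nil {
		fmt.Println("error:", err)
	}
}
```
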
// BuilderContinuation is a continuation-passing interface for building
// length-prefixed byte sequences. Builder methods for length-prefixed
// sequences (AddUint8LengthPrefixed etc) will invoke the BuilderContinuation
// supplied to them. The child builder passed to the continuation can be used
// to build the content of the length-prefixed sequence. For example:
//
// parent := cryptobyte.NewBuilder()
// parent.AddUint8LengthPrefixed(func (child *Builder) {
// child.AddUint8(42)
// child.AddUint8LengthPrefixed(func (grandchild *Builder) {
// grandchild.AddUint8(5)
// })
// })
//
// It is an error to write more bytes to the child than allowed by the reserved
// length prefix. After the continuation returns, the child must be considered
// invalid, i.e. users must not store any copies or references of the child
// that outlive the continuation.
//
// If the continuation panics with a value of type BuildError then the inner
// error will be returned as the error from Bytes. If the child panics
// otherwise then Bytes will repanic with the same value.
type BuilderContinuation func(child *Builder)
// BuildError wraps an error. If a BuilderContinuation panics with this value,
// the panic will be recovered and the inner error will be returned from
// Builder.Bytes.
type BuildError struct {
Err error
}
// AddUint8LengthPrefixed adds an 8-bit length-prefixed byte sequence.
func (b *Builder) AddUint8LengthPrefixed(f BuilderContinuation) {
b.addLengthPrefixed(1, false, f)
}
// AddUint16LengthPrefixed adds a big-endian, 16-bit length-prefixed byte sequence.
func (b *Builder) AddUint16LengthPrefixed(f BuilderContinuation) {
b.addLengthPrefixed(2, false, f)
}
// AddUint24LengthPrefixed adds a big-endian, 24-bit length-prefixed byte sequence.
func (b *Builder) AddUint24LengthPrefixed(f BuilderContinuation) {
b.addLengthPrefixed(3, false, f)
}
// AddUint32LengthPrefixed adds a big-endian, 32-bit length-prefixed byte sequence.
func (b *Builder) AddUint32LengthPrefixed(f BuilderContinuation) {
b.addLengthPrefixed(4, false, f)
}
func (b *Builder) callContinuation(f BuilderContinuation, arg *Builder) {
if !*b.inContinuation {
*b.inContinuation = true
defer func() {
*b.inContinuation = false
r := recover()
if r == nil {
return
}
if buildError, ok := r.(BuildError); ok {
b.err = buildError.Err
} else {
panic(r)
}
}()
}
f(arg)
}
func (b *Builder) addLengthPrefixed(lenLen int, isASN1 bool, f BuilderContinuation) {
// Subsequent writes can be ignored if the builder has encountered an error.
if b.err != nil {
return
}
offset := len(b.result)
b.add(make([]byte, lenLen)...)
if b.inContinuation == nil {
b.inContinuation = new(bool)
}
b.child = &Builder{
result: b.result,
fixedSize: b.fixedSize,
offset: offset,
pendingLenLen: lenLen,
pendingIsASN1: isASN1,
inContinuation: b.inContinuation,
}
b.callContinuation(f, b.child)
b.flushChild()
if b.child != nil {
panic("cryptobyte: internal error")
}
}
func (b *Builder) flushChild() {
if b.child == nil {
return
}
b.child.flushChild()
child := b.child
b.child = nil
if child.err != nil {
b.err = child.err
return
}
length := len(child.result) - child.pendingLenLen - child.offset
if length < 0 {
panic("cryptobyte: internal error") // result unexpectedly shrunk
}
if child.pendingIsASN1 {
// For ASN.1, we reserved a single byte for the length. If that turned out
// to be incorrect, we have to move the contents along in order to make
// space.
if child.pendingLenLen != 1 {
panic("cryptobyte: internal error")
}
var lenLen, lenByte uint8
if int64(length) > 0xfffffffe {
b.err = errors.New("pending ASN.1 child too long")
return
} else if length > 0xffffff {
lenLen = 5
lenByte = 0x80 | 4
} else if length > 0xffff {
lenLen = 4
lenByte = 0x80 | 3
} else if length > 0xff {
lenLen = 3
lenByte = 0x80 | 2
} else if length > 0x7f {
lenLen = 2
lenByte = 0x80 | 1
} else {
lenLen = 1
lenByte = uint8(length)
length = 0
}
// Insert the initial length byte, make space for successive length bytes,
// and adjust the offset.
child.result[child.offset] = lenByte
extraBytes := int(lenLen - 1)
if extraBytes != 0 {
child.add(make([]byte, extraBytes)...)
childStart := child.offset + child.pendingLenLen
copy(child.result[childStart+extraBytes:], child.result[childStart:])
}
child.offset++
child.pendingLenLen = extraBytes
}
l := length
for i := child.pendingLenLen - 1; i >= 0; i-- {
child.result[child.offset+i] = uint8(l)
l >>= 8
}
if l != 0 {
b.err = fmt.Errorf("cryptobyte: pending child length %d exceeds %d-byte length prefix", length, child.pendingLenLen)
return
}
if !b.fixedSize {
b.result = child.result // In case child reallocated result.
}
}
func (b *Builder) add(bytes ...byte) {
if b.err != nil {
return
}
if b.child != nil {
panic("attempted write while child is pending")
}
if len(b.result)+len(bytes) < len(bytes) {
b.err = errors.New("cryptobyte: length overflow")
}
if b.fixedSize && len(b.result)+len(bytes) > cap(b.result) {
b.err = errors.New("cryptobyte: Builder is exceeding its fixed-size buffer")
return
}
b.result = append(b.result, bytes...)
}
// A MarshalingValue marshals itself into a Builder.
type MarshalingValue interface {
// Marshal is called by Builder.AddValue. It receives a pointer to a builder
// to marshal itself into. It may return an error that occurred during
// marshaling, such as unset or invalid values.
Marshal(b *Builder) error
}
// AddValue calls Marshal on v, passing a pointer to the builder to append to.
// If Marshal returns an error, it is set on the Builder so that subsequent
// appends don't have an effect.
func (b *Builder) AddValue(v MarshalingValue) {
err := v.Marshal(b)
if err != nil {
b.err = err
}
}
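
A short illustration of the MarshalingValue hook (the `header` type is our own hypothetical example, not part of the package): a value marshals itself via AddValue, reusing the length-prefix helpers above.

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/cryptobyte"
)

// header is a hypothetical record: a one-byte version followed by a
// 16-bit length-prefixed payload.
type header struct {
	version uint8
	payload []byte
}

// Marshal implements cryptobyte.MarshalingValue.
func (h header) Marshal(b *cryptobyte.Builder) error {
	b.AddUint8(h.version)
	b.AddUint16LengthPrefixed(func(c *cryptobyte.Builder) {
		c.AddBytes(h.payload)
	})
	return nil
}

func main() {
	var b cryptobyte.Builder
	b.AddValue(header{version: 3, payload: []byte("abc")})
	fmt.Printf("%x\n", b.BytesOrPanic()) // 030003616263
}
```
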

166
vendor/golang.org/x/crypto/cryptobyte/string.go generated vendored Normal file

@ -0,0 +1,166 @@
// Copyright 2017 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package cryptobyte contains types that help with parsing and constructing
// length-prefixed, binary messages, including ASN.1 DER. (The asn1 subpackage
// contains useful ASN.1 constants.)
//
// The String type is for parsing. It wraps a []byte slice and provides helper
// functions for consuming structures, value by value.
//
// The Builder type is for constructing messages. It provides helper functions
// for appending values and also for appending length-prefixed submessages
// without having to worry about calculating the length prefix ahead of time.
//
// See the documentation and examples for the Builder and String types to get
// started.
package cryptobyte // import "golang.org/x/crypto/cryptobyte"
// String represents a string of bytes. It provides methods for parsing
// fixed-length and length-prefixed values from it.
type String []byte
// read advances a String by n bytes and returns them. If less than n bytes
// remain, it returns nil.
func (s *String) read(n int) []byte {
if len(*s) < n {
return nil
}
v := (*s)[:n]
*s = (*s)[n:]
return v
}
// Skip advances the String by n bytes and reports whether it was successful.
func (s *String) Skip(n int) bool {
return s.read(n) != nil
}
// ReadUint8 decodes an 8-bit value into out and advances over it.
// It reports whether the read was successful.
func (s *String) ReadUint8(out *uint8) bool {
v := s.read(1)
if v == nil {
return false
}
*out = uint8(v[0])
return true
}
// ReadUint16 decodes a big-endian, 16-bit value into out and advances over it.
// It reports whether the read was successful.
func (s *String) ReadUint16(out *uint16) bool {
v := s.read(2)
if v == nil {
return false
}
*out = uint16(v[0])<<8 | uint16(v[1])
return true
}
// ReadUint24 decodes a big-endian, 24-bit value into out and advances over it.
// It reports whether the read was successful.
func (s *String) ReadUint24(out *uint32) bool {
v := s.read(3)
if v == nil {
return false
}
*out = uint32(v[0])<<16 | uint32(v[1])<<8 | uint32(v[2])
return true
}
// ReadUint32 decodes a big-endian, 32-bit value into out and advances over it.
// It reports whether the read was successful.
func (s *String) ReadUint32(out *uint32) bool {
v := s.read(4)
if v == nil {
return false
}
*out = uint32(v[0])<<24 | uint32(v[1])<<16 | uint32(v[2])<<8 | uint32(v[3])
return true
}
func (s *String) readUnsigned(out *uint32, length int) bool {
v := s.read(length)
if v == nil {
return false
}
var result uint32
for i := 0; i < length; i++ {
result <<= 8
result |= uint32(v[i])
}
*out = result
return true
}
func (s *String) readLengthPrefixed(lenLen int, outChild *String) bool {
lenBytes := s.read(lenLen)
if lenBytes == nil {
return false
}
var length uint32
for _, b := range lenBytes {
length = length << 8
length = length | uint32(b)
}
if int(length) < 0 {
// This currently cannot overflow because we read uint24 at most, but check
// anyway in case that changes in the future.
return false
}
v := s.read(int(length))
if v == nil {
return false
}
*outChild = v
return true
}
// ReadUint8LengthPrefixed reads the content of an 8-bit length-prefixed value
// into out and advances over it. It reports whether the read was successful.
func (s *String) ReadUint8LengthPrefixed(out *String) bool {
return s.readLengthPrefixed(1, out)
}
// ReadUint16LengthPrefixed reads the content of a big-endian, 16-bit
// length-prefixed value into out and advances over it. It reports whether the
// read was successful.
func (s *String) ReadUint16LengthPrefixed(out *String) bool {
return s.readLengthPrefixed(2, out)
}
// ReadUint24LengthPrefixed reads the content of a big-endian, 24-bit
// length-prefixed value into out and advances over it. It reports whether
// the read was successful.
func (s *String) ReadUint24LengthPrefixed(out *String) bool {
return s.readLengthPrefixed(3, out)
}
// ReadBytes reads n bytes into out and advances over them. It reports
// whether the read was successful.
func (s *String) ReadBytes(out *[]byte, n int) bool {
v := s.read(n)
if v == nil {
return false
}
*out = v
return true
}
// CopyBytes copies len(out) bytes into out and advances over them. It reports
// whether the copy operation was successful.
func (s *String) CopyBytes(out []byte) bool {
n := len(out)
v := s.read(n)
if v == nil {
return false
}
return copy(out, v) == n
}
// Empty reports whether the string does not contain any bytes.
func (s String) Empty() bool {
return len(s) == 0
}
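
As a small usage sketch for the parsing helpers above (not part of the vendored file), the snippet below consumes a TLS-style record with a one-byte type and a 16-bit length-prefixed body:

```go
package main

import (
	"fmt"

	"golang.org/x/crypto/cryptobyte"
)

func main() {
	// Record type 0x16 (handshake), then a 3-byte body "abc".
	wire := []byte{0x16, 0x00, 0x03, 'a', 'b', 'c'}

	s := cryptobyte.String(wire)
	var typ uint8
	var body cryptobyte.String
	if !s.ReadUint8(&typ) || !s.ReadUint16LengthPrefixed(&body) || !s.Empty() {
		panic("malformed record")
	}
	fmt.Printf("type=0x%02x body=%q\n", typ, []byte(body))
}
```
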

781
vendor/golang.org/x/crypto/ocsp/ocsp.go generated vendored Normal file

@ -0,0 +1,781 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// Package ocsp parses OCSP responses as specified in RFC 2560. OCSP responses
// are signed messages attesting to the validity of a certificate for a small
// period of time. This is used to manage revocation for X.509 certificates.
package ocsp // import "golang.org/x/crypto/ocsp"
import (
"crypto"
"crypto/ecdsa"
"crypto/elliptic"
"crypto/rand"
"crypto/rsa"
_ "crypto/sha1"
_ "crypto/sha256"
_ "crypto/sha512"
"crypto/x509"
"crypto/x509/pkix"
"encoding/asn1"
"errors"
"fmt"
"math/big"
"strconv"
"time"
)
var idPKIXOCSPBasic = asn1.ObjectIdentifier([]int{1, 3, 6, 1, 5, 5, 7, 48, 1, 1})
// ResponseStatus contains the result of an OCSP request. See
// https://tools.ietf.org/html/rfc6960#section-2.3
type ResponseStatus int
const (
Success ResponseStatus = 0
Malformed ResponseStatus = 1
InternalError ResponseStatus = 2
TryLater ResponseStatus = 3
// Status code four is unused in OCSP. See
// https://tools.ietf.org/html/rfc6960#section-4.2.1
SignatureRequired ResponseStatus = 5
Unauthorized ResponseStatus = 6
)
func (r ResponseStatus) String() string {
switch r {
case Success:
return "success"
case Malformed:
return "malformed"
case InternalError:
return "internal error"
case TryLater:
return "try later"
case SignatureRequired:
return "signature required"
case Unauthorized:
return "unauthorized"
default:
return "unknown OCSP status: " + strconv.Itoa(int(r))
}
}
// ResponseError is an error that may be returned by ParseResponse to indicate
// that the response itself is an error, not just that it is indicating that a
// certificate is revoked, unknown, etc.
type ResponseError struct {
Status ResponseStatus
}
func (r ResponseError) Error() string {
return "ocsp: error from server: " + r.Status.String()
}
// These are internal structures that reflect the ASN.1 structure of an OCSP
// response. See RFC 2560, section 4.2.
type certID struct {
HashAlgorithm pkix.AlgorithmIdentifier
NameHash []byte
IssuerKeyHash []byte
SerialNumber *big.Int
}
// https://tools.ietf.org/html/rfc2560#section-4.1.1
type ocspRequest struct {
TBSRequest tbsRequest
}
type tbsRequest struct {
Version int `asn1:"explicit,tag:0,default:0,optional"`
RequestorName pkix.RDNSequence `asn1:"explicit,tag:1,optional"`
RequestList []request
}
type request struct {
Cert certID
}
type responseASN1 struct {
Status asn1.Enumerated
Response responseBytes `asn1:"explicit,tag:0,optional"`
}
type responseBytes struct {
ResponseType asn1.ObjectIdentifier
Response []byte
}
type basicResponse struct {
TBSResponseData responseData
SignatureAlgorithm pkix.AlgorithmIdentifier
Signature asn1.BitString
Certificates []asn1.RawValue `asn1:"explicit,tag:0,optional"`
}
type responseData struct {
Raw asn1.RawContent
Version int `asn1:"optional,default:0,explicit,tag:0"`
RawResponderID asn1.RawValue
ProducedAt time.Time `asn1:"generalized"`
Responses []singleResponse
}
type singleResponse struct {
CertID certID
Good asn1.Flag `asn1:"tag:0,optional"`
Revoked revokedInfo `asn1:"tag:1,optional"`
Unknown asn1.Flag `asn1:"tag:2,optional"`
ThisUpdate time.Time `asn1:"generalized"`
NextUpdate time.Time `asn1:"generalized,explicit,tag:0,optional"`
SingleExtensions []pkix.Extension `asn1:"explicit,tag:1,optional"`
}
type revokedInfo struct {
RevocationTime time.Time `asn1:"generalized"`
Reason asn1.Enumerated `asn1:"explicit,tag:0,optional"`
}
var (
oidSignatureMD2WithRSA = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 2}
oidSignatureMD5WithRSA = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 4}
oidSignatureSHA1WithRSA = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 5}
oidSignatureSHA256WithRSA = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 11}
oidSignatureSHA384WithRSA = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 12}
oidSignatureSHA512WithRSA = asn1.ObjectIdentifier{1, 2, 840, 113549, 1, 1, 13}
oidSignatureDSAWithSHA1 = asn1.ObjectIdentifier{1, 2, 840, 10040, 4, 3}
oidSignatureDSAWithSHA256 = asn1.ObjectIdentifier{2, 16, 840, 1, 101, 3, 4, 3, 2}
oidSignatureECDSAWithSHA1 = asn1.ObjectIdentifier{1, 2, 840, 10045, 4, 1}
oidSignatureECDSAWithSHA256 = asn1.ObjectIdentifier{1, 2, 840, 10045, 4, 3, 2}
oidSignatureECDSAWithSHA384 = asn1.ObjectIdentifier{1, 2, 840, 10045, 4, 3, 3}
oidSignatureECDSAWithSHA512 = asn1.ObjectIdentifier{1, 2, 840, 10045, 4, 3, 4}
)
var hashOIDs = map[crypto.Hash]asn1.ObjectIdentifier{
crypto.SHA1: asn1.ObjectIdentifier([]int{1, 3, 14, 3, 2, 26}),
crypto.SHA256: asn1.ObjectIdentifier([]int{2, 16, 840, 1, 101, 3, 4, 2, 1}),
crypto.SHA384: asn1.ObjectIdentifier([]int{2, 16, 840, 1, 101, 3, 4, 2, 2}),
crypto.SHA512: asn1.ObjectIdentifier([]int{2, 16, 840, 1, 101, 3, 4, 2, 3}),
}
// TODO(rlb): This is also from crypto/x509, so same comment as AGL's below
var signatureAlgorithmDetails = []struct {
algo x509.SignatureAlgorithm
oid asn1.ObjectIdentifier
pubKeyAlgo x509.PublicKeyAlgorithm
hash crypto.Hash
}{
{x509.MD2WithRSA, oidSignatureMD2WithRSA, x509.RSA, crypto.Hash(0) /* no value for MD2 */},
{x509.MD5WithRSA, oidSignatureMD5WithRSA, x509.RSA, crypto.MD5},
{x509.SHA1WithRSA, oidSignatureSHA1WithRSA, x509.RSA, crypto.SHA1},
{x509.SHA256WithRSA, oidSignatureSHA256WithRSA, x509.RSA, crypto.SHA256},
{x509.SHA384WithRSA, oidSignatureSHA384WithRSA, x509.RSA, crypto.SHA384},
{x509.SHA512WithRSA, oidSignatureSHA512WithRSA, x509.RSA, crypto.SHA512},
{x509.DSAWithSHA1, oidSignatureDSAWithSHA1, x509.DSA, crypto.SHA1},
{x509.DSAWithSHA256, oidSignatureDSAWithSHA256, x509.DSA, crypto.SHA256},
{x509.ECDSAWithSHA1, oidSignatureECDSAWithSHA1, x509.ECDSA, crypto.SHA1},
{x509.ECDSAWithSHA256, oidSignatureECDSAWithSHA256, x509.ECDSA, crypto.SHA256},
{x509.ECDSAWithSHA384, oidSignatureECDSAWithSHA384, x509.ECDSA, crypto.SHA384},
{x509.ECDSAWithSHA512, oidSignatureECDSAWithSHA512, x509.ECDSA, crypto.SHA512},
}
// TODO(rlb): This is also from crypto/x509, so same comment as AGL's below
func signingParamsForPublicKey(pub interface{}, requestedSigAlgo x509.SignatureAlgorithm) (hashFunc crypto.Hash, sigAlgo pkix.AlgorithmIdentifier, err error) {
var pubType x509.PublicKeyAlgorithm
switch pub := pub.(type) {
case *rsa.PublicKey:
pubType = x509.RSA
hashFunc = crypto.SHA256
sigAlgo.Algorithm = oidSignatureSHA256WithRSA
sigAlgo.Parameters = asn1.RawValue{
Tag: 5,
}
case *ecdsa.PublicKey:
pubType = x509.ECDSA
switch pub.Curve {
case elliptic.P224(), elliptic.P256():
hashFunc = crypto.SHA256
sigAlgo.Algorithm = oidSignatureECDSAWithSHA256
case elliptic.P384():
hashFunc = crypto.SHA384
sigAlgo.Algorithm = oidSignatureECDSAWithSHA384
case elliptic.P521():
hashFunc = crypto.SHA512
sigAlgo.Algorithm = oidSignatureECDSAWithSHA512
default:
err = errors.New("x509: unknown elliptic curve")
}
default:
err = errors.New("x509: only RSA and ECDSA keys supported")
}
if err != nil {
return
}
if requestedSigAlgo == 0 {
return
}
found := false
for _, details := range signatureAlgorithmDetails {
if details.algo == requestedSigAlgo {
if details.pubKeyAlgo != pubType {
err = errors.New("x509: requested SignatureAlgorithm does not match private key type")
return
}
sigAlgo.Algorithm, hashFunc = details.oid, details.hash
if hashFunc == 0 {
err = errors.New("x509: cannot sign with hash function requested")
return
}
found = true
break
}
}
if !found {
err = errors.New("x509: unknown SignatureAlgorithm")
}
return
}
// TODO(agl): this is taken from crypto/x509 and so should probably be exported
// from crypto/x509 or crypto/x509/pkix.
func getSignatureAlgorithmFromOID(oid asn1.ObjectIdentifier) x509.SignatureAlgorithm {
for _, details := range signatureAlgorithmDetails {
if oid.Equal(details.oid) {
return details.algo
}
}
return x509.UnknownSignatureAlgorithm
}
// TODO(rlb): This is not taken from crypto/x509, but it's of the same general form.
func getHashAlgorithmFromOID(target asn1.ObjectIdentifier) crypto.Hash {
for hash, oid := range hashOIDs {
if oid.Equal(target) {
return hash
}
}
return crypto.Hash(0)
}
func getOIDFromHashAlgorithm(target crypto.Hash) asn1.ObjectIdentifier {
for hash, oid := range hashOIDs {
if hash == target {
return oid
}
}
return nil
}
// This is the exposed reflection of the internal OCSP structures.
// The status values that can be expressed in OCSP. See RFC 6960.
const (
// Good means that the certificate is valid.
Good = iota
// Revoked means that the certificate has been deliberately revoked.
Revoked
// Unknown means that the OCSP responder doesn't know about the certificate.
Unknown
// ServerFailed is unused and was never used (see
// https://go-review.googlesource.com/#/c/18944). ParseResponse will
// return a ResponseError when an error response is parsed.
ServerFailed
)
// The enumerated reasons for revoking a certificate. See RFC 5280.
const (
Unspecified = 0
KeyCompromise = 1
CACompromise = 2
AffiliationChanged = 3
Superseded = 4
CessationOfOperation = 5
CertificateHold = 6
RemoveFromCRL = 8
PrivilegeWithdrawn = 9
AACompromise = 10
)
// Request represents an OCSP request. See RFC 6960.
type Request struct {
HashAlgorithm crypto.Hash
IssuerNameHash []byte
IssuerKeyHash []byte
SerialNumber *big.Int
}
// Marshal marshals the OCSP request to ASN.1 DER encoded form.
func (req *Request) Marshal() ([]byte, error) {
hashAlg := getOIDFromHashAlgorithm(req.HashAlgorithm)
if hashAlg == nil {
return nil, errors.New("Unknown hash algorithm")
}
return asn1.Marshal(ocspRequest{
tbsRequest{
Version: 0,
RequestList: []request{
{
Cert: certID{
pkix.AlgorithmIdentifier{
Algorithm: hashAlg,
Parameters: asn1.RawValue{Tag: 5 /* ASN.1 NULL */},
},
req.IssuerNameHash,
req.IssuerKeyHash,
req.SerialNumber,
},
},
},
},
})
}
// Response represents an OCSP response containing a single SingleResponse. See
// RFC 6960.
type Response struct {
// Status is one of {Good, Revoked, Unknown}
Status int
SerialNumber *big.Int
ProducedAt, ThisUpdate, NextUpdate, RevokedAt time.Time
RevocationReason int
Certificate *x509.Certificate
// TBSResponseData contains the raw bytes of the signed response. If
// Certificate is nil then this can be used to verify Signature.
TBSResponseData []byte
Signature []byte
SignatureAlgorithm x509.SignatureAlgorithm
// IssuerHash is the hash used to compute the IssuerNameHash and IssuerKeyHash.
// Valid values are crypto.SHA1, crypto.SHA256, crypto.SHA384, and crypto.SHA512.
// If zero, the default is crypto.SHA1.
IssuerHash crypto.Hash
// RawResponderName optionally contains the DER-encoded subject of the
// responder certificate. Exactly one of RawResponderName and
// ResponderKeyHash is set.
RawResponderName []byte
// ResponderKeyHash optionally contains the SHA-1 hash of the
// responder's public key. Exactly one of RawResponderName and
// ResponderKeyHash is set.
ResponderKeyHash []byte
// Extensions contains raw X.509 extensions from the singleExtensions field
// of the OCSP response. When parsing certificates, this can be used to
// extract non-critical extensions that are not parsed by this package. When
// marshaling OCSP responses, the Extensions field is ignored, see
// ExtraExtensions.
Extensions []pkix.Extension
// ExtraExtensions contains extensions to be copied, raw, into any marshaled
// OCSP response (in the singleExtensions field). Values override any
// extensions that would otherwise be produced based on the other fields. The
// ExtraExtensions field is not populated when parsing certificates, see
// Extensions.
ExtraExtensions []pkix.Extension
}
// These are pre-serialized error responses for the various non-success codes
// defined by OCSP. The Unauthorized code in particular can be used by an OCSP
// responder that supports only pre-signed responses as a response to requests
// for certificates with unknown status. See RFC 5019.
var (
MalformedRequestErrorResponse = []byte{0x30, 0x03, 0x0A, 0x01, 0x01}
InternalErrorErrorResponse = []byte{0x30, 0x03, 0x0A, 0x01, 0x02}
TryLaterErrorResponse = []byte{0x30, 0x03, 0x0A, 0x01, 0x03}
SigRequredErrorResponse = []byte{0x30, 0x03, 0x0A, 0x01, 0x05}
UnauthorizedErrorResponse = []byte{0x30, 0x03, 0x0A, 0x01, 0x06}
)
// CheckSignatureFrom checks that the signature in resp is a valid signature
// from issuer. This should only be used if resp.Certificate is nil. Otherwise,
// the OCSP response contained an intermediate certificate that created the
// signature. That signature is checked by ParseResponse and only
// resp.Certificate remains to be validated.
func (resp *Response) CheckSignatureFrom(issuer *x509.Certificate) error {
return issuer.CheckSignature(resp.SignatureAlgorithm, resp.TBSResponseData, resp.Signature)
}
// ParseError results from an invalid OCSP response.
type ParseError string
func (p ParseError) Error() string {
return string(p)
}
// ParseRequest parses an OCSP request in DER form. It only supports
// requests for a single certificate. Signed requests are not supported.
// If a request includes a signature, it will result in a ParseError.
func ParseRequest(bytes []byte) (*Request, error) {
var req ocspRequest
rest, err := asn1.Unmarshal(bytes, &req)
if err != nil {
return nil, err
}
if len(rest) > 0 {
return nil, ParseError("trailing data in OCSP request")
}
if len(req.TBSRequest.RequestList) == 0 {
return nil, ParseError("OCSP request contains no request body")
}
innerRequest := req.TBSRequest.RequestList[0]
hashFunc := getHashAlgorithmFromOID(innerRequest.Cert.HashAlgorithm.Algorithm)
if hashFunc == crypto.Hash(0) {
return nil, ParseError("OCSP request uses unknown hash function")
}
return &Request{
HashAlgorithm: hashFunc,
IssuerNameHash: innerRequest.Cert.NameHash,
IssuerKeyHash: innerRequest.Cert.IssuerKeyHash,
SerialNumber: innerRequest.Cert.SerialNumber,
}, nil
}
// ParseResponse parses an OCSP response in DER form. It only supports
// responses for a single certificate. If the response contains a certificate
// then the signature over the response is checked. If issuer is not nil then
// it will be used to validate the signature or embedded certificate.
//
// Invalid responses and parse failures will result in a ParseError.
// Error responses will result in a ResponseError.
func ParseResponse(bytes []byte, issuer *x509.Certificate) (*Response, error) {
return ParseResponseForCert(bytes, nil, issuer)
}
// ParseResponseForCert parses an OCSP response in DER form and searches for a
// Response relating to cert. If such a Response is found and the OCSP response
// contains a certificate then the signature over the response is checked. If
// issuer is not nil then it will be used to validate the signature or embedded
// certificate.
//
// Invalid responses and parse failures will result in a ParseError.
// Error responses will result in a ResponseError.
func ParseResponseForCert(bytes []byte, cert, issuer *x509.Certificate) (*Response, error) {
var resp responseASN1
rest, err := asn1.Unmarshal(bytes, &resp)
if err != nil {
return nil, err
}
if len(rest) > 0 {
return nil, ParseError("trailing data in OCSP response")
}
if status := ResponseStatus(resp.Status); status != Success {
return nil, ResponseError{status}
}
if !resp.Response.ResponseType.Equal(idPKIXOCSPBasic) {
return nil, ParseError("bad OCSP response type")
}
var basicResp basicResponse
rest, err = asn1.Unmarshal(resp.Response.Response, &basicResp)
if err != nil {
return nil, err
}
if n := len(basicResp.TBSResponseData.Responses); n == 0 || cert == nil && n > 1 {
return nil, ParseError("OCSP response contains bad number of responses")
}
var singleResp singleResponse
if cert == nil {
singleResp = basicResp.TBSResponseData.Responses[0]
} else {
match := false
for _, resp := range basicResp.TBSResponseData.Responses {
if cert.SerialNumber.Cmp(resp.CertID.SerialNumber) == 0 {
singleResp = resp
match = true
break
}
}
if !match {
return nil, ParseError("no response matching the supplied certificate")
}
}
ret := &Response{
TBSResponseData: basicResp.TBSResponseData.Raw,
Signature: basicResp.Signature.RightAlign(),
SignatureAlgorithm: getSignatureAlgorithmFromOID(basicResp.SignatureAlgorithm.Algorithm),
Extensions: singleResp.SingleExtensions,
SerialNumber: singleResp.CertID.SerialNumber,
ProducedAt: basicResp.TBSResponseData.ProducedAt,
ThisUpdate: singleResp.ThisUpdate,
NextUpdate: singleResp.NextUpdate,
}
// Handle the ResponderID CHOICE tag. ResponderID can be flattened into
// TBSResponseData once https://go-review.googlesource.com/34503 has been
// released.
rawResponderID := basicResp.TBSResponseData.RawResponderID
switch rawResponderID.Tag {
case 1: // Name
var rdn pkix.RDNSequence
if rest, err := asn1.Unmarshal(rawResponderID.Bytes, &rdn); err != nil || len(rest) != 0 {
return nil, ParseError("invalid responder name")
}
ret.RawResponderName = rawResponderID.Bytes
case 2: // KeyHash
if rest, err := asn1.Unmarshal(rawResponderID.Bytes, &ret.ResponderKeyHash); err != nil || len(rest) != 0 {
return nil, ParseError("invalid responder key hash")
}
default:
return nil, ParseError("invalid responder id tag")
}
if len(basicResp.Certificates) > 0 {
// Responders should only send a single certificate (if they
// send any) that connects the responder's certificate to the
// original issuer. We accept responses with multiple
// certificates due to a number of responders sending them[1], but
// ignore all but the first.
//
// [1] https://github.com/golang/go/issues/21527
ret.Certificate, err = x509.ParseCertificate(basicResp.Certificates[0].FullBytes)
if err != nil {
return nil, err
}
if err := ret.CheckSignatureFrom(ret.Certificate); err != nil {
return nil, ParseError("bad signature on embedded certificate: " + err.Error())
}
if issuer != nil {
if err := issuer.CheckSignature(ret.Certificate.SignatureAlgorithm, ret.Certificate.RawTBSCertificate, ret.Certificate.Signature); err != nil {
return nil, ParseError("bad OCSP signature: " + err.Error())
}
}
} else if issuer != nil {
if err := ret.CheckSignatureFrom(issuer); err != nil {
return nil, ParseError("bad OCSP signature: " + err.Error())
}
}
for _, ext := range singleResp.SingleExtensions {
if ext.Critical {
return nil, ParseError("unsupported critical extension")
}
}
for h, oid := range hashOIDs {
if singleResp.CertID.HashAlgorithm.Algorithm.Equal(oid) {
ret.IssuerHash = h
break
}
}
if ret.IssuerHash == 0 {
return nil, ParseError("unsupported issuer hash algorithm")
}
switch {
case bool(singleResp.Good):
ret.Status = Good
case bool(singleResp.Unknown):
ret.Status = Unknown
default:
ret.Status = Revoked
ret.RevokedAt = singleResp.Revoked.RevocationTime
ret.RevocationReason = int(singleResp.Revoked.Reason)
}
return ret, nil
}
// RequestOptions contains options for constructing OCSP requests.
type RequestOptions struct {
// Hash contains the hash function that should be used when
// constructing the OCSP request. If zero, SHA-1 will be used.
Hash crypto.Hash
}
func (opts *RequestOptions) hash() crypto.Hash {
if opts == nil || opts.Hash == 0 {
// SHA-1 is nearly universally used in OCSP.
return crypto.SHA1
}
return opts.Hash
}
// CreateRequest returns a DER-encoded, OCSP request for the status of cert. If
// opts is nil then sensible defaults are used.
func CreateRequest(cert, issuer *x509.Certificate, opts *RequestOptions) ([]byte, error) {
hashFunc := opts.hash()
// OCSP seems to be the only place where these raw hash identifiers are
// used. I took the following from
// http://msdn.microsoft.com/en-us/library/ff635603.aspx
_, ok := hashOIDs[hashFunc]
if !ok {
return nil, x509.ErrUnsupportedAlgorithm
}
if !hashFunc.Available() {
return nil, x509.ErrUnsupportedAlgorithm
}
h := opts.hash().New()
var publicKeyInfo struct {
Algorithm pkix.AlgorithmIdentifier
PublicKey asn1.BitString
}
if _, err := asn1.Unmarshal(issuer.RawSubjectPublicKeyInfo, &publicKeyInfo); err != nil {
return nil, err
}
h.Write(publicKeyInfo.PublicKey.RightAlign())
issuerKeyHash := h.Sum(nil)
h.Reset()
h.Write(issuer.RawSubject)
issuerNameHash := h.Sum(nil)
req := &Request{
HashAlgorithm: hashFunc,
IssuerNameHash: issuerNameHash,
IssuerKeyHash: issuerKeyHash,
SerialNumber: cert.SerialNumber,
}
return req.Marshal()
}
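
A typical client-side flow built on CreateRequest and ParseResponseForCert looks roughly like the sketch below. The package and helper names, the responder URL parameter, and the HTTP transport are ours; only the ocsp calls come from this package, and `application/ocsp-request` is the media type defined by RFC 6960.

```go
package ocspclient // hypothetical package

import (
	"bytes"
	"crypto/x509"
	"io/ioutil"
	"net/http"

	"golang.org/x/crypto/ocsp"
)

// checkStatus builds an OCSP request for cert, POSTs it to responderURL,
// and parses the responder's answer against cert and issuer.
func checkStatus(cert, issuer *x509.Certificate, responderURL string) (*ocsp.Response, error) {
	der, err := ocsp.CreateRequest(cert, issuer, nil) // nil opts => SHA-1 CertID
	if err != nil {
		return nil, err
	}
	httpResp, err := http.Post(responderURL, "application/ocsp-request", bytes.NewReader(der))
	if err != nil {
		return nil, err
	}
	defer httpResp.Body.Close()
	body, err := ioutil.ReadAll(httpResp.Body)
	if err != nil {
		return nil, err
	}
	return ocsp.ParseResponseForCert(body, cert, issuer)
}
```
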
// CreateResponse returns a DER-encoded OCSP response with the specified contents.
// The fields in the response are populated as follows:
//
// The responder cert is used to populate the responder's name field, and the
// certificate itself is provided alongside the OCSP response signature.
//
// The issuer cert is used to populate the IssuerNameHash and IssuerKeyHash fields.
//
// The template is used to populate the SerialNumber, Status, RevokedAt,
// RevocationReason, ThisUpdate, and NextUpdate fields.
//
// If template.IssuerHash is not set, SHA1 will be used.
//
// The ProducedAt date is automatically set to the current date, to the nearest minute.
func CreateResponse(issuer, responderCert *x509.Certificate, template Response, priv crypto.Signer) ([]byte, error) {
var publicKeyInfo struct {
Algorithm pkix.AlgorithmIdentifier
PublicKey asn1.BitString
}
if _, err := asn1.Unmarshal(issuer.RawSubjectPublicKeyInfo, &publicKeyInfo); err != nil {
return nil, err
}
if template.IssuerHash == 0 {
template.IssuerHash = crypto.SHA1
}
hashOID := getOIDFromHashAlgorithm(template.IssuerHash)
if hashOID == nil {
return nil, errors.New("unsupported issuer hash algorithm")
}
if !template.IssuerHash.Available() {
return nil, fmt.Errorf("issuer hash algorithm %v not linked into binary", template.IssuerHash)
}
h := template.IssuerHash.New()
h.Write(publicKeyInfo.PublicKey.RightAlign())
issuerKeyHash := h.Sum(nil)
h.Reset()
h.Write(issuer.RawSubject)
issuerNameHash := h.Sum(nil)
innerResponse := singleResponse{
CertID: certID{
HashAlgorithm: pkix.AlgorithmIdentifier{
Algorithm: hashOID,
Parameters: asn1.RawValue{Tag: 5 /* ASN.1 NULL */},
},
NameHash: issuerNameHash,
IssuerKeyHash: issuerKeyHash,
SerialNumber: template.SerialNumber,
},
ThisUpdate: template.ThisUpdate.UTC(),
NextUpdate: template.NextUpdate.UTC(),
SingleExtensions: template.ExtraExtensions,
}
switch template.Status {
case Good:
innerResponse.Good = true
case Unknown:
innerResponse.Unknown = true
case Revoked:
innerResponse.Revoked = revokedInfo{
RevocationTime: template.RevokedAt.UTC(),
Reason: asn1.Enumerated(template.RevocationReason),
}
}
rawResponderID := asn1.RawValue{
Class: 2, // context-specific
Tag: 1, // Name (explicit tag)
IsCompound: true,
Bytes: responderCert.RawSubject,
}
tbsResponseData := responseData{
Version: 0,
RawResponderID: rawResponderID,
ProducedAt: time.Now().Truncate(time.Minute).UTC(),
Responses: []singleResponse{innerResponse},
}
tbsResponseDataDER, err := asn1.Marshal(tbsResponseData)
if err != nil {
return nil, err
}
hashFunc, signatureAlgorithm, err := signingParamsForPublicKey(priv.Public(), template.SignatureAlgorithm)
if err != nil {
return nil, err
}
responseHash := hashFunc.New()
responseHash.Write(tbsResponseDataDER)
signature, err := priv.Sign(rand.Reader, responseHash.Sum(nil), hashFunc)
if err != nil {
return nil, err
}
response := basicResponse{
TBSResponseData: tbsResponseData,
SignatureAlgorithm: signatureAlgorithm,
Signature: asn1.BitString{
Bytes: signature,
BitLength: 8 * len(signature),
},
}
if template.Certificate != nil {
response.Certificates = []asn1.RawValue{
{FullBytes: template.Certificate.Raw},
}
}
responseDER, err := asn1.Marshal(response)
if err != nil {
return nil, err
}
return asn1.Marshal(responseASN1{
Status: asn1.Enumerated(Success),
Response: responseBytes{
ResponseType: idPKIXOCSPBasic,
Response: responseDER,
},
})
}
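
On the responder side, CreateResponse is usually driven from a small template like the one below (an illustrative sketch; the package and helper names and the 24-hour validity window are ours):

```go
package ocspresponder // hypothetical package

import (
	"crypto"
	"crypto/x509"
	"math/big"
	"time"

	"golang.org/x/crypto/ocsp"
)

// buildGoodResponse signs a "good" status for the given serial number,
// valid for 24 hours from now.
func buildGoodResponse(issuer, responderCert *x509.Certificate, key crypto.Signer, serial *big.Int) ([]byte, error) {
	template := ocsp.Response{
		Status:       ocsp.Good,
		SerialNumber: serial,
		ThisUpdate:   time.Now(),
		NextUpdate:   time.Now().Add(24 * time.Hour),
	}
	return ocsp.CreateResponse(issuer, responderCert, template, key)
}
```

A client would then inspect the parsed Response.Status against ocsp.Good, ocsp.Revoked, or ocsp.Unknown and, for revocations, consult RevokedAt and RevocationReason.
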