Bump to latest libkv/libnetwork + dependencies

The latest libkv uses a different etcd library. Unfortunately
that library uses some funky import paths, so I've added a new cleanup
routine for our vendor scripts that normalizes those imports
to be consistent with how imports work in this tree.

Signed-off-by: Daniel Hiltgen <daniel.hiltgen@docker.com>
Daniel Hiltgen 2015-10-13 11:33:47 -07:00
parent 4ea3ff7061
commit 7078e7ddf4
76 changed files with 44457 additions and 1968 deletions


@@ -128,3 +128,13 @@ clean() {
echo done
}
# Fix up hard-coded imports that refer to Godeps paths so they'll work with our vendoring
fix_rewritten_imports () {
local pkg="$1"
local remove="${pkg}/Godeps/_workspace/src/"
local target="vendor/src/$pkg"
echo "$pkg: fixing rewritten imports"
find "$target" -name \*.go -exec sed -i -e "s|\"${remove}|\"|g" {} \;
}
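
For illustration, this is roughly the effect the new routine has on the vendored etcd sources: the `Godeps/_workspace/src/` prefix that upstream bakes into its import paths is stripped, so the imports resolve normally under `vendor/src`. A minimal before/after sketch (the path comes from the etcd README further down; the snippet itself is not part of the diff):

```go
// Upstream etcd files are committed with rewritten import paths such as:
//
//	import "github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
//
// After fix_rewritten_imports runs the sed over vendor/src/github.com/coreos/etcd,
// the same file imports the package the way the rest of this tree does:
package client

import "golang.org/x/net/context"

var _ context.Context // keep the import referenced so this sketch compiles
```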


@@ -20,18 +20,20 @@ clone git github.com/tchap/go-patricia v2.1.0
clone git golang.org/x/net 3cffabab72adf04f8e3b01c5baf775361837b5fe https://github.com/golang/net.git
#get libnetwork packages
clone git github.com/docker/libnetwork 0521fe53fc3e7d4e7c2e463800f36662c9169f20
clone git github.com/docker/libnetwork dd1c5f0ffe7697f75a82bd8e4bbd6f225ec5e383
clone git github.com/armon/go-metrics eb0af217e5e9747e41dd5303755356b62d28e3ec
clone git github.com/hashicorp/go-msgpack 71c2886f5a673a35f909803f38ece5810165097b
clone git github.com/hashicorp/memberlist 9a1e242e454d2443df330bdd51a436d5a9058fc4
clone git github.com/hashicorp/serf 7151adcef72687bf95f451a2e0ba15cb19412bf2
clone git github.com/docker/libkv 958cd316db2bc916466bdcc632a3188d62ce6e87
clone git github.com/docker/libkv 749af6c5b3fb755bec1738cc5e0d3a6f1574d730
clone git github.com/vishvananda/netns 604eaf189ee867d8c147fafc28def2394e878d25
clone git github.com/vishvananda/netlink 4b5dce31de6d42af5bb9811c6d265472199e0fec
clone git github.com/BurntSushi/toml f706d00e3de6abe700c994cdd545a1a4915af060
clone git github.com/samuel/go-zookeeper d0e0d8e11f318e000a8cc434616d69e329edc374
clone git github.com/deckarep/golang-set ef32fa3046d9f249d399f98ebaf9be944430fd1d
clone git github.com/coreos/go-etcd v2.0.0
clone git github.com/coreos/etcd v2.2.0
fix_rewritten_imports github.com/coreos/etcd
clone git github.com/ugorji/go 5abd4e96a45c386928ed2ca2a7ef63e2533e18ec
clone git github.com/hashicorp/consul v0.5.2
clone git github.com/boltdb/bolt v1.0


@@ -0,0 +1,94 @@
# etcd/client
etcd/client is the Go client library for etcd.
[![GoDoc](https://godoc.org/github.com/coreos/etcd/client?status.png)](https://godoc.org/github.com/coreos/etcd/client)
## Install
```bash
go get github.com/coreos/etcd/client
```
## Usage
```go
package main
import (
"log"
"time"
"github.com/coreos/etcd/Godeps/_workspace/src/golang.org/x/net/context"
"github.com/coreos/etcd/client"
)
func main() {
cfg := client.Config{
Endpoints: []string{"http://127.0.0.1:2379"},
Transport: client.DefaultTransport,
// set timeout per request to fail fast when the target endpoint is unavailable
HeaderTimeoutPerRequest: time.Second,
}
c, err := client.New(cfg)
if err != nil {
log.Fatal(err)
}
kapi := client.NewKeysAPI(c)
resp, err := kapi.Set(context.Background(), "foo", "bar", nil)
if err != nil {
log.Fatal(err)
}
log.Printf("set done; resp: %+v", resp)
}
```
## Error Handling
etcd client might return three types of errors.
- context error
Each API call takes a `context` as its first parameter. A context can be canceled or have an attached deadline. If the context is canceled or reaches its deadline, the corresponding context error will be returned no matter what internal errors the API call has already encountered.
- cluster error
Each API call tries to send a request to the cluster endpoints one by one until it successfully gets a response. If a request to an endpoint fails, due to exceeding the per-request timeout or connection issues, the error will be added to a list of errors. If all possible endpoints fail, a cluster error that includes all encountered errors will be returned.
- response error
If the response received from the cluster is invalid, a plain string error will be returned. For example, it might be an invalid JSON error.
Here is example code for handling client errors:
```go
cfg := client.Config{Endpoints: []string{"http://etcd1:2379", "http://etcd2:2379", "http://etcd3:2379"}}
c, err := client.New(cfg)
if err != nil {
log.Fatal(err)
}
kapi := client.NewKeysAPI(c)
resp, err := kapi.Set(ctx, "test", "bar", nil)
if err != nil {
if err == context.Canceled {
// ctx is canceled by another routine
} else if err == context.DeadlineExceeded {
// ctx is attached with a deadline and it exceeded
} else if cerr, ok := err.(*client.ClusterError); ok {
// process (cerr.Errors)
} else {
// bad cluster endpoints, which are not etcd servers
}
}
```
## Caveat
1. etcd/client prefers to use the same endpoint as long as the endpoint continues to work well. This saves socket resources, and improves efficiency for both client and server side. This preference doesn't remove consistency from the data consumed by the client because data replicated to each etcd member has already passed through the consensus process.
2. etcd/client does round-robin rotation on other available endpoints if the preferred endpoint isn't functioning properly. For example, if the member that etcd/client connects to is hard killed, etcd/client will fail on the first attempt with the killed member, and succeed on the second attempt with another member. If it fails to talk to all available endpoints, it will return all the errors that happened.
3. The default etcd/client currently cannot handle the case where the remote server is SIGSTOPed. The TCP keepalive mechanism doesn't help in this scenario because the operating system may still send TCP keep-alive packets. Over time we'd like to improve this functionality, but solving this issue isn't high priority because a real-life case in which a server is stopped, but the connection is kept alive, hasn't been brought to our attention.
4. etcd/client cannot detect whether the member in use is healthy when doing read requests. If the member is isolated from the cluster, etcd/client may retrieve outdated data. As a workaround, users could monitor the experimental /health endpoint for member health information. We are improving it at [#3265](https://github.com/coreos/etcd/issues/3265).
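
Related to caveats 2 and 4, the client can refresh its endpoint list from the cluster itself via `Sync`/`AutoSync` (see the `Client` interface in client.go below). A minimal sketch, assuming a reachable member at 127.0.0.1:2379 and the normalized import paths used in this tree:

```go
package main

import (
	"log"
	"time"

	"github.com/coreos/etcd/client"
	"golang.org/x/net/context"
)

func main() {
	c, err := client.New(client.Config{
		Endpoints: []string{"http://127.0.0.1:2379"},
		Transport: client.DefaultTransport,
	})
	if err != nil {
		log.Fatal(err)
	}
	// Re-sync the cached member list every 10 seconds for one minute,
	// then report the endpoints the client ended up with.
	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
	defer cancel()
	if err := c.AutoSync(ctx, 10*time.Second); err != nil && err != context.DeadlineExceeded {
		log.Print(err)
	}
	log.Println("endpoints:", c.Endpoints())
}
```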


@@ -0,0 +1,235 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
import (
"bytes"
"encoding/json"
"net/http"
"net/url"
"golang.org/x/net/context"
)
type Role struct {
Role string `json:"role"`
Permissions Permissions `json:"permissions"`
Grant *Permissions `json:"grant,omitempty"`
Revoke *Permissions `json:"revoke,omitempty"`
}
type Permissions struct {
KV rwPermission `json:"kv"`
}
type rwPermission struct {
Read []string `json:"read"`
Write []string `json:"write"`
}
type PermissionType int
const (
ReadPermission PermissionType = iota
WritePermission
ReadWritePermission
)
// NewAuthRoleAPI constructs a new AuthRoleAPI that uses HTTP to
// interact with etcd's role creation and modification features.
func NewAuthRoleAPI(c Client) AuthRoleAPI {
return &httpAuthRoleAPI{
client: c,
}
}
type AuthRoleAPI interface {
// Add a role.
AddRole(ctx context.Context, role string) error
// Remove a role.
RemoveRole(ctx context.Context, role string) error
// Get role details.
GetRole(ctx context.Context, role string) (*Role, error)
// Grant a role some permission prefixes for the KV store.
GrantRoleKV(ctx context.Context, role string, prefixes []string, permType PermissionType) (*Role, error)
// Revoke some permission prefixes for a role on the KV store.
RevokeRoleKV(ctx context.Context, role string, prefixes []string, permType PermissionType) (*Role, error)
// List roles.
ListRoles(ctx context.Context) ([]string, error)
}
type httpAuthRoleAPI struct {
client httpClient
}
type authRoleAPIAction struct {
verb string
name string
role *Role
}
type authRoleAPIList struct{}
func (list *authRoleAPIList) HTTPRequest(ep url.URL) *http.Request {
u := v2AuthURL(ep, "roles", "")
req, _ := http.NewRequest("GET", u.String(), nil)
req.Header.Set("Content-Type", "application/json")
return req
}
func (l *authRoleAPIAction) HTTPRequest(ep url.URL) *http.Request {
u := v2AuthURL(ep, "roles", l.name)
if l.role == nil {
req, _ := http.NewRequest(l.verb, u.String(), nil)
return req
}
b, err := json.Marshal(l.role)
if err != nil {
panic(err)
}
body := bytes.NewReader(b)
req, _ := http.NewRequest(l.verb, u.String(), body)
req.Header.Set("Content-Type", "application/json")
return req
}
func (r *httpAuthRoleAPI) ListRoles(ctx context.Context) ([]string, error) {
resp, body, err := r.client.Do(ctx, &authRoleAPIList{})
if err != nil {
return nil, err
}
if err := assertStatusCode(resp.StatusCode, http.StatusOK); err != nil {
return nil, err
}
var userList struct {
Roles []string `json:"roles"`
}
err = json.Unmarshal(body, &userList)
if err != nil {
return nil, err
}
return userList.Roles, nil
}
func (r *httpAuthRoleAPI) AddRole(ctx context.Context, rolename string) error {
role := &Role{
Role: rolename,
}
return r.addRemoveRole(ctx, &authRoleAPIAction{
verb: "PUT",
name: rolename,
role: role,
})
}
func (r *httpAuthRoleAPI) RemoveRole(ctx context.Context, rolename string) error {
return r.addRemoveRole(ctx, &authRoleAPIAction{
verb: "DELETE",
name: rolename,
})
}
func (r *httpAuthRoleAPI) addRemoveRole(ctx context.Context, req *authRoleAPIAction) error {
resp, body, err := r.client.Do(ctx, req)
if err != nil {
return err
}
if err := assertStatusCode(resp.StatusCode, http.StatusOK, http.StatusCreated); err != nil {
var sec authError
err := json.Unmarshal(body, &sec)
if err != nil {
return err
}
return sec
}
return nil
}
func (r *httpAuthRoleAPI) GetRole(ctx context.Context, rolename string) (*Role, error) {
return r.modRole(ctx, &authRoleAPIAction{
verb: "GET",
name: rolename,
})
}
func buildRWPermission(prefixes []string, permType PermissionType) rwPermission {
var out rwPermission
switch permType {
case ReadPermission:
out.Read = prefixes
case WritePermission:
out.Write = prefixes
case ReadWritePermission:
out.Read = prefixes
out.Write = prefixes
}
return out
}
func (r *httpAuthRoleAPI) GrantRoleKV(ctx context.Context, rolename string, prefixes []string, permType PermissionType) (*Role, error) {
rwp := buildRWPermission(prefixes, permType)
role := &Role{
Role: rolename,
Grant: &Permissions{
KV: rwp,
},
}
return r.modRole(ctx, &authRoleAPIAction{
verb: "PUT",
name: rolename,
role: role,
})
}
func (r *httpAuthRoleAPI) RevokeRoleKV(ctx context.Context, rolename string, prefixes []string, permType PermissionType) (*Role, error) {
rwp := buildRWPermission(prefixes, permType)
role := &Role{
Role: rolename,
Revoke: &Permissions{
KV: rwp,
},
}
return r.modRole(ctx, &authRoleAPIAction{
verb: "PUT",
name: rolename,
role: role,
})
}
func (r *httpAuthRoleAPI) modRole(ctx context.Context, req *authRoleAPIAction) (*Role, error) {
resp, body, err := r.client.Do(ctx, req)
if err != nil {
return nil, err
}
if err := assertStatusCode(resp.StatusCode, http.StatusOK); err != nil {
var sec authError
err := json.Unmarshal(body, &sec)
if err != nil {
return nil, err
}
return nil, sec
}
var role Role
err = json.Unmarshal(body, &role)
if err != nil {
return nil, err
}
return &role, nil
}
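
A minimal sketch of driving the role API defined above against a cluster with the v2 auth endpoints, reusing the `c` built in the README example earlier; the role name and key prefix are illustrative, not taken from the diff:

```go
roles := client.NewAuthRoleAPI(c)
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

// Create a role and grant it read/write access to keys under /app/.
if err := roles.AddRole(ctx, "app-role"); err != nil {
	log.Fatal(err)
}
if _, err := roles.GrantRoleKV(ctx, "app-role", []string{"/app/*"}, client.ReadWritePermission); err != nil {
	log.Fatal(err)
}
names, err := roles.ListRoles(ctx)
if err != nil {
	log.Fatal(err)
}
log.Println("roles:", names)
```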


@@ -0,0 +1,297 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
import (
"bytes"
"encoding/json"
"net/http"
"net/url"
"path"
"golang.org/x/net/context"
)
var (
defaultV2AuthPrefix = "/v2/auth"
)
type User struct {
User string `json:"user"`
Password string `json:"password,omitempty"`
Roles []string `json:"roles"`
Grant []string `json:"grant,omitempty"`
Revoke []string `json:"revoke,omitempty"`
}
func v2AuthURL(ep url.URL, action string, name string) *url.URL {
if name != "" {
ep.Path = path.Join(ep.Path, defaultV2AuthPrefix, action, name)
return &ep
}
ep.Path = path.Join(ep.Path, defaultV2AuthPrefix, action)
return &ep
}
// NewAuthAPI constructs a new AuthAPI that uses HTTP to
// interact with etcd's general auth features.
func NewAuthAPI(c Client) AuthAPI {
return &httpAuthAPI{
client: c,
}
}
type AuthAPI interface {
// Enable auth.
Enable(ctx context.Context) error
// Disable auth.
Disable(ctx context.Context) error
}
type httpAuthAPI struct {
client httpClient
}
func (s *httpAuthAPI) Enable(ctx context.Context) error {
return s.enableDisable(ctx, &authAPIAction{"PUT"})
}
func (s *httpAuthAPI) Disable(ctx context.Context) error {
return s.enableDisable(ctx, &authAPIAction{"DELETE"})
}
func (s *httpAuthAPI) enableDisable(ctx context.Context, req httpAction) error {
resp, body, err := s.client.Do(ctx, req)
if err != nil {
return err
}
if err := assertStatusCode(resp.StatusCode, http.StatusOK, http.StatusCreated); err != nil {
var sec authError
err := json.Unmarshal(body, &sec)
if err != nil {
return err
}
return sec
}
return nil
}
type authAPIAction struct {
verb string
}
func (l *authAPIAction) HTTPRequest(ep url.URL) *http.Request {
u := v2AuthURL(ep, "enable", "")
req, _ := http.NewRequest(l.verb, u.String(), nil)
return req
}
type authError struct {
Message string `json:"message"`
Code int `json:"-"`
}
func (e authError) Error() string {
return e.Message
}
// NewAuthUserAPI constructs a new AuthUserAPI that uses HTTP to
// interact with etcd's user creation and modification features.
func NewAuthUserAPI(c Client) AuthUserAPI {
return &httpAuthUserAPI{
client: c,
}
}
type AuthUserAPI interface {
// Add a user.
AddUser(ctx context.Context, username string, password string) error
// Remove a user.
RemoveUser(ctx context.Context, username string) error
// Get user details.
GetUser(ctx context.Context, username string) (*User, error)
// Grant a user some permission roles.
GrantUser(ctx context.Context, username string, roles []string) (*User, error)
// Revoke some permission roles from a user.
RevokeUser(ctx context.Context, username string, roles []string) (*User, error)
// Change the user's password.
ChangePassword(ctx context.Context, username string, password string) (*User, error)
// List users.
ListUsers(ctx context.Context) ([]string, error)
}
type httpAuthUserAPI struct {
client httpClient
}
type authUserAPIAction struct {
verb string
username string
user *User
}
type authUserAPIList struct{}
func (list *authUserAPIList) HTTPRequest(ep url.URL) *http.Request {
u := v2AuthURL(ep, "users", "")
req, _ := http.NewRequest("GET", u.String(), nil)
req.Header.Set("Content-Type", "application/json")
return req
}
func (l *authUserAPIAction) HTTPRequest(ep url.URL) *http.Request {
u := v2AuthURL(ep, "users", l.username)
if l.user == nil {
req, _ := http.NewRequest(l.verb, u.String(), nil)
return req
}
b, err := json.Marshal(l.user)
if err != nil {
panic(err)
}
body := bytes.NewReader(b)
req, _ := http.NewRequest(l.verb, u.String(), body)
req.Header.Set("Content-Type", "application/json")
return req
}
func (u *httpAuthUserAPI) ListUsers(ctx context.Context) ([]string, error) {
resp, body, err := u.client.Do(ctx, &authUserAPIList{})
if err != nil {
return nil, err
}
if err := assertStatusCode(resp.StatusCode, http.StatusOK); err != nil {
var sec authError
err := json.Unmarshal(body, &sec)
if err != nil {
return nil, err
}
return nil, sec
}
var userList struct {
Users []string `json:"users"`
}
err = json.Unmarshal(body, &userList)
if err != nil {
return nil, err
}
return userList.Users, nil
}
func (u *httpAuthUserAPI) AddUser(ctx context.Context, username string, password string) error {
user := &User{
User: username,
Password: password,
}
return u.addRemoveUser(ctx, &authUserAPIAction{
verb: "PUT",
username: username,
user: user,
})
}
func (u *httpAuthUserAPI) RemoveUser(ctx context.Context, username string) error {
return u.addRemoveUser(ctx, &authUserAPIAction{
verb: "DELETE",
username: username,
})
}
func (u *httpAuthUserAPI) addRemoveUser(ctx context.Context, req *authUserAPIAction) error {
resp, body, err := u.client.Do(ctx, req)
if err != nil {
return err
}
if err := assertStatusCode(resp.StatusCode, http.StatusOK, http.StatusCreated); err != nil {
var sec authError
err := json.Unmarshal(body, &sec)
if err != nil {
return err
}
return sec
}
return nil
}
func (u *httpAuthUserAPI) GetUser(ctx context.Context, username string) (*User, error) {
return u.modUser(ctx, &authUserAPIAction{
verb: "GET",
username: username,
})
}
func (u *httpAuthUserAPI) GrantUser(ctx context.Context, username string, roles []string) (*User, error) {
user := &User{
User: username,
Grant: roles,
}
return u.modUser(ctx, &authUserAPIAction{
verb: "PUT",
username: username,
user: user,
})
}
func (u *httpAuthUserAPI) RevokeUser(ctx context.Context, username string, roles []string) (*User, error) {
user := &User{
User: username,
Revoke: roles,
}
return u.modUser(ctx, &authUserAPIAction{
verb: "PUT",
username: username,
user: user,
})
}
func (u *httpAuthUserAPI) ChangePassword(ctx context.Context, username string, password string) (*User, error) {
user := &User{
User: username,
Password: password,
}
return u.modUser(ctx, &authUserAPIAction{
verb: "PUT",
username: username,
user: user,
})
}
func (u *httpAuthUserAPI) modUser(ctx context.Context, req *authUserAPIAction) (*User, error) {
resp, body, err := u.client.Do(ctx, req)
if err != nil {
return nil, err
}
if err := assertStatusCode(resp.StatusCode, http.StatusOK); err != nil {
var sec authError
err := json.Unmarshal(body, &sec)
if err != nil {
return nil, err
}
return nil, sec
}
var user User
err = json.Unmarshal(body, &user)
if err != nil {
return nil, err
}
return &user, nil
}
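
A matching sketch for the user API above, assuming the `c`, `ctx`, and role from the previous example; the username and password are placeholders:

```go
users := client.NewAuthUserAPI(c)

// Create a user and attach the role granted above.
if err := users.AddUser(ctx, "alice", "s3cret"); err != nil {
	log.Fatal(err)
}
if _, err := users.GrantUser(ctx, "alice", []string{"app-role"}); err != nil {
	log.Fatal(err)
}

// Once a root user exists, authentication can be switched on cluster-wide.
if err := client.NewAuthAPI(c).Enable(ctx); err != nil {
	log.Fatal(err)
}
```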


@@ -0,0 +1,20 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// borrowed from golang/net/context/ctxhttp/cancelreq.go
// +build go1.5
package client
import "net/http"
func requestCanceler(tr CancelableTransport, req *http.Request) func() {
ch := make(chan struct{})
req.Cancel = ch
return func() {
close(ch)
}
}


@@ -0,0 +1,17 @@
// Copyright 2015 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// borrowed from golang/net/context/ctxhttp/cancelreq_go14.go
// +build !go1.5
package client
import "net/http"
func requestCanceler(tr CancelableTransport, req *http.Request) func() {
return func() {
tr.CancelRequest(req)
}
}


@@ -0,0 +1,514 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
import (
"errors"
"fmt"
"io/ioutil"
"math/rand"
"net"
"net/http"
"net/url"
"reflect"
"sort"
"sync"
"time"
"golang.org/x/net/context"
)
var (
ErrNoEndpoints = errors.New("client: no endpoints available")
ErrTooManyRedirects = errors.New("client: too many redirects")
ErrClusterUnavailable = errors.New("client: etcd cluster is unavailable or misconfigured")
errTooManyRedirectChecks = errors.New("client: too many redirect checks")
)
var DefaultRequestTimeout = 5 * time.Second
var DefaultTransport CancelableTransport = &http.Transport{
Proxy: http.ProxyFromEnvironment,
Dial: (&net.Dialer{
Timeout: 30 * time.Second,
KeepAlive: 30 * time.Second,
}).Dial,
TLSHandshakeTimeout: 10 * time.Second,
}
type Config struct {
// Endpoints defines a set of URLs (schemes, hosts and ports only)
// that can be used to communicate with a logical etcd cluster. For
// example, a three-node cluster could be provided like so:
//
// Endpoints: []string{
// "http://node1.example.com:2379",
// "http://node2.example.com:2379",
// "http://node3.example.com:2379",
// }
//
// If multiple endpoints are provided, the Client will attempt to
// use them all in the event that one or more of them are unusable.
//
// If Client.Sync is ever called, the Client may cache an alternate
// set of endpoints to continue operation.
Endpoints []string
// Transport is used by the Client to drive HTTP requests. If not
// provided, DefaultTransport will be used.
Transport CancelableTransport
// CheckRedirect specifies the policy for handling HTTP redirects.
// If CheckRedirect is not nil, the Client calls it before
// following an HTTP redirect. The sole argument is the number of
// requests that have already been made. If CheckRedirect returns
// an error, Client.Do will not make any further requests and will return
// the error back to the caller.
//
// If CheckRedirect is nil, the Client uses its default policy,
// which is to stop after 10 consecutive requests.
CheckRedirect CheckRedirectFunc
// Username specifies the user credential to add as an authorization header
Username string
// Password is the password for the specified user to add as an authorization header
// to the request.
Password string
// HeaderTimeoutPerRequest specifies the time limit to wait for response
// header in a single request made by the Client. The timeout includes
// connection time, any redirects, and header wait time.
//
// For non-watch GET requests, the server returns the response body immediately.
// For PUT/POST/DELETE requests, the server will attempt to commit the request
// before responding, which is expected to take `100ms + 2 * RTT`.
// For watch requests, the server returns the header immediately to notify the Client
// that the watch has started. But if the server is behind some kind of proxy, the response
// header may be cached at the proxy, and the Client cannot rely on this behavior.
//
// One API call may send multiple requests to different etcd servers until it
// succeeds. Use the context of the API call to specify the overall timeout.
//
// A HeaderTimeoutPerRequest of zero means no timeout.
HeaderTimeoutPerRequest time.Duration
}
func (cfg *Config) transport() CancelableTransport {
if cfg.Transport == nil {
return DefaultTransport
}
return cfg.Transport
}
func (cfg *Config) checkRedirect() CheckRedirectFunc {
if cfg.CheckRedirect == nil {
return DefaultCheckRedirect
}
return cfg.CheckRedirect
}
// CancelableTransport mimics net/http.Transport, but requires that
// the object also support request cancellation.
type CancelableTransport interface {
http.RoundTripper
CancelRequest(req *http.Request)
}
type CheckRedirectFunc func(via int) error
// DefaultCheckRedirect follows up to 10 redirects, but no more.
var DefaultCheckRedirect CheckRedirectFunc = func(via int) error {
if via > 10 {
return ErrTooManyRedirects
}
return nil
}
type Client interface {
// Sync updates the internal cache of the etcd cluster's membership.
Sync(context.Context) error
// AutoSync periodically calls Sync() every given interval.
// The recommended sync interval is 10 seconds to 1 minute, which does
// not put too much overhead on the server and lets the client catch up
// with cluster changes in a timely manner.
//
// Example usage:
//
// for {
// err := client.AutoSync(ctx, 10*time.Second)
// if err == context.DeadlineExceeded || err == context.Canceled {
// break
// }
// log.Print(err)
// }
AutoSync(context.Context, time.Duration) error
// Endpoints returns a copy of the current set of API endpoints used
// by Client to resolve HTTP requests. If Sync has ever been called,
// this may differ from the initial Endpoints provided in the Config.
Endpoints() []string
httpClient
}
func New(cfg Config) (Client, error) {
c := &httpClusterClient{
clientFactory: newHTTPClientFactory(cfg.transport(), cfg.checkRedirect(), cfg.HeaderTimeoutPerRequest),
rand: rand.New(rand.NewSource(int64(time.Now().Nanosecond()))),
}
if cfg.Username != "" {
c.credentials = &credentials{
username: cfg.Username,
password: cfg.Password,
}
}
if err := c.reset(cfg.Endpoints); err != nil {
return nil, err
}
return c, nil
}
type httpClient interface {
Do(context.Context, httpAction) (*http.Response, []byte, error)
}
func newHTTPClientFactory(tr CancelableTransport, cr CheckRedirectFunc, headerTimeout time.Duration) httpClientFactory {
return func(ep url.URL) httpClient {
return &redirectFollowingHTTPClient{
checkRedirect: cr,
client: &simpleHTTPClient{
transport: tr,
endpoint: ep,
headerTimeout: headerTimeout,
},
}
}
}
type credentials struct {
username string
password string
}
type httpClientFactory func(url.URL) httpClient
type httpAction interface {
HTTPRequest(url.URL) *http.Request
}
type httpClusterClient struct {
clientFactory httpClientFactory
endpoints []url.URL
pinned int
credentials *credentials
sync.RWMutex
rand *rand.Rand
}
func (c *httpClusterClient) reset(eps []string) error {
if len(eps) == 0 {
return ErrNoEndpoints
}
neps := make([]url.URL, len(eps))
for i, ep := range eps {
u, err := url.Parse(ep)
if err != nil {
return err
}
neps[i] = *u
}
c.endpoints = shuffleEndpoints(c.rand, neps)
// TODO: pin old endpoint if possible, and rebalance when new endpoint appears
c.pinned = 0
return nil
}
func (c *httpClusterClient) Do(ctx context.Context, act httpAction) (*http.Response, []byte, error) {
action := act
c.RLock()
leps := len(c.endpoints)
eps := make([]url.URL, leps)
n := copy(eps, c.endpoints)
pinned := c.pinned
if c.credentials != nil {
action = &authedAction{
act: act,
credentials: *c.credentials,
}
}
c.RUnlock()
if leps == 0 {
return nil, nil, ErrNoEndpoints
}
if leps != n {
return nil, nil, errors.New("unable to pick endpoint: copy failed")
}
var resp *http.Response
var body []byte
var err error
cerr := &ClusterError{}
for i := pinned; i < leps+pinned; i++ {
k := i % leps
hc := c.clientFactory(eps[k])
resp, body, err = hc.Do(ctx, action)
if err != nil {
cerr.Errors = append(cerr.Errors, err)
// mask previous errors with context error, which is controlled by user
if err == context.Canceled || err == context.DeadlineExceeded {
return nil, nil, err
}
continue
}
if resp.StatusCode/100 == 5 {
switch resp.StatusCode {
case http.StatusInternalServerError, http.StatusServiceUnavailable:
// TODO: make sure this is a no leader response
cerr.Errors = append(cerr.Errors, fmt.Errorf("client: etcd member %s has no leader", eps[k].String()))
default:
cerr.Errors = append(cerr.Errors, fmt.Errorf("client: etcd member %s returns server error [%s]", eps[k].String(), http.StatusText(resp.StatusCode)))
}
continue
}
if k != pinned {
c.Lock()
c.pinned = k
c.Unlock()
}
return resp, body, nil
}
return nil, nil, cerr
}
func (c *httpClusterClient) Endpoints() []string {
c.RLock()
defer c.RUnlock()
eps := make([]string, len(c.endpoints))
for i, ep := range c.endpoints {
eps[i] = ep.String()
}
return eps
}
func (c *httpClusterClient) Sync(ctx context.Context) error {
mAPI := NewMembersAPI(c)
ms, err := mAPI.List(ctx)
if err != nil {
return err
}
c.Lock()
defer c.Unlock()
eps := make([]string, 0)
for _, m := range ms {
eps = append(eps, m.ClientURLs...)
}
sort.Sort(sort.StringSlice(eps))
ceps := make([]string, len(c.endpoints))
for i, cep := range c.endpoints {
ceps[i] = cep.String()
}
sort.Sort(sort.StringSlice(ceps))
// fast path if no change happens
// this helps client to pin the endpoint when no cluster change
if reflect.DeepEqual(eps, ceps) {
return nil
}
return c.reset(eps)
}
func (c *httpClusterClient) AutoSync(ctx context.Context, interval time.Duration) error {
ticker := time.NewTicker(interval)
defer ticker.Stop()
for {
err := c.Sync(ctx)
if err != nil {
return err
}
select {
case <-ctx.Done():
return ctx.Err()
case <-ticker.C:
}
}
}
type roundTripResponse struct {
resp *http.Response
err error
}
type simpleHTTPClient struct {
transport CancelableTransport
endpoint url.URL
headerTimeout time.Duration
}
func (c *simpleHTTPClient) Do(ctx context.Context, act httpAction) (*http.Response, []byte, error) {
req := act.HTTPRequest(c.endpoint)
if err := printcURL(req); err != nil {
return nil, nil, err
}
hctx, hcancel := context.WithCancel(ctx)
if c.headerTimeout > 0 {
hctx, hcancel = context.WithTimeout(ctx, c.headerTimeout)
}
defer hcancel()
reqcancel := requestCanceler(c.transport, req)
rtchan := make(chan roundTripResponse, 1)
go func() {
resp, err := c.transport.RoundTrip(req)
rtchan <- roundTripResponse{resp: resp, err: err}
close(rtchan)
}()
var resp *http.Response
var err error
select {
case rtresp := <-rtchan:
resp, err = rtresp.resp, rtresp.err
case <-hctx.Done():
// cancel and wait for request to actually exit before continuing
reqcancel()
rtresp := <-rtchan
resp = rtresp.resp
switch {
case ctx.Err() != nil:
err = ctx.Err()
case hctx.Err() != nil:
err = fmt.Errorf("client: endpoint %s exceeded header timeout", c.endpoint.String())
default:
panic("failed to get error from context")
}
}
// always check for resp nil-ness to deal with possible
// race conditions between channels above
defer func() {
if resp != nil {
resp.Body.Close()
}
}()
if err != nil {
return nil, nil, err
}
var body []byte
done := make(chan struct{})
go func() {
body, err = ioutil.ReadAll(resp.Body)
done <- struct{}{}
}()
select {
case <-ctx.Done():
resp.Body.Close()
<-done
return nil, nil, ctx.Err()
case <-done:
}
return resp, body, err
}
type authedAction struct {
act httpAction
credentials credentials
}
func (a *authedAction) HTTPRequest(url url.URL) *http.Request {
r := a.act.HTTPRequest(url)
r.SetBasicAuth(a.credentials.username, a.credentials.password)
return r
}
type redirectFollowingHTTPClient struct {
client httpClient
checkRedirect CheckRedirectFunc
}
func (r *redirectFollowingHTTPClient) Do(ctx context.Context, act httpAction) (*http.Response, []byte, error) {
next := act
for i := 0; i < 100; i++ {
if i > 0 {
if err := r.checkRedirect(i); err != nil {
return nil, nil, err
}
}
resp, body, err := r.client.Do(ctx, next)
if err != nil {
return nil, nil, err
}
if resp.StatusCode/100 == 3 {
hdr := resp.Header.Get("Location")
if hdr == "" {
return nil, nil, fmt.Errorf("Location header not set")
}
loc, err := url.Parse(hdr)
if err != nil {
return nil, nil, fmt.Errorf("Location header not valid URL: %s", hdr)
}
next = &redirectedHTTPAction{
action: act,
location: *loc,
}
continue
}
return resp, body, nil
}
return nil, nil, errTooManyRedirectChecks
}
type redirectedHTTPAction struct {
action httpAction
location url.URL
}
func (r *redirectedHTTPAction) HTTPRequest(ep url.URL) *http.Request {
orig := r.action.HTTPRequest(ep)
orig.URL = &r.location
return orig
}
func shuffleEndpoints(r *rand.Rand, eps []url.URL) []url.URL {
p := r.Perm(len(eps))
neps := make([]url.URL, len(eps))
for i, k := range p {
neps[i] = eps[k]
}
return neps
}
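
The Config fields above combine along these lines; a minimal sketch (the credentials and timeout are illustrative), showing basic-auth credentials and a per-request header timeout:

```go
cfg := client.Config{
	Endpoints:               []string{"http://127.0.0.1:2379"},
	Transport:               client.DefaultTransport,
	Username:                "alice", // sent as HTTP basic auth on every request via authedAction
	Password:                "s3cret",
	HeaderTimeoutPerRequest: time.Second, // give up on a slow endpoint quickly and try the next one
}
c, err := client.New(cfg)
if err != nil {
	log.Fatal(err)
}
_ = c // use c with NewKeysAPI, NewMembersAPI, NewAuthUserAPI, etc.
```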


@@ -0,0 +1,33 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
import "fmt"
type ClusterError struct {
Errors []error
}
func (ce *ClusterError) Error() string {
return ErrClusterUnavailable.Error()
}
func (ce *ClusterError) Detail() string {
s := ""
for i, e := range ce.Errors {
s += fmt.Sprintf("error #%d: %s\n", i, e)
}
return s
}
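
When every endpoint fails, the `Do` loop in client.go returns one of these; `Detail()` is what exposes the individual per-endpoint failures. A small sketch, reusing `kapi` and `ctx` from the README's error-handling example:

```go
if _, err := kapi.Set(ctx, "/foo", "bar", nil); err != nil {
	if cerr, ok := err.(*client.ClusterError); ok {
		// Error() is only the generic "cluster is unavailable or misconfigured"
		// message; Detail() lists each endpoint error gathered during the attempt.
		log.Printf("%v\n%s", cerr, cerr.Detail())
	}
}
```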


@@ -0,0 +1,70 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
import (
"bytes"
"fmt"
"io/ioutil"
"net/http"
"os"
)
var (
cURLDebug = false
)
func EnablecURLDebug() {
cURLDebug = true
}
func DisablecURLDebug() {
cURLDebug = false
}
// printcURL prints the cURL equivalent request to stderr.
// It returns an error if the body of the request cannot
// be read.
// The caller MUST cancel the request if there is an error.
func printcURL(req *http.Request) error {
if !cURLDebug {
return nil
}
var (
command string
b []byte
err error
)
if req.URL != nil {
command = fmt.Sprintf("curl -X %s %s", req.Method, req.URL.String())
}
if req.Body != nil {
b, err = ioutil.ReadAll(req.Body)
if err != nil {
return err
}
command += fmt.Sprintf(" -d %q", string(b))
}
fmt.Fprintf(os.Stderr, "cURL Command: %s\n", command)
// reset body
body := bytes.NewBuffer(b)
req.Body = ioutil.NopCloser(body)
return nil
}
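
A small sketch of turning the cURL tracing on around a request (again reusing `kapi` and `ctx`); while the flag is set, printcURL writes the equivalent command for each outgoing request to stderr:

```go
client.EnablecURLDebug()
defer client.DisablecURLDebug()

if _, err := kapi.Set(ctx, "/foo", "bar", nil); err != nil {
	log.Print(err)
}
```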


@@ -0,0 +1,21 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
// Discoverer is an interface that wraps the Discover method.
type Discoverer interface {
// Discover looks up the etcd servers for the domain.
Discover(domain string) ([]string, error)
}
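
Nothing in this vendored subset wires the interface up, but satisfying it is straightforward; a trivial, purely illustrative implementation that ignores the domain and returns a fixed endpoint list:

```go
// staticDiscoverer implements client.Discoverer with a hard-coded endpoint list.
type staticDiscoverer struct {
	endpoints []string
}

func (d staticDiscoverer) Discover(domain string) ([]string, error) {
	return d.endpoints, nil
}
```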


@@ -0,0 +1,71 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/*
Package client provides bindings for the etcd APIs.
Create a Config and exchange it for a Client:
import (
"net/http"
"github.com/coreos/etcd/client"
"golang.org/x/net/context"
)
cfg := client.Config{
Endpoints: []string{"http://127.0.0.1:2379"},
Transport: client.DefaultTransport,
}
c, err := client.New(cfg)
if err != nil {
// handle error
}
Create a KeysAPI using the Client, then use it to interact with etcd:
kAPI := client.NewKeysAPI(c)
// create a new key /foo with the value "bar"
_, err = kAPI.Create(context.Background(), "/foo", "bar")
if err != nil {
// handle error
}
// delete the newly created key only if the value is still "bar"
_, err = kAPI.Delete(context.Background(), "/foo", &client.DeleteOptions{PrevValue: "bar"})
if err != nil {
// handle error
}
Use a custom context to set timeouts on your operations:
import "time"
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()
// set a new key, ignoring its previous state
_, err := kAPI.Set(ctx, "/ping", "pong", nil)
if err != nil {
if err == context.DeadlineExceeded {
// request took longer than 5s
} else {
// handle error
}
}
*/
package client


@@ -0,0 +1,891 @@
// ************************************************************
// DO NOT EDIT.
// THIS FILE IS AUTO-GENERATED BY codecgen.
// ************************************************************
package client
import (
"errors"
"fmt"
codec1978 "github.com/ugorji/go/codec"
"reflect"
"runtime"
"time"
)
const (
codecSelferC_UTF85311 = 1
codecSelferC_RAW5311 = 0
codecSelverValueTypeArray5311 = 10
codecSelverValueTypeMap5311 = 9
)
var (
codecSelferBitsize5311 = uint8(reflect.TypeOf(uint(0)).Bits())
codecSelferOnlyMapOrArrayEncodeToStructErr5311 = errors.New(`only encoded map or array can be decoded into a struct`)
)
type codecSelfer5311 struct{}
func init() {
if codec1978.GenVersion != 2 {
_, file, _, _ := runtime.Caller(0)
err := fmt.Errorf("codecgen version mismatch: current: %v, need %v. Re-generate file: %v",
2, codec1978.GenVersion, file)
panic(err)
}
if false { // reference the types, but skip this branch at build/run time
var v0 time.Time
_ = v0
}
}
func (x *Response) CodecEncodeSelf(e *codec1978.Encoder) {
var h codecSelfer5311
z, r := codec1978.GenHelperEncoder(e)
_, _, _ = h, z, r
if x == nil {
r.EncodeNil()
} else {
yysep1 := !z.EncBinary()
yy2arr1 := z.EncBasicHandle().StructToArray
var yyfirst1 bool
var yyq1 [3]bool
_, _, _, _ = yysep1, yyfirst1, yyq1, yy2arr1
const yyr1 bool = false
if yyr1 || yy2arr1 {
r.EncodeArrayStart(3)
} else {
var yynn1 int = 3
for _, b := range yyq1 {
if b {
yynn1++
}
}
r.EncodeMapStart(yynn1)
}
if yyr1 || yy2arr1 {
r.EncodeString(codecSelferC_UTF85311, string(x.Action))
} else {
yyfirst1 = true
r.EncodeString(codecSelferC_UTF85311, string("action"))
if yysep1 {
r.EncodeMapKVSeparator()
}
r.EncodeString(codecSelferC_UTF85311, string(x.Action))
}
if yyr1 || yy2arr1 {
if yysep1 {
r.EncodeArrayEntrySeparator()
}
if x.Node == nil {
r.EncodeNil()
} else {
x.Node.CodecEncodeSelf(e)
}
} else {
if yyfirst1 {
r.EncodeMapEntrySeparator()
} else {
yyfirst1 = true
}
r.EncodeString(codecSelferC_UTF85311, string("node"))
if yysep1 {
r.EncodeMapKVSeparator()
}
if x.Node == nil {
r.EncodeNil()
} else {
x.Node.CodecEncodeSelf(e)
}
}
if yyr1 || yy2arr1 {
if yysep1 {
r.EncodeArrayEntrySeparator()
}
if x.PrevNode == nil {
r.EncodeNil()
} else {
x.PrevNode.CodecEncodeSelf(e)
}
} else {
if yyfirst1 {
r.EncodeMapEntrySeparator()
} else {
yyfirst1 = true
}
r.EncodeString(codecSelferC_UTF85311, string("prevNode"))
if yysep1 {
r.EncodeMapKVSeparator()
}
if x.PrevNode == nil {
r.EncodeNil()
} else {
x.PrevNode.CodecEncodeSelf(e)
}
}
if yysep1 {
if yyr1 || yy2arr1 {
r.EncodeArrayEnd()
} else {
r.EncodeMapEnd()
}
}
}
}
func (x *Response) CodecDecodeSelf(d *codec1978.Decoder) {
var h codecSelfer5311
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
if r.IsContainerType(codecSelverValueTypeMap5311) {
yyl5 := r.ReadMapStart()
if yyl5 == 0 {
r.ReadMapEnd()
} else {
x.codecDecodeSelfFromMap(yyl5, d)
}
} else if r.IsContainerType(codecSelverValueTypeArray5311) {
yyl5 := r.ReadArrayStart()
if yyl5 == 0 {
r.ReadArrayEnd()
} else {
x.codecDecodeSelfFromArray(yyl5, d)
}
} else {
panic(codecSelferOnlyMapOrArrayEncodeToStructErr5311)
}
}
func (x *Response) codecDecodeSelfFromMap(l int, d *codec1978.Decoder) {
var h codecSelfer5311
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yys6Slc = z.DecScratchBuffer() // default slice to decode into
_ = yys6Slc
var yyhl6 bool = l >= 0
for yyj6 := 0; ; yyj6++ {
if yyhl6 {
if yyj6 >= l {
break
}
} else {
if r.CheckBreak() {
break
}
if yyj6 > 0 {
r.ReadMapEntrySeparator()
}
}
yys6Slc = r.DecodeBytes(yys6Slc, true, true)
yys6 := string(yys6Slc)
if !yyhl6 {
r.ReadMapKVSeparator()
}
switch yys6 {
case "action":
if r.TryDecodeAsNil() {
x.Action = ""
} else {
x.Action = string(r.DecodeString())
}
case "node":
if r.TryDecodeAsNil() {
if x.Node != nil {
x.Node = nil
}
} else {
if x.Node == nil {
x.Node = new(Node)
}
x.Node.CodecDecodeSelf(d)
}
case "prevNode":
if r.TryDecodeAsNil() {
if x.PrevNode != nil {
x.PrevNode = nil
}
} else {
if x.PrevNode == nil {
x.PrevNode = new(Node)
}
x.PrevNode.CodecDecodeSelf(d)
}
default:
z.DecStructFieldNotFound(-1, yys6)
} // end switch yys6
} // end for yyj6
if !yyhl6 {
r.ReadMapEnd()
}
}
func (x *Response) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
var h codecSelfer5311
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yyj10 int
var yyb10 bool
var yyhl10 bool = l >= 0
yyj10++
if yyhl10 {
yyb10 = yyj10 > l
} else {
yyb10 = r.CheckBreak()
}
if yyb10 {
r.ReadArrayEnd()
return
}
if r.TryDecodeAsNil() {
x.Action = ""
} else {
x.Action = string(r.DecodeString())
}
yyj10++
if yyhl10 {
yyb10 = yyj10 > l
} else {
yyb10 = r.CheckBreak()
}
if yyb10 {
r.ReadArrayEnd()
return
}
r.ReadArrayEntrySeparator()
if r.TryDecodeAsNil() {
if x.Node != nil {
x.Node = nil
}
} else {
if x.Node == nil {
x.Node = new(Node)
}
x.Node.CodecDecodeSelf(d)
}
yyj10++
if yyhl10 {
yyb10 = yyj10 > l
} else {
yyb10 = r.CheckBreak()
}
if yyb10 {
r.ReadArrayEnd()
return
}
r.ReadArrayEntrySeparator()
if r.TryDecodeAsNil() {
if x.PrevNode != nil {
x.PrevNode = nil
}
} else {
if x.PrevNode == nil {
x.PrevNode = new(Node)
}
x.PrevNode.CodecDecodeSelf(d)
}
for {
yyj10++
if yyhl10 {
yyb10 = yyj10 > l
} else {
yyb10 = r.CheckBreak()
}
if yyb10 {
break
}
if yyj10 > 1 {
r.ReadArrayEntrySeparator()
}
z.DecStructFieldNotFound(yyj10-1, "")
}
r.ReadArrayEnd()
}
func (x *Node) CodecEncodeSelf(e *codec1978.Encoder) {
var h codecSelfer5311
z, r := codec1978.GenHelperEncoder(e)
_, _, _ = h, z, r
if x == nil {
r.EncodeNil()
} else {
yysep14 := !z.EncBinary()
yy2arr14 := z.EncBasicHandle().StructToArray
var yyfirst14 bool
var yyq14 [8]bool
_, _, _, _ = yysep14, yyfirst14, yyq14, yy2arr14
const yyr14 bool = false
yyq14[1] = x.Dir != false
yyq14[6] = x.Expiration != nil
yyq14[7] = x.TTL != 0
if yyr14 || yy2arr14 {
r.EncodeArrayStart(8)
} else {
var yynn14 int = 5
for _, b := range yyq14 {
if b {
yynn14++
}
}
r.EncodeMapStart(yynn14)
}
if yyr14 || yy2arr14 {
r.EncodeString(codecSelferC_UTF85311, string(x.Key))
} else {
yyfirst14 = true
r.EncodeString(codecSelferC_UTF85311, string("key"))
if yysep14 {
r.EncodeMapKVSeparator()
}
r.EncodeString(codecSelferC_UTF85311, string(x.Key))
}
if yyr14 || yy2arr14 {
if yysep14 {
r.EncodeArrayEntrySeparator()
}
if yyq14[1] {
r.EncodeBool(bool(x.Dir))
} else {
r.EncodeBool(false)
}
} else {
if yyq14[1] {
if yyfirst14 {
r.EncodeMapEntrySeparator()
} else {
yyfirst14 = true
}
r.EncodeString(codecSelferC_UTF85311, string("dir"))
if yysep14 {
r.EncodeMapKVSeparator()
}
r.EncodeBool(bool(x.Dir))
}
}
if yyr14 || yy2arr14 {
if yysep14 {
r.EncodeArrayEntrySeparator()
}
r.EncodeString(codecSelferC_UTF85311, string(x.Value))
} else {
if yyfirst14 {
r.EncodeMapEntrySeparator()
} else {
yyfirst14 = true
}
r.EncodeString(codecSelferC_UTF85311, string("value"))
if yysep14 {
r.EncodeMapKVSeparator()
}
r.EncodeString(codecSelferC_UTF85311, string(x.Value))
}
if yyr14 || yy2arr14 {
if yysep14 {
r.EncodeArrayEntrySeparator()
}
if x.Nodes == nil {
r.EncodeNil()
} else {
h.encSlicePtrtoNode(([]*Node)(x.Nodes), e)
}
} else {
if yyfirst14 {
r.EncodeMapEntrySeparator()
} else {
yyfirst14 = true
}
r.EncodeString(codecSelferC_UTF85311, string("nodes"))
if yysep14 {
r.EncodeMapKVSeparator()
}
if x.Nodes == nil {
r.EncodeNil()
} else {
h.encSlicePtrtoNode(([]*Node)(x.Nodes), e)
}
}
if yyr14 || yy2arr14 {
if yysep14 {
r.EncodeArrayEntrySeparator()
}
r.EncodeUint(uint64(x.CreatedIndex))
} else {
if yyfirst14 {
r.EncodeMapEntrySeparator()
} else {
yyfirst14 = true
}
r.EncodeString(codecSelferC_UTF85311, string("createdIndex"))
if yysep14 {
r.EncodeMapKVSeparator()
}
r.EncodeUint(uint64(x.CreatedIndex))
}
if yyr14 || yy2arr14 {
if yysep14 {
r.EncodeArrayEntrySeparator()
}
r.EncodeUint(uint64(x.ModifiedIndex))
} else {
if yyfirst14 {
r.EncodeMapEntrySeparator()
} else {
yyfirst14 = true
}
r.EncodeString(codecSelferC_UTF85311, string("modifiedIndex"))
if yysep14 {
r.EncodeMapKVSeparator()
}
r.EncodeUint(uint64(x.ModifiedIndex))
}
if yyr14 || yy2arr14 {
if yysep14 {
r.EncodeArrayEntrySeparator()
}
if yyq14[6] {
if x.Expiration == nil {
r.EncodeNil()
} else {
z.EncFallback(x.Expiration)
}
} else {
r.EncodeNil()
}
} else {
if yyq14[6] {
if yyfirst14 {
r.EncodeMapEntrySeparator()
} else {
yyfirst14 = true
}
r.EncodeString(codecSelferC_UTF85311, string("expiration"))
if yysep14 {
r.EncodeMapKVSeparator()
}
if x.Expiration == nil {
r.EncodeNil()
} else {
z.EncFallback(x.Expiration)
}
}
}
if yyr14 || yy2arr14 {
if yysep14 {
r.EncodeArrayEntrySeparator()
}
if yyq14[7] {
r.EncodeInt(int64(x.TTL))
} else {
r.EncodeInt(0)
}
} else {
if yyq14[7] {
if yyfirst14 {
r.EncodeMapEntrySeparator()
} else {
yyfirst14 = true
}
r.EncodeString(codecSelferC_UTF85311, string("ttl"))
if yysep14 {
r.EncodeMapKVSeparator()
}
r.EncodeInt(int64(x.TTL))
}
}
if yysep14 {
if yyr14 || yy2arr14 {
r.EncodeArrayEnd()
} else {
r.EncodeMapEnd()
}
}
}
}
func (x *Node) CodecDecodeSelf(d *codec1978.Decoder) {
var h codecSelfer5311
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
if r.IsContainerType(codecSelverValueTypeMap5311) {
yyl23 := r.ReadMapStart()
if yyl23 == 0 {
r.ReadMapEnd()
} else {
x.codecDecodeSelfFromMap(yyl23, d)
}
} else if r.IsContainerType(codecSelverValueTypeArray5311) {
yyl23 := r.ReadArrayStart()
if yyl23 == 0 {
r.ReadArrayEnd()
} else {
x.codecDecodeSelfFromArray(yyl23, d)
}
} else {
panic(codecSelferOnlyMapOrArrayEncodeToStructErr5311)
}
}
func (x *Node) codecDecodeSelfFromMap(l int, d *codec1978.Decoder) {
var h codecSelfer5311
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yys24Slc = z.DecScratchBuffer() // default slice to decode into
_ = yys24Slc
var yyhl24 bool = l >= 0
for yyj24 := 0; ; yyj24++ {
if yyhl24 {
if yyj24 >= l {
break
}
} else {
if r.CheckBreak() {
break
}
if yyj24 > 0 {
r.ReadMapEntrySeparator()
}
}
yys24Slc = r.DecodeBytes(yys24Slc, true, true)
yys24 := string(yys24Slc)
if !yyhl24 {
r.ReadMapKVSeparator()
}
switch yys24 {
case "key":
if r.TryDecodeAsNil() {
x.Key = ""
} else {
x.Key = string(r.DecodeString())
}
case "dir":
if r.TryDecodeAsNil() {
x.Dir = false
} else {
x.Dir = bool(r.DecodeBool())
}
case "value":
if r.TryDecodeAsNil() {
x.Value = ""
} else {
x.Value = string(r.DecodeString())
}
case "nodes":
if r.TryDecodeAsNil() {
x.Nodes = nil
} else {
yyv28 := &x.Nodes
h.decSlicePtrtoNode((*[]*Node)(yyv28), d)
}
case "createdIndex":
if r.TryDecodeAsNil() {
x.CreatedIndex = 0
} else {
x.CreatedIndex = uint64(r.DecodeUint(64))
}
case "modifiedIndex":
if r.TryDecodeAsNil() {
x.ModifiedIndex = 0
} else {
x.ModifiedIndex = uint64(r.DecodeUint(64))
}
case "expiration":
if r.TryDecodeAsNil() {
if x.Expiration != nil {
x.Expiration = nil
}
} else {
if x.Expiration == nil {
x.Expiration = new(time.Time)
}
z.DecFallback(x.Expiration, false)
}
case "ttl":
if r.TryDecodeAsNil() {
x.TTL = 0
} else {
x.TTL = int64(r.DecodeInt(64))
}
default:
z.DecStructFieldNotFound(-1, yys24)
} // end switch yys24
} // end for yyj24
if !yyhl24 {
r.ReadMapEnd()
}
}
func (x *Node) codecDecodeSelfFromArray(l int, d *codec1978.Decoder) {
var h codecSelfer5311
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
var yyj33 int
var yyb33 bool
var yyhl33 bool = l >= 0
yyj33++
if yyhl33 {
yyb33 = yyj33 > l
} else {
yyb33 = r.CheckBreak()
}
if yyb33 {
r.ReadArrayEnd()
return
}
if r.TryDecodeAsNil() {
x.Key = ""
} else {
x.Key = string(r.DecodeString())
}
yyj33++
if yyhl33 {
yyb33 = yyj33 > l
} else {
yyb33 = r.CheckBreak()
}
if yyb33 {
r.ReadArrayEnd()
return
}
r.ReadArrayEntrySeparator()
if r.TryDecodeAsNil() {
x.Dir = false
} else {
x.Dir = bool(r.DecodeBool())
}
yyj33++
if yyhl33 {
yyb33 = yyj33 > l
} else {
yyb33 = r.CheckBreak()
}
if yyb33 {
r.ReadArrayEnd()
return
}
r.ReadArrayEntrySeparator()
if r.TryDecodeAsNil() {
x.Value = ""
} else {
x.Value = string(r.DecodeString())
}
yyj33++
if yyhl33 {
yyb33 = yyj33 > l
} else {
yyb33 = r.CheckBreak()
}
if yyb33 {
r.ReadArrayEnd()
return
}
r.ReadArrayEntrySeparator()
if r.TryDecodeAsNil() {
x.Nodes = nil
} else {
yyv37 := &x.Nodes
h.decSlicePtrtoNode((*[]*Node)(yyv37), d)
}
yyj33++
if yyhl33 {
yyb33 = yyj33 > l
} else {
yyb33 = r.CheckBreak()
}
if yyb33 {
r.ReadArrayEnd()
return
}
r.ReadArrayEntrySeparator()
if r.TryDecodeAsNil() {
x.CreatedIndex = 0
} else {
x.CreatedIndex = uint64(r.DecodeUint(64))
}
yyj33++
if yyhl33 {
yyb33 = yyj33 > l
} else {
yyb33 = r.CheckBreak()
}
if yyb33 {
r.ReadArrayEnd()
return
}
r.ReadArrayEntrySeparator()
if r.TryDecodeAsNil() {
x.ModifiedIndex = 0
} else {
x.ModifiedIndex = uint64(r.DecodeUint(64))
}
yyj33++
if yyhl33 {
yyb33 = yyj33 > l
} else {
yyb33 = r.CheckBreak()
}
if yyb33 {
r.ReadArrayEnd()
return
}
r.ReadArrayEntrySeparator()
if r.TryDecodeAsNil() {
if x.Expiration != nil {
x.Expiration = nil
}
} else {
if x.Expiration == nil {
x.Expiration = new(time.Time)
}
z.DecFallback(x.Expiration, false)
}
yyj33++
if yyhl33 {
yyb33 = yyj33 > l
} else {
yyb33 = r.CheckBreak()
}
if yyb33 {
r.ReadArrayEnd()
return
}
r.ReadArrayEntrySeparator()
if r.TryDecodeAsNil() {
x.TTL = 0
} else {
x.TTL = int64(r.DecodeInt(64))
}
for {
yyj33++
if yyhl33 {
yyb33 = yyj33 > l
} else {
yyb33 = r.CheckBreak()
}
if yyb33 {
break
}
if yyj33 > 1 {
r.ReadArrayEntrySeparator()
}
z.DecStructFieldNotFound(yyj33-1, "")
}
r.ReadArrayEnd()
}
func (x codecSelfer5311) encSlicePtrtoNode(v []*Node, e *codec1978.Encoder) {
var h codecSelfer5311
z, r := codec1978.GenHelperEncoder(e)
_, _, _ = h, z, r
r.EncodeArrayStart(len(v))
yys42 := !z.EncBinary()
if yys42 {
for yyi42, yyv42 := range v {
if yyi42 > 0 {
r.EncodeArrayEntrySeparator()
}
if yyv42 == nil {
r.EncodeNil()
} else {
yyv42.CodecEncodeSelf(e)
}
}
r.EncodeArrayEnd()
} else {
for _, yyv42 := range v {
if yyv42 == nil {
r.EncodeNil()
} else {
yyv42.CodecEncodeSelf(e)
}
}
}
}
func (x codecSelfer5311) decSlicePtrtoNode(v *[]*Node, d *codec1978.Decoder) {
var h codecSelfer5311
z, r := codec1978.GenHelperDecoder(d)
_, _, _ = h, z, r
yyv43 := *v
yyh43, yyl43 := z.DecSliceHelperStart()
var yyc43 bool
_ = yyc43
if yyv43 == nil {
if yyl43 <= 0 {
yyv43 = make([]*Node, 0)
} else {
yyv43 = make([]*Node, yyl43)
}
yyc43 = true
}
if yyl43 == 0 {
if len(yyv43) != 0 {
yyv43 = yyv43[:0]
yyc43 = true
}
} else if yyl43 > 0 {
yyn43 := yyl43
if yyl43 > cap(yyv43) {
yyv43 = make([]*Node, yyl43, yyl43)
yyc43 = true
} else if yyl43 != len(yyv43) {
yyv43 = yyv43[:yyl43]
yyc43 = true
}
yyj43 := 0
for ; yyj43 < yyn43; yyj43++ {
if r.TryDecodeAsNil() {
if yyv43[yyj43] != nil {
*yyv43[yyj43] = Node{}
}
} else {
if yyv43[yyj43] == nil {
yyv43[yyj43] = new(Node)
}
yyw44 := yyv43[yyj43]
yyw44.CodecDecodeSelf(d)
}
}
} else {
for yyj43 := 0; !r.CheckBreak(); yyj43++ {
if yyj43 >= len(yyv43) {
yyv43 = append(yyv43, nil) // var yyz43 *Node
yyc43 = true
}
if yyj43 > 0 {
yyh43.Sep(yyj43)
}
if yyj43 < len(yyv43) {
if r.TryDecodeAsNil() {
if yyv43[yyj43] != nil {
*yyv43[yyj43] = Node{}
}
} else {
if yyv43[yyj43] == nil {
yyv43[yyj43] = new(Node)
}
yyw45 := yyv43[yyj43]
yyw45.CodecDecodeSelf(d)
}
} else {
z.DecSwallow()
}
}
yyh43.End()
}
if yyc43 {
*v = yyv43
}
}


@@ -0,0 +1,644 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
//go:generate codecgen -r "Node|Response" -o keys.generated.go keys.go
import (
"encoding/json"
"errors"
"fmt"
"net/http"
"net/url"
"strconv"
"strings"
"time"
"github.com/ugorji/go/codec"
"golang.org/x/net/context"
"github.com/coreos/etcd/pkg/pathutil"
)
const (
ErrorCodeKeyNotFound = 100
ErrorCodeTestFailed = 101
ErrorCodeNotFile = 102
ErrorCodeNotDir = 104
ErrorCodeNodeExist = 105
ErrorCodeRootROnly = 107
ErrorCodeDirNotEmpty = 108
ErrorCodeUnauthorized = 110
ErrorCodePrevValueRequired = 201
ErrorCodeTTLNaN = 202
ErrorCodeIndexNaN = 203
ErrorCodeInvalidField = 209
ErrorCodeInvalidForm = 210
ErrorCodeRaftInternal = 300
ErrorCodeLeaderElect = 301
ErrorCodeWatcherCleared = 400
ErrorCodeEventIndexCleared = 401
)
type Error struct {
Code int `json:"errorCode"`
Message string `json:"message"`
Cause string `json:"cause"`
Index uint64 `json:"index"`
}
func (e Error) Error() string {
return fmt.Sprintf("%v: %v (%v) [%v]", e.Code, e.Message, e.Cause, e.Index)
}
var (
ErrInvalidJSON = errors.New("client: response is invalid json. The endpoint is probably not valid etcd cluster endpoint.")
ErrEmptyBody = errors.New("client: response body is empty")
)
// PrevExistType is used to define an existence condition when setting
// or deleting Nodes.
type PrevExistType string
const (
PrevIgnore = PrevExistType("")
PrevExist = PrevExistType("true")
PrevNoExist = PrevExistType("false")
)
var (
defaultV2KeysPrefix = "/v2/keys"
)
// NewKeysAPI builds a KeysAPI that interacts with etcd's key-value
// API over HTTP.
func NewKeysAPI(c Client) KeysAPI {
return NewKeysAPIWithPrefix(c, defaultV2KeysPrefix)
}
// NewKeysAPIWithPrefix acts like NewKeysAPI, but allows the caller
// to provide a custom base URL path. This should only be used in
// very rare cases.
func NewKeysAPIWithPrefix(c Client, p string) KeysAPI {
return &httpKeysAPI{
client: c,
prefix: p,
}
}
type KeysAPI interface {
// Get retrieves a set of Nodes from etcd
Get(ctx context.Context, key string, opts *GetOptions) (*Response, error)
// Set assigns a new value to a Node identified by a given key. The caller
// may define a set of conditions in the SetOptions. If SetOptions.Dir=true
// then the value is ignored.
Set(ctx context.Context, key, value string, opts *SetOptions) (*Response, error)
// Delete removes a Node identified by the given key, optionally destroying
// all of its children as well. The caller may define a set of required
// conditions in a DeleteOptions object.
Delete(ctx context.Context, key string, opts *DeleteOptions) (*Response, error)
// Create is an alias for Set w/ PrevExist=false
Create(ctx context.Context, key, value string) (*Response, error)
// CreateInOrder is used to atomically create in-order keys within the given directory.
CreateInOrder(ctx context.Context, dir, value string, opts *CreateInOrderOptions) (*Response, error)
// Update is an alias for Set w/ PrevExist=true
Update(ctx context.Context, key, value string) (*Response, error)
// Watcher builds a new Watcher targeted at a specific Node identified
// by the given key. The Watcher may be configured at creation time
// through a WatcherOptions object. The returned Watcher is designed
// to emit events that happen to a Node, and optionally to its children.
Watcher(key string, opts *WatcherOptions) Watcher
}
type WatcherOptions struct {
// AfterIndex defines the index after-which the Watcher should
// start emitting events. For example, if a value of 5 is
// provided, the first event will have an index >= 6.
//
// Setting AfterIndex to 0 (default) means that the Watcher
// should start watching for events starting at the current
// index, whatever that may be.
AfterIndex uint64
// Recursive specifies whether or not the Watcher should emit
// events that occur in children of the given keyspace. If set
// to false (default), events will be limited to those that
// occur for the exact key.
Recursive bool
}
type CreateInOrderOptions struct {
// TTL defines a period of time after-which the Node should
// expire and no longer exist. Values <= 0 are ignored. Given
// that the zero-value is ignored, TTL cannot be used to set
// a TTL of 0.
TTL time.Duration
}
type SetOptions struct {
// PrevValue specifies what the current value of the Node must
// be in order for the Set operation to succeed.
//
// Leaving this field empty means that the caller wishes to
// ignore the current value of the Node. This cannot be used
// to compare the Node's current value to an empty string.
//
// PrevValue is ignored if Dir=true
PrevValue string
// PrevIndex indicates what the current ModifiedIndex of the
// Node must be in order for the Set operation to succeed.
//
// If PrevIndex is set to 0 (default), no comparison is made.
PrevIndex uint64
// PrevExist specifies whether the Node must currently exist
// (PrevExist) or not (PrevNoExist). If the caller does not
// care about existence, set PrevExist to PrevIgnore, or simply
// leave it unset.
PrevExist PrevExistType
// TTL defines a period of time after-which the Node should
// expire and no longer exist. Values <= 0 are ignored. Given
// that the zero-value is ignored, TTL cannot be used to set
// a TTL of 0.
TTL time.Duration
// Dir specifies whether or not this Node should be created as a directory.
Dir bool
}
type GetOptions struct {
// Recursive defines whether or not all children of the Node
// should be returned.
Recursive bool
// Sort instructs the server whether or not to sort the Nodes.
// If true, the Nodes are sorted alphabetically by key in
// ascending order (A to z). If false (default), the Nodes will
// not be sorted and the ordering used should not be considered
// predictable.
Sort bool
// Quorum specifies whether it gets the latest committed value that
	// has been applied in a quorum of members, which ensures external
// consistency (or linearizability).
Quorum bool
}
type DeleteOptions struct {
// PrevValue specifies what the current value of the Node must
// be in order for the Delete operation to succeed.
//
// Leaving this field empty means that the caller wishes to
// ignore the current value of the Node. This cannot be used
// to compare the Node's current value to an empty string.
PrevValue string
// PrevIndex indicates what the current ModifiedIndex of the
// Node must be in order for the Delete operation to succeed.
//
// If PrevIndex is set to 0 (default), no comparison is made.
PrevIndex uint64
// Recursive defines whether or not all children of the Node
// should be deleted. If set to true, all children of the Node
// identified by the given key will be deleted. If left unset
// or explicitly set to false, only a single Node will be
// deleted.
Recursive bool
// Dir specifies whether or not this Node should be removed as a directory.
Dir bool
}
type Watcher interface {
// Next blocks until an etcd event occurs, then returns a Response
	// representing that event. The behavior of Next depends on the
// WatcherOptions used to construct the Watcher. Next is designed to
// be called repeatedly, each time blocking until a subsequent event
// is available.
//
// If the provided context is cancelled, Next will return a non-nil
// error. Any other failures encountered while waiting for the next
// event (connection issues, deserialization failures, etc) will
// also result in a non-nil error.
Next(context.Context) (*Response, error)
}
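
The Watcher is normally driven in a loop; a minimal sketch, assuming an existing KeysAPI value `kapi` and a hypothetical `WatchSubtree` wrapper:

```go
package example

import (
	"log"

	"golang.org/x/net/context"

	"github.com/coreos/etcd/client"
)

// WatchSubtree streams every event under dir until ctx is cancelled,
// at which point Next returns a non-nil error and the loop exits.
func WatchSubtree(ctx context.Context, kapi client.KeysAPI, dir string) {
	w := kapi.Watcher(dir, &client.WatcherOptions{Recursive: true})
	for {
		resp, err := w.Next(ctx)
		if err != nil {
			log.Printf("watch stopped: %v", err)
			return
		}
		log.Printf("%s %s -> %q", resp.Action, resp.Node.Key, resp.Node.Value)
	}
}
```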
type Response struct {
// Action is the name of the operation that occurred. Possible values
// include get, set, delete, update, create, compareAndSwap,
// compareAndDelete and expire.
Action string `json:"action"`
// Node represents the state of the relevant etcd Node.
Node *Node `json:"node"`
// PrevNode represents the previous state of the Node. PrevNode is non-nil
// only if the Node existed before the action occurred and the action
// caused a change to the Node.
PrevNode *Node `json:"prevNode"`
// Index holds the cluster-level index at the time the Response was generated.
// This index is not tied to the Node(s) contained in this Response.
Index uint64 `json:"-"`
}
type Node struct {
// Key represents the unique location of this Node (e.g. "/foo/bar").
Key string `json:"key"`
// Dir reports whether node describes a directory.
Dir bool `json:"dir,omitempty"`
// Value is the current data stored on this Node. If this Node
// is a directory, Value will be empty.
Value string `json:"value"`
// Nodes holds the children of this Node, only if this Node is a directory.
	// This slice will be arbitrarily deep (children, grandchildren, great-
	// grandchildren, etc.) if a recursive Get or Watch request was made.
Nodes []*Node `json:"nodes"`
// CreatedIndex is the etcd index at-which this Node was created.
CreatedIndex uint64 `json:"createdIndex"`
// ModifiedIndex is the etcd index at-which this Node was last modified.
ModifiedIndex uint64 `json:"modifiedIndex"`
// Expiration is the server side expiration time of the key.
Expiration *time.Time `json:"expiration,omitempty"`
	// TTL is the time to live of the key, in seconds.
TTL int64 `json:"ttl,omitempty"`
}
func (n *Node) String() string {
return fmt.Sprintf("{Key: %s, CreatedIndex: %d, ModifiedIndex: %d, TTL: %d}", n.Key, n.CreatedIndex, n.ModifiedIndex, n.TTL)
}
// TTLDuration returns the Node's TTL as a time.Duration object
func (n *Node) TTLDuration() time.Duration {
return time.Duration(n.TTL) * time.Second
}
type httpKeysAPI struct {
client httpClient
prefix string
}
func (k *httpKeysAPI) Set(ctx context.Context, key, val string, opts *SetOptions) (*Response, error) {
act := &setAction{
Prefix: k.prefix,
Key: key,
Value: val,
}
if opts != nil {
act.PrevValue = opts.PrevValue
act.PrevIndex = opts.PrevIndex
act.PrevExist = opts.PrevExist
act.TTL = opts.TTL
act.Dir = opts.Dir
}
resp, body, err := k.client.Do(ctx, act)
if err != nil {
return nil, err
}
return unmarshalHTTPResponse(resp.StatusCode, resp.Header, body)
}
func (k *httpKeysAPI) Create(ctx context.Context, key, val string) (*Response, error) {
return k.Set(ctx, key, val, &SetOptions{PrevExist: PrevNoExist})
}
func (k *httpKeysAPI) CreateInOrder(ctx context.Context, dir, val string, opts *CreateInOrderOptions) (*Response, error) {
act := &createInOrderAction{
Prefix: k.prefix,
Dir: dir,
Value: val,
}
if opts != nil {
act.TTL = opts.TTL
}
resp, body, err := k.client.Do(ctx, act)
if err != nil {
return nil, err
}
return unmarshalHTTPResponse(resp.StatusCode, resp.Header, body)
}
func (k *httpKeysAPI) Update(ctx context.Context, key, val string) (*Response, error) {
return k.Set(ctx, key, val, &SetOptions{PrevExist: PrevExist})
}
func (k *httpKeysAPI) Delete(ctx context.Context, key string, opts *DeleteOptions) (*Response, error) {
act := &deleteAction{
Prefix: k.prefix,
Key: key,
}
if opts != nil {
act.PrevValue = opts.PrevValue
act.PrevIndex = opts.PrevIndex
act.Dir = opts.Dir
act.Recursive = opts.Recursive
}
resp, body, err := k.client.Do(ctx, act)
if err != nil {
return nil, err
}
return unmarshalHTTPResponse(resp.StatusCode, resp.Header, body)
}
func (k *httpKeysAPI) Get(ctx context.Context, key string, opts *GetOptions) (*Response, error) {
act := &getAction{
Prefix: k.prefix,
Key: key,
}
if opts != nil {
act.Recursive = opts.Recursive
act.Sorted = opts.Sort
act.Quorum = opts.Quorum
}
resp, body, err := k.client.Do(ctx, act)
if err != nil {
return nil, err
}
return unmarshalHTTPResponse(resp.StatusCode, resp.Header, body)
}
func (k *httpKeysAPI) Watcher(key string, opts *WatcherOptions) Watcher {
act := waitAction{
Prefix: k.prefix,
Key: key,
}
if opts != nil {
act.Recursive = opts.Recursive
if opts.AfterIndex > 0 {
act.WaitIndex = opts.AfterIndex + 1
}
}
return &httpWatcher{
client: k.client,
nextWait: act,
}
}
type httpWatcher struct {
client httpClient
nextWait waitAction
}
func (hw *httpWatcher) Next(ctx context.Context) (*Response, error) {
for {
httpresp, body, err := hw.client.Do(ctx, &hw.nextWait)
if err != nil {
return nil, err
}
resp, err := unmarshalHTTPResponse(httpresp.StatusCode, httpresp.Header, body)
if err != nil {
if err == ErrEmptyBody {
continue
}
return nil, err
}
hw.nextWait.WaitIndex = resp.Node.ModifiedIndex + 1
return resp, nil
}
}
// v2KeysURL forms a URL representing the location of a key.
// The endpoint argument represents the base URL of an etcd
// server. The prefix is the path needed to route from the
// provided endpoint's path to the root of the keys API
// (typically "/v2/keys").
func v2KeysURL(ep url.URL, prefix, key string) *url.URL {
// We concatenate all parts together manually. We cannot use
	// path.Join because it does not preserve the trailing slash.
	// We call CanonicalURLPath to further clean up the path.
if prefix != "" && prefix[0] != '/' {
prefix = "/" + prefix
}
if key != "" && key[0] != '/' {
key = "/" + key
}
ep.Path = pathutil.CanonicalURLPath(ep.Path + prefix + key)
return &ep
}
type getAction struct {
Prefix string
Key string
Recursive bool
Sorted bool
Quorum bool
}
func (g *getAction) HTTPRequest(ep url.URL) *http.Request {
u := v2KeysURL(ep, g.Prefix, g.Key)
params := u.Query()
params.Set("recursive", strconv.FormatBool(g.Recursive))
params.Set("sorted", strconv.FormatBool(g.Sorted))
params.Set("quorum", strconv.FormatBool(g.Quorum))
u.RawQuery = params.Encode()
req, _ := http.NewRequest("GET", u.String(), nil)
return req
}
type waitAction struct {
Prefix string
Key string
WaitIndex uint64
Recursive bool
}
func (w *waitAction) HTTPRequest(ep url.URL) *http.Request {
u := v2KeysURL(ep, w.Prefix, w.Key)
params := u.Query()
params.Set("wait", "true")
params.Set("waitIndex", strconv.FormatUint(w.WaitIndex, 10))
params.Set("recursive", strconv.FormatBool(w.Recursive))
u.RawQuery = params.Encode()
req, _ := http.NewRequest("GET", u.String(), nil)
return req
}
type setAction struct {
Prefix string
Key string
Value string
PrevValue string
PrevIndex uint64
PrevExist PrevExistType
TTL time.Duration
Dir bool
}
func (a *setAction) HTTPRequest(ep url.URL) *http.Request {
u := v2KeysURL(ep, a.Prefix, a.Key)
params := u.Query()
form := url.Values{}
// we're either creating a directory or setting a key
if a.Dir {
params.Set("dir", strconv.FormatBool(a.Dir))
} else {
// These options are only valid for setting a key
if a.PrevValue != "" {
params.Set("prevValue", a.PrevValue)
}
form.Add("value", a.Value)
}
// Options which apply to both setting a key and creating a dir
if a.PrevIndex != 0 {
params.Set("prevIndex", strconv.FormatUint(a.PrevIndex, 10))
}
if a.PrevExist != PrevIgnore {
params.Set("prevExist", string(a.PrevExist))
}
if a.TTL > 0 {
form.Add("ttl", strconv.FormatUint(uint64(a.TTL.Seconds()), 10))
}
u.RawQuery = params.Encode()
body := strings.NewReader(form.Encode())
req, _ := http.NewRequest("PUT", u.String(), body)
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
return req
}
type deleteAction struct {
Prefix string
Key string
PrevValue string
PrevIndex uint64
Dir bool
Recursive bool
}
func (a *deleteAction) HTTPRequest(ep url.URL) *http.Request {
u := v2KeysURL(ep, a.Prefix, a.Key)
params := u.Query()
if a.PrevValue != "" {
params.Set("prevValue", a.PrevValue)
}
if a.PrevIndex != 0 {
params.Set("prevIndex", strconv.FormatUint(a.PrevIndex, 10))
}
if a.Dir {
params.Set("dir", "true")
}
if a.Recursive {
params.Set("recursive", "true")
}
u.RawQuery = params.Encode()
req, _ := http.NewRequest("DELETE", u.String(), nil)
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
return req
}
type createInOrderAction struct {
Prefix string
Dir string
Value string
TTL time.Duration
}
func (a *createInOrderAction) HTTPRequest(ep url.URL) *http.Request {
u := v2KeysURL(ep, a.Prefix, a.Dir)
form := url.Values{}
form.Add("value", a.Value)
if a.TTL > 0 {
form.Add("ttl", strconv.FormatUint(uint64(a.TTL.Seconds()), 10))
}
body := strings.NewReader(form.Encode())
req, _ := http.NewRequest("POST", u.String(), body)
req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
return req
}
func unmarshalHTTPResponse(code int, header http.Header, body []byte) (res *Response, err error) {
switch code {
case http.StatusOK, http.StatusCreated:
if len(body) == 0 {
return nil, ErrEmptyBody
}
res, err = unmarshalSuccessfulKeysResponse(header, body)
default:
err = unmarshalFailedKeysResponse(body)
}
return
}
func unmarshalSuccessfulKeysResponse(header http.Header, body []byte) (*Response, error) {
var res Response
err := codec.NewDecoderBytes(body, new(codec.JsonHandle)).Decode(&res)
if err != nil {
return nil, ErrInvalidJSON
}
if header.Get("X-Etcd-Index") != "" {
res.Index, err = strconv.ParseUint(header.Get("X-Etcd-Index"), 10, 64)
if err != nil {
return nil, err
}
}
return &res, nil
}
func unmarshalFailedKeysResponse(body []byte) error {
var etcdErr Error
if err := json.Unmarshal(body, &etcdErr); err != nil {
return ErrInvalidJSON
}
return etcdErr
}

View file

@ -0,0 +1,272 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
import (
"bytes"
"encoding/json"
"fmt"
"net/http"
"net/url"
"path"
"golang.org/x/net/context"
"github.com/coreos/etcd/pkg/types"
)
var (
defaultV2MembersPrefix = "/v2/members"
)
type Member struct {
// ID is the unique identifier of this Member.
ID string `json:"id"`
// Name is a human-readable, non-unique identifier of this Member.
Name string `json:"name"`
// PeerURLs represents the HTTP(S) endpoints this Member uses to
// participate in etcd's consensus protocol.
PeerURLs []string `json:"peerURLs"`
// ClientURLs represents the HTTP(S) endpoints on which this Member
// serves it's client-facing APIs.
ClientURLs []string `json:"clientURLs"`
}
type memberCollection []Member
func (c *memberCollection) UnmarshalJSON(data []byte) error {
d := struct {
Members []Member
}{}
if err := json.Unmarshal(data, &d); err != nil {
return err
}
if d.Members == nil {
*c = make([]Member, 0)
return nil
}
*c = d.Members
return nil
}
type memberCreateOrUpdateRequest struct {
PeerURLs types.URLs
}
func (m *memberCreateOrUpdateRequest) MarshalJSON() ([]byte, error) {
s := struct {
PeerURLs []string `json:"peerURLs"`
}{
PeerURLs: make([]string, len(m.PeerURLs)),
}
for i, u := range m.PeerURLs {
s.PeerURLs[i] = u.String()
}
return json.Marshal(&s)
}
// NewMembersAPI constructs a new MembersAPI that uses HTTP to
// interact with etcd's membership API.
func NewMembersAPI(c Client) MembersAPI {
return &httpMembersAPI{
client: c,
}
}
type MembersAPI interface {
// List enumerates the current cluster membership.
List(ctx context.Context) ([]Member, error)
// Add instructs etcd to accept a new Member into the cluster.
Add(ctx context.Context, peerURL string) (*Member, error)
	// Remove removes an existing Member from the cluster.
Remove(ctx context.Context, mID string) error
// Update instructs etcd to update an existing Member in the cluster.
Update(ctx context.Context, mID string, peerURLs []string) error
}
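
A sketch of driving MembersAPI (the `PrintMembers` helper is hypothetical; `c` is assumed to be a Client built with `client.New`):

```go
package example

import (
	"fmt"

	"golang.org/x/net/context"

	"github.com/coreos/etcd/client"
)

// PrintMembers lists the current cluster membership and prints each
// member's peer and client URLs.
func PrintMembers(ctx context.Context, c client.Client) error {
	mapi := client.NewMembersAPI(c)
	members, err := mapi.List(ctx)
	if err != nil {
		return err
	}
	for _, m := range members {
		fmt.Printf("%s (%s): peers=%v clients=%v\n", m.ID, m.Name, m.PeerURLs, m.ClientURLs)
	}
	return nil
}
```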
type httpMembersAPI struct {
client httpClient
}
func (m *httpMembersAPI) List(ctx context.Context) ([]Member, error) {
req := &membersAPIActionList{}
resp, body, err := m.client.Do(ctx, req)
if err != nil {
return nil, err
}
if err := assertStatusCode(resp.StatusCode, http.StatusOK); err != nil {
return nil, err
}
var mCollection memberCollection
if err := json.Unmarshal(body, &mCollection); err != nil {
return nil, err
}
return []Member(mCollection), nil
}
func (m *httpMembersAPI) Add(ctx context.Context, peerURL string) (*Member, error) {
urls, err := types.NewURLs([]string{peerURL})
if err != nil {
return nil, err
}
req := &membersAPIActionAdd{peerURLs: urls}
resp, body, err := m.client.Do(ctx, req)
if err != nil {
return nil, err
}
if err := assertStatusCode(resp.StatusCode, http.StatusCreated, http.StatusConflict); err != nil {
return nil, err
}
if resp.StatusCode != http.StatusCreated {
var merr membersError
if err := json.Unmarshal(body, &merr); err != nil {
return nil, err
}
return nil, merr
}
var memb Member
if err := json.Unmarshal(body, &memb); err != nil {
return nil, err
}
return &memb, nil
}
func (m *httpMembersAPI) Update(ctx context.Context, memberID string, peerURLs []string) error {
urls, err := types.NewURLs(peerURLs)
if err != nil {
return err
}
req := &membersAPIActionUpdate{peerURLs: urls, memberID: memberID}
resp, body, err := m.client.Do(ctx, req)
if err != nil {
return err
}
if err := assertStatusCode(resp.StatusCode, http.StatusNoContent, http.StatusNotFound, http.StatusConflict); err != nil {
return err
}
if resp.StatusCode != http.StatusNoContent {
var merr membersError
if err := json.Unmarshal(body, &merr); err != nil {
return err
}
return merr
}
return nil
}
func (m *httpMembersAPI) Remove(ctx context.Context, memberID string) error {
req := &membersAPIActionRemove{memberID: memberID}
resp, _, err := m.client.Do(ctx, req)
if err != nil {
return err
}
return assertStatusCode(resp.StatusCode, http.StatusNoContent, http.StatusGone)
}
type membersAPIActionList struct{}
func (l *membersAPIActionList) HTTPRequest(ep url.URL) *http.Request {
u := v2MembersURL(ep)
req, _ := http.NewRequest("GET", u.String(), nil)
return req
}
type membersAPIActionRemove struct {
memberID string
}
func (d *membersAPIActionRemove) HTTPRequest(ep url.URL) *http.Request {
u := v2MembersURL(ep)
u.Path = path.Join(u.Path, d.memberID)
req, _ := http.NewRequest("DELETE", u.String(), nil)
return req
}
type membersAPIActionAdd struct {
peerURLs types.URLs
}
func (a *membersAPIActionAdd) HTTPRequest(ep url.URL) *http.Request {
u := v2MembersURL(ep)
m := memberCreateOrUpdateRequest{PeerURLs: a.peerURLs}
b, _ := json.Marshal(&m)
req, _ := http.NewRequest("POST", u.String(), bytes.NewReader(b))
req.Header.Set("Content-Type", "application/json")
return req
}
type membersAPIActionUpdate struct {
memberID string
peerURLs types.URLs
}
func (a *membersAPIActionUpdate) HTTPRequest(ep url.URL) *http.Request {
u := v2MembersURL(ep)
m := memberCreateOrUpdateRequest{PeerURLs: a.peerURLs}
u.Path = path.Join(u.Path, a.memberID)
b, _ := json.Marshal(&m)
req, _ := http.NewRequest("PUT", u.String(), bytes.NewReader(b))
req.Header.Set("Content-Type", "application/json")
return req
}
func assertStatusCode(got int, want ...int) (err error) {
for _, w := range want {
if w == got {
return nil
}
}
return fmt.Errorf("unexpected status code %d", got)
}
// v2MembersURL adds the necessary path to the provided endpoint
// to route requests to the default v2 members API.
func v2MembersURL(ep url.URL) *url.URL {
ep.Path = path.Join(ep.Path, defaultV2MembersPrefix)
return &ep
}
type membersError struct {
Message string `json:"message"`
Code int `json:"-"`
}
func (e membersError) Error() string {
return e.Message
}

View file

@ -0,0 +1,65 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package client
import (
"fmt"
"net"
"net/url"
)
var (
// indirection for testing
lookupSRV = net.LookupSRV
)
type srvDiscover struct{}
// NewSRVDiscover constructs a new Discoverer that uses the stdlib to look up SRV records.
func NewSRVDiscover() Discoverer {
return &srvDiscover{}
}
// Discover looks up the etcd servers for the domain.
func (d *srvDiscover) Discover(domain string) ([]string, error) {
var urls []*url.URL
updateURLs := func(service, scheme string) error {
_, addrs, err := lookupSRV(service, "tcp", domain)
if err != nil {
return err
}
for _, srv := range addrs {
urls = append(urls, &url.URL{
Scheme: scheme,
Host: net.JoinHostPort(srv.Target, fmt.Sprintf("%d", srv.Port)),
})
}
return nil
}
errHTTPS := updateURLs("etcd-server-ssl", "https")
errHTTP := updateURLs("etcd-server", "http")
if errHTTPS != nil && errHTTP != nil {
return nil, fmt.Errorf("dns lookup errors: %s and %s", errHTTPS, errHTTP)
}
endpoints := make([]string, len(urls))
for i := range urls {
endpoints[i] = urls[i].String()
}
return endpoints, nil
}
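
For illustration, SRV discovery is typically wrapped like this (the helper name and the domain are hypothetical):

```go
package example

import "github.com/coreos/etcd/client"

// EndpointsFromDNS resolves the _etcd-server-ssl._tcp and _etcd-server._tcp
// SRV records for domain and returns the derived endpoints, e.g.
// "https://host:port", ready to be used as client endpoints.
func EndpointsFromDNS(domain string) ([]string, error) {
	return client.NewSRVDiscover().Discover(domain)
}
```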

View file

@ -0,0 +1,29 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package pathutil
import "path"
// CanonicalURLPath returns the canonical url path for p, which follows the rules:
// 1. the path always starts with "/"
// 2. replace multiple slashes with a single slash
// 3. replace each '.' '..' path name element with equivalent one
// 4. keep the trailing slash
// The function is borrowed from stdlib http.cleanPath in server.go.
func CanonicalURLPath(p string) string {
if p == "" {
return "/"
}
if p[0] != '/' {
p = "/" + p
}
np := path.Clean(p)
// path.Clean removes trailing slash except for root,
// put the trailing slash back if necessary.
if p[len(p)-1] == '/' && np != "/" {
np += "/"
}
return np
}
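
A small sketch of what CanonicalURLPath produces (the import path assumes the usual github.com/coreos/etcd/pkg/pathutil location in this vendored tree):

```go
package main

import (
	"fmt"

	"github.com/coreos/etcd/pkg/pathutil"
)

func main() {
	// Duplicate slashes collapse, "." and ".." elements are resolved,
	// a leading "/" is added, and a trailing "/" is kept.
	for _, p := range []string{"", "foo//bar/", "/a/b/../c", "/a/./b/"} {
		fmt.Printf("%q -> %q\n", p, pathutil.CanonicalURLPath(p))
	}
	// "" -> "/", "foo//bar/" -> "/foo/bar/", "/a/b/../c" -> "/a/c", "/a/./b/" -> "/a/b/"
}
```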

View file

@ -0,0 +1,41 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package types
import (
"strconv"
)
// ID represents a generic identifier which is canonically
// stored as a uint64 but is typically represented as a
// base-16 string for input/output
type ID uint64
func (i ID) String() string {
return strconv.FormatUint(uint64(i), 16)
}
// IDFromString attempts to create an ID from a base-16 string.
func IDFromString(s string) (ID, error) {
i, err := strconv.ParseUint(s, 16, 64)
return ID(i), err
}
// IDSlice implements the sort interface
type IDSlice []ID
func (p IDSlice) Len() int { return len(p) }
func (p IDSlice) Less(i, j int) bool { return uint64(p[i]) < uint64(p[j]) }
func (p IDSlice) Swap(i, j int) { p[i], p[j] = p[j], p[i] }
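
A short sketch of the base-16 round trip (the member ID shown is a made-up example):

```go
package main

import (
	"fmt"
	"log"

	"github.com/coreos/etcd/pkg/types"
)

func main() {
	id, err := types.IDFromString("fd422379fda50e48")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(id)         // prints the same hex string back via ID.String
	fmt.Println(uint64(id)) // the underlying decimal representation
}
```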

View file

@ -0,0 +1,178 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package types
import (
"reflect"
"sort"
"sync"
)
type Set interface {
Add(string)
Remove(string)
Contains(string) bool
Equals(Set) bool
Length() int
Values() []string
Copy() Set
Sub(Set) Set
}
func NewUnsafeSet(values ...string) *unsafeSet {
set := &unsafeSet{make(map[string]struct{})}
for _, v := range values {
set.Add(v)
}
return set
}
func NewThreadsafeSet(values ...string) *tsafeSet {
us := NewUnsafeSet(values...)
return &tsafeSet{us, sync.RWMutex{}}
}
type unsafeSet struct {
d map[string]struct{}
}
// Add adds a new value to the set (no-op if the value is already present)
func (us *unsafeSet) Add(value string) {
us.d[value] = struct{}{}
}
// Remove removes the given value from the set
func (us *unsafeSet) Remove(value string) {
delete(us.d, value)
}
// Contains returns whether the set contains the given value
func (us *unsafeSet) Contains(value string) (exists bool) {
_, exists = us.d[value]
return
}
// ContainsAll returns whether the set contains all given values
func (us *unsafeSet) ContainsAll(values []string) bool {
for _, s := range values {
if !us.Contains(s) {
return false
}
}
return true
}
// Equals returns whether the contents of two sets are identical
func (us *unsafeSet) Equals(other Set) bool {
v1 := sort.StringSlice(us.Values())
v2 := sort.StringSlice(other.Values())
v1.Sort()
v2.Sort()
return reflect.DeepEqual(v1, v2)
}
// Length returns the number of elements in the set
func (us *unsafeSet) Length() int {
return len(us.d)
}
// Values returns the values of the Set in an unspecified order.
func (us *unsafeSet) Values() (values []string) {
values = make([]string, 0)
for val := range us.d {
values = append(values, val)
}
return
}
// Copy creates a new Set containing the values of the first
func (us *unsafeSet) Copy() Set {
cp := NewUnsafeSet()
for val := range us.d {
cp.Add(val)
}
return cp
}
// Sub removes all elements in other from the set
func (us *unsafeSet) Sub(other Set) Set {
oValues := other.Values()
result := us.Copy().(*unsafeSet)
for _, val := range oValues {
if _, ok := result.d[val]; !ok {
continue
}
delete(result.d, val)
}
return result
}
type tsafeSet struct {
us *unsafeSet
m sync.RWMutex
}
func (ts *tsafeSet) Add(value string) {
ts.m.Lock()
defer ts.m.Unlock()
ts.us.Add(value)
}
func (ts *tsafeSet) Remove(value string) {
ts.m.Lock()
defer ts.m.Unlock()
ts.us.Remove(value)
}
func (ts *tsafeSet) Contains(value string) (exists bool) {
ts.m.RLock()
defer ts.m.RUnlock()
return ts.us.Contains(value)
}
func (ts *tsafeSet) Equals(other Set) bool {
ts.m.RLock()
defer ts.m.RUnlock()
return ts.us.Equals(other)
}
func (ts *tsafeSet) Length() int {
ts.m.RLock()
defer ts.m.RUnlock()
return ts.us.Length()
}
func (ts *tsafeSet) Values() (values []string) {
ts.m.RLock()
defer ts.m.RUnlock()
return ts.us.Values()
}
func (ts *tsafeSet) Copy() Set {
ts.m.RLock()
defer ts.m.RUnlock()
usResult := ts.us.Copy().(*unsafeSet)
return &tsafeSet{usResult, sync.RWMutex{}}
}
func (ts *tsafeSet) Sub(other Set) Set {
ts.m.RLock()
defer ts.m.RUnlock()
usResult := ts.us.Sub(other).(*unsafeSet)
return &tsafeSet{usResult, sync.RWMutex{}}
}
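
A usage sketch for the two Set implementations (values are arbitrary; the import path assumes the vendored github.com/coreos/etcd/pkg/types location):

```go
package main

import (
	"fmt"

	"github.com/coreos/etcd/pkg/types"
)

func main() {
	a := types.NewUnsafeSet("a", "b", "c")
	b := types.NewUnsafeSet("b", "c", "d")

	fmt.Println(a.Contains("b"))   // true
	fmt.Println(a.Sub(b).Values()) // [a] (elements of a that are not in b)
	fmt.Println(a.Equals(b))       // false

	// The thread-safe variant exposes the same operations behind an RWMutex.
	ts := types.NewThreadsafeSet("x", "y")
	fmt.Println(ts.Length()) // 2
}
```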

View file

@ -0,0 +1,22 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package types
// Uint64Slice implements the sort interface
type Uint64Slice []uint64
func (p Uint64Slice) Len() int { return len(p) }
func (p Uint64Slice) Less(i, j int) bool { return p[i] < p[j] }
func (p Uint64Slice) Swap(i, j int) { p[i], p[j] = p[j], p[i] }

View file

@ -0,0 +1,74 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package types
import (
"errors"
"fmt"
"net"
"net/url"
"sort"
"strings"
)
type URLs []url.URL
func NewURLs(strs []string) (URLs, error) {
all := make([]url.URL, len(strs))
if len(all) == 0 {
return nil, errors.New("no valid URLs given")
}
for i, in := range strs {
in = strings.TrimSpace(in)
u, err := url.Parse(in)
if err != nil {
return nil, err
}
if u.Scheme != "http" && u.Scheme != "https" {
return nil, fmt.Errorf("URL scheme must be http or https: %s", in)
}
if _, _, err := net.SplitHostPort(u.Host); err != nil {
return nil, fmt.Errorf(`URL address does not have the form "host:port": %s`, in)
}
if u.Path != "" {
return nil, fmt.Errorf("URL must not contain a path: %s", in)
}
all[i] = *u
}
us := URLs(all)
us.Sort()
return us, nil
}
func (us URLs) String() string {
return strings.Join(us.StringSlice(), ",")
}
func (us *URLs) Sort() {
sort.Sort(us)
}
func (us URLs) Len() int { return len(us) }
func (us URLs) Less(i, j int) bool { return us[i].String() < us[j].String() }
func (us URLs) Swap(i, j int) { us[i], us[j] = us[j], us[i] }
func (us URLs) StringSlice() []string {
out := make([]string, len(us))
for i := range us {
out[i] = us[i].String()
}
return out
}
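
A sketch of NewURLs in action (addresses are placeholders):

```go
package main

import (
	"fmt"
	"log"

	"github.com/coreos/etcd/pkg/types"
)

func main() {
	// NewURLs validates the scheme and host:port form, then sorts the URLs.
	us, err := types.NewURLs([]string{"http://10.0.0.2:2380", "http://10.0.0.1:2380"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(us.String()) // http://10.0.0.1:2380,http://10.0.0.2:2380
}
```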

View file

@ -0,0 +1,75 @@
// Copyright 2015 CoreOS, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
package types
import (
"fmt"
"net/url"
"sort"
"strings"
)
type URLsMap map[string]URLs
// NewURLsMap returns a URLsMap instantiated from the given string,
// which consists of discovery-formatted names-to-URLs, like:
// mach0=http://1.1.1.1:2380,mach0=http://2.2.2.2:2380,mach1=http://3.3.3.3:2380,mach2=http://4.4.4.4:2380
func NewURLsMap(s string) (URLsMap, error) {
cl := URLsMap{}
v, err := url.ParseQuery(strings.Replace(s, ",", "&", -1))
if err != nil {
return nil, err
}
for name, urls := range v {
if len(urls) == 0 || urls[0] == "" {
return nil, fmt.Errorf("empty URL given for %q", name)
}
us, err := NewURLs(urls)
if err != nil {
return nil, err
}
cl[name] = us
}
return cl, nil
}
// String turns the URLsMap into discovery-formatted name-to-URLs, sorted by name.
func (c URLsMap) String() string {
pairs := make([]string, 0)
for name, urls := range c {
for _, url := range urls {
pairs = append(pairs, fmt.Sprintf("%s=%s", name, url.String()))
}
}
sort.Strings(pairs)
return strings.Join(pairs, ",")
}
// URLs returns a list of all URLs.
// The returned list is sorted in ascending lexicographical order.
func (c URLsMap) URLs() []string {
urls := make([]string, 0)
for _, us := range c {
for _, u := range us {
urls = append(urls, u.String())
}
}
sort.Strings(urls)
return urls
}
func (c URLsMap) Len() int {
return len(c)
}

View file

@ -1,23 +0,0 @@
package etcd
// Add a new directory with a random etcd-generated key under the given path.
func (c *Client) AddChildDir(key string, ttl uint64) (*Response, error) {
raw, err := c.post(key, "", ttl)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
// Add a new file with a random etcd-generated key under the given path.
func (c *Client) AddChild(key string, value string, ttl uint64) (*Response, error) {
raw, err := c.post(key, value, ttl)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}

View file

@ -1,481 +0,0 @@
package etcd
import (
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"io"
"io/ioutil"
"math/rand"
"net"
"net/http"
"net/url"
"os"
"path"
"strings"
"time"
)
// See SetConsistency for how to use these constants.
const (
// Using strings rather than iota because the consistency level
// could be persisted to disk, so it'd be better to use
// human-readable values.
STRONG_CONSISTENCY = "STRONG"
WEAK_CONSISTENCY = "WEAK"
)
const (
defaultBufferSize = 10
)
func init() {
rand.Seed(int64(time.Now().Nanosecond()))
}
type Config struct {
CertFile string `json:"certFile"`
KeyFile string `json:"keyFile"`
CaCertFile []string `json:"caCertFiles"`
DialTimeout time.Duration `json:"timeout"`
Consistency string `json:"consistency"`
}
type credentials struct {
username string
password string
}
type Client struct {
config Config `json:"config"`
cluster *Cluster `json:"cluster"`
httpClient *http.Client
credentials *credentials
transport *http.Transport
persistence io.Writer
cURLch chan string
// CheckRetry can be used to control the policy for failed requests
// and modify the cluster if needed.
// The client calls it before sending requests again, and
// stops retrying if CheckRetry returns some error. The cases that
// this function needs to handle include no response and unexpected
// http status code of response.
// If CheckRetry is nil, client will call the default one
// `DefaultCheckRetry`.
// Argument cluster is the etcd.Cluster object that these requests have been made on.
// Argument numReqs is the number of http.Requests that have been made so far.
// Argument lastResp is the http.Responses from the last request.
// Argument err is the reason of the failure.
CheckRetry func(cluster *Cluster, numReqs int,
lastResp http.Response, err error) error
}
// NewClient creates a basic client that is configured to be used
// with the given machine list.
func NewClient(machines []string) *Client {
config := Config{
// default timeout is one second
DialTimeout: time.Second,
Consistency: WEAK_CONSISTENCY,
}
client := &Client{
cluster: NewCluster(machines),
config: config,
}
client.initHTTPClient()
client.saveConfig()
return client
}
// NewTLSClient creates a basic client with TLS configuration
func NewTLSClient(machines []string, cert, key, caCert string) (*Client, error) {
// overwrite the default machine to use https
if len(machines) == 0 {
machines = []string{"https://127.0.0.1:4001"}
}
config := Config{
// default timeout is one second
DialTimeout: time.Second,
Consistency: WEAK_CONSISTENCY,
CertFile: cert,
KeyFile: key,
CaCertFile: make([]string, 0),
}
client := &Client{
cluster: NewCluster(machines),
config: config,
}
err := client.initHTTPSClient(cert, key)
if err != nil {
return nil, err
}
err = client.AddRootCA(caCert)
client.saveConfig()
return client, nil
}
// NewClientFromFile creates a client from a given file path.
// The given file is expected to use the JSON format.
func NewClientFromFile(fpath string) (*Client, error) {
fi, err := os.Open(fpath)
if err != nil {
return nil, err
}
defer func() {
if err := fi.Close(); err != nil {
panic(err)
}
}()
return NewClientFromReader(fi)
}
// NewClientFromReader creates a Client configured from a given reader.
// The configuration is expected to use the JSON format.
func NewClientFromReader(reader io.Reader) (*Client, error) {
c := new(Client)
b, err := ioutil.ReadAll(reader)
if err != nil {
return nil, err
}
err = json.Unmarshal(b, c)
if err != nil {
return nil, err
}
if c.config.CertFile == "" {
c.initHTTPClient()
} else {
err = c.initHTTPSClient(c.config.CertFile, c.config.KeyFile)
}
if err != nil {
return nil, err
}
for _, caCert := range c.config.CaCertFile {
if err := c.AddRootCA(caCert); err != nil {
return nil, err
}
}
return c, nil
}
// Override the Client's HTTP Transport object
func (c *Client) SetTransport(tr *http.Transport) {
c.httpClient.Transport = tr
c.transport = tr
}
func (c *Client) SetCredentials(username, password string) {
c.credentials = &credentials{username, password}
}
func (c *Client) Close() {
c.transport.DisableKeepAlives = true
c.transport.CloseIdleConnections()
}
// initHTTPClient initializes an HTTP client for the etcd client
func (c *Client) initHTTPClient() {
c.transport = &http.Transport{
Dial: c.dial,
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
},
}
c.httpClient = &http.Client{Transport: c.transport}
}
// initHTTPSClient initializes an HTTPS client for the etcd client
func (c *Client) initHTTPSClient(cert, key string) error {
if cert == "" || key == "" {
return errors.New("Require both cert and key path")
}
tlsCert, err := tls.LoadX509KeyPair(cert, key)
if err != nil {
return err
}
tlsConfig := &tls.Config{
Certificates: []tls.Certificate{tlsCert},
InsecureSkipVerify: true,
}
tr := &http.Transport{
TLSClientConfig: tlsConfig,
Dial: c.dial,
}
c.httpClient = &http.Client{Transport: tr}
return nil
}
// SetPersistence sets a writer to which the config will be
// written every time it's changed.
func (c *Client) SetPersistence(writer io.Writer) {
c.persistence = writer
}
// SetConsistency changes the consistency level of the client.
//
// When consistency is set to STRONG_CONSISTENCY, all requests,
// including GET, are sent to the leader. This means that, assuming
// the absence of leader failures, GET requests are guaranteed to see
// the changes made by previous requests.
//
// When consistency is set to WEAK_CONSISTENCY, other requests
// are still sent to the leader, but GET requests are sent to a
// random server from the server pool. This reduces the read
// load on the leader, but it's not guaranteed that the GET requests
// will see changes made by previous requests (they might have not
// yet been committed on non-leader servers).
func (c *Client) SetConsistency(consistency string) error {
if !(consistency == STRONG_CONSISTENCY || consistency == WEAK_CONSISTENCY) {
return errors.New("The argument must be either STRONG_CONSISTENCY or WEAK_CONSISTENCY.")
}
c.config.Consistency = consistency
return nil
}
// Sets the DialTimeout value
func (c *Client) SetDialTimeout(d time.Duration) {
c.config.DialTimeout = d
}
// AddRootCA adds a root CA cert for the etcd client
func (c *Client) AddRootCA(caCert string) error {
if c.httpClient == nil {
return errors.New("Client has not been initialized yet!")
}
certBytes, err := ioutil.ReadFile(caCert)
if err != nil {
return err
}
tr, ok := c.httpClient.Transport.(*http.Transport)
if !ok {
panic("AddRootCA(): Transport type assert should not fail")
}
if tr.TLSClientConfig.RootCAs == nil {
caCertPool := x509.NewCertPool()
ok = caCertPool.AppendCertsFromPEM(certBytes)
if ok {
tr.TLSClientConfig.RootCAs = caCertPool
}
tr.TLSClientConfig.InsecureSkipVerify = false
} else {
ok = tr.TLSClientConfig.RootCAs.AppendCertsFromPEM(certBytes)
}
if !ok {
err = errors.New("Unable to load caCert")
}
c.config.CaCertFile = append(c.config.CaCertFile, caCert)
c.saveConfig()
return err
}
// SetCluster updates cluster information using the given machine list.
func (c *Client) SetCluster(machines []string) bool {
success := c.internalSyncCluster(machines)
return success
}
func (c *Client) GetCluster() []string {
return c.cluster.Machines
}
// SyncCluster updates the cluster information using the internal machine list.
func (c *Client) SyncCluster() bool {
return c.internalSyncCluster(c.cluster.Machines)
}
// internalSyncCluster syncs cluster information using the given machine list.
func (c *Client) internalSyncCluster(machines []string) bool {
for _, machine := range machines {
httpPath := c.createHttpPath(machine, path.Join(version, "members"))
resp, err := c.httpClient.Get(httpPath)
if err != nil {
// try another machine in the cluster
continue
}
if resp.StatusCode != http.StatusOK { // fall-back to old endpoint
httpPath := c.createHttpPath(machine, path.Join(version, "machines"))
resp, err := c.httpClient.Get(httpPath)
if err != nil {
// try another machine in the cluster
continue
}
b, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
// try another machine in the cluster
continue
}
// update Machines List
c.cluster.updateFromStr(string(b))
} else {
b, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
// try another machine in the cluster
continue
}
var mCollection memberCollection
if err := json.Unmarshal(b, &mCollection); err != nil {
// try another machine
continue
}
urls := make([]string, 0)
for _, m := range mCollection {
urls = append(urls, m.ClientURLs...)
}
// update Machines List
c.cluster.updateFromStr(strings.Join(urls, ","))
}
logger.Debug("sync.machines ", c.cluster.Machines)
c.saveConfig()
return true
}
return false
}
// createHttpPath creates a complete HTTP URL.
// serverName should contain both the host name and a port number, if any.
func (c *Client) createHttpPath(serverName string, _path string) string {
u, err := url.Parse(serverName)
if err != nil {
panic(err)
}
u.Path = path.Join(u.Path, _path)
if u.Scheme == "" {
u.Scheme = "http"
}
return u.String()
}
// dial attempts to open a TCP connection to the provided address, explicitly
// enabling keep-alives with a one-second interval.
func (c *Client) dial(network, addr string) (net.Conn, error) {
conn, err := net.DialTimeout(network, addr, c.config.DialTimeout)
if err != nil {
return nil, err
}
tcpConn, ok := conn.(*net.TCPConn)
if !ok {
return nil, errors.New("Failed type-assertion of net.Conn as *net.TCPConn")
}
// Keep TCP alive to check whether or not the remote machine is down
if err = tcpConn.SetKeepAlive(true); err != nil {
return nil, err
}
if err = tcpConn.SetKeepAlivePeriod(time.Second); err != nil {
return nil, err
}
return tcpConn, nil
}
func (c *Client) OpenCURL() {
c.cURLch = make(chan string, defaultBufferSize)
}
func (c *Client) CloseCURL() {
c.cURLch = nil
}
func (c *Client) sendCURL(command string) {
go func() {
select {
case c.cURLch <- command:
default:
}
}()
}
func (c *Client) RecvCURL() string {
return <-c.cURLch
}
// saveConfig saves the current config using c.persistence.
func (c *Client) saveConfig() error {
if c.persistence != nil {
b, err := json.Marshal(c)
if err != nil {
return err
}
_, err = c.persistence.Write(b)
if err != nil {
return err
}
}
return nil
}
// MarshalJSON implements the Marshaller interface
// as defined by the standard JSON package.
func (c *Client) MarshalJSON() ([]byte, error) {
b, err := json.Marshal(struct {
Config Config `json:"config"`
Cluster *Cluster `json:"cluster"`
}{
Config: c.config,
Cluster: c.cluster,
})
if err != nil {
return nil, err
}
return b, nil
}
// UnmarshalJSON implements the Unmarshaller interface
// as defined by the standard JSON package.
func (c *Client) UnmarshalJSON(b []byte) error {
temp := struct {
Config Config `json:"config"`
Cluster *Cluster `json:"cluster"`
}{}
err := json.Unmarshal(b, &temp)
if err != nil {
return err
}
c.cluster = temp.Cluster
c.config = temp.Config
return nil
}
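
For contrast with the new etcd/client API above, the removed go-etcd client was typically used like this; a sketch only, assuming the usual github.com/coreos/go-etcd/etcd import path and a local endpoint, and using only calls defined in the removed files:

```go
package main

import (
	"log"

	"github.com/coreos/go-etcd/etcd"
)

func main() {
	// The old client defaulted to weak consistency and the 4001 client port.
	c := etcd.NewClient([]string{"http://127.0.0.1:4001"})

	// Read a key (sorted, non-recursive), then delete it.
	resp, err := c.Get("/foo", true, false)
	if err != nil {
		log.Fatal(err)
	}
	log.Println(resp.Node.Value)

	if _, err := c.Delete("/foo", false); err != nil {
		log.Fatal(err)
	}
}
```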

View file

@ -1,37 +0,0 @@
package etcd
import (
"math/rand"
"strings"
)
type Cluster struct {
Leader string `json:"leader"`
Machines []string `json:"machines"`
picked int
}
func NewCluster(machines []string) *Cluster {
// if an empty slice was sent in then just assume HTTP 4001 on localhost
if len(machines) == 0 {
machines = []string{"http://127.0.0.1:4001"}
}
// default leader and machines
return &Cluster{
Leader: "",
Machines: machines,
picked: rand.Intn(len(machines)),
}
}
func (cl *Cluster) failure() { cl.picked = rand.Intn(len(cl.Machines)) }
func (cl *Cluster) pick() string { return cl.Machines[cl.picked] }
func (cl *Cluster) updateFromStr(machines string) {
cl.Machines = strings.Split(machines, ",")
for i := range cl.Machines {
cl.Machines[i] = strings.TrimSpace(cl.Machines[i])
}
cl.picked = rand.Intn(len(cl.Machines))
}

View file

@ -1,34 +0,0 @@
package etcd
import "fmt"
func (c *Client) CompareAndDelete(key string, prevValue string, prevIndex uint64) (*Response, error) {
raw, err := c.RawCompareAndDelete(key, prevValue, prevIndex)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
func (c *Client) RawCompareAndDelete(key string, prevValue string, prevIndex uint64) (*RawResponse, error) {
if prevValue == "" && prevIndex == 0 {
return nil, fmt.Errorf("You must give either prevValue or prevIndex.")
}
options := Options{}
if prevValue != "" {
options["prevValue"] = prevValue
}
if prevIndex != 0 {
options["prevIndex"] = prevIndex
}
raw, err := c.delete(key, options)
if err != nil {
return nil, err
}
return raw, err
}

View file

@ -1,36 +0,0 @@
package etcd
import "fmt"
func (c *Client) CompareAndSwap(key string, value string, ttl uint64,
prevValue string, prevIndex uint64) (*Response, error) {
raw, err := c.RawCompareAndSwap(key, value, ttl, prevValue, prevIndex)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
func (c *Client) RawCompareAndSwap(key string, value string, ttl uint64,
prevValue string, prevIndex uint64) (*RawResponse, error) {
if prevValue == "" && prevIndex == 0 {
return nil, fmt.Errorf("You must give either prevValue or prevIndex.")
}
options := Options{}
if prevValue != "" {
options["prevValue"] = prevValue
}
if prevIndex != 0 {
options["prevIndex"] = prevIndex
}
raw, err := c.put(key, value, ttl, options)
if err != nil {
return nil, err
}
return raw, err
}

View file

@ -1,55 +0,0 @@
package etcd
import (
"fmt"
"io/ioutil"
"log"
"strings"
)
var logger *etcdLogger
func SetLogger(l *log.Logger) {
logger = &etcdLogger{l}
}
func GetLogger() *log.Logger {
return logger.log
}
type etcdLogger struct {
log *log.Logger
}
func (p *etcdLogger) Debug(args ...interface{}) {
msg := "DEBUG: " + fmt.Sprint(args...)
p.log.Println(msg)
}
func (p *etcdLogger) Debugf(f string, args ...interface{}) {
msg := "DEBUG: " + fmt.Sprintf(f, args...)
// Append newline if necessary
if !strings.HasSuffix(msg, "\n") {
msg = msg + "\n"
}
p.log.Print(msg)
}
func (p *etcdLogger) Warning(args ...interface{}) {
msg := "WARNING: " + fmt.Sprint(args...)
p.log.Println(msg)
}
func (p *etcdLogger) Warningf(f string, args ...interface{}) {
msg := "WARNING: " + fmt.Sprintf(f, args...)
// Append newline if necessary
if !strings.HasSuffix(msg, "\n") {
msg = msg + "\n"
}
p.log.Print(msg)
}
func init() {
// Default logger uses the go default log.
SetLogger(log.New(ioutil.Discard, "go-etcd", log.LstdFlags))
}

View file

@ -1,40 +0,0 @@
package etcd
// Delete deletes the given key.
//
// When recursive is set to false, if the key points to a
// directory the method will fail.
//
// When recursive is set to true, if the key points to a file,
// the file will be deleted; if the key points to a directory,
// then everything under the directory (including all child directories)
// will be deleted.
func (c *Client) Delete(key string, recursive bool) (*Response, error) {
raw, err := c.RawDelete(key, recursive, false)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
// DeleteDir deletes an empty directory or a key value pair
func (c *Client) DeleteDir(key string) (*Response, error) {
raw, err := c.RawDelete(key, false, true)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
func (c *Client) RawDelete(key string, recursive bool, dir bool) (*RawResponse, error) {
ops := Options{
"recursive": recursive,
"dir": dir,
}
return c.delete(key, ops)
}

View file

@ -1,49 +0,0 @@
package etcd
import (
"encoding/json"
"fmt"
)
const (
ErrCodeEtcdNotReachable = 501
ErrCodeUnhandledHTTPStatus = 502
)
var (
errorMap = map[int]string{
ErrCodeEtcdNotReachable: "All the given peers are not reachable",
}
)
type EtcdError struct {
ErrorCode int `json:"errorCode"`
Message string `json:"message"`
Cause string `json:"cause,omitempty"`
Index uint64 `json:"index"`
}
func (e EtcdError) Error() string {
return fmt.Sprintf("%v: %v (%v) [%v]", e.ErrorCode, e.Message, e.Cause, e.Index)
}
func newError(errorCode int, cause string, index uint64) *EtcdError {
return &EtcdError{
ErrorCode: errorCode,
Message: errorMap[errorCode],
Cause: cause,
Index: index,
}
}
func handleError(b []byte) error {
etcdErr := new(EtcdError)
err := json.Unmarshal(b, etcdErr)
if err != nil {
logger.Warningf("cannot unmarshal etcd error: %v", err)
return err
}
return etcdErr
}

View file

@ -1,32 +0,0 @@
package etcd
// Get gets the file or directory associated with the given key.
// If the key points to a directory, files and directories under
// it will be returned in sorted or unsorted order, depending on
// the sort flag.
// If recursive is set to false, contents under child directories
// will not be returned.
// If recursive is set to true, all the contents will be returned.
func (c *Client) Get(key string, sort, recursive bool) (*Response, error) {
raw, err := c.RawGet(key, sort, recursive)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
func (c *Client) RawGet(key string, sort, recursive bool) (*RawResponse, error) {
var q bool
if c.config.Consistency == STRONG_CONSISTENCY {
q = true
}
ops := Options{
"recursive": recursive,
"sorted": sort,
"quorum": q,
}
return c.get(key, ops)
}

View file

@ -1,30 +0,0 @@
package etcd
import "encoding/json"
type Member struct {
ID string `json:"id"`
Name string `json:"name"`
PeerURLs []string `json:"peerURLs"`
ClientURLs []string `json:"clientURLs"`
}
type memberCollection []Member
func (c *memberCollection) UnmarshalJSON(data []byte) error {
d := struct {
Members []Member
}{}
if err := json.Unmarshal(data, &d); err != nil {
return err
}
if d.Members == nil {
*c = make([]Member, 0)
return nil
}
*c = d.Members
return nil
}

View file

@ -1,72 +0,0 @@
package etcd
import (
"fmt"
"net/url"
"reflect"
)
type Options map[string]interface{}
// An internally-used data structure that represents a mapping
// between valid options and their kinds
type validOptions map[string]reflect.Kind
// Valid options for GET, PUT, POST, DELETE
// Using CAPITALIZED_UNDERSCORE to emphasize that these
// values are meant to be used as constants.
var (
VALID_GET_OPTIONS = validOptions{
"recursive": reflect.Bool,
"quorum": reflect.Bool,
"sorted": reflect.Bool,
"wait": reflect.Bool,
"waitIndex": reflect.Uint64,
}
VALID_PUT_OPTIONS = validOptions{
"prevValue": reflect.String,
"prevIndex": reflect.Uint64,
"prevExist": reflect.Bool,
"dir": reflect.Bool,
}
VALID_POST_OPTIONS = validOptions{}
VALID_DELETE_OPTIONS = validOptions{
"recursive": reflect.Bool,
"dir": reflect.Bool,
"prevValue": reflect.String,
"prevIndex": reflect.Uint64,
}
)
// Convert options to a string of HTML parameters
func (ops Options) toParameters(validOps validOptions) (string, error) {
p := "?"
values := url.Values{}
if ops == nil {
return "", nil
}
for k, v := range ops {
// Check if the given option is valid (that it exists)
kind := validOps[k]
if kind == reflect.Invalid {
return "", fmt.Errorf("Invalid option: %v", k)
}
// Check if the given option is of the valid type
t := reflect.TypeOf(v)
if kind != t.Kind() {
return "", fmt.Errorf("Option %s should be of %v kind, not of %v kind.",
k, kind, t.Kind())
}
values.Set(k, fmt.Sprintf("%v", v))
}
p += values.Encode()
return p, nil
}

View file

@ -1,405 +0,0 @@
package etcd
import (
"errors"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/url"
"path"
"strings"
"sync"
"time"
)
// Errors introduced by handling requests
var (
ErrRequestCancelled = errors.New("sending request is cancelled")
)
type RawRequest struct {
Method string
RelativePath string
Values url.Values
Cancel <-chan bool
}
// NewRawRequest returns a new RawRequest
func NewRawRequest(method, relativePath string, values url.Values, cancel <-chan bool) *RawRequest {
return &RawRequest{
Method: method,
RelativePath: relativePath,
Values: values,
Cancel: cancel,
}
}
// getCancelable issues a cancelable GET request
func (c *Client) getCancelable(key string, options Options,
cancel <-chan bool) (*RawResponse, error) {
logger.Debugf("get %s [%s]", key, c.cluster.pick())
p := keyToPath(key)
str, err := options.toParameters(VALID_GET_OPTIONS)
if err != nil {
return nil, err
}
p += str
req := NewRawRequest("GET", p, nil, cancel)
resp, err := c.SendRequest(req)
if err != nil {
return nil, err
}
return resp, nil
}
// get issues a GET request
func (c *Client) get(key string, options Options) (*RawResponse, error) {
return c.getCancelable(key, options, nil)
}
// put issues a PUT request
func (c *Client) put(key string, value string, ttl uint64,
options Options) (*RawResponse, error) {
logger.Debugf("put %s, %s, ttl: %d, [%s]", key, value, ttl, c.cluster.pick())
p := keyToPath(key)
str, err := options.toParameters(VALID_PUT_OPTIONS)
if err != nil {
return nil, err
}
p += str
req := NewRawRequest("PUT", p, buildValues(value, ttl), nil)
resp, err := c.SendRequest(req)
if err != nil {
return nil, err
}
return resp, nil
}
// post issues a POST request
func (c *Client) post(key string, value string, ttl uint64) (*RawResponse, error) {
logger.Debugf("post %s, %s, ttl: %d, [%s]", key, value, ttl, c.cluster.pick())
p := keyToPath(key)
req := NewRawRequest("POST", p, buildValues(value, ttl), nil)
resp, err := c.SendRequest(req)
if err != nil {
return nil, err
}
return resp, nil
}
// delete issues a DELETE request
func (c *Client) delete(key string, options Options) (*RawResponse, error) {
logger.Debugf("delete %s [%s]", key, c.cluster.pick())
p := keyToPath(key)
str, err := options.toParameters(VALID_DELETE_OPTIONS)
if err != nil {
return nil, err
}
p += str
req := NewRawRequest("DELETE", p, nil, nil)
resp, err := c.SendRequest(req)
if err != nil {
return nil, err
}
return resp, nil
}
// SendRequest sends an HTTP request and returns a Response as defined by etcd
func (c *Client) SendRequest(rr *RawRequest) (*RawResponse, error) {
var req *http.Request
var resp *http.Response
var httpPath string
var err error
var respBody []byte
var numReqs = 1
checkRetry := c.CheckRetry
if checkRetry == nil {
checkRetry = DefaultCheckRetry
}
cancelled := make(chan bool, 1)
reqLock := new(sync.Mutex)
if rr.Cancel != nil {
cancelRoutine := make(chan bool)
defer close(cancelRoutine)
go func() {
select {
case <-rr.Cancel:
cancelled <- true
logger.Debug("send.request is cancelled")
case <-cancelRoutine:
return
}
// Repeat canceling request until this thread is stopped
// because we have no idea about whether it succeeds.
for {
reqLock.Lock()
c.httpClient.Transport.(*http.Transport).CancelRequest(req)
reqLock.Unlock()
select {
case <-time.After(100 * time.Millisecond):
case <-cancelRoutine:
return
}
}
}()
}
// If we connect to a follower and consistency is required, retry until
// we connect to a leader
sleep := 25 * time.Millisecond
maxSleep := time.Second
for attempt := 0; ; attempt++ {
if attempt > 0 {
select {
case <-cancelled:
return nil, ErrRequestCancelled
case <-time.After(sleep):
sleep = sleep * 2
if sleep > maxSleep {
sleep = maxSleep
}
}
}
logger.Debug("Connecting to etcd: attempt ", attempt+1, " for ", rr.RelativePath)
// get httpPath if not set
if httpPath == "" {
httpPath = c.getHttpPath(rr.RelativePath)
}
// Return a cURL command if curlChan is set
if c.cURLch != nil {
command := fmt.Sprintf("curl -X %s %s", rr.Method, httpPath)
for key, value := range rr.Values {
command += fmt.Sprintf(" -d %s=%s", key, value[0])
}
if c.credentials != nil {
command += fmt.Sprintf(" -u %s", c.credentials.username)
}
c.sendCURL(command)
}
logger.Debug("send.request.to ", httpPath, " | method ", rr.Method)
req, err := func() (*http.Request, error) {
reqLock.Lock()
defer reqLock.Unlock()
if rr.Values == nil {
if req, err = http.NewRequest(rr.Method, httpPath, nil); err != nil {
return nil, err
}
} else {
body := strings.NewReader(rr.Values.Encode())
if req, err = http.NewRequest(rr.Method, httpPath, body); err != nil {
return nil, err
}
req.Header.Set("Content-Type",
"application/x-www-form-urlencoded; param=value")
}
return req, nil
}()
if err != nil {
return nil, err
}
if c.credentials != nil {
req.SetBasicAuth(c.credentials.username, c.credentials.password)
}
resp, err = c.httpClient.Do(req)
// clear previous httpPath
httpPath = ""
defer func() {
if resp != nil {
resp.Body.Close()
}
}()
// If the request was cancelled, return ErrRequestCancelled directly
select {
case <-cancelled:
return nil, ErrRequestCancelled
default:
}
numReqs++
// network error, change a machine!
if err != nil {
logger.Debug("network error: ", err.Error())
lastResp := http.Response{}
if checkErr := checkRetry(c.cluster, numReqs, lastResp, err); checkErr != nil {
return nil, checkErr
}
c.cluster.failure()
continue
}
// if there is no error, it should receive response
logger.Debug("recv.response.from ", httpPath)
if validHttpStatusCode[resp.StatusCode] {
// try to read byte code and break the loop
respBody, err = ioutil.ReadAll(resp.Body)
if err == nil {
logger.Debug("recv.success ", httpPath)
break
}
// A ReadAll error may be caused by a cancelled request
select {
case <-cancelled:
return nil, ErrRequestCancelled
default:
}
if err == io.ErrUnexpectedEOF {
// underlying connection was closed prematurely, probably by timeout
// TODO: empty body or unexpectedEOF can cause http.Transport to get hosed;
// this allows the client to detect that and take evasive action. Need
// to revisit once code.google.com/p/go/issues/detail?id=8648 gets fixed.
respBody = []byte{}
break
}
}
if resp.StatusCode == http.StatusTemporaryRedirect {
u, err := resp.Location()
if err != nil {
logger.Warning(err)
} else {
// set httpPath for following redirection
httpPath = u.String()
}
resp.Body.Close()
continue
}
if checkErr := checkRetry(c.cluster, numReqs, *resp,
errors.New("Unexpected HTTP status code")); checkErr != nil {
return nil, checkErr
}
resp.Body.Close()
}
r := &RawResponse{
StatusCode: resp.StatusCode,
Body: respBody,
Header: resp.Header,
}
return r, nil
}
// DefaultCheckRetry defines the retrying behaviour for bad HTTP requests
// If we have retried 2 * the number of machines, stop retrying.
// If status code is InternalServerError, sleep for 200ms.
func DefaultCheckRetry(cluster *Cluster, numReqs int, lastResp http.Response,
err error) error {
if isEmptyResponse(lastResp) {
// always retry if it failed to get response from one machine
return err
} else if !shouldRetry(lastResp) {
body := []byte("nil")
if lastResp.Body != nil {
if b, err := ioutil.ReadAll(lastResp.Body); err == nil {
body = b
}
}
errStr := fmt.Sprintf("unhandled http status [%s] with body [%s]", http.StatusText(lastResp.StatusCode), body)
return newError(ErrCodeUnhandledHTTPStatus, errStr, 0)
}
if numReqs > 2*len(cluster.Machines) {
errStr := fmt.Sprintf("failed to propose on members %v twice [last error: %v]", cluster.Machines, err)
return newError(ErrCodeEtcdNotReachable, errStr, 0)
}
if shouldRetry(lastResp) {
// sleep some time and expect the leader election to finish
time.Sleep(time.Millisecond * 200)
}
logger.Warning("bad response status code", lastResp.StatusCode)
return nil
}
func isEmptyResponse(r http.Response) bool { return r.StatusCode == 0 }
// shouldRetry returns whether the response deserves retry.
func shouldRetry(r http.Response) bool {
// TODO: only retry when the cluster is in leader election
// We cannot do it exactly because etcd doesn't support it well.
return r.StatusCode == http.StatusInternalServerError
}
func (c *Client) getHttpPath(s ...string) string {
fullPath := c.cluster.pick() + "/" + version
for _, seg := range s {
fullPath = fullPath + "/" + seg
}
return fullPath
}
// buildValues builds a url.Values map according to the given value and ttl
func buildValues(value string, ttl uint64) url.Values {
v := url.Values{}
if value != "" {
v.Set("value", value)
}
if ttl > 0 {
v.Set("ttl", fmt.Sprintf("%v", ttl))
}
return v
}
// convert key string to http path excluding version, including URL escaping
// for example: key[foo] -> path[keys/foo]
// key[/%z] -> path[keys/%25z]
// key[/] -> path[keys/]
func keyToPath(key string) string {
// URL-escape our key, except for slashes
p := strings.Replace(url.QueryEscape(path.Join("keys", key)), "%2F", "/", -1)
// corner case: if key is "/" or "//" etc.
// path join will clear the trailing "/"
// we need to add it back
if p == "keys" {
p = "keys/"
}
return p
}

View file

@ -1,89 +0,0 @@
package etcd
import (
"encoding/json"
"net/http"
"strconv"
"time"
)
const (
rawResponse = iota
normalResponse
)
type responseType int
type RawResponse struct {
StatusCode int
Body []byte
Header http.Header
}
var (
validHttpStatusCode = map[int]bool{
http.StatusCreated: true,
http.StatusOK: true,
http.StatusBadRequest: true,
http.StatusNotFound: true,
http.StatusPreconditionFailed: true,
http.StatusForbidden: true,
}
)
// Unmarshal parses RawResponse and stores the result in Response
func (rr *RawResponse) Unmarshal() (*Response, error) {
if rr.StatusCode != http.StatusOK && rr.StatusCode != http.StatusCreated {
return nil, handleError(rr.Body)
}
resp := new(Response)
err := json.Unmarshal(rr.Body, resp)
if err != nil {
return nil, err
}
// attach index and term to response
resp.EtcdIndex, _ = strconv.ParseUint(rr.Header.Get("X-Etcd-Index"), 10, 64)
resp.RaftIndex, _ = strconv.ParseUint(rr.Header.Get("X-Raft-Index"), 10, 64)
resp.RaftTerm, _ = strconv.ParseUint(rr.Header.Get("X-Raft-Term"), 10, 64)
return resp, nil
}
type Response struct {
Action string `json:"action"`
Node *Node `json:"node"`
PrevNode *Node `json:"prevNode,omitempty"`
EtcdIndex uint64 `json:"etcdIndex"`
RaftIndex uint64 `json:"raftIndex"`
RaftTerm uint64 `json:"raftTerm"`
}
type Node struct {
Key string `json:"key,omitempty"`
Value string `json:"value,omitempty"`
Dir bool `json:"dir,omitempty"`
Expiration *time.Time `json:"expiration,omitempty"`
TTL int64 `json:"ttl,omitempty"`
Nodes Nodes `json:"nodes,omitempty"`
ModifiedIndex uint64 `json:"modifiedIndex,omitempty"`
CreatedIndex uint64 `json:"createdIndex,omitempty"`
}
type Nodes []*Node
// interfaces for sorting
func (ns Nodes) Len() int {
return len(ns)
}
func (ns Nodes) Less(i, j int) bool {
return ns[i].Key < ns[j].Key
}
func (ns Nodes) Swap(i, j int) {
ns[i], ns[j] = ns[j], ns[i]
}

View file

@ -1,137 +0,0 @@
package etcd
// Set sets the given key to the given value.
// It will create a new key value pair or replace the old one.
// It will not replace an existing directory.
func (c *Client) Set(key string, value string, ttl uint64) (*Response, error) {
raw, err := c.RawSet(key, value, ttl)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
// SetDir sets the given key to a directory.
// It will create a new directory or replace the old key value pair by a directory.
// It will not replace an existing directory.
func (c *Client) SetDir(key string, ttl uint64) (*Response, error) {
raw, err := c.RawSetDir(key, ttl)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
// CreateDir creates a directory. It succeeds only if
// the given key does not yet exist.
func (c *Client) CreateDir(key string, ttl uint64) (*Response, error) {
raw, err := c.RawCreateDir(key, ttl)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
// UpdateDir updates the given directory. It succeeds only if the
// given key already exists.
func (c *Client) UpdateDir(key string, ttl uint64) (*Response, error) {
raw, err := c.RawUpdateDir(key, ttl)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
// Create creates a file with the given value under the given key. It succeeds
// only if the given key does not yet exist.
func (c *Client) Create(key string, value string, ttl uint64) (*Response, error) {
raw, err := c.RawCreate(key, value, ttl)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
// CreateInOrder creates a file with a key that's guaranteed to be higher than other
// keys in the given directory. It is useful for creating queues.
func (c *Client) CreateInOrder(dir string, value string, ttl uint64) (*Response, error) {
raw, err := c.RawCreateInOrder(dir, value, ttl)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
// Update updates the given key to the given value. It succeeds only if the
// given key already exists.
func (c *Client) Update(key string, value string, ttl uint64) (*Response, error) {
raw, err := c.RawUpdate(key, value, ttl)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
func (c *Client) RawUpdateDir(key string, ttl uint64) (*RawResponse, error) {
ops := Options{
"prevExist": true,
"dir": true,
}
return c.put(key, "", ttl, ops)
}
func (c *Client) RawCreateDir(key string, ttl uint64) (*RawResponse, error) {
ops := Options{
"prevExist": false,
"dir": true,
}
return c.put(key, "", ttl, ops)
}
func (c *Client) RawSet(key string, value string, ttl uint64) (*RawResponse, error) {
return c.put(key, value, ttl, nil)
}
func (c *Client) RawSetDir(key string, ttl uint64) (*RawResponse, error) {
ops := Options{
"dir": true,
}
return c.put(key, "", ttl, ops)
}
func (c *Client) RawUpdate(key string, value string, ttl uint64) (*RawResponse, error) {
ops := Options{
"prevExist": true,
}
return c.put(key, value, ttl, ops)
}
func (c *Client) RawCreate(key string, value string, ttl uint64) (*RawResponse, error) {
ops := Options{
"prevExist": false,
}
return c.put(key, value, ttl, ops)
}
func (c *Client) RawCreateInOrder(dir string, value string, ttl uint64) (*RawResponse, error) {
return c.post(dir, value, ttl)
}

View file

@ -1,6 +0,0 @@
package etcd
const (
version = "v2"
packageVersion = "v2.0.0"
)

View file

@ -1,103 +0,0 @@
package etcd
import (
"errors"
)
// Errors introduced by the Watch command.
var (
ErrWatchStoppedByUser = errors.New("Watch stopped by the user via stop channel")
)
// If recursive is set to true the watch returns the first change under the given
// prefix since the given index.
//
// If recursive is set to false the watch returns the first change to the given key
// since the given index.
//
// To watch for the latest change, set waitIndex = 0.
//
// If a receiver channel is given, it will be a long-term watch. Watch will block on the
// channel. After someone receives from the channel, it will go on to watch that
// prefix. If a stop channel is given, the client can close the long-term watch using
// the stop channel.
func (c *Client) Watch(prefix string, waitIndex uint64, recursive bool,
receiver chan *Response, stop chan bool) (*Response, error) {
logger.Debugf("watch %s [%s]", prefix, c.cluster.Leader)
if receiver == nil {
raw, err := c.watchOnce(prefix, waitIndex, recursive, stop)
if err != nil {
return nil, err
}
return raw.Unmarshal()
}
defer close(receiver)
for {
raw, err := c.watchOnce(prefix, waitIndex, recursive, stop)
if err != nil {
return nil, err
}
resp, err := raw.Unmarshal()
if err != nil {
return nil, err
}
waitIndex = resp.Node.ModifiedIndex + 1
receiver <- resp
}
}
func (c *Client) RawWatch(prefix string, waitIndex uint64, recursive bool,
receiver chan *RawResponse, stop chan bool) (*RawResponse, error) {
logger.Debugf("rawWatch %s [%s]", prefix, c.cluster.Leader)
if receiver == nil {
return c.watchOnce(prefix, waitIndex, recursive, stop)
}
for {
raw, err := c.watchOnce(prefix, waitIndex, recursive, stop)
if err != nil {
return nil, err
}
resp, err := raw.Unmarshal()
if err != nil {
return nil, err
}
waitIndex = resp.Node.ModifiedIndex + 1
receiver <- raw
}
}
// helper func
// return when there is change under the given prefix
func (c *Client) watchOnce(key string, waitIndex uint64, recursive bool, stop chan bool) (*RawResponse, error) {
options := Options{
"wait": true,
}
if waitIndex > 0 {
options["waitIndex"] = waitIndex
}
if recursive {
options["recursive"] = true
}
resp, err := c.getCancelable(key, options, stop)
if err == ErrRequestCancelled {
return nil, ErrWatchStoppedByUser
}
return resp, err
}

View file

@ -19,7 +19,7 @@ before_install:
before_script:
- script/travis_consul.sh 0.5.2
- script/travis_etcd.sh 2.0.11
- script/travis_etcd.sh 2.2.0
- script/travis_zk.sh 3.4.6
script:

View file

@ -6,90 +6,37 @@
`libkv` provides a `Go` native library to store metadata.
The goal of `libkv` is to abstract common store operations for multiple Key/Value backends and offer the same experience no matter which one of the backend you want to use.
The goal of `libkv` is to abstract common store operations for multiple distributed and/or local Key/Value store backends.
For example, you can use it to store your metadata or for service discovery to register machines and endpoints inside your cluster.
You can also easily implement a generic *Leader Election* on top of it (see the [swarm/leadership](https://github.com/docker/swarm/tree/master/leadership) package).
As of now, `libkv` offers support for `Consul`, `Etcd`, `Zookeeper` and `BoltDB`.
As of now, `libkv` offers support for `Consul`, `Etcd`, `Zookeeper` (**Distributed** store) and `BoltDB` (**Local** store).
## Example of usage
## Usage
### Create a new store and use Put/Get
`libkv` is meant to be used as an abstraction layer over existing distributed Key/Value stores. It is especially useful if you plan to support `consul`, `etcd` and `zookeeper` using the same codebase.
```go
package main
It is ideal if you plan to write something in Go that should support:
import (
"fmt"
"time"
- A simple metadata storage, distributed or local
- A lightweight discovery service for your nodes
- A distributed lock mechanism
"github.com/docker/libkv"
"github.com/docker/libkv/store"
"github.com/docker/libkv/store/consul"
log "github.com/Sirupsen/logrus"
)
func init() {
// Register consul store to libkv
consul.Register()
}
func main() {
client := "localhost:8500"
// Initialize a new store with consul
kv, err := libkv.NewStore(
store.CONSUL, // or "consul"
[]string{client},
&store.Config{
ConnectionTimeout: 10*time.Second,
},
)
if err != nil {
log.Fatal("Cannot create store consul")
}
key := "foo"
err = kv.Put(key, []byte("bar"), nil)
if err != nil {
log.Error("Error trying to put value at key `", key, "`")
}
pair, err := kv.Get(key)
if err != nil {
log.Error("Error trying accessing value at key `", key, "`")
}
log.Info("value: ", string(pair.Value))
}
```
You can find other usage examples for `libkv` under the `docker/swarm` or `docker/libnetwork` repositories.
You can find examples of usage for `libkv` in `docs/examples.go`. Optionally you can also take a look at the `docker/swarm` or `docker/libnetwork` repositories, which use `docker/libkv` for all the use cases listed above.
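For a quick orientation before looking at `docs/examples.go`, here is a minimal sketch of a Put/Get round trip. It reuses the consul backend, the `Register()` call and the `store.Config` fields from the example previously embedded in this README, and is meant to be illustrative rather than canonical.

```go
package main

import (
	"fmt"
	"log"
	"time"

	"github.com/docker/libkv"
	"github.com/docker/libkv/store"
	"github.com/docker/libkv/store/consul"
)

func main() {
	// Register the consul backend with libkv before creating a store.
	consul.Register()

	kv, err := libkv.NewStore(
		store.CONSUL, // or "consul"
		[]string{"localhost:8500"},
		&store.Config{ConnectionTimeout: 10 * time.Second},
	)
	if err != nil {
		log.Fatal("cannot create consul store: ", err)
	}

	// Write a key, then read it back.
	if err := kv.Put("foo", []byte("bar"), nil); err != nil {
		log.Fatal("put failed: ", err)
	}
	pair, err := kv.Get("foo")
	if err != nil {
		log.Fatal("get failed: ", err)
	}
	fmt.Println("value:", string(pair.Value))
}
```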
## Supported versions
`libkv` supports:
- Consul version >= `0.5.1` because it uses Sessions with `Delete` behavior for the use of `TTLs` (mimics zookeeper's Ephemeral node support), If you don't plan to use `TTLs`: you can use Consul version `0.4.0+`.
- Etcd version >= `2.0` because it uses the `2.0.0` branch of the `coreos/go-etcd` client, this might change in the future as the support for `APIv3` comes along.
- Zookeeper version >= `3.4.5`. Although this might work with previous version but this remains untested as of now.
- Consul versions >= `0.5.1`, because it uses Sessions with `Delete` behavior for the use of `TTLs` (mimics zookeeper's Ephemeral node support). If you don't plan to use `TTLs`, you can use Consul version `0.4.0+`.
- Etcd versions >= `2.0`, because it uses the new `coreos/etcd/client`; this might change in the future as support for `APIv3` comes along and adds more capabilities.
- Zookeeper versions >= `3.4.5`. This might work with previous versions, but that remains untested as of now.
- Boltdb, which shouldn't be subject to any version dependencies.
## TLS
## Interface
The etcd backend supports etcd servers that require TLS Client Authentication. Zookeeper and Consul support are planned. This feature is somewhat experimental and the store.ClientTLSConfig struct may change to accommodate the additional backends.
## Warning
There are a few consistency issues with *etcd*, on the notion of *directory* and *key*. If you want to use the three KV backends in an interchangeable way, you should only put data on leaves (see [Issue 20](https://github.com/docker/libkv/issues/20) for more details). This will be fixed when *etcd* API v3 will be made available (API v3 drops the *directory/key* distinction). An official release for *libkv* with a tag is likely to come after this issue being marked as **solved**.
Other than that, you should expect the same experience for basic operations like `Get`/`Put`, etc.
Calls like `WatchTree` may return different events (or number of events) depending on the backend (for now, `Etcd` and `Consul` will likely return more events than `Zookeeper` that you should triage properly). Although you should be able to use it successfully to watch on events in an interchangeable way (see the **swarm/leadership** or **swarm/discovery** packages in **docker/swarm**).
## Create a new storage backend
A new **storage backend** should include those calls:
A **storage backend** in `libkv` should implement (fully or partially) this interface:
```go
type Store interface {
@ -108,14 +55,46 @@ type Store interface {
}
```
You can get inspiration from existing backends to create a new one. This interface could be subject to change to improve the experience of using the library and of contributing a new backend.
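To make that concrete, here is a hedged, partial sketch of a new backend. The method signatures mirror the ones used by the etcd and boltdb backends touched in this change, and calls the backend cannot honor return `store.ErrCallNotSupported` (as boltdb now does); the package name and the in-memory map are purely illustrative.

```go
package mystore

import (
	"sync"

	"github.com/docker/libkv/store"
)

// InMemory is a toy, process-local backend used only to illustrate the shape
// of the Store interface; it is not part of libkv.
type InMemory struct {
	mu   sync.Mutex
	data map[string][]byte
}

func New() *InMemory {
	return &InMemory{data: make(map[string][]byte)}
}

// Put stores the value under key; WriteOptions are ignored in this sketch.
func (s *InMemory) Put(key string, value []byte, opts *store.WriteOptions) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.data[key] = value
	return nil
}

// Get returns the KVPair stored under key, or store.ErrKeyNotFound.
func (s *InMemory) Get(key string) (*store.KVPair, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	v, ok := s.data[key]
	if !ok {
		return nil, store.ErrKeyNotFound
	}
	return &store.KVPair{Key: key, Value: v}, nil
}

// Watch is not supported by this local backend.
func (s *InMemory) Watch(key string, stopCh <-chan struct{}) (<-chan *store.KVPair, error) {
	return nil, store.ErrCallNotSupported
}
```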
## Compatibility matrix
Backend drivers in `libkv` are generally divided between **local drivers** and **distributed drivers**. Distributed backends offer enhanced capabilities like `Watches` and/or distributed `Locks`.
Local drivers are usually used as a complement to the distributed drivers, to store information that only needs to be available locally.
| Calls | Consul | Etcd | Zookeeper | BoltDB |
|-----------------------|:----------:|:------:|:-----------:|:--------:|
| Put | X | X | X | X |
| Get | X | X | X | X |
| Delete | X | X | X | X |
| Exists | X | X | X | X |
| Watch | X | X | X | |
| WatchTree | X | X | X | |
| NewLock (Lock/Unlock) | X | X | X | |
| List | X | X | X | X |
| DeleteTree | X | X | X | X |
| AtomicPut | X | X | X | X |
| Close | X | X | X | X |
## Limitations
Distributed Key/Value stores often have different concepts for managing and formatting keys and their associated values. Even though `libkv` tries to abstract those stores and aim for some consistency, in some cases the abstraction can't be applied easily.
Please refer to `docs/compatibility.md` to see the special cases for cross-backend compatibility.
Other than those special cases, you should expect the same experience for basic operations like `Get`/`Put`, etc.
Calls like `WatchTree` may return different events (or a different number of events) depending on the backend (for now, `Etcd` and `Consul` will likely return more events than `Zookeeper`, which you should triage properly), although you should be able to use it to watch events in an interchangeable way (see the **swarm/leadership** or **swarm/discovery** packages in **docker/swarm**).
## TLS
Only `Consul` and `etcd` support TLS; you should build and provide your own `config.TLS` object to feed the client. Support is planned for `zookeeper`.
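As a hedged sketch of what that can look like with the etcd backend in this change: the `store.Config.TLS` field below is the one consumed by `store/etcd`, while `etcd.Register()`, the `store/etcd` import path, the `store.ETCD` constant and the certificate paths are assumptions made for illustration.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"
	"time"

	"github.com/docker/libkv"
	"github.com/docker/libkv/store"
	"github.com/docker/libkv/store/etcd"
)

func main() {
	// Assumed to exist by analogy with consul.Register().
	etcd.Register()

	// Placeholder certificate material for client authentication.
	cert, err := tls.LoadX509KeyPair("client.crt", "client.key")
	if err != nil {
		log.Fatal(err)
	}
	caPEM, err := ioutil.ReadFile("ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	kv, err := libkv.NewStore(
		store.ETCD, // or "etcd"
		[]string{"etcd-node:2379"},
		&store.Config{
			TLS:               &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
			ConnectionTimeout: 10 * time.Second,
		},
	)
	if err != nil {
		log.Fatal(err)
	}

	if err := kv.Put("tls-check", []byte("ok"), nil); err != nil {
		log.Fatal(err)
	}
}
```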
## Roadmap
- Make the API nicer to use (using `options`)
- Provide more options (`consistency` for example)
- Improve performance (remove extra `Get`/`List` operations)
- Add more exhaustive tests
- Better key formatting
- New backends?
## Contributing
@ -124,4 +103,4 @@ Want to hack on libkv? [Docker's contributions guidelines](https://github.com/do
## Copyright and license
Copyright © 2014-2015 Docker, Inc. All rights reserved, except as follows. Code is released under the Apache 2.0 license. Documentation is licensed to end users under the Creative Commons Attribution 4.0 International License under the terms and conditions set forth in the file "LICENSE.docs". You may obtain a duplicate copy of the same license, titled CC-BY-SA-4.0, at http://creativecommons.org/licenses/by/4.0/.
Copyright © 2014-2015 Docker, Inc. All rights reserved, except as follows. Code is released under the Apache 2.0 license. The README.md file, and files in the "docs" folder are licensed under the Creative Commons Attribution 4.0 International License under the terms and conditions set forth in the file "LICENSE.docs". You may obtain a duplicate copy of the same license, titled CC-BY-SA-4.0, at http://creativecommons.org/licenses/by/4.0/.

View file

@ -1,64 +1,3 @@
// Package libkv provides a Go native library to store metadata.
//
// The goal of libkv is to abstract common store operations for multiple
// Key/Value backends and offer the same experience no matter which one of the
// backend you want to use.
//
// For example, you can use it to store your metadata or for service discovery to
// register machines and endpoints inside your cluster.
//
// As of now, `libkv` offers support for `Consul`, `Etcd` and `Zookeeper`.
//
// ## Example of usage
//
// ### Create a new store and use Put/Get
//
//
// package main
//
// import (
// "fmt"
// "time"
//
// "github.com/docker/libkv"
// "github.com/docker/libkv/store"
// log "github.com/Sirupsen/logrus"
// )
//
// func main() {
// client := "localhost:8500"
//
// // Initialize a new store with consul
// kv, err := libkv.NewStore(
// store.CONSUL, // or "consul"
// []string{client},
// &store.Config{
// ConnectionTimeout: 10*time.Second,
// },
// )
// if err != nil {
// log.Fatal("Cannot create store consul")
// }
//
// key := "foo"
// err = kv.Put(key, []byte("bar"), nil)
// if err != nil {
// log.Error("Error trying to put value at key `", key, "`")
// }
//
// pair, err := kv.Get(key)
// if err != nil {
// log.Error("Error trying accessing value at key `", key, "`")
// }
//
// log.Info("value: ", string(pair.Value))
// }
//
// ##Copyright and license
//
// Code and documentation copyright 2015 Docker, inc. Code released under the
// Apache 2.0 license. Docs released under Creative commons.
//
package libkv
import (
@ -92,7 +31,7 @@ func NewStore(backend store.Backend, addrs []string, options *store.Config) (sto
return init(addrs, options)
}
return nil, fmt.Errorf("%s %s", store.ErrNotSupported.Error(), supportedBackend)
return nil, fmt.Errorf("%s %s", store.ErrBackendNotSupported.Error(), supportedBackend)
}
// AddStore adds a new store backend to libkv

View file

@ -23,8 +23,6 @@ var (
ErrBoltBucketNotFound = errors.New("boltdb bucket doesn't exist")
// ErrBoltBucketOptionMissing is thrown when the boltBucket config option is missing
ErrBoltBucketOptionMissing = errors.New("boltBucket config option missing")
// ErrBoltAPIUnsupported is thrown when an APIs unsupported by BoltDB backend is called
ErrBoltAPIUnsupported = errors.New("API not supported by BoltDB backend")
)
const (
@ -456,15 +454,15 @@ func (b *BoltDB) DeleteTree(keyPrefix string) error {
// NewLock has to be implemented at the library level since it's not supported by BoltDB
func (b *BoltDB) NewLock(key string, options *store.LockOptions) (store.Locker, error) {
return nil, ErrBoltAPIUnsupported
return nil, store.ErrCallNotSupported
}
// Watch has to be implemented at the library level since it's not supported by BoltDB
func (b *BoltDB) Watch(key string, stopCh <-chan struct{}) (<-chan *store.KVPair, error) {
return nil, ErrBoltAPIUnsupported
return nil, store.ErrCallNotSupported
}
// WatchTree has to be implemented at the library level since it's not supported by BoltDB
func (b *BoltDB) WatchTree(directory string, stopCh <-chan struct{}) (<-chan []*store.KVPair, error) {
return nil, ErrBoltAPIUnsupported
return nil, store.ErrCallNotSupported
}

View file

@ -18,12 +18,20 @@ const (
// time to check if the watched key has changed. This
// affects the minimum time it takes to cancel a watch.
DefaultWatchWaitTime = 15 * time.Second
// RenewSessionRetryMax is the number of times we should try
// to renew the session before giving up and throwing an error
RenewSessionRetryMax = 5
)
var (
// ErrMultipleEndpointsUnsupported is thrown when there are
// multiple endpoints specified for Consul
ErrMultipleEndpointsUnsupported = errors.New("consul does not support multiple endpoints")
// ErrSessionRenew is thrown when the session can't be
// renewed because the Consul version does not support sessions
ErrSessionRenew = errors.New("cannot set or renew session for ttl, unable to operate on sessions")
)
// Consul is the receiver type for the
@ -99,7 +107,7 @@ func (s *Consul) normalize(key string) string {
return strings.TrimPrefix(key, "/")
}
func (s *Consul) refreshSession(pair *api.KVPair, ttl time.Duration) error {
func (s *Consul) renewSession(pair *api.KVPair, ttl time.Duration) error {
// Check if there is any previous session with an active TTL
session, err := s.getActiveSession(pair.Key)
if err != nil {
@ -134,10 +142,7 @@ func (s *Consul) refreshSession(pair *api.KVPair, ttl time.Duration) error {
}
_, _, err = s.client.Session().Renew(session, nil)
if err != nil {
return s.refreshSession(pair, ttl)
}
return nil
return err
}
// getActiveSession checks if the key already has
@ -184,10 +189,16 @@ func (s *Consul) Put(key string, value []byte, opts *store.WriteOptions) error {
}
if opts != nil && opts.TTL > 0 {
// Create or refresh the session
err := s.refreshSession(p, opts.TTL)
if err != nil {
return err
// Create or renew a session holding a TTL. Operations on sessions
// are not deterministic: creating or renewing a session can fail
for retry := 1; retry <= RenewSessionRetryMax; retry++ {
err := s.renewSession(p, opts.TTL)
if err == nil {
break
}
if retry == RenewSessionRetryMax {
return ErrSessionRenew
}
}
}

View file

@ -3,12 +3,15 @@ package etcd
import (
"crypto/tls"
"errors"
"log"
"net"
"net/http"
"strings"
"time"
etcd "github.com/coreos/go-etcd/etcd"
"golang.org/x/net/context"
etcd "github.com/coreos/etcd/client"
"github.com/docker/libkv"
"github.com/docker/libkv/store"
)
@ -23,21 +26,21 @@ var (
// Etcd is the receiver type for the
// Store interface
type Etcd struct {
client *etcd.Client
client etcd.KeysAPI
}
type etcdLock struct {
client *etcd.Client
client etcd.KeysAPI
stopLock chan struct{}
stopRenew chan struct{}
key string
value string
last *etcd.Response
ttl uint64
ttl time.Duration
}
const (
periodicSync = 10 * time.Minute
periodicSync = 5 * time.Minute
defaultLockTTL = 20 * time.Second
defaultUpdateTime = 5 * time.Second
)
@ -57,34 +60,36 @@ func New(addrs []string, options *store.Config) (store.Store, error) {
err error
)
// Create the etcd client
if options != nil && options.ClientTLS != nil {
entries = store.CreateEndpoints(addrs, "https")
s.client, err = etcd.NewTLSClient(entries, options.ClientTLS.CertFile, options.ClientTLS.KeyFile, options.ClientTLS.CACertFile)
if err != nil {
return nil, err
}
} else {
entries = store.CreateEndpoints(addrs, "http")
s.client = etcd.NewClient(entries)
entries = store.CreateEndpoints(addrs, "http")
cfg := &etcd.Config{
Endpoints: entries,
Transport: etcd.DefaultTransport,
HeaderTimeoutPerRequest: 3 * time.Second,
}
// Set options
if options != nil {
// Plain TLS config overrides ClientTLS if specified
if options.TLS != nil {
s.setTLS(options.TLS, addrs)
setTLS(cfg, options.TLS, addrs)
}
if options.ConnectionTimeout != 0 {
s.setTimeout(options.ConnectionTimeout)
setTimeout(cfg, options.ConnectionTimeout)
}
}
// Periodic SyncCluster
c, err := etcd.New(*cfg)
if err != nil {
log.Fatal(err)
}
s.client = etcd.NewKeysAPI(c)
// Periodic Cluster Sync
go func() {
for {
s.client.SyncCluster()
time.Sleep(periodicSync)
if err := c.AutoSync(context.Background(), periodicSync); err != nil {
return
}
}
}()
@ -92,9 +97,9 @@ func New(addrs []string, options *store.Config) (store.Store, error) {
}
// setTLS sets the tls configuration given a tls.Config scheme
func (s *Etcd) setTLS(tls *tls.Config, addrs []string) {
func setTLS(cfg *etcd.Config, tls *tls.Config, addrs []string) {
entries := store.CreateEndpoints(addrs, "https")
s.client.SetCluster(entries)
cfg.Endpoints = entries
// Set transport
t := http.Transport{
@ -105,36 +110,46 @@ func (s *Etcd) setTLS(tls *tls.Config, addrs []string) {
TLSHandshakeTimeout: 10 * time.Second,
TLSClientConfig: tls,
}
s.client.SetTransport(&t)
cfg.Transport = &t
}
// setTimeout sets the timeout used for connecting to the store
func (s *Etcd) setTimeout(time time.Duration) {
s.client.SetDialTimeout(time)
func setTimeout(cfg *etcd.Config, time time.Duration) {
cfg.HeaderTimeoutPerRequest = time
}
// createDirectory creates the entire path for a directory
// that does not exist
func (s *Etcd) createDirectory(path string) error {
if _, err := s.client.CreateDir(store.Normalize(path), 10); err != nil {
if etcdError, ok := err.(*etcd.EtcdError); ok {
// Skip key already exists
if etcdError.ErrorCode != 105 {
return err
// Normalize the key for usage in Etcd
func (s *Etcd) normalize(key string) string {
key = store.Normalize(key)
return strings.TrimPrefix(key, "/")
}
// keyNotFound checks on the error returned by the KeysAPI
// to verify if the key exists in the store or not
func keyNotFound(err error) bool {
if err != nil {
if etcdError, ok := err.(etcd.Error); ok {
if etcdError.Code == etcd.ErrorCodeKeyNotFound ||
etcdError.Code == etcd.ErrorCodeNotFile ||
etcdError.Code == etcd.ErrorCodeNotDir {
return true
}
} else {
return err
}
}
return nil
return false
}
// Get the value at "key", returns the last modified index
// to use in conjunction to Atomic calls
// Get the value at "key", returns the last modified
// index to use in conjunction with Atomic calls
func (s *Etcd) Get(key string) (pair *store.KVPair, err error) {
result, err := s.client.Get(store.Normalize(key), false, false)
getOpts := &etcd.GetOptions{
Quorum: true,
}
result, err := s.client.Get(context.Background(), s.normalize(key), getOpts)
if err != nil {
if isKeyNotFoundError(err) {
if keyNotFound(err) {
return nil, store.ErrKeyNotFound
}
return nil, err
@ -151,40 +166,26 @@ func (s *Etcd) Get(key string) (pair *store.KVPair, err error) {
// Put a value at "key"
func (s *Etcd) Put(key string, value []byte, opts *store.WriteOptions) error {
setOpts := &etcd.SetOptions{}
// Default TTL = 0 means no expiration
var ttl uint64
if opts != nil && opts.TTL > 0 {
ttl = uint64(opts.TTL.Seconds())
// Set options
if opts != nil {
setOpts.Dir = opts.IsDir
setOpts.TTL = opts.TTL
}
if _, err := s.client.Set(key, string(value), ttl); err != nil {
if etcdError, ok := err.(*etcd.EtcdError); ok {
// Not a directory
if etcdError.ErrorCode == 104 {
// Remove the last element (the actual key)
// and create the full directory path
err = s.createDirectory(store.GetDirectory(key))
if err != nil {
return err
}
// Now that the directory is created, set the key
if _, err := s.client.Set(key, string(value), ttl); err != nil {
return err
}
}
}
return err
}
return nil
_, err := s.client.Set(context.Background(), s.normalize(key), string(value), setOpts)
return err
}
// Delete a value at "key"
func (s *Etcd) Delete(key string) error {
_, err := s.client.Delete(store.Normalize(key), false)
if isKeyNotFoundError(err) {
opts := &etcd.DeleteOptions{
Recursive: false,
}
_, err := s.client.Delete(context.Background(), s.normalize(key), opts)
if keyNotFound(err) {
return store.ErrKeyNotFound
}
return err
@ -208,130 +209,130 @@ func (s *Etcd) Exists(key string) (bool, error) {
// be sent to the channel. Providing a non-nil stopCh can
// be used to stop watching.
func (s *Etcd) Watch(key string, stopCh <-chan struct{}) (<-chan *store.KVPair, error) {
// Start an etcd watch.
// Note: etcd will send the current value through the channel.
etcdWatchCh := make(chan *etcd.Response)
etcdStopCh := make(chan bool)
go s.client.Watch(store.Normalize(key), 0, false, etcdWatchCh, etcdStopCh)
opts := &etcd.WatcherOptions{Recursive: false}
watcher := s.client.Watcher(s.normalize(key), opts)
// Adapter goroutine: The goal here is to convert whatever
// format etcd is using into our interface.
// watchCh is sending back events to the caller
watchCh := make(chan *store.KVPair)
go func() {
defer close(watchCh)
// Get the current value
current, err := s.Get(key)
pair, err := s.Get(key)
if err != nil {
return
}
// Push the current value through the channel.
watchCh <- current
watchCh <- pair
for {
// Check if the watch was stopped by the caller
select {
case result := <-etcdWatchCh:
if result == nil || result.Node == nil {
// Something went wrong, exit
// No need to stop the chan as the watch already ended
return
}
watchCh <- &store.KVPair{
Key: key,
Value: []byte(result.Node.Value),
LastIndex: result.Node.ModifiedIndex,
}
case <-stopCh:
etcdStopCh <- true
return
default:
}
result, err := watcher.Next(context.Background())
if err != nil {
return
}
watchCh <- &store.KVPair{
Key: key,
Value: []byte(result.Node.Value),
LastIndex: result.Node.ModifiedIndex,
}
}
}()
return watchCh, nil
}
// WatchTree watches for changes on a "directory"
// It returns a channel that will receive changes or pass
// on errors. Upon creating a watch, the current child values
// will be sent to the channel .Providing a non-nil stopCh can
// will be sent to the channel. Providing a non-nil stopCh can
// be used to stop watching.
func (s *Etcd) WatchTree(directory string, stopCh <-chan struct{}) (<-chan []*store.KVPair, error) {
// Start the watch
etcdWatchCh := make(chan *etcd.Response)
etcdStopCh := make(chan bool)
go s.client.Watch(store.Normalize(directory), 0, true, etcdWatchCh, etcdStopCh)
watchOpts := &etcd.WatcherOptions{Recursive: true}
watcher := s.client.Watcher(s.normalize(directory), watchOpts)
// Adapter goroutine: The goal here is to convert whatever
// format etcd is using into our interface.
// watchCh is sending back events to the caller
watchCh := make(chan []*store.KVPair)
go func() {
defer close(watchCh)
// Get child values
current, err := s.List(directory)
list, err := s.List(directory)
if err != nil {
return
}
// Push the current value through the channel.
watchCh <- current
watchCh <- list
for {
// Check if the watch was stopped by the caller
select {
case event := <-etcdWatchCh:
if event == nil {
// Something went wrong, exit
// No need to stop the chan as the watch already ended
return
}
// FIXME: We should probably use the value pushed by the channel.
// However, Node.Nodes seems to be empty.
if list, err := s.List(directory); err == nil {
watchCh <- list
}
case <-stopCh:
etcdStopCh <- true
return
default:
}
_, err := watcher.Next(context.Background())
if err != nil {
return
}
list, err = s.List(directory)
if err != nil {
return
}
watchCh <- list
}
}()
return watchCh, nil
}
// AtomicPut put a value at "key" if the key has not been
// AtomicPut puts a value at "key" if the key has not been
// modified in the meantime, throws an error if this is the case
func (s *Etcd) AtomicPut(key string, value []byte, previous *store.KVPair, options *store.WriteOptions) (bool, *store.KVPair, error) {
func (s *Etcd) AtomicPut(key string, value []byte, previous *store.KVPair, opts *store.WriteOptions) (bool, *store.KVPair, error) {
var (
meta *etcd.Response
err error
)
setOpts := &etcd.SetOptions{}
var meta *etcd.Response
var err error
if previous != nil {
meta, err = s.client.CompareAndSwap(store.Normalize(key), string(value), 0, "", previous.LastIndex)
setOpts.PrevExist = etcd.PrevExist
setOpts.PrevIndex = previous.LastIndex
if previous.Value != nil {
setOpts.PrevValue = string(previous.Value)
}
} else {
// Interpret previous == nil as Atomic Create
meta, err = s.client.Create(store.Normalize(key), string(value), 0)
if etcdError, ok := err.(*etcd.EtcdError); ok {
setOpts.PrevExist = etcd.PrevNoExist
}
// Directory doesn't exist.
if etcdError.ErrorCode == 104 {
// Remove the last element (the actual key)
// and create the full directory path
err = s.createDirectory(store.GetDirectory(key))
if err != nil {
return false, nil, err
}
// Now that the directory is created, create the key
if _, err := s.client.Create(key, string(value), 0); err != nil {
return false, nil, err
}
}
if opts != nil {
if opts.TTL > 0 {
setOpts.TTL = opts.TTL
}
}
meta, err = s.client.Set(context.Background(), s.normalize(key), string(value), setOpts)
if err != nil {
if etcdError, ok := err.(*etcd.EtcdError); ok {
// Compare Failed
if etcdError.ErrorCode == 101 {
if etcdError, ok := err.(etcd.Error); ok {
// Compare failed
if etcdError.Code == etcd.ErrorCodeTestFailed {
return false, nil, store.ErrKeyModified
}
}
@ -355,11 +356,20 @@ func (s *Etcd) AtomicDelete(key string, previous *store.KVPair) (bool, error) {
return false, store.ErrPreviousNotSpecified
}
_, err := s.client.CompareAndDelete(store.Normalize(key), "", previous.LastIndex)
delOpts := &etcd.DeleteOptions{}
if previous != nil {
delOpts.PrevIndex = previous.LastIndex
if previous.Value != nil {
delOpts.PrevValue = string(previous.Value)
}
}
_, err := s.client.Delete(context.Background(), s.normalize(key), delOpts)
if err != nil {
if etcdError, ok := err.(*etcd.EtcdError); ok {
if etcdError, ok := err.(etcd.Error); ok {
// Compare failed
if etcdError.ErrorCode == 101 {
if etcdError.Code == etcd.ErrorCodeTestFailed {
return false, store.ErrKeyModified
}
}
@ -371,18 +381,24 @@ func (s *Etcd) AtomicDelete(key string, previous *store.KVPair) (bool, error) {
// List child nodes of a given directory
func (s *Etcd) List(directory string) ([]*store.KVPair, error) {
resp, err := s.client.Get(store.Normalize(directory), true, true)
getOpts := &etcd.GetOptions{
Quorum: true,
Recursive: true,
Sort: true,
}
resp, err := s.client.Get(context.Background(), s.normalize(directory), getOpts)
if err != nil {
if isKeyNotFoundError(err) {
if keyNotFound(err) {
return nil, store.ErrKeyNotFound
}
return nil, err
}
kv := []*store.KVPair{}
for _, n := range resp.Node.Nodes {
key := strings.TrimLeft(n.Key, "/")
kv = append(kv, &store.KVPair{
Key: key,
Key: n.Key,
Value: []byte(n.Value),
LastIndex: n.ModifiedIndex,
})
@ -392,8 +408,12 @@ func (s *Etcd) List(directory string) ([]*store.KVPair, error) {
// DeleteTree deletes a range of keys under a given directory
func (s *Etcd) DeleteTree(directory string) error {
_, err := s.client.Delete(store.Normalize(directory), true)
if isKeyNotFoundError(err) {
delOpts := &etcd.DeleteOptions{
Recursive: true,
}
_, err := s.client.Delete(context.Background(), s.normalize(directory), delOpts)
if keyNotFound(err) {
return store.ErrKeyNotFound
}
return err
@ -403,7 +423,7 @@ func (s *Etcd) DeleteTree(directory string) error {
// be used to provide mutual exclusion on a key
func (s *Etcd) NewLock(key string, options *store.LockOptions) (lock store.Locker, err error) {
var value string
ttl := uint64(time.Duration(defaultLockTTL).Seconds())
ttl := defaultLockTTL
renewCh := make(chan struct{})
// Apply options on Lock
@ -412,7 +432,7 @@ func (s *Etcd) NewLock(key string, options *store.LockOptions) (lock store.Locke
value = string(options.Value)
}
if options.TTL != 0 {
ttl = uint64(options.TTL.Seconds())
ttl = options.TTL
}
if options.RenewLock != nil {
renewCh = options.RenewLock
@ -423,7 +443,7 @@ func (s *Etcd) NewLock(key string, options *store.LockOptions) (lock store.Locke
lock = &etcdLock{
client: s.client,
stopRenew: renewCh,
key: key,
key: s.normalize(key),
value: value,
ttl: ttl,
}
@ -436,47 +456,58 @@ func (s *Etcd) NewLock(key string, options *store.LockOptions) (lock store.Locke
// lock is lost or if an error occurs
func (l *etcdLock) Lock(stopChan chan struct{}) (<-chan struct{}, error) {
key := store.Normalize(l.key)
// Lock holder channel
lockHeld := make(chan struct{})
stopLocking := l.stopRenew
var lastIndex uint64
setOpts := &etcd.SetOptions{
TTL: l.ttl,
}
for {
resp, err := l.client.Create(key, l.value, l.ttl)
setOpts.PrevExist = etcd.PrevNoExist
resp, err := l.client.Set(context.Background(), l.key, l.value, setOpts)
if err != nil {
if etcdError, ok := err.(*etcd.EtcdError); ok {
// Key already exists
if etcdError.ErrorCode != 105 {
lastIndex = ^uint64(0)
if etcdError, ok := err.(etcd.Error); ok {
if etcdError.Code != etcd.ErrorCodeNodeExist {
return nil, err
}
setOpts.PrevIndex = ^uint64(0)
}
} else {
lastIndex = resp.Node.ModifiedIndex
setOpts.PrevIndex = resp.Node.ModifiedIndex
}
l.last, err = l.client.CompareAndSwap(key, l.value, l.ttl, "", lastIndex)
setOpts.PrevExist = etcd.PrevExist
l.last, err = l.client.Set(context.Background(), l.key, l.value, setOpts)
if err == nil {
// Leader section
l.stopLock = stopLocking
go l.holdLock(key, lockHeld, stopLocking)
go l.holdLock(l.key, lockHeld, stopLocking)
break
} else {
// If this is a legitimate error, return
if etcdError, ok := err.(etcd.Error); ok {
if etcdError.Code != etcd.ErrorCodeTestFailed {
return nil, err
}
}
// Seeker section
chW := make(chan *etcd.Response)
errorCh := make(chan error)
chWStop := make(chan bool)
free := make(chan bool)
go l.waitLock(key, chW, chWStop, free)
go l.waitLock(l.key, errorCh, chWStop, free)
// Wait for the key to be available or for
// a signal to stop trying to lock the key
select {
case _ = <-free:
break
case err := <-errorCh:
return nil, err
case _ = <-stopChan:
return nil, ErrAbortTryLock
}
@ -495,15 +526,17 @@ func (l *etcdLock) Lock(stopChan chan struct{}) (<-chan struct{}, error) {
func (l *etcdLock) holdLock(key string, lockHeld chan struct{}, stopLocking <-chan struct{}) {
defer close(lockHeld)
update := time.NewTicker(time.Duration(l.ttl) * time.Second / 3)
update := time.NewTicker(l.ttl / 3)
defer update.Stop()
var err error
setOpts := &etcd.SetOptions{TTL: l.ttl}
for {
select {
case <-update.C:
l.last, err = l.client.CompareAndSwap(key, l.value, l.ttl, "", l.last.Node.ModifiedIndex)
setOpts.PrevIndex = l.last.Node.ModifiedIndex
l.last, err = l.client.Set(context.Background(), key, l.value, setOpts)
if err != nil {
return
}
@ -515,12 +548,19 @@ func (l *etcdLock) holdLock(key string, lockHeld chan struct{}, stopLocking <-ch
}
// WaitLock simply waits for the key to be available for creation
func (l *etcdLock) waitLock(key string, eventCh chan *etcd.Response, stopWatchCh chan bool, free chan<- bool) {
go l.client.Watch(key, 0, false, eventCh, stopWatchCh)
func (l *etcdLock) waitLock(key string, errorCh chan error, stopWatchCh chan bool, free chan<- bool) {
opts := &etcd.WatcherOptions{Recursive: false}
watcher := l.client.Watcher(key, opts)
for event := range eventCh {
for {
event, err := watcher.Next(context.Background())
if err != nil {
errorCh <- err
return
}
if event.Action == "delete" || event.Action == "expire" {
free <- true
return
}
}
}
@ -532,7 +572,10 @@ func (l *etcdLock) Unlock() error {
l.stopLock <- struct{}{}
}
if l.last != nil {
_, err := l.client.CompareAndDelete(store.Normalize(l.key), l.value, l.last.Node.ModifiedIndex)
delOpts := &etcd.DeleteOptions{
PrevIndex: l.last.Node.ModifiedIndex,
}
_, err := l.client.Delete(context.Background(), l.key, delOpts)
if err != nil {
return err
}
@ -544,15 +587,3 @@ func (l *etcdLock) Unlock() error {
func (s *Etcd) Close() {
return
}
func isKeyNotFoundError(err error) bool {
if err != nil {
if etcdError, ok := err.(*etcd.EtcdError); ok {
// Not a Directory or Not a file
if etcdError.ErrorCode == 100 || etcdError.ErrorCode == 102 || etcdError.ErrorCode == 104 {
return true
}
}
}
return false
}

View file

@ -21,10 +21,10 @@ const (
)
var (
// ErrNotSupported is thrown when the backend k/v store is not supported by libkv
ErrNotSupported = errors.New("Backend storage not supported yet, please choose one of")
// ErrNotImplemented is thrown when a method is not implemented by the current backend
ErrNotImplemented = errors.New("Call not implemented in current backend")
// ErrBackendNotSupported is thrown when the backend k/v store is not supported by libkv
ErrBackendNotSupported = errors.New("Backend storage not supported yet, please choose one of")
// ErrCallNotSupported is thrown when a method is not implemented/supported by the current backend
ErrCallNotSupported = errors.New("The current call is not supported with this backend")
// ErrNotReachable is thrown when the API cannot be reached for issuing common store operations
ErrNotReachable = errors.New("Api not reachable")
// ErrCannotLock is thrown when there is an error acquiring a lock on a key
@ -47,7 +47,7 @@ type Config struct {
}
// ClientTLSConfig contains data for a Client TLS configuration in the form
// the etcd client wants it. Eventually we'll adapt it for ZK and Consul.
// the etcd client wants it. Eventually we'll adapt it for ZK and Consul.
type ClientTLSConfig struct {
CertFile string
KeyFile string
@ -109,7 +109,8 @@ type KVPair struct {
// WriteOptions contains optional request parameters
type WriteOptions struct {
TTL time.Duration
IsDir bool
TTL time.Duration
}
// LockOptions contains optional request parameters

View file

@ -385,15 +385,15 @@ func (ds *datastore) GetObject(key string, o KVObject) error {
return nil
}
func (ds *datastore) ensureKey(key string) error {
exists, err := ds.store.Exists(key)
func (ds *datastore) ensureParent(parent string) error {
exists, err := ds.store.Exists(parent)
if err != nil {
return err
}
if exists {
return nil
}
return ds.store.Put(key, []byte{}, nil)
return ds.store.Put(parent, []byte{}, &store.WriteOptions{IsDir: true})
}
func (ds *datastore) List(key string, kvObject KVObject) ([]KVObject, error) {
@ -411,7 +411,7 @@ func (ds *datastore) List(key string, kvObject KVObject) ([]KVObject, error) {
}
// Make sure the parent key exists
if err := ds.ensureKey(key); err != nil {
if err := ds.ensureParent(key); err != nil {
return nil, err
}

22
vendor/src/github.com/ugorji/go/LICENSE vendored Normal file
View file

@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2012-2015 Ugorji Nwoke.
All rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View file

@ -0,0 +1,150 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
/*
High Performance, Feature-Rich Idiomatic Go codec/encoding library for
binc, msgpack, cbor, json.
Supported Serialization formats are:
- msgpack: https://github.com/msgpack/msgpack
- binc: http://github.com/ugorji/binc
- cbor: http://cbor.io http://tools.ietf.org/html/rfc7049
- json: http://json.org http://tools.ietf.org/html/rfc7159
- simple:
To install:
go get github.com/ugorji/go/codec
This package understands the 'unsafe' tag, to allow using unsafe semantics:
- When decoding into a struct, you need to read the field name as a string
so you can find the struct field it is mapped to.
Using `unsafe` will bypass the allocation and copying overhead of []byte->string conversion.
To install using unsafe, pass the 'unsafe' tag:
go get -tags=unsafe github.com/ugorji/go/codec
For detailed usage information, read the primer at http://ugorji.net/blog/go-codec-primer .
The idiomatic Go support is as seen in other encoding packages in
the standard library (ie json, xml, gob, etc).
Rich Feature Set includes:
- Simple but extremely powerful and feature-rich API
- Very High Performance.
Our extensive benchmarks show us outperforming Gob, Json, Bson, etc by 2-4X.
- Multiple conversions:
Package coerces types where appropriate
e.g. decode an int in the stream into a float, etc.
- Corner Cases:
Overflows, nil maps/slices, nil values in streams are handled correctly
- Standard field renaming via tags
- Support for omitting empty fields during an encoding
- Encoding from any value and decoding into pointer to any value
(struct, slice, map, primitives, pointers, interface{}, etc)
- Extensions to support efficient encoding/decoding of any named types
- Support encoding.(Binary|Text)(M|Unm)arshaler interfaces
- Decoding without a schema (into an interface{}).
Includes Options to configure what specific map or slice type to use
when decoding an encoded list or map into a nil interface{}
- Encode a struct as an array, and decode struct from an array in the data stream
- Comprehensive support for anonymous fields
- Fast (no-reflection) encoding/decoding of common maps and slices
- Code-generation for faster performance.
- Support binary (e.g. messagepack, cbor) and text (e.g. json) formats
- Support indefinite-length formats to enable true streaming
(for formats which support it e.g. json, cbor)
- Support canonical encoding, where a value is ALWAYS encoded as same sequence of bytes.
This mostly applies to maps, where iteration order is non-deterministic.
- NIL in data stream decoded as zero value
- Never silently skip data when decoding.
User decides whether to return an error or silently skip data when keys or indexes
in the data stream do not map to fields in the struct.
- Encode/Decode from/to chan types (for iterative streaming support)
- Drop-in replacement for encoding/json. `json:` key in struct tag supported.
- Provides a RPC Server and Client Codec for net/rpc communication protocol.
- Handle unique idiosyncrasies of codecs e.g.
- For messagepack, configure how ambiguities in handling raw bytes are resolved
- For messagepack, provide rpc server/client codec to support
msgpack-rpc protocol defined at:
https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
Extension Support
Users can register a function to handle the encoding or decoding of
their custom types.
There are no restrictions on what the custom type can be. Some examples:
type BisSet []int
type BitSet64 uint64
type UUID string
type MyStructWithUnexportedFields struct { a int; b bool; c []int; }
type GifImage struct { ... }
As an illustration, MyStructWithUnexportedFields would normally be
encoded as an empty map because it has no exported fields, while UUID
would be encoded as a string. However, with extension support, you can
encode any of these however you like.
RPC
RPC Client and Server Codecs are implemented, so the codecs can be used
with the standard net/rpc package.
Usage
Typical usage model:
// create and configure Handle
var (
bh codec.BincHandle
mh codec.MsgpackHandle
ch codec.CborHandle
)
mh.MapType = reflect.TypeOf(map[string]interface{}(nil))
// configure extensions
// e.g. for msgpack, define functions and enable Time support for tag 1
// mh.SetExt(reflect.TypeOf(time.Time{}), 1, myExt)
// create and use decoder/encoder
var (
r io.Reader
w io.Writer
b []byte
h = &bh // or mh to use msgpack
)
dec = codec.NewDecoder(r, h)
dec = codec.NewDecoderBytes(b, h)
err = dec.Decode(&v)
enc = codec.NewEncoder(w, h)
enc = codec.NewEncoderBytes(&b, h)
err = enc.Encode(v)
//RPC Server
go func() {
for {
conn, err := listener.Accept()
rpcCodec := codec.GoRpc.ServerCodec(conn, h)
//OR rpcCodec := codec.MsgpackSpecRpc.ServerCodec(conn, h)
rpc.ServeCodec(rpcCodec)
}
}()
//RPC Communication (client side)
conn, err = net.Dial("tcp", "localhost:5555")
rpcCodec := codec.GoRpc.ClientCodec(conn, h)
//OR rpcCodec := codec.MsgpackSpecRpc.ClientCodec(conn, h)
client := rpc.NewClientWithCodec(rpcCodec)
*/
package codec

View file

@ -0,0 +1,148 @@
# Codec
High Performance, Feature-Rich Idiomatic Go codec/encoding library for
binc, msgpack, cbor, json.
Supported Serialization formats are:
- msgpack: https://github.com/msgpack/msgpack
- binc: http://github.com/ugorji/binc
- cbor: http://cbor.io http://tools.ietf.org/html/rfc7049
- json: http://json.org http://tools.ietf.org/html/rfc7159
- simple:
To install:
go get github.com/ugorji/go/codec
This package understands the `unsafe` tag, to allow using unsafe semantics:
- When decoding into a struct, you need to read the field name as a string
so you can find the struct field it is mapped to.
Using `unsafe` will bypass the allocation and copying overhead of `[]byte->string` conversion.
To use it, you must pass the `unsafe` tag during install:
```
go install -tags=unsafe github.com/ugorji/go/codec
```
Online documentation: http://godoc.org/github.com/ugorji/go/codec
Detailed Usage/How-to Primer: http://ugorji.net/blog/go-codec-primer
The idiomatic Go support is as seen in other encoding packages in
the standard library (ie json, xml, gob, etc).
Rich Feature Set includes:
- Simple but extremely powerful and feature-rich API
- Very High Performance.
Our extensive benchmarks show us outperforming Gob, Json, Bson, etc by 2-4X.
- Multiple conversions:
Package coerces types where appropriate
e.g. decode an int in the stream into a float, etc.
- Corner Cases:
Overflows, nil maps/slices, nil values in streams are handled correctly
- Standard field renaming via tags
- Support for omitting empty fields during an encoding
- Encoding from any value and decoding into pointer to any value
(struct, slice, map, primitives, pointers, interface{}, etc)
- Extensions to support efficient encoding/decoding of any named types
- Support encoding.(Binary|Text)(M|Unm)arshaler interfaces
- Decoding without a schema (into an interface{}).
Includes Options to configure what specific map or slice type to use
when decoding an encoded list or map into a nil interface{}
- Encode a struct as an array, and decode struct from an array in the data stream
- Comprehensive support for anonymous fields
- Fast (no-reflection) encoding/decoding of common maps and slices
- Code-generation for faster performance.
- Support binary (e.g. messagepack, cbor) and text (e.g. json) formats
- Support indefinite-length formats to enable true streaming
(for formats which support it e.g. json, cbor)
- Support canonical encoding, where a value is ALWAYS encoded as same sequence of bytes.
This mostly applies to maps, where iteration order is non-deterministic.
- NIL in data stream decoded as zero value
- Never silently skip data when decoding.
User decides whether to return an error or silently skip data when keys or indexes
in the data stream do not map to fields in the struct.
- Encode/Decode from/to chan types (for iterative streaming support)
- Drop-in replacement for encoding/json. `json:` key in struct tag supported.
- Provides a RPC Server and Client Codec for net/rpc communication protocol.
- Handle unique idiosyncrasies of codecs e.g.
- For messagepack, configure how ambiguities in handling raw bytes are resolved
- For messagepack, provide rpc server/client codec to support
msgpack-rpc protocol defined at:
https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
## Extension Support
Users can register a function to handle the encoding or decoding of
their custom types.
There are no restrictions on what the custom type can be. Some examples:
type BisSet []int
type BitSet64 uint64
type UUID string
type MyStructWithUnexportedFields struct { a int; b bool; c []int; }
type GifImage struct { ... }
As an illustration, MyStructWithUnexportedFields would normally be
encoded as an empty map because it has no exported fields, while UUID
would be encoded as a string. However, with extension support, you can
encode any of these however you like.
## RPC
RPC Client and Server Codecs are implemented, so the codecs can be used
with the standard net/rpc package.
## Usage
Typical usage model:
// create and configure Handle
var (
bh codec.BincHandle
mh codec.MsgpackHandle
ch codec.CborHandle
)
mh.MapType = reflect.TypeOf(map[string]interface{}(nil))
// configure extensions
// e.g. for msgpack, define functions and enable Time support for tag 1
// mh.SetExt(reflect.TypeOf(time.Time{}), 1, myExt)
// create and use decoder/encoder
var (
r io.Reader
w io.Writer
b []byte
h = &bh // or mh to use msgpack
)
dec = codec.NewDecoder(r, h)
dec = codec.NewDecoderBytes(b, h)
err = dec.Decode(&v)
enc = codec.NewEncoder(w, h)
enc = codec.NewEncoderBytes(&b, h)
err = enc.Encode(v)
//RPC Server
go func() {
for {
conn, err := listener.Accept()
rpcCodec := codec.GoRpc.ServerCodec(conn, h)
//OR rpcCodec := codec.MsgpackSpecRpc.ServerCodec(conn, h)
rpc.ServeCodec(rpcCodec)
}
}()
//RPC Communication (client side)
conn, err = net.Dial("tcp", "localhost:5555")
rpcCodec := codec.GoRpc.ClientCodec(conn, h)
//OR rpcCodec := codec.MsgpackSpecRpc.ClientCodec(conn, h)
client := rpc.NewClientWithCodec(rpcCodec)
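As a concrete, runnable variant of the usage model above, a minimal msgpack round trip could look like the following sketch (the `Item` payload type is purely illustrative):

```go
package main

import (
	"fmt"

	"github.com/ugorji/go/codec"
)

// Item is an illustrative payload type.
type Item struct {
	Name string
	Port int
}

func main() {
	var (
		mh codec.MsgpackHandle // any of the Handles listed above works the same way
		b  []byte
	)

	// Encode into the byte slice...
	if err := codec.NewEncoderBytes(&b, &mh).Encode(Item{Name: "libkv", Port: 2379}); err != nil {
		panic(err)
	}

	// ...and decode back out.
	var out Item
	if err := codec.NewDecoderBytes(b, &mh).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", out)
}
```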

View file

@ -0,0 +1,901 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
import (
"math"
"time"
)
const bincDoPrune = true // No longer needed. Needed before as C lib did not support pruning.
// vd as low 4 bits (there are 16 slots)
const (
bincVdSpecial byte = iota
bincVdPosInt
bincVdNegInt
bincVdFloat
bincVdString
bincVdByteArray
bincVdArray
bincVdMap
bincVdTimestamp
bincVdSmallInt
bincVdUnicodeOther
bincVdSymbol
bincVdDecimal
_ // open slot
_ // open slot
bincVdCustomExt = 0x0f
)
const (
bincSpNil byte = iota
bincSpFalse
bincSpTrue
bincSpNan
bincSpPosInf
bincSpNegInf
bincSpZeroFloat
bincSpZero
bincSpNegOne
)
const (
bincFlBin16 byte = iota
bincFlBin32
_ // bincFlBin32e
bincFlBin64
_ // bincFlBin64e
// others not currently supported
)
type bincEncDriver struct {
e *Encoder
w encWriter
m map[string]uint16 // symbols
s uint16 // symbols sequencer
b [scratchByteArrayLen]byte
encNoSeparator
}
func (e *bincEncDriver) IsBuiltinType(rt uintptr) bool {
return rt == timeTypId
}
func (e *bincEncDriver) EncodeBuiltin(rt uintptr, v interface{}) {
if rt == timeTypId {
bs := encodeTime(v.(time.Time))
e.w.writen1(bincVdTimestamp<<4 | uint8(len(bs)))
e.w.writeb(bs)
}
}
func (e *bincEncDriver) EncodeNil() {
e.w.writen1(bincVdSpecial<<4 | bincSpNil)
}
func (e *bincEncDriver) EncodeBool(b bool) {
if b {
e.w.writen1(bincVdSpecial<<4 | bincSpTrue)
} else {
e.w.writen1(bincVdSpecial<<4 | bincSpFalse)
}
}
func (e *bincEncDriver) EncodeFloat32(f float32) {
if f == 0 {
e.w.writen1(bincVdSpecial<<4 | bincSpZeroFloat)
return
}
e.w.writen1(bincVdFloat<<4 | bincFlBin32)
bigenHelper{e.b[:4], e.w}.writeUint32(math.Float32bits(f))
}
func (e *bincEncDriver) EncodeFloat64(f float64) {
if f == 0 {
e.w.writen1(bincVdSpecial<<4 | bincSpZeroFloat)
return
}
bigen.PutUint64(e.b[:8], math.Float64bits(f))
if bincDoPrune {
i := 7
for ; i >= 0 && (e.b[i] == 0); i-- {
}
i++
if i <= 6 {
e.w.writen1(bincVdFloat<<4 | 0x8 | bincFlBin64)
e.w.writen1(byte(i))
e.w.writeb(e.b[:i])
return
}
}
e.w.writen1(bincVdFloat<<4 | bincFlBin64)
e.w.writeb(e.b[:8])
}
func (e *bincEncDriver) encIntegerPrune(bd byte, pos bool, v uint64, lim uint8) {
if lim == 4 {
bigen.PutUint32(e.b[:lim], uint32(v))
} else {
bigen.PutUint64(e.b[:lim], v)
}
if bincDoPrune {
i := pruneSignExt(e.b[:lim], pos)
e.w.writen1(bd | lim - 1 - byte(i))
e.w.writeb(e.b[i:lim])
} else {
e.w.writen1(bd | lim - 1)
e.w.writeb(e.b[:lim])
}
}
func (e *bincEncDriver) EncodeInt(v int64) {
const nbd byte = bincVdNegInt << 4
if v >= 0 {
e.encUint(bincVdPosInt<<4, true, uint64(v))
} else if v == -1 {
e.w.writen1(bincVdSpecial<<4 | bincSpNegOne)
} else {
e.encUint(bincVdNegInt<<4, false, uint64(-v))
}
}
func (e *bincEncDriver) EncodeUint(v uint64) {
e.encUint(bincVdPosInt<<4, true, v)
}
func (e *bincEncDriver) encUint(bd byte, pos bool, v uint64) {
if v == 0 {
e.w.writen1(bincVdSpecial<<4 | bincSpZero)
} else if pos && v >= 1 && v <= 16 {
e.w.writen1(bincVdSmallInt<<4 | byte(v-1))
} else if v <= math.MaxUint8 {
e.w.writen2(bd|0x0, byte(v))
} else if v <= math.MaxUint16 {
e.w.writen1(bd | 0x01)
bigenHelper{e.b[:2], e.w}.writeUint16(uint16(v))
} else if v <= math.MaxUint32 {
e.encIntegerPrune(bd, pos, v, 4)
} else {
e.encIntegerPrune(bd, pos, v, 8)
}
}
func (e *bincEncDriver) EncodeExt(rv interface{}, xtag uint64, ext Ext, _ *Encoder) {
bs := ext.WriteExt(rv)
if bs == nil {
e.EncodeNil()
return
}
e.encodeExtPreamble(uint8(xtag), len(bs))
e.w.writeb(bs)
}
func (e *bincEncDriver) EncodeRawExt(re *RawExt, _ *Encoder) {
e.encodeExtPreamble(uint8(re.Tag), len(re.Data))
e.w.writeb(re.Data)
}
func (e *bincEncDriver) encodeExtPreamble(xtag byte, length int) {
e.encLen(bincVdCustomExt<<4, uint64(length))
e.w.writen1(xtag)
}
func (e *bincEncDriver) EncodeArrayStart(length int) {
e.encLen(bincVdArray<<4, uint64(length))
}
func (e *bincEncDriver) EncodeMapStart(length int) {
e.encLen(bincVdMap<<4, uint64(length))
}
func (e *bincEncDriver) EncodeString(c charEncoding, v string) {
l := uint64(len(v))
e.encBytesLen(c, l)
if l > 0 {
e.w.writestr(v)
}
}
func (e *bincEncDriver) EncodeSymbol(v string) {
// if WriteSymbolsNoRefs {
// e.encodeString(c_UTF8, v)
// return
// }
//symbols only offer benefit when string length > 1.
//This is because strings with length 1 take only 2 bytes to store
//(bd with embedded length, and single byte for string val).
l := len(v)
if l == 0 {
e.encBytesLen(c_UTF8, 0)
return
} else if l == 1 {
e.encBytesLen(c_UTF8, 1)
e.w.writen1(v[0])
return
}
if e.m == nil {
e.m = make(map[string]uint16, 16)
}
ui, ok := e.m[v]
if ok {
if ui <= math.MaxUint8 {
e.w.writen2(bincVdSymbol<<4, byte(ui))
} else {
e.w.writen1(bincVdSymbol<<4 | 0x8)
bigenHelper{e.b[:2], e.w}.writeUint16(ui)
}
} else {
e.s++
ui = e.s
//ui = uint16(atomic.AddUint32(&e.s, 1))
e.m[v] = ui
var lenprec uint8
if l <= math.MaxUint8 {
// lenprec = 0
} else if l <= math.MaxUint16 {
lenprec = 1
} else if int64(l) <= math.MaxUint32 {
lenprec = 2
} else {
lenprec = 3
}
if ui <= math.MaxUint8 {
e.w.writen2(bincVdSymbol<<4|0x0|0x4|lenprec, byte(ui))
} else {
e.w.writen1(bincVdSymbol<<4 | 0x8 | 0x4 | lenprec)
bigenHelper{e.b[:2], e.w}.writeUint16(ui)
}
if lenprec == 0 {
e.w.writen1(byte(l))
} else if lenprec == 1 {
bigenHelper{e.b[:2], e.w}.writeUint16(uint16(l))
} else if lenprec == 2 {
bigenHelper{e.b[:4], e.w}.writeUint32(uint32(l))
} else {
bigenHelper{e.b[:8], e.w}.writeUint64(uint64(l))
}
e.w.writestr(v)
}
}
func (e *bincEncDriver) EncodeStringBytes(c charEncoding, v []byte) {
l := uint64(len(v))
e.encBytesLen(c, l)
if l > 0 {
e.w.writeb(v)
}
}
func (e *bincEncDriver) encBytesLen(c charEncoding, length uint64) {
//TODO: support bincUnicodeOther (for now, just use string or bytearray)
if c == c_RAW {
e.encLen(bincVdByteArray<<4, length)
} else {
e.encLen(bincVdString<<4, length)
}
}
func (e *bincEncDriver) encLen(bd byte, l uint64) {
if l < 12 {
e.w.writen1(bd | uint8(l+4))
} else {
e.encLenNumber(bd, l)
}
}
func (e *bincEncDriver) encLenNumber(bd byte, v uint64) {
if v <= math.MaxUint8 {
e.w.writen2(bd, byte(v))
} else if v <= math.MaxUint16 {
e.w.writen1(bd | 0x01)
bigenHelper{e.b[:2], e.w}.writeUint16(uint16(v))
} else if v <= math.MaxUint32 {
e.w.writen1(bd | 0x02)
bigenHelper{e.b[:4], e.w}.writeUint32(uint32(v))
} else {
e.w.writen1(bd | 0x03)
bigenHelper{e.b[:8], e.w}.writeUint64(uint64(v))
}
}
//------------------------------------
type bincDecSymbol struct {
i uint16
s string
b []byte
}
type bincDecDriver struct {
d *Decoder
h *BincHandle
r decReader
br bool // bytes reader
bdRead bool
bdType valueType
bd byte
vd byte
vs byte
noStreamingCodec
decNoSeparator
b [scratchByteArrayLen]byte
// linear searching on this slice is ok,
// because we typically expect < 32 symbols in each stream.
s []bincDecSymbol
}
func (d *bincDecDriver) readNextBd() {
d.bd = d.r.readn1()
d.vd = d.bd >> 4
d.vs = d.bd & 0x0f
d.bdRead = true
d.bdType = valueTypeUnset
}
func (d *bincDecDriver) IsContainerType(vt valueType) (b bool) {
switch vt {
case valueTypeNil:
return d.vd == bincVdSpecial && d.vs == bincSpNil
case valueTypeBytes:
return d.vd == bincVdByteArray
case valueTypeString:
return d.vd == bincVdString
case valueTypeArray:
return d.vd == bincVdArray
case valueTypeMap:
return d.vd == bincVdMap
}
d.d.errorf("isContainerType: unsupported parameter: %v", vt)
return // "unreachable"
}
func (d *bincDecDriver) TryDecodeAsNil() bool {
if !d.bdRead {
d.readNextBd()
}
if d.bd == bincVdSpecial<<4|bincSpNil {
d.bdRead = false
return true
}
return false
}
func (d *bincDecDriver) IsBuiltinType(rt uintptr) bool {
return rt == timeTypId
}
func (d *bincDecDriver) DecodeBuiltin(rt uintptr, v interface{}) {
if !d.bdRead {
d.readNextBd()
}
if rt == timeTypId {
if d.vd != bincVdTimestamp {
d.d.errorf("Invalid d.vd. Expecting 0x%x. Received: 0x%x", bincVdTimestamp, d.vd)
return
}
tt, err := decodeTime(d.r.readx(int(d.vs)))
if err != nil {
panic(err)
}
var vt *time.Time = v.(*time.Time)
*vt = tt
d.bdRead = false
}
}
func (d *bincDecDriver) decFloatPre(vs, defaultLen byte) {
if vs&0x8 == 0 {
d.r.readb(d.b[0:defaultLen])
} else {
l := d.r.readn1()
if l > 8 {
d.d.errorf("At most 8 bytes used to represent float. Received: %v bytes", l)
return
}
for i := l; i < 8; i++ {
d.b[i] = 0
}
d.r.readb(d.b[0:l])
}
}
func (d *bincDecDriver) decFloat() (f float64) {
//if true { f = math.Float64frombits(bigen.Uint64(d.r.readx(8))); break; }
if x := d.vs & 0x7; x == bincFlBin32 {
d.decFloatPre(d.vs, 4)
f = float64(math.Float32frombits(bigen.Uint32(d.b[0:4])))
} else if x == bincFlBin64 {
d.decFloatPre(d.vs, 8)
f = math.Float64frombits(bigen.Uint64(d.b[0:8]))
} else {
d.d.errorf("only float32 and float64 are supported. d.vd: 0x%x, d.vs: 0x%x", d.vd, d.vs)
return
}
return
}
func (d *bincDecDriver) decUint() (v uint64) {
// need to inline the code (interface conversion and type assertion expensive)
switch d.vs {
case 0:
v = uint64(d.r.readn1())
case 1:
d.r.readb(d.b[6:8])
v = uint64(bigen.Uint16(d.b[6:8]))
case 2:
d.b[4] = 0
d.r.readb(d.b[5:8])
v = uint64(bigen.Uint32(d.b[4:8]))
case 3:
d.r.readb(d.b[4:8])
v = uint64(bigen.Uint32(d.b[4:8]))
case 4, 5, 6:
lim := int(7 - d.vs)
d.r.readb(d.b[lim:8])
for i := 0; i < lim; i++ {
d.b[i] = 0
}
v = uint64(bigen.Uint64(d.b[:8]))
case 7:
d.r.readb(d.b[:8])
v = uint64(bigen.Uint64(d.b[:8]))
default:
d.d.errorf("unsigned integers with greater than 64 bits of precision not supported")
return
}
return
}
func (d *bincDecDriver) decCheckInteger() (ui uint64, neg bool) {
if !d.bdRead {
d.readNextBd()
}
vd, vs := d.vd, d.vs
if vd == bincVdPosInt {
ui = d.decUint()
} else if vd == bincVdNegInt {
ui = d.decUint()
neg = true
} else if vd == bincVdSmallInt {
ui = uint64(d.vs) + 1
} else if vd == bincVdSpecial {
if vs == bincSpZero {
//i = 0
} else if vs == bincSpNegOne {
neg = true
ui = 1
} else {
d.d.errorf("numeric decode fails for special value: d.vs: 0x%x", d.vs)
return
}
} else {
d.d.errorf("number can only be decoded from uint or int values. d.bd: 0x%x, d.vd: 0x%x", d.bd, d.vd)
return
}
return
}
func (d *bincDecDriver) DecodeInt(bitsize uint8) (i int64) {
ui, neg := d.decCheckInteger()
i, overflow := chkOvf.SignedInt(ui)
if overflow {
d.d.errorf("simple: overflow converting %v to signed integer", ui)
return
}
if neg {
i = -i
}
if chkOvf.Int(i, bitsize) {
d.d.errorf("binc: overflow integer: %v", i)
return
}
d.bdRead = false
return
}
func (d *bincDecDriver) DecodeUint(bitsize uint8) (ui uint64) {
ui, neg := d.decCheckInteger()
if neg {
d.d.errorf("Assigning negative signed value to unsigned type")
return
}
if chkOvf.Uint(ui, bitsize) {
d.d.errorf("binc: overflow integer: %v", ui)
return
}
d.bdRead = false
return
}
func (d *bincDecDriver) DecodeFloat(chkOverflow32 bool) (f float64) {
if !d.bdRead {
d.readNextBd()
}
vd, vs := d.vd, d.vs
if vd == bincVdSpecial {
d.bdRead = false
if vs == bincSpNan {
return math.NaN()
} else if vs == bincSpPosInf {
return math.Inf(1)
} else if vs == bincSpZeroFloat || vs == bincSpZero {
return
} else if vs == bincSpNegInf {
return math.Inf(-1)
} else {
d.d.errorf("Invalid d.vs decoding float where d.vd=bincVdSpecial: %v", d.vs)
return
}
} else if vd == bincVdFloat {
f = d.decFloat()
} else {
f = float64(d.DecodeInt(64))
}
if chkOverflow32 && chkOvf.Float32(f) {
d.d.errorf("binc: float32 overflow: %v", f)
return
}
d.bdRead = false
return
}
// bool can be decoded from bool only (single byte).
func (d *bincDecDriver) DecodeBool() (b bool) {
if !d.bdRead {
d.readNextBd()
}
if bd := d.bd; bd == (bincVdSpecial | bincSpFalse) {
// b = false
} else if bd == (bincVdSpecial | bincSpTrue) {
b = true
} else {
d.d.errorf("Invalid single-byte value for bool: %s: %x", msgBadDesc, d.bd)
return
}
d.bdRead = false
return
}
func (d *bincDecDriver) ReadMapStart() (length int) {
if d.vd != bincVdMap {
d.d.errorf("Invalid d.vd for map. Expecting 0x%x. Got: 0x%x", bincVdMap, d.vd)
return
}
length = d.decLen()
d.bdRead = false
return
}
func (d *bincDecDriver) ReadArrayStart() (length int) {
if d.vd != bincVdArray {
d.d.errorf("Invalid d.vd for array. Expecting 0x%x. Got: 0x%x", bincVdArray, d.vd)
return
}
length = d.decLen()
d.bdRead = false
return
}
func (d *bincDecDriver) decLen() int {
if d.vs > 3 {
return int(d.vs - 4)
}
return int(d.decLenNumber())
}
func (d *bincDecDriver) decLenNumber() (v uint64) {
if x := d.vs; x == 0 {
v = uint64(d.r.readn1())
} else if x == 1 {
d.r.readb(d.b[6:8])
v = uint64(bigen.Uint16(d.b[6:8]))
} else if x == 2 {
d.r.readb(d.b[4:8])
v = uint64(bigen.Uint32(d.b[4:8]))
} else {
d.r.readb(d.b[:8])
v = bigen.Uint64(d.b[:8])
}
return
}
func (d *bincDecDriver) decStringAndBytes(bs []byte, withString, zerocopy bool) (bs2 []byte, s string) {
if !d.bdRead {
d.readNextBd()
}
if d.bd == bincVdSpecial<<4|bincSpNil {
d.bdRead = false
return
}
var slen int = -1
// var ok bool
switch d.vd {
case bincVdString, bincVdByteArray:
slen = d.decLen()
if zerocopy {
if d.br {
bs2 = d.r.readx(slen)
} else if len(bs) == 0 {
bs2 = decByteSlice(d.r, slen, d.b[:])
} else {
bs2 = decByteSlice(d.r, slen, bs)
}
} else {
bs2 = decByteSlice(d.r, slen, bs)
}
if withString {
s = string(bs2)
}
case bincVdSymbol:
// zerocopy doesn't apply for symbols,
// as the values must be stored in a table for later use.
//
//from vs: extract numSymbolBytes, containsStringVal, strLenPrecision,
//extract symbol
//if containsStringVal, read it and put in map
//else look in map for string value
var symbol uint16
vs := d.vs
if vs&0x8 == 0 {
symbol = uint16(d.r.readn1())
} else {
symbol = uint16(bigen.Uint16(d.r.readx(2)))
}
if d.s == nil {
d.s = make([]bincDecSymbol, 0, 16)
}
if vs&0x4 == 0 {
for i := range d.s {
j := &d.s[i]
if j.i == symbol {
bs2 = j.b
if withString {
if j.s == "" && bs2 != nil {
j.s = string(bs2)
}
s = j.s
}
break
}
}
} else {
switch vs & 0x3 {
case 0:
slen = int(d.r.readn1())
case 1:
slen = int(bigen.Uint16(d.r.readx(2)))
case 2:
slen = int(bigen.Uint32(d.r.readx(4)))
case 3:
slen = int(bigen.Uint64(d.r.readx(8)))
}
// since using symbols, do not store any part of
// the parameter bs in the map, as it might be a shared buffer.
// bs2 = decByteSlice(d.r, slen, bs)
bs2 = decByteSlice(d.r, slen, nil)
if withString {
s = string(bs2)
}
d.s = append(d.s, bincDecSymbol{symbol, s, bs2})
}
default:
d.d.errorf("Invalid d.vd. Expecting string:0x%x, bytearray:0x%x or symbol: 0x%x. Got: 0x%x",
bincVdString, bincVdByteArray, bincVdSymbol, d.vd)
return
}
d.bdRead = false
return
}
func (d *bincDecDriver) DecodeString() (s string) {
	// DecodeBytes does not accommodate symbols, whose impl stores the string version in a map.
// Use decStringAndBytes directly.
// return string(d.DecodeBytes(d.b[:], true, true))
_, s = d.decStringAndBytes(d.b[:], true, true)
return
}
func (d *bincDecDriver) DecodeBytes(bs []byte, isstring, zerocopy bool) (bsOut []byte) {
if isstring {
bsOut, _ = d.decStringAndBytes(bs, false, zerocopy)
return
}
if !d.bdRead {
d.readNextBd()
}
if d.bd == bincVdSpecial<<4|bincSpNil {
d.bdRead = false
return nil
}
var clen int
if d.vd == bincVdString || d.vd == bincVdByteArray {
clen = d.decLen()
} else {
d.d.errorf("Invalid d.vd for bytes. Expecting string:0x%x or bytearray:0x%x. Got: 0x%x",
bincVdString, bincVdByteArray, d.vd)
return
}
d.bdRead = false
if zerocopy {
if d.br {
return d.r.readx(clen)
} else if len(bs) == 0 {
bs = d.b[:]
}
}
return decByteSlice(d.r, clen, bs)
}
func (d *bincDecDriver) DecodeExt(rv interface{}, xtag uint64, ext Ext) (realxtag uint64) {
if xtag > 0xff {
d.d.errorf("decodeExt: tag must be <= 0xff; got: %v", xtag)
return
}
realxtag1, xbs := d.decodeExtV(ext != nil, uint8(xtag))
realxtag = uint64(realxtag1)
if ext == nil {
re := rv.(*RawExt)
re.Tag = realxtag
re.Data = detachZeroCopyBytes(d.br, re.Data, xbs)
} else {
ext.ReadExt(rv, xbs)
}
return
}
func (d *bincDecDriver) decodeExtV(verifyTag bool, tag byte) (xtag byte, xbs []byte) {
if !d.bdRead {
d.readNextBd()
}
if d.vd == bincVdCustomExt {
l := d.decLen()
xtag = d.r.readn1()
if verifyTag && xtag != tag {
d.d.errorf("Wrong extension tag. Got %b. Expecting: %v", xtag, tag)
return
}
xbs = d.r.readx(l)
} else if d.vd == bincVdByteArray {
xbs = d.DecodeBytes(nil, false, true)
} else {
d.d.errorf("Invalid d.vd for extensions (Expecting extensions or byte array). Got: 0x%x", d.vd)
return
}
d.bdRead = false
return
}
func (d *bincDecDriver) DecodeNaked() (v interface{}, vt valueType, decodeFurther bool) {
if !d.bdRead {
d.readNextBd()
}
switch d.vd {
case bincVdSpecial:
switch d.vs {
case bincSpNil:
vt = valueTypeNil
case bincSpFalse:
vt = valueTypeBool
v = false
case bincSpTrue:
vt = valueTypeBool
v = true
case bincSpNan:
vt = valueTypeFloat
v = math.NaN()
case bincSpPosInf:
vt = valueTypeFloat
v = math.Inf(1)
case bincSpNegInf:
vt = valueTypeFloat
v = math.Inf(-1)
case bincSpZeroFloat:
vt = valueTypeFloat
v = float64(0)
case bincSpZero:
vt = valueTypeUint
v = uint64(0) // int8(0)
case bincSpNegOne:
vt = valueTypeInt
v = int64(-1) // int8(-1)
default:
d.d.errorf("decodeNaked: Unrecognized special value 0x%x", d.vs)
return
}
case bincVdSmallInt:
vt = valueTypeUint
v = uint64(int8(d.vs)) + 1 // int8(d.vs) + 1
case bincVdPosInt:
vt = valueTypeUint
v = d.decUint()
case bincVdNegInt:
vt = valueTypeInt
v = -(int64(d.decUint()))
case bincVdFloat:
vt = valueTypeFloat
v = d.decFloat()
case bincVdSymbol:
vt = valueTypeSymbol
v = d.DecodeString()
case bincVdString:
vt = valueTypeString
v = d.DecodeString()
case bincVdByteArray:
vt = valueTypeBytes
v = d.DecodeBytes(nil, false, false)
case bincVdTimestamp:
vt = valueTypeTimestamp
tt, err := decodeTime(d.r.readx(int(d.vs)))
if err != nil {
panic(err)
}
v = tt
case bincVdCustomExt:
vt = valueTypeExt
l := d.decLen()
var re RawExt
re.Tag = uint64(d.r.readn1())
re.Data = d.r.readx(l)
v = &re
vt = valueTypeExt
case bincVdArray:
vt = valueTypeArray
decodeFurther = true
case bincVdMap:
vt = valueTypeMap
decodeFurther = true
default:
d.d.errorf("decodeNaked: Unrecognized d.vd: 0x%x", d.vd)
return
}
if !decodeFurther {
d.bdRead = false
}
if vt == valueTypeUint && d.h.SignedInteger {
d.bdType = valueTypeInt
v = int64(v.(uint64))
}
return
}
//------------------------------------
//BincHandle is a Handle for the Binc Schema-Free Encoding Format
//defined at https://github.com/ugorji/binc .
//
//BincHandle currently supports all Binc features with the following EXCEPTIONS:
// - only integers up to 64 bits of precision are supported.
// big integers are unsupported.
// - Only IEEE 754 binary32 and binary64 floats are supported (ie Go float32 and float64 types).
// extended precision and decimal IEEE 754 floats are unsupported.
// - Only UTF-8 strings supported.
// Unicode_Other Binc types (UTF16, UTF32) are currently unsupported.
//
//Note that these EXCEPTIONS are temporary and full support is possible and may happen soon.
type BincHandle struct {
BasicHandle
binaryEncodingType
}
func (h *BincHandle) newEncDriver(e *Encoder) encDriver {
return &bincEncDriver{e: e, w: e.w}
}
func (h *BincHandle) newDecDriver(d *Decoder) decDriver {
return &bincDecDriver{d: d, r: d.r, h: h, br: d.bytes}
}
var _ decDriver = (*bincDecDriver)(nil)
var _ encDriver = (*bincEncDriver)(nil)
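
For orientation, a hedged sketch of this driver in use: a struct round trip through `BincHandle`, including a `time.Time` field to exercise the builtin timestamp support (`bincVdTimestamp`) implemented above. The import path is assumed from the vendoring setup.

```go
package main

import (
	"fmt"
	"time"

	"github.com/ugorji/go/codec"
)

type Event struct {
	Name string
	At   time.Time // time.Time is a Binc builtin (encoded as a timestamp)
}

func main() {
	var (
		bh  codec.BincHandle
		buf []byte
	)

	in := Event{Name: "deploy", At: time.Now().UTC()}

	if err := codec.NewEncoderBytes(&buf, &bh).Encode(in); err != nil {
		panic(err)
	}

	var out Event
	if err := codec.NewDecoderBytes(buf, &bh).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Printf("%s at %s\n", out.Name, out.At)
}
```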

View file

@ -0,0 +1,566 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
import "math"
const (
cborMajorUint byte = iota
cborMajorNegInt
cborMajorBytes
cborMajorText
cborMajorArray
cborMajorMap
cborMajorTag
cborMajorOther
)
const (
cborBdFalse byte = 0xf4 + iota
cborBdTrue
cborBdNil
cborBdUndefined
cborBdExt
cborBdFloat16
cborBdFloat32
cborBdFloat64
)
const (
cborBdIndefiniteBytes byte = 0x5f
cborBdIndefiniteString = 0x7f
cborBdIndefiniteArray = 0x9f
cborBdIndefiniteMap = 0xbf
cborBdBreak = 0xff
)
const (
CborStreamBytes byte = 0x5f
CborStreamString = 0x7f
CborStreamArray = 0x9f
CborStreamMap = 0xbf
CborStreamBreak = 0xff
)
const (
cborBaseUint byte = 0x00
cborBaseNegInt = 0x20
cborBaseBytes = 0x40
cborBaseString = 0x60
cborBaseArray = 0x80
cborBaseMap = 0xa0
cborBaseTag = 0xc0
cborBaseSimple = 0xe0
)
// -------------------
type cborEncDriver struct {
e *Encoder
w encWriter
h *CborHandle
noBuiltInTypes
encNoSeparator
x [8]byte
}
func (e *cborEncDriver) EncodeNil() {
e.w.writen1(cborBdNil)
}
func (e *cborEncDriver) EncodeBool(b bool) {
if b {
e.w.writen1(cborBdTrue)
} else {
e.w.writen1(cborBdFalse)
}
}
func (e *cborEncDriver) EncodeFloat32(f float32) {
e.w.writen1(cborBdFloat32)
bigenHelper{e.x[:4], e.w}.writeUint32(math.Float32bits(f))
}
func (e *cborEncDriver) EncodeFloat64(f float64) {
e.w.writen1(cborBdFloat64)
bigenHelper{e.x[:8], e.w}.writeUint64(math.Float64bits(f))
}
func (e *cborEncDriver) encUint(v uint64, bd byte) {
if v <= 0x17 {
e.w.writen1(byte(v) + bd)
} else if v <= math.MaxUint8 {
e.w.writen2(bd+0x18, uint8(v))
} else if v <= math.MaxUint16 {
e.w.writen1(bd + 0x19)
bigenHelper{e.x[:2], e.w}.writeUint16(uint16(v))
} else if v <= math.MaxUint32 {
e.w.writen1(bd + 0x1a)
bigenHelper{e.x[:4], e.w}.writeUint32(uint32(v))
} else { // if v <= math.MaxUint64 {
e.w.writen1(bd + 0x1b)
bigenHelper{e.x[:8], e.w}.writeUint64(v)
}
}
func (e *cborEncDriver) EncodeInt(v int64) {
if v < 0 {
e.encUint(uint64(-1-v), cborBaseNegInt)
} else {
e.encUint(uint64(v), cborBaseUint)
}
}
func (e *cborEncDriver) EncodeUint(v uint64) {
e.encUint(v, cborBaseUint)
}
func (e *cborEncDriver) encLen(bd byte, length int) {
e.encUint(uint64(length), bd)
}
func (e *cborEncDriver) EncodeExt(rv interface{}, xtag uint64, ext Ext, en *Encoder) {
e.encUint(uint64(xtag), cborBaseTag)
if v := ext.ConvertExt(rv); v == nil {
e.EncodeNil()
} else {
en.encode(v)
}
}
func (e *cborEncDriver) EncodeRawExt(re *RawExt, en *Encoder) {
e.encUint(uint64(re.Tag), cborBaseTag)
if re.Data != nil {
en.encode(re.Data)
} else if re.Value == nil {
e.EncodeNil()
} else {
en.encode(re.Value)
}
}
func (e *cborEncDriver) EncodeArrayStart(length int) {
e.encLen(cborBaseArray, length)
}
func (e *cborEncDriver) EncodeMapStart(length int) {
e.encLen(cborBaseMap, length)
}
func (e *cborEncDriver) EncodeString(c charEncoding, v string) {
e.encLen(cborBaseString, len(v))
e.w.writestr(v)
}
func (e *cborEncDriver) EncodeSymbol(v string) {
e.EncodeString(c_UTF8, v)
}
func (e *cborEncDriver) EncodeStringBytes(c charEncoding, v []byte) {
e.encLen(cborBaseBytes, len(v))
e.w.writeb(v)
}
// ----------------------
type cborDecDriver struct {
d *Decoder
h *CborHandle
r decReader
br bool // bytes reader
bdRead bool
bdType valueType
bd byte
b [scratchByteArrayLen]byte
noBuiltInTypes
decNoSeparator
}
func (d *cborDecDriver) readNextBd() {
d.bd = d.r.readn1()
d.bdRead = true
d.bdType = valueTypeUnset
}
func (d *cborDecDriver) IsContainerType(vt valueType) (bv bool) {
switch vt {
case valueTypeNil:
return d.bd == cborBdNil
case valueTypeBytes:
return d.bd == cborBdIndefiniteBytes || (d.bd >= cborBaseBytes && d.bd < cborBaseString)
case valueTypeString:
return d.bd == cborBdIndefiniteString || (d.bd >= cborBaseString && d.bd < cborBaseArray)
case valueTypeArray:
return d.bd == cborBdIndefiniteArray || (d.bd >= cborBaseArray && d.bd < cborBaseMap)
case valueTypeMap:
return d.bd == cborBdIndefiniteMap || (d.bd >= cborBaseMap && d.bd < cborBaseTag)
}
d.d.errorf("isContainerType: unsupported parameter: %v", vt)
return // "unreachable"
}
func (d *cborDecDriver) TryDecodeAsNil() bool {
if !d.bdRead {
d.readNextBd()
}
// treat Nil and Undefined as nil values
if d.bd == cborBdNil || d.bd == cborBdUndefined {
d.bdRead = false
return true
}
return false
}
func (d *cborDecDriver) CheckBreak() bool {
if !d.bdRead {
d.readNextBd()
}
if d.bd == cborBdBreak {
d.bdRead = false
return true
}
return false
}
func (d *cborDecDriver) decUint() (ui uint64) {
v := d.bd & 0x1f
if v <= 0x17 {
ui = uint64(v)
} else {
if v == 0x18 {
ui = uint64(d.r.readn1())
} else if v == 0x19 {
ui = uint64(bigen.Uint16(d.r.readx(2)))
} else if v == 0x1a {
ui = uint64(bigen.Uint32(d.r.readx(4)))
} else if v == 0x1b {
ui = uint64(bigen.Uint64(d.r.readx(8)))
} else {
d.d.errorf("decUint: Invalid descriptor: %v", d.bd)
return
}
}
return
}
func (d *cborDecDriver) decCheckInteger() (neg bool) {
if !d.bdRead {
d.readNextBd()
}
major := d.bd >> 5
if major == cborMajorUint {
} else if major == cborMajorNegInt {
neg = true
} else {
d.d.errorf("invalid major: %v (bd: %v)", major, d.bd)
return
}
return
}
func (d *cborDecDriver) DecodeInt(bitsize uint8) (i int64) {
neg := d.decCheckInteger()
ui := d.decUint()
// check if this number can be converted to an int without overflow
var overflow bool
if neg {
if i, overflow = chkOvf.SignedInt(ui + 1); overflow {
d.d.errorf("cbor: overflow converting %v to signed integer", ui+1)
return
}
i = -i
} else {
if i, overflow = chkOvf.SignedInt(ui); overflow {
d.d.errorf("cbor: overflow converting %v to signed integer", ui)
return
}
}
if chkOvf.Int(i, bitsize) {
d.d.errorf("cbor: overflow integer: %v", i)
return
}
d.bdRead = false
return
}
func (d *cborDecDriver) DecodeUint(bitsize uint8) (ui uint64) {
if d.decCheckInteger() {
d.d.errorf("Assigning negative signed value to unsigned type")
return
}
ui = d.decUint()
if chkOvf.Uint(ui, bitsize) {
d.d.errorf("cbor: overflow integer: %v", ui)
return
}
d.bdRead = false
return
}
func (d *cborDecDriver) DecodeFloat(chkOverflow32 bool) (f float64) {
if !d.bdRead {
d.readNextBd()
}
if bd := d.bd; bd == cborBdFloat16 {
f = float64(math.Float32frombits(halfFloatToFloatBits(bigen.Uint16(d.r.readx(2)))))
} else if bd == cborBdFloat32 {
f = float64(math.Float32frombits(bigen.Uint32(d.r.readx(4))))
} else if bd == cborBdFloat64 {
f = math.Float64frombits(bigen.Uint64(d.r.readx(8)))
} else if bd >= cborBaseUint && bd < cborBaseBytes {
f = float64(d.DecodeInt(64))
} else {
d.d.errorf("Float only valid from float16/32/64: Invalid descriptor: %v", bd)
return
}
if chkOverflow32 && chkOvf.Float32(f) {
d.d.errorf("cbor: float32 overflow: %v", f)
return
}
d.bdRead = false
return
}
// bool can be decoded from bool only (single byte).
func (d *cborDecDriver) DecodeBool() (b bool) {
if !d.bdRead {
d.readNextBd()
}
if bd := d.bd; bd == cborBdTrue {
b = true
} else if bd == cborBdFalse {
} else {
d.d.errorf("Invalid single-byte value for bool: %s: %x", msgBadDesc, d.bd)
return
}
d.bdRead = false
return
}
func (d *cborDecDriver) ReadMapStart() (length int) {
d.bdRead = false
if d.bd == cborBdIndefiniteMap {
return -1
}
return d.decLen()
}
func (d *cborDecDriver) ReadArrayStart() (length int) {
d.bdRead = false
if d.bd == cborBdIndefiniteArray {
return -1
}
return d.decLen()
}
func (d *cborDecDriver) decLen() int {
return int(d.decUint())
}
func (d *cborDecDriver) decAppendIndefiniteBytes(bs []byte) []byte {
d.bdRead = false
for {
if d.CheckBreak() {
break
}
if major := d.bd >> 5; major != cborMajorBytes && major != cborMajorText {
d.d.errorf("cbor: expect bytes or string major type in indefinite string/bytes; got: %v, byte: %v", major, d.bd)
return nil
}
n := d.decLen()
oldLen := len(bs)
newLen := oldLen + n
if newLen > cap(bs) {
bs2 := make([]byte, newLen, 2*cap(bs)+n)
copy(bs2, bs)
bs = bs2
} else {
bs = bs[:newLen]
}
d.r.readb(bs[oldLen:newLen])
// bs = append(bs, d.r.readn()...)
d.bdRead = false
}
d.bdRead = false
return bs
}
func (d *cborDecDriver) DecodeBytes(bs []byte, isstring, zerocopy bool) (bsOut []byte) {
if !d.bdRead {
d.readNextBd()
}
if d.bd == cborBdNil || d.bd == cborBdUndefined {
d.bdRead = false
return nil
}
if d.bd == cborBdIndefiniteBytes || d.bd == cborBdIndefiniteString {
if bs == nil {
return d.decAppendIndefiniteBytes(nil)
}
return d.decAppendIndefiniteBytes(bs[:0])
}
clen := d.decLen()
d.bdRead = false
if zerocopy {
if d.br {
return d.r.readx(clen)
} else if len(bs) == 0 {
bs = d.b[:]
}
}
return decByteSlice(d.r, clen, bs)
}
func (d *cborDecDriver) DecodeString() (s string) {
return string(d.DecodeBytes(d.b[:], true, true))
}
func (d *cborDecDriver) DecodeExt(rv interface{}, xtag uint64, ext Ext) (realxtag uint64) {
if !d.bdRead {
d.readNextBd()
}
u := d.decUint()
d.bdRead = false
realxtag = u
if ext == nil {
re := rv.(*RawExt)
re.Tag = realxtag
d.d.decode(&re.Value)
} else if xtag != realxtag {
d.d.errorf("Wrong extension tag. Got %b. Expecting: %v", realxtag, xtag)
return
} else {
var v interface{}
d.d.decode(&v)
ext.UpdateExt(rv, v)
}
d.bdRead = false
return
}
func (d *cborDecDriver) DecodeNaked() (v interface{}, vt valueType, decodeFurther bool) {
if !d.bdRead {
d.readNextBd()
}
switch d.bd {
case cborBdNil:
vt = valueTypeNil
case cborBdFalse:
vt = valueTypeBool
v = false
case cborBdTrue:
vt = valueTypeBool
v = true
case cborBdFloat16, cborBdFloat32:
vt = valueTypeFloat
v = d.DecodeFloat(true)
case cborBdFloat64:
vt = valueTypeFloat
v = d.DecodeFloat(false)
case cborBdIndefiniteBytes:
vt = valueTypeBytes
v = d.DecodeBytes(nil, false, false)
case cborBdIndefiniteString:
vt = valueTypeString
v = d.DecodeString()
case cborBdIndefiniteArray:
vt = valueTypeArray
decodeFurther = true
case cborBdIndefiniteMap:
vt = valueTypeMap
decodeFurther = true
default:
switch {
case d.bd >= cborBaseUint && d.bd < cborBaseNegInt:
if d.h.SignedInteger {
vt = valueTypeInt
v = d.DecodeInt(64)
} else {
vt = valueTypeUint
v = d.DecodeUint(64)
}
case d.bd >= cborBaseNegInt && d.bd < cborBaseBytes:
vt = valueTypeInt
v = d.DecodeInt(64)
case d.bd >= cborBaseBytes && d.bd < cborBaseString:
vt = valueTypeBytes
v = d.DecodeBytes(nil, false, false)
case d.bd >= cborBaseString && d.bd < cborBaseArray:
vt = valueTypeString
v = d.DecodeString()
case d.bd >= cborBaseArray && d.bd < cborBaseMap:
vt = valueTypeArray
decodeFurther = true
case d.bd >= cborBaseMap && d.bd < cborBaseTag:
vt = valueTypeMap
decodeFurther = true
case d.bd >= cborBaseTag && d.bd < cborBaseSimple:
vt = valueTypeExt
var re RawExt
ui := d.decUint()
d.bdRead = false
re.Tag = ui
d.d.decode(&re.Value)
v = &re
// decodeFurther = true
default:
d.d.errorf("decodeNaked: Unrecognized d.bd: 0x%x", d.bd)
return
}
}
if !decodeFurther {
d.bdRead = false
}
return
}
// -------------------------
// CborHandle is a Handle for the CBOR encoding format,
// defined at http://tools.ietf.org/html/rfc7049 and documented further at http://cbor.io .
//
// CBOR is comprehensively supported, including support for:
// - indefinite-length arrays/maps/bytes/strings
// - (extension) tags in range 0..0xffff (0 .. 65535)
// - half, single and double-precision floats
// - all numbers (1, 2, 4 and 8-byte signed and unsigned integers)
// - nil, true, false, ...
// - arrays and maps, bytes and text strings
//
// None of the optional extensions (with tags) defined in the spec are supported out-of-the-box.
// Users can implement them as needed (using SetExt), including spec-documented ones:
// - timestamp, BigNum, BigFloat, Decimals, Encoded Text (e.g. URL, regexp, base64, MIME Message), etc.
//
// To encode with indefinite lengths (streaming), users will use
// (Must)Encode methods of *Encoder, along with writing CborStreamXXX constants.
//
// For example, to encode "one-byte" as an indefinite length string:
// var buf bytes.Buffer
// e := NewEncoder(&buf, new(CborHandle))
// buf.WriteByte(CborStreamString)
// e.MustEncode("one-")
// e.MustEncode("byte")
// buf.WriteByte(CborStreamBreak)
// encodedBytes := buf.Bytes()
// var vv interface{}
// NewDecoderBytes(buf.Bytes(), new(CborHandle)).MustDecode(&vv)
// // Now, vv contains the same string "one-byte"
//
type CborHandle struct {
BasicHandle
binaryEncodingType
}
func (h *CborHandle) newEncDriver(e *Encoder) encDriver {
return &cborEncDriver{e: e, w: e.w, h: h}
}
func (h *CborHandle) newDecDriver(d *Decoder) decDriver {
return &cborDecDriver{d: d, r: d.r, h: h, br: d.bytes}
}
var _ decDriver = (*cborDecDriver)(nil)
var _ encDriver = (*cborEncDriver)(nil)
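
The `CborHandle` comment above describes indefinite-length (streaming) encoding with the `CborStreamXXX` constants; here is that same example as a runnable program (import path assumed).

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/ugorji/go/codec"
)

func main() {
	var buf bytes.Buffer
	ch := new(codec.CborHandle)

	// Encode "one-byte" as an indefinite-length text string: two chunks
	// framed by the stream-start and break bytes.
	e := codec.NewEncoder(&buf, ch)
	buf.WriteByte(codec.CborStreamString)
	e.MustEncode("one-")
	e.MustEncode("byte")
	buf.WriteByte(codec.CborStreamBreak)

	// Decoding joins the chunks back into the single string "one-byte".
	var vv interface{}
	codec.NewDecoderBytes(buf.Bytes(), ch).MustDecode(&vv)
	fmt.Println(vv)
}
```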

File diff suppressed because it is too large

File diff suppressed because it is too large

File diff suppressed because it is too large

View file

@ -0,0 +1,442 @@
// //+build ignore
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
// ************************************************************
// DO NOT EDIT.
// THIS FILE IS AUTO-GENERATED from fast-path.go.tmpl
// ************************************************************
package codec
// Fast path functions try to create a fast path encode or decode implementation
// for common maps and slices.
//
// We define the functions and register them in this single file
// so as not to pollute the encode.go and decode.go, and create a dependency in there.
// This file can be omitted without causing a build failure.
//
// The advantage of fast paths is:
// - Many calls bypass reflection altogether
//
// Currently support
// - slice of all builtin types,
// - map of all builtin types to string or interface value
// - symmetrical maps of all builtin types (e.g. str-str, uint8-uint8)
// This should provide adequate "typical" implementations.
//
// Note that fast track decode functions must handle values for which an address cannot be obtained.
// For example:
// m2 := map[string]int{}
// p2 := []interface{}{m2}
// // decoding into p2 will bomb if fast track functions do not treat it as unaddressable.
//
import (
"reflect"
"sort"
)
const fastpathCheckNilFalse = false // for reflect
const fastpathCheckNilTrue = true // for type switch
type fastpathT struct {}
var fastpathTV fastpathT
type fastpathE struct {
rtid uintptr
rt reflect.Type
encfn func(encFnInfo, reflect.Value)
decfn func(decFnInfo, reflect.Value)
}
type fastpathA [{{ .FastpathLen }}]fastpathE
func (x *fastpathA) index(rtid uintptr) int {
// use binary search to grab the index (adapted from sort/search.go)
h, i, j := 0, 0, {{ .FastpathLen }} // len(x)
for i < j {
h = i + (j-i)/2
if x[h].rtid < rtid {
i = h + 1
} else {
j = h
}
}
if i < {{ .FastpathLen }} && x[i].rtid == rtid {
return i
}
return -1
}
type fastpathAslice []fastpathE
func (x fastpathAslice) Len() int { return len(x) }
func (x fastpathAslice) Less(i, j int) bool { return x[i].rtid < x[j].rtid }
func (x fastpathAslice) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
var fastpathAV fastpathA
// due to possible initialization loop error, make fastpath in an init()
func init() {
if !fastpathEnabled {
return
}
i := 0
fn := func(v interface{}, fe func(encFnInfo, reflect.Value), fd func(decFnInfo, reflect.Value)) (f fastpathE) {
xrt := reflect.TypeOf(v)
xptr := reflect.ValueOf(xrt).Pointer()
fastpathAV[i] = fastpathE{xptr, xrt, fe, fd}
i++
return
}
{{range .Values}}{{if not .Primitive}}{{if .Slice }}
fn([]{{ .Elem }}(nil), (encFnInfo).{{ .MethodNamePfx "fastpathEnc" false }}R, (decFnInfo).{{ .MethodNamePfx "fastpathDec" false }}R){{end}}{{end}}{{end}}
{{range .Values}}{{if not .Primitive}}{{if not .Slice }}
fn(map[{{ .MapKey }}]{{ .Elem }}(nil), (encFnInfo).{{ .MethodNamePfx "fastpathEnc" false }}R, (decFnInfo).{{ .MethodNamePfx "fastpathDec" false }}R){{end}}{{end}}{{end}}
sort.Sort(fastpathAslice(fastpathAV[:]))
}
// -- encode
// -- -- fast path type switch
func fastpathEncodeTypeSwitch(iv interface{}, e *Encoder) bool {
switch v := iv.(type) {
{{range .Values}}{{if not .Primitive}}{{if .Slice }}
case []{{ .Elem }}:{{else}}
case map[{{ .MapKey }}]{{ .Elem }}:{{end}}
fastpathTV.{{ .MethodNamePfx "Enc" false }}V(v, fastpathCheckNilTrue, e){{if .Slice }}
case *[]{{ .Elem }}:{{else}}
case *map[{{ .MapKey }}]{{ .Elem }}:{{end}}
fastpathTV.{{ .MethodNamePfx "Enc" false }}V(*v, fastpathCheckNilTrue, e)
{{end}}{{end}}
default:
return false
}
return true
}
func fastpathEncodeTypeSwitchSlice(iv interface{}, e *Encoder) bool {
switch v := iv.(type) {
{{range .Values}}{{if not .Primitive}}{{if .Slice }}
case []{{ .Elem }}:
fastpathTV.{{ .MethodNamePfx "Enc" false }}V(v, fastpathCheckNilTrue, e)
case *[]{{ .Elem }}:
fastpathTV.{{ .MethodNamePfx "Enc" false }}V(*v, fastpathCheckNilTrue, e)
{{end}}{{end}}{{end}}
default:
return false
}
return true
}
func fastpathEncodeTypeSwitchMap(iv interface{}, e *Encoder) bool {
switch v := iv.(type) {
{{range .Values}}{{if not .Primitive}}{{if not .Slice }}
case map[{{ .MapKey }}]{{ .Elem }}:
fastpathTV.{{ .MethodNamePfx "Enc" false }}V(v, fastpathCheckNilTrue, e)
case *map[{{ .MapKey }}]{{ .Elem }}:
fastpathTV.{{ .MethodNamePfx "Enc" false }}V(*v, fastpathCheckNilTrue, e)
{{end}}{{end}}{{end}}
default:
return false
}
return true
}
// -- -- fast path functions
{{range .Values}}{{if not .Primitive}}{{if .Slice }}
func (f encFnInfo) {{ .MethodNamePfx "fastpathEnc" false }}R(rv reflect.Value) {
fastpathTV.{{ .MethodNamePfx "Enc" false }}V(rv.Interface().([]{{ .Elem }}), fastpathCheckNilFalse, f.e)
}
func (_ fastpathT) {{ .MethodNamePfx "Enc" false }}V(v []{{ .Elem }}, checkNil bool, e *Encoder) {
ee := e.e
if checkNil && v == nil {
ee.EncodeNil()
return
}
ee.EncodeArrayStart(len(v))
if e.be {
for _, v2 := range v {
{{ encmd .Elem "v2"}}
}
} else {
for j, v2 := range v {
if j > 0 {
ee.EncodeArrayEntrySeparator()
}
{{ encmd .Elem "v2"}}
}
ee.EncodeArrayEnd()
}
}
{{end}}{{end}}{{end}}
{{range .Values}}{{if not .Primitive}}{{if not .Slice }}
func (f encFnInfo) {{ .MethodNamePfx "fastpathEnc" false }}R(rv reflect.Value) {
fastpathTV.{{ .MethodNamePfx "Enc" false }}V(rv.Interface().(map[{{ .MapKey }}]{{ .Elem }}), fastpathCheckNilFalse, f.e)
}
func (_ fastpathT) {{ .MethodNamePfx "Enc" false }}V(v map[{{ .MapKey }}]{{ .Elem }}, checkNil bool, e *Encoder) {
ee := e.e
if checkNil && v == nil {
ee.EncodeNil()
return
}
ee.EncodeMapStart(len(v))
{{if eq .MapKey "string"}}asSymbols := e.h.AsSymbols&AsSymbolMapStringKeysFlag != 0{{end}}
if e.be {
for k2, v2 := range v {
{{if eq .MapKey "string"}}if asSymbols {
ee.EncodeSymbol(k2)
} else {
ee.EncodeString(c_UTF8, k2)
}{{else}}{{ encmd .MapKey "k2"}}{{end}}
{{ encmd .Elem "v2"}}
}
} else {
j := 0
for k2, v2 := range v {
if j > 0 {
ee.EncodeMapEntrySeparator()
}
{{if eq .MapKey "string"}}if asSymbols {
ee.EncodeSymbol(k2)
} else {
ee.EncodeString(c_UTF8, k2)
}{{else}}{{ encmd .MapKey "k2"}}{{end}}
ee.EncodeMapKVSeparator()
{{ encmd .Elem "v2"}}
j++
}
ee.EncodeMapEnd()
}
}
{{end}}{{end}}{{end}}
// -- decode
// -- -- fast path type switch
func fastpathDecodeTypeSwitch(iv interface{}, d *Decoder) bool {
switch v := iv.(type) {
{{range .Values}}{{if not .Primitive}}{{if .Slice }}
case []{{ .Elem }}:{{else}}
case map[{{ .MapKey }}]{{ .Elem }}:{{end}}
fastpathTV.{{ .MethodNamePfx "Dec" false }}V(v, fastpathCheckNilFalse, false, d){{if .Slice }}
case *[]{{ .Elem }}:{{else}}
case *map[{{ .MapKey }}]{{ .Elem }}:{{end}}
v2, changed2 := fastpathTV.{{ .MethodNamePfx "Dec" false }}V(*v, fastpathCheckNilFalse, true, d)
if changed2 {
*v = v2
}
{{end}}{{end}}
default:
return false
}
return true
}
// -- -- fast path functions
{{range .Values}}{{if not .Primitive}}{{if .Slice }}
{{/*
Slices can change if they
- did not come from an array
- are addressable (from a ptr)
- are settable (e.g. contained in an interface{})
*/}}
func (f decFnInfo) {{ .MethodNamePfx "fastpathDec" false }}R(rv reflect.Value) {
array := f.seq == seqTypeArray
if !array && rv.CanAddr() { // CanSet => CanAddr + Exported
vp := rv.Addr().Interface().(*[]{{ .Elem }})
v, changed := fastpathTV.{{ .MethodNamePfx "Dec" false }}V(*vp, fastpathCheckNilFalse, !array, f.d)
if changed {
*vp = v
}
} else {
v := rv.Interface().([]{{ .Elem }})
fastpathTV.{{ .MethodNamePfx "Dec" false }}V(v, fastpathCheckNilFalse, false, f.d)
}
}
func (f fastpathT) {{ .MethodNamePfx "Dec" false }}X(vp *[]{{ .Elem }}, checkNil bool, d *Decoder) {
v, changed := f.{{ .MethodNamePfx "Dec" false }}V(*vp, checkNil, true, d)
if changed {
*vp = v
}
}
func (_ fastpathT) {{ .MethodNamePfx "Dec" false }}V(v []{{ .Elem }}, checkNil bool, canChange bool,
d *Decoder) (_ []{{ .Elem }}, changed bool) {
dd := d.d
// if dd.isContainerType(valueTypeNil) { dd.TryDecodeAsNil()
if checkNil && dd.TryDecodeAsNil() {
if v != nil {
changed = true
}
return nil, changed
}
slh, containerLenS := d.decSliceHelperStart()
if canChange && v == nil {
if containerLenS <= 0 {
v = []{{ .Elem }}{}
} else {
v = make([]{{ .Elem }}, containerLenS, containerLenS)
}
changed = true
}
if containerLenS == 0 {
if canChange && len(v) != 0 {
v = v[:0]
changed = true
}{{/*
// slh.End() // dd.ReadArrayEnd()
*/}}
return v, changed
}
// for j := 0; j < containerLenS; j++ {
if containerLenS > 0 {
decLen := containerLenS
if containerLenS > cap(v) {
if canChange {
s := make([]{{ .Elem }}, containerLenS, containerLenS)
// copy(s, v[:cap(v)])
v = s
changed = true
} else {
d.arrayCannotExpand(len(v), containerLenS)
decLen = len(v)
}
} else if containerLenS != len(v) {
v = v[:containerLenS]
changed = true
}
// all checks done. cannot go past len.
j := 0
for ; j < decLen; j++ {
{{ if eq .Elem "interface{}" }}d.decode(&v[j]){{ else }}v[j] = {{ decmd .Elem }}{{ end }}
}
if !canChange {
for ; j < containerLenS; j++ {
d.swallow()
}
}
} else {
j := 0
for ; !dd.CheckBreak(); j++ {
if j >= len(v) {
if canChange {
v = append(v, {{ zerocmd .Elem }})
changed = true
} else {
d.arrayCannotExpand(len(v), j+1)
}
}
if j > 0 {
slh.Sep(j)
}
if j < len(v) { // all checks done. cannot go past len.
{{ if eq .Elem "interface{}" }}d.decode(&v[j])
{{ else }}v[j] = {{ decmd .Elem }}{{ end }}
} else {
d.swallow()
}
}
slh.End()
}
return v, changed
}
{{end}}{{end}}{{end}}
{{range .Values}}{{if not .Primitive}}{{if not .Slice }}
{{/*
Maps can change if they are
- addressable (from a ptr)
- settable (e.g. contained in an interface{})
*/}}
func (f decFnInfo) {{ .MethodNamePfx "fastpathDec" false }}R(rv reflect.Value) {
if rv.CanAddr() {
vp := rv.Addr().Interface().(*map[{{ .MapKey }}]{{ .Elem }})
v, changed := fastpathTV.{{ .MethodNamePfx "Dec" false }}V(*vp, fastpathCheckNilFalse, true, f.d)
if changed {
*vp = v
}
} else {
v := rv.Interface().(map[{{ .MapKey }}]{{ .Elem }})
fastpathTV.{{ .MethodNamePfx "Dec" false }}V(v, fastpathCheckNilFalse, false, f.d)
}
}
func (f fastpathT) {{ .MethodNamePfx "Dec" false }}X(vp *map[{{ .MapKey }}]{{ .Elem }}, checkNil bool, d *Decoder) {
v, changed := f.{{ .MethodNamePfx "Dec" false }}V(*vp, checkNil, true, d)
if changed {
*vp = v
}
}
func (_ fastpathT) {{ .MethodNamePfx "Dec" false }}V(v map[{{ .MapKey }}]{{ .Elem }}, checkNil bool, canChange bool,
d *Decoder) (_ map[{{ .MapKey }}]{{ .Elem }}, changed bool) {
dd := d.d
// if dd.isContainerType(valueTypeNil) {dd.TryDecodeAsNil()
if checkNil && dd.TryDecodeAsNil() {
if v != nil {
changed = true
}
return nil, changed
}
containerLen := dd.ReadMapStart()
if canChange && v == nil {
if containerLen > 0 {
v = make(map[{{ .MapKey }}]{{ .Elem }}, containerLen)
} else {
v = make(map[{{ .MapKey }}]{{ .Elem }}) // supports indefinite-length, etc
}
changed = true
}
if containerLen > 0 {
for j := 0; j < containerLen; j++ {
{{ if eq .MapKey "interface{}" }}var mk interface{}
d.decode(&mk)
if bv, bok := mk.([]byte); bok {
mk = string(bv) // maps cannot have []byte as key. switch to string.
}{{ else }}mk := {{ decmd .MapKey }}{{ end }}
mv := v[mk]
{{ if eq .Elem "interface{}" }}d.decode(&mv)
{{ else }}mv = {{ decmd .Elem }}{{ end }}
if v != nil {
v[mk] = mv
}
}
} else if containerLen < 0 {
for j := 0; !dd.CheckBreak(); j++ {
if j > 0 {
dd.ReadMapEntrySeparator()
}
{{ if eq .MapKey "interface{}" }}var mk interface{}
d.decode(&mk)
if bv, bok := mk.([]byte); bok {
mk = string(bv) // maps cannot have []byte as key. switch to string.
}{{ else }}mk := {{ decmd .MapKey }}{{ end }}
dd.ReadMapKVSeparator()
mv := v[mk]
{{ if eq .Elem "interface{}" }}d.decode(&mv)
{{ else }}mv = {{ decmd .Elem }}{{ end }}
if v != nil {
v[mk] = mv
}
}
dd.ReadMapEnd()
}
return v, changed
}
{{end}}{{end}}{{end}}
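
The header comment of this template warns that fast-path decode functions must cope with values whose address cannot be taken. A hedged sketch of that exact situation follows, using the generated fast path for `map[string]interface{}` (import path assumed; the behaviour relied on is the in-place fill described in that comment).

```go
package main

import (
	"fmt"

	"github.com/ugorji/go/codec"
)

func main() {
	var (
		ch  codec.CborHandle
		buf []byte
	)

	// Encode a slice whose only element is a map.
	src := []interface{}{map[string]interface{}{"a": uint64(1)}}
	if err := codec.NewEncoderBytes(&buf, &ch).Encode(src); err != nil {
		panic(err)
	}

	// m2 is reachable only through an interface{} slot, so it is not addressable;
	// the fast-path map decoder fills it in place rather than replacing it.
	m2 := map[string]interface{}{}
	p2 := []interface{}{m2}
	if err := codec.NewDecoderBytes(buf, &ch).Decode(&p2); err != nil {
		panic(err)
	}
	fmt.Println(m2["a"]) // 1
}
```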

View file

@ -0,0 +1,80 @@
{{var "v"}} := {{ if not isArray}}*{{ end }}{{ .Varname }}
{{var "h"}}, {{var "l"}} := z.DecSliceHelperStart()
var {{var "c"}} bool
_ = {{var "c"}}
{{ if not isArray }}if {{var "v"}} == nil {
if {{var "l"}} <= 0 {
{{var "v"}} = make({{ .CTyp }}, 0)
} else {
{{var "v"}} = make({{ .CTyp }}, {{var "l"}})
}
{{var "c"}} = true
}
{{ end }}
if {{var "l"}} == 0 { {{ if isSlice }}
if len({{var "v"}}) != 0 {
{{var "v"}} = {{var "v"}}[:0]
{{var "c"}} = true
} {{ end }}
} else if {{var "l"}} > 0 {
{{ if isChan }}
for {{var "r"}} := 0; {{var "r"}} < {{var "l"}}; {{var "r"}}++ {
var {{var "t"}} {{ .Typ }}
{{ $x := printf "%st%s" .TempVar .Rand }}{{ decLineVar $x }}
{{var "v"}} <- {{var "t"}}
{{ else }}
{{var "n"}} := {{var "l"}}
if {{var "l"}} > cap({{var "v"}}) {
{{ if isArray }}z.DecArrayCannotExpand(len({{var "v"}}), {{var "l"}})
{{var "n"}} = len({{var "v"}})
{{ else }}{{ if .Immutable }}
{{var "v2"}} := {{var "v"}}
{{var "v"}} = make([]{{ .Typ }}, {{var "l"}}, {{var "l"}})
if len({{var "v"}}) > 0 {
copy({{var "v"}}, {{var "v2"}}[:cap({{var "v2"}})])
}
{{ else }}{{var "v"}} = make([]{{ .Typ }}, {{var "l"}}, {{var "l"}})
{{ end }}{{var "c"}} = true
{{ end }}
} else if {{var "l"}} != len({{var "v"}}) {
{{ if isSlice }}{{var "v"}} = {{var "v"}}[:{{var "l"}}]
{{var "c"}} = true {{ end }}
}
{{var "j"}} := 0
for ; {{var "j"}} < {{var "n"}} ; {{var "j"}}++ {
{{ $x := printf "%[1]vv%[2]v[%[1]vj%[2]v]" .TempVar .Rand }}{{ decLineVar $x }}
} {{ if isArray }}
for ; {{var "j"}} < {{var "l"}} ; {{var "j"}}++ {
z.DecSwallow()
}{{ end }}
{{ end }}{{/* closing if not chan */}}
} else {
for {{var "j"}} := 0; !r.CheckBreak(); {{var "j"}}++ {
if {{var "j"}} >= len({{var "v"}}) {
{{ if isArray }}z.DecArrayCannotExpand(len({{var "v"}}), {{var "j"}}+1)
{{ else if isSlice}}{{var "v"}} = append({{var "v"}}, {{zero}})// var {{var "z"}} {{ .Typ }}
{{var "c"}} = true {{ end }}
}
if {{var "j"}} > 0 {
{{var "h"}}.Sep({{var "j"}})
}
{{ if isChan}}
var {{var "t"}} {{ .Typ }}
{{ $x := printf "%st%s" .TempVar .Rand }}{{ decLineVar $x }}
{{var "v"}} <- {{var "t"}}
{{ else }}
if {{var "j"}} < len({{var "v"}}) {
{{ $x := printf "%[1]vv%[2]v[%[1]vj%[2]v]" .TempVar .Rand }}{{ decLineVar $x }}
} else {
z.DecSwallow()
}
{{ end }}
}
{{var "h"}}.End()
}
{{ if not isArray }}if {{var "c"}} {
*{{ .Varname }} = {{var "v"}}
}{{ end }}

View file

@ -0,0 +1,46 @@
{{var "v"}} := *{{ .Varname }}
{{var "l"}} := r.ReadMapStart()
if {{var "v"}} == nil {
if {{var "l"}} > 0 {
{{var "v"}} = make(map[{{ .KTyp }}]{{ .Typ }}, {{var "l"}})
} else {
{{var "v"}} = make(map[{{ .KTyp }}]{{ .Typ }}) // supports indefinite-length, etc
}
*{{ .Varname }} = {{var "v"}}
}
if {{var "l"}} > 0 {
for {{var "j"}} := 0; {{var "j"}} < {{var "l"}}; {{var "j"}}++ {
var {{var "mk"}} {{ .KTyp }}
{{ $x := printf "%vmk%v" .TempVar .Rand }}{{ decLineVarK $x }}
{{ if eq .KTyp "interface{}" }}// special case if a byte array.
if {{var "bv"}}, {{var "bok"}} := {{var "mk"}}.([]byte); {{var "bok"}} {
{{var "mk"}} = string({{var "bv"}})
}
{{ end }}
{{var "mv"}} := {{var "v"}}[{{var "mk"}}]
{{ $x := printf "%vmv%v" .TempVar .Rand }}{{ decLineVar $x }}
if {{var "v"}} != nil {
{{var "v"}}[{{var "mk"}}] = {{var "mv"}}
}
}
} else if {{var "l"}} < 0 {
for {{var "j"}} := 0; !r.CheckBreak(); {{var "j"}}++ {
if {{var "j"}} > 0 {
r.ReadMapEntrySeparator()
}
var {{var "mk"}} {{ .KTyp }}
{{ $x := printf "%vmk%v" .TempVar .Rand }}{{ decLineVarK $x }}
{{ if eq .KTyp "interface{}" }}// special case if a byte array.
if {{var "bv"}}, {{var "bok"}} := {{var "mk"}}.([]byte); {{var "bok"}} {
{{var "mk"}} = string({{var "bv"}})
}
{{ end }}
r.ReadMapKVSeparator()
{{var "mv"}} := {{var "v"}}[{{var "mk"}}]
{{ $x := printf "%vmv%v" .TempVar .Rand }}{{ decLineVar $x }}
if {{var "v"}} != nil {
{{var "v"}}[{{var "mk"}}] = {{var "mv"}}
}
}
r.ReadMapEnd()
} // else len==0: TODO: Should we clear map entries?

View file

@ -0,0 +1,102 @@
// //+build ignore
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
// ************************************************************
// DO NOT EDIT.
// THIS FILE IS AUTO-GENERATED from gen-helper.go.tmpl
// ************************************************************
package codec
// This file is used to generate helper code for codecgen.
// The values here i.e. genHelper(En|De)coder are not to be used directly by
// library users. They WILL change continuously and without notice.
//
// To help enforce this, we create an unexported type with exported members.
// The only way to get the type is via the one exported type that we control (somewhat).
//
// When static codecs are created for types, they will use this value
// to perform encoding or decoding of primitives or known slice or map types.
// GenHelperEncoder is exported so that it can be used externally by codecgen.
// Library users: DO NOT USE IT DIRECTLY. IT WILL CHANGE CONTINUOUSLY WITHOUT NOTICE.
func GenHelperEncoder(e *Encoder) (genHelperEncoder, encDriver) {
return genHelperEncoder{e: e}, e.e
}
// GenHelperDecoder is exported so that it can be used externally by codecgen.
// Library users: DO NOT USE IT DIRECTLY. IT WILL CHANGE CONTINUOUSLY WITHOUT NOTICE.
func GenHelperDecoder(d *Decoder) (genHelperDecoder, decDriver) {
return genHelperDecoder{d: d}, d.d
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
type genHelperEncoder struct {
e *Encoder
F fastpathT
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
type genHelperDecoder struct {
d *Decoder
F fastpathT
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncBasicHandle() *BasicHandle {
return f.e.h
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncBinary() bool {
return f.e.be // f.e.hh.isBinaryEncoding()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncFallback(iv interface{}) {
// println(">>>>>>>>> EncFallback")
f.e.encodeI(iv, false, false)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecBasicHandle() *BasicHandle {
return f.d.h
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecBinary() bool {
return f.d.be // f.d.hh.isBinaryEncoding()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecSwallow() {
f.d.swallow()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecScratchBuffer() []byte {
return f.d.b[:]
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecFallback(iv interface{}, chkPtr bool) {
// println(">>>>>>>>> DecFallback")
f.d.decodeI(iv, chkPtr, false, false, false)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecSliceHelperStart() (decSliceHelper, int) {
return f.d.decSliceHelperStart()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecStructFieldNotFound(index int, name string) {
f.d.structFieldNotFound(index, name)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecArrayCannotExpand(sliceLen, streamLen int) {
f.d.arrayCannotExpand(sliceLen, streamLen)
}

View file

@ -0,0 +1,250 @@
// //+build ignore
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
// ************************************************************
// DO NOT EDIT.
// THIS FILE IS AUTO-GENERATED from gen-helper.go.tmpl
// ************************************************************
package codec
// This file is used to generate helper code for codecgen.
// The values here i.e. genHelper(En|De)coder are not to be used directly by
// library users. They WILL change continuously and without notice.
//
// To help enforce this, we create an unexported type with exported members.
// The only way to get the type is via the one exported type that we control (somewhat).
//
// When static codecs are created for types, they will use this value
// to perform encoding or decoding of primitives or known slice or map types.
// GenHelperEncoder is exported so that it can be used externally by codecgen.
// Library users: DO NOT USE IT DIRECTLY. IT WILL CHANGE CONTINUOUSLY WITHOUT NOTICE.
func GenHelperEncoder(e *Encoder) (genHelperEncoder, encDriver) {
return genHelperEncoder{e:e}, e.e
}
// GenHelperDecoder is exported so that it can be used externally by codecgen.
// Library users: DO NOT USE IT DIRECTLY. IT WILL CHANGE CONTINUOUSLY WITHOUT NOTICE.
func GenHelperDecoder(d *Decoder) (genHelperDecoder, decDriver) {
return genHelperDecoder{d:d}, d.d
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
type genHelperEncoder struct {
e *Encoder
F fastpathT
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
type genHelperDecoder struct {
d *Decoder
F fastpathT
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncBasicHandle() *BasicHandle {
return f.e.h
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncBinary() bool {
return f.e.be // f.e.hh.isBinaryEncoding()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncFallback(iv interface{}) {
// println(">>>>>>>>> EncFallback")
f.e.encodeI(iv, false, false)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecBasicHandle() *BasicHandle {
return f.d.h
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecBinary() bool {
return f.d.be // f.d.hh.isBinaryEncoding()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecSwallow() {
f.d.swallow()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecScratchBuffer() []byte {
return f.d.b[:]
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecFallback(iv interface{}, chkPtr bool) {
// println(">>>>>>>>> DecFallback")
f.d.decodeI(iv, chkPtr, false, false, false)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecSliceHelperStart() (decSliceHelper, int) {
return f.d.decSliceHelperStart()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecStructFieldNotFound(index int, name string) {
f.d.structFieldNotFound(index, name)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecArrayCannotExpand(sliceLen, streamLen int) {
f.d.arrayCannotExpand(sliceLen, streamLen)
}
{{/*
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncDriver() encDriver {
return f.e.e
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecDriver() decDriver {
return f.d.d
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncNil() {
f.e.e.EncodeNil()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncBytes(v []byte) {
f.e.e.EncodeStringBytes(c_RAW, v)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncArrayStart(length int) {
f.e.e.EncodeArrayStart(length)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncArrayEnd() {
f.e.e.EncodeArrayEnd()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncArrayEntrySeparator() {
f.e.e.EncodeArrayEntrySeparator()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncMapStart(length int) {
f.e.e.EncodeMapStart(length)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncMapEnd() {
f.e.e.EncodeMapEnd()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncMapEntrySeparator() {
f.e.e.EncodeMapEntrySeparator()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) EncMapKVSeparator() {
f.e.e.EncodeMapKVSeparator()
}
// ---------
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecBytes(v *[]byte) {
*v = f.d.d.DecodeBytes(*v)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecTryNil() bool {
return f.d.d.TryDecodeAsNil()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecContainerIsNil() (b bool) {
return f.d.d.IsContainerType(valueTypeNil)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecContainerIsMap() (b bool) {
return f.d.d.IsContainerType(valueTypeMap)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecContainerIsArray() (b bool) {
return f.d.d.IsContainerType(valueTypeArray)
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecCheckBreak() bool {
return f.d.d.CheckBreak()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecMapStart() int {
return f.d.d.ReadMapStart()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecArrayStart() int {
return f.d.d.ReadArrayStart()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecMapEnd() {
f.d.d.ReadMapEnd()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecArrayEnd() {
f.d.d.ReadArrayEnd()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecArrayEntrySeparator() {
f.d.d.ReadArrayEntrySeparator()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecMapEntrySeparator() {
f.d.d.ReadMapEntrySeparator()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) DecMapKVSeparator() {
f.d.d.ReadMapKVSeparator()
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) ReadStringAsBytes(bs []byte) []byte {
return f.d.d.DecodeStringAsBytes(bs)
}
// -- encode calls (primitives)
{{range .Values}}{{if .Primitive }}{{if ne .Primitive "interface{}" }}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) {{ .MethodNamePfx "Enc" true }}(v {{ .Primitive }}) {
ee := f.e.e
{{ encmd .Primitive "v" }}
}
{{ end }}{{ end }}{{ end }}
// -- decode calls (primitives)
{{range .Values}}{{if .Primitive }}{{if ne .Primitive "interface{}" }}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) {{ .MethodNamePfx "Dec" true }}(vp *{{ .Primitive }}) {
dd := f.d.d
*vp = {{ decmd .Primitive }}
}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperDecoder) {{ .MethodNamePfx "Read" true }}() (v {{ .Primitive }}) {
dd := f.d.d
v = {{ decmd .Primitive }}
return
}
{{ end }}{{ end }}{{ end }}
// -- encode calls (slices/maps)
{{range .Values}}{{if not .Primitive }}{{if .Slice }}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) {{ .MethodNamePfx "Enc" false }}(v []{{ .Elem }}) { {{ else }}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
func (f genHelperEncoder) {{ .MethodNamePfx "Enc" false }}(v map[{{ .MapKey }}]{{ .Elem }}) { {{end}}
f.F.{{ .MethodNamePfx "Enc" false }}V(v, false, f.e)
}
{{ end }}{{ end }}
// -- decode calls (slices/maps)
{{range .Values}}{{if not .Primitive }}
// FOR USE BY CODECGEN ONLY. IT *WILL* CHANGE WITHOUT NOTICE. *DO NOT USE*
{{if .Slice }}func (f genHelperDecoder) {{ .MethodNamePfx "Dec" false }}(vp *[]{{ .Elem }}) {
{{else}}func (f genHelperDecoder) {{ .MethodNamePfx "Dec" false }}(vp *map[{{ .MapKey }}]{{ .Elem }}) { {{end}}
v, changed := f.F.{{ .MethodNamePfx "Dec" false }}V(*vp, false, true, f.d)
if changed {
*vp = v
}
}
{{ end }}{{ end }}
*/}}

View file

@ -0,0 +1,139 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
// DO NOT EDIT. THIS FILE IS AUTO-GENERATED FROM gen-dec-(map|array).go.tmpl
const genDecMapTmpl = `
{{var "v"}} := *{{ .Varname }}
{{var "l"}} := r.ReadMapStart()
if {{var "v"}} == nil {
if {{var "l"}} > 0 {
{{var "v"}} = make(map[{{ .KTyp }}]{{ .Typ }}, {{var "l"}})
} else {
{{var "v"}} = make(map[{{ .KTyp }}]{{ .Typ }}) // supports indefinite-length, etc
}
*{{ .Varname }} = {{var "v"}}
}
if {{var "l"}} > 0 {
for {{var "j"}} := 0; {{var "j"}} < {{var "l"}}; {{var "j"}}++ {
var {{var "mk"}} {{ .KTyp }}
{{ $x := printf "%vmk%v" .TempVar .Rand }}{{ decLineVarK $x }}
{{ if eq .KTyp "interface{}" }}// special case if a byte array.
if {{var "bv"}}, {{var "bok"}} := {{var "mk"}}.([]byte); {{var "bok"}} {
{{var "mk"}} = string({{var "bv"}})
}
{{ end }}
{{var "mv"}} := {{var "v"}}[{{var "mk"}}]
{{ $x := printf "%vmv%v" .TempVar .Rand }}{{ decLineVar $x }}
if {{var "v"}} != nil {
{{var "v"}}[{{var "mk"}}] = {{var "mv"}}
}
}
} else if {{var "l"}} < 0 {
for {{var "j"}} := 0; !r.CheckBreak(); {{var "j"}}++ {
if {{var "j"}} > 0 {
r.ReadMapEntrySeparator()
}
var {{var "mk"}} {{ .KTyp }}
{{ $x := printf "%vmk%v" .TempVar .Rand }}{{ decLineVarK $x }}
{{ if eq .KTyp "interface{}" }}// special case if a byte array.
if {{var "bv"}}, {{var "bok"}} := {{var "mk"}}.([]byte); {{var "bok"}} {
{{var "mk"}} = string({{var "bv"}})
}
{{ end }}
r.ReadMapKVSeparator()
{{var "mv"}} := {{var "v"}}[{{var "mk"}}]
{{ $x := printf "%vmv%v" .TempVar .Rand }}{{ decLineVar $x }}
if {{var "v"}} != nil {
{{var "v"}}[{{var "mk"}}] = {{var "mv"}}
}
}
r.ReadMapEnd()
} // else len==0: TODO: Should we clear map entries?
`
const genDecListTmpl = `
{{var "v"}} := {{ if not isArray}}*{{ end }}{{ .Varname }}
{{var "h"}}, {{var "l"}} := z.DecSliceHelperStart()
var {{var "c"}} bool
_ = {{var "c"}}
{{ if not isArray }}if {{var "v"}} == nil {
if {{var "l"}} <= 0 {
{{var "v"}} = make({{ .CTyp }}, 0)
} else {
{{var "v"}} = make({{ .CTyp }}, {{var "l"}})
}
{{var "c"}} = true
}
{{ end }}
if {{var "l"}} == 0 { {{ if isSlice }}
if len({{var "v"}}) != 0 {
{{var "v"}} = {{var "v"}}[:0]
{{var "c"}} = true
} {{ end }}
} else if {{var "l"}} > 0 {
{{ if isChan }}
for {{var "r"}} := 0; {{var "r"}} < {{var "l"}}; {{var "r"}}++ {
var {{var "t"}} {{ .Typ }}
{{ $x := printf "%st%s" .TempVar .Rand }}{{ decLineVar $x }}
{{var "v"}} <- {{var "t"}}
{{ else }}
{{var "n"}} := {{var "l"}}
if {{var "l"}} > cap({{var "v"}}) {
{{ if isArray }}z.DecArrayCannotExpand(len({{var "v"}}), {{var "l"}})
{{var "n"}} = len({{var "v"}})
{{ else }}{{ if .Immutable }}
{{var "v2"}} := {{var "v"}}
{{var "v"}} = make([]{{ .Typ }}, {{var "l"}}, {{var "l"}})
if len({{var "v"}}) > 0 {
copy({{var "v"}}, {{var "v2"}}[:cap({{var "v2"}})])
}
{{ else }}{{var "v"}} = make([]{{ .Typ }}, {{var "l"}}, {{var "l"}})
{{ end }}{{var "c"}} = true
{{ end }}
} else if {{var "l"}} != len({{var "v"}}) {
{{ if isSlice }}{{var "v"}} = {{var "v"}}[:{{var "l"}}]
{{var "c"}} = true {{ end }}
}
{{var "j"}} := 0
for ; {{var "j"}} < {{var "n"}} ; {{var "j"}}++ {
{{ $x := printf "%[1]vv%[2]v[%[1]vj%[2]v]" .TempVar .Rand }}{{ decLineVar $x }}
} {{ if isArray }}
for ; {{var "j"}} < {{var "l"}} ; {{var "j"}}++ {
z.DecSwallow()
}{{ end }}
{{ end }}{{/* closing if not chan */}}
} else {
for {{var "j"}} := 0; !r.CheckBreak(); {{var "j"}}++ {
if {{var "j"}} >= len({{var "v"}}) {
{{ if isArray }}z.DecArrayCannotExpand(len({{var "v"}}), {{var "j"}}+1)
{{ else if isSlice}}{{var "v"}} = append({{var "v"}}, {{zero}})// var {{var "z"}} {{ .Typ }}
{{var "c"}} = true {{ end }}
}
if {{var "j"}} > 0 {
{{var "h"}}.Sep({{var "j"}})
}
{{ if isChan}}
var {{var "t"}} {{ .Typ }}
{{ $x := printf "%st%s" .TempVar .Rand }}{{ decLineVar $x }}
{{var "v"}} <- {{var "t"}}
{{ else }}
if {{var "j"}} < len({{var "v"}}) {
{{ $x := printf "%[1]vv%[2]v[%[1]vj%[2]v]" .TempVar .Rand }}{{ decLineVar $x }}
} else {
z.DecSwallow()
}
{{ end }}
}
{{var "h"}}.End()
}
{{ if not isArray }}if {{var "c"}} {
*{{ .Varname }} = {{var "v"}}
}{{ end }}
`

File diff suppressed because it is too large

View file

@ -0,0 +1,846 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
// Contains code shared by both encode and decode.
// Some shared ideas around encoding/decoding
// ------------------------------------------
//
// If an interface{} is passed, we first do a type assertion to see if it is
// a primitive type or a map/slice of primitive types, and use a fastpath to handle it.
//
// If we start with a reflect.Value, we are already in reflect.Value land and
// will try to grab the function for the underlying Type and directly call that function.
// This is more performant than calling reflect.Value.Interface().
//
// This still helps us bypass many layers of reflection, and give best performance.
//
// Containers
// ------------
// Containers in the stream are either associative arrays (key-value pairs) or
// regular arrays (indexed by incrementing integers).
//
// Some streams support indefinite-length containers, and use a breaking
// byte-sequence to denote that the container has come to an end.
//
// Some streams also are text-based, and use explicit separators to denote the
// end/beginning of different values.
//
// During encode, we use a high-level condition to determine how to iterate through
// the container. That decision is based on whether the container is text-based (with
// separators) or binary (without separators). If binary, we do not even call the
// encoding of separators.
//
// During decode, we use a different high-level condition to determine how to iterate
// through the containers. That decision is based on whether the stream contained
// a length prefix, or if it used explicit breaks. If length-prefixed, we assume that
// it has to be binary, and we do not even try to read separators.
//
// The only codec that may suffer (slightly) is cbor, and only when decoding indefinite-length.
// It may suffer because we treat it like a text-based codec, and read separators.
// However, this read is a no-op and the cost is insignificant.
//
// Philosophy
// ------------
// On decode, this codec will update containers appropriately:
// - If struct, update fields from stream into fields of struct.
// If field in stream not found in struct, handle appropriately (based on option).
// If a struct field has no corresponding value in the stream, leave it AS IS.
// If nil in stream, set value to nil/zero value.
// - If map, update map from stream.
// If the stream value is NIL, set the map to nil.
// - if slice, try to update up to length of array in stream.
// if container len is less than stream array length,
// and container cannot be expanded, handle appropriately (based on option).
// This means you can decode a 4-element stream array into a 1-element array.
//
// ------------------------------------
// On encode, user can specify omitEmpty. This means that the value will be omitted
// if it is the zero value. A problem may occur during decode, where omitted values do not affect
// the value being decoded into. This means that if decoding into a struct with an
// int field with current value=5, and the field is omitted in the stream, then after
// decoding, the value will still be 5 (not 0).
// omitEmpty only works if you guarantee that you always decode into zero-values.
//
// ------------------------------------
// We could have truncated a map to remove keys not available in the stream,
// or set values in the struct which are not in the stream to their zero values.
// We decided against it because there is no efficient way to do it.
// We may introduce it as an option later.
// However, that will require enabling it for both runtime and code generation modes.
//
// To support truncate, we need to do 2 passes over the container:
// map
// - first collect all keys (e.g. in k1)
// - for each key in stream, mark k1 that the key should not be removed
// - after updating map, do second pass and call delete for all keys in k1 which are not marked
// struct:
// - for each field, track the *typeInfo s1
// - iterate through all s1, and for each one not marked, set value to zero
// - this involves checking the possible anonymous fields which are nil ptrs.
// too much work.
//
// ------------------------------------------
// Error Handling is done within the library using panic.
//
// This way, the code doesn't have to keep checking if an error has happened,
// and we don't have to keep sending the error value along with each call
// or storing it in the En|Decoder and checking it constantly along the way.
//
// The disadvantage is that small functions which use panics cannot be inlined.
// The code accounts for that by only using panics behind an interface;
// since interface calls cannot be inlined, this is irrelevant.
//
// We considered storing the error in the En|Decoder.
// - once it has its err field set, it cannot be used again.
// - panicking will be optional, controlled by const flag.
// - code should always check error first and return early.
// We eventually decided against it as it makes the code clumsier to always
// check for these error conditions.
import (
"encoding"
"encoding/binary"
"errors"
"fmt"
"math"
"reflect"
"sort"
"strings"
"sync"
"time"
"unicode"
"unicode/utf8"
)
const (
scratchByteArrayLen = 32
// Support encoding.(Binary|Text)(Unm|M)arshaler.
// This constant flag will enable or disable it.
supportMarshalInterfaces = true
// Each Encoder or Decoder uses a cache of functions based on conditionals,
// so that the conditionals are not run every time.
//
// Either a map or a slice is used to keep track of the functions.
// The map is more natural, but has a higher cost than a slice/array.
// This flag (useMapForCodecCache) controls which is used.
//
// From benchmarks, slices with linear search perform better with < 32 entries.
// We have typically seen a high threshold of about 24 entries.
useMapForCodecCache = false
// for debugging, set this to false, to catch panic traces.
// Note that this will always cause rpc tests to fail, since they need io.EOF sent via panic.
recoverPanicToErr = true
// Fast path functions try to create a fast path encode or decode implementation
// for common maps and slices, by by-passing reflection altogether.
fastpathEnabled = true
// if checkStructForEmptyValue, check structs fields to see if an empty value.
// This could be an expensive call, so possibly disable it.
checkStructForEmptyValue = false
// if derefForIsEmptyValue, deref pointers and interfaces when checking isEmptyValue
derefForIsEmptyValue = false
)
var oneByteArr = [1]byte{0}
var zeroByteSlice = oneByteArr[:0:0]
type charEncoding uint8
const (
c_RAW charEncoding = iota
c_UTF8
c_UTF16LE
c_UTF16BE
c_UTF32LE
c_UTF32BE
)
// valueType is the stream type
type valueType uint8
const (
valueTypeUnset valueType = iota
valueTypeNil
valueTypeInt
valueTypeUint
valueTypeFloat
valueTypeBool
valueTypeString
valueTypeSymbol
valueTypeBytes
valueTypeMap
valueTypeArray
valueTypeTimestamp
valueTypeExt
// valueTypeInvalid = 0xff
)
type seqType uint8
const (
_ seqType = iota
seqTypeArray
seqTypeSlice
seqTypeChan
)
var (
bigen = binary.BigEndian
structInfoFieldName = "_struct"
cachedTypeInfo = make(map[uintptr]*typeInfo, 64)
cachedTypeInfoMutex sync.RWMutex
// mapStrIntfTyp = reflect.TypeOf(map[string]interface{}(nil))
intfSliceTyp = reflect.TypeOf([]interface{}(nil))
intfTyp = intfSliceTyp.Elem()
stringTyp = reflect.TypeOf("")
timeTyp = reflect.TypeOf(time.Time{})
rawExtTyp = reflect.TypeOf(RawExt{})
uint8SliceTyp = reflect.TypeOf([]uint8(nil))
mapBySliceTyp = reflect.TypeOf((*MapBySlice)(nil)).Elem()
binaryMarshalerTyp = reflect.TypeOf((*encoding.BinaryMarshaler)(nil)).Elem()
binaryUnmarshalerTyp = reflect.TypeOf((*encoding.BinaryUnmarshaler)(nil)).Elem()
textMarshalerTyp = reflect.TypeOf((*encoding.TextMarshaler)(nil)).Elem()
textUnmarshalerTyp = reflect.TypeOf((*encoding.TextUnmarshaler)(nil)).Elem()
selferTyp = reflect.TypeOf((*Selfer)(nil)).Elem()
uint8SliceTypId = reflect.ValueOf(uint8SliceTyp).Pointer()
rawExtTypId = reflect.ValueOf(rawExtTyp).Pointer()
intfTypId = reflect.ValueOf(intfTyp).Pointer()
timeTypId = reflect.ValueOf(timeTyp).Pointer()
stringTypId = reflect.ValueOf(stringTyp).Pointer()
// mapBySliceTypId = reflect.ValueOf(mapBySliceTyp).Pointer()
intBitsize uint8 = uint8(reflect.TypeOf(int(0)).Bits())
uintBitsize uint8 = uint8(reflect.TypeOf(uint(0)).Bits())
bsAll0x00 = []byte{0, 0, 0, 0, 0, 0, 0, 0}
bsAll0xff = []byte{0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff}
chkOvf checkOverflow
noFieldNameToStructFieldInfoErr = errors.New("no field name passed to parseStructFieldInfo")
)
// Selfer defines methods by which a value can encode or decode itself.
//
// Any type which implements Selfer will be able to encode or decode itself.
// Consequently, during (en|de)code, this takes precedence over
// (text|binary)(M|Unm)arshal or extension support.
type Selfer interface {
CodecEncodeSelf(*Encoder)
CodecDecodeSelf(*Decoder)
}
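// Illustrative sketch, not part of the upstream file: a user-defined type implementing
// Selfer so it controls its own wire format (here, a fixed 2-element array). It relies
// only on the exported Encoder.Encode and Decoder.Decode entry points; the type name is
// hypothetical.
type selfCodedPoint struct{ X, Y int }

func (p *selfCodedPoint) CodecEncodeSelf(e *Encoder) {
	// encode as [X, Y] rather than the default struct-as-map form
	if err := e.Encode([2]int{p.X, p.Y}); err != nil {
		panic(err)
	}
}

func (p *selfCodedPoint) CodecDecodeSelf(d *Decoder) {
	var a [2]int
	if err := d.Decode(&a); err != nil {
		panic(err)
	}
	p.X, p.Y = a[0], a[1]
}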
// MapBySlice represents a slice which should be encoded as a map in the stream.
// The slice contains a sequence of key-value pairs.
// This affords storing a map in a specific sequence in the stream.
//
// The support of MapBySlice affords the following:
// - A slice type which implements MapBySlice will be encoded as a map
// - A slice can be decoded from a map in the stream
type MapBySlice interface {
MapBySlice()
}
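// Illustrative sketch, not part of the upstream file: a slice type implementing MapBySlice.
// A value like orderedPairs{"name", "john", "age", 30} is written to the stream as a
// 2-entry map while preserving the key order given in the slice. The type name is hypothetical.
type orderedPairs []interface{}

func (orderedPairs) MapBySlice() {}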
// WARNING: DO NOT USE DIRECTLY. EXPORTED FOR GODOC BENEFIT. WILL BE REMOVED.
//
// BasicHandle encapsulates the common options and extension functions.
type BasicHandle struct {
extHandle
EncodeOptions
DecodeOptions
}
func (x *BasicHandle) getBasicHandle() *BasicHandle {
return x
}
// Handle is the interface for a specific encoding format.
//
// Typically, a Handle is pre-configured before first time use,
// and not modified while in use. Such a pre-configured Handle
// is safe for concurrent access.
type Handle interface {
getBasicHandle() *BasicHandle
newEncDriver(w *Encoder) encDriver
newDecDriver(r *Decoder) decDriver
isBinary() bool
}
// RawExt represents raw unprocessed extension data.
// Some codecs will decode extension data as a *RawExt if there is no registered extension for the tag.
//
// Only one of Data or Value is nil. If Data is nil, then the content of the RawExt is in the Value.
type RawExt struct {
Tag uint64
// Data is the []byte which represents the raw ext. If Data is nil, ext is exposed in Value.
// Data is used by codecs (e.g. binc, msgpack, simple) which do custom serialization of the types
Data []byte
// Value represents the extension, if Data is nil.
// Value is used by codecs (e.g. cbor) which use the format to do custom serialization of the types.
Value interface{}
}
// Ext handles custom (de)serialization of custom types / extensions.
type Ext interface {
// WriteExt converts a value to a []byte.
// It is used by codecs (e.g. binc, msgpack, simple) which do custom serialization of the types.
WriteExt(v interface{}) []byte
// ReadExt updates a value from a []byte.
// It is used by codecs (e.g. binc, msgpack, simple) which do custom serialization of the types.
ReadExt(dst interface{}, src []byte)
// ConvertExt converts a value into a simpler interface for easy encoding e.g. convert time.Time to int64.
// It is used by codecs (e.g. cbor) which use the format to do custom serialization of the types.
ConvertExt(v interface{}) interface{}
// UpdateExt updates a value from a simpler interface for easy decoding e.g. convert int64 to time.Time.
// It is used by codecs (e.g. cbor) which use the format to do custom serialization of the types.
UpdateExt(dst interface{}, src interface{})
}
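// Illustrative sketch, not part of the upstream file: a minimal Ext for a hypothetical
// userID type. ConvertExt/UpdateExt serve format-driven codecs (e.g. cbor), while
// WriteExt/ReadExt serve byte-driven codecs (e.g. binc, msgpack, simple). bigen is this
// package's big-endian helper; the wire value is assumed to round-trip as a uint64.
type userID uint64

type userIDExt struct{}

func (userIDExt) ConvertExt(v interface{}) interface{} {
	if p, ok := v.(*userID); ok {
		return uint64(*p)
	}
	return uint64(v.(userID))
}

func (userIDExt) UpdateExt(dst interface{}, src interface{}) {
	// assumes the decoded wire value arrives as a uint64
	*(dst.(*userID)) = userID(src.(uint64))
}

func (x userIDExt) WriteExt(v interface{}) []byte {
	b := make([]byte, 8)
	bigen.PutUint64(b, x.ConvertExt(v).(uint64))
	return b
}

func (x userIDExt) ReadExt(dst interface{}, src []byte) {
	x.UpdateExt(dst, bigen.Uint64(src))
}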
// bytesExt is a wrapper implementation to support former AddExt exported method.
type bytesExt struct {
encFn func(reflect.Value) ([]byte, error)
decFn func(reflect.Value, []byte) error
}
func (x bytesExt) WriteExt(v interface{}) []byte {
// fmt.Printf(">>>>>>>>>> WriteExt: %T, %v\n", v, v)
bs, err := x.encFn(reflect.ValueOf(v))
if err != nil {
panic(err)
}
return bs
}
func (x bytesExt) ReadExt(v interface{}, bs []byte) {
// fmt.Printf(">>>>>>>>>> ReadExt: %T, %v\n", v, v)
if err := x.decFn(reflect.ValueOf(v), bs); err != nil {
panic(err)
}
}
func (x bytesExt) ConvertExt(v interface{}) interface{} {
return x.WriteExt(v)
}
func (x bytesExt) UpdateExt(dest interface{}, v interface{}) {
x.ReadExt(dest, v.([]byte))
}
// type errorString string
// func (x errorString) Error() string { return string(x) }
type binaryEncodingType struct{}
func (_ binaryEncodingType) isBinary() bool { return true }
type textEncodingType struct{}
func (_ textEncodingType) isBinary() bool { return false }
// noBuiltInTypes is embedded into many types which do not support builtins
// e.g. msgpack, simple, cbor.
type noBuiltInTypes struct{}
func (_ noBuiltInTypes) IsBuiltinType(rt uintptr) bool { return false }
func (_ noBuiltInTypes) EncodeBuiltin(rt uintptr, v interface{}) {}
func (_ noBuiltInTypes) DecodeBuiltin(rt uintptr, v interface{}) {}
type noStreamingCodec struct{}
func (_ noStreamingCodec) CheckBreak() bool { return false }
// bigenHelper.
// Users must already slice the x completely, because we will not reslice.
type bigenHelper struct {
x []byte // must be correctly sliced to appropriate len. slicing is a cost.
w encWriter
}
func (z bigenHelper) writeUint16(v uint16) {
bigen.PutUint16(z.x, v)
z.w.writeb(z.x)
}
func (z bigenHelper) writeUint32(v uint32) {
bigen.PutUint32(z.x, v)
z.w.writeb(z.x)
}
func (z bigenHelper) writeUint64(v uint64) {
bigen.PutUint64(z.x, v)
z.w.writeb(z.x)
}
type extTypeTagFn struct {
rtid uintptr
rt reflect.Type
tag uint64
ext Ext
}
type extHandle []*extTypeTagFn
// DEPRECATED: AddExt is deprecated in favor of SetExt. It exists for compatibility only.
//
// AddExt registers an encode and decode function for a reflect.Type.
// AddExt internally calls SetExt.
// To deregister an Ext, call AddExt with nil encfn and/or nil decfn.
func (o *extHandle) AddExt(
rt reflect.Type, tag byte,
encfn func(reflect.Value) ([]byte, error), decfn func(reflect.Value, []byte) error,
) (err error) {
if encfn == nil || decfn == nil {
return o.SetExt(rt, uint64(tag), nil)
}
return o.SetExt(rt, uint64(tag), bytesExt{encfn, decfn})
}
// SetExt registers a tag and Ext for a reflect.Type.
//
// Note that the type must be a named type, and specifically not
// a pointer or Interface. An error is returned if that is not honored.
//
// To Deregister an ext, call SetExt with nil Ext
func (o *extHandle) SetExt(rt reflect.Type, tag uint64, ext Ext) (err error) {
// o is a pointer, because we may need to initialize it
if rt.PkgPath() == "" || rt.Kind() == reflect.Interface {
err = fmt.Errorf("codec.Handle.AddExt: Takes named type, especially not a pointer or interface: %T",
reflect.Zero(rt).Interface())
return
}
rtid := reflect.ValueOf(rt).Pointer()
for _, v := range *o {
if v.rtid == rtid {
v.tag, v.ext = tag, ext
return
}
}
*o = append(*o, &extTypeTagFn{rtid, rt, tag, ext})
return
}
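// Illustrative usage sketch, not part of the upstream file: registering the userIDExt
// sketch above on a handle, under a hypothetical extension tag 1.
func registerUserIDExtExample(h *BasicHandle) error {
	return h.SetExt(reflect.TypeOf(userID(0)), 1, userIDExt{})
}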
func (o extHandle) getExt(rtid uintptr) *extTypeTagFn {
for _, v := range o {
if v.rtid == rtid {
return v
}
}
return nil
}
func (o extHandle) getExtForTag(tag uint64) *extTypeTagFn {
for _, v := range o {
if v.tag == tag {
return v
}
}
return nil
}
type structFieldInfo struct {
encName string // encode name
// only one of 'i' or 'is' can be set. If 'i' is -1, then 'is' has been set.
is []int // (recursive/embedded) field index in struct
i int16 // field index in struct
omitEmpty bool
toArray bool // if field is _struct, is the toArray set?
}
// rv returns the field of the struct.
// If the field is behind a nil anonymous (embedded) pointer and update is false, it returns an invalid reflect.Value.
func (si *structFieldInfo) field(v reflect.Value, update bool) (rv2 reflect.Value) {
if si.i != -1 {
v = v.Field(int(si.i))
return v
}
// replicate FieldByIndex
for _, x := range si.is {
for v.Kind() == reflect.Ptr {
if v.IsNil() {
if !update {
return
}
v.Set(reflect.New(v.Type().Elem()))
}
v = v.Elem()
}
v = v.Field(x)
}
return v
}
func (si *structFieldInfo) setToZeroValue(v reflect.Value) {
if si.i != -1 {
v = v.Field(int(si.i))
v.Set(reflect.Zero(v.Type()))
// v.Set(reflect.New(v.Type()).Elem())
// v.Set(reflect.New(v.Type()))
} else {
// replicate FieldByIndex
for _, x := range si.is {
for v.Kind() == reflect.Ptr {
if v.IsNil() {
return
}
v = v.Elem()
}
v = v.Field(x)
}
v.Set(reflect.Zero(v.Type()))
}
}
func parseStructFieldInfo(fname string, stag string) *structFieldInfo {
if fname == "" {
panic(noFieldNameToStructFieldInfoErr)
}
si := structFieldInfo{
encName: fname,
}
if stag != "" {
for i, s := range strings.Split(stag, ",") {
if i == 0 {
if s != "" {
si.encName = s
}
} else {
if s == "omitempty" {
si.omitEmpty = true
} else if s == "toarray" {
si.toArray = true
}
}
}
}
// si.encNameBs = []byte(si.encName)
return &si
}
type sfiSortedByEncName []*structFieldInfo
func (p sfiSortedByEncName) Len() int {
return len(p)
}
func (p sfiSortedByEncName) Less(i, j int) bool {
return p[i].encName < p[j].encName
}
func (p sfiSortedByEncName) Swap(i, j int) {
p[i], p[j] = p[j], p[i]
}
// typeInfo keeps information about each type referenced in the encode/decode sequence.
//
// During an encode/decode sequence, we work as below:
// - If base is a built in type, en/decode base value
// - If base is registered as an extension, en/decode base value
// - If type is binary(M/Unm)arshaler, call Binary(M/Unm)arshal method
// - If type is text(M/Unm)arshaler, call Text(M/Unm)arshal method
// - Else decode appropriately based on the reflect.Kind
type typeInfo struct {
sfi []*structFieldInfo // sorted. Used when enc/dec struct to map.
sfip []*structFieldInfo // unsorted. Used when enc/dec struct to array.
rt reflect.Type
rtid uintptr
// baseId gives pointer to the base reflect.Type, after dereferencing
// the pointers. E.g. base type of ***time.Time is time.Time.
base reflect.Type
baseId uintptr
baseIndir int8 // number of indirections to get to base
mbs bool // base type (T or *T) is a MapBySlice
bm bool // base type (T or *T) is a binaryMarshaler
bunm bool // base type (T or *T) is a binaryUnmarshaler
bmIndir int8 // number of indirections to get to binaryMarshaler type
bunmIndir int8 // number of indirections to get to binaryUnmarshaler type
tm bool // base type (T or *T) is a textMarshaler
tunm bool // base type (T or *T) is a textUnmarshaler
tmIndir int8 // number of indirections to get to textMarshaler type
tunmIndir int8 // number of indirections to get to textUnmarshaler type
cs bool // base type (T or *T) is a Selfer
csIndir int8 // number of indirections to get to Selfer type
toArray bool // whether this (struct) type should be encoded as an array
}
func (ti *typeInfo) indexForEncName(name string) int {
//tisfi := ti.sfi
const binarySearchThreshold = 16
if sfilen := len(ti.sfi); sfilen < binarySearchThreshold {
// linear search. faster than binary search in my testing up to 16-field structs.
for i, si := range ti.sfi {
if si.encName == name {
return i
}
}
} else {
// binary search. adapted from sort/search.go.
h, i, j := 0, 0, sfilen
for i < j {
h = i + (j-i)/2
if ti.sfi[h].encName < name {
i = h + 1
} else {
j = h
}
}
if i < sfilen && ti.sfi[i].encName == name {
return i
}
}
return -1
}
func getStructTag(t reflect.StructTag) (s string) {
// check for tags: codec, json, in that order.
// this allows seamless support for many configured structs.
s = t.Get("codec")
if s == "" {
s = t.Get("json")
}
return
}
func getTypeInfo(rtid uintptr, rt reflect.Type) (pti *typeInfo) {
var ok bool
cachedTypeInfoMutex.RLock()
pti, ok = cachedTypeInfo[rtid]
cachedTypeInfoMutex.RUnlock()
if ok {
return
}
cachedTypeInfoMutex.Lock()
defer cachedTypeInfoMutex.Unlock()
if pti, ok = cachedTypeInfo[rtid]; ok {
return
}
ti := typeInfo{rt: rt, rtid: rtid}
pti = &ti
var indir int8
if ok, indir = implementsIntf(rt, binaryMarshalerTyp); ok {
ti.bm, ti.bmIndir = true, indir
}
if ok, indir = implementsIntf(rt, binaryUnmarshalerTyp); ok {
ti.bunm, ti.bunmIndir = true, indir
}
if ok, indir = implementsIntf(rt, textMarshalerTyp); ok {
ti.tm, ti.tmIndir = true, indir
}
if ok, indir = implementsIntf(rt, textUnmarshalerTyp); ok {
ti.tunm, ti.tunmIndir = true, indir
}
if ok, indir = implementsIntf(rt, selferTyp); ok {
ti.cs, ti.csIndir = true, indir
}
if ok, _ = implementsIntf(rt, mapBySliceTyp); ok {
ti.mbs = true
}
pt := rt
var ptIndir int8
// for ; pt.Kind() == reflect.Ptr; pt, ptIndir = pt.Elem(), ptIndir+1 { }
for pt.Kind() == reflect.Ptr {
pt = pt.Elem()
ptIndir++
}
if ptIndir == 0 {
ti.base = rt
ti.baseId = rtid
} else {
ti.base = pt
ti.baseId = reflect.ValueOf(pt).Pointer()
ti.baseIndir = ptIndir
}
if rt.Kind() == reflect.Struct {
var siInfo *structFieldInfo
if f, ok := rt.FieldByName(structInfoFieldName); ok {
siInfo = parseStructFieldInfo(structInfoFieldName, getStructTag(f.Tag))
ti.toArray = siInfo.toArray
}
sfip := make([]*structFieldInfo, 0, rt.NumField())
rgetTypeInfo(rt, nil, make(map[string]bool, 16), &sfip, siInfo)
ti.sfip = make([]*structFieldInfo, len(sfip))
ti.sfi = make([]*structFieldInfo, len(sfip))
copy(ti.sfip, sfip)
sort.Sort(sfiSortedByEncName(sfip))
copy(ti.sfi, sfip)
}
// sfi = sfip
cachedTypeInfo[rtid] = pti
return
}
func rgetTypeInfo(rt reflect.Type, indexstack []int, fnameToHastag map[string]bool,
sfi *[]*structFieldInfo, siInfo *structFieldInfo,
) {
for j := 0; j < rt.NumField(); j++ {
f := rt.Field(j)
// func types are skipped.
if tk := f.Type.Kind(); tk == reflect.Func {
continue
}
stag := getStructTag(f.Tag)
if stag == "-" {
continue
}
if r1, _ := utf8.DecodeRuneInString(f.Name); r1 == utf8.RuneError || !unicode.IsUpper(r1) {
continue
}
// if anonymous and there is no struct tag and it's a struct (or pointer to struct), inline it.
if f.Anonymous && stag == "" {
ft := f.Type
for ft.Kind() == reflect.Ptr {
ft = ft.Elem()
}
if ft.Kind() == reflect.Struct {
indexstack2 := make([]int, len(indexstack)+1, len(indexstack)+4)
copy(indexstack2, indexstack)
indexstack2[len(indexstack)] = j
// indexstack2 := append(append(make([]int, 0, len(indexstack)+4), indexstack...), j)
rgetTypeInfo(ft, indexstack2, fnameToHastag, sfi, siInfo)
continue
}
}
// do not let fields with the same name in embedded structs override a field at a higher level.
// this must be done after the anonymous check, so that anonymous fields
// can still include their child fields
if _, ok := fnameToHastag[f.Name]; ok {
continue
}
si := parseStructFieldInfo(f.Name, stag)
// si.ikind = int(f.Type.Kind())
if len(indexstack) == 0 {
si.i = int16(j)
} else {
si.i = -1
si.is = append(append(make([]int, 0, len(indexstack)+4), indexstack...), j)
}
if siInfo != nil {
if siInfo.omitEmpty {
si.omitEmpty = true
}
}
*sfi = append(*sfi, si)
fnameToHastag[f.Name] = stag != ""
}
}
func panicToErr(err *error) {
if recoverPanicToErr {
if x := recover(); x != nil {
//debug.PrintStack()
panicValToErr(x, err)
}
}
}
// func doPanic(tag string, format string, params ...interface{}) {
// params2 := make([]interface{}, len(params)+1)
// params2[0] = tag
// copy(params2[1:], params)
// panic(fmt.Errorf("%s: "+format, params2...))
// }
func isMutableKind(k reflect.Kind) (v bool) {
return k == reflect.Int ||
k == reflect.Int8 ||
k == reflect.Int16 ||
k == reflect.Int32 ||
k == reflect.Int64 ||
k == reflect.Uint ||
k == reflect.Uint8 ||
k == reflect.Uint16 ||
k == reflect.Uint32 ||
k == reflect.Uint64 ||
k == reflect.Float32 ||
k == reflect.Float64 ||
k == reflect.Bool ||
k == reflect.String
}
// these functions must be inlinable, and must not call any other functions
type checkOverflow struct{}
func (_ checkOverflow) Float32(f float64) (overflow bool) {
if f < 0 {
f = -f
}
return math.MaxFloat32 < f && f <= math.MaxFloat64
}
func (_ checkOverflow) Uint(v uint64, bitsize uint8) (overflow bool) {
if bitsize == 0 || bitsize >= 64 || v == 0 {
return
}
if trunc := (v << (64 - bitsize)) >> (64 - bitsize); v != trunc {
overflow = true
}
return
}
func (_ checkOverflow) Int(v int64, bitsize uint8) (overflow bool) {
if bitsize == 0 || bitsize >= 64 || v == 0 {
return
}
if trunc := (v << (64 - bitsize)) >> (64 - bitsize); v != trunc {
overflow = true
}
return
}
func (_ checkOverflow) SignedInt(v uint64) (i int64, overflow bool) {
// e.g. -128 to 127 for int8
pos := (v >> 63) == 0
ui2 := v & 0x7fffffffffffffff
if pos {
if ui2 > math.MaxInt64 {
overflow = true
return
}
} else {
if ui2 > math.MaxInt64-1 {
overflow = true
return
}
}
i = int64(v)
return
}
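// Illustrative check, not part of the upstream file: 200 does not fit in a signed
// 8-bit integer (-128..127), so overflow is reported; 100 fits, so it is not.
func checkOverflowInt8Example() (overflows200, overflows100 bool) {
	return chkOvf.Int(200, 8), chkOvf.Int(100, 8)
}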

View file

@ -0,0 +1,151 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
// All non-std package dependencies live in this file,
// so porting to different environment is easy (just update functions).
import (
"errors"
"fmt"
"math"
"reflect"
)
func panicValToErr(panicVal interface{}, err *error) {
if panicVal == nil {
return
}
// case nil
switch xerr := panicVal.(type) {
case error:
*err = xerr
case string:
*err = errors.New(xerr)
default:
*err = fmt.Errorf("%v", panicVal)
}
return
}
func hIsEmptyValue(v reflect.Value, deref, checkStruct bool) bool {
switch v.Kind() {
case reflect.Invalid:
return true
case reflect.Array, reflect.Map, reflect.Slice, reflect.String:
return v.Len() == 0
case reflect.Bool:
return !v.Bool()
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return v.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64, reflect.Uintptr:
return v.Uint() == 0
case reflect.Float32, reflect.Float64:
return v.Float() == 0
case reflect.Interface, reflect.Ptr:
if deref {
if v.IsNil() {
return true
}
return hIsEmptyValue(v.Elem(), deref, checkStruct)
} else {
return v.IsNil()
}
case reflect.Struct:
if !checkStruct {
return false
}
// return true if all fields are empty. else return false.
// we cannot use equality check, because some fields may be maps/slices/etc
// and consequently the structs are not comparable.
// return v.Interface() == reflect.Zero(v.Type()).Interface()
for i, n := 0, v.NumField(); i < n; i++ {
if !hIsEmptyValue(v.Field(i), deref, checkStruct) {
return false
}
}
return true
}
return false
}
func isEmptyValue(v reflect.Value) bool {
return hIsEmptyValue(v, derefForIsEmptyValue, checkStructForEmptyValue)
}
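// Illustrative check, not part of the upstream file: with the default flags
// (no dereferencing, no struct walking), a nil map and a nil pointer are empty,
// but a pointer to a zero struct is not.
func isEmptyValueExample() (nilMap, nilPtr, ptrToZeroStruct bool) {
	type t struct{ A int }
	return isEmptyValue(reflect.ValueOf(map[string]int(nil))),
		isEmptyValue(reflect.ValueOf((*t)(nil))),
		isEmptyValue(reflect.ValueOf(&t{}))
}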
func pruneSignExt(v []byte, pos bool) (n int) {
if len(v) < 2 {
} else if pos && v[0] == 0 {
for ; v[n] == 0 && n+1 < len(v) && (v[n+1]&(1<<7) == 0); n++ {
}
} else if !pos && v[0] == 0xff {
for ; v[n] == 0xff && n+1 < len(v) && (v[n+1]&(1<<7) != 0); n++ {
}
}
return
}
func implementsIntf(typ, iTyp reflect.Type) (success bool, indir int8) {
if typ == nil {
return
}
rt := typ
// The type might be a pointer and we need to keep
// dereferencing to the base type until we find an implementation.
for {
if rt.Implements(iTyp) {
return true, indir
}
if p := rt; p.Kind() == reflect.Ptr {
indir++
if indir >= math.MaxInt8 { // insane number of indirections
return false, 0
}
rt = p.Elem()
continue
}
break
}
// No luck yet, but if this is a base type (non-pointer), the pointer might satisfy.
if typ.Kind() != reflect.Ptr {
// Not a pointer, but does the pointer work?
if reflect.PtrTo(typ).Implements(iTyp) {
return true, -1
}
}
return false, 0
}
// validate that this function is correct ...
// culled from OGRE (Object-Oriented Graphics Rendering Engine)
// function: halfToFloatI (http://stderr.org/doc/ogre-doc/api/OgreBitwise_8h-source.html)
func halfFloatToFloatBits(yy uint16) (d uint32) {
y := uint32(yy)
s := (y >> 15) & 0x01
e := (y >> 10) & 0x1f
m := y & 0x03ff
if e == 0 {
if m == 0 { // plus or minus 0
return s << 31
} else { // Denormalized number -- renormalize it
for (m & 0x00000400) == 0 {
m <<= 1
e -= 1
}
e += 1
const zz uint32 = 0x0400
m &= ^zz
}
} else if e == 31 {
if m == 0 { // Inf
return (s << 31) | 0x7f800000
} else { // NaN
return (s << 31) | 0x7f800000 | (m << 13)
}
}
e = e + (127 - 15)
m = m << 13
return (s << 31) | (e << 23) | m
}
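// Illustrative check, not part of the upstream file: the half-precision bit pattern 0x3c00
// (sign 0, exponent 15, mantissa 0) widens to the float32 bit pattern 0x3f800000, i.e. 1.0.
func halfFloatToFloatBitsExample() bool {
	return math.Float32frombits(halfFloatToFloatBits(0x3c00)) == 1.0
}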

View file

@ -0,0 +1,20 @@
//+build !unsafe
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
// stringView returns a view of the []byte as a string.
// In unsafe mode, it doesn't incur allocation and copying caused by conversion.
// In regular safe mode, it is an allocation and copy.
func stringView(v []byte) string {
return string(v)
}
// bytesView returns a view of the string as a []byte.
// In unsafe mode, it doesn't incur allocation and copying caused by conversion.
// In regular safe mode, it is an allocation and copy.
func bytesView(v string) []byte {
return []byte(v)
}

View file

@ -0,0 +1,39 @@
//+build unsafe
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
import (
"unsafe"
)
// This file has unsafe variants of some helper methods.
type unsafeString struct {
Data uintptr
Len int
}
type unsafeBytes struct {
Data uintptr
Len int
Cap int
}
// stringView returns a view of the []byte as a string.
// In unsafe mode, it doesn't incur allocation and copying caused by conversion.
// In regular safe mode, it is an allocation and copy.
func stringView(v []byte) string {
x := unsafeString{uintptr(unsafe.Pointer(&v[0])), len(v)}
return *(*string)(unsafe.Pointer(&x))
}
// bytesView returns a view of the string as a []byte.
// In unsafe mode, it doesn't incur allocation and copying caused by conversion.
// In regular safe mode, it is an allocation and copy.
func bytesView(v string) []byte {
x := unsafeBytes{uintptr(unsafe.Pointer(&v)), len(v), len(v)}
return *(*[]byte)(unsafe.Pointer(&x))
}

View file

@ -0,0 +1,924 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
// This json support uses base64 encoding for bytes, because you cannot
// store and read arbitrary bytes in a json string (only unicode text).
//
// This library specifically supports UTF-8 for encoding and decoding only.
//
// Note that the library will happily encode/decode things which are not valid
// json e.g. a map[int64]string. We do it for consistency. With valid json,
// we will encode and decode appropriately.
// Users can specify their map type if necessary to force it.
//
// Note:
// - we cannot use strconv.Quote and strconv.Unquote because json quotes/unquotes differently.
// We implement it here.
// - Also, strconv.ParseXXX for floats and integers
// - only works on strings resulting in unnecessary allocation and []byte-string conversion.
// - it does a lot of redundant checks, because json numbers are simpler than what it supports.
// - We parse numbers (floats and integers) directly here.
// We only delegate parsing floats if it is a hairy float which could cause a loss of precision.
// In that case, we delegate to strconv.ParseFloat.
//
// Note:
// - encode does not beautify. There is no whitespace when encoding.
// - rpc calls which take single integer arguments or write single numeric arguments will need care.
import (
"bytes"
"encoding/base64"
"fmt"
"strconv"
"unicode/utf16"
"unicode/utf8"
)
//--------------------------------
var jsonLiterals = [...]byte{'t', 'r', 'u', 'e', 'f', 'a', 'l', 's', 'e', 'n', 'u', 'l', 'l'}
var jsonFloat64Pow10 = [...]float64{
1e0, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9,
1e10, 1e11, 1e12, 1e13, 1e14, 1e15, 1e16, 1e17, 1e18, 1e19,
1e20, 1e21, 1e22,
}
var jsonUint64Pow10 = [...]uint64{
1e0, 1e1, 1e2, 1e3, 1e4, 1e5, 1e6, 1e7, 1e8, 1e9,
1e10, 1e11, 1e12, 1e13, 1e14, 1e15, 1e16, 1e17, 1e18, 1e19,
}
const (
// if jsonTrackSkipWhitespace, we track Whitespace and reduce the number of redundant checks.
// Make it a const flag, so that it can be elided during linking if false.
//
// It is not a clear win, because we continually set a flag behind a pointer
// and then check it each time, as opposed to just 4 conditionals on a stack variable.
jsonTrackSkipWhitespace = true
// If !jsonValidateSymbols, decoding will be faster, by skipping some checks:
// - If we see first character of null, false or true,
// do not validate subsequent characters.
// - e.g. if we see an 'n', assume null and skip the next 3 characters,
// and do not validate they are ull.
// P.S. Do not expect a significant decoding boost from this.
jsonValidateSymbols = true
// if jsonTruncateMantissa, truncate mantissa if trailing 0's.
// This is important because it could allow some floats to be decoded without
// deferring to strconv.ParseFloat.
jsonTruncateMantissa = true
// if mantissa >= jsonNumUintCutoff before multiplying by 10, this is an overflow
jsonNumUintCutoff = (1<<64-1)/uint64(10) + 1 // cutoff64(base)
// if mantissa >= jsonNumUintMaxVal, this is an overflow
jsonNumUintMaxVal = 1<<uint64(64) - 1
// jsonNumDigitsUint64Largest = 19
)
type jsonEncDriver struct {
e *Encoder
w encWriter
h *JsonHandle
b [64]byte // scratch
bs []byte // scratch
noBuiltInTypes
}
func (e *jsonEncDriver) EncodeNil() {
e.w.writeb(jsonLiterals[9:13]) // null
}
func (e *jsonEncDriver) EncodeBool(b bool) {
if b {
e.w.writeb(jsonLiterals[0:4]) // true
} else {
e.w.writeb(jsonLiterals[4:9]) // false
}
}
func (e *jsonEncDriver) EncodeFloat32(f float32) {
e.w.writeb(strconv.AppendFloat(e.b[:0], float64(f), 'E', -1, 32))
}
func (e *jsonEncDriver) EncodeFloat64(f float64) {
// e.w.writestr(strconv.FormatFloat(f, 'E', -1, 64))
e.w.writeb(strconv.AppendFloat(e.b[:0], f, 'E', -1, 64))
}
func (e *jsonEncDriver) EncodeInt(v int64) {
e.w.writeb(strconv.AppendInt(e.b[:0], v, 10))
}
func (e *jsonEncDriver) EncodeUint(v uint64) {
e.w.writeb(strconv.AppendUint(e.b[:0], v, 10))
}
func (e *jsonEncDriver) EncodeExt(rv interface{}, xtag uint64, ext Ext, en *Encoder) {
if v := ext.ConvertExt(rv); v == nil {
e.EncodeNil()
} else {
en.encode(v)
}
}
func (e *jsonEncDriver) EncodeRawExt(re *RawExt, en *Encoder) {
// only encodes re.Value (never re.Data)
if re.Value == nil {
e.EncodeNil()
} else {
en.encode(re.Value)
}
}
func (e *jsonEncDriver) EncodeArrayStart(length int) {
e.w.writen1('[')
}
func (e *jsonEncDriver) EncodeArrayEntrySeparator() {
e.w.writen1(',')
}
func (e *jsonEncDriver) EncodeArrayEnd() {
e.w.writen1(']')
}
func (e *jsonEncDriver) EncodeMapStart(length int) {
e.w.writen1('{')
}
func (e *jsonEncDriver) EncodeMapEntrySeparator() {
e.w.writen1(',')
}
func (e *jsonEncDriver) EncodeMapKVSeparator() {
e.w.writen1(':')
}
func (e *jsonEncDriver) EncodeMapEnd() {
e.w.writen1('}')
}
func (e *jsonEncDriver) EncodeString(c charEncoding, v string) {
// e.w.writestr(strconv.Quote(v))
e.quoteStr(v)
}
func (e *jsonEncDriver) EncodeSymbol(v string) {
// e.EncodeString(c_UTF8, v)
e.quoteStr(v)
}
func (e *jsonEncDriver) EncodeStringBytes(c charEncoding, v []byte) {
if c == c_RAW {
slen := base64.StdEncoding.EncodedLen(len(v))
if e.bs == nil {
e.bs = e.b[:]
}
if cap(e.bs) >= slen {
e.bs = e.bs[:slen]
} else {
e.bs = make([]byte, slen)
}
base64.StdEncoding.Encode(e.bs, v)
e.w.writen1('"')
e.w.writeb(e.bs)
e.w.writen1('"')
} else {
// e.EncodeString(c, string(v))
e.quoteStr(stringView(v))
}
}
func (e *jsonEncDriver) quoteStr(s string) {
// adapted from std pkg encoding/json
const hex = "0123456789abcdef"
w := e.w
w.writen1('"')
start := 0
for i := 0; i < len(s); {
if b := s[i]; b < utf8.RuneSelf {
if 0x20 <= b && b != '\\' && b != '"' && b != '<' && b != '>' && b != '&' {
i++
continue
}
if start < i {
w.writestr(s[start:i])
}
switch b {
case '\\', '"':
w.writen2('\\', b)
case '\n':
w.writen2('\\', 'n')
case '\r':
w.writen2('\\', 'r')
case '\b':
w.writen2('\\', 'b')
case '\f':
w.writen2('\\', 'f')
case '\t':
w.writen2('\\', 't')
default:
// encode all bytes < 0x20 (except \r, \n).
// also encode < > & to prevent security holes when served to some browsers.
w.writestr(`\u00`)
w.writen2(hex[b>>4], hex[b&0xF])
}
i++
start = i
continue
}
c, size := utf8.DecodeRuneInString(s[i:])
if c == utf8.RuneError && size == 1 {
if start < i {
w.writestr(s[start:i])
}
w.writestr(`\ufffd`)
i += size
start = i
continue
}
// U+2028 is LINE SEPARATOR. U+2029 is PARAGRAPH SEPARATOR.
// Both technically valid JSON, but bomb on JSONP, so fix here.
if c == '\u2028' || c == '\u2029' {
if start < i {
w.writestr(s[start:i])
}
w.writestr(`\u202`)
w.writen1(hex[c&0xF])
i += size
start = i
continue
}
i += size
}
if start < len(s) {
w.writestr(s[start:])
}
w.writen1('"')
}
//--------------------------------
type jsonNum struct {
bytes []byte // may have [+-.eE0-9]
mantissa uint64 // where mantissa ends, and maybe dot begins.
exponent int16 // exponent value.
manOverflow bool
neg bool // started with -. No initial sign in the bytes above.
dot bool // has dot
explicitExponent bool // explicit exponent
}
func (x *jsonNum) reset() {
x.bytes = x.bytes[:0]
x.manOverflow = false
x.neg = false
x.dot = false
x.explicitExponent = false
x.mantissa = 0
x.exponent = 0
}
// uintExp is called only if exponent > 0.
func (x *jsonNum) uintExp() (n uint64, overflow bool) {
n = x.mantissa
e := x.exponent
if e >= int16(len(jsonUint64Pow10)) {
overflow = true
return
}
n *= jsonUint64Pow10[e]
if n < x.mantissa || n > jsonNumUintMaxVal {
overflow = true
return
}
return
// for i := int16(0); i < e; i++ {
// if n >= jsonNumUintCutoff {
// overflow = true
// return
// }
// n *= 10
// }
// return
}
func (x *jsonNum) floatVal() (f float64) {
// We do not want to lose precision.
// Consequently, we will delegate to strconv.ParseFloat if any of the following happen:
// - There are more digits than in math.MaxUint64: 18446744073709551615 (20 digits)
// We expect up to 99.... (19 digits)
// - The mantissa cannot fit into 52 bits of a uint64
// - The exponent is beyond our scope, i.e. beyond 22.
const uint64MantissaBits = 52
const maxExponent = int16(len(jsonFloat64Pow10)) - 1
parseUsingStrConv := x.manOverflow ||
x.exponent > maxExponent ||
(x.exponent < 0 && -(x.exponent) > maxExponent) ||
x.mantissa>>uint64MantissaBits != 0
if parseUsingStrConv {
var err error
if f, err = strconv.ParseFloat(stringView(x.bytes), 64); err != nil {
panic(fmt.Errorf("parse float: %s, %v", x.bytes, err))
return
}
if x.neg {
f = -f
}
return
}
// all good. so handle parse here.
f = float64(x.mantissa)
// fmt.Printf(".Float: uint64 value: %v, float: %v\n", m, f)
if x.neg {
f = -f
}
if x.exponent > 0 {
f *= jsonFloat64Pow10[x.exponent]
} else if x.exponent < 0 {
f /= jsonFloat64Pow10[-x.exponent]
}
return
}
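// Illustrative check, not part of the upstream file: after scanning "123.45" the driver
// holds mantissa 12345 and exponent -2, so floatVal recovers 12345 / 10^2 = 123.45
// without delegating to strconv.ParseFloat.
func jsonNumFloatValExample() float64 {
	n := jsonNum{mantissa: 12345, exponent: -2}
	return n.floatVal()
}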
type jsonDecDriver struct {
d *Decoder
h *JsonHandle
r decReader // *bytesDecReader decReader
ct valueType // container type. one of unset, array or map.
bstr [8]byte // scratch used for string \UXXX parsing
b [64]byte // scratch
wsSkipped bool // whitespace skipped
n jsonNum
noBuiltInTypes
}
// This will skip whitespace characters and return the next byte to read.
// The next byte determines which kind of value follows.
func (d *jsonDecDriver) skipWhitespace(unread bool) (b byte) {
// as initReadNext is not called all the time, we set ct to unset whenever
// we skip whitespace, as this is the signal that something new is about to be read.
d.ct = valueTypeUnset
b = d.r.readn1()
if !jsonTrackSkipWhitespace || !d.wsSkipped {
for ; b == ' ' || b == '\t' || b == '\r' || b == '\n'; b = d.r.readn1() {
}
if jsonTrackSkipWhitespace {
d.wsSkipped = true
}
}
if unread {
d.r.unreadn1()
}
return b
}
func (d *jsonDecDriver) CheckBreak() bool {
b := d.skipWhitespace(true)
return b == '}' || b == ']'
}
func (d *jsonDecDriver) readStrIdx(fromIdx, toIdx uint8) {
bs := d.r.readx(int(toIdx - fromIdx))
if jsonValidateSymbols {
if !bytes.Equal(bs, jsonLiterals[fromIdx:toIdx]) {
d.d.errorf("json: expecting %s: got %s", jsonLiterals[fromIdx:toIdx], bs)
return
}
}
if jsonTrackSkipWhitespace {
d.wsSkipped = false
}
}
func (d *jsonDecDriver) TryDecodeAsNil() bool {
b := d.skipWhitespace(true)
if b == 'n' {
d.readStrIdx(9, 13) // null
d.ct = valueTypeNil
return true
}
return false
}
func (d *jsonDecDriver) DecodeBool() bool {
b := d.skipWhitespace(false)
if b == 'f' {
d.readStrIdx(5, 9) // alse
return false
}
if b == 't' {
d.readStrIdx(1, 4) // rue
return true
}
d.d.errorf("json: decode bool: got first char %c", b)
return false // "unreachable"
}
func (d *jsonDecDriver) ReadMapStart() int {
d.expectChar('{')
d.ct = valueTypeMap
return -1
}
func (d *jsonDecDriver) ReadArrayStart() int {
d.expectChar('[')
d.ct = valueTypeArray
return -1
}
func (d *jsonDecDriver) ReadMapEnd() {
d.expectChar('}')
}
func (d *jsonDecDriver) ReadArrayEnd() {
d.expectChar(']')
}
func (d *jsonDecDriver) ReadArrayEntrySeparator() {
d.expectChar(',')
}
func (d *jsonDecDriver) ReadMapEntrySeparator() {
d.expectChar(',')
}
func (d *jsonDecDriver) ReadMapKVSeparator() {
d.expectChar(':')
}
func (d *jsonDecDriver) expectChar(c uint8) {
b := d.skipWhitespace(false)
if b != c {
d.d.errorf("json: expect char %c but got char %c", c, b)
return
}
if jsonTrackSkipWhitespace {
d.wsSkipped = false
}
}
func (d *jsonDecDriver) IsContainerType(vt valueType) bool {
// check container type by checking the first char
if d.ct == valueTypeUnset {
b := d.skipWhitespace(true)
if b == '{' {
d.ct = valueTypeMap
} else if b == '[' {
d.ct = valueTypeArray
} else if b == 'n' {
d.ct = valueTypeNil
} else if b == '"' {
d.ct = valueTypeString
}
}
if vt == valueTypeNil || vt == valueTypeBytes || vt == valueTypeString ||
vt == valueTypeArray || vt == valueTypeMap {
return d.ct == vt
}
// ugorji: made switch into conditionals, so that IsContainerType can be inlined.
// switch vt {
// case valueTypeNil, valueTypeBytes, valueTypeString, valueTypeArray, valueTypeMap:
// return d.ct == vt
// }
d.d.errorf("isContainerType: unsupported parameter: %v", vt)
return false // "unreachable"
}
func (d *jsonDecDriver) decNum(storeBytes bool) {
// storeBytes = true // TODO: remove.
// If it has a '.' or an e|E, decode as a float; else decode as an int.
b := d.skipWhitespace(false)
if !(b == '+' || b == '-' || b == '.' || (b >= '0' && b <= '9')) {
d.d.errorf("json: decNum: got first char '%c'", b)
return
}
const cutoff = (1<<64-1)/uint64(10) + 1 // cutoff64(base)
const jsonNumUintMaxVal = 1<<uint64(64) - 1
// var n jsonNum // create stack-copy jsonNum, and set to pointer at end.
// n.bytes = d.n.bytes[:0]
n := &d.n
n.reset()
// The format of a number is as below:
// parsing: sign? digit* dot? digit* e? sign? digit*
// states: 0 1* 2 3* 4 5* 6 7
// We honor this state so we can break correctly.
var state uint8 = 0
var eNeg bool
var e int16
var eof bool
LOOP:
for !eof {
// fmt.Printf("LOOP: b: %q\n", b)
switch b {
case '+':
switch state {
case 0:
state = 2
// do not add sign to the slice ...
b, eof = d.r.readn1eof()
continue
case 6: // typ = jsonNumFloat
state = 7
default:
break LOOP
}
case '-':
switch state {
case 0:
state = 2
n.neg = true
// do not add sign to the slice ...
b, eof = d.r.readn1eof()
continue
case 6: // typ = jsonNumFloat
eNeg = true
state = 7
default:
break LOOP
}
case '.':
switch state {
case 0, 2: // typ = jsonNumFloat
state = 4
n.dot = true
default:
break LOOP
}
case 'e', 'E':
switch state {
case 0, 2, 4: // typ = jsonNumFloat
state = 6
// n.mantissaEndIndex = int16(len(n.bytes))
n.explicitExponent = true
default:
break LOOP
}
case '0', '1', '2', '3', '4', '5', '6', '7', '8', '9':
switch state {
case 0:
state = 2
fallthrough
case 2:
fallthrough
case 4:
if n.dot {
n.exponent--
}
if n.mantissa >= jsonNumUintCutoff {
n.manOverflow = true
break
}
v := uint64(b - '0')
n.mantissa *= 10
if v != 0 {
n1 := n.mantissa + v
if n1 < n.mantissa || n1 > jsonNumUintMaxVal {
n.manOverflow = true // n+v overflows
break
}
n.mantissa = n1
}
case 6:
state = 7
fallthrough
case 7:
if !(b == '0' && e == 0) {
e = e*10 + int16(b-'0')
}
default:
break LOOP
}
default:
break LOOP
}
if storeBytes {
n.bytes = append(n.bytes, b)
}
b, eof = d.r.readn1eof()
}
if jsonTruncateMantissa && n.mantissa != 0 {
for n.mantissa%10 == 0 {
n.mantissa /= 10
n.exponent++
}
}
if e != 0 {
if eNeg {
n.exponent -= e
} else {
n.exponent += e
}
}
// d.n = n
if !eof {
d.r.unreadn1()
}
if jsonTrackSkipWhitespace {
d.wsSkipped = false
}
// fmt.Printf("1: n: bytes: %s, neg: %v, dot: %v, exponent: %v, mantissaEndIndex: %v\n",
// n.bytes, n.neg, n.dot, n.exponent, n.mantissaEndIndex)
return
}
func (d *jsonDecDriver) DecodeInt(bitsize uint8) (i int64) {
d.decNum(false)
n := &d.n
if n.manOverflow {
d.d.errorf("json: overflow integer after: %v", n.mantissa)
return
}
var u uint64
if n.exponent == 0 {
u = n.mantissa
} else if n.exponent < 0 {
d.d.errorf("json: fractional integer")
return
} else if n.exponent > 0 {
var overflow bool
if u, overflow = n.uintExp(); overflow {
d.d.errorf("json: overflow integer")
return
}
}
i = int64(u)
if n.neg {
i = -i
}
if chkOvf.Int(i, bitsize) {
d.d.errorf("json: overflow %v bits: %s", bitsize, n.bytes)
return
}
// fmt.Printf("DecodeInt: %v\n", i)
return
}
func (d *jsonDecDriver) DecodeUint(bitsize uint8) (u uint64) {
d.decNum(false)
n := &d.n
if n.neg {
d.d.errorf("json: unsigned integer cannot be negative")
return
}
if n.manOverflow {
d.d.errorf("json: overflow integer after: %v", n.mantissa)
return
}
if n.exponent == 0 {
u = n.mantissa
} else if n.exponent < 0 {
d.d.errorf("json: fractional integer")
return
} else if n.exponent > 0 {
var overflow bool
if u, overflow = n.uintExp(); overflow {
d.d.errorf("json: overflow integer")
return
}
}
if chkOvf.Uint(u, bitsize) {
d.d.errorf("json: overflow %v bits: %s", bitsize, n.bytes)
return
}
// fmt.Printf("DecodeUint: %v\n", u)
return
}
func (d *jsonDecDriver) DecodeFloat(chkOverflow32 bool) (f float64) {
d.decNum(true)
n := &d.n
f = n.floatVal()
if chkOverflow32 && chkOvf.Float32(f) {
d.d.errorf("json: overflow float32: %v, %s", f, n.bytes)
return
}
return
}
func (d *jsonDecDriver) DecodeExt(rv interface{}, xtag uint64, ext Ext) (realxtag uint64) {
if ext == nil {
re := rv.(*RawExt)
re.Tag = xtag
d.d.decode(&re.Value)
} else {
var v interface{}
d.d.decode(&v)
ext.UpdateExt(rv, v)
}
return
}
func (d *jsonDecDriver) DecodeBytes(bs []byte, isstring, zerocopy bool) (bsOut []byte) {
// zerocopy doesn't matter for json, as the bytes must be parsed.
bs0 := d.appendStringAsBytes(d.b[:0])
if isstring {
return bs0
}
slen := base64.StdEncoding.DecodedLen(len(bs0))
if cap(bs) >= slen {
bsOut = bs[:slen]
} else {
bsOut = make([]byte, slen)
}
slen2, err := base64.StdEncoding.Decode(bsOut, bs0)
if err != nil {
d.d.errorf("json: error decoding base64 binary '%s': %v", bs0, err)
return nil
}
if slen != slen2 {
bsOut = bsOut[:slen2]
}
return
}
func (d *jsonDecDriver) DecodeString() (s string) {
return string(d.appendStringAsBytes(d.b[:0]))
}
func (d *jsonDecDriver) appendStringAsBytes(v []byte) []byte {
d.expectChar('"')
for {
c := d.r.readn1()
if c == '"' {
break
} else if c == '\\' {
c = d.r.readn1()
switch c {
case '"', '\\', '/', '\'':
v = append(v, c)
case 'b':
v = append(v, '\b')
case 'f':
v = append(v, '\f')
case 'n':
v = append(v, '\n')
case 'r':
v = append(v, '\r')
case 't':
v = append(v, '\t')
case 'u':
rr := d.jsonU4(false)
// fmt.Printf("$$$$$$$$$: is surrogate: %v\n", utf16.IsSurrogate(rr))
if utf16.IsSurrogate(rr) {
rr = utf16.DecodeRune(rr, d.jsonU4(true))
}
w2 := utf8.EncodeRune(d.bstr[:], rr)
v = append(v, d.bstr[:w2]...)
default:
d.d.errorf("json: unsupported escaped value: %c", c)
return nil
}
} else {
v = append(v, c)
}
}
if jsonTrackSkipWhitespace {
d.wsSkipped = false
}
return v
}
func (d *jsonDecDriver) jsonU4(checkSlashU bool) rune {
if checkSlashU && !(d.r.readn1() == '\\' && d.r.readn1() == 'u') {
d.d.errorf(`json: unquoteStr: invalid unicode sequence. Expecting \u`)
return 0
}
// u, _ := strconv.ParseUint(string(d.bstr[:4]), 16, 64)
var u uint32
for i := 0; i < 4; i++ {
v := d.r.readn1()
if '0' <= v && v <= '9' {
v = v - '0'
} else if 'a' <= v && v <= 'z' {
v = v - 'a' + 10
} else if 'A' <= v && v <= 'Z' {
v = v - 'A' + 10
} else {
d.d.errorf(`json: unquoteStr: invalid hex char in \u unicode sequence: %q`, v)
return 0
}
u = u*16 + uint32(v)
}
return rune(u)
}
func (d *jsonDecDriver) DecodeNaked() (v interface{}, vt valueType, decodeFurther bool) {
n := d.skipWhitespace(true)
switch n {
case 'n':
d.readStrIdx(9, 13) // null
vt = valueTypeNil
case 'f':
d.readStrIdx(4, 9) // false
vt = valueTypeBool
v = false
case 't':
d.readStrIdx(0, 4) // true
vt = valueTypeBool
v = true
case '{':
vt = valueTypeMap
decodeFurther = true
case '[':
vt = valueTypeArray
decodeFurther = true
case '"':
vt = valueTypeString
v = d.DecodeString()
default: // number
d.decNum(true)
n := &d.n
// if the string had any of [.eE], then decode as a float.
switch {
case n.explicitExponent, n.dot, n.exponent < 0, n.manOverflow:
vt = valueTypeFloat
v = n.floatVal()
case n.exponent == 0:
u := n.mantissa
switch {
case n.neg:
vt = valueTypeInt
v = -int64(u)
case d.h.SignedInteger:
vt = valueTypeInt
v = int64(u)
default:
vt = valueTypeUint
v = u
}
default:
u, overflow := n.uintExp()
switch {
case overflow:
vt = valueTypeFloat
v = n.floatVal()
case n.neg:
vt = valueTypeInt
v = -int64(u)
case d.h.SignedInteger:
vt = valueTypeInt
v = int64(u)
default:
vt = valueTypeUint
v = u
}
}
// fmt.Printf("DecodeNaked: Number: %T, %v\n", v, v)
}
return
}
//----------------------
// JsonHandle is a handle for JSON encoding format.
//
// Json is comprehensively supported:
// - decodes numbers into interface{} as int, uint or float64
// - encodes and decodes []byte using base64 Std Encoding
// - UTF-8 support for encoding and decoding
//
// It has better performance than the json library in the standard library,
// by leveraging the performance improvements of the codec library and
// minimizing allocations.
//
// In addition, it doesn't read more bytes than necessary during a decode, which allows
// reading multiple values from a stream containing json and non-json content.
// For example, a user can read a json value, then a cbor value, then a msgpack value,
// all from the same stream in sequence.
type JsonHandle struct {
BasicHandle
textEncodingType
}
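// Illustrative usage sketch, not part of the upstream file: round-trip a value through
// JsonHandle, assuming the package's NewEncoderBytes/NewDecoderBytes constructors.
func jsonHandleRoundTripExample() (out map[string]interface{}, err error) {
	var jh JsonHandle
	var b []byte
	// encode a map to JSON bytes
	if err = NewEncoderBytes(&b, &jh).Encode(map[string]interface{}{"port": 2379}); err != nil {
		return
	}
	// decode the JSON bytes back into a map
	err = NewDecoderBytes(b, &jh).Decode(&out)
	return
}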
func (h *JsonHandle) newEncDriver(e *Encoder) encDriver {
return &jsonEncDriver{e: e, w: e.w, h: h}
}
func (h *JsonHandle) newDecDriver(d *Decoder) decDriver {
// d := jsonDecDriver{r: r.(*bytesDecReader), h: h}
hd := jsonDecDriver{d: d, r: d.r, h: h}
hd.n.bytes = d.b[:]
return &hd
}
var jsonEncodeTerminate = []byte{' '}
func (h *JsonHandle) rpcEncodeTerminate() []byte {
return jsonEncodeTerminate
}
var _ decDriver = (*jsonDecDriver)(nil)
var _ encDriver = (*jsonEncDriver)(nil)

View file

@ -0,0 +1,843 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
/*
MSGPACK
The msgpack-c implementation powers the c, c++, python, ruby, etc. libraries.
We need to maintain compatibility with it and how it encodes integer values
without caring about the type.
For compatibility with the behaviour of the msgpack-c reference implementation:
- Go intX (>0) and uintX
IS ENCODED AS
msgpack +ve fixnum, unsigned
- Go intX (<0)
IS ENCODED AS
msgpack -ve fixnum, signed
*/
package codec
import (
"fmt"
"io"
"math"
"net/rpc"
)
const (
mpPosFixNumMin byte = 0x00
mpPosFixNumMax = 0x7f
mpFixMapMin = 0x80
mpFixMapMax = 0x8f
mpFixArrayMin = 0x90
mpFixArrayMax = 0x9f
mpFixStrMin = 0xa0
mpFixStrMax = 0xbf
mpNil = 0xc0
_ = 0xc1
mpFalse = 0xc2
mpTrue = 0xc3
mpFloat = 0xca
mpDouble = 0xcb
mpUint8 = 0xcc
mpUint16 = 0xcd
mpUint32 = 0xce
mpUint64 = 0xcf
mpInt8 = 0xd0
mpInt16 = 0xd1
mpInt32 = 0xd2
mpInt64 = 0xd3
// extensions below
mpBin8 = 0xc4
mpBin16 = 0xc5
mpBin32 = 0xc6
mpExt8 = 0xc7
mpExt16 = 0xc8
mpExt32 = 0xc9
mpFixExt1 = 0xd4
mpFixExt2 = 0xd5
mpFixExt4 = 0xd6
mpFixExt8 = 0xd7
mpFixExt16 = 0xd8
mpStr8 = 0xd9 // new
mpStr16 = 0xda
mpStr32 = 0xdb
mpArray16 = 0xdc
mpArray32 = 0xdd
mpMap16 = 0xde
mpMap32 = 0xdf
mpNegFixNumMin = 0xe0
mpNegFixNumMax = 0xff
)
// MsgpackSpecRpcMultiArgs is a special type which signifies to the MsgpackSpecRpcCodec
// that the backend RPC service takes multiple arguments, which have been arranged
// in sequence in the slice.
//
// The Codec then passes it AS-IS to the rpc service (without wrapping it in an
// array of 1 element).
type MsgpackSpecRpcMultiArgs []interface{}
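// Illustrative usage sketch, not part of the upstream file: a client-side call where the
// remote msgpack-rpc method takes several positional arguments. The service/method name
// and argument values are hypothetical.
func callWithMultiArgsExample(c *rpc.Client) (reply int, err error) {
	// the slice is passed through as positional arguments, not wrapped in a 1-element array
	args := MsgpackSpecRpcMultiArgs{"account-42", 3, true}
	err = c.Call("Billing.Adjust", args, &reply)
	return
}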
// A MsgpackContainer type specifies the different types of msgpackContainers.
type msgpackContainerType struct {
fixCutoff int
bFixMin, b8, b16, b32 byte
hasFixMin, has8, has8Always bool
}
var (
msgpackContainerStr = msgpackContainerType{32, mpFixStrMin, mpStr8, mpStr16, mpStr32, true, true, false}
msgpackContainerBin = msgpackContainerType{0, 0, mpBin8, mpBin16, mpBin32, false, true, true}
msgpackContainerList = msgpackContainerType{16, mpFixArrayMin, 0, mpArray16, mpArray32, true, false, false}
msgpackContainerMap = msgpackContainerType{16, mpFixMapMin, 0, mpMap16, mpMap32, true, false, false}
)
//---------------------------------------------
type msgpackEncDriver struct {
e *Encoder
w encWriter
h *MsgpackHandle
noBuiltInTypes
encNoSeparator
x [8]byte
}
func (e *msgpackEncDriver) EncodeNil() {
e.w.writen1(mpNil)
}
func (e *msgpackEncDriver) EncodeInt(i int64) {
if i >= 0 {
e.EncodeUint(uint64(i))
} else if i >= -32 {
e.w.writen1(byte(i))
} else if i >= math.MinInt8 {
e.w.writen2(mpInt8, byte(i))
} else if i >= math.MinInt16 {
e.w.writen1(mpInt16)
bigenHelper{e.x[:2], e.w}.writeUint16(uint16(i))
} else if i >= math.MinInt32 {
e.w.writen1(mpInt32)
bigenHelper{e.x[:4], e.w}.writeUint32(uint32(i))
} else {
e.w.writen1(mpInt64)
bigenHelper{e.x[:8], e.w}.writeUint64(uint64(i))
}
}
func (e *msgpackEncDriver) EncodeUint(i uint64) {
if i <= math.MaxInt8 {
e.w.writen1(byte(i))
} else if i <= math.MaxUint8 {
e.w.writen2(mpUint8, byte(i))
} else if i <= math.MaxUint16 {
e.w.writen1(mpUint16)
bigenHelper{e.x[:2], e.w}.writeUint16(uint16(i))
} else if i <= math.MaxUint32 {
e.w.writen1(mpUint32)
bigenHelper{e.x[:4], e.w}.writeUint32(uint32(i))
} else {
e.w.writen1(mpUint64)
bigenHelper{e.x[:8], e.w}.writeUint64(uint64(i))
}
}
func (e *msgpackEncDriver) EncodeBool(b bool) {
if b {
e.w.writen1(mpTrue)
} else {
e.w.writen1(mpFalse)
}
}
func (e *msgpackEncDriver) EncodeFloat32(f float32) {
e.w.writen1(mpFloat)
bigenHelper{e.x[:4], e.w}.writeUint32(math.Float32bits(f))
}
func (e *msgpackEncDriver) EncodeFloat64(f float64) {
e.w.writen1(mpDouble)
bigenHelper{e.x[:8], e.w}.writeUint64(math.Float64bits(f))
}
func (e *msgpackEncDriver) EncodeExt(v interface{}, xtag uint64, ext Ext, _ *Encoder) {
bs := ext.WriteExt(v)
if bs == nil {
e.EncodeNil()
return
}
if e.h.WriteExt {
e.encodeExtPreamble(uint8(xtag), len(bs))
e.w.writeb(bs)
} else {
e.EncodeStringBytes(c_RAW, bs)
}
}
func (e *msgpackEncDriver) EncodeRawExt(re *RawExt, _ *Encoder) {
e.encodeExtPreamble(uint8(re.Tag), len(re.Data))
e.w.writeb(re.Data)
}
func (e *msgpackEncDriver) encodeExtPreamble(xtag byte, l int) {
if l == 1 {
e.w.writen2(mpFixExt1, xtag)
} else if l == 2 {
e.w.writen2(mpFixExt2, xtag)
} else if l == 4 {
e.w.writen2(mpFixExt4, xtag)
} else if l == 8 {
e.w.writen2(mpFixExt8, xtag)
} else if l == 16 {
e.w.writen2(mpFixExt16, xtag)
} else if l < 256 {
e.w.writen2(mpExt8, byte(l))
e.w.writen1(xtag)
} else if l < 65536 {
e.w.writen1(mpExt16)
bigenHelper{e.x[:2], e.w}.writeUint16(uint16(l))
e.w.writen1(xtag)
} else {
e.w.writen1(mpExt32)
bigenHelper{e.x[:4], e.w}.writeUint32(uint32(l))
e.w.writen1(xtag)
}
}
func (e *msgpackEncDriver) EncodeArrayStart(length int) {
e.writeContainerLen(msgpackContainerList, length)
}
func (e *msgpackEncDriver) EncodeMapStart(length int) {
e.writeContainerLen(msgpackContainerMap, length)
}
func (e *msgpackEncDriver) EncodeString(c charEncoding, s string) {
if c == c_RAW && e.h.WriteExt {
e.writeContainerLen(msgpackContainerBin, len(s))
} else {
e.writeContainerLen(msgpackContainerStr, len(s))
}
if len(s) > 0 {
e.w.writestr(s)
}
}
func (e *msgpackEncDriver) EncodeSymbol(v string) {
e.EncodeString(c_UTF8, v)
}
func (e *msgpackEncDriver) EncodeStringBytes(c charEncoding, bs []byte) {
if c == c_RAW && e.h.WriteExt {
e.writeContainerLen(msgpackContainerBin, len(bs))
} else {
e.writeContainerLen(msgpackContainerStr, len(bs))
}
if len(bs) > 0 {
e.w.writeb(bs)
}
}
func (e *msgpackEncDriver) writeContainerLen(ct msgpackContainerType, l int) {
if ct.hasFixMin && l < ct.fixCutoff {
e.w.writen1(ct.bFixMin | byte(l))
} else if ct.has8 && l < 256 && (ct.has8Always || e.h.WriteExt) {
e.w.writen2(ct.b8, uint8(l))
} else if l < 65536 {
e.w.writen1(ct.b16)
bigenHelper{e.x[:2], e.w}.writeUint16(uint16(l))
} else {
e.w.writen1(ct.b32)
bigenHelper{e.x[:4], e.w}.writeUint32(uint32(l))
}
}
//---------------------------------------------
type msgpackDecDriver struct {
d *Decoder
r decReader // *Decoder decReader decReaderT
h *MsgpackHandle
b [scratchByteArrayLen]byte
bd byte
bdRead bool
br bool // bytes reader
bdType valueType
noBuiltInTypes
noStreamingCodec
decNoSeparator
}
// Note: This returns either a primitive (int, bool, etc) for non-containers,
// or a containerType, or a specific type denoting nil or extension.
// It is called when a nil interface{} is passed, leaving it up to the DecDriver
// to introspect the stream and decide how best to decode.
// It deciphers the value by looking at the stream first.
func (d *msgpackDecDriver) DecodeNaked() (v interface{}, vt valueType, decodeFurther bool) {
if !d.bdRead {
d.readNextBd()
}
bd := d.bd
switch bd {
case mpNil:
vt = valueTypeNil
d.bdRead = false
case mpFalse:
vt = valueTypeBool
v = false
case mpTrue:
vt = valueTypeBool
v = true
case mpFloat:
vt = valueTypeFloat
v = float64(math.Float32frombits(bigen.Uint32(d.r.readx(4))))
case mpDouble:
vt = valueTypeFloat
v = math.Float64frombits(bigen.Uint64(d.r.readx(8)))
case mpUint8:
vt = valueTypeUint
v = uint64(d.r.readn1())
case mpUint16:
vt = valueTypeUint
v = uint64(bigen.Uint16(d.r.readx(2)))
case mpUint32:
vt = valueTypeUint
v = uint64(bigen.Uint32(d.r.readx(4)))
case mpUint64:
vt = valueTypeUint
v = uint64(bigen.Uint64(d.r.readx(8)))
case mpInt8:
vt = valueTypeInt
v = int64(int8(d.r.readn1()))
case mpInt16:
vt = valueTypeInt
v = int64(int16(bigen.Uint16(d.r.readx(2))))
case mpInt32:
vt = valueTypeInt
v = int64(int32(bigen.Uint32(d.r.readx(4))))
case mpInt64:
vt = valueTypeInt
v = int64(int64(bigen.Uint64(d.r.readx(8))))
default:
switch {
case bd >= mpPosFixNumMin && bd <= mpPosFixNumMax:
// positive fixnum (always signed)
vt = valueTypeInt
v = int64(int8(bd))
case bd >= mpNegFixNumMin && bd <= mpNegFixNumMax:
// negative fixnum
vt = valueTypeInt
v = int64(int8(bd))
case bd == mpStr8, bd == mpStr16, bd == mpStr32, bd >= mpFixStrMin && bd <= mpFixStrMax:
if d.h.RawToString {
var rvm string
vt = valueTypeString
v = &rvm
} else {
var rvm = zeroByteSlice
vt = valueTypeBytes
v = &rvm
}
decodeFurther = true
case bd == mpBin8, bd == mpBin16, bd == mpBin32:
var rvm = zeroByteSlice
vt = valueTypeBytes
v = &rvm
decodeFurther = true
case bd == mpArray16, bd == mpArray32, bd >= mpFixArrayMin && bd <= mpFixArrayMax:
vt = valueTypeArray
decodeFurther = true
case bd == mpMap16, bd == mpMap32, bd >= mpFixMapMin && bd <= mpFixMapMax:
vt = valueTypeMap
decodeFurther = true
case bd >= mpFixExt1 && bd <= mpFixExt16, bd >= mpExt8 && bd <= mpExt32:
clen := d.readExtLen()
var re RawExt
re.Tag = uint64(d.r.readn1())
re.Data = d.r.readx(clen)
v = &re
vt = valueTypeExt
default:
d.d.errorf("Nil-Deciphered DecodeValue: %s: hex: %x, dec: %d", msgBadDesc, bd, bd)
return
}
}
if !decodeFurther {
d.bdRead = false
}
if vt == valueTypeUint && d.h.SignedInteger {
d.bdType = valueTypeInt
v = int64(v.(uint64))
}
return
}
// int can be decoded from msgpack type: intXXX or uintXXX
func (d *msgpackDecDriver) DecodeInt(bitsize uint8) (i int64) {
if !d.bdRead {
d.readNextBd()
}
switch d.bd {
case mpUint8:
i = int64(uint64(d.r.readn1()))
case mpUint16:
i = int64(uint64(bigen.Uint16(d.r.readx(2))))
case mpUint32:
i = int64(uint64(bigen.Uint32(d.r.readx(4))))
case mpUint64:
i = int64(bigen.Uint64(d.r.readx(8)))
case mpInt8:
i = int64(int8(d.r.readn1()))
case mpInt16:
i = int64(int16(bigen.Uint16(d.r.readx(2))))
case mpInt32:
i = int64(int32(bigen.Uint32(d.r.readx(4))))
case mpInt64:
i = int64(bigen.Uint64(d.r.readx(8)))
default:
switch {
case d.bd >= mpPosFixNumMin && d.bd <= mpPosFixNumMax:
i = int64(int8(d.bd))
case d.bd >= mpNegFixNumMin && d.bd <= mpNegFixNumMax:
i = int64(int8(d.bd))
default:
d.d.errorf("Unhandled single-byte unsigned integer value: %s: %x", msgBadDesc, d.bd)
return
}
}
// check overflow (logic adapted from std pkg reflect/value.go OverflowInt())
if bitsize > 0 {
if trunc := (i << (64 - bitsize)) >> (64 - bitsize); i != trunc {
d.d.errorf("Overflow int value: %v", i)
return
}
}
d.bdRead = false
return
}
// uint can be decoded from msgpack type: intXXX or uintXXX
func (d *msgpackDecDriver) DecodeUint(bitsize uint8) (ui uint64) {
if !d.bdRead {
d.readNextBd()
}
switch d.bd {
case mpUint8:
ui = uint64(d.r.readn1())
case mpUint16:
ui = uint64(bigen.Uint16(d.r.readx(2)))
case mpUint32:
ui = uint64(bigen.Uint32(d.r.readx(4)))
case mpUint64:
ui = bigen.Uint64(d.r.readx(8))
case mpInt8:
if i := int64(int8(d.r.readn1())); i >= 0 {
ui = uint64(i)
} else {
d.d.errorf("Assigning negative signed value: %v, to unsigned type", i)
return
}
case mpInt16:
if i := int64(int16(bigen.Uint16(d.r.readx(2)))); i >= 0 {
ui = uint64(i)
} else {
d.d.errorf("Assigning negative signed value: %v, to unsigned type", i)
return
}
case mpInt32:
if i := int64(int32(bigen.Uint32(d.r.readx(4)))); i >= 0 {
ui = uint64(i)
} else {
d.d.errorf("Assigning negative signed value: %v, to unsigned type", i)
return
}
case mpInt64:
if i := int64(bigen.Uint64(d.r.readx(8))); i >= 0 {
ui = uint64(i)
} else {
d.d.errorf("Assigning negative signed value: %v, to unsigned type", i)
return
}
default:
switch {
case d.bd >= mpPosFixNumMin && d.bd <= mpPosFixNumMax:
ui = uint64(d.bd)
case d.bd >= mpNegFixNumMin && d.bd <= mpNegFixNumMax:
d.d.errorf("Assigning negative signed value: %v, to unsigned type", int(d.bd))
return
default:
d.d.errorf("Unhandled single-byte unsigned integer value: %s: %x", msgBadDesc, d.bd)
return
}
}
// check overflow (logic adapted from std pkg reflect/value.go OverflowUint())
if bitsize > 0 {
if trunc := (ui << (64 - bitsize)) >> (64 - bitsize); ui != trunc {
d.d.errorf("Overflow uint value: %v", ui)
return
}
}
d.bdRead = false
return
}
// float can either be decoded from msgpack type: float, double or intX
func (d *msgpackDecDriver) DecodeFloat(chkOverflow32 bool) (f float64) {
if !d.bdRead {
d.readNextBd()
}
if d.bd == mpFloat {
f = float64(math.Float32frombits(bigen.Uint32(d.r.readx(4))))
} else if d.bd == mpDouble {
f = math.Float64frombits(bigen.Uint64(d.r.readx(8)))
} else {
f = float64(d.DecodeInt(0))
}
if chkOverflow32 && chkOvf.Float32(f) {
d.d.errorf("msgpack: float32 overflow: %v", f)
return
}
d.bdRead = false
return
}
// bool can be decoded from bool, fixnum 0 or 1.
func (d *msgpackDecDriver) DecodeBool() (b bool) {
if !d.bdRead {
d.readNextBd()
}
if d.bd == mpFalse || d.bd == 0 {
// b = false
} else if d.bd == mpTrue || d.bd == 1 {
b = true
} else {
d.d.errorf("Invalid single-byte value for bool: %s: %x", msgBadDesc, d.bd)
return
}
d.bdRead = false
return
}
func (d *msgpackDecDriver) DecodeBytes(bs []byte, isstring, zerocopy bool) (bsOut []byte) {
if !d.bdRead {
d.readNextBd()
}
var clen int
if isstring {
clen = d.readContainerLen(msgpackContainerStr)
} else {
// bytes can be decoded from msgpackContainerStr or msgpackContainerBin
if bd := d.bd; bd == mpBin8 || bd == mpBin16 || bd == mpBin32 {
clen = d.readContainerLen(msgpackContainerBin)
} else {
clen = d.readContainerLen(msgpackContainerStr)
}
}
// println("DecodeBytes: clen: ", clen)
d.bdRead = false
// bytes may be nil, so handle it. if nil, clen=-1.
if clen < 0 {
return nil
}
if zerocopy {
if d.br {
return d.r.readx(clen)
} else if len(bs) == 0 {
bs = d.b[:]
}
}
return decByteSlice(d.r, clen, bs)
}
func (d *msgpackDecDriver) DecodeString() (s string) {
return string(d.DecodeBytes(d.b[:], true, true))
}
func (d *msgpackDecDriver) readNextBd() {
d.bd = d.r.readn1()
d.bdRead = true
d.bdType = valueTypeUnset
}
func (d *msgpackDecDriver) IsContainerType(vt valueType) bool {
bd := d.bd
switch vt {
case valueTypeNil:
return bd == mpNil
case valueTypeBytes:
return bd == mpBin8 || bd == mpBin16 || bd == mpBin32 ||
(!d.h.RawToString &&
(bd == mpStr8 || bd == mpStr16 || bd == mpStr32 || (bd >= mpFixStrMin && bd <= mpFixStrMax)))
case valueTypeString:
return d.h.RawToString &&
(bd == mpStr8 || bd == mpStr16 || bd == mpStr32 || (bd >= mpFixStrMin && bd <= mpFixStrMax))
case valueTypeArray:
return bd == mpArray16 || bd == mpArray32 || (bd >= mpFixArrayMin && bd <= mpFixArrayMax)
case valueTypeMap:
return bd == mpMap16 || bd == mpMap32 || (bd >= mpFixMapMin && bd <= mpFixMapMax)
}
d.d.errorf("isContainerType: unsupported parameter: %v", vt)
return false // "unreachable"
}
func (d *msgpackDecDriver) TryDecodeAsNil() (v bool) {
if !d.bdRead {
d.readNextBd()
}
if d.bd == mpNil {
d.bdRead = false
v = true
}
return
}
func (d *msgpackDecDriver) readContainerLen(ct msgpackContainerType) (clen int) {
bd := d.bd
if bd == mpNil {
clen = -1 // to represent nil
} else if bd == ct.b8 {
clen = int(d.r.readn1())
} else if bd == ct.b16 {
clen = int(bigen.Uint16(d.r.readx(2)))
} else if bd == ct.b32 {
clen = int(bigen.Uint32(d.r.readx(4)))
} else if (ct.bFixMin & bd) == ct.bFixMin {
clen = int(ct.bFixMin ^ bd)
} else {
d.d.errorf("readContainerLen: %s: hex: %x, dec: %d", msgBadDesc, bd, bd)
return
}
d.bdRead = false
return
}
func (d *msgpackDecDriver) ReadMapStart() int {
return d.readContainerLen(msgpackContainerMap)
}
func (d *msgpackDecDriver) ReadArrayStart() int {
return d.readContainerLen(msgpackContainerList)
}
func (d *msgpackDecDriver) readExtLen() (clen int) {
switch d.bd {
case mpNil:
clen = -1 // to represent nil
case mpFixExt1:
clen = 1
case mpFixExt2:
clen = 2
case mpFixExt4:
clen = 4
case mpFixExt8:
clen = 8
case mpFixExt16:
clen = 16
case mpExt8:
clen = int(d.r.readn1())
case mpExt16:
clen = int(bigen.Uint16(d.r.readx(2)))
case mpExt32:
clen = int(bigen.Uint32(d.r.readx(4)))
default:
d.d.errorf("decoding ext bytes: found unexpected byte: %x", d.bd)
return
}
return
}
func (d *msgpackDecDriver) DecodeExt(rv interface{}, xtag uint64, ext Ext) (realxtag uint64) {
if xtag > 0xff {
d.d.errorf("decodeExt: tag must be <= 0xff; got: %v", xtag)
return
}
realxtag1, xbs := d.decodeExtV(ext != nil, uint8(xtag))
realxtag = uint64(realxtag1)
if ext == nil {
re := rv.(*RawExt)
re.Tag = realxtag
re.Data = detachZeroCopyBytes(d.br, re.Data, xbs)
} else {
ext.ReadExt(rv, xbs)
}
return
}
func (d *msgpackDecDriver) decodeExtV(verifyTag bool, tag byte) (xtag byte, xbs []byte) {
if !d.bdRead {
d.readNextBd()
}
xbd := d.bd
if xbd == mpBin8 || xbd == mpBin16 || xbd == mpBin32 {
xbs = d.DecodeBytes(nil, false, true)
} else if xbd == mpStr8 || xbd == mpStr16 || xbd == mpStr32 ||
(xbd >= mpFixStrMin && xbd <= mpFixStrMax) {
xbs = d.DecodeBytes(nil, true, true)
} else {
clen := d.readExtLen()
xtag = d.r.readn1()
if verifyTag && xtag != tag {
d.d.errorf("Wrong extension tag. Got %b. Expecting: %v", xtag, tag)
return
}
xbs = d.r.readx(clen)
}
d.bdRead = false
return
}
//--------------------------------------------------
// MsgpackHandle is a Handle for the Msgpack Schema-Free Encoding Format.
type MsgpackHandle struct {
BasicHandle
binaryEncodingType
// RawToString controls how raw bytes are decoded into a nil interface{}.
RawToString bool
// WriteExt flag supports encoding configured extensions with extension tags.
// It also controls whether other elements of the new spec are encoded (ie Str8).
//
// With WriteExt=false, configured extensions are serialized as raw bytes
// and Str8 is not encoded.
//
// A stream can still be decoded into a typed value, provided an appropriate value
// is supplied, but the type cannot be inferred from the stream. If no appropriate
// type is supplied (e.g. decoding into a nil interface{}), you get back
// a []byte or string, based on the setting of RawToString.
WriteExt bool
}
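// exampleMsgpackRoundTrip is a minimal sketch, not part of the original file:
// it assumes only the package's exported NewEncoderBytes/NewDecoderBytes API and
// shows how RawToString affects decoding into a nil interface{}.
func exampleMsgpackRoundTrip() (v interface{}, err error) {
	mh := &MsgpackHandle{RawToString: true} // raw msgpack strings decode as Go strings
	var buf []byte
	// int64(5) encodes as the single positive-fixnum byte 0x05 (see EncodeInt/EncodeUint above).
	if err = NewEncoderBytes(&buf, mh).Encode([]interface{}{int64(5), "abc"}); err != nil {
		return
	}
	err = NewDecoderBytes(buf, mh).Decode(&v)
	return
}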
func (h *MsgpackHandle) newEncDriver(e *Encoder) encDriver {
return &msgpackEncDriver{e: e, w: e.w, h: h}
}
func (h *MsgpackHandle) newDecDriver(d *Decoder) decDriver {
return &msgpackDecDriver{d: d, r: d.r, h: h, br: d.bytes}
}
//--------------------------------------------------
type msgpackSpecRpcCodec struct {
rpcCodec
}
// /////////////// Spec RPC Codec ///////////////////
func (c *msgpackSpecRpcCodec) WriteRequest(r *rpc.Request, body interface{}) error {
// WriteRequest can write to both a Go service, and other services that do
// not abide by the 1-argument rule of a Go service.
// We discriminate based on whether the body is a MsgpackSpecRpcMultiArgs.
var bodyArr []interface{}
if m, ok := body.(MsgpackSpecRpcMultiArgs); ok {
bodyArr = ([]interface{})(m)
} else {
bodyArr = []interface{}{body}
}
r2 := []interface{}{0, uint32(r.Seq), r.ServiceMethod, bodyArr}
return c.write(r2, nil, false, true)
}
func (c *msgpackSpecRpcCodec) WriteResponse(r *rpc.Response, body interface{}) error {
var moe interface{}
if r.Error != "" {
moe = r.Error
}
if moe != nil && body != nil {
body = nil
}
r2 := []interface{}{1, uint32(r.Seq), moe, body}
return c.write(r2, nil, false, true)
}
func (c *msgpackSpecRpcCodec) ReadResponseHeader(r *rpc.Response) error {
return c.parseCustomHeader(1, &r.Seq, &r.Error)
}
func (c *msgpackSpecRpcCodec) ReadRequestHeader(r *rpc.Request) error {
return c.parseCustomHeader(0, &r.Seq, &r.ServiceMethod)
}
func (c *msgpackSpecRpcCodec) ReadRequestBody(body interface{}) error {
if body == nil { // read and discard
return c.read(nil)
}
bodyArr := []interface{}{body}
return c.read(&bodyArr)
}
func (c *msgpackSpecRpcCodec) parseCustomHeader(expectTypeByte byte, msgid *uint64, methodOrError *string) (err error) {
if c.isClosed() {
return io.EOF
}
// We read the response header by hand
// so that the body can be decoded on its own from the stream at a later time.
const fia byte = 0x94 // four-item array descriptor value
// Not sure why the panic of EOF is swallowed above.
// if bs1 := c.dec.r.readn1(); bs1 != fia {
// err = fmt.Errorf("Unexpected value for array descriptor: Expecting %v. Received %v", fia, bs1)
// return
// }
var b byte
b, err = c.br.ReadByte()
if err != nil {
return
}
if b != fia {
err = fmt.Errorf("Unexpected value for array descriptor: Expecting %v. Received %v", fia, b)
return
}
if err = c.read(&b); err != nil {
return
}
if b != expectTypeByte {
err = fmt.Errorf("Unexpected byte descriptor in header. Expecting %v. Received %v", expectTypeByte, b)
return
}
if err = c.read(msgid); err != nil {
return
}
if err = c.read(methodOrError); err != nil {
return
}
return
}
//--------------------------------------------------
// msgpackSpecRpc is the implementation of Rpc that uses the custom communication protocol
// as defined in the msgpack spec at https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md
type msgpackSpecRpc struct{}
// MsgpackSpecRpc implements Rpc using the communication protocol defined in
// the msgpack spec at https://github.com/msgpack-rpc/msgpack-rpc/blob/master/spec.md .
// Its methods (ServerCodec and ClientCodec) return values that implement RpcCodecBuffered.
var MsgpackSpecRpc msgpackSpecRpc
func (x msgpackSpecRpc) ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec {
return &msgpackSpecRpcCodec{newRPCCodec(conn, h)}
}
func (x msgpackSpecRpc) ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec {
return &msgpackSpecRpcCodec{newRPCCodec(conn, h)}
}
var _ decDriver = (*msgpackDecDriver)(nil)
var _ encDriver = (*msgpackEncDriver)(nil)

View file

@ -0,0 +1,164 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
import (
"math/rand"
"time"
)
// NoopHandle returns a no-op handle. It basically does nothing.
// It is only useful for benchmarking, as it gives an idea of the
// overhead from the codec framework.
// LIBRARY USERS: *** DO NOT USE ***
func NoopHandle(slen int) *noopHandle {
h := noopHandle{}
h.rand = rand.New(rand.NewSource(time.Now().UnixNano()))
h.B = make([][]byte, slen)
h.S = make([]string, slen)
for i := 0; i < len(h.S); i++ {
b := make([]byte, i+1)
for j := 0; j < len(b); j++ {
b[j] = 'a' + byte(i)
}
h.B[i] = b
h.S[i] = string(b)
}
return &h
}
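// exampleNoopEncode is a benchmarking sketch, not part of the original file:
// it assumes the package's exported NewEncoderBytes API and uses NoopHandle only
// to measure framework overhead; library users should use a real Handle instead.
func exampleNoopEncode() error {
	h := NoopHandle(8)
	var buf []byte
	// The noop driver discards everything, so only the codec framework cost remains.
	return NewEncoderBytes(&buf, h).Encode(map[string]int{"a": 1, "b": 2})
}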
// noopHandle does nothing.
// It is used to simulate the overhead of the codec framework.
type noopHandle struct {
BasicHandle
binaryEncodingType
noopDrv // noopDrv is unexported here, so we can get a copy of it when needed.
}
type noopDrv struct {
i int
S []string
B [][]byte
mk bool // are we about to read a map key?
ct valueType // last request for IsContainerType.
cb bool // last response for IsContainerType.
rand *rand.Rand
}
func (h *noopDrv) r(v int) int { return h.rand.Intn(v) }
func (h *noopDrv) m(v int) int { h.i++; return h.i % v }
func (h *noopDrv) newEncDriver(_ *Encoder) encDriver { return h }
func (h *noopDrv) newDecDriver(_ *Decoder) decDriver { return h }
// --- encDriver
func (h *noopDrv) EncodeBuiltin(rt uintptr, v interface{}) {}
func (h *noopDrv) EncodeNil() {}
func (h *noopDrv) EncodeInt(i int64) {}
func (h *noopDrv) EncodeUint(i uint64) {}
func (h *noopDrv) EncodeBool(b bool) {}
func (h *noopDrv) EncodeFloat32(f float32) {}
func (h *noopDrv) EncodeFloat64(f float64) {}
func (h *noopDrv) EncodeRawExt(re *RawExt, e *Encoder) {}
func (h *noopDrv) EncodeArrayStart(length int) {}
func (h *noopDrv) EncodeArrayEnd() {}
func (h *noopDrv) EncodeArrayEntrySeparator() {}
func (h *noopDrv) EncodeMapStart(length int) {}
func (h *noopDrv) EncodeMapEnd() {}
func (h *noopDrv) EncodeMapEntrySeparator() {}
func (h *noopDrv) EncodeMapKVSeparator() {}
func (h *noopDrv) EncodeString(c charEncoding, v string) {}
func (h *noopDrv) EncodeSymbol(v string) {}
func (h *noopDrv) EncodeStringBytes(c charEncoding, v []byte) {}
func (h *noopDrv) EncodeExt(rv interface{}, xtag uint64, ext Ext, e *Encoder) {}
// ---- decDriver
func (h *noopDrv) initReadNext() {}
func (h *noopDrv) CheckBreak() bool { return false }
func (h *noopDrv) IsBuiltinType(rt uintptr) bool { return false }
func (h *noopDrv) DecodeBuiltin(rt uintptr, v interface{}) {}
func (h *noopDrv) DecodeInt(bitsize uint8) (i int64) { return int64(h.m(15)) }
func (h *noopDrv) DecodeUint(bitsize uint8) (ui uint64) { return uint64(h.m(35)) }
func (h *noopDrv) DecodeFloat(chkOverflow32 bool) (f float64) { return float64(h.m(95)) }
func (h *noopDrv) DecodeBool() (b bool) { return h.m(2) == 0 }
func (h *noopDrv) DecodeString() (s string) { return h.S[h.m(8)] }
// func (h *noopDrv) DecodeStringAsBytes(bs []byte) []byte { return h.DecodeBytes(bs) }
func (h *noopDrv) DecodeBytes(bs []byte, isstring, zerocopy bool) []byte { return h.B[h.m(len(h.B))] }
func (h *noopDrv) ReadMapEnd() { h.mk = false }
func (h *noopDrv) ReadArrayEnd() {}
func (h *noopDrv) ReadArrayEntrySeparator() {}
func (h *noopDrv) ReadMapEntrySeparator() { h.mk = true }
func (h *noopDrv) ReadMapKVSeparator() { h.mk = false }
// toggle map/slice
func (h *noopDrv) ReadMapStart() int { h.mk = true; return h.m(10) }
func (h *noopDrv) ReadArrayStart() int { return h.m(10) }
func (h *noopDrv) IsContainerType(vt valueType) bool {
// return h.m(2) == 0
// handle kStruct
if h.ct == valueTypeMap && vt == valueTypeArray || h.ct == valueTypeArray && vt == valueTypeMap {
h.cb = !h.cb
h.ct = vt
return h.cb
}
// go in a loop and check it.
h.ct = vt
h.cb = h.m(7) == 0
return h.cb
}
func (h *noopDrv) TryDecodeAsNil() bool {
if h.mk {
return false
} else {
return h.m(8) == 0
}
}
func (h *noopDrv) DecodeExt(rv interface{}, xtag uint64, ext Ext) uint64 {
return 0
}
func (h *noopDrv) DecodeNaked() (v interface{}, vt valueType, decodeFurther bool) {
// use h.r (random) not h.m() because h.m() could cause the same value to be given.
var sk int
if h.mk {
// if mapkey, do not support values of nil OR bytes, array, map or rawext
sk = h.r(7) + 1
} else {
sk = h.r(12)
}
switch sk {
case 0:
vt = valueTypeNil
case 1:
vt, v = valueTypeBool, false
case 2:
vt, v = valueTypeBool, true
case 3:
vt, v = valueTypeInt, h.DecodeInt(64)
case 4:
vt, v = valueTypeUint, h.DecodeUint(64)
case 5:
vt, v = valueTypeFloat, h.DecodeFloat(true)
case 6:
vt, v = valueTypeFloat, h.DecodeFloat(false)
case 7:
vt, v = valueTypeString, h.DecodeString()
case 8:
vt, v = valueTypeBytes, h.B[h.m(len(h.B))]
case 9:
vt, decodeFurther = valueTypeArray, true
case 10:
vt, decodeFurther = valueTypeMap, true
default:
vt, v = valueTypeExt, &RawExt{Tag: h.DecodeUint(64), Data: h.B[h.m(len(h.B))]}
}
h.ct = vt
return
}

View file

@ -0,0 +1,3 @@
package codec
//go:generate bash prebuild.sh

View file

@ -0,0 +1,193 @@
#!/bin/bash
# _needgen is a helper function to tell if we need to generate files for msgp, codecgen.
_needgen() {
local a="$1"
zneedgen=0
if [[ ! -e "$a" ]]
then
zneedgen=1
echo 1
return 0
fi
for i in `ls -1 *.go.tmpl gen.go values_test.go`
do
if [[ "$a" -ot "$i" ]]
then
zneedgen=1
echo 1
return 0
fi
done
echo 0
}
# _build generates fast-path.go and gen-helper.go.
#
# It is needed because there is some dependency between the generated code
# and the other classes. Consequently, we have to totally remove the
# generated files and put stubs in place, before calling "go run" again
# to recreate them.
_build() {
if ! [[ "${zforce}" == "1" ||
"1" == $( _needgen "fast-path.generated.go" ) ||
"1" == $( _needgen "gen-helper.generated.go" ) ||
"1" == $( _needgen "gen.generated.go" ) ||
1 == 0 ]]
then
return 0
fi
# echo "Running prebuild"
if [ "${zbak}" == "1" ]
then
# echo "Backing up old generated files"
_zts=`date '+%m%d%Y_%H%M%S'`
_gg=".generated.go"
[ -e "gen-helper${_gg}" ] && mv gen-helper${_gg} gen-helper${_gg}__${_zts}.bak
[ -e "fast-path${_gg}" ] && mv fast-path${_gg} fast-path${_gg}__${_zts}.bak
# [ -e "safe${_gg}" ] && mv safe${_gg} safe${_gg}__${_zts}.bak
# [ -e "unsafe${_gg}" ] && mv unsafe${_gg} unsafe${_gg}__${_zts}.bak
else
rm -f fast-path.generated.go gen.generated.go gen-helper.generated.go *safe.generated.go *_generated_test.go *.generated_ffjson_expose.go
fi
cat > gen.generated.go <<EOF
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
// DO NOT EDIT. THIS FILE IS AUTO-GENERATED FROM gen-dec-(map|array).go.tmpl
const genDecMapTmpl = \`
EOF
cat >> gen.generated.go < gen-dec-map.go.tmpl
cat >> gen.generated.go <<EOF
\`
const genDecListTmpl = \`
EOF
cat >> gen.generated.go < gen-dec-array.go.tmpl
cat >> gen.generated.go <<EOF
\`
EOF
# All functions, variables which must exist are put in this file.
# This way, build works before we generate the right things.
cat > fast-path.generated.go <<EOF
package codec
import "reflect"
// func GenBytesToStringRO(b []byte) string { return string(b) }
func fastpathDecodeTypeSwitch(iv interface{}, d *Decoder) bool { return false }
func fastpathEncodeTypeSwitch(iv interface{}, e *Encoder) bool { return false }
func fastpathEncodeTypeSwitchSlice(iv interface{}, e *Encoder) bool { return false }
func fastpathEncodeTypeSwitchMap(iv interface{}, e *Encoder) bool { return false }
type fastpathE struct {
rtid uintptr
rt reflect.Type
encfn func(encFnInfo, reflect.Value)
decfn func(decFnInfo, reflect.Value)
}
type fastpathA [0]fastpathE
func (x fastpathA) index(rtid uintptr) int { return -1 }
var fastpathAV fastpathA
EOF
cat > gen-from-tmpl.generated.go <<EOF
//+build ignore
package main
//import "flag"
import "ugorji.net/codec"
import "os"
func run(fnameIn, fnameOut string, safe bool) {
fin, err := os.Open(fnameIn)
if err != nil { panic(err) }
defer fin.Close()
fout, err := os.Create(fnameOut)
if err != nil { panic(err) }
defer fout.Close()
err = codec.GenInternalGoFile(fin, fout, safe)
if err != nil { panic(err) }
}
func main() {
// do not make safe/unsafe variants.
// Instead, depend on escape analysis, and place string creation and usage appropriately.
// run("unsafe.go.tmpl", "safe.generated.go", true)
// run("unsafe.go.tmpl", "unsafe.generated.go", false)
run("fast-path.go.tmpl", "fast-path.generated.go", false)
run("gen-helper.go.tmpl", "gen-helper.generated.go", false)
}
EOF
go run gen-from-tmpl.generated.go && \
rm -f gen-from-tmpl.generated.go
}
_codegenerators() {
if [[ $zforce == "1" ||
"1" == $( _needgen "values_codecgen${zsfx}" ) ||
"1" == $( _needgen "values_msgp${zsfx}" ) ||
"1" == $( _needgen "values_ffjson${zsfx}" ) ||
1 == 0 ]]
then
true && \
echo "codecgen - !unsafe ... " && \
codecgen -rt codecgen -t 'x,codecgen,!unsafe' -o values_codecgen${zsfx} $zfin && \
echo "codecgen - unsafe ... " && \
codecgen -u -rt codecgen -t 'x,codecgen,unsafe' -o values_codecgen_unsafe${zsfx} $zfin && \
echo "msgp ... " && \
msgp -tests=false -pkg=codec -o=values_msgp${zsfx} -file=$zfin && \
echo "ffjson ... " && \
ffjson -w values_ffjson${zsfx} $zfin && \
# remove (M|Unm)arshalJSON implementations, so they don't conflict with encoding/json bench \
sed -i 's+ MarshalJSON(+ _MarshalJSON(+g' values_ffjson${zsfx} && \
sed -i 's+ UnmarshalJSON(+ _UnmarshalJSON(+g' values_ffjson${zsfx} && \
echo "generators done!" && \
true
fi
}
# _init reads the arguments and sets up the flags
_init() {
OPTIND=1
while getopts "fb" flag
do
case "x$flag" in
'xf') zforce=1;;
'xb') zbak=1;;
*) echo "prebuild.sh accepts [-fb] only"; return 1;;
esac
done
shift $((OPTIND-1))
OPTIND=1
}
# main script.
# First ensure that this is being run from the basedir (i.e. dirname of script is .)
if [ "." = `dirname $0` ]
then
zmydir=`pwd`
zfin="test_values.generated.go"
zsfx="_generated_test.go"
# rm -f *_generated_test.go
rm -f codecgen-*.go && \
_init "$@" && \
_build && \
cp $zmydir/values_test.go $zmydir/$zfin && \
_codegenerators && \
echo prebuild done successfully
rm -f $zmydir/$zfin
else
echo "Script must be run from the directory it resides in"
fi

View file

@ -0,0 +1,180 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
import (
"bufio"
"io"
"net/rpc"
"sync"
)
// rpcEncodeTerminator allows a handler to specify a []byte terminator to send after each Encode.
//
// Some codecs like json need to put a space after each encoded value, to serve as a
// delimiter for things like numbers (else json codec will continue reading till EOF).
type rpcEncodeTerminator interface {
rpcEncodeTerminate() []byte
}
// Rpc provides a rpc Server or Client Codec for rpc communication.
type Rpc interface {
ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec
ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec
}
// RpcCodecBuffered allows access to the underlying bufio.Reader/Writer
// used by the rpc connection. It accommodates use-cases where the connection
// should be used by rpc and non-rpc functions, e.g. streaming a file after
// sending an rpc response.
type RpcCodecBuffered interface {
BufferedReader() *bufio.Reader
BufferedWriter() *bufio.Writer
}
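// exampleBufferedStream is an illustrative sketch, not part of the original file:
// it assumes an established connection and shows how a codec produced by GoRpc
// (defined below) can be asserted to RpcCodecBuffered so the same connection is
// reused for non-rpc streaming, e.g. sending a raw payload after an rpc response.
func exampleBufferedStream(conn io.ReadWriteCloser, h Handle, payload []byte) error {
	sc := GoRpc.ServerCodec(conn, h)
	if bc, ok := sc.(RpcCodecBuffered); ok {
		if _, err := bc.BufferedWriter().Write(payload); err != nil {
			return err
		}
		return bc.BufferedWriter().Flush()
	}
	return nil
}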
// -------------------------------------
// rpcCodec defines the struct members and common methods.
type rpcCodec struct {
rwc io.ReadWriteCloser
dec *Decoder
enc *Encoder
bw *bufio.Writer
br *bufio.Reader
mu sync.Mutex
h Handle
cls bool
clsmu sync.RWMutex
}
func newRPCCodec(conn io.ReadWriteCloser, h Handle) rpcCodec {
bw := bufio.NewWriter(conn)
br := bufio.NewReader(conn)
return rpcCodec{
rwc: conn,
bw: bw,
br: br,
enc: NewEncoder(bw, h),
dec: NewDecoder(br, h),
h: h,
}
}
func (c *rpcCodec) BufferedReader() *bufio.Reader {
return c.br
}
func (c *rpcCodec) BufferedWriter() *bufio.Writer {
return c.bw
}
func (c *rpcCodec) write(obj1, obj2 interface{}, writeObj2, doFlush bool) (err error) {
if c.isClosed() {
return io.EOF
}
if err = c.enc.Encode(obj1); err != nil {
return
}
t, tOk := c.h.(rpcEncodeTerminator)
if tOk {
c.bw.Write(t.rpcEncodeTerminate())
}
if writeObj2 {
if err = c.enc.Encode(obj2); err != nil {
return
}
if tOk {
c.bw.Write(t.rpcEncodeTerminate())
}
}
if doFlush {
return c.bw.Flush()
}
return
}
func (c *rpcCodec) read(obj interface{}) (err error) {
if c.isClosed() {
return io.EOF
}
// If nil is passed in, we should still attempt to read content to nowhere.
if obj == nil {
var obj2 interface{}
return c.dec.Decode(&obj2)
}
return c.dec.Decode(obj)
}
func (c *rpcCodec) isClosed() bool {
c.clsmu.RLock()
x := c.cls
c.clsmu.RUnlock()
return x
}
func (c *rpcCodec) Close() error {
if c.isClosed() {
return io.EOF
}
c.clsmu.Lock()
c.cls = true
c.clsmu.Unlock()
return c.rwc.Close()
}
func (c *rpcCodec) ReadResponseBody(body interface{}) error {
return c.read(body)
}
// -------------------------------------
type goRpcCodec struct {
rpcCodec
}
func (c *goRpcCodec) WriteRequest(r *rpc.Request, body interface{}) error {
// Must protect for concurrent access as per API
c.mu.Lock()
defer c.mu.Unlock()
return c.write(r, body, true, true)
}
func (c *goRpcCodec) WriteResponse(r *rpc.Response, body interface{}) error {
c.mu.Lock()
defer c.mu.Unlock()
return c.write(r, body, true, true)
}
func (c *goRpcCodec) ReadResponseHeader(r *rpc.Response) error {
return c.read(r)
}
func (c *goRpcCodec) ReadRequestHeader(r *rpc.Request) error {
return c.read(r)
}
func (c *goRpcCodec) ReadRequestBody(body interface{}) error {
return c.read(body)
}
// -------------------------------------
// goRpc is the implementation of Rpc that uses the communication protocol
// as defined in net/rpc package.
type goRpc struct{}
// GoRpc implements Rpc using the communication protocol defined in net/rpc package.
// Its methods (ServerCodec and ClientCodec) return values that implement RpcCodecBuffered.
var GoRpc goRpc
func (x goRpc) ServerCodec(conn io.ReadWriteCloser, h Handle) rpc.ServerCodec {
return &goRpcCodec{newRPCCodec(conn, h)}
}
func (x goRpc) ClientCodec(conn io.ReadWriteCloser, h Handle) rpc.ClientCodec {
return &goRpcCodec{newRPCCodec(conn, h)}
}
var _ RpcCodecBuffered = (*rpcCodec)(nil) // ensure *rpcCodec implements RpcCodecBuffered

View file

@ -0,0 +1,505 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
import "math"
const (
_ uint8 = iota
simpleVdNil = 1
simpleVdFalse = 2
simpleVdTrue = 3
simpleVdFloat32 = 4
simpleVdFloat64 = 5
// each lasts for 4 (ie n, n+1, n+2, n+3)
simpleVdPosInt = 8
simpleVdNegInt = 12
// containers: each lasts for 8 (ie n, n+1, n+2, ... n+7)
simpleVdString = 216
simpleVdByteArray = 224
simpleVdArray = 232
simpleVdMap = 240
simpleVdExt = 248
)
type simpleEncDriver struct {
e *Encoder
h *SimpleHandle
w encWriter
noBuiltInTypes
b [8]byte
encNoSeparator
}
func (e *simpleEncDriver) EncodeNil() {
e.w.writen1(simpleVdNil)
}
func (e *simpleEncDriver) EncodeBool(b bool) {
if b {
e.w.writen1(simpleVdTrue)
} else {
e.w.writen1(simpleVdFalse)
}
}
func (e *simpleEncDriver) EncodeFloat32(f float32) {
e.w.writen1(simpleVdFloat32)
bigenHelper{e.b[:4], e.w}.writeUint32(math.Float32bits(f))
}
func (e *simpleEncDriver) EncodeFloat64(f float64) {
e.w.writen1(simpleVdFloat64)
bigenHelper{e.b[:8], e.w}.writeUint64(math.Float64bits(f))
}
func (e *simpleEncDriver) EncodeInt(v int64) {
if v < 0 {
e.encUint(uint64(-v), simpleVdNegInt)
} else {
e.encUint(uint64(v), simpleVdPosInt)
}
}
func (e *simpleEncDriver) EncodeUint(v uint64) {
e.encUint(v, simpleVdPosInt)
}
func (e *simpleEncDriver) encUint(v uint64, bd uint8) {
if v <= math.MaxUint8 {
e.w.writen2(bd, uint8(v))
} else if v <= math.MaxUint16 {
e.w.writen1(bd + 1)
bigenHelper{e.b[:2], e.w}.writeUint16(uint16(v))
} else if v <= math.MaxUint32 {
e.w.writen1(bd + 2)
bigenHelper{e.b[:4], e.w}.writeUint32(uint32(v))
} else { // if v <= math.MaxUint64 {
e.w.writen1(bd + 3)
bigenHelper{e.b[:8], e.w}.writeUint64(v)
}
}
func (e *simpleEncDriver) encLen(bd byte, length int) {
if length == 0 {
e.w.writen1(bd)
} else if length <= math.MaxUint8 {
e.w.writen1(bd + 1)
e.w.writen1(uint8(length))
} else if length <= math.MaxUint16 {
e.w.writen1(bd + 2)
bigenHelper{e.b[:2], e.w}.writeUint16(uint16(length))
} else if int64(length) <= math.MaxUint32 {
e.w.writen1(bd + 3)
bigenHelper{e.b[:4], e.w}.writeUint32(uint32(length))
} else {
e.w.writen1(bd + 4)
bigenHelper{e.b[:8], e.w}.writeUint64(uint64(length))
}
}
func (e *simpleEncDriver) EncodeExt(rv interface{}, xtag uint64, ext Ext, _ *Encoder) {
bs := ext.WriteExt(rv)
if bs == nil {
e.EncodeNil()
return
}
e.encodeExtPreamble(uint8(xtag), len(bs))
e.w.writeb(bs)
}
func (e *simpleEncDriver) EncodeRawExt(re *RawExt, _ *Encoder) {
e.encodeExtPreamble(uint8(re.Tag), len(re.Data))
e.w.writeb(re.Data)
}
func (e *simpleEncDriver) encodeExtPreamble(xtag byte, length int) {
e.encLen(simpleVdExt, length)
e.w.writen1(xtag)
}
func (e *simpleEncDriver) EncodeArrayStart(length int) {
e.encLen(simpleVdArray, length)
}
func (e *simpleEncDriver) EncodeMapStart(length int) {
e.encLen(simpleVdMap, length)
}
func (e *simpleEncDriver) EncodeString(c charEncoding, v string) {
e.encLen(simpleVdString, len(v))
e.w.writestr(v)
}
func (e *simpleEncDriver) EncodeSymbol(v string) {
e.EncodeString(c_UTF8, v)
}
func (e *simpleEncDriver) EncodeStringBytes(c charEncoding, v []byte) {
e.encLen(simpleVdByteArray, len(v))
e.w.writeb(v)
}
//------------------------------------
type simpleDecDriver struct {
d *Decoder
h *SimpleHandle
r decReader
bdRead bool
bdType valueType
bd byte
br bool // bytes reader
noBuiltInTypes
noStreamingCodec
decNoSeparator
b [scratchByteArrayLen]byte
}
func (d *simpleDecDriver) readNextBd() {
d.bd = d.r.readn1()
d.bdRead = true
d.bdType = valueTypeUnset
}
func (d *simpleDecDriver) IsContainerType(vt valueType) bool {
switch vt {
case valueTypeNil:
return d.bd == simpleVdNil
case valueTypeBytes:
const x uint8 = simpleVdByteArray
return d.bd == x || d.bd == x+1 || d.bd == x+2 || d.bd == x+3 || d.bd == x+4
case valueTypeString:
const x uint8 = simpleVdString
return d.bd == x || d.bd == x+1 || d.bd == x+2 || d.bd == x+3 || d.bd == x+4
case valueTypeArray:
const x uint8 = simpleVdArray
return d.bd == x || d.bd == x+1 || d.bd == x+2 || d.bd == x+3 || d.bd == x+4
case valueTypeMap:
const x uint8 = simpleVdMap
return d.bd == x || d.bd == x+1 || d.bd == x+2 || d.bd == x+3 || d.bd == x+4
}
d.d.errorf("isContainerType: unsupported parameter: %v", vt)
return false // "unreachable"
}
func (d *simpleDecDriver) TryDecodeAsNil() bool {
if !d.bdRead {
d.readNextBd()
}
if d.bd == simpleVdNil {
d.bdRead = false
return true
}
return false
}
func (d *simpleDecDriver) decCheckInteger() (ui uint64, neg bool) {
if !d.bdRead {
d.readNextBd()
}
switch d.bd {
case simpleVdPosInt:
ui = uint64(d.r.readn1())
case simpleVdPosInt + 1:
ui = uint64(bigen.Uint16(d.r.readx(2)))
case simpleVdPosInt + 2:
ui = uint64(bigen.Uint32(d.r.readx(4)))
case simpleVdPosInt + 3:
ui = uint64(bigen.Uint64(d.r.readx(8)))
case simpleVdNegInt:
ui = uint64(d.r.readn1())
neg = true
case simpleVdNegInt + 1:
ui = uint64(bigen.Uint16(d.r.readx(2)))
neg = true
case simpleVdNegInt + 2:
ui = uint64(bigen.Uint32(d.r.readx(4)))
neg = true
case simpleVdNegInt + 3:
ui = uint64(bigen.Uint64(d.r.readx(8)))
neg = true
default:
d.d.errorf("decIntAny: Integer only valid from pos/neg integer1..8. Invalid descriptor: %v", d.bd)
return
}
// don't do this check, because callers may only want the unsigned value.
// if ui > math.MaxInt64 {
// d.d.errorf("decIntAny: Integer out of range for signed int64: %v", ui)
// return
// }
return
}
func (d *simpleDecDriver) DecodeInt(bitsize uint8) (i int64) {
ui, neg := d.decCheckInteger()
i, overflow := chkOvf.SignedInt(ui)
if overflow {
d.d.errorf("simple: overflow converting %v to signed integer", ui)
return
}
if neg {
i = -i
}
if chkOvf.Int(i, bitsize) {
d.d.errorf("simple: overflow integer: %v", i)
return
}
d.bdRead = false
return
}
func (d *simpleDecDriver) DecodeUint(bitsize uint8) (ui uint64) {
ui, neg := d.decCheckInteger()
if neg {
d.d.errorf("Assigning negative signed value to unsigned type")
return
}
if chkOvf.Uint(ui, bitsize) {
d.d.errorf("simple: overflow integer: %v", ui)
return
}
d.bdRead = false
return
}
func (d *simpleDecDriver) DecodeFloat(chkOverflow32 bool) (f float64) {
if !d.bdRead {
d.readNextBd()
}
if d.bd == simpleVdFloat32 {
f = float64(math.Float32frombits(bigen.Uint32(d.r.readx(4))))
} else if d.bd == simpleVdFloat64 {
f = math.Float64frombits(bigen.Uint64(d.r.readx(8)))
} else {
if d.bd >= simpleVdPosInt && d.bd <= simpleVdNegInt+3 {
f = float64(d.DecodeInt(64))
} else {
d.d.errorf("Float only valid from float32/64: Invalid descriptor: %v", d.bd)
return
}
}
if chkOverflow32 && chkOvf.Float32(f) {
d.d.errorf("msgpack: float32 overflow: %v", f)
return
}
d.bdRead = false
return
}
// bool can be decoded from bool only (single byte).
func (d *simpleDecDriver) DecodeBool() (b bool) {
if !d.bdRead {
d.readNextBd()
}
if d.bd == simpleVdTrue {
b = true
} else if d.bd == simpleVdFalse {
} else {
d.d.errorf("Invalid single-byte value for bool: %s: %x", msgBadDesc, d.bd)
return
}
d.bdRead = false
return
}
func (d *simpleDecDriver) ReadMapStart() (length int) {
d.bdRead = false
return d.decLen()
}
func (d *simpleDecDriver) ReadArrayStart() (length int) {
d.bdRead = false
return d.decLen()
}
func (d *simpleDecDriver) decLen() int {
switch d.bd % 8 {
case 0:
return 0
case 1:
return int(d.r.readn1())
case 2:
return int(bigen.Uint16(d.r.readx(2)))
case 3:
ui := uint64(bigen.Uint32(d.r.readx(4)))
if chkOvf.Uint(ui, intBitsize) {
d.d.errorf("simple: overflow integer: %v", ui)
return 0
}
return int(ui)
case 4:
ui := bigen.Uint64(d.r.readx(8))
if chkOvf.Uint(ui, intBitsize) {
d.d.errorf("simple: overflow integer: %v", ui)
return 0
}
return int(ui)
}
d.d.errorf("decLen: Cannot read length: bd%8 must be in range 0..4. Got: %d", d.bd%8)
return -1
}
func (d *simpleDecDriver) DecodeString() (s string) {
return string(d.DecodeBytes(d.b[:], true, true))
}
func (d *simpleDecDriver) DecodeBytes(bs []byte, isstring, zerocopy bool) (bsOut []byte) {
if !d.bdRead {
d.readNextBd()
}
if d.bd == simpleVdNil {
d.bdRead = false
return
}
clen := d.decLen()
d.bdRead = false
if zerocopy {
if d.br {
return d.r.readx(clen)
} else if len(bs) == 0 {
bs = d.b[:]
}
}
return decByteSlice(d.r, clen, bs)
}
func (d *simpleDecDriver) DecodeExt(rv interface{}, xtag uint64, ext Ext) (realxtag uint64) {
if xtag > 0xff {
d.d.errorf("decodeExt: tag must be <= 0xff; got: %v", xtag)
return
}
realxtag1, xbs := d.decodeExtV(ext != nil, uint8(xtag))
realxtag = uint64(realxtag1)
if ext == nil {
re := rv.(*RawExt)
re.Tag = realxtag
re.Data = detachZeroCopyBytes(d.br, re.Data, xbs)
} else {
ext.ReadExt(rv, xbs)
}
return
}
func (d *simpleDecDriver) decodeExtV(verifyTag bool, tag byte) (xtag byte, xbs []byte) {
if !d.bdRead {
d.readNextBd()
}
switch d.bd {
case simpleVdExt, simpleVdExt + 1, simpleVdExt + 2, simpleVdExt + 3, simpleVdExt + 4:
l := d.decLen()
xtag = d.r.readn1()
if verifyTag && xtag != tag {
d.d.errorf("Wrong extension tag. Got %b. Expecting: %v", xtag, tag)
return
}
xbs = d.r.readx(l)
case simpleVdByteArray, simpleVdByteArray + 1, simpleVdByteArray + 2, simpleVdByteArray + 3, simpleVdByteArray + 4:
xbs = d.DecodeBytes(nil, false, true)
default:
d.d.errorf("Invalid d.bd for extensions (Expecting extensions or byte array). Got: 0x%x", d.bd)
return
}
d.bdRead = false
return
}
func (d *simpleDecDriver) DecodeNaked() (v interface{}, vt valueType, decodeFurther bool) {
if !d.bdRead {
d.readNextBd()
}
switch d.bd {
case simpleVdNil:
vt = valueTypeNil
case simpleVdFalse:
vt = valueTypeBool
v = false
case simpleVdTrue:
vt = valueTypeBool
v = true
case simpleVdPosInt, simpleVdPosInt + 1, simpleVdPosInt + 2, simpleVdPosInt + 3:
if d.h.SignedInteger {
vt = valueTypeInt
v = d.DecodeInt(64)
} else {
vt = valueTypeUint
v = d.DecodeUint(64)
}
case simpleVdNegInt, simpleVdNegInt + 1, simpleVdNegInt + 2, simpleVdNegInt + 3:
vt = valueTypeInt
v = d.DecodeInt(64)
case simpleVdFloat32:
vt = valueTypeFloat
v = d.DecodeFloat(true)
case simpleVdFloat64:
vt = valueTypeFloat
v = d.DecodeFloat(false)
case simpleVdString, simpleVdString + 1, simpleVdString + 2, simpleVdString + 3, simpleVdString + 4:
vt = valueTypeString
v = d.DecodeString()
case simpleVdByteArray, simpleVdByteArray + 1, simpleVdByteArray + 2, simpleVdByteArray + 3, simpleVdByteArray + 4:
vt = valueTypeBytes
v = d.DecodeBytes(nil, false, false)
case simpleVdExt, simpleVdExt + 1, simpleVdExt + 2, simpleVdExt + 3, simpleVdExt + 4:
vt = valueTypeExt
l := d.decLen()
var re RawExt
re.Tag = uint64(d.r.readn1())
re.Data = d.r.readx(l)
v = &re
case simpleVdArray, simpleVdArray + 1, simpleVdArray + 2, simpleVdArray + 3, simpleVdArray + 4:
vt = valueTypeArray
decodeFurther = true
case simpleVdMap, simpleVdMap + 1, simpleVdMap + 2, simpleVdMap + 3, simpleVdMap + 4:
vt = valueTypeMap
decodeFurther = true
default:
d.d.errorf("decodeNaked: Unrecognized d.bd: 0x%x", d.bd)
return
}
if !decodeFurther {
d.bdRead = false
}
return
}
//------------------------------------
// SimpleHandle is a Handle for a very simple encoding format.
//
// simple is a simplistic codec similar to binc, but not as compact.
// - Encoding of a value is always preceded by the descriptor byte (bd)
// - True, false, nil are encoded fully in 1 byte (the descriptor)
// - Integers (intXXX, uintXXX) are encoded in 1, 2, 4 or 8 bytes (plus a descriptor byte).
// There are positive (uintXXX and intXXX >= 0) and negative (intXXX < 0) integers.
// - Floats are encoded in 4 or 8 bytes (plus a descriptor byte)
// - Lengths of containers (strings, bytes, array, map, extensions)
// are encoded in 0, 1, 2, 4 or 8 bytes.
// Zero-length containers have no length encoded.
// For others, the number of length bytes is given by pow(2, (bd%8)-1)
// - maps are encoded as [bd] [length] [[key][value]]...
// - arrays are encoded as [bd] [length] [value]...
// - extensions are encoded as [bd] [length] [tag] [byte]...
// - strings/bytearrays are encoded as [bd] [length] [byte]...
//
// The full spec will be published soon.
type SimpleHandle struct {
BasicHandle
binaryEncodingType
}
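// exampleSimpleRoundTrip is a minimal sketch, not part of the original file:
// it assumes only the package's exported NewEncoderBytes/NewDecoderBytes API and
// round-trips a struct through SimpleHandle.
func exampleSimpleRoundTrip() error {
	type point struct{ X, Y int }
	var sh SimpleHandle
	var buf []byte
	if err := NewEncoderBytes(&buf, &sh).Encode(point{X: 1, Y: 2}); err != nil {
		return err
	}
	var out point
	return NewDecoderBytes(buf, &sh).Decode(&out)
}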
func (h *SimpleHandle) newEncDriver(e *Encoder) encDriver {
return &simpleEncDriver{e: e, w: e.w, h: h}
}
func (h *SimpleHandle) newDecDriver(d *Decoder) decDriver {
return &simpleDecDriver{d: d, r: d.r, h: h, br: d.bytes}
}
var _ decDriver = (*simpleDecDriver)(nil)
var _ encDriver = (*simpleEncDriver)(nil)

View file

@ -0,0 +1,639 @@
[
{
"cbor": "AA==",
"hex": "00",
"roundtrip": true,
"decoded": 0
},
{
"cbor": "AQ==",
"hex": "01",
"roundtrip": true,
"decoded": 1
},
{
"cbor": "Cg==",
"hex": "0a",
"roundtrip": true,
"decoded": 10
},
{
"cbor": "Fw==",
"hex": "17",
"roundtrip": true,
"decoded": 23
},
{
"cbor": "GBg=",
"hex": "1818",
"roundtrip": true,
"decoded": 24
},
{
"cbor": "GBk=",
"hex": "1819",
"roundtrip": true,
"decoded": 25
},
{
"cbor": "GGQ=",
"hex": "1864",
"roundtrip": true,
"decoded": 100
},
{
"cbor": "GQPo",
"hex": "1903e8",
"roundtrip": true,
"decoded": 1000
},
{
"cbor": "GgAPQkA=",
"hex": "1a000f4240",
"roundtrip": true,
"decoded": 1000000
},
{
"cbor": "GwAAAOjUpRAA",
"hex": "1b000000e8d4a51000",
"roundtrip": true,
"decoded": 1000000000000
},
{
"cbor": "G///////////",
"hex": "1bffffffffffffffff",
"roundtrip": true,
"decoded": 18446744073709551615
},
{
"cbor": "wkkBAAAAAAAAAAA=",
"hex": "c249010000000000000000",
"roundtrip": true,
"decoded": 18446744073709551616
},
{
"cbor": "O///////////",
"hex": "3bffffffffffffffff",
"roundtrip": true,
"decoded": -18446744073709551616,
"skip": true
},
{
"cbor": "w0kBAAAAAAAAAAA=",
"hex": "c349010000000000000000",
"roundtrip": true,
"decoded": -18446744073709551617
},
{
"cbor": "IA==",
"hex": "20",
"roundtrip": true,
"decoded": -1
},
{
"cbor": "KQ==",
"hex": "29",
"roundtrip": true,
"decoded": -10
},
{
"cbor": "OGM=",
"hex": "3863",
"roundtrip": true,
"decoded": -100
},
{
"cbor": "OQPn",
"hex": "3903e7",
"roundtrip": true,
"decoded": -1000
},
{
"cbor": "+QAA",
"hex": "f90000",
"roundtrip": true,
"decoded": 0.0
},
{
"cbor": "+YAA",
"hex": "f98000",
"roundtrip": true,
"decoded": -0.0
},
{
"cbor": "+TwA",
"hex": "f93c00",
"roundtrip": true,
"decoded": 1.0
},
{
"cbor": "+z/xmZmZmZma",
"hex": "fb3ff199999999999a",
"roundtrip": true,
"decoded": 1.1
},
{
"cbor": "+T4A",
"hex": "f93e00",
"roundtrip": true,
"decoded": 1.5
},
{
"cbor": "+Xv/",
"hex": "f97bff",
"roundtrip": true,
"decoded": 65504.0
},
{
"cbor": "+kfDUAA=",
"hex": "fa47c35000",
"roundtrip": true,
"decoded": 100000.0
},
{
"cbor": "+n9///8=",
"hex": "fa7f7fffff",
"roundtrip": true,
"decoded": 3.4028234663852886e+38
},
{
"cbor": "+3435DyIAHWc",
"hex": "fb7e37e43c8800759c",
"roundtrip": true,
"decoded": 1.0e+300
},
{
"cbor": "+QAB",
"hex": "f90001",
"roundtrip": true,
"decoded": 5.960464477539063e-08
},
{
"cbor": "+QQA",
"hex": "f90400",
"roundtrip": true,
"decoded": 6.103515625e-05
},
{
"cbor": "+cQA",
"hex": "f9c400",
"roundtrip": true,
"decoded": -4.0
},
{
"cbor": "+8AQZmZmZmZm",
"hex": "fbc010666666666666",
"roundtrip": true,
"decoded": -4.1
},
{
"cbor": "+XwA",
"hex": "f97c00",
"roundtrip": true,
"diagnostic": "Infinity"
},
{
"cbor": "+X4A",
"hex": "f97e00",
"roundtrip": true,
"diagnostic": "NaN"
},
{
"cbor": "+fwA",
"hex": "f9fc00",
"roundtrip": true,
"diagnostic": "-Infinity"
},
{
"cbor": "+n+AAAA=",
"hex": "fa7f800000",
"roundtrip": false,
"diagnostic": "Infinity"
},
{
"cbor": "+n/AAAA=",
"hex": "fa7fc00000",
"roundtrip": false,
"diagnostic": "NaN"
},
{
"cbor": "+v+AAAA=",
"hex": "faff800000",
"roundtrip": false,
"diagnostic": "-Infinity"
},
{
"cbor": "+3/wAAAAAAAA",
"hex": "fb7ff0000000000000",
"roundtrip": false,
"diagnostic": "Infinity"
},
{
"cbor": "+3/4AAAAAAAA",
"hex": "fb7ff8000000000000",
"roundtrip": false,
"diagnostic": "NaN"
},
{
"cbor": "+//wAAAAAAAA",
"hex": "fbfff0000000000000",
"roundtrip": false,
"diagnostic": "-Infinity"
},
{
"cbor": "9A==",
"hex": "f4",
"roundtrip": true,
"decoded": false
},
{
"cbor": "9Q==",
"hex": "f5",
"roundtrip": true,
"decoded": true
},
{
"cbor": "9g==",
"hex": "f6",
"roundtrip": true,
"decoded": null
},
{
"cbor": "9w==",
"hex": "f7",
"roundtrip": true,
"diagnostic": "undefined"
},
{
"cbor": "8A==",
"hex": "f0",
"roundtrip": true,
"diagnostic": "simple(16)"
},
{
"cbor": "+Bg=",
"hex": "f818",
"roundtrip": true,
"diagnostic": "simple(24)"
},
{
"cbor": "+P8=",
"hex": "f8ff",
"roundtrip": true,
"diagnostic": "simple(255)"
},
{
"cbor": "wHQyMDEzLTAzLTIxVDIwOjA0OjAwWg==",
"hex": "c074323031332d30332d32315432303a30343a30305a",
"roundtrip": true,
"diagnostic": "0(\"2013-03-21T20:04:00Z\")"
},
{
"cbor": "wRpRS2ew",
"hex": "c11a514b67b0",
"roundtrip": true,
"diagnostic": "1(1363896240)"
},
{
"cbor": "wftB1FLZ7CAAAA==",
"hex": "c1fb41d452d9ec200000",
"roundtrip": true,
"diagnostic": "1(1363896240.5)"
},
{
"cbor": "10QBAgME",
"hex": "d74401020304",
"roundtrip": true,
"diagnostic": "23(h'01020304')"
},
{
"cbor": "2BhFZElFVEY=",
"hex": "d818456449455446",
"roundtrip": true,
"diagnostic": "24(h'6449455446')"
},
{
"cbor": "2CB2aHR0cDovL3d3dy5leGFtcGxlLmNvbQ==",
"hex": "d82076687474703a2f2f7777772e6578616d706c652e636f6d",
"roundtrip": true,
"diagnostic": "32(\"http://www.example.com\")"
},
{
"cbor": "QA==",
"hex": "40",
"roundtrip": true,
"diagnostic": "h''"
},
{
"cbor": "RAECAwQ=",
"hex": "4401020304",
"roundtrip": true,
"diagnostic": "h'01020304'"
},
{
"cbor": "YA==",
"hex": "60",
"roundtrip": true,
"decoded": ""
},
{
"cbor": "YWE=",
"hex": "6161",
"roundtrip": true,
"decoded": "a"
},
{
"cbor": "ZElFVEY=",
"hex": "6449455446",
"roundtrip": true,
"decoded": "IETF"
},
{
"cbor": "YiJc",
"hex": "62225c",
"roundtrip": true,
"decoded": "\"\\"
},
{
"cbor": "YsO8",
"hex": "62c3bc",
"roundtrip": true,
"decoded": "ü"
},
{
"cbor": "Y+awtA==",
"hex": "63e6b0b4",
"roundtrip": true,
"decoded": "水"
},
{
"cbor": "ZPCQhZE=",
"hex": "64f0908591",
"roundtrip": true,
"decoded": "𐅑"
},
{
"cbor": "gA==",
"hex": "80",
"roundtrip": true,
"decoded": [
]
},
{
"cbor": "gwECAw==",
"hex": "83010203",
"roundtrip": true,
"decoded": [
1,
2,
3
]
},
{
"cbor": "gwGCAgOCBAU=",
"hex": "8301820203820405",
"roundtrip": true,
"decoded": [
1,
[
2,
3
],
[
4,
5
]
]
},
{
"cbor": "mBkBAgMEBQYHCAkKCwwNDg8QERITFBUWFxgYGBk=",
"hex": "98190102030405060708090a0b0c0d0e0f101112131415161718181819",
"roundtrip": true,
"decoded": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25
]
},
{
"cbor": "oA==",
"hex": "a0",
"roundtrip": true,
"decoded": {
}
},
{
"cbor": "ogECAwQ=",
"hex": "a201020304",
"roundtrip": true,
"skip": true,
"diagnostic": "{1: 2, 3: 4}"
},
{
"cbor": "omFhAWFiggID",
"hex": "a26161016162820203",
"roundtrip": true,
"decoded": {
"a": 1,
"b": [
2,
3
]
}
},
{
"cbor": "gmFhoWFiYWM=",
"hex": "826161a161626163",
"roundtrip": true,
"decoded": [
"a",
{
"b": "c"
}
]
},
{
"cbor": "pWFhYUFhYmFCYWNhQ2FkYURhZWFF",
"hex": "a56161614161626142616361436164614461656145",
"roundtrip": true,
"decoded": {
"a": "A",
"b": "B",
"c": "C",
"d": "D",
"e": "E"
}
},
{
"cbor": "X0IBAkMDBAX/",
"hex": "5f42010243030405ff",
"roundtrip": false,
"skip": true,
"diagnostic": "(_ h'0102', h'030405')"
},
{
"cbor": "f2VzdHJlYWRtaW5n/w==",
"hex": "7f657374726561646d696e67ff",
"roundtrip": false,
"decoded": "streaming"
},
{
"cbor": "n/8=",
"hex": "9fff",
"roundtrip": false,
"decoded": [
]
},
{
"cbor": "nwGCAgOfBAX//w==",
"hex": "9f018202039f0405ffff",
"roundtrip": false,
"decoded": [
1,
[
2,
3
],
[
4,
5
]
]
},
{
"cbor": "nwGCAgOCBAX/",
"hex": "9f01820203820405ff",
"roundtrip": false,
"decoded": [
1,
[
2,
3
],
[
4,
5
]
]
},
{
"cbor": "gwGCAgOfBAX/",
"hex": "83018202039f0405ff",
"roundtrip": false,
"decoded": [
1,
[
2,
3
],
[
4,
5
]
]
},
{
"cbor": "gwGfAgP/ggQF",
"hex": "83019f0203ff820405",
"roundtrip": false,
"decoded": [
1,
[
2,
3
],
[
4,
5
]
]
},
{
"cbor": "nwECAwQFBgcICQoLDA0ODxAREhMUFRYXGBgYGf8=",
"hex": "9f0102030405060708090a0b0c0d0e0f101112131415161718181819ff",
"roundtrip": false,
"decoded": [
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14,
15,
16,
17,
18,
19,
20,
21,
22,
23,
24,
25
]
},
{
"cbor": "v2FhAWFinwID//8=",
"hex": "bf61610161629f0203ffff",
"roundtrip": false,
"decoded": {
"a": 1,
"b": [
2,
3
]
}
},
{
"cbor": "gmFhv2FiYWP/",
"hex": "826161bf61626163ff",
"roundtrip": false,
"decoded": [
"a",
{
"b": "c"
}
]
},
{
"cbor": "v2NGdW71Y0FtdCH/",
"hex": "bf6346756ef563416d7421ff",
"roundtrip": false,
"decoded": {
"Fun": true,
"Amt": -2
}
}
]

vendor/src/github.com/ugorji/go/codec/test.py vendored Executable file
View file

@ -0,0 +1,119 @@
#!/usr/bin/env python
# This will create golden files in a directory passed to it.
# A Test calls this internally to create the golden files
# So it can process them (so we don't have to checkin the files).
# Ensure msgpack-python and cbor are installed first, using:
# pip install --user msgpack-python
# pip install --user cbor
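# Example invocations (sketch; these follow the CLI handled in doMain below):
#   python test.py testdata /tmp/codec-golden
#   python test.py rpc-server 9000 10
#   python test.py rpc-client-go-service 9000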
import cbor, msgpack, msgpackrpc, sys, os, threading
def get_test_data_list():
# get list with all primitive types, and a combo type
l0 = [
-8,
-1616,
-32323232,
-6464646464646464,
192,
1616,
32323232,
6464646464646464,
192,
-3232.0,
-6464646464.0,
3232.0,
6464646464.0,
False,
True,
None,
u"someday",
u"",
u"bytestring",
1328176922000002000,
-2206187877999998000,
270,
-2013855847999995777,
#-6795364578871345152,
]
l1 = [
{ "true": True,
"false": False },
{ "true": "True",
"false": False,
"uint16(1616)": 1616 },
{ "list": [1616, 32323232, True, -3232.0, {"TRUE":True, "FALSE":False}, [True, False] ],
"int32":32323232, "bool": True,
"LONG STRING": "123456789012345678901234567890123456789012345678901234567890",
"SHORT STRING": "1234567890" },
{ True: "true", 8: False, "false": 0 }
]
l = []
l.extend(l0)
l.append(l0)
l.extend(l1)
return l
def build_test_data(destdir):
l = get_test_data_list()
for i in range(len(l)):
# packer = msgpack.Packer()
serialized = msgpack.dumps(l[i])
f = open(os.path.join(destdir, str(i) + '.msgpack.golden'), 'wb')
f.write(serialized)
f.close()
serialized = cbor.dumps(l[i])
f = open(os.path.join(destdir, str(i) + '.cbor.golden'), 'wb')
f.write(serialized)
f.close()
def doRpcServer(port, stopTimeSec):
class EchoHandler(object):
def Echo123(self, msg1, msg2, msg3):
return ("1:%s 2:%s 3:%s" % (msg1, msg2, msg3))
def EchoStruct(self, msg):
return ("%s" % msg)
addr = msgpackrpc.Address('localhost', port)
server = msgpackrpc.Server(EchoHandler())
server.listen(addr)
# run thread to stop it after stopTimeSec seconds if > 0
if stopTimeSec > 0:
def myStopRpcServer():
server.stop()
t = threading.Timer(stopTimeSec, myStopRpcServer)
t.start()
server.start()
def doRpcClientToPythonSvc(port):
address = msgpackrpc.Address('localhost', port)
client = msgpackrpc.Client(address, unpack_encoding='utf-8')
print client.call("Echo123", "A1", "B2", "C3")
print client.call("EchoStruct", {"A" :"Aa", "B":"Bb", "C":"Cc"})
def doRpcClientToGoSvc(port):
# print ">>>> port: ", port, " <<<<<"
address = msgpackrpc.Address('localhost', port)
client = msgpackrpc.Client(address, unpack_encoding='utf-8')
print client.call("TestRpcInt.Echo123", ["A1", "B2", "C3"])
print client.call("TestRpcInt.EchoStruct", {"A" :"Aa", "B":"Bb", "C":"Cc"})
def doMain(args):
if len(args) == 2 and args[0] == "testdata":
build_test_data(args[1])
elif len(args) == 3 and args[0] == "rpc-server":
doRpcServer(int(args[1]), int(args[2]))
elif len(args) == 2 and args[0] == "rpc-client-python-service":
doRpcClientToPythonSvc(int(args[1]))
elif len(args) == 2 and args[0] == "rpc-client-go-service":
doRpcClientToGoSvc(int(args[1]))
else:
print("Usage: test.py " +
"[testdata|rpc-server|rpc-client-python-service|rpc-client-go-service] ...")
if __name__ == "__main__":
doMain(sys.argv[1:])

View file

@ -0,0 +1,193 @@
// Copyright (c) 2012-2015 Ugorji Nwoke. All rights reserved.
// Use of this source code is governed by a MIT license found in the LICENSE file.
package codec
import (
"time"
)
var (
timeDigits = [...]byte{'0', '1', '2', '3', '4', '5', '6', '7', '8', '9'}
)
// EncodeTime encodes a time.Time as a []byte, including
// information on the instant in time and UTC offset.
//
// Format Description
//
// A timestamp is composed of 3 components:
//
// - secs: signed integer representing seconds since unix epoch
// - nsecs: unsigned integer representing fractional seconds as a
// nanosecond offset within secs, in the range 0 <= nsecs < 1e9
// - tz: signed integer representing timezone offset in minutes east of UTC,
// and a dst (daylight savings time) flag
//
// When encoding a timestamp, the first byte is the descriptor, which
// defines which components are encoded and how many bytes are used to
// encode secs and nsecs components. *If secs/nsecs is 0 or tz is UTC, it
// is not encoded in the byte array explicitly*.
//
// Descriptor 8 bits are of the form `A B C DDD EE`:
// A: Is secs component encoded? 1 = true
// B: Is nsecs component encoded? 1 = true
// C: Is tz component encoded? 1 = true
// DDD: Number of extra bytes for secs (range 0-7).
// If A = 1, secs encoded in DDD+1 bytes.
// If A = 0, secs is not encoded, and is assumed to be 0.
// If A = 1, then we need at least 1 byte to encode secs.
// DDD says the number of extra bytes beyond that 1.
// E.g. if DDD=0, then secs is represented in 1 byte.
// if DDD=2, then secs is represented in 3 bytes.
// EE: Number of extra bytes for nsecs (range 0-3).
// If B = 1, nsecs encoded in EE+1 bytes (similar to secs/DDD above)
//
// Following the descriptor bytes, subsequent bytes are:
//
// secs component encoded in `DDD + 1` bytes (if A == 1)
// nsecs component encoded in `EE + 1` bytes (if B == 1)
// tz component encoded in 2 bytes (if C == 1)
//
// secs and nsecs components are integers encoded in a BigEndian
// two's-complement encoding format.
//
// tz component is encoded as 2 bytes (16 bits). Most significant bit 15 to
// least significant bit 0 are described below:
//
// Timezone offset has a range of -12:00 to +14:00 (i.e. -720 to +840 minutes).
// Bit 15 = have_dst: set to 1 if we set the dst flag.
// Bit 14 = dst_on: set to 1 if dst is in effect at the time, or 0 if not.
// Bits 13..0 = timezone offset in minutes. It is a signed integer in Big Endian format.
//
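// Worked example (illustrative; not part of the original format description):
// encoding 2009-11-10 23:00:00 UTC gives secs = 1257894000 (0x4AF9F070),
// nsecs = 0 and a UTC zone, so only secs is encoded (A=1, B=0, C=0).
// The value fits in 4 bytes, so DDD=3 and EE=0, giving descriptor
// 0b10001100 = 0x8C, and the full encoding is the 5 bytes:
//
//    0x8C 0x4A 0xF9 0xF0 0x70
//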
func encodeTime(t time.Time) []byte {
//t := rv.Interface().(time.Time)
tsecs, tnsecs := t.Unix(), t.Nanosecond()
var (
bd byte
btmp [8]byte
bs [16]byte
i int = 1
)
l := t.Location()
if l == time.UTC {
l = nil
}
if tsecs != 0 {
bd = bd | 0x80
bigen.PutUint64(btmp[:], uint64(tsecs))
f := pruneSignExt(btmp[:], tsecs >= 0)
bd = bd | (byte(7-f) << 2)
copy(bs[i:], btmp[f:])
i = i + (8 - f)
}
if tnsecs != 0 {
bd = bd | 0x40
bigen.PutUint32(btmp[:4], uint32(tnsecs))
f := pruneSignExt(btmp[:4], true)
bd = bd | byte(3-f)
copy(bs[i:], btmp[f:4])
i = i + (4 - f)
}
if l != nil {
bd = bd | 0x20
// Note that Go Libs do not give access to dst flag.
_, zoneOffset := t.Zone()
//zoneName, zoneOffset := t.Zone()
zoneOffset /= 60
z := uint16(zoneOffset)
bigen.PutUint16(btmp[:2], z)
// clear dst flags
bs[i] = btmp[0] & 0x3f
bs[i+1] = btmp[1]
i = i + 2
}
bs[0] = bd
return bs[0:i]
}
// DecodeTime decodes a []byte into a time.Time.
func decodeTime(bs []byte) (tt time.Time, err error) {
bd := bs[0]
var (
tsec int64
tnsec uint32
tz uint16
i byte = 1
i2 byte
n byte
)
if bd&(1<<7) != 0 {
var btmp [8]byte
n = ((bd >> 2) & 0x7) + 1
i2 = i + n
copy(btmp[8-n:], bs[i:i2])
//if first bit of bs[i] is set, then fill btmp[0..8-n] with 0xff (ie sign extend it)
if bs[i]&(1<<7) != 0 {
copy(btmp[0:8-n], bsAll0xff)
//for j,k := byte(0), 8-n; j < k; j++ { btmp[j] = 0xff }
}
i = i2
tsec = int64(bigen.Uint64(btmp[:]))
}
if bd&(1<<6) != 0 {
var btmp [4]byte
n = (bd & 0x3) + 1
i2 = i + n
copy(btmp[4-n:], bs[i:i2])
i = i2
tnsec = bigen.Uint32(btmp[:])
}
if bd&(1<<5) == 0 {
tt = time.Unix(tsec, int64(tnsec)).UTC()
return
}
// In stdlib time.Parse, when a date is parsed without a zone name, it uses "" as zone name.
// However, we need name here, so it can be shown when time is printed.
// Zone name is in form: UTC-08:00.
// Note that Go Libs do not give access to dst flag, so we ignore dst bits
i2 = i + 2
tz = bigen.Uint16(bs[i:i2])
i = i2
// sign extend sign bit into top 2 MSB (which were dst bits):
if tz&(1<<13) == 0 { // positive
tz = tz & 0x3fff //clear 2 MSBs: dst bits
} else { // negative
tz = tz | 0xc000 //set 2 MSBs: dst bits
//tzname[3] = '-' (TODO: verify. this works here)
}
tzint := int16(tz)
if tzint == 0 {
tt = time.Unix(tsec, int64(tnsec)).UTC()
} else {
// For Go Time, do not use a descriptive timezone.
// It's unnecessary, and makes it harder to do a reflect.DeepEqual.
// The Offset already tells what the offset should be, if not on UTC and unknown zone name.
// var zoneName = timeLocUTCName(tzint)
tt = time.Unix(tsec, int64(tnsec)).In(time.FixedZone("", int(tzint)*60))
}
return
}
func timeLocUTCName(tzint int16) string {
if tzint == 0 {
return "UTC"
}
var tzname = []byte("UTC+00:00")
//tzname := fmt.Sprintf("UTC%s%02d:%02d", tzsign, tz/60, tz%60) //perf issue using Sprintf. inline below.
//tzhr, tzmin := tz/60, tz%60 //faster if u convert to int first
var tzhr, tzmin int16
if tzint < 0 {
tzname[3] = '-' // (TODO: verify. this works here)
tzhr, tzmin = -tzint/60, (-tzint)%60
} else {
tzhr, tzmin = tzint/60, tzint%60
}
tzname[4] = timeDigits[tzhr/10]
tzname[5] = timeDigits[tzhr%10]
tzname[7] = timeDigits[tzmin/10]
tzname[8] = timeDigits[tzmin%10]
return string(tzname)
//return time.FixedZone(string(tzname), int(tzint)*60)
}
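To sanity-check the encoding described above, here is a minimal round-trip sketch. The test name is hypothetical, and it assumes it sits alongside the code above inside package codec, since encodeTime and decodeTime are unexported.

```go
package codec

import (
	"testing"
	"time"
)

// TestTimeRoundTrip (hypothetical) encodes a UTC instant with encodeTime and
// checks that decodeTime recovers the same instant.
func TestTimeRoundTrip(t *testing.T) {
	in := time.Date(2009, 11, 10, 23, 0, 0, 0, time.UTC)
	out, err := decodeTime(encodeTime(in))
	if err != nil {
		t.Fatal(err)
	}
	if !in.Equal(out) {
		t.Fatalf("round trip mismatch: got %v, want %v", out, in)
	}
}
```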