Fix pruning on tar split

Currently, the vendor script prunes files which are still checked in.

Signed-off-by: Derek McGowan <derek@mcgstyle.net> (github: dmcgowan)
Derek McGowan 2015-07-23 15:21:32 -07:00
parent c6f4c192fe
commit a8546df89d
30 changed files with 0 additions and 2723 deletions


@@ -1,13 +0,0 @@
language: go
go:
- 1.4.2
- 1.3.3
# let us have pretty, fast Docker-based Travis workers!
sudo: false
# we don't need "go get" here <3
install: go get -d ./...
script:
- go test -v ./...


@@ -1,36 +0,0 @@
Flow of TAR stream
==================
The underlying use of `github.com/vbatts/tar-split/archive/tar` is most similar
to stdlib.
Packer interface
----------------
For ease of storage and usage of the raw bytes, there will be a storage
interface that accepts an io.Writer (this way you could pass it an in-memory
buffer or a file handle).
Having a Packer interface allows configuring the hash.Hash used for file
payloads and providing your own io.Writer.
Instead of having a state directory to store all the header information for
all Readers, we will leave that up to the user of the Reader, since we cannot
assume an ID for each Reader or keep that information differentiated.
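
For a sense of the shape this takes, here is a minimal sketch against the
`tar/storage` package exercised by the tests in this commit (the literal
entry values are made up for illustration):

```go
package main

import (
	"bytes"
	"fmt"

	"github.com/vbatts/tar-split/tar/storage"
)

func main() {
	// Any io.Writer works as the destination: an in-memory buffer here,
	// though a file handle (or a gzip.Writer) would do equally well.
	var buf bytes.Buffer
	jp := storage.NewJSONPacker(&buf)

	// Record a raw segment and a file entry, as a disassembler would.
	if _, err := jp.AddEntry(storage.Entry{Type: storage.SegmentType, Payload: []byte("raw header bytes")}); err != nil {
		panic(err)
	}
	if _, err := jp.AddEntry(storage.Entry{Type: storage.FileType, Name: "./hurr.txt", Size: 20}); err != nil {
		panic(err)
	}
	fmt.Print(buf.String())
}
```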
State Directory
---------------
Perhaps we could deduplicate the header info by hashing the raw bytes and
storing them in a directory tree like:
./ac/dc/beef
Then reference the hash of the header info in the positional records for the
tar stream. This could be a future feature, though, and is not required for an
initial implementation. Also, it would imply an owned state directory, rather
than just writing storage info to an io.Writer.
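
As a sketch of how such a fan-out path could be computed (hypothetical: no
state directory exists yet, and the hash choice here is an arbitrary
assumption):

```go
package main

import (
	"crypto/sha1"
	"fmt"
	"path/filepath"
)

// statePath fans raw header bytes out into an ./ac/dc/beef-style path.
// SHA-1 is an assumption for illustration; the design above does not
// commit to a particular hash.
func statePath(raw []byte) string {
	sum := fmt.Sprintf("%x", sha1.Sum(raw))
	return filepath.Join(".", sum[0:2], sum[2:4], sum[4:])
}

func main() {
	fmt.Println(statePath([]byte("raw tar header bytes")))
}
```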


@@ -1,19 +0,0 @@
Copyright (c) 2015 Vincent Batts, Raleigh, NC, USA
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@@ -1,181 +0,0 @@
tar-split
========
[![Build Status](https://travis-ci.org/vbatts/tar-split.svg?branch=master)](https://travis-ci.org/vbatts/tar-split)
Extend the upstream golang stdlib `archive/tar` library, to expose the raw
bytes of the TAR, rather than just the marshalled headers and file stream.
The goal being that by preserving the raw bytes of each header, padding bytes,
and the raw file payload, one could reassemble the original archive.
Docs
----
* https://godoc.org/github.com/vbatts/tar-split/tar/asm
* https://godoc.org/github.com/vbatts/tar-split/tar/storage
* https://godoc.org/github.com/vbatts/tar-split/archive/tar
Caveat
------
Eventually this should detect TARs for which this is not possible.
For example, stored sparse files that have "holes" in them will be read as a
contiguous file, though the archive contents may be recorded in sparse format.
Therefore, when adding the file payload to a reassembled tar, to achieve
identical output, the file payload would need to be precisely re-sparsified.
This is not something I seek to fix immediately; I would rather have an alert
that precise reassembly is not possible.
(see more http://www.gnu.org/software/tar/manual/html_node/Sparse-Formats.html)
Another caveat: while tar archives support having multiple file entries for
the same path, we will not support this feature. If there is more than one
entry with the same path, expect an error (like `ErrDuplicatePath`) or a
resulting tar stream that does not validate against your original
checksum/signature.
Contract
--------
Do not break the API of stdlib `archive/tar` in our fork (ideally, find an
upstream-mergeable solution).
Std Version
-----------
The version of golang stdlib `archive/tar` is from go1.4.1, and their master branch around [a9dddb53f](https://github.com/golang/go/tree/a9dddb53f)
Example
-------
First we'll get an archive to work with. For repeatability, we'll make an
archive from what you've just cloned:
```
git archive --format=tar -o tar-split.tar HEAD .
```
Then build the example main.go:
```
go build ./main.go
```
Now run the example over the archive:
```
$ ./main tar-split.tar
2015/02/20 15:00:58 writing "tar-split.tar" to "tar-split.tar.out"
pax_global_header pre: 512 read: 52
.travis.yml pre: 972 read: 374
DESIGN.md pre: 650 read: 1131
LICENSE pre: 917 read: 1075
README.md pre: 973 read: 4289
archive/ pre: 831 read: 0
archive/tar/ pre: 512 read: 0
archive/tar/common.go pre: 512 read: 7790
[...]
tar/storage/entry_test.go pre: 667 read: 1137
tar/storage/getter.go pre: 911 read: 2741
tar/storage/getter_test.go pre: 843 read: 1491
tar/storage/packer.go pre: 557 read: 3141
tar/storage/packer_test.go pre: 955 read: 3096
EOF padding: 1512
Remainder: 512
Size: 215040; Sum: 215040
```
*What are we seeing here?*
* `pre` is the header of a file entry, and potentially the padding from the
end of the prior file's payload (see the sketch after this list). With
particular tar extensions and pax attributes, the header can also exceed 512
bytes.
* `read` is the size of the file payload from the entry
* `EOF padding` is the expected 1024 null bytes on the end of a tar archive,
plus potential padding from the end of the prior file entry's payload
* `Remainder` is the remaining bytes of an archive. This is typically dead
space, as most tar implementations will return after having reached the end
of the 1024 null bytes. However, various implementations will include some
amount of bytes here, which will affect the checksum of the resulting tar
archive, so this must be accounted for as well.
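
The `pre`/`read` accounting comes from the fork's raw-bytes additions
(`RawAccounting` and `RawBytes()`). Here is a condensed sketch of the loop,
adapted from the example `main.go` included further down in this commit
(assumes imports of `fmt`, `io`, and `github.com/vbatts/tar-split/archive/tar`):

```go
// splitCopy mirrors the example main.go: with RawAccounting enabled, the
// reader retains header and padding bytes, which RawBytes() hands back.
func splitCopy(in io.Reader, out io.Writer) error {
	tr := tar.NewReader(in)
	tr.RawAccounting = true
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			// Even at EOF, RawBytes() holds the trailing null-block padding.
			_, werr := out.Write(tr.RawBytes())
			return werr
		}
		if err != nil {
			return err
		}
		pre := tr.RawBytes() // header, plus padding since the prior payload
		if _, err := out.Write(pre); err != nil {
			return err
		}
		n, err := io.Copy(out, tr) // the file payload itself
		if err != nil {
			return err
		}
		fmt.Println(hdr.Name, "pre:", len(pre), "read:", n)
	}
}
```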
Ideally, the input tar and the output `*.out` will match:
```
$ sha1sum tar-split.tar*
ca9e19966b892d9ad5960414abac01ef585a1e22 tar-split.tar
ca9e19966b892d9ad5960414abac01ef585a1e22 tar-split.tar.out
```
Stored Metadata
---------------
Since the raw bytes of the headers and padding are stored, you may be wondering
what the size implications are. The headers are at least 512 bytes per
file (sometimes more), at least 1024 null bytes on the end, and then various
padding. This makes for a constant linear growth in the stored metadata, with a
naive storage implementation.
Reusing our prior example's `tar-split.tar`, let's build the checksize.go example:
```
go build ./checksize.go
```
```
$ ./checksize ./tar-split.tar
inspecting "tar-split.tar" (size 210k)
-- number of files: 50
-- size of metadata uncompressed: 53k
-- size of gzip compressed metadata: 3k
```
So, assuming you've managed the extraction of the archive yourself and can
reuse the file payloads from a relative path, the only additional storage
implication is as little as 3kb.
But let's look at a larger archive with many files.
```
$ ls -sh ./d.tar
1.4G ./d.tar
$ ./checksize ~/d.tar
inspecting "/home/vbatts/d.tar" (size 1420749k)
-- number of files: 38718
-- size of metadata uncompressed: 43261k
-- size of gzip compressed metadata: 2251k
```
Here, an archive with 38,718 files has a compressed metadata footprint of
about 2mb. Rolling in the null bytes on the end of the archive, we can assume
the following bytes-per-file rates for the storage implications (roughly
43261k / 38718 ≈ 1.1kb uncompressed, and 2251k / 38718 ≈ 0.06kb compressed):

| uncompressed | compressed |
| :----------: | :--------: |
| ~ 1kb per file | ~ 0.06kb per file |
What's Next?
------------
* More implementations of storage Packer and Unpacker
- could be a redis or mongo backend
* More implementations of FileGetter and FilePutter
- could be a redis or mongo backend
* cli tooling to assemble/disassemble a provided tar archive
* it would be interesting to have an assembler stream that implements `io.Seeker`
License
-------
See LICENSE


@@ -1,80 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package tar_test
import (
"archive/tar"
"bytes"
"fmt"
"io"
"log"
"os"
)
func Example() {
// Create a buffer to write our archive to.
buf := new(bytes.Buffer)
// Create a new tar archive.
tw := tar.NewWriter(buf)
// Add some files to the archive.
var files = []struct {
Name, Body string
}{
{"readme.txt", "This archive contains some text files."},
{"gopher.txt", "Gopher names:\nGeorge\nGeoffrey\nGonzo"},
{"todo.txt", "Get animal handling licence."},
}
for _, file := range files {
hdr := &tar.Header{
Name: file.Name,
Mode: 0600,
Size: int64(len(file.Body)),
}
if err := tw.WriteHeader(hdr); err != nil {
log.Fatalln(err)
}
if _, err := tw.Write([]byte(file.Body)); err != nil {
log.Fatalln(err)
}
}
// Make sure to check the error on Close.
if err := tw.Close(); err != nil {
log.Fatalln(err)
}
// Open the tar archive for reading.
r := bytes.NewReader(buf.Bytes())
tr := tar.NewReader(r)
// Iterate through the files in the archive.
for {
hdr, err := tr.Next()
if err == io.EOF {
// end of tar archive
break
}
if err != nil {
log.Fatalln(err)
}
fmt.Printf("Contents of %s:\n", hdr.Name)
if _, err := io.Copy(os.Stdout, tr); err != nil {
log.Fatalln(err)
}
fmt.Println()
}
// Output:
// Contents of readme.txt:
// This archive contains some text files.
// Contents of gopher.txt:
// Gopher names:
// George
// Geoffrey
// Gonzo
// Contents of todo.txt:
// Get animal handling licence.
}


@@ -1,743 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package tar
import (
"bytes"
"crypto/md5"
"fmt"
"io"
"io/ioutil"
"os"
"reflect"
"strings"
"testing"
"time"
)
type untarTest struct {
file string
headers []*Header
cksums []string
}
var gnuTarTest = &untarTest{
file: "testdata/gnu.tar",
headers: []*Header{
{
Name: "small.txt",
Mode: 0640,
Uid: 73025,
Gid: 5000,
Size: 5,
ModTime: time.Unix(1244428340, 0),
Typeflag: '0',
Uname: "dsymonds",
Gname: "eng",
},
{
Name: "small2.txt",
Mode: 0640,
Uid: 73025,
Gid: 5000,
Size: 11,
ModTime: time.Unix(1244436044, 0),
Typeflag: '0',
Uname: "dsymonds",
Gname: "eng",
},
},
cksums: []string{
"e38b27eaccb4391bdec553a7f3ae6b2f",
"c65bd2e50a56a2138bf1716f2fd56fe9",
},
}
var sparseTarTest = &untarTest{
file: "testdata/sparse-formats.tar",
headers: []*Header{
{
Name: "sparse-gnu",
Mode: 420,
Uid: 1000,
Gid: 1000,
Size: 200,
ModTime: time.Unix(1392395740, 0),
Typeflag: 0x53,
Linkname: "",
Uname: "david",
Gname: "david",
Devmajor: 0,
Devminor: 0,
},
{
Name: "sparse-posix-0.0",
Mode: 420,
Uid: 1000,
Gid: 1000,
Size: 200,
ModTime: time.Unix(1392342187, 0),
Typeflag: 0x30,
Linkname: "",
Uname: "david",
Gname: "david",
Devmajor: 0,
Devminor: 0,
},
{
Name: "sparse-posix-0.1",
Mode: 420,
Uid: 1000,
Gid: 1000,
Size: 200,
ModTime: time.Unix(1392340456, 0),
Typeflag: 0x30,
Linkname: "",
Uname: "david",
Gname: "david",
Devmajor: 0,
Devminor: 0,
},
{
Name: "sparse-posix-1.0",
Mode: 420,
Uid: 1000,
Gid: 1000,
Size: 200,
ModTime: time.Unix(1392337404, 0),
Typeflag: 0x30,
Linkname: "",
Uname: "david",
Gname: "david",
Devmajor: 0,
Devminor: 0,
},
{
Name: "end",
Mode: 420,
Uid: 1000,
Gid: 1000,
Size: 4,
ModTime: time.Unix(1392398319, 0),
Typeflag: 0x30,
Linkname: "",
Uname: "david",
Gname: "david",
Devmajor: 0,
Devminor: 0,
},
},
cksums: []string{
"6f53234398c2449fe67c1812d993012f",
"6f53234398c2449fe67c1812d993012f",
"6f53234398c2449fe67c1812d993012f",
"6f53234398c2449fe67c1812d993012f",
"b0061974914468de549a2af8ced10316",
},
}
var untarTests = []*untarTest{
gnuTarTest,
sparseTarTest,
{
file: "testdata/star.tar",
headers: []*Header{
{
Name: "small.txt",
Mode: 0640,
Uid: 73025,
Gid: 5000,
Size: 5,
ModTime: time.Unix(1244592783, 0),
Typeflag: '0',
Uname: "dsymonds",
Gname: "eng",
AccessTime: time.Unix(1244592783, 0),
ChangeTime: time.Unix(1244592783, 0),
},
{
Name: "small2.txt",
Mode: 0640,
Uid: 73025,
Gid: 5000,
Size: 11,
ModTime: time.Unix(1244592783, 0),
Typeflag: '0',
Uname: "dsymonds",
Gname: "eng",
AccessTime: time.Unix(1244592783, 0),
ChangeTime: time.Unix(1244592783, 0),
},
},
},
{
file: "testdata/v7.tar",
headers: []*Header{
{
Name: "small.txt",
Mode: 0444,
Uid: 73025,
Gid: 5000,
Size: 5,
ModTime: time.Unix(1244593104, 0),
Typeflag: '\x00',
},
{
Name: "small2.txt",
Mode: 0444,
Uid: 73025,
Gid: 5000,
Size: 11,
ModTime: time.Unix(1244593104, 0),
Typeflag: '\x00',
},
},
},
{
file: "testdata/pax.tar",
headers: []*Header{
{
Name: "a/123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100",
Mode: 0664,
Uid: 1000,
Gid: 1000,
Uname: "shane",
Gname: "shane",
Size: 7,
ModTime: time.Unix(1350244992, 23960108),
ChangeTime: time.Unix(1350244992, 23960108),
AccessTime: time.Unix(1350244992, 23960108),
Typeflag: TypeReg,
},
{
Name: "a/b",
Mode: 0777,
Uid: 1000,
Gid: 1000,
Uname: "shane",
Gname: "shane",
Size: 0,
ModTime: time.Unix(1350266320, 910238425),
ChangeTime: time.Unix(1350266320, 910238425),
AccessTime: time.Unix(1350266320, 910238425),
Typeflag: TypeSymlink,
Linkname: "123456789101112131415161718192021222324252627282930313233343536373839404142434445464748495051525354555657585960616263646566676869707172737475767778798081828384858687888990919293949596979899100",
},
},
},
{
file: "testdata/nil-uid.tar", // golang.org/issue/5290
headers: []*Header{
{
Name: "P1050238.JPG.log",
Mode: 0664,
Uid: 0,
Gid: 0,
Size: 14,
ModTime: time.Unix(1365454838, 0),
Typeflag: TypeReg,
Linkname: "",
Uname: "eyefi",
Gname: "eyefi",
Devmajor: 0,
Devminor: 0,
},
},
},
{
file: "testdata/xattrs.tar",
headers: []*Header{
{
Name: "small.txt",
Mode: 0644,
Uid: 1000,
Gid: 10,
Size: 5,
ModTime: time.Unix(1386065770, 448252320),
Typeflag: '0',
Uname: "alex",
Gname: "wheel",
AccessTime: time.Unix(1389782991, 419875220),
ChangeTime: time.Unix(1389782956, 794414986),
Xattrs: map[string]string{
"user.key": "value",
"user.key2": "value2",
// Interestingly, selinux encodes the terminating null inside the xattr
"security.selinux": "unconfined_u:object_r:default_t:s0\x00",
},
},
{
Name: "small2.txt",
Mode: 0644,
Uid: 1000,
Gid: 10,
Size: 11,
ModTime: time.Unix(1386065770, 449252304),
Typeflag: '0',
Uname: "alex",
Gname: "wheel",
AccessTime: time.Unix(1389782991, 419875220),
ChangeTime: time.Unix(1386065770, 449252304),
Xattrs: map[string]string{
"security.selinux": "unconfined_u:object_r:default_t:s0\x00",
},
},
},
},
}
func TestReader(t *testing.T) {
testLoop:
for i, test := range untarTests {
f, err := os.Open(test.file)
if err != nil {
t.Errorf("test %d: Unexpected error: %v", i, err)
continue
}
defer f.Close()
tr := NewReader(f)
for j, header := range test.headers {
hdr, err := tr.Next()
if err != nil || hdr == nil {
t.Errorf("test %d, entry %d: Didn't get entry: %v", i, j, err)
f.Close()
continue testLoop
}
if !reflect.DeepEqual(*hdr, *header) {
t.Errorf("test %d, entry %d: Incorrect header:\nhave %+v\nwant %+v",
i, j, *hdr, *header)
}
}
hdr, err := tr.Next()
if err == io.EOF {
continue testLoop
}
if hdr != nil || err != nil {
t.Errorf("test %d: Unexpected entry or error: hdr=%v err=%v", i, hdr, err)
}
}
}
func TestPartialRead(t *testing.T) {
f, err := os.Open("testdata/gnu.tar")
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
defer f.Close()
tr := NewReader(f)
// Read the first four bytes; Next() should skip the last byte.
hdr, err := tr.Next()
if err != nil || hdr == nil {
t.Fatalf("Didn't get first file: %v", err)
}
buf := make([]byte, 4)
if _, err := io.ReadFull(tr, buf); err != nil {
t.Fatalf("Unexpected error: %v", err)
}
if expected := []byte("Kilt"); !bytes.Equal(buf, expected) {
t.Errorf("Contents = %v, want %v", buf, expected)
}
// Second file
hdr, err = tr.Next()
if err != nil || hdr == nil {
t.Fatalf("Didn't get second file: %v", err)
}
buf = make([]byte, 6)
if _, err := io.ReadFull(tr, buf); err != nil {
t.Fatalf("Unexpected error: %v", err)
}
if expected := []byte("Google"); !bytes.Equal(buf, expected) {
t.Errorf("Contents = %v, want %v", buf, expected)
}
}
func TestIncrementalRead(t *testing.T) {
test := gnuTarTest
f, err := os.Open(test.file)
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
defer f.Close()
tr := NewReader(f)
headers := test.headers
cksums := test.cksums
nread := 0
// loop over all files
for ; ; nread++ {
hdr, err := tr.Next()
if hdr == nil || err == io.EOF {
break
}
// check the header
if !reflect.DeepEqual(*hdr, *headers[nread]) {
t.Errorf("Incorrect header:\nhave %+v\nwant %+v",
*hdr, headers[nread])
}
// read file contents in little chunks until EOF,
// checksumming all the way
h := md5.New()
rdbuf := make([]uint8, 8)
for {
nr, err := tr.Read(rdbuf)
if err == io.EOF {
break
}
if err != nil {
t.Errorf("Read: unexpected error %v\n", err)
break
}
h.Write(rdbuf[0:nr])
}
// verify checksum
have := fmt.Sprintf("%x", h.Sum(nil))
want := cksums[nread]
if want != have {
t.Errorf("Bad checksum on file %s:\nhave %+v\nwant %+v", hdr.Name, have, want)
}
}
if nread != len(headers) {
t.Errorf("Didn't process all files\nexpected: %d\nprocessed %d\n", len(headers), nread)
}
}
func TestNonSeekable(t *testing.T) {
test := gnuTarTest
f, err := os.Open(test.file)
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
defer f.Close()
type readerOnly struct {
io.Reader
}
tr := NewReader(readerOnly{f})
nread := 0
for ; ; nread++ {
_, err := tr.Next()
if err == io.EOF {
break
}
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
}
if nread != len(test.headers) {
t.Errorf("Didn't process all files\nexpected: %d\nprocessed %d\n", len(test.headers), nread)
}
}
func TestParsePAXHeader(t *testing.T) {
paxTests := [][3]string{
{"a", "a=name", "10 a=name\n"}, // Test case involving multiple acceptable lengths
{"a", "a=name", "9 a=name\n"}, // Test case involving multiple acceptable length
{"mtime", "mtime=1350244992.023960108", "30 mtime=1350244992.023960108\n"}}
for _, test := range paxTests {
key, expected, raw := test[0], test[1], test[2]
reader := bytes.NewReader([]byte(raw))
headers, err := parsePAX(reader)
if err != nil {
t.Errorf("Couldn't parse correctly formatted headers: %v", err)
continue
}
if strings.EqualFold(headers[key], expected) {
t.Errorf("mtime header incorrectly parsed: got %s, wanted %s", headers[key], expected)
continue
}
trailer := make([]byte, 100)
n, err := reader.Read(trailer)
if err != io.EOF || n != 0 {
t.Error("Buffer wasn't consumed")
}
}
badHeader := bytes.NewReader([]byte("3 somelongkey="))
if _, err := parsePAX(badHeader); err != ErrHeader {
t.Fatal("Unexpected success when parsing bad header")
}
}
func TestParsePAXTime(t *testing.T) {
// Some valid PAX time values
timestamps := map[string]time.Time{
"1350244992.023960108": time.Unix(1350244992, 23960108), // The common case
"1350244992.02396010": time.Unix(1350244992, 23960100), // Lower precision value
"1350244992.0239601089": time.Unix(1350244992, 23960108), // Higher precision value
"1350244992": time.Unix(1350244992, 0), // Low precision value
}
for input, expected := range timestamps {
ts, err := parsePAXTime(input)
if err != nil {
t.Fatal(err)
}
if !ts.Equal(expected) {
t.Fatalf("Time parsing failure %s %s", ts, expected)
}
}
}
func TestMergePAX(t *testing.T) {
hdr := new(Header)
// Test a string, integer, and time based value.
headers := map[string]string{
"path": "a/b/c",
"uid": "1000",
"mtime": "1350244992.023960108",
}
err := mergePAX(hdr, headers)
if err != nil {
t.Fatal(err)
}
want := &Header{
Name: "a/b/c",
Uid: 1000,
ModTime: time.Unix(1350244992, 23960108),
}
if !reflect.DeepEqual(hdr, want) {
t.Errorf("incorrect merge: got %+v, want %+v", hdr, want)
}
}
func TestSparseEndToEnd(t *testing.T) {
test := sparseTarTest
f, err := os.Open(test.file)
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
defer f.Close()
tr := NewReader(f)
headers := test.headers
cksums := test.cksums
nread := 0
// loop over all files
for ; ; nread++ {
hdr, err := tr.Next()
if hdr == nil || err == io.EOF {
break
}
// check the header
if !reflect.DeepEqual(*hdr, *headers[nread]) {
t.Errorf("Incorrect header:\nhave %+v\nwant %+v",
*hdr, headers[nread])
}
// read and checksum the file data
h := md5.New()
_, err = io.Copy(h, tr)
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
// verify checksum
have := fmt.Sprintf("%x", h.Sum(nil))
want := cksums[nread]
if want != have {
t.Errorf("Bad checksum on file %s:\nhave %+v\nwant %+v", hdr.Name, have, want)
}
}
if nread != len(headers) {
t.Errorf("Didn't process all files\nexpected: %d\nprocessed %d\n", len(headers), nread)
}
}
type sparseFileReadTest struct {
sparseData []byte
sparseMap []sparseEntry
realSize int64
expected []byte
}
var sparseFileReadTests = []sparseFileReadTest{
{
sparseData: []byte("abcde"),
sparseMap: []sparseEntry{
{offset: 0, numBytes: 2},
{offset: 5, numBytes: 3},
},
realSize: 8,
expected: []byte("ab\x00\x00\x00cde"),
},
{
sparseData: []byte("abcde"),
sparseMap: []sparseEntry{
{offset: 0, numBytes: 2},
{offset: 5, numBytes: 3},
},
realSize: 10,
expected: []byte("ab\x00\x00\x00cde\x00\x00"),
},
{
sparseData: []byte("abcde"),
sparseMap: []sparseEntry{
{offset: 1, numBytes: 3},
{offset: 6, numBytes: 2},
},
realSize: 8,
expected: []byte("\x00abc\x00\x00de"),
},
{
sparseData: []byte("abcde"),
sparseMap: []sparseEntry{
{offset: 1, numBytes: 3},
{offset: 6, numBytes: 2},
},
realSize: 10,
expected: []byte("\x00abc\x00\x00de\x00\x00"),
},
{
sparseData: []byte(""),
sparseMap: nil,
realSize: 2,
expected: []byte("\x00\x00"),
},
}
func TestSparseFileReader(t *testing.T) {
for i, test := range sparseFileReadTests {
r := bytes.NewReader(test.sparseData)
nb := int64(r.Len())
sfr := &sparseFileReader{
rfr: &regFileReader{r: r, nb: nb},
sp: test.sparseMap,
pos: 0,
tot: test.realSize,
}
if sfr.numBytes() != nb {
t.Errorf("test %d: Before reading, sfr.numBytes() = %d, want %d", i, sfr.numBytes(), nb)
}
buf, err := ioutil.ReadAll(sfr)
if err != nil {
t.Errorf("test %d: Unexpected error: %v", i, err)
}
if e := test.expected; !bytes.Equal(buf, e) {
t.Errorf("test %d: Contents = %v, want %v", i, buf, e)
}
if sfr.numBytes() != 0 {
t.Errorf("test %d: After draining the reader, numBytes() was nonzero", i)
}
}
}
func TestSparseIncrementalRead(t *testing.T) {
sparseMap := []sparseEntry{{10, 2}}
sparseData := []byte("Go")
expected := "\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00Go\x00\x00\x00\x00\x00\x00\x00\x00"
r := bytes.NewReader(sparseData)
nb := int64(r.Len())
sfr := &sparseFileReader{
rfr: &regFileReader{r: r, nb: nb},
sp: sparseMap,
pos: 0,
tot: int64(len(expected)),
}
// We'll read the data 6 bytes at a time, with a hole of size 10 at
// the beginning and one of size 8 at the end.
var outputBuf bytes.Buffer
buf := make([]byte, 6)
for {
n, err := sfr.Read(buf)
if err == io.EOF {
break
}
if err != nil {
t.Errorf("Read: unexpected error %v\n", err)
}
if n > 0 {
_, err := outputBuf.Write(buf[:n])
if err != nil {
t.Errorf("Write: unexpected error %v\n", err)
}
}
}
got := outputBuf.String()
if got != expected {
t.Errorf("Contents = %v, want %v", got, expected)
}
}
func TestReadGNUSparseMap0x1(t *testing.T) {
headers := map[string]string{
paxGNUSparseNumBlocks: "4",
paxGNUSparseMap: "0,5,10,5,20,5,30,5",
}
expected := []sparseEntry{
{offset: 0, numBytes: 5},
{offset: 10, numBytes: 5},
{offset: 20, numBytes: 5},
{offset: 30, numBytes: 5},
}
sp, err := readGNUSparseMap0x1(headers)
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
if !reflect.DeepEqual(sp, expected) {
t.Errorf("Incorrect sparse map: got %v, wanted %v", sp, expected)
}
}
func TestReadGNUSparseMap1x0(t *testing.T) {
// This test uses lots of holes so the sparse header takes up more than two blocks
numEntries := 100
expected := make([]sparseEntry, 0, numEntries)
sparseMap := new(bytes.Buffer)
fmt.Fprintf(sparseMap, "%d\n", numEntries)
for i := 0; i < numEntries; i++ {
offset := int64(2048 * i)
numBytes := int64(1024)
expected = append(expected, sparseEntry{offset: offset, numBytes: numBytes})
fmt.Fprintf(sparseMap, "%d\n%d\n", offset, numBytes)
}
// Make the header the smallest multiple of blockSize that fits the sparseMap
headerBlocks := (sparseMap.Len() + blockSize - 1) / blockSize
bufLen := blockSize * headerBlocks
buf := make([]byte, bufLen)
copy(buf, sparseMap.Bytes())
// Get a reader to read the sparse map
r := bytes.NewReader(buf)
// Read the sparse map
sp, err := readGNUSparseMap1x0(r)
if err != nil {
t.Errorf("Unexpected error: %v", err)
}
if !reflect.DeepEqual(sp, expected) {
t.Errorf("Incorrect sparse map: got %v, wanted %v", sp, expected)
}
}
func TestUninitializedRead(t *testing.T) {
test := gnuTarTest
f, err := os.Open(test.file)
if err != nil {
t.Fatalf("Unexpected error: %v", err)
}
defer f.Close()
tr := NewReader(f)
_, err = tr.Read([]byte{})
if err == nil || err != io.EOF {
t.Errorf("Unexpected error: %v, wanted %v", err, io.EOF)
}
}


@@ -1,284 +0,0 @@
// Copyright 2012 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package tar
import (
"bytes"
"io/ioutil"
"os"
"path"
"reflect"
"strings"
"testing"
"time"
)
func TestFileInfoHeader(t *testing.T) {
fi, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
h, err := FileInfoHeader(fi, "")
if err != nil {
t.Fatalf("FileInfoHeader: %v", err)
}
if g, e := h.Name, "small.txt"; g != e {
t.Errorf("Name = %q; want %q", g, e)
}
if g, e := h.Mode, int64(fi.Mode().Perm())|c_ISREG; g != e {
t.Errorf("Mode = %#o; want %#o", g, e)
}
if g, e := h.Size, int64(5); g != e {
t.Errorf("Size = %v; want %v", g, e)
}
if g, e := h.ModTime, fi.ModTime(); !g.Equal(e) {
t.Errorf("ModTime = %v; want %v", g, e)
}
// FileInfoHeader should error when passing nil FileInfo
if _, err := FileInfoHeader(nil, ""); err == nil {
t.Fatalf("Expected error when passing nil to FileInfoHeader")
}
}
func TestFileInfoHeaderDir(t *testing.T) {
fi, err := os.Stat("testdata")
if err != nil {
t.Fatal(err)
}
h, err := FileInfoHeader(fi, "")
if err != nil {
t.Fatalf("FileInfoHeader: %v", err)
}
if g, e := h.Name, "testdata/"; g != e {
t.Errorf("Name = %q; want %q", g, e)
}
// Ignoring c_ISGID for golang.org/issue/4867
if g, e := h.Mode&^c_ISGID, int64(fi.Mode().Perm())|c_ISDIR; g != e {
t.Errorf("Mode = %#o; want %#o", g, e)
}
if g, e := h.Size, int64(0); g != e {
t.Errorf("Size = %v; want %v", g, e)
}
if g, e := h.ModTime, fi.ModTime(); !g.Equal(e) {
t.Errorf("ModTime = %v; want %v", g, e)
}
}
func TestFileInfoHeaderSymlink(t *testing.T) {
h, err := FileInfoHeader(symlink{}, "some-target")
if err != nil {
t.Fatal(err)
}
if g, e := h.Name, "some-symlink"; g != e {
t.Errorf("Name = %q; want %q", g, e)
}
if g, e := h.Linkname, "some-target"; g != e {
t.Errorf("Linkname = %q; want %q", g, e)
}
}
type symlink struct{}
func (symlink) Name() string { return "some-symlink" }
func (symlink) Size() int64 { return 0 }
func (symlink) Mode() os.FileMode { return os.ModeSymlink }
func (symlink) ModTime() time.Time { return time.Time{} }
func (symlink) IsDir() bool { return false }
func (symlink) Sys() interface{} { return nil }
func TestRoundTrip(t *testing.T) {
data := []byte("some file contents")
var b bytes.Buffer
tw := NewWriter(&b)
hdr := &Header{
Name: "file.txt",
Uid: 1 << 21, // too big for 8 octal digits
Size: int64(len(data)),
ModTime: time.Now(),
}
// tar only supports second precision.
hdr.ModTime = hdr.ModTime.Add(-time.Duration(hdr.ModTime.Nanosecond()) * time.Nanosecond)
if err := tw.WriteHeader(hdr); err != nil {
t.Fatalf("tw.WriteHeader: %v", err)
}
if _, err := tw.Write(data); err != nil {
t.Fatalf("tw.Write: %v", err)
}
if err := tw.Close(); err != nil {
t.Fatalf("tw.Close: %v", err)
}
// Read it back.
tr := NewReader(&b)
rHdr, err := tr.Next()
if err != nil {
t.Fatalf("tr.Next: %v", err)
}
if !reflect.DeepEqual(rHdr, hdr) {
t.Errorf("Header mismatch.\n got %+v\nwant %+v", rHdr, hdr)
}
rData, err := ioutil.ReadAll(tr)
if err != nil {
t.Fatalf("Read: %v", err)
}
if !bytes.Equal(rData, data) {
t.Errorf("Data mismatch.\n got %q\nwant %q", rData, data)
}
}
type headerRoundTripTest struct {
h *Header
fm os.FileMode
}
func TestHeaderRoundTrip(t *testing.T) {
golden := []headerRoundTripTest{
// regular file.
{
h: &Header{
Name: "test.txt",
Mode: 0644 | c_ISREG,
Size: 12,
ModTime: time.Unix(1360600916, 0),
Typeflag: TypeReg,
},
fm: 0644,
},
// hard link.
{
h: &Header{
Name: "hard.txt",
Mode: 0644 | c_ISLNK,
Size: 0,
ModTime: time.Unix(1360600916, 0),
Typeflag: TypeLink,
},
fm: 0644 | os.ModeSymlink,
},
// symbolic link.
{
h: &Header{
Name: "link.txt",
Mode: 0777 | c_ISLNK,
Size: 0,
ModTime: time.Unix(1360600852, 0),
Typeflag: TypeSymlink,
},
fm: 0777 | os.ModeSymlink,
},
// character device node.
{
h: &Header{
Name: "dev/null",
Mode: 0666 | c_ISCHR,
Size: 0,
ModTime: time.Unix(1360578951, 0),
Typeflag: TypeChar,
},
fm: 0666 | os.ModeDevice | os.ModeCharDevice,
},
// block device node.
{
h: &Header{
Name: "dev/sda",
Mode: 0660 | c_ISBLK,
Size: 0,
ModTime: time.Unix(1360578954, 0),
Typeflag: TypeBlock,
},
fm: 0660 | os.ModeDevice,
},
// directory.
{
h: &Header{
Name: "dir/",
Mode: 0755 | c_ISDIR,
Size: 0,
ModTime: time.Unix(1360601116, 0),
Typeflag: TypeDir,
},
fm: 0755 | os.ModeDir,
},
// fifo node.
{
h: &Header{
Name: "dev/initctl",
Mode: 0600 | c_ISFIFO,
Size: 0,
ModTime: time.Unix(1360578949, 0),
Typeflag: TypeFifo,
},
fm: 0600 | os.ModeNamedPipe,
},
// setuid.
{
h: &Header{
Name: "bin/su",
Mode: 0755 | c_ISREG | c_ISUID,
Size: 23232,
ModTime: time.Unix(1355405093, 0),
Typeflag: TypeReg,
},
fm: 0755 | os.ModeSetuid,
},
// setguid.
{
h: &Header{
Name: "group.txt",
Mode: 0750 | c_ISREG | c_ISGID,
Size: 0,
ModTime: time.Unix(1360602346, 0),
Typeflag: TypeReg,
},
fm: 0750 | os.ModeSetgid,
},
// sticky.
{
h: &Header{
Name: "sticky.txt",
Mode: 0600 | c_ISREG | c_ISVTX,
Size: 7,
ModTime: time.Unix(1360602540, 0),
Typeflag: TypeReg,
},
fm: 0600 | os.ModeSticky,
},
}
for i, g := range golden {
fi := g.h.FileInfo()
h2, err := FileInfoHeader(fi, "")
if err != nil {
t.Error(err)
continue
}
if strings.Contains(fi.Name(), "/") {
t.Errorf("FileInfo of %q contains slash: %q", g.h.Name, fi.Name())
}
name := path.Base(g.h.Name)
if fi.IsDir() {
name += "/"
}
if got, want := h2.Name, name; got != want {
t.Errorf("i=%d: Name: got %v, want %v", i, got, want)
}
if got, want := h2.Size, g.h.Size; got != want {
t.Errorf("i=%d: Size: got %v, want %v", i, got, want)
}
if got, want := h2.Mode, g.h.Mode; got != want {
t.Errorf("i=%d: Mode: got %o, want %o", i, got, want)
}
if got, want := fi.Mode(), g.fm; got != want {
t.Errorf("i=%d: fi.Mode: got %o, want %o", i, got, want)
}
if got, want := h2.ModTime, g.h.ModTime; got != want {
t.Errorf("i=%d: ModTime: got %v, want %v", i, got, want)
}
if sysh, ok := fi.Sys().(*Header); !ok || sysh != g.h {
t.Errorf("i=%d: Sys didn't return original *Header", i)
}
}
}


@@ -1 +0,0 @@
Google.com


@@ -1,491 +0,0 @@
// Copyright 2009 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
package tar
import (
"bytes"
"fmt"
"io"
"io/ioutil"
"os"
"reflect"
"strings"
"testing"
"testing/iotest"
"time"
)
type writerTestEntry struct {
header *Header
contents string
}
type writerTest struct {
file string // filename of expected output
entries []*writerTestEntry
}
var writerTests = []*writerTest{
// The writer test file was produced with this command:
// tar (GNU tar) 1.26
// ln -s small.txt link.txt
// tar -b 1 --format=ustar -c -f writer.tar small.txt small2.txt link.txt
{
file: "testdata/writer.tar",
entries: []*writerTestEntry{
{
header: &Header{
Name: "small.txt",
Mode: 0640,
Uid: 73025,
Gid: 5000,
Size: 5,
ModTime: time.Unix(1246508266, 0),
Typeflag: '0',
Uname: "dsymonds",
Gname: "eng",
},
contents: "Kilts",
},
{
header: &Header{
Name: "small2.txt",
Mode: 0640,
Uid: 73025,
Gid: 5000,
Size: 11,
ModTime: time.Unix(1245217492, 0),
Typeflag: '0',
Uname: "dsymonds",
Gname: "eng",
},
contents: "Google.com\n",
},
{
header: &Header{
Name: "link.txt",
Mode: 0777,
Uid: 1000,
Gid: 1000,
Size: 0,
ModTime: time.Unix(1314603082, 0),
Typeflag: '2',
Linkname: "small.txt",
Uname: "strings",
Gname: "strings",
},
// no contents
},
},
},
// The truncated test file was produced using these commands:
// dd if=/dev/zero bs=1048576 count=16384 > /tmp/16gig.txt
// tar -b 1 -c -f- /tmp/16gig.txt | dd bs=512 count=8 > writer-big.tar
{
file: "testdata/writer-big.tar",
entries: []*writerTestEntry{
{
header: &Header{
Name: "tmp/16gig.txt",
Mode: 0640,
Uid: 73025,
Gid: 5000,
Size: 16 << 30,
ModTime: time.Unix(1254699560, 0),
Typeflag: '0',
Uname: "dsymonds",
Gname: "eng",
},
// fake contents
contents: strings.Repeat("\x00", 4<<10),
},
},
},
// The truncated test file was produced using these commands:
// dd if=/dev/zero bs=1048576 count=16384 > (longname/)*15 /16gig.txt
// tar -b 1 -c -f- (longname/)*15 /16gig.txt | dd bs=512 count=8 > writer-big-long.tar
{
file: "testdata/writer-big-long.tar",
entries: []*writerTestEntry{
{
header: &Header{
Name: strings.Repeat("longname/", 15) + "16gig.txt",
Mode: 0644,
Uid: 1000,
Gid: 1000,
Size: 16 << 30,
ModTime: time.Unix(1399583047, 0),
Typeflag: '0',
Uname: "guillaume",
Gname: "guillaume",
},
// fake contents
contents: strings.Repeat("\x00", 4<<10),
},
},
},
// This file was produced using gnu tar 1.17
// gnutar -b 4 --format=ustar (longname/)*15 + file.txt
{
file: "testdata/ustar.tar",
entries: []*writerTestEntry{
{
header: &Header{
Name: strings.Repeat("longname/", 15) + "file.txt",
Mode: 0644,
Uid: 0765,
Gid: 024,
Size: 06,
ModTime: time.Unix(1360135598, 0),
Typeflag: '0',
Uname: "shane",
Gname: "staff",
},
contents: "hello\n",
},
},
},
}
// Render byte array in a two-character hexadecimal string, spaced for easy visual inspection.
func bytestr(offset int, b []byte) string {
const rowLen = 32
s := fmt.Sprintf("%04x ", offset)
for _, ch := range b {
switch {
case '0' <= ch && ch <= '9', 'A' <= ch && ch <= 'Z', 'a' <= ch && ch <= 'z':
s += fmt.Sprintf(" %c", ch)
default:
s += fmt.Sprintf(" %02x", ch)
}
}
return s
}
// Render a pseudo-diff between two blocks of bytes.
func bytediff(a []byte, b []byte) string {
const rowLen = 32
s := fmt.Sprintf("(%d bytes vs. %d bytes)\n", len(a), len(b))
for offset := 0; len(a)+len(b) > 0; offset += rowLen {
na, nb := rowLen, rowLen
if na > len(a) {
na = len(a)
}
if nb > len(b) {
nb = len(b)
}
sa := bytestr(offset, a[0:na])
sb := bytestr(offset, b[0:nb])
if sa != sb {
s += fmt.Sprintf("-%v\n+%v\n", sa, sb)
}
a = a[na:]
b = b[nb:]
}
return s
}
func TestWriter(t *testing.T) {
testLoop:
for i, test := range writerTests {
expected, err := ioutil.ReadFile(test.file)
if err != nil {
t.Errorf("test %d: Unexpected error: %v", i, err)
continue
}
buf := new(bytes.Buffer)
tw := NewWriter(iotest.TruncateWriter(buf, 4<<10)) // only catch the first 4 KB
big := false
for j, entry := range test.entries {
big = big || entry.header.Size > 1<<10
if err := tw.WriteHeader(entry.header); err != nil {
t.Errorf("test %d, entry %d: Failed writing header: %v", i, j, err)
continue testLoop
}
if _, err := io.WriteString(tw, entry.contents); err != nil {
t.Errorf("test %d, entry %d: Failed writing contents: %v", i, j, err)
continue testLoop
}
}
// Only interested in Close failures for the small tests.
if err := tw.Close(); err != nil && !big {
t.Errorf("test %d: Failed closing archive: %v", i, err)
continue testLoop
}
actual := buf.Bytes()
if !bytes.Equal(expected, actual) {
t.Errorf("test %d: Incorrect result: (-=expected, +=actual)\n%v",
i, bytediff(expected, actual))
}
if testing.Short() { // The second test is expensive.
break
}
}
}
func TestPax(t *testing.T) {
// Create an archive with a large name
fileinfo, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
hdr, err := FileInfoHeader(fileinfo, "")
if err != nil {
t.Fatalf("os.Stat: %v", err)
}
// Force a PAX long name to be written
longName := strings.Repeat("ab", 100)
contents := strings.Repeat(" ", int(hdr.Size))
hdr.Name = longName
var buf bytes.Buffer
writer := NewWriter(&buf)
if err := writer.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if _, err = writer.Write([]byte(contents)); err != nil {
t.Fatal(err)
}
if err := writer.Close(); err != nil {
t.Fatal(err)
}
// Simple test to make sure PAX extensions are in effect
if !bytes.Contains(buf.Bytes(), []byte("PaxHeaders.")) {
t.Fatal("Expected at least one PAX header to be written.")
}
// Test that we can get a long name back out of the archive.
reader := NewReader(&buf)
hdr, err = reader.Next()
if err != nil {
t.Fatal(err)
}
if hdr.Name != longName {
t.Fatal("Couldn't recover long file name")
}
}
func TestPaxSymlink(t *testing.T) {
// Create an archive with a large linkname
fileinfo, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
hdr, err := FileInfoHeader(fileinfo, "")
hdr.Typeflag = TypeSymlink
if err != nil {
t.Fatalf("os.Stat:1 %v", err)
}
// Force a PAX long linkname to be written
longLinkname := strings.Repeat("1234567890/1234567890", 10)
hdr.Linkname = longLinkname
hdr.Size = 0
var buf bytes.Buffer
writer := NewWriter(&buf)
if err := writer.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if err := writer.Close(); err != nil {
t.Fatal(err)
}
// Simple test to make sure PAX extensions are in effect
if !bytes.Contains(buf.Bytes(), []byte("PaxHeaders.")) {
t.Fatal("Expected at least one PAX header to be written.")
}
// Test that we can get a long name back out of the archive.
reader := NewReader(&buf)
hdr, err = reader.Next()
if err != nil {
t.Fatal(err)
}
if hdr.Linkname != longLinkname {
t.Fatal("Couldn't recover long link name")
}
}
func TestPaxNonAscii(t *testing.T) {
// Create an archive with non-ASCII names. These should trigger a pax header
// because pax headers have a defined utf-8 encoding.
fileinfo, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
hdr, err := FileInfoHeader(fileinfo, "")
if err != nil {
t.Fatalf("os.Stat:1 %v", err)
}
// some sample data
chineseFilename := "文件名"
chineseGroupname := "組"
chineseUsername := "用戶名"
hdr.Name = chineseFilename
hdr.Gname = chineseGroupname
hdr.Uname = chineseUsername
contents := strings.Repeat(" ", int(hdr.Size))
var buf bytes.Buffer
writer := NewWriter(&buf)
if err := writer.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if _, err = writer.Write([]byte(contents)); err != nil {
t.Fatal(err)
}
if err := writer.Close(); err != nil {
t.Fatal(err)
}
// Simple test to make sure PAX extensions are in effect
if !bytes.Contains(buf.Bytes(), []byte("PaxHeaders.")) {
t.Fatal("Expected at least one PAX header to be written.")
}
// Test that we can get a long name back out of the archive.
reader := NewReader(&buf)
hdr, err = reader.Next()
if err != nil {
t.Fatal(err)
}
if hdr.Name != chineseFilename {
t.Fatal("Couldn't recover unicode name")
}
if hdr.Gname != chineseGroupname {
t.Fatal("Couldn't recover unicode group")
}
if hdr.Uname != chineseUsername {
t.Fatal("Couldn't recover unicode user")
}
}
func TestPaxXattrs(t *testing.T) {
xattrs := map[string]string{
"user.key": "value",
}
// Create an archive with an xattr
fileinfo, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
hdr, err := FileInfoHeader(fileinfo, "")
if err != nil {
t.Fatalf("os.Stat: %v", err)
}
contents := "Kilts"
hdr.Xattrs = xattrs
var buf bytes.Buffer
writer := NewWriter(&buf)
if err := writer.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if _, err = writer.Write([]byte(contents)); err != nil {
t.Fatal(err)
}
if err := writer.Close(); err != nil {
t.Fatal(err)
}
// Test that we can get the xattrs back out of the archive.
reader := NewReader(&buf)
hdr, err = reader.Next()
if err != nil {
t.Fatal(err)
}
if !reflect.DeepEqual(hdr.Xattrs, xattrs) {
t.Fatalf("xattrs did not survive round trip: got %+v, want %+v",
hdr.Xattrs, xattrs)
}
}
func TestPAXHeader(t *testing.T) {
medName := strings.Repeat("CD", 50)
longName := strings.Repeat("AB", 100)
paxTests := [][2]string{
{paxPath + "=/etc/hosts", "19 path=/etc/hosts\n"},
{"a=b", "6 a=b\n"}, // Single digit length
{"a=names", "11 a=names\n"}, // Test case involving carries
{paxPath + "=" + longName, fmt.Sprintf("210 path=%s\n", longName)},
{paxPath + "=" + medName, fmt.Sprintf("110 path=%s\n", medName)}}
for _, test := range paxTests {
key, expected := test[0], test[1]
if result := paxHeader(key); result != expected {
t.Fatalf("paxHeader: got %s, expected %s", result, expected)
}
}
}
func TestUSTARLongName(t *testing.T) {
// Create an archive with a path that failed to split with USTAR extension in previous versions.
fileinfo, err := os.Stat("testdata/small.txt")
if err != nil {
t.Fatal(err)
}
hdr, err := FileInfoHeader(fileinfo, "")
hdr.Typeflag = TypeDir
if err != nil {
t.Fatalf("os.Stat:1 %v", err)
}
// Force a PAX long name to be written. The name was taken from a practical
// example that failed, with every character replaced by numbers to anonymize
// the sample.
longName := "/0000_0000000/00000-000000000/0000_0000000/00000-0000000000000/0000_0000000/00000-0000000-00000000/0000_0000000/00000000/0000_0000000/000/0000_0000000/00000000v00/0000_0000000/000000/0000_0000000/0000000/0000_0000000/00000y-00/0000/0000/00000000/0x000000/"
hdr.Name = longName
hdr.Size = 0
var buf bytes.Buffer
writer := NewWriter(&buf)
if err := writer.WriteHeader(hdr); err != nil {
t.Fatal(err)
}
if err := writer.Close(); err != nil {
t.Fatal(err)
}
// Test that we can get a long name back out of the archive.
reader := NewReader(&buf)
hdr, err = reader.Next()
if err != nil {
t.Fatal(err)
}
if hdr.Name != longName {
t.Fatal("Couldn't recover long name")
}
}
func TestValidTypeflagWithPAXHeader(t *testing.T) {
var buffer bytes.Buffer
tw := NewWriter(&buffer)
fileName := strings.Repeat("ab", 100)
hdr := &Header{
Name: fileName,
Size: 4,
Typeflag: 0,
}
if err := tw.WriteHeader(hdr); err != nil {
t.Fatalf("Failed to write header: %s", err)
}
if _, err := tw.Write([]byte("fooo")); err != nil {
t.Fatalf("Failed to write the file's data: %s", err)
}
tw.Close()
tr := NewReader(&buffer)
for {
header, err := tr.Next()
if err == io.EOF {
break
}
if err != nil {
t.Fatalf("Failed to read header: %s", err)
}
if header.Typeflag != 0 {
t.Fatalf("Typeflag should've been 0, found %d", header.Typeflag)
}
}
}


@@ -1,111 +0,0 @@
// +build ignore
package main
import (
"archive/tar"
"compress/gzip"
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"os"
"github.com/vbatts/tar-split/tar/asm"
"github.com/vbatts/tar-split/tar/storage"
)
var (
flCleanup = flag.Bool("cleanup", true, "cleanup tempfiles")
)
func main() {
flag.Parse()
for _, arg := range flag.Args() {
fh, err := os.Open(arg)
if err != nil {
log.Fatal(err)
}
defer fh.Close()
fi, err := fh.Stat()
if err != nil {
log.Fatal(err)
}
fmt.Printf("inspecting %q (size %dk)\n", fh.Name(), fi.Size()/1024)
packFh, err := ioutil.TempFile("", "packed.")
if err != nil {
log.Fatal(err)
}
defer packFh.Close()
if *flCleanup {
defer os.Remove(packFh.Name())
}
sp := storage.NewJSONPacker(packFh)
fp := storage.NewDiscardFilePutter()
dissam, err := asm.NewInputTarStream(fh, sp, fp)
if err != nil {
log.Fatal(err)
}
var num int
tr := tar.NewReader(dissam)
for {
_, err = tr.Next()
if err != nil {
if err == io.EOF {
break
}
log.Fatal(err)
}
num++
if _, err := io.Copy(ioutil.Discard, tr); err != nil {
log.Fatal(err)
}
}
fmt.Printf(" -- number of files: %d\n", num)
if err := packFh.Sync(); err != nil {
log.Fatal(err)
}
fi, err = packFh.Stat()
if err != nil {
log.Fatal(err)
}
fmt.Printf(" -- size of metadata uncompressed: %dk\n", fi.Size()/1024)
gzPackFh, err := ioutil.TempFile("", "packed.gz.")
if err != nil {
log.Fatal(err)
}
defer gzPackFh.Close()
if *flCleanup {
defer os.Remove(gzPackFh.Name())
}
gzWrtr := gzip.NewWriter(gzPackFh)
if _, err := packFh.Seek(0, 0); err != nil {
log.Fatal(err)
}
if _, err := io.Copy(gzWrtr, packFh); err != nil {
log.Fatal(err)
}
gzWrtr.Close()
if err := gzPackFh.Sync(); err != nil {
log.Fatal(err)
}
fi, err = gzPackFh.Stat()
if err != nil {
log.Fatal(err)
}
fmt.Printf(" -- size of gzip compressed metadata: %dk\n", fi.Size()/1024)
}
}


@@ -1,25 +0,0 @@
## tar-split utility
## Usage
### Disassembly
```bash
$ sha256sum archive.tar
d734a748db93ec873392470510b8a1c88929abd8fae2540dc43d5b26f7537868 archive.tar
$ mkdir ./x
$ tar-split d --output tar-data.json.gz ./archive.tar | tar -C ./x -x
time="2015-07-20T15:45:04-04:00" level=info msg="created tar-data.json.gz from ./archive.tar (read 204800 bytes)"
```
### Assembly
```bash
$ tar-split a --output new.tar --input ./tar-data.json.gz --path ./x/
INFO[0000] created new.tar from ./x/ and ./tar-data.json.gz (wrote 204800 bytes)
$ sha256sum new.tar
d734a748db93ec873392470510b8a1c88929abd8fae2540dc43d5b26f7537868 new.tar
```
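
Programmatically, the two subcommands reduce to a small amount of library
code. A rough sketch, condensed from the `main.go` below (assumes imports of
`compress/gzip`, `io`, and the `tar/asm` and `tar/storage` packages; error
paths trimmed to the essentials):

```go
// disasm mirrors `tar-split d`: pack metadata while the tar passes through.
func disasm(tarIn io.Reader, metaOut io.Writer, tarOut io.Writer) error {
	mfz := gzip.NewWriter(metaOut)
	defer mfz.Close()
	// nil FilePutter: the payloads are extracted separately (e.g. `tar -x`).
	its, err := asm.NewInputTarStream(tarIn, storage.NewJSONPacker(mfz), nil)
	if err != nil {
		return err
	}
	_, err = io.Copy(tarOut, its)
	return err
}

// assemble mirrors `tar-split a`: replay metadata against the extracted tree.
func assemble(metaIn io.Reader, path string, tarOut io.Writer) error {
	mfz, err := gzip.NewReader(metaIn)
	if err != nil {
		return err
	}
	defer mfz.Close()
	ots := asm.NewOutputTarStream(storage.NewPathFileGetter(path), storage.NewJSONUnpacker(mfz))
	defer ots.Close()
	_, err = io.Copy(tarOut, ots)
	return err
}
```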


@@ -1,175 +0,0 @@
// go:generate git tag | tail -1
package main
import (
"compress/gzip"
"io"
"os"
"github.com/Sirupsen/logrus"
"github.com/codegangsta/cli"
"github.com/vbatts/tar-split/tar/asm"
"github.com/vbatts/tar-split/tar/storage"
)
func main() {
app := cli.NewApp()
app.Name = "tar-split"
app.Usage = "tar assembly and disassembly utility"
app.Version = "0.9.2"
app.Author = "Vincent Batts"
app.Email = "vbatts@hashbangbash.com"
app.Action = cli.ShowAppHelp
app.Before = func(c *cli.Context) error {
logrus.SetOutput(os.Stderr)
if c.Bool("debug") {
logrus.SetLevel(logrus.DebugLevel)
}
return nil
}
app.Flags = []cli.Flag{
cli.BoolFlag{
Name: "debug, D",
Usage: "debug output",
// defaults to false
},
}
app.Commands = []cli.Command{
{
Name: "disasm",
Aliases: []string{"d"},
Usage: "disassemble the input tar stream",
Action: CommandDisasm,
Flags: []cli.Flag{
cli.StringFlag{
Name: "output",
Value: "tar-data.json.gz",
Usage: "output of disassembled tar stream",
},
},
},
{
Name: "asm",
Aliases: []string{"a"},
Usage: "assemble tar stream",
Action: CommandAsm,
Flags: []cli.Flag{
cli.StringFlag{
Name: "input",
Value: "tar-data.json.gz",
Usage: "input of disassembled tar stream",
},
cli.StringFlag{
Name: "output",
Value: "-",
Usage: "reassembled tar archive",
},
cli.StringFlag{
Name: "path",
Value: "",
Usage: "relative path of extracted tar",
},
},
},
}
if err := app.Run(os.Args); err != nil {
logrus.Fatal(err)
}
}
func CommandDisasm(c *cli.Context) {
if len(c.Args()) != 1 {
logrus.Fatalf("please specify tar to be disabled <NAME|->")
}
if len(c.String("output")) == 0 {
logrus.Fatalf("--output filename must be set")
}
// Set up the tar input stream
var inputStream io.Reader
if c.Args()[0] == "-" {
inputStream = os.Stdin
} else {
fh, err := os.Open(c.Args()[0])
if err != nil {
logrus.Fatal(err)
}
defer fh.Close()
inputStream = fh
}
// Set up the metadata storage
mf, err := os.OpenFile(c.String("output"), os.O_CREATE|os.O_WRONLY|os.O_TRUNC, os.FileMode(0600))
if err != nil {
logrus.Fatal(err)
}
defer mf.Close()
mfz := gzip.NewWriter(mf)
defer mfz.Close()
metaPacker := storage.NewJSONPacker(mfz)
// we're passing nil here for the file putter, because the extraction of
// the archive payloads is handled elsewhere (e.g. by piping the stream
// through `tar -x`)
its, err := asm.NewInputTarStream(inputStream, metaPacker, nil)
if err != nil {
logrus.Fatal(err)
}
i, err := io.Copy(os.Stdout, its)
if err != nil {
logrus.Fatal(err)
}
logrus.Infof("created %s from %s (read %d bytes)", c.String("output"), c.Args()[0], i)
}
func CommandAsm(c *cli.Context) {
if len(c.Args()) > 0 {
logrus.Warnf("%d additional arguments passed are ignored", len(c.Args()))
}
if len(c.String("input")) == 0 {
logrus.Fatalf("--input filename must be set")
}
if len(c.String("output")) == 0 {
logrus.Fatalf("--output filename must be set ([FILENAME|-])")
}
if len(c.String("path")) == 0 {
logrus.Fatalf("--path must be set")
}
var outputStream io.Writer
if c.String("output") == "-" {
outputStream = os.Stdout
} else {
fh, err := os.Create(c.String("output"))
if err != nil {
logrus.Fatal(err)
}
defer fh.Close()
outputStream = fh
}
// Get the tar metadata reader
mf, err := os.Open(c.String("input"))
if err != nil {
logrus.Fatal(err)
}
defer mf.Close()
mfz, err := gzip.NewReader(mf)
if err != nil {
logrus.Fatal(err)
}
defer mfz.Close()
metaUnpacker := storage.NewJSONUnpacker(mfz)
// XXX maybe get the absolute path here
fileGetter := storage.NewPathFileGetter(c.String("path"))
ots := asm.NewOutputTarStream(fileGetter, metaUnpacker)
defer ots.Close()
i, err := io.Copy(outputStream, ots)
if err != nil {
logrus.Fatal(err)
}
logrus.Infof("created %s from %s and %s (wrote %d bytes)", c.String("output"), c.String("path"), c.String("input"), i)
}


@@ -1,91 +0,0 @@
// +build ignore
package main
import (
"flag"
"fmt"
"io"
"io/ioutil"
"log"
"os"
"github.com/vbatts/tar-split/archive/tar"
)
func main() {
flag.Parse()
log.SetOutput(os.Stderr)
for _, arg := range flag.Args() {
func() {
// Open the tar archive
fh, err := os.Open(arg)
if err != nil {
log.Fatal(err, arg)
}
defer fh.Close()
output, err := os.Create(fmt.Sprintf("%s.out", arg))
if err != nil {
log.Fatal(err)
}
defer output.Close()
log.Printf("writing %q to %q", fh.Name(), output.Name())
fi, err := fh.Stat()
if err != nil {
log.Fatal(err, fh.Name())
}
size := fi.Size()
var sum int64
tr := tar.NewReader(fh)
tr.RawAccounting = true
for {
hdr, err := tr.Next()
if err != nil {
if err != io.EOF {
log.Println(err)
}
// even when an EOF is reached, there are often 1024 null bytes on
// the end of an archive. Collect them too.
post := tr.RawBytes()
output.Write(post)
sum += int64(len(post))
fmt.Printf("EOF padding: %d\n", len(post))
break
}
pre := tr.RawBytes()
output.Write(pre)
sum += int64(len(pre))
var i int64
if i, err = io.Copy(output, tr); err != nil {
log.Println(err)
break
}
sum += i
fmt.Println(hdr.Name, "pre:", len(pre), "read:", i)
}
// it is allowable, and not uncommon, that there is further padding on the
// end of an archive, apart from the expected 1024 null bytes
remainder, err := ioutil.ReadAll(fh)
if err != nil && err != io.EOF {
log.Fatal(err, fh.Name())
}
output.Write(remainder)
sum += int64(len(remainder))
fmt.Printf("Remainder: %d\n", len(remainder))
if size != sum {
fmt.Printf("Size: %d; Sum: %d; Diff: %d\n", size, sum, size-sum)
fmt.Printf("Compare like `cmp -bl %s %s | less`\n", fh.Name(), output.Name())
} else {
fmt.Printf("Size: %d; Sum: %d\n", size, sum)
}
}()
}
}


@@ -1,181 +0,0 @@
package asm
import (
"bytes"
"compress/gzip"
"crypto/sha1"
"fmt"
"hash/crc64"
"io"
"io/ioutil"
"os"
"testing"
"github.com/vbatts/tar-split/tar/storage"
)
var entries = []struct {
Entry storage.Entry
Body []byte
}{
{
Entry: storage.Entry{
Type: storage.FileType,
Name: "./hurr.txt",
Payload: []byte{2, 116, 164, 177, 171, 236, 107, 78},
Size: 20,
},
Body: []byte("imma hurr til I derp"),
},
{
Entry: storage.Entry{
Type: storage.FileType,
Name: "./ermahgerd.txt",
Payload: []byte{126, 72, 89, 239, 230, 252, 160, 187},
Size: 26,
},
Body: []byte("café con leche, por favor"),
},
}
var entriesMangled = []struct {
Entry storage.Entry
Body []byte
}{
{
Entry: storage.Entry{
Type: storage.FileType,
Name: "./hurr.txt",
Payload: []byte{3, 116, 164, 177, 171, 236, 107, 78},
Size: 20,
},
// switch
Body: []byte("imma derp til I hurr"),
},
{
Entry: storage.Entry{
Type: storage.FileType,
Name: "./ermahgerd.txt",
Payload: []byte{127, 72, 89, 239, 230, 252, 160, 187},
Size: 26,
},
// sans, not con
Body: []byte("café sans leche, por favor"),
},
}
func TestTarStreamMangledGetterPutter(t *testing.T) {
fgp := storage.NewBufferFileGetPutter()
// first lets prep a GetPutter and Packer
for i := range entries {
if entries[i].Entry.Type == storage.FileType {
j, csum, err := fgp.Put(entries[i].Entry.Name, bytes.NewBuffer(entries[i].Body))
if err != nil {
t.Error(err)
}
if j != entries[i].Entry.Size {
t.Errorf("size %q: expected %d; got %d",
entries[i].Entry.Name,
entries[i].Entry.Size,
j)
}
if !bytes.Equal(csum, entries[i].Entry.Payload) {
t.Errorf("checksum %q: expected %v; got %v",
entries[i].Entry.Name,
entries[i].Entry.Payload,
csum)
}
}
}
for _, e := range entriesMangled {
if e.Entry.Type == storage.FileType {
rdr, err := fgp.Get(e.Entry.Name)
if err != nil {
t.Error(err)
}
c := crc64.New(storage.CRCTable)
i, err := io.Copy(c, rdr)
if err != nil {
t.Fatal(err)
}
rdr.Close()
csum := c.Sum(nil)
if bytes.Equal(csum, e.Entry.Payload) {
t.Errorf("wrote %d bytes. checksum for %q should not have matched! %v",
i,
e.Entry.Name,
csum)
}
}
}
}
func TestTarStream(t *testing.T) {
var (
expectedSum = "1eb237ff69bca6e22789ecb05b45d35ca307adbd"
expectedSize int64 = 10240
)
fh, err := os.Open("./testdata/t.tar.gz")
if err != nil {
t.Fatal(err)
}
defer fh.Close()
gzRdr, err := gzip.NewReader(fh)
if err != nil {
t.Fatal(err)
}
defer gzRdr.Close()
// Setup where we'll store the metadata
w := bytes.NewBuffer([]byte{})
sp := storage.NewJSONPacker(w)
fgp := storage.NewBufferFileGetPutter()
// wrap the disassembly stream
tarStream, err := NewInputTarStream(gzRdr, sp, fgp)
if err != nil {
t.Fatal(err)
}
// get a sum of the stream after it has passed through to ensure it's the same.
h0 := sha1.New()
tRdr0 := io.TeeReader(tarStream, h0)
// read it all to the bit bucket
i, err := io.Copy(ioutil.Discard, tRdr0)
if err != nil {
t.Fatal(err)
}
if i != expectedSize {
t.Errorf("size of tar: expected %d; got %d", expectedSize, i)
}
if fmt.Sprintf("%x", h0.Sum(nil)) != expectedSum {
t.Fatalf("checksum of tar: expected %s; got %x", expectedSum, h0.Sum(nil))
}
t.Logf("%s", w.String()) // if we fail, then show the packed info
// If we've made it this far, then we'll turn it around and create a tar
// stream from the packed metadata and buffered file contents.
r := bytes.NewBuffer(w.Bytes())
sup := storage.NewJSONUnpacker(r)
// and reuse the fgp that we Put the payloads to.
rc := NewOutputTarStream(fgp, sup)
h1 := sha1.New()
i, err = io.Copy(h1, rc)
if err != nil {
t.Fatal(err)
}
if i != expectedSize {
t.Errorf("size of output tar: expected %d; got %d", expectedSize, i)
}
if fmt.Sprintf("%x", h1.Sum(nil)) != expectedSum {
t.Fatalf("checksum of output tar: expected %s; got %x", expectedSum, h1.Sum(nil))
}
}


@@ -1,66 +0,0 @@
package storage
import (
"encoding/json"
"sort"
"testing"
)
func TestEntries(t *testing.T) {
e := Entries{
Entry{
Type: SegmentType,
Payload: []byte("y'all"),
Position: 1,
},
Entry{
Type: SegmentType,
Payload: []byte("doin"),
Position: 3,
},
Entry{
Type: FileType,
Name: "./hurr.txt",
Payload: []byte("deadbeef"),
Position: 2,
},
Entry{
Type: SegmentType,
Payload: []byte("how"),
Position: 0,
},
}
sort.Sort(e)
if e[0].Position != 0 {
t.Errorf("expected Position 0, but got %d", e[0].Position)
}
}
func TestFile(t *testing.T) {
f := Entry{
Type: FileType,
Name: "./hello.txt",
Size: 100,
Position: 2,
}
buf, err := json.Marshal(f)
if err != nil {
t.Fatal(err)
}
f1 := Entry{}
if err = json.Unmarshal(buf, &f1); err != nil {
t.Fatal(err)
}
if f.Name != f1.Name {
t.Errorf("expected Name %q, got %q", f.Name, f1.Name)
}
if f.Size != f1.Size {
t.Errorf("expected Size %q, got %q", f.Size, f1.Size)
}
if f.Position != f1.Position {
t.Errorf("expected Position %q, got %q", f.Position, f1.Position)
}
}


@@ -1,62 +0,0 @@
package storage
import (
"bytes"
"io/ioutil"
"testing"
)
func TestGetter(t *testing.T) {
fgp := NewBufferFileGetPutter()
files := map[string]map[string][]byte{
"file1.txt": {"foo": []byte{60, 60, 48, 48, 0, 0, 0, 0}},
"file2.txt": {"bar": []byte{45, 196, 22, 240, 0, 0, 0, 0}},
}
for n, b := range files {
for body, sum := range b {
_, csum, err := fgp.Put(n, bytes.NewBufferString(body))
if err != nil {
t.Error(err)
}
if !bytes.Equal(csum, sum) {
t.Errorf("checksum: expected 0x%x; got 0x%x", sum, csum)
}
}
}
for n, b := range files {
for body := range b {
r, err := fgp.Get(n)
if err != nil {
t.Error(err)
}
buf, err := ioutil.ReadAll(r)
if err != nil {
t.Error(err)
}
if body != string(buf) {
t.Errorf("expected %q, got %q", body, string(buf))
}
}
}
}
func TestPutter(t *testing.T) {
fp := NewDiscardFilePutter()
// map[filename]map[body]crc64sum
files := map[string]map[string][]byte{
"file1.txt": {"foo": []byte{60, 60, 48, 48, 0, 0, 0, 0}},
"file2.txt": {"bar": []byte{45, 196, 22, 240, 0, 0, 0, 0}},
"file3.txt": {"baz": []byte{32, 68, 22, 240, 0, 0, 0, 0}},
"file4.txt": {"bif": []byte{48, 9, 150, 240, 0, 0, 0, 0}},
}
for n, b := range files {
for body, sum := range b {
_, csum, err := fp.Put(n, bytes.NewBufferString(body))
if err != nil {
t.Error(err)
}
if !bytes.Equal(csum, sum) {
t.Errorf("checksum on %q: expected %v; got %v", n, sum, csum)
}
}
}
}


@@ -1,163 +0,0 @@
package storage
import (
"bytes"
"compress/gzip"
"io"
"testing"
)
func TestDuplicateFail(t *testing.T) {
e := []Entry{
Entry{
Type: FileType,
Name: "./hurr.txt",
Payload: []byte("abcde"),
},
Entry{
Type: FileType,
Name: "./hurr.txt",
Payload: []byte("deadbeef"),
},
Entry{
Type: FileType,
Name: "hurr.txt", // slightly different path, same file though
Payload: []byte("deadbeef"),
},
}
buf := []byte{}
b := bytes.NewBuffer(buf)
jp := NewJSONPacker(b)
if _, err := jp.AddEntry(e[0]); err != nil {
t.Error(err)
}
if _, err := jp.AddEntry(e[1]); err != ErrDuplicatePath {
t.Errorf("expected failure on duplicate path")
}
if _, err := jp.AddEntry(e[2]); err != ErrDuplicatePath {
t.Errorf("expected failure on duplicate path")
}
}
func TestJSONPackerUnpacker(t *testing.T) {
e := []Entry{
Entry{
Type: SegmentType,
Payload: []byte("how"),
},
Entry{
Type: SegmentType,
Payload: []byte("y'all"),
},
Entry{
Type: FileType,
Name: "./hurr.txt",
Payload: []byte("deadbeef"),
},
Entry{
Type: SegmentType,
Payload: []byte("doin"),
},
}
buf := []byte{}
b := bytes.NewBuffer(buf)
func() {
jp := NewJSONPacker(b)
for i := range e {
if _, err := jp.AddEntry(e[i]); err != nil {
t.Error(err)
}
}
}()
// >> packer_test.go:43: uncompressed: 266
//t.Errorf("uncompressed: %d", len(b.Bytes()))
b = bytes.NewBuffer(b.Bytes())
entries := Entries{}
func() {
jup := NewJSONUnpacker(b)
for {
entry, err := jup.Next()
if err != nil {
if err == io.EOF {
break
}
t.Error(err)
}
entries = append(entries, *entry)
t.Logf("got %#v", entry)
}
}()
if len(entries) != len(e) {
t.Errorf("expected %d entries, got %d", len(e), len(entries))
}
}
// You can use a compress Reader/Writer and make nice savings.
//
// For these two tests, which use the same entry set, it's the difference of
// 266 bytes uncompressed vs 138 bytes compressed.
func TestGzip(t *testing.T) {
e := []Entry{
Entry{
Type: SegmentType,
Payload: []byte("how"),
},
Entry{
Type: SegmentType,
Payload: []byte("y'all"),
},
Entry{
Type: FileType,
Name: "./hurr.txt",
Payload: []byte("deadbeef"),
},
Entry{
Type: SegmentType,
Payload: []byte("doin"),
},
}
buf := []byte{}
b := bytes.NewBuffer(buf)
gzW := gzip.NewWriter(b)
jp := NewJSONPacker(gzW)
for i := range e {
if _, err := jp.AddEntry(e[i]); err != nil {
t.Error(err)
}
}
gzW.Close()
// >> packer_test.go:99: compressed: 138
//t.Errorf("compressed: %d", len(b.Bytes()))
b = bytes.NewBuffer(b.Bytes())
gzR, err := gzip.NewReader(b)
if err != nil {
t.Fatal(err)
}
entries := Entries{}
func() {
jup := NewJSONUnpacker(gzR)
for {
entry, err := jup.Next()
if err != nil {
if err == io.EOF {
break
}
t.Error(err)
}
entries = append(entries, *entry)
t.Logf("got %#v", entry)
}
}()
if len(entries) != len(e) {
t.Errorf("expected %d entries, got %d", len(e), len(entries))
}
}