This repository has been archived by the owner on Sep 29, 2018. It is now read-only.

update readme
hoffoo committed Jan 28, 2015
1 parent 9a7ff50 commit 20e171c
Showing 2 changed files with 9 additions and 5 deletions.
7 changes: 5 additions & 2 deletions README.md
@@ -1,7 +1,7 @@
 # Elasticsearch Dumper
 
 ## EXAMPLE:
-```elasticsearch-dumper -s source:9200 -d destination:9200 -i index1,index2```
+```elasticsearch-dumper -s http://source:9200 -d http://destination:9200 -i index1,index2```
 
 ## INSTALL:
 1. ```go get github.com/hoffoo/elasticsearch-dump```
@@ -12,12 +12,14 @@
 Application Options:
   -s, --source=  Source elasticsearch instance
   -d, --dest=    Destination elasticsearch instance
-  -c, --count=   Number of documents at a time: ie "size" in the scroll request (100)
+  -c, --count=   Number of documents at a time: ie "size" in the scroll
+                 request (100)
   -t, --time=    Scroll time (1m)
       --settings Copy sharding and replication settings from source (true)
   -f, --force    Delete destination index before copying (false)
   -i, --indexes= List of indexes to copy, comma separated (_all)
   -a, --all      Copy indexes starting with . (false)
+  -w, --workers= Concurrency (1)
 ```
 
 ## NOTES:
@@ -30,6 +32,7 @@ Application Options:
 1. ```--count``` is the [number of documents](http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-request-scroll.html#scroll-scan) that will be requested and bulk indexed at a time. Note that this depends on the number of shards (ie: a size of 10 on 5 shards is 50 documents)
 1. ```--indexes``` is a comma separated list of indexes to copy
 1. ```--all``` copy all indexes, even those starting with '.'. The default is false, to ignore marvel and others
+1. ```--workers``` concurrency when we post to the bulk api. Only one post happens at a time, but higher concurrency should give you more throughput when using larger scroll sizes.
 1. Ports are required, otherwise 80 is the assumed port
 
 ## BUGS:
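To make the ```--count``` note above concrete: with the scroll/scan API, each scroll request returns up to `size` documents *per shard*, so the batch handed to the bulk indexer is `size × shards`. A quick worked example of that arithmetic (plain Go, not code from this repository):

```go
package main

import "fmt"

func main() {
	size := 10  // --count: the "size" sent with each scroll request
	shards := 5 // primary shards in the source index

	// Scroll/scan returns up to `size` documents per shard, so each
	// batch that gets bulk indexed on the destination is:
	batch := size * shards
	fmt.Printf("up to %d documents per scroll\n", batch) // up to 50 documents per scroll
}
```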
7 changes: 4 additions & 3 deletions main.go
@@ -50,7 +50,7 @@ type Config struct {
 	Destructive        bool   `short:"f" long:"force" description:"Delete destination index before copying" default:"false"`
 	IndexNames         string `short:"i" long:"indexes" description:"List of indexes to copy, comma separated" default:"_all"`
 	CopyDotnameIndexes bool   `short:"a" long:"all" description:"Copy indexes starting with ." default:"false"`
-	Workers            int    `short:"w" long:"workers" description:"Concurrency" default:"10"`
+	Workers            int    `short:"w" long:"workers" description:"Concurrency" default:"1"`
 }
 
 func main() {
@@ -68,6 +68,7 @@ func main() {
 		return
 	}
 
+	// enough of a buffer to hold all search results across all workers
 	c.DocChan = make(chan Document, c.DocBufferCount*c.Workers)
 
 	// get all indexes from source
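The comment added in this hunk documents the sizing rule for the channel buffer: one full scroll page (```--count``` documents) per worker, so the scroll reader can keep fetching while every worker is busy posting. A minimal sketch of that rule (the `Document` type and variable names here are stand-ins, not the project's actual definitions):

```go
package main

import "fmt"

// Document stands in for the project's Document type.
type Document struct{ Source []byte }

func main() {
	docBufferCount := 100 // --count: documents per scroll request
	workers := 4          // --workers: concurrent bulk posters

	// One full scroll page per worker can sit in the channel before
	// the producer blocks, keeping the scroll reader ahead of the workers.
	docChan := make(chan Document, docBufferCount*workers)
	fmt.Println("capacity:", cap(docChan)) // capacity: 400
}
```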
@@ -117,8 +118,8 @@ func main() {
 		docEnc := json.NewEncoder(&docBuf)
 		for {
 			doc, ok := <-c.DocChan
-			if !ok { // if channel is closed flush and gtfo
-				// do one final post and gtfo
+			if !ok {
+				// if channel is closed flush and gtfo
 				if docBuf.Len() > 0 {
 					mainBuf.Write(docBuf.Bytes())
 				}
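The hunk above is the standard drain-and-flush shape for a Go consumer: receive until the channel is closed, then post whatever is still buffered. A self-contained sketch of that pattern (all names are hypothetical; only the final flush on channel close mirrors the diff):

```go
package main

import (
	"bytes"
	"fmt"
)

// worker buffers incoming docs, flushes full batches, and flushes
// the remainder once the channel closes.
func worker(docChan <-chan string, flush func(*bytes.Buffer)) {
	var docBuf bytes.Buffer
	for {
		doc, ok := <-docChan
		if !ok {
			// channel closed: one final flush of anything left over
			if docBuf.Len() > 0 {
				flush(&docBuf)
			}
			return
		}
		docBuf.WriteString(doc + "\n")
		if docBuf.Len() >= 1024 { // hypothetical batch threshold
			flush(&docBuf)
		}
	}
}

func main() {
	docChan := make(chan string, 8)
	flush := func(b *bytes.Buffer) {
		fmt.Printf("posting %d bytes\n", b.Len()) // stand-in for the bulk API POST
		b.Reset()
	}

	go func() {
		for i := 0; i < 20; i++ {
			docChan <- fmt.Sprintf(`{"doc":%d}`, i)
		}
		close(docChan)
	}()
	worker(docChan, flush)
}
```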
