[elasticsearch] How to move Elasticsearch data from one server to another

How do I move Elasticsearch data from one server to another?

I have server A running Elasticsearch 1.1.1 on a single local node with multiple indices. I would like to copy that data to server B, which runs Elasticsearch 1.3.4.

Procedure so far

  1. Shut down ES on both servers.
  2. scp all the data to the correct data dir on the new server (data seems to be located at /var/lib/elasticsearch/ on my Debian boxes).
  3. Change permissions and ownership to elasticsearch:elasticsearch.
  4. Start up the new ES server.

When I look at the cluster with the ES head plugin, no indices appear.

It seems that the data is not loaded. Am I missing something?


The answer is

You can use the snapshot/restore feature available in Elasticsearch for this. Once you have set up a filesystem-based snapshot repository, you can move it between clusters and restore it on a different cluster.

You can take a snapshot of the complete state of your cluster (including all data indices) and restore it (using the restore API) in the new cluster or server.
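
As a minimal sketch (the repository name my_backup, the path /mnt/es_backups, and the hosts are placeholders; the location must be listed under path.repo in elasticsearch.yml on each node):

# Register a filesystem snapshot repository on the source cluster
curl -XPUT 'http://localhost:9200/_snapshot/my_backup' -d '{
  "type": "fs",
  "settings": { "location": "/mnt/es_backups" }
}'

# Snapshot all indices
curl -XPUT 'http://localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true'

# Copy /mnt/es_backups to the new server, register the same repository there,
# then restore:
curl -XPOST 'http://localhost:9200/_snapshot/my_backup/snapshot_1/_restore'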

If you simply need to transfer data from one Elasticsearch server to another, you could also use elasticsearch-document-transfer.


  1. Open a terminal in an empty directory and run
    $ npm install elasticsearch-document-transfer
  2. Create a file config.js.
  3. Add the connection details of both Elasticsearch servers to config.js.
  4. Set appropriate values in options.js.
  5. Run in the terminal
    $ node index.js

Use ElasticDump

1) yum install epel-release

2) yum install nodejs

3) yum install npm

4) npm install elasticdump

5) cd node_modules/elasticdump/bin


# Fill in the placeholder source and destination URLs:
./elasticdump \
  --input=http://SOURCE_HOST:9200/SOURCE_INDEX \
  --output=http://DEST_HOST:9200/DEST_INDEX


If anyone encounters the same issue: when trying to dump from Elasticsearch <2.0 to >2.0, you need to run:

elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=analyzer
elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=mapping
elasticdump --input=http://localhost:9200/$SRC_IND --output=http://$TARGET_IP:9200/$TGT_IND --type=data --transform "delete doc._source['_id']"

If you don't want to use elasticdump as a console tool, you can also drive it from a Node.js script.

There is also the _reindex option

From the documentation:

Through the Elasticsearch reindex API, available in version 5.x and later, you can connect your new Elasticsearch Service deployment remotely to your old Elasticsearch cluster. This pulls the data from your old cluster and indexes it into your new one. Reindexing essentially rebuilds the index from scratch, and it can be more resource-intensive to run.

POST _reindex
{
  "source": {
    "remote": {
      "host": "http://OLD_HOST:9200",
      "username": "USER",
      "password": "PASSWORD"
    },
    "index": "INDEX_NAME",
    "query": {
      "match_all": {}
    }
  },
  "dest": {
    "index": "INDEX_NAME"
  }
}

I tried this on Ubuntu to move data from ELK 2.4.3 to ELK 5.1.1.

Following are the steps:

$ sudo apt-get update

$ sudo apt-get install -y python-software-properties python g++ make

$ sudo add-apt-repository ppa:chris-lea/node.js

$ sudo apt-get update

$ sudo apt-get install npm

$ sudo apt-get install nodejs

$ npm install colors

$ npm install nomnom

$ npm install elasticdump

In the home directory, go to

$ cd node_modules/elasticdump/

and execute the command.

If you need basic HTTP auth, you can use it like this:


Copy an index from production:

$ ./bin/elasticdump --input="http://Source:9200/Sourceindex" --output="http://username:password@Destination:9200/Destination_index" --type=data

If you can add the second server to the cluster, you may do this:

  1. Add server B to the cluster with server A.
  2. Increment the number of replicas for the indices (see the sketch after this list).
  3. ES will automatically copy the indices to server B.
  4. Shut down server A.
  5. Decrement the number of replicas for the indices.

This will only work if the number of replicas is at least the number of nodes minus one, so that server B ends up holding a complete copy of every index.
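
Step 2 is a one-line settings change; a minimal sketch with a placeholder index name my_index (for a two-node cluster, one replica is enough):

# One replica means each shard gets a copy on the second node
curl -XPUT 'http://localhost:9200/my_index/_settings' -d '{
  "index": { "number_of_replicas": 1 }
}'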

The selected answer makes it sound slightly more complex than it is; the following is all you need (install npm first on your system).

npm install -g elasticdump
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=mapping
elasticdump --input=http://mysrc.com:9200/my_index --output=http://mydest.com:9200/my_index --type=data

You can skip the first elasticdump command for subsequent copies if the mappings remain constant.

I have just done a migration from AWS to Qbox.io with the above without any problems.

More details over at:


Help page (as of Feb 2016) included for completeness:

elasticdump: Import and export tools for elasticsearch

Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]

--input
                    Source location (required)
--input-index
                    Source index and type
                    (default: all, example: index/type)
--output
                    Destination location (required)
--output-index
                    Destination index and type
                    (default: all, example: index/type)
--limit
                    How many objects to move in bulk per operation
                    limit is approximate for file streams
                    (default: 100)
--debug
                    Display the elasticsearch commands being used
                    (default: false)
--type
                    What are we exporting?
                    (default: data, options: [data, mapping])
--delete
                    Delete documents one-by-one from the input as they are
                    moved.  Will not delete the source index
                    (default: false)
--searchBody
                    Perform a partial extract based on search results
                    (when ES is the input,
                    (default: '{"query": { "match_all": {} } }'))
--sourceOnly
                    Output only the json contained within the document _source
                    Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
                    sourceOnly: {SOURCE}
                    (default: false)
--all
                    Load/store documents from ALL indexes
                    (default: false)
--bulk
                    Leverage elasticsearch Bulk API when writing documents
                    (default: false)
--ignore-errors
                    Will continue the read/write loop on write error
                    (default: false)
--scrollTime
                    Time the nodes will hold the requested search in order.
                    (default: 10m)
--maxSockets
                    How many simultaneous HTTP requests can we process?
                    (default:
                      5 [node <= v0.10.x] /
                      Infinity [node >= v0.11.x])
--bulk-mode
                    The mode can be index, delete or update.
                    'index': Add or replace documents on the destination index.
                    'delete': Delete documents on destination index.
                    'update': Use 'doc_as_upsert' option with bulk update API to do partial update.
                    (default: index)
--bulk-use-output-index-name
                    Force use of destination index name (the actual output URL)
                    as destination while bulk writing to ES. Allows
                    leveraging Bulk API copying data inside the same
                    elasticsearch instance.
                    (default: false)
--timeout
                    Integer containing the number of milliseconds to wait for
                    a request to respond before aborting the request. Passed
                    directly to the request library. If used in bulk writing,
                    it will result in the entire batch not being written.
                    Mostly used when you don't care too much if you lose some
                    data when importing but rather have speed.
--skip
                    Integer containing the number of rows you wish to skip
                    ahead from the input transport.  When importing a large
                    index, things can go wrong, be it connectivity, crashes,
                    someone forgetting to `screen`, etc.  This allows you
                    to start the dump again from the last known line written
                    (as logged by the `offset` in the output).  Please be
                    advised that since no sorting is specified when the
                    dump is initially created, there's no real way to
                    guarantee that the skipped rows have already been
                    written/parsed.  This is more of an option for when
                    you want to get most data as possible in the index
                    without concern for losing some rows in the process,
                    similar to the `timeout` option.
--inputTransport
                    Provide a custom js file to use as the input transport
--outputTransport
                    Provide a custom js file to use as the output transport
--toLog
                    When using a custom outputTransport, should log lines
                    be appended to the output stream?
                    (default: true, except for `$`)
--help
                    This page


# Copy an index from production to staging with mappings:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=mapping
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=http://staging.es.com:9200/my_index \
  --type=data

# Backup index data to a file:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index_mapping.json \
  --type=mapping
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=/data/my_index.json \
  --type=data

# Backup and index to a gzip using stdout:
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=$ \
  | gzip > /data/my_index.json.gz

# Backup ALL indices, then use Bulk API to populate another ES cluster:
elasticdump \
  --all=true \
  --input=http://production-a.es.com:9200/ \
  --output=/data/production.json
elasticdump \
  --bulk=true \
  --input=/data/production.json \
  --output=http://production-b.es.com:9200/

# Backup the results of a query to a file
elasticdump \
  --input=http://production.es.com:9200/my_index \
  --output=query.json \
  --searchBody '{"query":{"term":{"username": "admin"}}}'

Learn more @ https://github.com/taskrabbit/elasticsearch-dump

I've always had success simply copying the index directory/folder over to the new server and restarting Elasticsearch. You'll find the index id by running GET /_cat/indices, and the folder matching this id is in data\nodes\0\indices (usually inside your Elasticsearch folder, unless you moved it).
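
For example (the host is a placeholder; on recent versions the uuid column is what matches the folder name):

curl -XGET 'http://localhost:9200/_cat/indices?v'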

We can use elasticdump or multielasticdump to take a backup and restore it; this way we can move data from one server/cluster to another.
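
For example, a full dump and load with multielasticdump might look like this (a sketch following the elasticsearch-dump README; the hosts and the backup directory are placeholders):

# Dump every index (data, mapping, analyzer) from the old cluster to a directory
multielasticdump --direction=dump --input=http://oldhost:9200 --output=/tmp/es_backup

# Load the directory into the new cluster
multielasticdump --direction=load --input=/tmp/es_backup --output=http://newhost:9200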

Please find a detailed answer which I have provided here.