Reading content of ElasticSearch index into Pig Script

In Using ElasticSearch for storing output of Pig Script, I built a sample that stores the output of a Pig script in ElasticSearch. I wanted to try the reverse, using an index/search result in ElasticSearch as input to a Pig script, so I built this sample.
  1. First follow step 3 in Using ElasticSearch for storing output of Pig Script to download the ElasticSearch Hadoop jars and upload them into the HDFS store.
  2. After that create a Pig script like this. The first two lines make the ElasticSearch Hadoop jars available to Pig. The DEFINE statement creates an alias for org.elasticsearch.hadoop.pig.EsStorage, giving it the simple, user-friendly name ES. The fourth line tells Pig to load the content of the pig/cricket index on the local ElasticSearch into variable A, and the last line dumps the content of variable A.
    REGISTER /user/root/elasticsearch-hadoop-2.0.0.RC1/dist/elasticsearch-hadoop-2.0.0.RC1.jar
    REGISTER /user/root/elasticsearch-hadoop-2.0.0.RC1/dist/elasticsearch-hadoop-pig-2.0.0.RC1.jar
    DEFINE ES org.elasticsearch.hadoop.pig.EsStorage;
    A = LOAD 'pig/cricket' USING ES;
    DUMP A;
After I executed the script I could see output like this
Note: Before I got it to work I was using v = LOAD 'pig/cricket' USING org.elasticsearch.pig.EsStorage to load the content of ES, and it kept throwing the following error. I realized I was using the wrong package name (the class actually lives in org.elasticsearch.hadoop.pig)

grunt> v = LOAD 'pig/cricket' USING org.elasticsearch.pig.EsStorage;
2014-05-14 15:56:48,873 [main] ERROR - ERROR 1070: Could not resolve org.elasticsearch.pig.EsStorage using imports: [, java.lang., org.apache.pig.builtin., org.apache.pig.impl.builtin.]
Details at logfile: /root/pig_1400106825043.log
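As an aside, EsStorage also accepts configuration options as constructor arguments, which makes it possible to load only the documents matching a query instead of the whole index. A sketch (the es.query value here is an assumption, not part of the original script):

```pig
-- Load only the documents in pig/cricket whose skill field matches 'batsman'
A = LOAD 'pig/cricket' USING org.elasticsearch.hadoop.pig.EsStorage('es.query=?q=skill:batsman');
DUMP A;
```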

Using ElasticSearch for storing output of Pig Script

I wanted to learn how to use ElasticSearch for storing the output of a Pig script. So I created a simple text file that has the names of cricket players, their role in the team, and their email id. Then I used a Pig script to load the text file into ElasticSearch. I used the following steps
  1. First I created a cricket.txt file that contains the cricketers' information like this
    Virat Kohli batsman
    MahendraSingh Dhoni batsman
    Shikhar Dhawan batsman
  2. The next step was to upload the cricket.txt file to the HDFS /user/root directory
    hdfs dfs -copyFromLocal cricket.txt /user/root/cricket.txt
  3. After that I downloaded the ElasticSearch Hadoop zip and expanded it on my local machine. Then I decided to upload the whole elasticsearch-hadoop-2.0.0.RC1 directory to HDFS so that it is available from all the nodes in the cluster
    hdfs dfs -copyFromLocal elasticsearch-hadoop-2.0.0.RC1/ /user/root/
  4. Then I created this cricket.pig script. As a first step it registers the ElasticSearch-related jar files with Pig, then it loads the content of cricket.txt into the cricket variable, and finally it stores that content into the pig/cricket index on localhost
    -- Register the ElasticSearch Hadoop related jar files
    REGISTER /user/root/elasticsearch-hadoop-2.0.0.RC1/dist/elasticsearch-hadoop-2.0.0.RC1.jar
    REGISTER /user/root/elasticsearch-hadoop-2.0.0.RC1/dist/elasticsearch-hadoop-pig-2.0.0.RC1.jar
    -- Load the content of /user/root/cricket.txt into Pig
    cricket = LOAD '/user/root/cricket.txt' USING PigStorage(' ') AS (fname:chararray, lname:chararray, skill:chararray, email:chararray);
    DUMP cricket;
    -- Store the content of the cricket variable into the pig/cricket index on the local ElasticSearch instance
    STORE cricket INTO 'pig/cricket' USING org.elasticsearch.hadoop.pig.EsStorage();
After running the Pig script I verified the content of the pig/cricket index on ES, and I could see the content of the text file like this
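One way to see the stored documents is to query the index directly with curl (host and port here are assumed to be the ElasticSearch defaults):

```shell
# Return all documents in the pig/cricket index, pretty-printed
curl 'http://localhost:9200/pig/cricket/_search?pretty'
```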

Using ElasticSearch as external data store with Apache Hive

ElasticSearch Hadoop has a feature in which you can configure a Hive table that actually points to an index in ElasticSearch. I wanted to learn how to use this feature, so I followed these steps
  1. First I created the contact/contact index and type in ElasticSearch and inserted 4 records into it like this
  2. Next I downloaded the ElasticSearch Hadoop zip file on my Hadoop VM and expanded it in the /root directory
  3. Next I had to start the Hive console with the following command; note how I had to add elasticsearch-hadoop-2.0.0.RC1.jar to hive.aux.jars.path
    hive -hiveconf hive.aux.jars.path=/root/elasticsearch-hadoop-2.0.0.RC1/dist/elasticsearch-hadoop-2.0.0.RC1.jar
  4. Next I defined an artists table in Hive that points to the contact index on the ElasticSearch server like this
    CREATE EXTERNAL TABLE artists (
      fname STRING,
      lname STRING,
      email STRING)
    STORED BY 'org.elasticsearch.hadoop.hive.EsStorageHandler'
    TBLPROPERTIES('es.resource' = 'contact/contact',
                  'es.index.auto.create' = 'false');
  5. Once the table is configured, I could query it like any normal Hive table, like this
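The query itself is not shown above; a minimal sketch of what it would look like in HiveQL:

```sql
-- Reads the rows straight out of the contact/contact index in ElasticSearch
SELECT fname, lname, email FROM artists;
```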

Using ElasticSearch to store output of MapReduce program

I wanted to use ElasticSearch for storing the output of a MapReduce program. So I modified the WordCount (HelloWorld) MapReduce program so that it stores its output in ElasticSearch instead of a text file. You can download the complete project from here
  1. First change the Maven build script to declare a dependency on elasticsearch-hadoop-mr like this. I had to try a few combinations before this worked (watch out for Jackson mapper version mismatches)
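The dependency declaration itself is not shown above; assuming the same 2.0.0.RC1 release used elsewhere in this post, it would look roughly like this:

```xml
<dependency>
  <groupId>org.elasticsearch</groupId>
  <artifactId>elasticsearch-hadoop-mr</artifactId>
  <version>2.0.0.RC1</version>
</dependency>
```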
  2. Next change your MapReduce driver class to use EsOutputFormat as the output format. You have to set the es.nodes property to the host and port of the ElasticSearch server that you want to use for storing output. The value of es.resource points to the index and type name in ElasticSearch where the output should be stored. In my case ElasticSearch is running on the local machine.
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.printf("Usage: %s [generic options] <input> <output>\n",
                    getClass().getSimpleName());
            return -1;
        }
        Job job = new Job();
        System.out.println("Input path " + args[0]);
        System.out.println("Output path " + args[1]);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        // Configuration for using ElasticSearch as OutputFormat
        Configuration configuration = job.getConfiguration();
        configuration.set("es.nodes", "localhost:9200");
        configuration.set("es.resource", "wordcount2/word"); // index/type in ES; the type name is assumed
        job.setOutputFormatClass(EsOutputFormat.class);
        int returnValue = job.waitForCompletion(true) ? 0 : 1;
        System.out.println("job.isSuccessful " + job.isSuccessful());
        return returnValue;
    }
  3. As a last step before starting the MapReduce program, I had to start the ElasticSearch 1.1 server on my local machine
  4. After running the program, when I searched the wordcount2 index I found results like this

Using WebHDFS as input and output for MapReduce program

In the WordCount(HelloWorld) MapReduce program blog I talked about how to create a simple WordCount MapReduce program. Then in the WebHDFS REST API entry I blogged about how to configure a WebHDFS endpoint for your Hadoop installation. I wanted to combine both of those things, so that my MapReduce program reads its input using WebHDFS and writes its output back to HDFS using WebHDFS. First I changed the program arguments to use webhdfs URLs for both the input and the output of the MapReduce program.

hadoop jar WordCount.jar webhdfs:// webhdfs://
When I tried to run this program I got an exception: it was taking the login user name on my machine (I run Hadoop on a VM and my Eclipse IDE with MapReduce on my machine directly) and using it to run the MapReduce program, and HDFS does not allow that user to create any files in HDFS.

14/05/09 10:10:18 WARN mapred.LocalJobRunner: job_local_0001 Permission denied: user=gpzpati, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(
 at java.lang.reflect.Constructor.newInstance(
 at org.apache.hadoop.ipc.RemoteException.instantiateException(
 at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.mkdirs(
 at org.apache.hadoop.fs.FileSystem.mkdirs(
 at org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter.setupJob(
 at org.apache.hadoop.mapred.LocalJobRunner$
Caused by: Permission denied: user=gpzpati, access=WRITE, inode="/":hdfs:supergroup:drwxr-xr-x
 at org.apache.hadoop.hdfs.web.JsonUtil.toRemoteException(
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(
 ... 5 more
So I changed the main method of my program to wrap the job submission in a UserGroupInformation.doAs() call, in which I override the name of the user used for running MapReduce to hdfs. Now it works fine. Note: For this program to work, your NameNode and DataNodes should have valid (network-resolvable) host names, because when you run the MapReduce job it first makes an OPEN request to the webhdfs:// URL, and the NameNode sends a redirect response with a URL pointing to a DataNode like this

$curl -i ''
Cache-Control: no-cache
Expires: Tue, 06 May 2014 21:44:56 GMT
Date: Tue, 06 May 2014 21:44:56 GMT
Pragma: no-cache
Expires: Tue, 06 May 2014 21:44:56 GMT
Date: Tue, 06 May 2014 21:44:56 GMT
Pragma: no-cache
Location: http://ubuntu:50075/webhdfs/v1/test/startupdemo.txt?op=OPEN&namenoderpcaddress=localhost:9000&offset=0
Content-Type: application/octet-stream
Content-Length: 0
Server: Jetty(6.1.26)
Now, if your DataNode hostname (ubuntu in my case) is not directly addressable, the job will fail with an error like the one below. You can fix this issue by mapping the name ubuntu to the right IP in your /etc/hosts file.

14/05/09 10:16:34 WARN mapred.LocalJobRunner: job_local_0001 ubuntu
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.validateResponse(
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.access$000(
 at org.apache.hadoop.hdfs.web.WebHdfsFileSystem$OffsetUrlInputStream.checkResponseCode(
 at org.apache.hadoop.hdfs.ByteRangeInputStream.openInputStream(
 at org.apache.hadoop.hdfs.ByteRangeInputStream.getInputStream(
 at org.apache.hadoop.util.LineReader.readDefaultLine(
 at org.apache.hadoop.util.LineReader.readLine(
 at org.apache.hadoop.mapreduce.lib.input.LineRecordReader.nextKeyValue(
 at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(
 at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(
 at org.apache.hadoop.mapred.MapTask.runNewMapper(
 at org.apache.hadoop.mapred.LocalJobRunner$
14/05/09 10:16:35 INFO mapred.JobClient:  map 0% reduce 0%
14/05/09 10:16:35 INFO mapred.JobClient: Job complete: job_local_0001
14/05/09 10:16:35 INFO mapred.JobClient: Counters: 0
job.isSuccessful false
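The UserGroupInformation.doAs() wrapper mentioned above is not shown in this post; a minimal sketch of what it could look like, where WordCountLauncher and WordCountDriver are hypothetical class names for this example:

```java
import java.security.PrivilegedExceptionAction;

import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.util.ToolRunner;

public class WordCountLauncher {
    public static void main(final String[] args) throws Exception {
        // Run the job as user "hdfs" instead of the local login user
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hdfs");
        ugi.doAs(new PrivilegedExceptionAction<Integer>() {
            public Integer run() throws Exception {
                // WordCountDriver is a hypothetical name for the Tool implementation shown earlier
                return ToolRunner.run(new WordCountDriver(), args);
            }
        });
    }
}
```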


WebHDFS REST API

Hadoop provides a REST API that exposes access to HDFS over HTTP. WebHDFS provides the ability to read and write files in HDFS and supports all the file system operations. It also provides security using Kerberos (SPNEGO) and Hadoop delegation tokens for authentication. You can find more information about it here. I wanted to try the WebHDFS API, so I followed these steps: first I changed the hdfs-site.xml file to set the value of the dfs.webhdfs.enabled property to true. Then I restarted my Hadoop server, and while the HDFS NameNode was starting I looked at the logs to verify that WebHDFS had started
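The hdfs-site.xml change to enable WebHDFS looks like this:

```xml
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
```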
After restarting the server I used curl to test a couple of WebHDFS REST API calls like this
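The calls themselves are not shown above; a couple of typical ones would look roughly like this (host, port, and the file path are assumptions based on the redirect shown earlier):

```shell
# List the status of the HDFS root directory
curl -i 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS'

# Read a file; -L follows the temporary redirect to the DataNode
curl -i -L 'http://localhost:50070/webhdfs/v1/test/startupdemo.txt?op=OPEN'
```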