java.lang.Object
  org.apache.hadoop.conf.Configured
      org.apache.hadoop.examples.SleepJob

public class SleepJob
extends Configured
implements Mapper<IntWritable,IntWritable,IntWritable,IntWritable>, Reducer<IntWritable,IntWritable,IntWritable,IntWritable>, Partitioner<IntWritable,IntWritable>, Tool
Dummy class for testing the MR framework. Sleeps for a defined period
 of time in the mapper and reducer. Generates fake input for map/reduce
 jobs. Note that the number of generated input pairs is on the order
 of numMappers * mapSleepTime / 100, so the job uses
 some disk space.
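As a back-of-envelope check on the disk-space note above, the estimate can be computed directly (a plain-Java sketch; the class and method names are illustrative, not part of SleepJob's API):

```java
public class SleepJobInputEstimate {
    // Estimated number of generated input records, per the class
    // description: on the order of numMappers * mapSleepTime / 100.
    static long estimatedInputPairs(int numMappers, long mapSleepTimeMs) {
        return numMappers * mapSleepTimeMs / 100;
    }

    public static void main(String[] args) {
        // e.g. 10 mappers, each sleeping 1000 ms -> ~100 fake records
        System.out.println(estimatedInputPairs(10, 1000));
    }
}
```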
| Constructor Summary | |
|---|---|
| SleepJob() | |
| Method Summary | |
|---|---|
| void | close() |
| void | configure(JobConf job): Initializes a new instance from a JobConf. |
| int | getPartition(IntWritable key, IntWritable value, int numPartitions): Get the partition number for a given key (hence record) given the total number of partitions, i.e. the number of reduce tasks for the job. |
| static void | main(String[] args) |
| void | map(IntWritable key, IntWritable value, OutputCollector<IntWritable,IntWritable> output, Reporter reporter): Maps a single input key/value pair into an intermediate key/value pair. |
| void | reduce(IntWritable key, Iterator<IntWritable> values, OutputCollector<IntWritable,IntWritable> output, Reporter reporter): Reduces values for a given key. |
| int | run(int numMapper, int numReducer, long mapSleepTime, long mapSleepCount, long reduceSleepTime, long reduceSleepCount) |
| int | run(String[] args): Execute the command with the given arguments. |
| Methods inherited from class org.apache.hadoop.conf.Configured |
|---|
| getConf, setConf |

| Methods inherited from class java.lang.Object |
|---|
| clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait |

| Methods inherited from interface org.apache.hadoop.conf.Configurable |
|---|
| getConf, setConf |
| Constructor Detail | 
|---|
public SleepJob()
| Method Detail | 
|---|
public int getPartition(IntWritable key,
                        IntWritable value,
                        int numPartitions)
Description copied from interface: Partitioner
Get the partition number for a given key (hence record) given the total number of partitions, i.e. the number of reduce tasks for the job. Typically a hash function on all or a subset of the key.

Specified by: getPartition in interface Partitioner<IntWritable,IntWritable>
Parameters:
    key - the key to be partitioned.
    value - the entry value.
    numPartitions - the total number of partitions.
Returns:
    the partition number for the given key.
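A typical partition function of this shape can be sketched in plain Java (a plain int stands in for IntWritable here; this mirrors the common hash-partitioner pattern described above, not necessarily SleepJob's exact logic):

```java
public class PartitionSketch {
    // Hash-style partitioning: mask off the sign bit so the result is
    // non-negative, then take the remainder modulo numPartitions.
    static int getPartition(int key, int numPartitions) {
        return (Integer.hashCode(key) & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        // The result is always in [0, numPartitions), even for negative keys.
        System.out.println(getPartition(42, 5));   // 2
        System.out.println(getPartition(-7, 3));   // non-negative
    }
}
```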
public void map(IntWritable key,
                IntWritable value,
                OutputCollector<IntWritable,IntWritable> output,
                Reporter reporter)
         throws IOException
Description copied from interface: Mapper
Maps a single input key/value pair into an intermediate key/value pair. Output pairs need not be of the same types as input pairs. A given
 input pair may map to zero or many output pairs. Output pairs are
 collected with calls to
 OutputCollector.collect(Object,Object).
Applications can use the Reporter provided to report progress
 or just indicate that they are alive. In scenarios where the application
 takes a significant amount of time to process individual key/value
 pairs, this is crucial since the framework might assume that the task has
 timed out and kill that task. The other way of avoiding this is to set
 mapred.task.timeout to a high enough value (or even zero for no
 timeouts).
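The liveness pattern described above can be sketched in plain Java (the Reporter interface below is a minimal hypothetical stand-in for org.apache.hadoop.mapred.Reporter, modeling only the progress() call):

```java
public class ProgressSketch {
    // Minimal stand-in for Hadoop's Reporter: only progress() is modeled.
    interface Reporter { void progress(); }

    // Process one long-running record, pinging the reporter every `interval`
    // iterations so the framework does not conclude the task has hung.
    // Returns the number of progress pings made, for illustration.
    static int processRecord(int iterations, int interval, Reporter reporter) {
        int pings = 0;
        for (int i = 0; i < iterations; i++) {
            // ... expensive per-slice work would go here ...
            if (i % interval == 0) {
                reporter.progress();
                pings++;
            }
        }
        return pings;
    }

    public static void main(String[] args) {
        // 1000 iterations, pinging every 100 -> 10 progress calls
        System.out.println(processRecord(1000, 100, () -> {}));
    }
}
```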
Specified by: map in interface Mapper<IntWritable,IntWritable,IntWritable,IntWritable>
Parameters:
    key - the input key.
    value - the input value.
    output - collects mapped keys and values.
    reporter - facility to report progress.
Throws:
    IOException
public void reduce(IntWritable key,
                   Iterator<IntWritable> values,
                   OutputCollector<IntWritable,IntWritable> output,
                   Reporter reporter)
            throws IOException
Description copied from interface: Reducer
Reduces values for a given key. The framework calls this method for each
 <key, (list of values)> pair in the grouped inputs.
 Output values must be of the same type as input values. Input keys must
 not be altered. The framework will reuse the key and value objects
 that are passed into the reduce, therefore the application should clone
 the objects they want to keep a copy of. In many cases, all values are
 combined into zero or one value.
 
Output pairs are collected with calls to  
 OutputCollector.collect(Object,Object).
Applications can use the Reporter provided to report progress
 or just indicate that they are alive. In scenarios where the application
 takes a significant amount of time to process individual key/value
 pairs, this is crucial since the framework might assume that the task has
 timed out and kill that task. The other way of avoiding this is to set
 mapred.task.timeout to a high enough value (or even zero for no
 timeouts).
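The object-reuse caveat above can be demonstrated with a minimal plain-Java sketch (the Holder class is a hypothetical stand-in for IntWritable, not Hadoop code):

```java
import java.util.ArrayList;
import java.util.List;

public class ReuseDemo {
    // Minimal mutable holder, standing in for a reused IntWritable.
    static class Holder {
        int value;
        Holder(int v) { value = v; }
        Holder copy() { return new Holder(value); }
    }

    public static void main(String[] args) {
        Holder reused = new Holder(0);        // the "framework" reuses this object
        List<Holder> wrong = new ArrayList<>();
        List<Holder> right = new ArrayList<>();
        for (int v = 1; v <= 3; v++) {
            reused.value = v;                 // framework overwrites in place
            wrong.add(reused);                // keeping the reference: stale later
            right.add(reused.copy());         // cloning keeps a true snapshot
        }
        System.out.println(wrong.get(0).value);  // 3 -- all entries alias one object
        System.out.println(right.get(0).value);  // 1 -- copy preserved the value
    }
}
```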
Specified by: reduce in interface Reducer<IntWritable,IntWritable,IntWritable,IntWritable>
Parameters:
    key - the key.
    values - the list of values to reduce.
    output - to collect keys and combined values.
    reporter - facility to report progress.
Throws:
    IOException

public void configure(JobConf job)
Initializes a new instance from a JobConf.
Specified by: configure in interface JobConfigurable
Parameters:
    job - the configuration
public void close()
           throws IOException
Specified by: close in interface Closeable
Throws:
    IOException
public static void main(String[] args)
                 throws Exception
Throws:
    Exception
public int run(int numMapper,
               int numReducer,
               long mapSleepTime,
               long mapSleepCount,
               long reduceSleepTime,
               long reduceSleepCount)
        throws Exception
Throws:
    Exception
public int run(String[] args)
        throws Exception
Execute the command with the given arguments.
Specified by: run in interface Tool
Parameters:
    args - command specific arguments.
Returns:
    exit code.
Throws:
    Exception