Packages that use JobConf | |
---|---|
org.apache.hadoop.contrib.index.example | |
org.apache.hadoop.contrib.index.mapred | |
org.apache.hadoop.contrib.utils.join | |
org.apache.hadoop.examples | Hadoop example code. |
org.apache.hadoop.examples.dancing | This package is a distributed implementation of Knuth's dancing links algorithm that can run under Hadoop. |
org.apache.hadoop.mapred | A software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in parallel on large clusters (thousands of nodes) built of commodity hardware in a reliable, fault-tolerant manner. |
org.apache.hadoop.mapred.jobcontrol | Utilities for managing dependent jobs. |
org.apache.hadoop.mapred.join | Given a set of sorted datasets keyed with the same class and yielding equal partitions, it is possible to effect a join of those datasets prior to the map. |
org.apache.hadoop.mapred.lib | Library of generally useful mappers, reducers, and partitioners. |
org.apache.hadoop.mapred.lib.aggregate | Classes for performing various counting and aggregations. |
org.apache.hadoop.mapred.pipes | Hadoop Pipes allows C++ code to use Hadoop DFS and map/reduce. |
org.apache.hadoop.streaming | Hadoop Streaming is a utility which allows users to create and run Map-Reduce jobs with any executables (e.g. Unix shell utilities) as the mapper and/or the reducer. |
org.apache.hadoop.util | Common utilities. |
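JobConf is the central job-configuration object shared by all of the packages above. As a rough, hedged sketch of how it is typically assembled and submitted with the classic org.apache.hadoop.mapred API (the class name, job name, and pass-through mapper/reducer below are illustrative, not taken from this page):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.TextInputFormat;
import org.apache.hadoop.mapred.TextOutputFormat;
import org.apache.hadoop.mapred.lib.IdentityMapper;
import org.apache.hadoop.mapred.lib.IdentityReducer;

public class PassThroughDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf(PassThroughDriver.class);
    conf.setJobName("pass-through");

    // With TextInputFormat, map input keys are byte offsets (LongWritable)
    // and values are the lines themselves (Text); the identity mapper and
    // reducer simply pass them through unchanged.
    conf.setInputFormat(TextInputFormat.class);
    conf.setOutputFormat(TextOutputFormat.class);
    conf.setOutputKeyClass(LongWritable.class);
    conf.setOutputValueClass(Text.class);
    conf.setMapperClass(IdentityMapper.class);
    conf.setReducerClass(IdentityReducer.class);

    // Input and output locations come from the command line.
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // Submit the job and poll for progress until it completes.
    JobClient.runJob(conf);
  }
}
```

JobClient.runJob(conf) blocks and reports progress until the job finishes; JobClient.submitJob(conf), listed further down, is the non-blocking alternative.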
Uses of JobConf in org.apache.hadoop.contrib.index.example |
---|
Methods in org.apache.hadoop.contrib.index.example with parameters of type JobConf | |
---|---|
void |
LineDocLocalAnalysis.configure(JobConf job)
|
void |
IdentityLocalAnalysis.configure(JobConf job)
|
RecordReader<DocumentID,LineDocTextAndOp> |
LineDocInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
Uses of JobConf in org.apache.hadoop.contrib.index.mapred |
---|
Methods in org.apache.hadoop.contrib.index.mapred with parameters of type JobConf | |
---|---|
void |
IndexUpdatePartitioner.configure(JobConf job)
|
void |
IndexUpdateCombiner.configure(JobConf job)
|
void |
IndexUpdateReducer.configure(JobConf job)
|
void |
IndexUpdateMapper.configure(JobConf job)
|
RecordWriter<Shard,Text> |
IndexUpdateOutputFormat.getRecordWriter(FileSystem fs,
JobConf job,
String name,
Progressable progress)
|
Uses of JobConf in org.apache.hadoop.contrib.utils.join |
---|
Fields in org.apache.hadoop.contrib.utils.join declared as JobConf | |
---|---|
protected JobConf |
DataJoinReducerBase.job
|
protected JobConf |
DataJoinMapperBase.job
|
Methods in org.apache.hadoop.contrib.utils.join that return JobConf | |
---|---|
static JobConf |
DataJoinJob.createDataJoinJob(String[] args)
|
Methods in org.apache.hadoop.contrib.utils.join with parameters of type JobConf | |
---|---|
void |
DataJoinReducerBase.configure(JobConf job)
|
void |
DataJoinMapperBase.configure(JobConf job)
|
void |
JobBase.configure(JobConf job)
Initializes a new instance from a JobConf. |
static boolean |
DataJoinJob.runJob(JobConf job)
Submit/run a map/reduce job. |
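A hedged sketch of driving these contrib data-join utilities: createDataJoinJob builds the JobConf from command-line arguments and runJob submits it. The exact argument layout expected by createDataJoinJob (input paths, output path, mapper/reducer classes, etc.) is not documented on this page, so args is simply passed through.

```java
import org.apache.hadoop.contrib.utils.join.DataJoinJob;
import org.apache.hadoop.mapred.JobConf;

public class DataJoinDriver {
  public static void main(String[] args) throws Exception {
    // Build the JobConf from the command-line arguments, then submit it.
    JobConf job = DataJoinJob.createDataJoinJob(args);
    boolean success = DataJoinJob.runJob(job);
    System.exit(success ? 0 : 1);
  }
}
```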
Uses of JobConf in org.apache.hadoop.examples |
---|
Methods in org.apache.hadoop.examples with parameters of type JobConf | |
---|---|
void |
SleepJob.configure(JobConf job)
|
void |
PiEstimator.PiMapper.configure(JobConf job)
Mapper configuration. |
void |
PiEstimator.PiReducer.configure(JobConf job)
Reducer configuration. |
RecordReader<MultiFileWordCount.WordOffset,Text> |
MultiFileWordCount.MyInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
Uses of JobConf in org.apache.hadoop.examples.dancing |
---|
Methods in org.apache.hadoop.examples.dancing with parameters of type JobConf | |
---|---|
void |
DistributedPentomino.PentMap.configure(JobConf conf)
|
Uses of JobConf in org.apache.hadoop.mapred |
---|
Methods in org.apache.hadoop.mapred with parameters of type JobConf | |
---|---|
static void |
FileInputFormat.addInputPath(JobConf conf,
Path path)
Add a Path to the list of inputs for the map-reduce job. |
static void |
FileInputFormat.addInputPaths(JobConf conf,
String commaSeparatedPaths)
Add the given comma separated paths to the list of inputs for the map-reduce job. |
void |
SequenceFileAsBinaryOutputFormat.checkOutputSpecs(FileSystem ignored,
JobConf job)
|
void |
OutputFormatBase.checkOutputSpecs(FileSystem ignored,
JobConf job)
Deprecated. |
void |
FileOutputFormat.checkOutputSpecs(FileSystem ignored,
JobConf job)
|
void |
OutputFormat.checkOutputSpecs(FileSystem ignored,
JobConf job)
Check for validity of the output-specification for the job. |
void |
TextInputFormat.configure(JobConf conf)
|
void |
MapRunner.configure(JobConf job)
|
void |
MapReduceBase.configure(JobConf job)
Default implementation that does nothing. |
void |
KeyValueTextInputFormat.configure(JobConf conf)
|
void |
JobConfigurable.configure(JobConf job)
Initializes a new instance from a JobConf.
static boolean |
OutputFormatBase.getCompressOutput(JobConf conf)
Deprecated. Is the job output compressed? |
static boolean |
FileOutputFormat.getCompressOutput(JobConf conf)
Is the job output compressed? |
static PathFilter |
FileInputFormat.getInputPathFilter(JobConf conf)
Get a PathFilter instance of the filter set for the input paths. |
static Path[] |
FileInputFormat.getInputPaths(JobConf conf)
Get the list of input Paths for the map-reduce job.
static SequenceFile.CompressionType |
SequenceFileOutputFormat.getOutputCompressionType(JobConf conf)
Get the SequenceFile.CompressionType for the output SequenceFile.
static Class<? extends CompressionCodec> |
OutputFormatBase.getOutputCompressorClass(JobConf conf,
Class<? extends CompressionCodec> defaultValue)
Deprecated. Get the CompressionCodec for compressing the job outputs. |
static Class<? extends CompressionCodec> |
FileOutputFormat.getOutputCompressorClass(JobConf conf,
Class<? extends CompressionCodec> defaultValue)
Get the CompressionCodec for compressing the job outputs. |
static Path |
FileOutputFormat.getOutputPath(JobConf conf)
Get the Path to the output directory for the map-reduce job. |
RecordReader<LongWritable,Text> |
TextInputFormat.getRecordReader(InputSplit genericSplit,
JobConf job,
Reporter reporter)
|
RecordReader<K,V> |
SequenceFileInputFilter.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
Create a record reader for the given split |
RecordReader<Text,Text> |
SequenceFileAsTextInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
RecordReader<K,V> |
SequenceFileInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
RecordReader<BytesWritable,BytesWritable> |
SequenceFileAsBinaryInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
abstract RecordReader<K,V> |
MultiFileInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
RecordReader<Text,Text> |
KeyValueTextInputFormat.getRecordReader(InputSplit genericSplit,
JobConf job,
Reporter reporter)
|
abstract RecordReader<K,V> |
FileInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
RecordReader<K,V> |
InputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
Get the RecordReader for the given InputSplit.
RecordWriter<K,V> |
TextOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
RecordWriter<K,V> |
SequenceFileOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
RecordWriter<BytesWritable,BytesWritable> |
SequenceFileAsBinaryOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
abstract RecordWriter<K,V> |
OutputFormatBase.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
Deprecated. |
RecordWriter<WritableComparable,Writable> |
MapFileOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
abstract RecordWriter<K,V> |
FileOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
RecordWriter<K,V> |
OutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
Get the RecordWriter for the given job. |
static Class<?> |
SequenceFileAsBinaryOutputFormat.getSequenceFileOutputKeyClass(JobConf conf)
Get the key class for the SequenceFile |
static Class<?> |
SequenceFileAsBinaryOutputFormat.getSequenceFileOutputValueClass(JobConf conf)
Get the value class for the SequenceFile |
InputSplit[] |
MultiFileInputFormat.getSplits(JobConf job,
int numSplits)
|
InputSplit[] |
FileInputFormat.getSplits(JobConf job,
int numSplits)
Splits files returned by FileInputFormat.listStatus(JobConf) when
they're too big. |
InputSplit[] |
InputFormat.getSplits(JobConf job,
int numSplits)
Logically split the set of input files for the job. |
static long |
TaskLog.getTaskLogLength(JobConf conf)
Get the desired maximum length of task's logs. |
static JobClient.TaskStatusFilter |
JobClient.getTaskOutputFilter(JobConf job)
Get the task output filter out of the JobConf. |
protected static Path |
FileOutputFormat.getTaskOutputPath(JobConf conf,
String name)
Helper function to create the task's temporary output directory and return the path to the task's output file. |
static Path |
FileOutputFormat.getWorkOutputPath(JobConf conf)
Get the Path to the task's temporary output directory for the map-reduce job (see Tasks' Side-Effect Files). |
void |
JobClient.init(JobConf conf)
Connect to the default JobTracker.
static boolean |
JobHistory.init(JobConf conf,
String hostname)
Initialize JobHistory files. |
protected Path[] |
SequenceFileInputFormat.listPaths(JobConf job)
|
protected Path[] |
FileInputFormat.listPaths(JobConf job)
Deprecated. Use FileInputFormat.listStatus(JobConf) instead. |
protected FileStatus[] |
FileInputFormat.listStatus(JobConf job)
List input directories. |
static void |
JobEndNotifier.localRunnerNotification(JobConf conf,
JobStatus status)
|
static void |
JobHistory.JobInfo.logSubmitted(JobID jobId,
JobConf jobConf,
String jobConfPath,
long submitTime)
Log job submitted event to history. |
static void |
JobHistory.JobInfo.logSubmitted(String jobid,
JobConf jobConf,
String jobConfPath,
long submitTime)
Deprecated. |
static void |
JobEndNotifier.registerNotification(JobConf jobConf,
JobStatus status)
|
static RunningJob |
JobClient.runJob(JobConf job)
Utility that submits a job, then polls for progress until the job is complete. |
static void |
OutputFormatBase.setCompressOutput(JobConf conf,
boolean compress)
Deprecated. Set whether the output of the job is compressed. |
static void |
FileOutputFormat.setCompressOutput(JobConf conf,
boolean compress)
Set whether the output of the job is compressed. |
static void |
FileInputFormat.setInputPathFilter(JobConf conf,
Class<? extends PathFilter> filter)
Set a PathFilter to be applied to the input paths for the map-reduce job. |
static void |
FileInputFormat.setInputPaths(JobConf conf,
Path... inputPaths)
Set the array of Paths as the list of inputs for the map-reduce job.
static void |
FileInputFormat.setInputPaths(JobConf conf,
String commaSeparatedPaths)
Sets the given comma separated paths as the list of inputs for the map-reduce job. |
static void |
SequenceFileOutputFormat.setOutputCompressionType(JobConf conf,
SequenceFile.CompressionType style)
Set the SequenceFile.CompressionType for the output SequenceFile.
static void |
OutputFormatBase.setOutputCompressorClass(JobConf conf,
Class<? extends CompressionCodec> codecClass)
Deprecated. Set the CompressionCodec to be used to compress job outputs. |
static void |
FileOutputFormat.setOutputCompressorClass(JobConf conf,
Class<? extends CompressionCodec> codecClass)
Set the CompressionCodec to be used to compress job outputs. |
static void |
FileOutputFormat.setOutputPath(JobConf conf,
Path outputDir)
Set the Path of the output directory for the map-reduce job. |
static void |
SequenceFileAsBinaryOutputFormat.setSequenceFileOutputKeyClass(JobConf conf,
Class<?> theClass)
Set the key class for the SequenceFile |
static void |
SequenceFileAsBinaryOutputFormat.setSequenceFileOutputValueClass(JobConf conf,
Class<?> theClass)
Set the value class for the SequenceFile |
static void |
JobClient.setTaskOutputFilter(JobConf job,
JobClient.TaskStatusFilter newValue)
Modify the JobConf to set the task output filter. |
static JobTracker |
JobTracker.startTracker(JobConf conf)
Start the JobTracker with given configuration. |
RunningJob |
JobClient.submitJob(JobConf job)
Submit a job to the MR system. |
void |
FileInputFormat.validateInput(JobConf job)
Deprecated. |
void |
InputFormat.validateInput(JobConf job)
Deprecated. getSplits is called in the client and can perform any necessary validation of the input |
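Many of the classes above receive their JobConf through the JobConfigurable.configure(JobConf) hook, which the framework calls once per task before any records are processed. A hedged sketch of a mapper that overrides the no-op default from MapReduceBase to read a custom property (the "example.pattern" key is purely illustrative):

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

public class FilteringMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, LongWritable> {

  private String pattern;

  @Override
  public void configure(JobConf job) {
    // Called once per task, before any map() calls, with the job's JobConf.
    pattern = job.get("example.pattern", "");
  }

  public void map(LongWritable key, Text value,
      OutputCollector<Text, LongWritable> output, Reporter reporter)
      throws IOException {
    // Only emit lines that contain the configured pattern.
    if (value.toString().contains(pattern)) {
      output.collect(value, key);
    }
  }
}
```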
Constructors in org.apache.hadoop.mapred with parameters of type JobConf | |
---|---|
FileSplit(Path file,
long start,
long length,
JobConf conf)
Deprecated. |
|
JobClient(JobConf conf)
Build a job client with the given JobConf, and connect to the default JobTracker.
|
MultiFileSplit(JobConf job,
Path[] files,
long[] lengths)
|
|
TaskTracker(JobConf conf)
Start with the local machine name, and the default JobTracker |
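Putting several of the static helpers above together, a hedged sketch of configuring compressed SequenceFile output (the gzip codec and BLOCK compression type are illustrative choices, not requirements):

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.compress.GzipCodec;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileOutputFormat;

public class CompressedOutputExample {
  public static JobConf configure(JobConf conf) {
    // Input and output locations (placeholders).
    FileInputFormat.addInputPath(conf, new Path("/data/in"));
    FileOutputFormat.setOutputPath(conf, new Path("/data/out"));

    // Emit SequenceFiles and compress them block-by-block with gzip.
    conf.setOutputFormat(SequenceFileOutputFormat.class);
    FileOutputFormat.setCompressOutput(conf, true);
    FileOutputFormat.setOutputCompressorClass(conf, GzipCodec.class);
    SequenceFileOutputFormat.setOutputCompressionType(conf,
        SequenceFile.CompressionType.BLOCK);
    return conf;
  }
}
```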
Uses of JobConf in org.apache.hadoop.mapred.jobcontrol |
---|
Methods in org.apache.hadoop.mapred.jobcontrol that return JobConf | |
---|---|
JobConf |
Job.getJobConf()
|
Methods in org.apache.hadoop.mapred.jobcontrol with parameters of type JobConf | |
---|---|
void |
Job.setJobConf(JobConf jobConf)
Set the mapred job conf for this job. |
Constructors in org.apache.hadoop.mapred.jobcontrol with parameters of type JobConf | |
---|---|
Job(JobConf jobConf)
Construct a job. |
|
Job(JobConf jobConf,
ArrayList<Job> dependingJobs)
Construct a job. |
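A hedged sketch of chaining dependent jobs with these constructors. confA and confB are assumed to be fully configured JobConf instances, and JobControl, the usual driver for these Job objects, is not itself listed on this page.

```java
import java.util.ArrayList;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.jobcontrol.Job;
import org.apache.hadoop.mapred.jobcontrol.JobControl;

public class DependentJobsExample {
  public static void run(JobConf confA, JobConf confB) throws Exception {
    Job first = new Job(confA);

    // The second job only starts once the first one has succeeded.
    ArrayList<Job> deps = new ArrayList<Job>();
    deps.add(first);
    Job second = new Job(confB, deps);

    JobControl control = new JobControl("chained-jobs");
    control.addJob(first);
    control.addJob(second);

    // JobControl implements Runnable; run it on its own thread and poll.
    Thread t = new Thread(control);
    t.start();
    while (!control.allFinished()) {
      Thread.sleep(5000);
    }
    control.stop();
  }
}
```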
Uses of JobConf in org.apache.hadoop.mapred.join |
---|
Methods in org.apache.hadoop.mapred.join with parameters of type JobConf | |
---|---|
ComposableRecordReader<K,TupleWritable> |
CompositeInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
Construct a CompositeRecordReader for the children of this InputFormat as defined in the init expression. |
ComposableRecordReader<K,V> |
ComposableInputFormat.getRecordReader(InputSplit split,
JobConf job,
Reporter reporter)
|
InputSplit[] |
CompositeInputFormat.getSplits(JobConf job,
int numSplits)
Build a CompositeInputSplit from the child InputFormats by assigning the ith split from each child to the ith composite split. |
void |
CompositeInputFormat.setFormat(JobConf job)
Interpret a given string as a composite expression. |
void |
CompositeInputFormat.validateInput(JobConf job)
Verify that this composite has children and that all its children can validate their input. |
Constructors in org.apache.hadoop.mapred.join with parameters of type JobConf | |
---|---|
JoinRecordReader(int id,
JobConf conf,
int capacity,
Class<? extends WritableComparator> cmpcl)
|
|
MultiFilterRecordReader(int id,
JobConf conf,
int capacity,
Class<? extends WritableComparator> cmpcl)
|
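A hedged sketch of configuring a map-side join with CompositeInputFormat. The compose(...) helper and the "mapred.join.expr" property are not listed in the table above; they are assumed to be the conventional way of supplying the expression that setFormat(JobConf) interprets.

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFormat;
import org.apache.hadoop.mapred.join.CompositeInputFormat;

public class MapSideJoinExample {
  public static void configure(JobConf job) {
    // Both inputs must be sorted and identically partitioned on the key.
    job.setInputFormat(CompositeInputFormat.class);
    job.set("mapred.join.expr",
        CompositeInputFormat.compose("inner", SequenceFileInputFormat.class,
            new Path("/data/a"), new Path("/data/b")));
  }
}
```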
Uses of JobConf in org.apache.hadoop.mapred.lib |
---|
Methods in org.apache.hadoop.mapred.lib with parameters of type JobConf | |
---|---|
void |
NullOutputFormat.checkOutputSpecs(FileSystem ignored,
JobConf job)
|
void |
RegexMapper.configure(JobConf job)
|
void |
NLineInputFormat.configure(JobConf conf)
|
void |
MultithreadedMapRunner.configure(JobConf jobConf)
|
void |
KeyFieldBasedPartitioner.configure(JobConf job)
|
void |
HashPartitioner.configure(JobConf job)
|
void |
FieldSelectionMapReduce.configure(JobConf job)
|
protected RecordWriter<K,V> |
MultipleTextOutputFormat.getBaseRecordWriter(FileSystem fs,
JobConf job,
String name,
Progressable arg3)
|
protected RecordWriter<K,V> |
MultipleSequenceFileOutputFormat.getBaseRecordWriter(FileSystem fs,
JobConf job,
String name,
Progressable arg3)
|
protected abstract RecordWriter<K,V> |
MultipleOutputFormat.getBaseRecordWriter(FileSystem fs,
JobConf job,
String name,
Progressable arg3)
|
protected String |
MultipleOutputFormat.getInputFileBasedOutputFileName(JobConf job,
String name)
Generate the output file name based on a given name and the input file name.
RecordReader<LongWritable,Text> |
NLineInputFormat.getRecordReader(InputSplit genericSplit,
JobConf job,
Reporter reporter)
|
RecordWriter<K,V> |
NullOutputFormat.getRecordWriter(FileSystem ignored,
JobConf job,
String name,
Progressable progress)
|
RecordWriter<K,V> |
MultipleOutputFormat.getRecordWriter(FileSystem fs,
JobConf job,
String name,
Progressable arg3)
Create a composite record writer that can write key/value data to different output files |
InputSplit[] |
NLineInputFormat.getSplits(JobConf job,
int numSplits)
Logically splits the set of input files for the job, splits N lines of the input as one split. |
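A hedged sketch of using NLineInputFormat from the table above so each map task receives a fixed number of input lines. The "mapred.line.input.format.linespermap" property name is an assumption based on how this class is usually configured; it is not stated on this page.

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.NLineInputFormat;

public class NLineExample {
  public static void configure(JobConf conf) {
    // Split the input so that each mapper processes 1000 lines.
    conf.setInputFormat(NLineInputFormat.class);
    conf.setInt("mapred.line.input.format.linespermap", 1000);
  }
}
```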
Uses of JobConf in org.apache.hadoop.mapred.lib.aggregate |
---|
Methods in org.apache.hadoop.mapred.lib.aggregate that return JobConf | |
---|---|
static JobConf |
ValueAggregatorJob.createValueAggregatorJob(String[] args)
Create an Aggregate based map/reduce job. |
static JobConf |
ValueAggregatorJob.createValueAggregatorJob(String[] args,
Class<? extends ValueAggregatorDescriptor>[] descriptors)
|
Methods in org.apache.hadoop.mapred.lib.aggregate with parameters of type JobConf | |
---|---|
void |
ValueAggregatorJobBase.configure(JobConf job)
|
void |
ValueAggregatorCombiner.configure(JobConf job)
Combiner does not need to configure. |
void |
ValueAggregatorBaseDescriptor.configure(JobConf job)
Get the input file name.
void |
ValueAggregatorDescriptor.configure(JobConf job)
Configure the object |
void |
UserDefinedValueAggregatorDescriptor.configure(JobConf job)
Do nothing. |
static void |
ValueAggregatorJob.setAggregatorDescriptors(JobConf job,
Class<? extends ValueAggregatorDescriptor>[] descriptors)
|
Constructors in org.apache.hadoop.mapred.lib.aggregate with parameters of type JobConf | |
---|---|
UserDefinedValueAggregatorDescriptor(String className,
JobConf job)
|
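The aggregate framework is normally driven exactly as the table suggests: createValueAggregatorJob builds the JobConf from the command-line arguments, and JobClient.runJob submits it. A minimal sketch:

```java
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob;

public class AggregateDriver {
  public static void main(String[] args) throws Exception {
    // Build an Aggregate-based map/reduce job from the arguments and run it.
    JobConf job = ValueAggregatorJob.createValueAggregatorJob(args);
    JobClient.runJob(job);
  }
}
```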
Uses of JobConf in org.apache.hadoop.mapred.pipes |
---|
Methods in org.apache.hadoop.mapred.pipes with parameters of type JobConf | |
---|---|
static String |
Submitter.getExecutable(JobConf conf)
Get the URI of the application's executable. |
static boolean |
Submitter.getIsJavaMapper(JobConf conf)
Check whether the job is using a Java Mapper. |
static boolean |
Submitter.getIsJavaRecordReader(JobConf conf)
Check whether the job is using a Java RecordReader |
static boolean |
Submitter.getIsJavaRecordWriter(JobConf conf)
Will the reduce use a Java RecordWriter? |
static boolean |
Submitter.getIsJavaReducer(JobConf conf)
Check whether the job is using a Java Reducer. |
static boolean |
Submitter.getKeepCommandFile(JobConf conf)
Does the user want to keep the command file for debugging? If this is true, pipes will write a copy of the command data to a file in the task directory named "downlink.data", which may be used to run the C++ program under the debugger. |
static void |
Submitter.setExecutable(JobConf conf,
String executable)
Set the URI for the application's executable. |
static void |
Submitter.setIsJavaMapper(JobConf conf,
boolean value)
Set whether the Mapper is written in Java. |
static void |
Submitter.setIsJavaRecordReader(JobConf conf,
boolean value)
Set whether the job is using a Java RecordReader. |
static void |
Submitter.setIsJavaRecordWriter(JobConf conf,
boolean value)
Set whether the job will use a Java RecordWriter. |
static void |
Submitter.setIsJavaReducer(JobConf conf,
boolean value)
Set whether the Reducer is written in Java. |
static void |
Submitter.setKeepCommandFile(JobConf conf,
boolean keep)
Set whether to keep the command file for debugging |
static RunningJob |
Submitter.submitJob(JobConf conf)
Submit a job to the map/reduce cluster. |
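A hedged sketch of submitting a C++ pipes job with the Submitter methods above. The executable path is a placeholder and is assumed to already be accessible to the task trackers (e.g. via HDFS or the distributed cache).

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.RunningJob;
import org.apache.hadoop.mapred.pipes.Submitter;

public class PipesDriver {
  public static void main(String[] args) throws Exception {
    JobConf conf = new JobConf();
    FileInputFormat.setInputPaths(conf, new Path(args[0]));
    FileOutputFormat.setOutputPath(conf, new Path(args[1]));

    // Point at the C++ binary and let Java handle record reading/writing.
    Submitter.setExecutable(conf, "bin/cpp-wordcount");
    Submitter.setIsJavaRecordReader(conf, true);
    Submitter.setIsJavaRecordWriter(conf, true);

    // Submit to the cluster and wait for the job to finish.
    RunningJob job = Submitter.submitJob(conf);
    job.waitForCompletion();
  }
}
```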
Uses of JobConf in org.apache.hadoop.streaming |
---|
Fields in org.apache.hadoop.streaming declared as JobConf | |
---|---|
protected JobConf |
StreamJob.jobConf_
|
Methods in org.apache.hadoop.streaming with parameters of type JobConf | |
---|---|
void |
PipeMapRed.configure(JobConf job)
|
void |
PipeMapper.configure(JobConf job)
|
static FileSplit |
StreamUtil.getCurrentSplit(JobConf job)
|
RecordReader<Text,Text> |
StreamInputFormat.getRecordReader(InputSplit genericSplit,
JobConf job,
Reporter reporter)
|
static org.apache.hadoop.streaming.StreamUtil.TaskId |
StreamUtil.getTaskInfo(JobConf job)
|
static boolean |
StreamUtil.isLocalJobTracker(JobConf job)
|
void |
StreamBaseRecordReader.validateInput(JobConf job)
This implementation always returns true. |
Constructors in org.apache.hadoop.streaming with parameters of type JobConf | |
---|---|
StreamBaseRecordReader(FSDataInputStream in,
FileSplit split,
Reporter reporter,
JobConf job,
FileSystem fs)
|
|
StreamXmlRecordReader(FSDataInputStream in,
FileSplit split,
Reporter reporter,
JobConf job,
FileSystem fs)
|
Uses of JobConf in org.apache.hadoop.util |
---|
Methods in org.apache.hadoop.util with parameters of type JobConf | |
---|---|
boolean |
NativeCodeLoader.getLoadNativeLibraries(JobConf jobConf)
Return whether native hadoop libraries, if present, can be used for this job.
static String[] |
Shell.getUlimitMemoryCommand(JobConf job)
Get the Unix command for setting the maximum virtual memory available to a given child process. |
void |
NativeCodeLoader.setLoadNativeLibraries(JobConf jobConf,
boolean loadNativeLibraries)
Set whether native hadoop libraries, if present, can be used for this job.
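A hedged sketch of toggling native-library use for a job. The table lists getLoadNativeLibraries and setLoadNativeLibraries as instance methods of NativeCodeLoader, so the example constructs a loader; treat the no-argument constructor as an assumption.

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.util.NativeCodeLoader;

public class NativeLibsExample {
  public static void configure(JobConf conf) {
    NativeCodeLoader loader = new NativeCodeLoader();
    // Allow native hadoop libraries (e.g. compression codecs) if present.
    loader.setLoadNativeLibraries(conf, true);
    boolean enabled = loader.getLoadNativeLibraries(conf);
    System.out.println("Native libraries enabled: " + enabled);
  }
}
```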