GenericOptionsParser - Class in org.apache.hadoop.util
GenericOptionsParser is a utility to parse command line arguments generic to the Hadoop framework.
GenericOptionsParser(Configuration, Options, String[]) -
Constructor for class org.apache.hadoop.util.GenericOptionsParser
Create a GenericOptionsParser
to parse given options as well
as generic Hadoop options.
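GenericOptionsParser peels the framework-generic options (for example -D key=value definitions) off the command line before a tool sees its own arguments. A rough, Hadoop-free sketch of that -D handling (the class and method names below are illustrative, not Hadoop's actual implementation):

```java
import java.util.*;

// Illustrative sketch: split "-D key=value" generic options from the
// remaining tool-specific arguments, the way a generic-options parser does.
public class GenericArgsSketch {
    public static Map<String, String> defs = new LinkedHashMap<>();

    public static String[] parse(String[] args) {
        defs.clear();
        List<String> remaining = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            if ("-D".equals(args[i]) && i + 1 < args.length) {
                String[] kv = args[++i].split("=", 2);    // "key=value"
                defs.put(kv[0], kv.length > 1 ? kv[1] : "");
            } else {
                remaining.add(args[i]);                   // left for the tool itself
            }
        }
        return remaining.toArray(new String[0]);
    }

    public static void main(String[] args) {
        String[] rest = parse(new String[] {"-D", "mapred.reduce.tasks=2", "-input", "in"});
        System.out.println(defs + " " + Arrays.toString(rest));
    }
}
```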
GenericsUtil - Class in org.apache.hadoop.util
Contains utility methods for dealing with Java Generics.
GenericsUtil() -
Constructor for class org.apache.hadoop.util.GenericsUtil
GenericWritable - Class in org.apache.hadoop.io
A wrapper for Writable instances.
GenericWritable() -
Constructor for class org.apache.hadoop.io.GenericWritable
get(String) -
Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property, or null if no such property exists.
get(String, String) -
Method in class org.apache.hadoop.conf.Configuration
Get the value of the name
property.
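The two get overloads above differ only in what happens when the property is missing: the one-argument form yields null, while the two-argument form yields the supplied default. In plain-Java terms (a toy stand-in, not Hadoop's Configuration):

```java
import java.util.*;

// Toy stand-in for a configuration object, showing the two lookup
// behaviours: null on a missing key vs. a caller-supplied default.
public class ConfSketch {
    private final Map<String, String> props = new HashMap<>();

    public void set(String name, String value) { props.put(name, value); }

    // One-argument form: null when the property is absent.
    public String get(String name) { return props.get(name); }

    // Two-argument form: the default when the property is absent.
    public String get(String name, String defaultValue) {
        return props.getOrDefault(name, defaultValue);
    }

    public static void main(String[] args) {
        ConfSketch conf = new ConfSketch();
        conf.set("io.file.buffer.size", "4096");
        System.out.println(conf.get("io.file.buffer.size") + " " + conf.get("missing", "default"));
    }
}
```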
get(int) -
Method in class org.apache.hadoop.dfs.LocatedBlocks
Get located block.
get(Configuration) -
Static method in class org.apache.hadoop.fs.FileSystem
Returns the configured filesystem implementation.
get(URI, Configuration) -
Static method in class org.apache.hadoop.fs.FileSystem
Returns the FileSystem for this URI's scheme and authority.
get(long, Writable) -
Method in class org.apache.hadoop.io.ArrayFile.Reader
Return the nth value in the file.
get() -
Method in class org.apache.hadoop.io.ArrayWritable
get() -
Method in class org.apache.hadoop.io.BooleanWritable
Returns the value of the BooleanWritable
get() -
Method in class org.apache.hadoop.io.BytesWritable
Get the data from the BytesWritable.
get() -
Method in class org.apache.hadoop.io.ByteWritable
Return the value of this ByteWritable.
get() -
Method in class org.apache.hadoop.io.DoubleWritable
get() -
Method in class org.apache.hadoop.io.FloatWritable
Return the value of this FloatWritable.
get() -
Method in class org.apache.hadoop.io.GenericWritable
Return the wrapped instance.
get() -
Method in class org.apache.hadoop.io.IntWritable
Return the value of this IntWritable.
get() -
Method in class org.apache.hadoop.io.LongWritable
Return the value of this LongWritable.
get(WritableComparable, Writable) -
Method in class org.apache.hadoop.io.MapFile.Reader
Return the value for the named key, or null if none exists.
get(Object) -
Method in class org.apache.hadoop.io.MapWritable
get() -
Static method in class org.apache.hadoop.io.NullWritable
Returns the single instance of this class.
get() -
Method in class org.apache.hadoop.io.ObjectWritable
Return the instance, or null if none.
get(Text) -
Method in class org.apache.hadoop.io.SequenceFile.Metadata
get(WritableComparable) -
Method in class org.apache.hadoop.io.SetFile.Reader
Read the matching key from a set into key.
get(Object) -
Method in class org.apache.hadoop.io.SortedMapWritable
get() -
Method in class org.apache.hadoop.io.TwoDArrayWritable
get() -
Method in class org.apache.hadoop.io.VIntWritable
Return the value of this VIntWritable.
get() -
Method in class org.apache.hadoop.io.VLongWritable
Return the value of this VLongWritable.
get(Class) -
Static method in class org.apache.hadoop.io.WritableComparator
Get a comparator for a WritableComparable
implementation.
get() -
Static method in class org.apache.hadoop.ipc.Server
Returns the server instance the current call is being processed under, or null.
get(int) -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
Get ith child InputSplit.
get(int) -
Method in class org.apache.hadoop.mapred.join.TupleWritable
Get ith Writable from Tuple.
get() -
Method in class org.apache.hadoop.metrics.util.MetricsIntValue
Get value
get() -
Method in class org.apache.hadoop.metrics.util.MetricsLongValue
Get value
get(DataInput) -
Static method in class org.apache.hadoop.record.BinaryRecordInput
Get a thread-local record input for the supplied DataInput.
get(DataOutput) -
Static method in class org.apache.hadoop.record.BinaryRecordOutput
Get a thread-local record output for the supplied DataOutput.
get() -
Method in class org.apache.hadoop.record.Buffer
Get the data from the Buffer.
get() -
Method in class org.apache.hadoop.util.Progress
Returns the overall progress of the root.
getAbsolutePath(String) -
Method in class org.apache.hadoop.streaming.PathFinder
Returns the full path name of this file if it is listed in the path.
getAccessKey() -
Method in class org.apache.hadoop.fs.s3.S3Credentials
getAddress(Configuration) -
Static method in class org.apache.hadoop.mapred.JobTracker
getAllJobs() -
Method in class org.apache.hadoop.mapred.JobClient
Get the jobs that are submitted.
getAllJobs() -
Method in class org.apache.hadoop.mapred.JobTracker
getAllStaticResolutions() -
Static method in class org.apache.hadoop.net.NetUtils
This is used to get all the resolutions that were added using
NetUtils.addStaticResolution(String, String).
getAllTasks() -
Method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Returns all map and reduce tasks.
getApproxChkSumLength(long) -
Static method in class org.apache.hadoop.fs.ChecksumFileSystem
getArchiveClassPaths(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Get the archive entries in classpath as an array of Path
getArchiveTimestamps(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Get the timestamps of the archives
getAssignedJobID() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
getAssignedTracker(String) -
Method in class org.apache.hadoop.mapred.JobTracker
Deprecated.
getAssignedTracker(TaskAttemptID) -
Method in class org.apache.hadoop.mapred.JobTracker
Get tracker name for a given task id.
getAttribute(String) -
Method in class org.apache.hadoop.mapred.StatusHttpServer
Get the value in the webapp context.
getAttribute(String) -
Method in class org.apache.hadoop.metrics.ContextFactory
Returns the value of the named attribute, or null if there is no
attribute of that name.
getAttribute(String) -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Convenience method for subclasses to access factory attributes.
getAttributeNames() -
Method in class org.apache.hadoop.metrics.ContextFactory
Returns the names of all the factory's attributes.
getAttributeTable(String) -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns an attribute-value map derived from the factory attributes
by finding all factory attributes that begin with
contextName.tableName.
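The attribute-table derivation described above is a prefix scan over a flat attribute map. A plain-Java sketch of that behaviour (names are illustrative; this is not the Hadoop implementation):

```java
import java.util.*;

// Sketch: derive an attribute-value table from a flat attribute map by
// selecting the keys that begin with "contextName.tableName." and
// stripping that prefix off.
public class AttributeTableSketch {
    public static Map<String, String> attributeTable(Map<String, String> attrs,
                                                     String contextName, String tableName) {
        String prefix = contextName + "." + tableName + ".";
        Map<String, String> table = new TreeMap<>();
        for (Map.Entry<String, String> e : attrs.entrySet()) {
            if (e.getKey().startsWith(prefix)) {
                table.put(e.getKey().substring(prefix.length()), e.getValue());
            }
        }
        return table;
    }

    public static void main(String[] args) {
        Map<String, String> attrs = new HashMap<>();
        attrs.put("ganglia.servers.host", "h1");
        attrs.put("ganglia.servers.port", "8649");
        attrs.put("ganglia.period", "10");
        System.out.println(attributeTable(attrs, "ganglia", "servers"));
    }
}
```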
getAvailable() -
Method in class org.apache.hadoop.fs.DF
getBasePathInJarOut(String) -
Method in class org.apache.hadoop.streaming.JarBuilder
getBaseRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
getBaseRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.lib.MultipleSequenceFileOutputFormat
getBaseRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
getBeginColumn() -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
getBeginLine() -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
getBlockIndex(BlockLocation[], long) -
Method in class org.apache.hadoop.mapred.FileInputFormat
getBlockInputStream(Block) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
Returns an input stream to read the contents of the specified block
getBlockInputStream(Block, long) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
Returns an input stream at specified offset of the specified block
getBlockLocations(String, long, long) -
Method in class org.apache.hadoop.dfs.NameNode
Get locations of the blocks of the specified file within the specified range.
getBlockMetaDataInfo(Block) -
Method in class org.apache.hadoop.dfs.DataNode
getBlockReport() -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
Returns the block report - the full list of blocks stored
getBlockReportAverageTime() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getBlockReportAverageTime() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Average time for Block Report Processing in last interval
getBlockReportMaxTime() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getBlockReportMaxTime() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
The Maximum Block Report Processing Time since reset was called
getBlockReportMinTime() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getBlockReportMinTime() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
The Minimum Block Report Processing Time since reset was called
getBlockReportNum() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getBlockReportNum() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Number of block Reports processed in the last interval
getBlockReportsAverageTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getBlockReportsAverageTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Average time for Block Reports Operation in last interval
getBlockReportsMaxTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getBlockReportsMaxTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Maximum Block Reports Operation Time since reset was called
getBlockReportsMinTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getBlockReportsMinTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Minimum Block Reports Operation Time since reset was called
getBlockReportsNum() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getBlockReportsNum() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of Block Reports sent in last interval
getBlocks(DatanodeInfo, long) -
Method in class org.apache.hadoop.dfs.NameNode
Return a list of blocks and their locations on a datanode whose total size is size.
getBlocks() -
Method in class org.apache.hadoop.fs.s3.INode
getBlockSize() -
Method in class org.apache.hadoop.fs.FileStatus
Get the block size of the file.
getBlockSize(Path) -
Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use getFileStatus() instead
getBlocksRead() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getBlocksRead() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of blocks read in the last interval
getBlocksRemoved() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getBlocksRemoved() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of blocks removed in the last interval
getBlocksReplicated() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getBlocksReplicated() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of blocks replicated in the last interval
getBlocksScheduled() -
Method in class org.apache.hadoop.dfs.DatanodeDescriptor
getBlocksTotal() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
Number of allocated blocks in the system
getBlocksVerified() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getBlocksVerified() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of blocks verified in the last interval
getBlocksWritten() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getBlocksWritten() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of blocks written in the last interval
getBlockVerificationFailures() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getBlockVerificationFailures() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of block verification failures in the last interval
getBoolean(String, boolean) -
Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as a boolean.
getBoundAntProperty(String, String) -
Static method in class org.apache.hadoop.streaming.StreamUtil
getBuildVersion() -
Method in class org.apache.hadoop.mapred.JobTracker
getBuildVersion() -
Static method in class org.apache.hadoop.util.VersionInfo
Returns the buildVersion which includes version,
revision, user and date.
getBytes() -
Method in class org.apache.hadoop.io.Text
Returns the raw bytes; however, only data up to Text.getLength() is valid.
getBytes() -
Method in class org.apache.hadoop.io.UTF8
Deprecated. The raw bytes.
getBytes(String) -
Static method in class org.apache.hadoop.io.UTF8
Deprecated. Convert a string to a UTF-8 encoded byte array.
getBytesPerChecksum() -
Method in class org.apache.hadoop.dfs.DataChecksum
getBytesPerSum() -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the bytes per checksum.
getBytesRead() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getBytesRead() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of bytes read in the last interval
getBytesRead() -
Method in class org.apache.hadoop.fs.FileSystem.Statistics
Get the total number of bytes read
getBytesRead() -
Method in interface org.apache.hadoop.io.compress.Compressor
Return number of uncompressed bytes input so far.
getBytesRead() -
Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
Return number of bytes given to this compressor since last reset.
getBytesRead() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Returns the total number of uncompressed bytes input so far.
getBytesRead() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
Returns the total number of uncompressed bytes input so far.
getBytesWritten() -
Method in class org.apache.hadoop.fs.FileSystem.Statistics
Get the total number of bytes written
getBytesWritten() -
Method in interface org.apache.hadoop.io.compress.Compressor
Return number of compressed bytes output so far.
getBytesWritten() -
Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
Return number of bytes consumed by callers of compress since last reset.
getBytesWritten() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
Returns the total number of compressed bytes output so far.
getBytesWritten() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
Returns the total number of compressed bytes output so far.
getCacheArchives(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Get cache archives set in the Configuration
getCacheFiles(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Get cache files set in the Configuration
getCallQueueLen() -
Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The number of rpc calls in the queue.
getCallQueueLen() -
Method in class org.apache.hadoop.ipc.Server
The number of rpc calls in the queue.
getCapacity() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.FSDatasetMBean
Returns total capacity (in bytes) of storage (used and unused)
getCapacity() -
Method in class org.apache.hadoop.dfs.DatanodeInfo
The raw capacity.
getCapacity() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem.DiskStatus
getCapacity() -
Method in class org.apache.hadoop.fs.DF
getCapacity() -
Method in class org.apache.hadoop.io.BytesWritable
Get the capacity, which is the maximum size that could be handled without
resizing the backing storage.
getCapacity() -
Method in class org.apache.hadoop.record.Buffer
Get the capacity, which is the maximum count that could be handled without
resizing the backing storage.
getCapacityRemaining() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
Free (unused) storage capacity
getCapacityTotal() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
Total storage capacity
getCapacityUsed() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
Used storage capacity
getCategory(List<List<Pentomino.ColumnName>>) -
Method in class org.apache.hadoop.examples.dancing.Pentomino
Find whether the solution has the x in the upper left quadrant, the
x-midline, the y-midline or in the center.
getChannel() -
Method in class org.apache.hadoop.net.SocketInputStream
Returns the underlying channel used by this input stream.
getChannel() -
Method in class org.apache.hadoop.net.SocketOutputStream
Returns the underlying channel used by this stream.
getChannelPosition(Block, FSDatasetInterface.BlockWriteStreams) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
Returns the current offset in the data stream.
getChecksumFile(Path) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the name of the checksum file associated with a file.
getChecksumFileLength(Path, long) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
Return the length of the checksum file given the size of the
actual file.
getChecksumHeaderSize() -
Static method in class org.apache.hadoop.dfs.DataChecksum
getChecksumLength(long, int) -
Static method in class org.apache.hadoop.fs.ChecksumFileSystem
Calculate the length of the checksum file in bytes.
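The calculation behind this method is simple arithmetic: one checksum per chunk of the data file, plus a fixed-size header. A standalone sketch (the 4-byte CRC width and 8-byte header here are assumptions for illustration, not values taken from the Hadoop source):

```java
// Sketch of a checksum-file length calculation: one CRC per data chunk
// plus a small fixed header. The CRC width and header size are
// illustrative assumptions, not Hadoop's actual constants.
public class ChecksumLengthSketch {
    static final int CHECKSUM_SIZE = 4; // assumed: 4-byte CRC per chunk
    static final int HEADER_SIZE = 8;   // assumed: fixed-size file header

    public static long checksumLength(long fileSize, int bytesPerSum) {
        long chunks = (fileSize + bytesPerSum - 1) / bytesPerSum; // ceiling division
        return HEADER_SIZE + chunks * CHECKSUM_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(checksumLength(1024, 512)); // two chunks plus header
    }
}
```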
getChecksumSize() -
Method in class org.apache.hadoop.dfs.DataChecksum
getChecksumType() -
Method in class org.apache.hadoop.dfs.DataChecksum
getChunkPosition(long) -
Method in class org.apache.hadoop.fs.FSInputChecker
Return position of beginning of chunk containing pos.
getClass(String, Class<?>) -
Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as a Class.
getClass(String, Class<? extends U>, Class<U>) -
Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as a Class implementing the interface
specified by xface.
getClass(byte) -
Method in class org.apache.hadoop.io.AbstractMapWritable
getClass(String, Configuration) -
Static method in class org.apache.hadoop.io.WritableName
Return the class for a name.
getClass(T) -
Static method in class org.apache.hadoop.util.GenericsUtil
Returns the Class object (of type Class<T>) of the argument of type T.
getClass(T) -
Static method in class org.apache.hadoop.util.ReflectionUtils
Return the correctly-typed Class
of the given object.
getClassByName(String) -
Method in class org.apache.hadoop.conf.Configuration
Load a class by name.
getClassByName(String) -
Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
getClassLoader() -
Method in class org.apache.hadoop.conf.Configuration
Get the ClassLoader
for this job.
getClassName() -
Method in exception org.apache.hadoop.ipc.RemoteException
getClientVersion() -
Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the client's preferred version
getClosest(WritableComparable, Writable) -
Method in class org.apache.hadoop.io.MapFile.Reader
Finds the record that is the closest match to the specified key.
getClosest(WritableComparable, Writable, boolean) -
Method in class org.apache.hadoop.io.MapFile.Reader
Finds the record that is the closest match to the specified key.
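Because MapFile keys are stored in sorted order, the closest-match lookup amounts to a search for the smallest key greater than or equal to the requested one (or, with the boolean flag of the three-argument overload, the largest key less than or equal to it). The search logic, sketched over a sorted in-memory array rather than Hadoop's on-disk index:

```java
// Sketch: find the closest match to a key in a sorted array -- the smallest
// element >= key, or (when before == true) the largest element <= key.
// Returns null when no element satisfies the condition.
public class ClosestSketch {
    public static String getClosest(String[] sortedKeys, String key, boolean before) {
        int lo = 0, hi = sortedKeys.length;          // binary-search the insertion point
        while (lo < hi) {
            int mid = (lo + hi) >>> 1;
            if (sortedKeys[mid].compareTo(key) < 0) lo = mid + 1;
            else hi = mid;
        }
        if (!before) {                               // smallest element >= key
            return lo < sortedKeys.length ? sortedKeys[lo] : null;
        }
        if (lo < sortedKeys.length && sortedKeys[lo].equals(key)) {
            return sortedKeys[lo];                   // exact match counts both ways
        }
        return lo > 0 ? sortedKeys[lo - 1] : null;   // largest element < key
    }

    public static void main(String[] args) {
        String[] keys = {"b", "d", "f"};
        System.out.println(getClosest(keys, "c", false) + " " + getClosest(keys, "c", true));
    }
}
```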
getClusterNick() -
Method in class org.apache.hadoop.streaming.StreamJob
getClusterStatus() -
Method in class org.apache.hadoop.mapred.JobClient
Get status information about the Map-Reduce cluster.
getClusterStatus() -
Method in class org.apache.hadoop.mapred.JobTracker
getCodec(Path) -
Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Find the relevant compression codec for the given file based on its
filename suffix.
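The codec resolution described above keys off the filename suffix. A minimal sketch of such a suffix table, with plain strings standing in for real codec classes (the registration API and codec names here are illustrative):

```java
import java.util.*;

// Sketch: resolve a codec by the longest registered filename suffix,
// mirroring a suffix-based codec lookup. Codec "names" stand in for
// actual codec instances.
public class CodecLookupSketch {
    private final Map<String, String> bySuffix = new HashMap<>();

    public void register(String suffix, String codecName) {
        bySuffix.put(suffix, codecName);
    }

    public String getCodec(String path) {
        String best = null;
        int bestLen = -1;
        for (Map.Entry<String, String> e : bySuffix.entrySet()) {
            // prefer the longest matching suffix (e.g. ".tar.gz" over ".gz")
            if (path.endsWith(e.getKey()) && e.getKey().length() > bestLen) {
                best = e.getValue();
                bestLen = e.getKey().length();
            }
        }
        return best; // null when no registered suffix matches
    }

    public static void main(String[] args) {
        CodecLookupSketch codecs = new CodecLookupSketch();
        codecs.register(".gz", "gzip");
        codecs.register(".lzo", "lzo");
        System.out.println(codecs.getCodec("part-00000.gz"));
    }
}
```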
getCodecClasses(Configuration) -
Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
Get the list of codecs listed in the configuration
getColumnName(int) -
Method in class org.apache.hadoop.examples.dancing.DancingLinks
Get the name of a given column as a string
getCombineOnceOnly() -
Method in class org.apache.hadoop.mapred.JobConf
Deprecated.
getCombinerClass() -
Method in class org.apache.hadoop.mapred.JobConf
Get the user-defined combiner class used to combine map-outputs
before being sent to the reducers.
getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
getCombinerOutput() -
Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
getCombinerOutput() -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
getCommandLine() -
Method in class org.apache.hadoop.util.GenericOptionsParser
Returns the commons-cli CommandLine object to process the parsed arguments.
getCommandLineConfig() -
Static method in class org.apache.hadoop.mapred.JobClient
Return the command line configuration.
getCommandName() -
Method in class org.apache.hadoop.fs.shell.Command
Return the command's name, excluding the leading '-' character.
getCommandName() -
Method in class org.apache.hadoop.fs.shell.Count
getComparator() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Return comparator defining the ordering for RecordReaders in this
composite.
getCompressionCodec() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the compression codec of data in this file.
getCompressionCodec() -
Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the compression codec of data in this file.
getCompressionType(Configuration) -
Static method in class org.apache.hadoop.io.SequenceFile
Deprecated. Use JobConf.getMapOutputCompressionType() to get
SequenceFile.CompressionType for intermediate map-outputs, or
SequenceFileOutputFormat.getOutputCompressionType(org.apache.hadoop.mapred.JobConf)
to get SequenceFile.CompressionType for job-outputs.
getCompressMapOutput() -
Method in class org.apache.hadoop.mapred.JobConf
Are the outputs of the maps to be compressed?
getCompressor(CompressionCodec) -
Static method in class org.apache.hadoop.io.compress.CodecPool
Get a Compressor for the given CompressionCodec from the pool, or a new one.
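CodecPool.getCompressor reuses a previously returned Compressor when one is available and creates a new one otherwise. That borrow-or-create pattern, sketched generically so it runs without Hadoop (PoolSketch and its method names are illustrative; release plays the role of CodecPool's returnCompressor):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Sketch of the borrow-or-create pattern behind a codec pool: hand back a
// previously released instance when one is available, otherwise build a
// new one with the supplied factory.
public class PoolSketch<T> {
    private final Deque<T> pool = new ArrayDeque<>();
    private final Supplier<T> factory;

    public PoolSketch(Supplier<T> factory) { this.factory = factory; }

    // Analogous to CodecPool.getCompressor: pooled instance or a new one.
    public T get() {
        T t = pool.poll();
        return t != null ? t : factory.get();
    }

    // Analogous to CodecPool.returnCompressor: make the instance reusable.
    public void release(T t) {
        pool.push(t);
    }

    public static void main(String[] args) {
        PoolSketch<StringBuilder> p = new PoolSketch<>(StringBuilder::new);
        StringBuilder b = p.get();
        p.release(b);
        System.out.println(p.get() == b); // the released instance is reused
    }
}
```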
getCompressorType() -
Method in interface org.apache.hadoop.io.compress.CompressionCodec
Get the type of Compressor needed by this CompressionCodec.
getCompressorType() -
Method in class org.apache.hadoop.io.compress.DefaultCodec
getCompressorType() -
Method in class org.apache.hadoop.io.compress.GzipCodec
getCompressorType() -
Method in class org.apache.hadoop.io.compress.LzoCodec
getCompressOutput(JobConf) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
Is the job output compressed?
getCompressOutput(JobConf) -
Static method in class org.apache.hadoop.mapred.OutputFormatBase
Deprecated. Is the job output compressed?
getConf() -
Method in interface org.apache.hadoop.conf.Configurable
Return the configuration used by this object.
getConf() -
Method in class org.apache.hadoop.conf.Configured
getConf() -
Method in class org.apache.hadoop.dfs.Balancer
Return this balancer's configuration.
getConf() -
Method in class org.apache.hadoop.fs.FilterFileSystem
getConf() -
Method in class org.apache.hadoop.io.AbstractMapWritable
getConf() -
Method in class org.apache.hadoop.io.compress.DefaultCodec
getConf() -
Method in class org.apache.hadoop.io.compress.LzoCodec
getConf() -
Method in class org.apache.hadoop.io.GenericWritable
getConf() -
Method in class org.apache.hadoop.io.ObjectWritable
getConf() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Return the configuration used by this object.
getConf() -
Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
getConf() -
Method in class org.apache.hadoop.net.ScriptBasedMapping
getConf() -
Method in class org.apache.hadoop.net.SocksSocketFactory
getConfiguration() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the underlying configuration object.
getConfResourceAsInputStream(String) -
Method in class org.apache.hadoop.conf.Configuration
Get an input stream attached to the configuration resource with the given name.
getConfResourceAsReader(String) -
Method in class org.apache.hadoop.conf.Configuration
Get a Reader attached to the configuration resource with the given name.
getConnectAddress(Server) -
Static method in class org.apache.hadoop.net.NetUtils
Returns InetSocketAddress that a client can use to
connect to the server.
getContentSummary(Path) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return the ContentSummary of a given Path.
getContentSummary(String) -
Method in class org.apache.hadoop.dfs.NameNode
Get ContentSummary
rooted at the specified directory.
getContentSummary(Path) -
Method in class org.apache.hadoop.fs.FileSystem
Return the ContentSummary of a given Path.
getContext(String) -
Method in class org.apache.hadoop.metrics.ContextFactory
Returns the named MetricsContext instance, constructing it if necessary
using the factory's current configuration attributes.
getContext(String) -
Static method in class org.apache.hadoop.metrics.MetricsUtil
Utility method to return the named context.
getContext() -
Method in class org.apache.hadoop.streaming.PipeMapRed
getContextFactory() -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the factory by which this context was created.
getContextName() -
Method in interface org.apache.hadoop.metrics.MetricsContext
Returns the context name.
getContextName() -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the context name.
getCopyBlockOpAverageTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getCopyBlockOpAverageTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Average time for CopyBlock Operation in last interval
getCopyBlockOpMaxTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getCopyBlockOpMaxTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Maximum CopyBlock Operation Time since reset was called
getCopyBlockOpMinTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getCopyBlockOpMinTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Minimum CopyBlock Operation Time since reset was called
getCopyBlockOpNum() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getCopyBlockOpNum() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of CopyBlock Operation in last interval
getCorruptFiles() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the number of corrupted files.
getCount() -
Method in class org.apache.hadoop.record.Buffer
Get the current count of the buffer.
getCounter() -
Method in class org.apache.hadoop.mapred.Counters.Counter
What is the current value of this counter?
getCounter(Enum) -
Method in class org.apache.hadoop.mapred.Counters
Returns current value of the specified counter, or 0 if the counter
does not exist.
getCounter(String) -
Method in class org.apache.hadoop.mapred.Counters.Group
Returns the value of the specified counter, or 0 if the counter does
not exist.
getCounter(int, String) -
Method in class org.apache.hadoop.mapred.Counters.Group
Deprecated. Use Counters.Group.getCounter(String) instead.
getCounterForName(String) -
Method in class org.apache.hadoop.mapred.Counters.Group
Get the counter for the given name and create it if it doesn't exist.
getCounters() -
Method in interface org.apache.hadoop.mapred.RunningJob
Gets the counters for this job.
getCounters() -
Method in class org.apache.hadoop.mapred.TaskReport
A table of counters.
getCurrentSegmentGeneration(Directory) -
Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
Get the generation (N) of the current segments_N file in the directory.
getCurrentSegmentGeneration(String[]) -
Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
Get the generation (N) of the current segments_N file from a list of
files.
getCurrentSplit(JobConf) -
Static method in class org.apache.hadoop.streaming.StreamUtil
getCurrentTrashDir() -
Method in class org.apache.hadoop.fs.FsShell
Returns the Trash object associated with this shell.
getCurrentUGI() -
Static method in class org.apache.hadoop.security.UserGroupInformation
getCurrentValue(Writable) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
Get the 'value' corresponding to the last read 'key'.
getCurrentValue(Object) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
Get the 'value' corresponding to the last read 'key'.
getCurrentValue(V) -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
getData() -
Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
getData() -
Method in class org.apache.hadoop.io.DataInputBuffer
getData() -
Method in class org.apache.hadoop.io.DataOutputBuffer
Returns the current contents of the buffer.
getData() -
Method in class org.apache.hadoop.io.OutputBuffer
Returns the current contents of the buffer.
getDataNode() -
Static method in class org.apache.hadoop.dfs.DataNode
Return the DataNode object
getDatanodeReport() -
Method in class org.apache.hadoop.dfs.DatanodeInfo
A formatted string for reporting the status of the DataNode.
getDatanodeReport(FSConstants.DatanodeReportType) -
Method in class org.apache.hadoop.dfs.NameNode
getDataNodeStats() -
Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
Return statistics for each datanode.
getDataNodeStats() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return statistics for each datanode.
getDate() -
Static method in class org.apache.hadoop.util.VersionInfo
The date that Hadoop was compiled.
getDeclaredClass() -
Method in class org.apache.hadoop.io.ObjectWritable
Return the class this is meant to be.
getDecompressor(CompressionCodec) -
Static method in class org.apache.hadoop.io.compress.CodecPool
Get a Decompressor
for the given CompressionCodec
from the
pool or a new one.
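A typical borrow-and-release cycle with `CodecPool` might look like the following sketch. `CodecPool.returnDecompressor(Decompressor)` is assumed to be the matching release call (it lives in the same class but is not shown in this index), and `DefaultCodec` is assumed to need its `Configuration` set before use:

```java
import java.io.IOException;
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CodecPool;
import org.apache.hadoop.io.compress.CompressionInputStream;
import org.apache.hadoop.io.compress.Decompressor;
import org.apache.hadoop.io.compress.DefaultCodec;

// Sketch: borrow a Decompressor from the pool, use it, return it.
public class PooledDecompress {
  public static CompressionInputStream open(InputStream raw) throws IOException {
    DefaultCodec codec = new DefaultCodec();
    codec.setConf(new Configuration()); // DefaultCodec is Configurable
    Decompressor decompressor = CodecPool.getDecompressor(codec);
    // createInputStream(InputStream, Decompressor) is listed in this index.
    return codec.createInputStream(raw, decompressor);
    // When the stream is closed, the caller should release the decompressor:
    // CodecPool.returnDecompressor(decompressor);
  }
}
```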
getDecompressorType() -
Method in interface org.apache.hadoop.io.compress.CompressionCodec
Get the type of Decompressor needed by this CompressionCodec.
getDecompressorType() -
Method in class org.apache.hadoop.io.compress.DefaultCodec
getDecompressorType() -
Method in class org.apache.hadoop.io.compress.GzipCodec
getDecompressorType() -
Method in class org.apache.hadoop.io.compress.LzoCodec
getDefault() -
Static method in class org.apache.hadoop.fs.permission.FsPermission
Get the default permission.
getDefaultBlockSize() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
getDefaultBlockSize() -
Method in class org.apache.hadoop.fs.FileSystem
Return the number of bytes that large input files should
optimally be split into to minimize I/O time.
getDefaultBlockSize() -
Method in class org.apache.hadoop.fs.FilterFileSystem
Return the number of bytes that large input files should
optimally be split into to minimize I/O time.
getDefaultBlockSize() -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
getDefaultExtension() -
Method in interface org.apache.hadoop.io.compress.CompressionCodec
Get the default filename extension for this kind of compression.
getDefaultExtension() -
Method in class org.apache.hadoop.io.compress.DefaultCodec
getDefaultExtension() -
Method in class org.apache.hadoop.io.compress.GzipCodec
getDefaultExtension() -
Method in class org.apache.hadoop.io.compress.LzoCodec
Get the default filename extension for this kind of compression.
getDefaultHost(String, String) -
Static method in class org.apache.hadoop.net.DNS
Returns the default (first) host name associated by the provided
nameserver with the address bound to the specified network interface
getDefaultHost(String) -
Static method in class org.apache.hadoop.net.DNS
Returns the default (first) host name associated by the default
nameserver with the address bound to the specified network interface
getDefaultIP(String) -
Static method in class org.apache.hadoop.net.DNS
Returns the first available IP address associated with the provided
network interface
getDefaultMaps() -
Method in class org.apache.hadoop.mapred.JobClient
Get status information about the max available Maps in the cluster.
getDefaultReduces() -
Method in class org.apache.hadoop.mapred.JobClient
Get status information about the max available Reduces in the cluster.
getDefaultReplication() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
getDefaultReplication() -
Method in class org.apache.hadoop.fs.FileSystem
Get the default replication.
getDefaultReplication() -
Method in class org.apache.hadoop.fs.FilterFileSystem
Get the default replication.
getDefaultReplication() -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
getDefaultSocketFactory(Configuration) -
Static method in class org.apache.hadoop.net.NetUtils
Get the default socket factory as specified by the configuration
parameter hadoop.rpc.socket.factory.default
getDefaultUri(Configuration) -
Static method in class org.apache.hadoop.fs.FileSystem
Get the default filesystem URI from a configuration.
getDelegate() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Obtain an iterator over the child RRs apropos of the value type
ultimately emitted from this join.
getDelegate() -
Method in class org.apache.hadoop.mapred.join.JoinRecordReader
Return an iterator wrapping the JoinCollector.
getDelegate() -
Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
Return an iterator returning a single value from the tuple.
getDependingJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
getDescription() -
Method in interface org.apache.hadoop.dfs.Upgradeable
Description of the upgrade object for displaying.
getDeserializer(Class<Serializable>) -
Method in class org.apache.hadoop.io.serializer.JavaSerialization
getDeserializer(Class<T>) -
Method in interface org.apache.hadoop.io.serializer.Serialization
getDeserializer(Class<T>) -
Method in class org.apache.hadoop.io.serializer.SerializationFactory
getDeserializer(Class<Writable>) -
Method in class org.apache.hadoop.io.serializer.WritableSerialization
getDfsUsed() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.FSDatasetMBean
Returns the total space (in bytes) used by dfs datanode
getDfsUsed() -
Method in class org.apache.hadoop.dfs.DatanodeInfo
The used space by the data node.
getDfsUsed() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem.DiskStatus
getDiagnostics() -
Method in class org.apache.hadoop.mapred.TaskReport
A list of error messages.
getDigest() -
Method in class org.apache.hadoop.io.MD5Hash
Returns the digest bytes.
getDirectory() -
Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
Get the ram directory of the intermediate form.
getDirectory() -
Method in class org.apache.hadoop.contrib.index.mapred.Shard
Get the directory where this shard resides.
getDirectoryCount() -
Method in class org.apache.hadoop.fs.ContentSummary
getDirPath() -
Method in class org.apache.hadoop.fs.DF
getDirPath() -
Method in class org.apache.hadoop.fs.DU
getDiskStatus() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return the disk usage of the filesystem, including total capacity,
used space, and remaining space.
getDisplayName() -
Method in class org.apache.hadoop.mapred.Counters.Counter
Get the name of the counter.
getDisplayName() -
Method in class org.apache.hadoop.mapred.Counters.Group
Returns localized name of the group.
getDistance(Node, Node) -
Method in class org.apache.hadoop.net.NetworkTopology
Return the distance between two nodes.
It is assumed that the distance from one node to its parent is 1.
The distance between two nodes is calculated by summing up their distances
to their closest common ancestor.
getDistributionPolicyClass() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the distribution policy class.
getDocument() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Get the document.
getDocumentAnalyzerClass() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the analyzer class.
getDoubleValue(Object) -
Method in class org.apache.hadoop.contrib.utils.join.JobBase
getDU(File) -
Static method in class org.apache.hadoop.fs.FileUtil
Takes an input dir and returns the du on that local directory.
getEditLogSize() -
Method in class org.apache.hadoop.dfs.NameNode
Returns the size of the current edit log.
getElementTypeID() -
Method in class org.apache.hadoop.record.meta.VectorTypeID
getEmptier() -
Method in class org.apache.hadoop.fs.Trash
Return a Runnable
that periodically empties the trash of all
users, intended to be run by the superuser.
getEndColumn() -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
getEndLine() -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
getEntry(MapFile.Reader[], Partitioner<K, V>, K, V) -
Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
Get an entry from output generated by this class.
getError() -
Static method in class org.apache.hadoop.metrics.jvm.EventCounter
getEventId() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns event Id.
getExceptions() -
Method in exception org.apache.hadoop.io.MultipleIOException
getExcessiveReplicas() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the number of over-replicated blocks.
getExcludedHosts() -
Method in class org.apache.hadoop.util.HostsFileReader
getExecString() -
Method in class org.apache.hadoop.fs.DF
getExecString() -
Method in class org.apache.hadoop.fs.DU
getExecString() -
Method in class org.apache.hadoop.util.Shell
Return an array containing the command name and its parameters.
getExecString() -
Method in class org.apache.hadoop.util.Shell.ShellCommandExecutor
getExecutable(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
Get the URI of the application's executable.
getExitCode() -
Method in exception org.apache.hadoop.util.Shell.ExitCodeException
getExitCode() -
Method in class org.apache.hadoop.util.Shell
get the exit code
getFactor() -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
Get the number of streams to merge at once.
getFactory(Class) -
Static method in class org.apache.hadoop.io.WritableFactories
Define a factory for a class.
getFactory() -
Static method in class org.apache.hadoop.metrics.ContextFactory
Returns the singleton ContextFactory instance, constructing it if
necessary.
getFailedJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
getFatal() -
Static method in class org.apache.hadoop.metrics.jvm.EventCounter
getFieldID() -
Method in class org.apache.hadoop.record.meta.FieldTypeInfo
get the field's id (name)
getFieldTypeInfos() -
Method in class org.apache.hadoop.record.meta.RecordTypeInfo
Return a collection of field type infos
getFieldTypeInfos() -
Method in class org.apache.hadoop.record.meta.StructTypeID
getFile(String, String) -
Method in class org.apache.hadoop.conf.Configuration
Get a local file name under a directory named in dirsProp with
the given path.
getFileBlockLocations(FileStatus, long, long) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
getFileBlockLocations(Path, long, long) -
Method in class org.apache.hadoop.fs.FileSystem
Deprecated. use FileSystem.getFileBlockLocations(FileStatus, long, long)
getFileBlockLocations(FileStatus, long, long) -
Method in class org.apache.hadoop.fs.FileSystem
Return an array containing hostnames, offset and size of
portions of the given file.
getFileBlockLocations(Path, long, long) -
Method in class org.apache.hadoop.fs.FilterFileSystem
Return an array containing hostnames, offset and size of
portions of the given file.
getFileBlockLocations(FileStatus, long, long) -
Method in class org.apache.hadoop.fs.FilterFileSystem
getFileBlockLocations(FileStatus, long, long) -
Method in class org.apache.hadoop.fs.HarFileSystem
get block locations from the underlying fs
getFileBlockLocations(FileStatus, long, long) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Return null if the file doesn't exist; otherwise, get the
locations of the various chunks of the file from KFS.
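The non-deprecated `FileStatus`-based overload of `getFileBlockLocations` pairs naturally with `getFileStatus` and the `BlockLocation` accessors listed elsewhere in this index. A sketch, not a definitive recipe:

```java
import java.io.IOException;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch: list the hosts serving each block of a file.
public class BlockHosts {
  public static void print(Configuration conf, Path file) throws IOException {
    FileSystem fs = file.getFileSystem(conf);   // Path.getFileSystem, listed above
    FileStatus status = fs.getFileStatus(file);
    BlockLocation[] blocks =
        fs.getFileBlockLocations(status, 0, status.getLen());
    for (BlockLocation block : blocks) {
      System.out.println(block.getLength() + " bytes on "
          + Arrays.toString(block.getHosts()));
    }
  }
}
```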
getFileClassPaths(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Get the file entries in classpath as an array of Path
getFileCount() -
Method in class org.apache.hadoop.fs.ContentSummary
getFileInfo(String) -
Method in class org.apache.hadoop.dfs.NameNode
Get the file info for a specific file.
getFileLength() -
Method in class org.apache.hadoop.dfs.LocatedBlocks
getFileName() -
Method in class org.apache.hadoop.metrics.file.FileContext
Returns the configured file name, or null.
getFiles(PathFilter) -
Method in class org.apache.hadoop.fs.InMemoryFileSystem
Deprecated.
getFileStatus(Path) -
Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
Returns the stat information about the file.
getFileStatus(Path) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
Returns the stat information about the file.
getFileStatus(Path) -
Method in class org.apache.hadoop.dfs.HftpFileSystem
getFileStatus(Path) -
Method in class org.apache.hadoop.fs.FileSystem
Return a file status object that represents the path.
getFileStatus(Path) -
Method in class org.apache.hadoop.fs.FilterFileSystem
Get file status.
getFileStatus(Path) -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
getFileStatus(Path) -
Method in class org.apache.hadoop.fs.HarFileSystem
return the filestatus of files in har archive.
getFileStatus(Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
getFileStatus(Path) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
getFileStatus(Path) -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
FileStatus for S3 file systems.
getFileStatus(Path) -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
getFilesTotal() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
Total number of files and directories
getFilesystem() -
Method in class org.apache.hadoop.fs.DF
getFileSystem(Configuration) -
Method in class org.apache.hadoop.fs.Path
Return the FileSystem that owns this Path.
getFilesystemName() -
Method in class org.apache.hadoop.mapred.JobTracker
Grab the local fs name
getFileTimestamps(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Get the timestamps of the files
getFileType() -
Method in class org.apache.hadoop.fs.s3.INode
getFinishTime() -
Method in class org.apache.hadoop.mapred.TaskReport
Get finish time of task.
getFlippable() -
Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
getFloat(String, float) -
Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as a float.
getFormattedTimeWithDiff(DateFormat, long, long) -
Static method in class org.apache.hadoop.util.StringUtils
Formats time in ms and appends difference (finishTime - startTime)
as returned by formatTimeDiff().
getFs() -
Method in class org.apache.hadoop.mapred.JobClient
Get a filesystem handle.
getFSDataset() -
Method in class org.apache.hadoop.dfs.DataNode
This method is used for testing.
getFSImageLoadTime() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getFSImageLoadTime() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Time spent loading the FS Image at startup
getFsImageName() -
Method in class org.apache.hadoop.dfs.NameNode
Returns the name of the fsImage file
getFsImageNameCheckpoint() -
Method in class org.apache.hadoop.dfs.NameNode
Returns the name of the fsImage file uploaded by periodic
checkpointing
getFSSize() -
Method in class org.apache.hadoop.fs.InMemoryFileSystem
Deprecated.
getFSState() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
The state of the file system: Safemode or Operational
getGeneration() -
Method in class org.apache.hadoop.contrib.index.mapred.Shard
Get the generation of the Lucene instance.
getGET_PERMISSION_COMMAND() -
Static method in class org.apache.hadoop.util.Shell
Return a Unix command to get permission information.
getGroup() -
Method in class org.apache.hadoop.fs.FileStatus
Get the group associated with the file.
getGroup(String) -
Method in class org.apache.hadoop.mapred.Counters
Returns the named counter group, or an empty group if there is none
with the specified name.
getGroupAction() -
Method in class org.apache.hadoop.fs.permission.FsPermission
Return group FsAction.
getGroupName() -
Method in class org.apache.hadoop.fs.permission.PermissionStatus
Return group name
getGroupNames() -
Method in class org.apache.hadoop.mapred.Counters
Returns the names of all counter classes.
getGroupNames() -
Method in class org.apache.hadoop.security.UnixUserGroupInformation
Return an array of group names
getGroupNames() -
Method in class org.apache.hadoop.security.UserGroupInformation
Get the names of the groups that the user belongs to
getGROUPS_COMMAND() -
Static method in class org.apache.hadoop.util.Shell
Return a Unix command to get the current user's groups list.
getHadoopClientHome() -
Method in class org.apache.hadoop.streaming.StreamJob
getHarHash(Path) -
Static method in class org.apache.hadoop.fs.HarFileSystem
Return the hash of the path p inside
the filesystem.
getHarVersion() -
Method in class org.apache.hadoop.fs.HarFileSystem
getHeader() -
Method in class org.apache.hadoop.dfs.DataChecksum
getHeader(boolean) -
Static method in class org.apache.hadoop.fs.ContentSummary
Return the header of the output.
getHeartbeatsAverageTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getHeartbeatsAverageTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Average time for Heartbeat Operation in last interval
getHeartbeatsMaxTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getHeartbeatsMaxTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Maximum Heartbeat Operation Time since reset was called
getHeartbeatsMinTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getHeartbeatsMinTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Minimum Heartbeat Operation Time since reset was called
getHeartbeatsNum() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getHeartbeatsNum() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of Heartbeat operations in the last interval
getHomeDirectory() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return the current user's home directory in this filesystem.
getHomeDirectory() -
Method in class org.apache.hadoop.fs.FileSystem
Return the current user's home directory in this filesystem.
getHomeDirectory() -
Method in class org.apache.hadoop.fs.FilterFileSystem
getHomeDirectory() -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
getHomeDirectory() -
Method in class org.apache.hadoop.fs.HarFileSystem
return the top level archive path.
getHomeDirectory() -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
getHost() -
Method in class org.apache.hadoop.dfs.DatanodeID
getHost() -
Method in class org.apache.hadoop.streaming.Environment
getHostName() -
Method in class org.apache.hadoop.dfs.DatanodeInfo
getHostname() -
Static method in class org.apache.hadoop.util.StringUtils
Return hostname without throwing exception.
getHosts() -
Method in class org.apache.hadoop.fs.BlockLocation
Get the list of hosts (hostname) hosting this block
getHosts(String, String) -
Static method in class org.apache.hadoop.net.DNS
Returns all the host names associated by the provided nameserver with the
address bound to the specified network interface
getHosts(String) -
Static method in class org.apache.hadoop.net.DNS
Returns all the host names associated by the default nameserver with the
address bound to the specified network interface
getHosts() -
Method in class org.apache.hadoop.util.HostsFileReader
getId() -
Method in class org.apache.hadoop.fs.s3.Block
getId(Class) -
Method in class org.apache.hadoop.io.AbstractMapWritable
getId() -
Method in class org.apache.hadoop.mapred.ID
returns the int which represents the identifier
getID() -
Method in interface org.apache.hadoop.mapred.RunningJob
Get the job identifier.
GetImage() -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
GetImageServlet - Class in org.apache.hadoop.dfs
This class is used in Namesystem's jetty to retrieve a file.
GetImageServlet() -
Constructor for class org.apache.hadoop.dfs.GetImageServlet
getIndexInputFormatClass() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the index input format class.
getIndexInterval() -
Method in class org.apache.hadoop.io.MapFile.Writer
The number of entries that are added before an index entry is added.
getIndexMaxFieldLength() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the max field length for a Lucene instance.
getIndexMaxNumSegments() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the max number of segments for a Lucene instance.
getIndexShards() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the string representation of a number of shards.
getIndexShards(IndexUpdateConfiguration) -
Static method in class org.apache.hadoop.contrib.index.mapred.Shard
getIndexUpdaterClass() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the index updater class.
getIndexUseCompoundFile() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Check whether to use the compound file format for a Lucene instance.
getInfo() -
Static method in class org.apache.hadoop.metrics.jvm.EventCounter
getInfoPort() -
Method in class org.apache.hadoop.dfs.DatanodeID
getInfoPort() -
Method in class org.apache.hadoop.mapred.JobTracker
getInodeLimitText() -
Method in class org.apache.hadoop.dfs.JspHelper
getInputFileBasedOutputFileName(JobConf, String) -
Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
Generate the output file name based on a given name and the input file name.
getInputFormat() -
Method in class org.apache.hadoop.mapred.JobConf
Get the InputFormat
implementation for the map-reduce job,
defaults to TextInputFormat
if not specified explicitly.
getInputPathFilter(JobConf) -
Static method in class org.apache.hadoop.mapred.FileInputFormat
Get a PathFilter instance of the filter set for the input paths.
getInputPaths(JobConf) -
Static method in class org.apache.hadoop.mapred.FileInputFormat
Get the list of input Path
s for the map-reduce job.
getInputPaths() -
Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Use FileInputFormat.getInputPaths(JobConf)
getInputSplit() -
Method in interface org.apache.hadoop.mapred.Reporter
Get the InputSplit
object for a map.
getInputStream(Socket) -
Static method in class org.apache.hadoop.net.NetUtils
Same as getInputStream(socket, socket.getSoTimeout()).
From the documentation for NetUtils.getInputStream(Socket, long):
getInputStream(Socket, long) -
Static method in class org.apache.hadoop.net.NetUtils
Returns InputStream for the socket.
getInt(String, int) -
Method in class org.apache.hadoop.conf.Configuration
Get the value of the name property as an int.
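The typed `Configuration` getters return the supplied default when the property is unset. A minimal sketch; the property names below are hypothetical:

```java
import org.apache.hadoop.conf.Configuration;

// Sketch: typed reads from a Configuration, falling back to the
// given defaults when the property is absent. Names are illustrative.
public class ConfRead {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    int retries = conf.getInt("example.client.retries", 3);
    float threshold = conf.getFloat("example.spill.threshold", 0.8f);
    System.out.println(retries + " " + threshold);
  }
}
```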
getInterfaceName() -
Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the interface name
getIOSortMB() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the IO sort space in MB.
getIpcPort() -
Method in class org.apache.hadoop.dfs.DatanodeID
getIPs(String) -
Static method in class org.apache.hadoop.net.DNS
Returns all the IPs associated with the provided interface, if any, in
textual form.
getIsJavaMapper(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
Check whether the job is using a Java Mapper.
getIsJavaRecordReader(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
Check whether the job is using a Java RecordReader
getIsJavaRecordWriter(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
Will the reduce use a Java RecordWriter?
getIsJavaReducer(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
Check whether the job is using a Java Reducer.
getJar() -
Method in class org.apache.hadoop.mapred.JobConf
Get the user jar for the map-reduce job.
getJob(JobID) -
Method in class org.apache.hadoop.mapred.JobClient
Get a RunningJob
object to track an ongoing job.
getJob(String) -
Method in class org.apache.hadoop.mapred.JobClient
Deprecated. Applications should rather use JobClient.getJob(JobID).
getJob(String) -
Method in class org.apache.hadoop.mapred.JobTracker
Deprecated.
getJob(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
getJobClient() -
Method in class org.apache.hadoop.mapred.TaskTracker
The connection to the JobTracker, used by the TaskRunner
for locating remote files.
getJobConf() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
getJobCounters(String) -
Method in class org.apache.hadoop.mapred.JobTracker
Deprecated.
getJobCounters(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
getJobEndNotificationURI() -
Method in class org.apache.hadoop.mapred.JobConf
Get the uri to be invoked in-order to send a notification after the job
has completed (success/failure).
getJobFile() -
Method in class org.apache.hadoop.mapred.JobProfile
Get the configuration file for the job.
getJobFile() -
Method in interface org.apache.hadoop.mapred.RunningJob
Get the path of the submitted job configuration.
getJobID() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
getJobID() -
Method in class org.apache.hadoop.mapred.JobProfile
Get the job id.
getJobId() -
Method in class org.apache.hadoop.mapred.JobProfile
Deprecated. use getJobID() instead
getJobId() -
Method in class org.apache.hadoop.mapred.JobStatus
Deprecated. use getJobID instead
getJobID() -
Method in class org.apache.hadoop.mapred.JobStatus
getJobID() -
Method in interface org.apache.hadoop.mapred.RunningJob
Deprecated. This method is deprecated and will be removed. Applications should
rather use RunningJob.getID().
getJobID() -
Method in class org.apache.hadoop.mapred.TaskAttemptID
Returns the JobID
object that this task attempt belongs to
getJobID() -
Method in class org.apache.hadoop.mapred.TaskID
Returns the JobID
object that this tip belongs to
getJobIDsPattern(String, Integer) -
Static method in class org.apache.hadoop.mapred.JobID
Returns a regex pattern which matches job IDs.
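Job IDs have the textual form `job_<jtIdentifier>_<sequence>`, e.g. `job_200707121733_0003`. The hand-written pattern below only illustrates the shape a fully wildcarded call such as `getJobIDsPattern(null, null)` must match; it is not the library's actual output:

```java
import java.util.regex.Pattern;

// Illustrative sketch of the job-ID textual format; not the pattern
// actually produced by JobID.getJobIDsPattern.
public class JobIdPattern {
  static final Pattern JOB_ID = Pattern.compile("job_[^_]+_[0-9]+");

  public static boolean looksLikeJobId(String s) {
    return JOB_ID.matcher(s).matches();
  }

  public static void main(String[] args) {
    System.out.println(looksLikeJobId("job_200707121733_0003"));
    System.out.println(looksLikeJobId("task_200707121733_0003_m_000001"));
  }
}
```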
getJobLocalDir() -
Method in class org.apache.hadoop.mapred.JobConf
Get job-specific shared directory for use as scratch space
getJobName() -
Method in class org.apache.hadoop.mapred.JobConf
Get the user-specified job name.
getJobName() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
getJobName() -
Method in class org.apache.hadoop.mapred.JobProfile
Get the user-specified job name.
getJobName() -
Method in interface org.apache.hadoop.mapred.RunningJob
Get the name of the job.
getJobPriority() -
Method in class org.apache.hadoop.mapred.JobConf
Get the JobPriority
for this job.
getJobProfile(String) -
Method in class org.apache.hadoop.mapred.JobTracker
Deprecated.
getJobProfile(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
getJobStatus(String) -
Method in class org.apache.hadoop.mapred.JobTracker
Deprecated.
getJobStatus(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
getJobTrackerHostPort() -
Method in class org.apache.hadoop.streaming.StreamJob
getJobTrackerMachine() -
Method in class org.apache.hadoop.mapred.JobTracker
getJobTrackerState() -
Method in class org.apache.hadoop.mapred.ClusterStatus
Get the current state of the JobTracker,
as JobTracker.State.
getJournalSyncAverageTime() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getJournalSyncAverageTime() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Average time for Journal Sync in last interval
getJournalSyncMaxTime() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getJournalSyncMaxTime() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
The Maximum Journal Sync Time since reset was called
getJournalSyncMinTime() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getJournalSyncMinTime() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
The Minimum Journal Sync Time since reset was called
getJournalSyncNum() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getJournalSyncNum() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Number of Journal Syncs in the last interval
getJournalTransactionAverageTime() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getJournalTransactionAverageTime() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Average time for Journal transactions in last interval
getJournalTransactionMaxTime() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getJournalTransactionMaxTime() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
The Maximum Journal Transaction Time since reset was called
getJournalTransactionMinTime() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getJournalTransactionMinTime() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
The Minimum Journal Transaction Time since reset was called
getJournalTransactionNum() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getJournalTransactionNum() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Number of Journal Transactions in the last interval
getJtIdentifier() -
Method in class org.apache.hadoop.mapred.JobID
getKeepCommandFile(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
Does the user want to keep the command file for debugging? If this is
true, pipes will write a copy of the command data to a file in the
task directory named "downlink.data", which may be used to run the C++
program under the debugger.
getKeepFailedTaskFiles() -
Method in class org.apache.hadoop.mapred.JobConf
Should the temporary files for failed tasks be kept?
getKeepTaskFilesPattern() -
Method in class org.apache.hadoop.mapred.JobConf
Get the regular expression that is matched against the task names
to see if we need to keep the files.
getKey() -
Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the current raw key
getKey() -
Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
Returns the stored rawKey
getKeyClass() -
Method in class org.apache.hadoop.io.MapFile.Reader
Returns the class of keys in this file.
getKeyClass() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the class of keys in this file.
getKeyClass() -
Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the class of keys in this file.
getKeyClass() -
Method in class org.apache.hadoop.io.WritableComparator
Returns the WritableComparable implementation class.
getKeyClass() -
Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
getKeyClass() -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
The class of key that must be passed to SequenceFileRecordReader.next(Object, Object).
getKeyClassName() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the name of the key class.
getKeyClassName() -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
Retrieve the name of the key class for this SequenceFile.
getKeyTypeID() -
Method in class org.apache.hadoop.record.meta.MapTypeID
get the TypeID of the map's key element
getLastUpdate() -
Method in class org.apache.hadoop.dfs.DatanodeInfo
The time when this information was accurate.
getLen() -
Method in class org.apache.hadoop.fs.FileStatus
getLength(Block) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
Returns the specified block's on-disk length (excluding metadata)
getLength() -
Method in class org.apache.hadoop.dfs.FSDatasetInterface.MetaDataInputStream
getLength() -
Method in class org.apache.hadoop.fs.BlockLocation
Get the length of the block
getLength() -
Method in class org.apache.hadoop.fs.ContentSummary
getLength(Path) -
Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use getFileStatus() instead
getLength(Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Deprecated.
getLength() -
Method in class org.apache.hadoop.fs.s3.Block
getLength() -
Method in class org.apache.hadoop.io.DataInputBuffer
Returns the length of the input.
getLength() -
Method in class org.apache.hadoop.io.DataOutputBuffer
Returns the length of the valid data currently in the buffer.
getLength() -
Method in class org.apache.hadoop.io.InputBuffer
Returns the length of the input.
getLength() -
Method in class org.apache.hadoop.io.OutputBuffer
Returns the length of the valid data currently in the buffer.
getLength() -
Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the current length of the output file.
getLength() -
Method in class org.apache.hadoop.io.Text
Returns the number of bytes in the byte array
getLength() -
Method in class org.apache.hadoop.io.UTF8
Deprecated. The number of bytes in the encoded string.
getLength() -
Method in class org.apache.hadoop.mapred.FileSplit
The number of bytes in the file to process.
getLength() -
Method in interface org.apache.hadoop.mapred.InputSplit
Get the total number of bytes in the data of the InputSplit
.
getLength() -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
Return the aggregate length of all child InputSplits currently added.
getLength(int) -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
Get the length of ith child InputSplit.
getLength() -
Method in class org.apache.hadoop.mapred.MultiFileSplit
getLength(int) -
Method in class org.apache.hadoop.mapred.MultiFileSplit
Returns the length of the ith Path
getLengths() -
Method in class org.apache.hadoop.mapred.MultiFileSplit
Returns an array containing the lengths of the files in
the split
getLevel() -
Method in class org.apache.hadoop.dfs.DatanodeInfo
Return this node's level in the tree.
getLevel() -
Method in interface org.apache.hadoop.net.Node
Return this node's level in the tree.
getLevel() -
Method in class org.apache.hadoop.net.NodeBase
Return this node's level in the tree.
getLinkCount(File) -
Static method in class org.apache.hadoop.fs.FileUtil.HardLink
Retrieves the number of links to the specified file.
getListenerAddress() -
Method in class org.apache.hadoop.ipc.Server
Return the socket (ip+port) on which the RPC server is listening.
getListing(String) -
Method in class org.apache.hadoop.dfs.NameNode
getLoadNativeLibraries(JobConf) -
Method in class org.apache.hadoop.util.NativeCodeLoader
Return whether native hadoop libraries, if present, can be used for this job.
getLocal(Configuration) -
Static method in class org.apache.hadoop.fs.FileSystem
Get the local file system.
getLocalAnalysisClass() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the local analysis class.
getLocalCache(URI, Configuration, Path, FileStatus, boolean, long, Path) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Get the locally cached file or archive; it could either be
previously cached (and valid) or copy it from the FileSystem
now.
getLocalCache(URI, Configuration, Path, boolean, long, Path) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Get the locally cached file or archive; it could either be
previously cached (and valid) or copy it from the FileSystem
now.
getLocalCacheArchives(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Return the path array of the localized caches
getLocalCacheFiles(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Return the path array of the localized files
getLocalDirs() -
Method in class org.apache.hadoop.mapred.JobConf
getLocalJobFilePath(String) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Deprecated.
getLocalJobFilePath(JobID) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
Get the path of the locally stored job file
getLocalJobFilePath(String) -
Method in class org.apache.hadoop.mapred.JobTracker
Deprecated.
getLocalJobFilePath(JobID) -
Static method in class org.apache.hadoop.mapred.JobTracker
Get the localized job file path on the job tracker's local file system
getLocalPath(String, String) -
Method in class org.apache.hadoop.conf.Configuration
Get a local file under a directory named by dirsProp with
the given path.
getLocalPath(String) -
Method in class org.apache.hadoop.mapred.JobConf
Constructs a local file name.
getLocalPathForWrite(String, Configuration) -
Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS.
getLocalPathForWrite(String, long, Configuration) -
Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS.
getLocalPathToRead(String, Configuration) -
Method in class org.apache.hadoop.fs.LocalDirAllocator
Get a path from the local FS for reading.
getLocatedBlocks() -
Method in class org.apache.hadoop.dfs.LocatedBlocks
Get located blocks.
getLocation(int) -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
getLocations from ith InputSplit.
getLocations() -
Method in class org.apache.hadoop.mapred.FileSplit
getLocations() -
Method in interface org.apache.hadoop.mapred.InputSplit
Get the list of hostnames where the input split is located.
getLocations() -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
Collect a set of hosts from all child InputSplits.
getLocations() -
Method in class org.apache.hadoop.mapred.MultiFileSplit
getLong(String, long) -
Method in class org.apache.hadoop.conf.Configuration
Get the value of the name
property as a long
.
getLongValue(Object) -
Method in class org.apache.hadoop.contrib.utils.join.JobBase
getMapCompletionEvents(String, int, int) -
Method in class org.apache.hadoop.mapred.TaskTracker
Deprecated.
getMapCompletionEvents(JobID, int, int) -
Method in class org.apache.hadoop.mapred.TaskTracker
getMapDebugScript() -
Method in class org.apache.hadoop.mapred.JobConf
Get the map task's debug script.
getMapOutputCompressionType() -
Method in class org.apache.hadoop.mapred.JobConf
Deprecated. SequenceFile.CompressionType
is no longer valid for intermediate
map-outputs.
getMapOutputCompressorClass(Class<? extends CompressionCodec>) -
Method in class org.apache.hadoop.mapred.JobConf
Get the CompressionCodec
for compressing the map outputs.
getMapOutputKeyClass() -
Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
Get the map output key class.
getMapOutputKeyClass() -
Method in class org.apache.hadoop.mapred.JobConf
Get the key class for the map output data.
getMapOutputValueClass() -
Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
Get the map output value class.
getMapOutputValueClass() -
Method in class org.apache.hadoop.mapred.JobConf
Get the value class for the map output data.
getMapperClass() -
Method in class org.apache.hadoop.mapred.JobConf
Get the Mapper
class for the job.
getMapredJobID() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
Deprecated. use Job.getAssignedJobID()
instead
getMapredTempDir() -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Get the Map/Reduce temp directory.
getMapRunnerClass() -
Method in class org.apache.hadoop.mapred.JobConf
Get the MapRunnable
class for the job.
getMapSpeculativeExecution() -
Method in class org.apache.hadoop.mapred.JobConf
Should speculative execution be used for this job for map tasks?
Defaults to true
.
getMapTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobClient
Get the information of the current state of the map tasks of a job.
getMapTaskReports(String) -
Method in class org.apache.hadoop.mapred.JobClient
Deprecated. Applications should rather use JobClient.getMapTaskReports(JobID)
getMapTaskReports(String) -
Method in class org.apache.hadoop.mapred.JobTracker
Deprecated.
getMapTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
getMapTasks() -
Method in class org.apache.hadoop.mapred.ClusterStatus
Get the number of currently running map tasks in the cluster.
getMaxDepth(int) -
Static method in class org.apache.hadoop.util.QuickSort
Deepest recursion before giving up and doing a heapsort.
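The recursion cap described above is the classic introsort bound of roughly 2·⌊log₂ n⌋ levels before falling back to heapsort. A minimal sketch of that computation (an illustrative reconstruction, not Hadoop's exact code; the class name is made up):

```java
// Hypothetical sketch of an introsort-style recursion limit:
// allow about 2 * floor(log2(n)) levels of quicksort recursion,
// after which the sort would switch to heapsort for O(n log n).
public class MaxDepthSketch {
    static int getMaxDepth(int n) {
        int depth = 0;
        while ((n >>= 1) != 0) {
            depth += 2; // two extra levels per halving of the input size
        }
        return depth;
    }
    public static void main(String[] args) {
        System.out.println(getMaxDepth(1024)); // 2 * log2(1024) = 20
    }
}
```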
getMaxMapAttempts() -
Method in class org.apache.hadoop.mapred.JobConf
Get the configured number of maximum attempts that will be made to run a
map task, as specified by the mapred.map.max.attempts
property.
getMaxMapTaskFailuresPercent() -
Method in class org.apache.hadoop.mapred.JobConf
Get the maximum percentage of map tasks that can fail without
the job being aborted.
getMaxMapTasks() -
Method in class org.apache.hadoop.mapred.ClusterStatus
Get the maximum capacity for running map tasks in the cluster.
getMaxReduceAttempts() -
Method in class org.apache.hadoop.mapred.JobConf
Get the configured number of maximum attempts that will be made to run a
reduce task, as specified by the mapred.reduce.max.attempts
property.
getMaxReduceTaskFailuresPercent() -
Method in class org.apache.hadoop.mapred.JobConf
Get the maximum percentage of reduce tasks that can fail without
the job being aborted.
getMaxReduceTasks() -
Method in class org.apache.hadoop.mapred.ClusterStatus
Get the maximum capacity for running reduce tasks in the cluster.
getMaxTaskFailuresPerTracker() -
Method in class org.apache.hadoop.mapred.JobConf
Expert: Get the maximum number of task failures of a given job per tracker.
getMaxTime() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
The max time for a single operation since the last reset
MetricsTimeVaryingRate.resetMinMax()
getMemory() -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
Get the total amount of buffer memory, in bytes.
getMessage() -
Method in exception org.apache.hadoop.dfs.QuotaExceededException
getMessage() -
Method in exception org.apache.hadoop.mapred.InvalidInputException
Get a summary message of the problems found.
getMessage() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
getMessage() -
Method in exception org.apache.hadoop.record.compiler.generated.ParseException
This method has the standard behavior when this object has been
created using the standard constructors.
getMessage() -
Method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
You can also modify the body of this method to customize your error messages.
getMetadata() -
Method in class org.apache.hadoop.io.SequenceFile.Metadata
getMetadata() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the metadata object of the file
getMetaDataInputStream(Block) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
Returns metaData of block b as an input stream (and its length)
getMetaDataLength(Block) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
Returns the length of the metadata file of the specified block
getMetric(String) -
Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the metric object which can be a Float, Integer, Short or Byte.
getMetricNames() -
Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the set of metric names.
getMinTime() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
The min time for a single operation since the last reset
MetricsTimeVaryingRate.resetMinMax()
getMissingIds() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return a list of missing block names (as list of Strings).
getMissingReplicas() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the number of under-replicated blocks.
getMissingSize() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total size of missing data, in bytes.
getModificationTime() -
Method in class org.apache.hadoop.fs.FileStatus
Get the modification time of the file.
getMount() -
Method in class org.apache.hadoop.fs.DF
getName() -
Method in class org.apache.hadoop.dfs.DatanodeID
getName() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
Deprecated.
getName() -
Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
getName() -
Method in class org.apache.hadoop.fs.FileSystem
Deprecated. call #getUri() instead.
getName() -
Method in class org.apache.hadoop.fs.FilterFileSystem
Deprecated. call #getUri() instead.
getName() -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Deprecated.
getName() -
Method in class org.apache.hadoop.fs.Path
Returns the final component of this path.
getName() -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
Deprecated.
getName() -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
getName(Class) -
Static method in class org.apache.hadoop.io.WritableName
Return the name for a class.
getName() -
Method in class org.apache.hadoop.mapred.Counters.Counter
Get the internal name of the counter.
getName() -
Method in class org.apache.hadoop.mapred.Counters.Group
Returns raw name of the group.
getName() -
Method in interface org.apache.hadoop.net.Node
Return this node's name
getName() -
Method in class org.apache.hadoop.net.NodeBase
Return this node's name
getName() -
Method in class org.apache.hadoop.record.meta.RecordTypeInfo
return the name of the record
getNamed(String, Configuration) -
Static method in class org.apache.hadoop.fs.FileSystem
Deprecated. call #get(URI,Configuration) instead.
getNamenode() -
Method in class org.apache.hadoop.dfs.DataNode
Return the namenode's identifier
getNameNodeAddr() -
Method in class org.apache.hadoop.dfs.DataNode
getNameNodeAddress() -
Method in class org.apache.hadoop.dfs.NameNode
Returns the address on which the NameNode is listening.
getNameNodeMetrics() -
Static method in class org.apache.hadoop.dfs.NameNode
getNames() -
Method in class org.apache.hadoop.fs.BlockLocation
Get the list of names (hostname:port) hosting this block
getNestedStructTypeInfo(String) -
Method in class org.apache.hadoop.record.meta.RecordTypeInfo
Return the type info of a nested record.
getNetworkLocation() -
Method in class org.apache.hadoop.dfs.DatanodeInfo
rack name
getNetworkLocation() -
Method in interface org.apache.hadoop.net.Node
Return the string representation of this node's network location
getNetworkLocation() -
Method in class org.apache.hadoop.net.NodeBase
Return this node's network location
getNewJobId() -
Method in class org.apache.hadoop.mapred.JobTracker
Allocates a new JobId string.
getNextToken() -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
getNextToken() -
Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
getNode(String) -
Method in class org.apache.hadoop.mapred.JobTracker
Return the Node in the network topology that corresponds to the hostname
getNode() -
Method in class org.apache.hadoop.mapred.join.Parser.NodeToken
getNode() -
Method in class org.apache.hadoop.mapred.join.Parser.Token
getNode(String) -
Method in class org.apache.hadoop.net.NetworkTopology
Given a string representation of a node, return its reference
getNodesAtMaxLevel() -
Method in class org.apache.hadoop.mapred.JobTracker
Returns a collection of nodes at the max level
getNullContext(String) -
Static method in class org.apache.hadoop.metrics.ContextFactory
Returns a "null" context - one which does nothing.
getNum() -
Method in class org.apache.hadoop.mapred.join.Parser.NumToken
getNum() -
Method in class org.apache.hadoop.mapred.join.Parser.Token
getNumAddBlockOps() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getNumAddBlockOps() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Number of add block operations in the last interval
getNumber() -
Method in class org.apache.hadoop.metrics.spi.MetricValue
getNumberColumns() -
Method in class org.apache.hadoop.examples.dancing.DancingLinks
Get the number of columns.
getNumBytesInSum() -
Method in class org.apache.hadoop.dfs.DataChecksum
getNumCreateFileOps() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getNumCreateFileOps() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Number of file creation operations in the last interval
getNumDeleteFileOps() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getNumDeleteFileOps() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Number of file deletion operations in the last interval
getNumFiles(PathFilter) -
Method in class org.apache.hadoop.fs.InMemoryFileSystem
Deprecated.
getNumFilesCreated() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getNumFilesCreated() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Number of files created in the last interval
getNumFilesListed() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
Deprecated. call getNumGetListingOps() instead
getNumFilesListed() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Deprecated. Use getNumGetListingOps() instead
getNumFilesRenamed() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getNumFilesRenamed() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Number of files renamed in the last interval
getNumGetBlockLocations() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getNumGetBlockLocations() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Number of
NameNode.getBlockLocations(String,long,long)
getNumGetListingOps() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getNumGetListingOps() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
Number of files listed in the last interval
getNumMapTasks() -
Method in class org.apache.hadoop.mapred.JobConf
Get the configured number of map tasks for this job.
getNumOfLeaves() -
Method in class org.apache.hadoop.net.NetworkTopology
Return the total number of nodes
getNumOfRacks() -
Method in class org.apache.hadoop.net.NetworkTopology
Return the total number of racks
getNumOpenConnections() -
Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The number of open RPC connections
getNumOpenConnections() -
Method in class org.apache.hadoop.ipc.Server
The number of open RPC connections
getNumPaths() -
Method in class org.apache.hadoop.mapred.MultiFileSplit
Returns the number of Paths in the split
getNumReduceTasks() -
Method in class org.apache.hadoop.mapred.JobConf
Get the configured number of reduce tasks for this job.
getNumResolvedTaskTrackers() -
Method in class org.apache.hadoop.mapred.JobTracker
getNumTaskCacheLevels() -
Method in class org.apache.hadoop.mapred.JobTracker
getOffset() -
Method in class org.apache.hadoop.fs.BlockLocation
Get the start offset of the file associated with this block
getOp() -
Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
Get the type of the operation.
getOp() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Get the type of operation.
getOpt(String) -
Method in class org.apache.hadoop.fs.shell.CommandFormat
Return whether the option is set.
getOtherAction() -
Method in class org.apache.hadoop.fs.permission.FsPermission
Return other FsAction
.
getOutput() -
Method in class org.apache.hadoop.util.Shell.ShellCommandExecutor
Get the output of the shell command.
getOutputCompressionType(JobConf) -
Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
Get the SequenceFile.CompressionType
for the output SequenceFile
.
getOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
Get the CompressionCodec
for compressing the job outputs.
getOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) -
Static method in class org.apache.hadoop.mapred.OutputFormatBase
Deprecated. Get the CompressionCodec
for compressing the job outputs.
getOutputFormat() -
Method in class org.apache.hadoop.mapred.JobConf
Get the OutputFormat
implementation for the map-reduce job,
defaults to TextOutputFormat
if not specified explicitly.
getOutputKeyClass() -
Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
Get the reduce output key class.
getOutputKeyClass() -
Method in class org.apache.hadoop.mapred.JobConf
Get the key class for the job output data.
getOutputKeyComparator() -
Method in class org.apache.hadoop.mapred.JobConf
Get the RawComparator
comparator used to compare keys.
getOutputPath(JobConf) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
Get the Path
to the output directory for the map-reduce job.
getOutputPath() -
Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Use FileOutputFormat.getOutputPath(JobConf)
or
FileOutputFormat.getWorkOutputPath(JobConf)
Get the Path
to the output directory for the map-reduce job.
getOutputStream(Socket) -
Static method in class org.apache.hadoop.net.NetUtils
Same as getOutputStream(socket, 0).
getOutputStream(Socket, long) -
Static method in class org.apache.hadoop.net.NetUtils
Returns OutputStream for the socket.
getOutputValueClass() -
Static method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
Get the reduce output value class.
getOutputValueClass() -
Method in class org.apache.hadoop.mapred.JobConf
Get the value class for job outputs.
getOutputValueGroupingComparator() -
Method in class org.apache.hadoop.mapred.JobConf
Get the user defined WritableComparable
comparator for
grouping keys of inputs to the reduce.
getOwner() -
Method in class org.apache.hadoop.fs.FileStatus
Get the owner of the file.
getParameter(ServletRequest, String) -
Static method in class org.apache.hadoop.util.ServletUtil
Get a parameter from a ServletRequest.
getParent() -
Method in class org.apache.hadoop.dfs.DatanodeInfo
Return this node's parent
getParent() -
Method in class org.apache.hadoop.fs.Path
Returns the parent of a path or null if at root.
getParent() -
Method in interface org.apache.hadoop.net.Node
Return this node's parent
getParent() -
Method in class org.apache.hadoop.net.NodeBase
Return this node's parent
getParentNode(Node, int) -
Static method in class org.apache.hadoop.mapred.JobTracker
getPartition(Shard, IntermediateForm, int) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdatePartitioner
getPartition(IntWritable, IntWritable, int) -
Method in class org.apache.hadoop.examples.SleepJob
getPartition(K2, V2, int) -
Method in class org.apache.hadoop.mapred.lib.HashPartitioner
Use Object.hashCode()
to partition.
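The hashCode-based partitioning that HashPartitioner describes can be sketched in plain Java (class and method names here are illustrative; masking with Integer.MAX_VALUE keeps the result non-negative even when hashCode() is negative):

```java
// Minimal sketch of hash-based partitioning: map a key to one of
// numPartitions buckets using its hashCode, with the sign bit cleared
// so the modulo result is always in [0, numPartitions).
public class HashPartitionSketch {
    static int getPartition(Object key, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
    public static void main(String[] args) {
        System.out.println(getPartition("hadoop", 4));
    }
}
```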
getPartition(K2, V2, int) -
Method in class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
Use Object.hashCode()
to partition.
getPartition(K2, V2, int) -
Method in interface org.apache.hadoop.mapred.Partitioner
Get the partition number for a given key (hence record) given the total
number of partitions, i.e. the number of reduce tasks for the job.
getPartitionerClass() -
Method in class org.apache.hadoop.mapred.JobConf
Get the Partitioner
used to partition Mapper
-outputs
to be sent to the Reducer
s.
getPath() -
Method in class org.apache.hadoop.fs.FileStatus
getPath() -
Method in class org.apache.hadoop.mapred.FileSplit
The file containing this split's data.
getPath(int) -
Method in class org.apache.hadoop.mapred.MultiFileSplit
Returns the ith Path
getPath(Node) -
Static method in class org.apache.hadoop.net.NodeBase
Return this node's path
getPaths() -
Method in class org.apache.hadoop.mapred.MultiFileSplit
Returns all the Paths in the split
getPendingReplicationBlocks() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
Blocks pending to be replicated
getPercentUsed() -
Method in class org.apache.hadoop.fs.DF
getPercentUsed() -
Method in class org.apache.hadoop.fs.InMemoryFileSystem
Deprecated.
getPeriod() -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
Returns the timer period.
getPermission() -
Method in class org.apache.hadoop.fs.FileStatus
Get FsPermission associated with the file.
getPermission() -
Method in class org.apache.hadoop.fs.permission.PermissionStatus
Return permission
getPlatformName() -
Static method in class org.apache.hadoop.util.PlatformName
Get the complete platform as per the java-vm.
getPort() -
Method in class org.apache.hadoop.dfs.DatanodeID
getPort() -
Method in class org.apache.hadoop.mapred.StatusHttpServer
Get the port that the server is on
getPos() -
Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
getPos() -
Method in class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
getPos() -
Method in class org.apache.hadoop.fs.BufferedFSInputStream
getPos() -
Method in exception org.apache.hadoop.fs.ChecksumException
getPos() -
Method in class org.apache.hadoop.fs.FSDataInputStream
getPos() -
Method in class org.apache.hadoop.fs.FSDataOutputStream
getPos() -
Method in class org.apache.hadoop.fs.FSInputChecker
getPos() -
Method in class org.apache.hadoop.fs.FSInputStream
Return the current offset from the start of the file
getPos() -
Method in class org.apache.hadoop.fs.ftp.FTPInputStream
getPos() -
Method in interface org.apache.hadoop.fs.Seekable
Return the current offset from the start of the file
getPos() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Unsupported (returns zero in all cases).
getPos() -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Request position from proxied RR.
getPos() -
Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
getPos() -
Method in class org.apache.hadoop.mapred.LineRecordReader
getPos() -
Method in interface org.apache.hadoop.mapred.RecordReader
Returns the current position in the input.
getPos() -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
getPos() -
Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
getPos() -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
getPos() -
Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
Returns the current position in the input.
getPosition() -
Method in class org.apache.hadoop.io.DataInputBuffer
Returns the current position in the input.
getPosition() -
Method in class org.apache.hadoop.io.InputBuffer
Returns the current position in the input.
getPosition() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
Return the current byte position in the input file.
getPreferredBlockSize(String) -
Method in class org.apache.hadoop.dfs.NameNode
getPreviousIntervalAverageTime() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
The average rate of an operation in the previous interval
getPreviousIntervalNumOps() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
The number of operations in the previous interval
getPreviousIntervalValue() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
The Value at the Previous interval
getProblems() -
Method in exception org.apache.hadoop.mapred.InvalidInputException
Get the complete list of the problems reported.
getProcess() -
Method in class org.apache.hadoop.util.Shell
get the current sub-process executing the given command
getProfileEnabled() -
Method in class org.apache.hadoop.mapred.JobConf
Get whether the task profiling is enabled.
getProfileParams() -
Method in class org.apache.hadoop.mapred.JobConf
Get the profiler configuration arguments.
getProfileTaskRange(boolean) -
Method in class org.apache.hadoop.mapred.JobConf
Get the range of maps or reduces to profile.
getProgress() -
Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
getProgress() -
Method in class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
getProgress() -
Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the Progress object; this has a float (0.0 - 1.0)
indicating the bytes processed by the iterator so far
getProgress() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Report progress as the minimum of all child RR progress.
getProgress() -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
Request progress from proxied RR.
getProgress() -
Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
getProgress() -
Method in class org.apache.hadoop.mapred.LineRecordReader
Get the progress within the split
getProgress() -
Method in interface org.apache.hadoop.mapred.RecordReader
How much of the input has the RecordReader
consumed i.e.
getProgress() -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
Return the progress within the input split
getProgress() -
Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
getProgress() -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
Return the progress within the input split
getProgress() -
Method in class org.apache.hadoop.mapred.TaskReport
The amount completed, between zero and one.
getProgress() -
Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
getProtocolVersion(String, long) -
Method in class org.apache.hadoop.dfs.DataNode
Return protocol version corresponding to protocol interface.
getProtocolVersion(String, long) -
Method in class org.apache.hadoop.dfs.NameNode
getProtocolVersion(String, long) -
Method in interface org.apache.hadoop.ipc.VersionedProtocol
Return protocol version corresponding to protocol interface.
getProtocolVersion(String, long) -
Method in class org.apache.hadoop.mapred.JobTracker
getProtocolVersion(String, long) -
Method in class org.apache.hadoop.mapred.TaskTracker
getProxy(Class<?>, long, InetSocketAddress, Configuration, SocketFactory) -
Static method in class org.apache.hadoop.ipc.RPC
Construct a client-side proxy object that implements the named protocol,
talking to a server at the named address.
getProxy(Class<?>, long, InetSocketAddress, UserGroupInformation, Configuration, SocketFactory) -
Static method in class org.apache.hadoop.ipc.RPC
Construct a client-side proxy object that implements the named protocol,
talking to a server at the named address.
getProxy(Class<?>, long, InetSocketAddress, Configuration) -
Static method in class org.apache.hadoop.ipc.RPC
Construct a client-side proxy object with the default SocketFactory
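Conceptually, RPC.getProxy hands back an object that implements the protocol interface and forwards each call to the remote server. The same shape can be shown locally with java.lang.reflect.Proxy (a sketch only: the handler below stays in-process, and the Protocol interface is made up for illustration):

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

// Illustrative sketch of a client-side proxy: calls on the returned
// Protocol object are routed through an InvocationHandler, where a
// real RPC layer would serialize the call and send it over a socket.
public class RpcProxySketch {
    interface Protocol {
        long getProtocolVersion(String protocol, long clientVersion);
    }
    static Protocol getProxy() {
        InvocationHandler handler = (proxy, method, args) -> {
            // Stand-in for the remote call; always reports version 1.
            if (method.getName().equals("getProtocolVersion")) return 1L;
            throw new UnsupportedOperationException(method.getName());
        };
        return (Protocol) Proxy.newProxyInstance(
            RpcProxySketch.class.getClassLoader(),
            new Class<?>[] { Protocol.class }, handler);
    }
    public static void main(String[] args) {
        System.out.println(getProxy().getProtocolVersion("p", 1L)); // 1
    }
}
```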
getQuota() -
Method in class org.apache.hadoop.fs.ContentSummary
Return the directory quota
getRange(String, String) -
Method in class org.apache.hadoop.conf.Configuration
Parse the given attribute as a set of integer ranges
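A range attribute of this kind typically looks like "1-3,6". A hedged sketch of the parsing (the real Configuration.getRange returns a range object rather than a list, and supports open-ended forms not handled here; the class name is made up):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative parser for a comma-separated set of integer ranges,
// e.g. "1-3,6" -> [1, 2, 3, 6]. Each part is either a single integer
// or a "lo-hi" span expanded inclusively.
public class RangeSketch {
    static List<Integer> parse(String spec) {
        List<Integer> out = new ArrayList<>();
        for (String part : spec.split(",")) {
            int dash = part.indexOf('-');
            if (dash < 0) {
                out.add(Integer.parseInt(part.trim()));
            } else {
                int lo = Integer.parseInt(part.substring(0, dash).trim());
                int hi = Integer.parseInt(part.substring(dash + 1).trim());
                for (int i = lo; i <= hi; i++) out.add(i);
            }
        }
        return out;
    }
    public static void main(String[] args) {
        System.out.println(parse("1-3,6")); // [1, 2, 3, 6]
    }
}
```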
getRaw(String) -
Method in class org.apache.hadoop.conf.Configuration
Get the value of the name
property, without doing
variable expansion.
getRawCapacity() -
Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
Return the total raw capacity of the filesystem, disregarding
replication.
getRawCapacity() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return the total raw capacity of the filesystem, disregarding
replication.
getRawFileSystem() -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
Get the raw file system.
getRawUsed() -
Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
Return the total raw used space in the filesystem, disregarding
replication.
getRawUsed() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
Return the total raw used space in the filesystem, disregarding
replication.
getReadBlockOpAverageTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReadBlockOpAverageTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Average time for ReadBlock Operation in last interval
getReadBlockOpMaxTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReadBlockOpMaxTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Maximum ReadBlock Operation Time since reset was called
getReadBlockOpMinTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReadBlockOpMinTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Minimum ReadBlock Operation Time since reset was called
getReadBlockOpNum() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReadBlockOpNum() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of ReadBlock Operation in last interval
getReaders(FileSystem, Path, Configuration) -
Static method in class org.apache.hadoop.mapred.MapFileOutputFormat
Open the output generated by this format.
getReaders(Configuration, Path) -
Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
Open the output generated by this format.
getReadMetadataOpAverageTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReadMetadataOpAverageTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Average time for ReadMetadata Operation in last interval
getReadMetadataOpMaxTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReadMetadataOpMaxTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Maximum ReadMetadata Operation Time since reset was called
getReadMetadataOpMinTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReadMetadataOpMinTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Minimum ReadMetadata Operation Time since reset was called
getReadMetadataOpNum() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReadMetadataOpNum() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of ReadMetadata Operation in last interval
getReadsFromLocalClient() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReadsFromLocalClient() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of reads from local clients in the last interval
getReadsFromRemoteClient() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReadsFromRemoteClient() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of reads from remote clients in the last interval
getReadyJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
getRecordName() -
Method in interface org.apache.hadoop.metrics.MetricsRecord
Returns the record name.
getRecordName() -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
Returns the record name.
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.contrib.index.example.LineDocInputFormat
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.examples.MultiFileWordCount.MyInputFormat
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.FileInputFormat
getRecordReader(InputSplit, JobConf, Reporter) -
Method in interface org.apache.hadoop.mapred.InputFormat
Get the RecordReader
for the given InputSplit
.
getRecordReader(InputSplit, JobConf, Reporter) -
Method in interface org.apache.hadoop.mapred.join.ComposableInputFormat
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
Construct a CompositeRecordReader for the children of this InputFormat
as defined in the init expression.
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.MultiFileInputFormat
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.SequenceFileInputFilter
Create a record reader for the given split
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.mapred.TextInputFormat
getRecordReader(InputSplit, JobConf, Reporter) -
Method in class org.apache.hadoop.streaming.StreamInputFormat
getRecordReaderQueue() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
Return sorted list of RecordReaders for this composite.
getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateOutputFormat
getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.FileOutputFormat
getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.lib.MultipleOutputFormat
Create a composite record writer that can write key/value data to different
output files
getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.lib.NullOutputFormat
getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.MapFileOutputFormat
getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in interface org.apache.hadoop.mapred.OutputFormat
Get the RecordWriter
for the given job.
getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.OutputFormatBase
Deprecated.
getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
getRecordWriter(FileSystem, JobConf, String, Progressable) -
Method in class org.apache.hadoop.mapred.TextOutputFormat
getReduceDebugScript() -
Method in class org.apache.hadoop.mapred.JobConf
Get the reduce task's debug script.
getReducerClass() -
Method in class org.apache.hadoop.mapred.JobConf
Get the Reducer
class for the job.
getReduceSpeculativeExecution() -
Method in class org.apache.hadoop.mapred.JobConf
Should speculative execution be used for this job for reduce tasks?
Defaults to true
.
getReduceTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobClient
Get information on the current state of the reduce tasks of a job.
getReduceTaskReports(String) -
Method in class org.apache.hadoop.mapred.JobClient
Deprecated. Applications should rather use JobClient.getReduceTaskReports(JobID)
getReduceTaskReports(String) -
Method in class org.apache.hadoop.mapred.JobTracker
Deprecated.
getReduceTaskReports(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
getReduceTasks() -
Method in class org.apache.hadoop.mapred.ClusterStatus
Get the number of currently running reduce tasks in the cluster.
getRemaining() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.FSDatasetMBean
Returns the amount of free storage space (in bytes)
getRemaining() -
Method in class org.apache.hadoop.dfs.DatanodeInfo
The raw free space.
getRemaining() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem.DiskStatus
getRemainingArgs() -
Method in class org.apache.hadoop.util.GenericOptionsParser
Returns an array of Strings containing only application-specific arguments.
getRemoteAddress() -
Static method in class org.apache.hadoop.ipc.Server
Returns remote address as a string when invoked inside an RPC.
getRemoteIp() -
Static method in class org.apache.hadoop.ipc.Server
Returns the remote side IP address when invoked inside an RPC.
Returns null in case of an error.
getReplaceBlockOpAverageTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReplaceBlockOpAverageTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Average time for ReplaceBlock Operation in last interval
getReplaceBlockOpMaxTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReplaceBlockOpMaxTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Maximum ReplaceBlock Operation Time since reset was called
getReplaceBlockOpMinTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReplaceBlockOpMinTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
The Minimum ReplaceBlock Operation Time since reset was called
getReplaceBlockOpNum() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
getReplaceBlockOpNum() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
Number of ReplaceBlock Operation in last interval
getReplication() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the intended replication factor, against which the over/under-replicated blocks are counted.
getReplication() -
Method in class org.apache.hadoop.fs.FileStatus
Get the replication factor of a file.
getReplication(Path) -
Method in class org.apache.hadoop.fs.FileSystem
Deprecated. Use getFileStatus() instead
getReplication(Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
Deprecated.
getReplicationFactor() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the actual replication factor.
getReport() -
Method in class org.apache.hadoop.contrib.utils.join.JobBase
Log the counters.
getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
getReport() -
Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
getReport() -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
getReportDetails() -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
getReportItems() -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
getResource(String) -
Method in class org.apache.hadoop.conf.Configuration
Get the URL
for the named resource.
getRevision() -
Static method in class org.apache.hadoop.util.VersionInfo
Get the subversion revision number for the root directory
getRotations() -
Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
getRpcOpsAvgProcessingTime() -
Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
Average time for RPC Operations in last interval
getRpcOpsAvgProcessingTimeMax() -
Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The Maximum RPC Operation Processing Time since reset was called
getRpcOpsAvgProcessingTimeMin() -
Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The Minimum RPC Operation Processing Time since reset was called
getRpcOpsAvgQueueTime() -
Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The Average RPC Operation Queued Time in the last interval
getRpcOpsAvgQueueTimeMax() -
Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The Maximum RPC Operation Queued Time since reset was called
getRpcOpsAvgQueueTimeMin() -
Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
The Minimum RPC Operation Queued Time since reset was called
getRpcOpsNumber() -
Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
Number of RPC Operations in the last interval
getRunnable() -
Method in class org.apache.hadoop.util.Daemon
getRunningJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
getRunningJobs() -
Method in class org.apache.hadoop.mapred.JobTracker
Version that is called from a timer thread, and therefore needs to be
careful to synchronize.
getRunState() -
Method in class org.apache.hadoop.mapred.JobStatus
getSafeModeText() -
Method in class org.apache.hadoop.dfs.JspHelper
getSafemodeTime() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
getSafemodeTime() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
The time spent in the Safemode at startup
getScheduledReplicationBlocks() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
Blocks scheduled for replication
getSecretAccessKey() -
Method in class org.apache.hadoop.fs.s3.S3Credentials
getSelfAddr() -
Method in class org.apache.hadoop.dfs.DataNode
getSequenceFileOutputKeyClass(JobConf) -
Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
Get the key class for the SequenceFile
getSequenceFileOutputValueClass(JobConf) -
Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
Get the value class for the SequenceFile
getSerialization(Class<T>) -
Method in class org.apache.hadoop.io.serializer.SerializationFactory
getSerializedLength() -
Method in class org.apache.hadoop.fs.s3.INode
getSerializer(Class<Serializable>) -
Method in class org.apache.hadoop.io.serializer.JavaSerialization
getSerializer(Class<T>) -
Method in interface org.apache.hadoop.io.serializer.Serialization
getSerializer(Class<T>) -
Method in class org.apache.hadoop.io.serializer.SerializationFactory
getSerializer(Class<Writable>) -
Method in class org.apache.hadoop.io.serializer.WritableSerialization
getServer(Object, String, int, Configuration) -
Static method in class org.apache.hadoop.ipc.RPC
Construct a server for a protocol implementation instance listening on a
port and address.
getServer(Object, String, int, int, boolean, Configuration) -
Static method in class org.apache.hadoop.ipc.RPC
Construct a server for a protocol implementation instance listening on a
port and address.
getServerAddress(Configuration, String, String, String) -
Static method in class org.apache.hadoop.net.NetUtils
Deprecated.
getServerVersion() -
Method in exception org.apache.hadoop.ipc.RPC.VersionMismatch
Get the server's agreed to version.
getSessionId() -
Method in class org.apache.hadoop.mapred.JobConf
Get the user-specified session identifier.
getShape(boolean, int) -
Method in class org.apache.hadoop.examples.dancing.Pentomino.Piece
getSize() -
Method in class org.apache.hadoop.io.BytesWritable
Get the current size of the buffer.
getSize() -
Method in interface org.apache.hadoop.io.SequenceFile.ValueBytes
Size of stored data.
getSize() -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
getSocketFactory(Configuration, Class<?>) -
Static method in class org.apache.hadoop.net.NetUtils
Get the socket factory for the given class according to its
configuration parameter
hadoop.rpc.socket.factory.class.<ClassName>.
getSocketFactoryFromProperty(Configuration, String) -
Static method in class org.apache.hadoop.net.NetUtils
Get the socket factory corresponding to the given proxy URI.
getSpace(int) -
Static method in class org.apache.hadoop.streaming.StreamUtil
getSpeculativeExecution() -
Method in class org.apache.hadoop.mapred.JobConf
Should speculative execution be used for this job?
Defaults to true
.
getSplits(int) -
Method in class org.apache.hadoop.examples.dancing.Pentomino
Generate a list of prefixes to a given depth
getSplits(JobConf, int) -
Method in class org.apache.hadoop.mapred.FileInputFormat
Splits files returned by FileInputFormat.listStatus(JobConf)
when
they're too big.
getSplits(JobConf, int) -
Method in interface org.apache.hadoop.mapred.InputFormat
Logically split the set of input files for the job.
getSplits(JobConf, int) -
Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
Build a CompositeInputSplit from the child InputFormats by assigning the
ith split from each child to the ith composite split.
getSplits(JobConf, int) -
Method in class org.apache.hadoop.mapred.lib.NLineInputFormat
Logically splits the set of input files for the job, treating N lines
of the input as one split.
getSplits(JobConf, int) -
Method in class org.apache.hadoop.mapred.MultiFileInputFormat
getStart() -
Method in class org.apache.hadoop.mapred.FileSplit
The position of the first byte in the file to process.
getStartTime() -
Method in class org.apache.hadoop.mapred.JobStatus
getStartTime() -
Method in class org.apache.hadoop.mapred.JobTracker
getStartTime() -
Method in class org.apache.hadoop.mapred.TaskReport
Get start time of task.
getState() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
getState() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
getState() -
Method in class org.apache.hadoop.mapred.TaskReport
The most recent state, reported by a Reporter
.
getStaticResolution(String) -
Static method in class org.apache.hadoop.net.NetUtils
Retrieves the resolved name for the passed host.
getStatistics(Class<? extends FileSystem>) -
Static method in class org.apache.hadoop.fs.FileSystem
Get the statistics for a particular file system
getStats() -
Method in class org.apache.hadoop.dfs.NameNode
getStatusText(boolean) -
Method in class org.apache.hadoop.dfs.UpgradeStatusReport
Get upgradeStatus data as a text for reporting.
getStorageID() -
Method in class org.apache.hadoop.dfs.DatanodeID
getStorageInfo() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.FSDatasetMBean
Returns the storage id of the underlying storage
getStoredBlock(long) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
getStr() -
Method in class org.apache.hadoop.mapred.join.Parser.StrToken
getStr() -
Method in class org.apache.hadoop.mapred.join.Parser.Token
getStringCollection(String) -
Method in class org.apache.hadoop.conf.Configuration
Get the comma delimited values of the name
property as
a collection of String
s.
getStringCollection(String) -
Static method in class org.apache.hadoop.util.StringUtils
Returns a collection of strings.
getStrings(String) -
Method in class org.apache.hadoop.conf.Configuration
Get the comma delimited values of the name
property as
an array of String
s.
getStrings(String, String...) -
Method in class org.apache.hadoop.conf.Configuration
Get the comma delimited values of the name
property as
an array of String
s.
getStrings(String) -
Static method in class org.apache.hadoop.util.StringUtils
Returns an ArrayList of strings.
getSuccessfulJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
GetSuffix(int) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
getSum() -
Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
getSum() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
getSymlink(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
This method checks whether symlinks are to be created for the
localized cache files in the current working directory.
getSystemDir() -
Method in class org.apache.hadoop.mapred.JobClient
Grab the jobtracker system directory path where job-specific files are to be placed.
getSystemDir() -
Method in class org.apache.hadoop.mapred.JobConf
Deprecated. Use JobClient.getSystemDir()
instead.
Get the system directory where job-specific files are to be placed.
getSystemDir() -
Method in class org.apache.hadoop.mapred.JobTracker
getTabSize(int) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
getTag() -
Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
getTag(String) -
Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns a tag object, which can be a String, Integer, Short or Byte.
getTagNames() -
Method in class org.apache.hadoop.metrics.spi.OutputRecord
Returns the set of tag names
getTask(String) -
Method in class org.apache.hadoop.mapred.TaskTracker
Deprecated.
getTask(TaskAttemptID) -
Method in class org.apache.hadoop.mapred.TaskTracker
Called upon startup by the child process, to fetch Task data.
getTaskAttemptId() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns task id.
getTaskAttemptIDsPattern(String, Integer, Boolean, Integer, Integer) -
Static method in class org.apache.hadoop.mapred.TaskAttemptID
Returns a regex pattern which matches task attempt IDs.
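Task attempt IDs in this release have the textual form attempt_&lt;jtIdentifier&gt;_&lt;jobId&gt;_&lt;m|r&gt;_&lt;taskId&gt;_&lt;attemptId&gt;, e.g. attempt_200707121733_0003_m_000005_0. A hedged sketch of a pattern matching that layout, assuming the form above (the exact pattern getTaskAttemptIDsPattern builds, including its wildcard handling for null arguments, may differ):

```java
import java.util.regex.Pattern;

public class TaskAttemptIdMatcher {
    // Matches task attempt IDs of the assumed form:
    //   attempt_<jtIdentifier>_<jobId>_<m|r>_<taskId>_<attemptId>
    // e.g. attempt_200707121733_0003_m_000005_0
    static final Pattern TASK_ATTEMPT_ID =
        Pattern.compile("attempt_(\\d+)_(\\d+)_([mr])_(\\d+)_(\\d+)");

    public static boolean isTaskAttemptId(String s) {
        return TASK_ATTEMPT_ID.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(isTaskAttemptId("attempt_200707121733_0003_m_000005_0"));
    }
}
```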
getTaskAttempts() -
Method in class org.apache.hadoop.mapred.JobHistory.Task
Returns all task attempts for this task.
getTaskCompletionEvents(String, int, int) -
Method in class org.apache.hadoop.mapred.JobTracker
Deprecated.
getTaskCompletionEvents(JobID, int, int) -
Method in class org.apache.hadoop.mapred.JobTracker
getTaskCompletionEvents(int) -
Method in interface org.apache.hadoop.mapred.RunningJob
Get events indicating completion (success/failure) of component tasks.
getTaskDiagnostics(String, String, String) -
Method in class org.apache.hadoop.mapred.JobTracker
Deprecated.
getTaskDiagnostics(TaskAttemptID) -
Method in class org.apache.hadoop.mapred.JobTracker
Get the diagnostics for a given task
getTaskID() -
Method in class org.apache.hadoop.mapred.TaskAttemptID
Returns the TaskID
object that this task attempt belongs to
getTaskId() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Deprecated. use TaskCompletionEvent.getTaskAttemptId()
instead.
getTaskId() -
Method in class org.apache.hadoop.mapred.TaskLogAppender
Getter/Setter methods for log4j.
getTaskId() -
Method in class org.apache.hadoop.mapred.TaskReport
Deprecated. use TaskReport.getTaskID()
instead
getTaskID() -
Method in class org.apache.hadoop.mapred.TaskReport
The id of the task.
getTaskIDsPattern(String, Integer, Boolean, Integer) -
Static method in class org.apache.hadoop.mapred.TaskID
Returns a regex pattern which matches task IDs.
getTaskInfo(JobConf) -
Static method in class org.apache.hadoop.streaming.StreamUtil
getTaskLogFile(String, TaskLog.LogName) -
Static method in class org.apache.hadoop.mapred.TaskLog
Deprecated.
getTaskLogFile(TaskAttemptID, TaskLog.LogName) -
Static method in class org.apache.hadoop.mapred.TaskLog
getTaskLogLength(JobConf) -
Static method in class org.apache.hadoop.mapred.TaskLog
Get the desired maximum length of task's logs.
getTaskOutputFilter(JobConf) -
Static method in class org.apache.hadoop.mapred.JobClient
Get the task output filter out of the JobConf.
getTaskOutputFilter() -
Method in class org.apache.hadoop.mapred.JobClient
Deprecated.
getTaskOutputPath(JobConf, String) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
Helper function to create the task's temporary output directory and
return the path to the task's output file.
getTaskRunTime() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns time (in millisec) the task took to complete.
getTaskStatus() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Returns enum Status.SUCESS or Status.FAILURE.
getTaskTracker(String) -
Method in class org.apache.hadoop.mapred.JobTracker
getTaskTrackerHttp() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
http location of the tasktracker where this task ran.
getTaskTrackerMetrics() -
Method in class org.apache.hadoop.mapred.TaskTracker
getTaskTrackerReportAddress() -
Method in class org.apache.hadoop.mapred.TaskTracker
Return the port to which the tasktracker is bound.
getTaskTrackers() -
Method in class org.apache.hadoop.mapred.ClusterStatus
Get the number of task trackers in the cluster.
getTerm() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
Get the term.
getText() -
Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
Get the text that represents a document.
getText() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
The text of the document id.
getTimestamp(Configuration, URI) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Returns the mtime of a given cache file on HDFS.
getTip(TaskID) -
Method in class org.apache.hadoop.mapred.JobTracker
Returns specified TaskInProgress, or null.
getToken(int) -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
getTotalBlocks() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the total number of blocks in the scanned area.
getTotalDirs() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total number of directories encountered during this scan.
getTotalFiles() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total number of files encountered during this scan.
getTotalLoad() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
Total Load on the FSNamesystem
getTotalLogFileSize() -
Method in class org.apache.hadoop.mapred.TaskLogAppender
getTotalOpenFiles() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total number of files opened for write encountered during this scan.
getTotalOpenFilesBlocks() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return the total number of blocks held by open files.
getTotalOpenFilesSize() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total size of open files data, in bytes.
getTotalSize() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
Return total size of scanned data, in bytes.
getTotalSubmissions() -
Method in class org.apache.hadoop.mapred.JobTracker
getTrackerIdentifier() -
Method in class org.apache.hadoop.mapred.JobTracker
Get the unique identifier (ie.
getTrackerPort() -
Method in class org.apache.hadoop.mapred.JobTracker
getTrackingURL() -
Method in interface org.apache.hadoop.mapred.RunningJob
Get the URL where some job progress information will be displayed.
getType() -
Method in interface org.apache.hadoop.dfs.Upgradeable
Get the type of the software component, which this object is upgrading.
getType() -
Method in class org.apache.hadoop.mapred.join.Parser.Token
getTypeID() -
Method in class org.apache.hadoop.record.meta.FieldTypeInfo
get the field's TypeID object
getTypes() -
Method in class org.apache.hadoop.io.GenericWritable
Return all classes that may be wrapped.
getTypeVal() -
Method in class org.apache.hadoop.record.meta.TypeID
Get the type value.
getUlimitMemoryCommand(JobConf) -
Static method in class org.apache.hadoop.util.Shell
Get the Unix command for setting the maximum virtual memory available
to a given child process.
getUMask(Configuration) -
Static method in class org.apache.hadoop.fs.permission.FsPermission
Get the user file creation mask (umask)
getUnderReplicatedBlocks() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
Under-replicated blocks
getUniqueItems() -
Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
getUpgradeStatus() -
Method in interface org.apache.hadoop.dfs.Upgradeable
Upgrade status reports the percentage of the work done out of the total
amount required by the upgrade.
getUpgradeStatus() -
Method in class org.apache.hadoop.dfs.UpgradeStatusReport
Get the upgrade status as a percentage of the total upgrade done.
getUpgradeStatusReport(boolean) -
Method in interface org.apache.hadoop.dfs.Upgradeable
Get status report for the upgrade.
getUpgradeStatusText() -
Method in class org.apache.hadoop.dfs.JspHelper
getUri() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
getUri() -
Method in class org.apache.hadoop.dfs.HftpFileSystem
getUri() -
Method in class org.apache.hadoop.dfs.HsftpFileSystem
getUri() -
Method in class org.apache.hadoop.fs.FileSystem
Returns a URI whose scheme and authority identify this FileSystem.
getUri() -
Method in class org.apache.hadoop.fs.FilterFileSystem
Returns a URI whose scheme and authority identify this FileSystem.
getUri() -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
getUri() -
Method in class org.apache.hadoop.fs.HarFileSystem
Returns the uri of this filesystem.
getUri() -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
getUri() -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
getUri() -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
getUri() -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
getURIs(String, String) -
Method in class org.apache.hadoop.streaming.StreamJob
Get the URIs of all the files/caches.
getURL() -
Method in class org.apache.hadoop.mapred.JobProfile
Get the link to the web-ui for details of the job.
getUrl() -
Static method in class org.apache.hadoop.util.VersionInfo
Get the subversion URL for the root Hadoop directory.
getUsed() -
Method in class org.apache.hadoop.fs.DF
getUsed() -
Method in class org.apache.hadoop.fs.DU
getUsed() -
Method in class org.apache.hadoop.fs.FileSystem
Return the total size of all files in the filesystem.
getUser() -
Method in class org.apache.hadoop.mapred.JobConf
Get the reported username for this job.
getUser() -
Method in class org.apache.hadoop.mapred.JobProfile
Get the user id.
getUser() -
Static method in class org.apache.hadoop.util.VersionInfo
The user that compiled Hadoop.
getUserAction() -
Method in class org.apache.hadoop.fs.permission.FsPermission
Return user FsAction
.
getUserName() -
Method in class org.apache.hadoop.fs.permission.PermissionStatus
Return user name
getUsername() -
Method in class org.apache.hadoop.mapred.JobStatus
getUserName() -
Method in class org.apache.hadoop.security.UnixUserGroupInformation
Return the user's name
getUserName() -
Method in class org.apache.hadoop.security.UserGroupInformation
Get username
getVal() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
getVal() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
getVal() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
getVal() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
getValue() -
Method in class org.apache.hadoop.dfs.DataChecksum
getValue() -
Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
Gets the current raw value
getValueClass() -
Method in class org.apache.hadoop.io.ArrayWritable
getValueClass() -
Method in class org.apache.hadoop.io.MapFile.Reader
Returns the class of values in this file.
getValueClass() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the class of values in this file.
getValueClass() -
Method in class org.apache.hadoop.io.SequenceFile.Writer
Returns the class of values in this file.
getValueClass() -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
The class of value that must be passed to SequenceFileRecordReader.next(Object, Object)
.
getValueClassName() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
Returns the name of the value class.
getValueClassName() -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
Retrieve the name of the value class for this SequenceFile.
getValueTypeID() -
Method in class org.apache.hadoop.record.meta.MapTypeID
get the TypeID of the map's value element
getVersion() -
Method in class org.apache.hadoop.contrib.index.mapred.Shard
Get the version number of the entire index.
getVersion() -
Method in interface org.apache.hadoop.dfs.Upgradeable
Get the layout version of the upgrade object.
getVersion() -
Method in class org.apache.hadoop.dfs.UpgradeStatusReport
Get the layout version of the currently running upgrade.
getVersion() -
Method in interface org.apache.hadoop.fs.s3.FileSystemStore
getVersion() -
Method in class org.apache.hadoop.io.VersionedWritable
Return the version number of the current implementation.
getVersion() -
Static method in class org.apache.hadoop.util.VersionInfo
Get the Hadoop version.
getVIntSize(long) -
Static method in class org.apache.hadoop.io.WritableUtils
Get the encoded length if an integer is stored in a variable-length format
getVIntSize(long) -
Static method in class org.apache.hadoop.record.Utils
Get the encoded length if an integer is stored in a variable-length format
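Hadoop's variable-length integer format stores small values directly in one byte and larger values behind a leading length byte. A minimal sketch of how the encoded length can be computed (the class name `VIntSize` is hypothetical; the boundary values follow the `WritableUtils`-style format, so treat this as an illustration rather than the shipped code):

```java
// Sketch of computing the encoded length of a variable-length integer,
// assuming the WritableUtils-style format: values in [-112, 127] fit in
// one byte; anything else takes one length byte plus the data bytes.
public class VIntSize {
    public static int getVIntSize(long i) {
        if (i >= -112 && i <= 127) {
            return 1; // small values are stored directly in a single byte
        }
        if (i < 0) {
            i ^= -1L; // one's complement, so we count the magnitude's bits
        }
        int dataBits = 64 - Long.numberOfLeadingZeros(i);
        return (dataBits + 7) / 8 + 1; // data bytes plus the length byte
    }
}
```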
getWaitingJobs() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
getWarn() -
Static method in class org.apache.hadoop.metrics.jvm.EventCounter
getWorkingDirectory() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
getWorkingDirectory() -
Method in class org.apache.hadoop.dfs.HftpFileSystem
getWorkingDirectory() -
Method in class org.apache.hadoop.fs.FileSystem
Get the current working directory for the given file system
getWorkingDirectory() -
Method in class org.apache.hadoop.fs.FilterFileSystem
Get the current working directory for the given file system
getWorkingDirectory() -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
getWorkingDirectory() -
Method in class org.apache.hadoop.fs.HarFileSystem
return the top level archive.
getWorkingDirectory() -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
getWorkingDirectory() -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
getWorkingDirectory() -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
getWorkingDirectory() -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
getWorkingDirectory() -
Method in class org.apache.hadoop.mapred.JobConf
Get the current working directory for the default file system.
getWorkOutputPath(JobConf) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
Get the Path
to the task's temporary output directory
for the map-reduce job
Tasks' Side-Effect Files
- getWrappedStream() -
Method in class org.apache.hadoop.fs.FSDataOutputStream
-
- getWriteBlockOpAverageTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
-
- getWriteBlockOpAverageTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
- Average time for WriteBlock Operation in last interval
- getWriteBlockOpMaxTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
-
- getWriteBlockOpMaxTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
- The Maximum WriteBlock Operation Time since reset was called
- getWriteBlockOpMinTime() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
-
- getWriteBlockOpMinTime() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
- The Minimum WriteBlock Operation Time since reset was called
- getWriteBlockOpNum() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
-
- getWriteBlockOpNum() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
- Number of WriteBlock Operation in last interval
- getWritesFromLocalClient() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
-
- getWritesFromLocalClient() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
- Number of writes from local clients in the last interval
- getWritesFromRemoteClient() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
-
- getWritesFromRemoteClient() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
- Number of writes from remote clients in the last interval
- getXceiverCount() -
Method in class org.apache.hadoop.dfs.DatanodeInfo
- number of active connections
- getZlibCompressor(Configuration) -
Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
- Return the appropriate implementation of the zlib compressor.
- getZlibCompressorType(Configuration) -
Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
- Return the appropriate type of the zlib compressor.
- getZlibDecompressor(Configuration) -
Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
- Return the appropriate implementation of the zlib decompressor.
- getZlibDecompressorType(Configuration) -
Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
- Return the appropriate type of the zlib decompressor.
- globStatus(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Return all the files that match filePattern and are not checksum
files.
- globStatus(Path, PathFilter) -
Method in class org.apache.hadoop.fs.FileSystem
- Return an array of FileStatus objects whose path names match pathPattern
and is accepted by the user-supplied path filter.
- go() -
Method in class org.apache.hadoop.streaming.StreamJob
- This is the method that actually
initializes the job conf and submits the job
to the jobtracker
- goodClassOrNull(String, String) -
Static method in class org.apache.hadoop.streaming.StreamUtil
- It may seem strange to silently switch behaviour when a String
is not a classname; the reason is simplified Usage:
- Grep - Class in org.apache.hadoop.examples
-
- GT_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- GzipCodec - Class in org.apache.hadoop.io.compress
- This class creates gzip compressors/decompressors.
- GzipCodec() -
Constructor for class org.apache.hadoop.io.compress.GzipCodec
-
- GzipCodec.GzipInputStream - Class in org.apache.hadoop.io.compress
-
- GzipCodec.GzipInputStream(InputStream) -
Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
-
- GzipCodec.GzipInputStream(DecompressorStream) -
Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
- Allow subclasses to directly set the inflater stream.
- GzipCodec.GzipOutputStream - Class in org.apache.hadoop.io.compress
- A bridge that wraps around a DeflaterOutputStream to make it
a CompressionOutputStream.
- GzipCodec.GzipOutputStream(OutputStream) -
Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
-
- GzipCodec.GzipOutputStream(CompressorStream) -
Constructor for class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
- Allow children types to put a different type in here.
H
- hadoopAliasConf_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- HadoopStreaming - Class in org.apache.hadoop.streaming
- The main entrypoint.
- HadoopStreaming() -
Constructor for class org.apache.hadoop.streaming.HadoopStreaming
-
- HadoopVersionAnnotation - Annotation Type in org.apache.hadoop
- A package attribute that captures the version of Hadoop that was compiled.
- halfDigest() -
Method in class org.apache.hadoop.io.MD5Hash
- Construct a half-sized version of this MD5.
- handle(JobHistory.RecordTypes, Map<JobHistory.Keys, String>) -
Method in interface org.apache.hadoop.mapred.JobHistory.Listener
- Callback method for history parser.
- HarFileSystem - Class in org.apache.hadoop.fs
- This is an implementation of the Hadoop Archive
Filesystem.
- HarFileSystem() -
Constructor for class org.apache.hadoop.fs.HarFileSystem
- Public constructor of HarFileSystem.
- HarFileSystem(FileSystem) -
Constructor for class org.apache.hadoop.fs.HarFileSystem
- Constructor to create a HarFileSystem with an
underlying filesystem.
- has(int) -
Method in class org.apache.hadoop.mapred.join.TupleWritable
- Return true if tuple has an element at the position provided.
- hashBytes(byte[], int) -
Static method in class org.apache.hadoop.io.WritableComparator
- Compute hash for binary data.
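Hashing raw bytes without deserializing them is central to fast Writable comparisons. A plausible sketch of such a hash using the conventional 31-multiplier polynomial accumulation (the standalone `ByteHash` class is illustrative, not the shipped `WritableComparator` implementation):

```java
// Sketch of a polynomial hash over raw bytes, in the style of
// WritableComparator.hashBytes: hash = 31*hash + b for each byte.
public class ByteHash {
    public static int hashBytes(byte[] bytes, int length) {
        int hash = 1;
        for (int i = 0; i < length; i++) {
            hash = (31 * hash) + (int) bytes[i]; // fold in each byte
        }
        return hash;
    }
}
```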
- hashCode() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
-
- hashCode() -
Method in class org.apache.hadoop.contrib.index.mapred.Shard
-
- hashCode() -
Method in class org.apache.hadoop.dfs.DatanodeID
-
- hashCode() -
Method in class org.apache.hadoop.examples.MultiFileWordCount.WordOffset
-
- hashCode() -
Method in class org.apache.hadoop.fs.FileStatus
- Returns a hash code value for the object, which is defined as
the hash code of the path name.
- hashCode() -
Method in class org.apache.hadoop.fs.Path
-
- hashCode() -
Method in class org.apache.hadoop.fs.permission.FsPermission
-
- hashCode() -
Method in class org.apache.hadoop.io.BooleanWritable
-
- hashCode() -
Method in class org.apache.hadoop.io.BytesWritable
-
- hashCode() -
Method in class org.apache.hadoop.io.ByteWritable
-
- hashCode() -
Method in class org.apache.hadoop.io.DoubleWritable
-
- hashCode() -
Method in class org.apache.hadoop.io.FloatWritable
-
- hashCode() -
Method in class org.apache.hadoop.io.IntWritable
-
- hashCode() -
Method in class org.apache.hadoop.io.LongWritable
-
- hashCode() -
Method in class org.apache.hadoop.io.MD5Hash
- Returns a hash code value for this object.
- hashCode() -
Method in class org.apache.hadoop.io.NullWritable
-
- hashCode() -
Method in class org.apache.hadoop.io.SequenceFile.Metadata
-
- hashCode() -
Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
-
- hashCode() -
Method in class org.apache.hadoop.io.Text
- hash function
- hashCode() -
Method in class org.apache.hadoop.io.UTF8
- Deprecated.
- hashCode() -
Method in class org.apache.hadoop.io.VIntWritable
-
- hashCode() -
Method in class org.apache.hadoop.io.VLongWritable
-
- hashCode() -
Method in class org.apache.hadoop.mapred.ID
-
- hashCode() -
Method in class org.apache.hadoop.mapred.JobID
-
- hashCode() -
Method in class org.apache.hadoop.mapred.join.TupleWritable
-
- hashCode() -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
-
- hashCode() -
Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- hashCode() -
Method in class org.apache.hadoop.mapred.TaskID
-
- hashCode() -
Method in class org.apache.hadoop.net.SocksSocketFactory
-
- hashCode() -
Method in class org.apache.hadoop.net.StandardSocketFactory
-
- hashCode() -
Method in class org.apache.hadoop.record.Buffer
-
- hashCode() -
Method in class org.apache.hadoop.record.meta.FieldTypeInfo
- We use a basic hashcode implementation, since this class will likely not
be used as a hashmap key
- hashCode() -
Method in class org.apache.hadoop.record.meta.MapTypeID
- We use a basic hashcode implementation, since this class will likely not
be used as a hashmap key
- hashCode() -
Method in class org.apache.hadoop.record.meta.TypeID
- We use a basic hashcode implementation, since this class will likely not
be used as a hashmap key
- hashCode() -
Method in class org.apache.hadoop.record.meta.VectorTypeID
- We use a basic hashcode implementation, since this class will likely not
be used as a hashmap key
- hashCode() -
Method in class org.apache.hadoop.security.UnixUserGroupInformation
- Returns a hash code for this UGI.
- HashingDistributionPolicy - Class in org.apache.hadoop.contrib.index.example
- Choose a shard for each insert or delete based on document id hashing.
- HashingDistributionPolicy() -
Constructor for class org.apache.hadoop.contrib.index.example.HashingDistributionPolicy
-
- HashPartitioner<K2,V2> - Class in org.apache.hadoop.mapred.lib
- Partition keys by their
Object.hashCode()
.
- HashPartitioner() -
Constructor for class org.apache.hadoop.mapred.lib.HashPartitioner
-
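HashPartitioner routes each key to a reduce partition via `Object.hashCode()`. A self-contained sketch of the partitioning rule (masking with `Integer.MAX_VALUE` clears the sign bit so negative hash codes still map into range; the simplified signature here omits the value argument and JobConf plumbing):

```java
// Sketch of hash partitioning: mask off the sign bit so negative
// hash codes still land in [0, numPartitions).
public class SimpleHashPartitioner<K> {
    public int getPartition(K key, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}
```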
- hasNext() -
Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
-
- hasNext() -
Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
-
- hasNext() -
Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
- Returns true if the stream is not empty, but provides no guarantee that
a call to next(K,V) will succeed.
- hasNext() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Return true if it is possible that this could emit more values.
- hasNext() -
Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
-
- hasNext() -
Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
-
- hasNext() -
Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
-
- hasNext() -
Method in interface org.apache.hadoop.mapred.join.ResetableIterator
- True iff a call to next will succeed.
- hasNext() -
Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
-
- hasNext() -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
- Return true if the RR- including the k,v pair stored in this object-
is exhausted.
- hasSimpleInputSpecs_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- HEADER -
Static variable in class org.apache.hadoop.ipc.Server
- The first four bytes of Hadoop RPC connections
- HEADER_LEN -
Static variable in class org.apache.hadoop.dfs.DataChecksum
-
- headMap(WritableComparable) -
Method in class org.apache.hadoop.io.SortedMapWritable
-
- HeapSort - Class in org.apache.hadoop.util
- An implementation of the core algorithm of HeapSort.
- HeapSort() -
Constructor for class org.apache.hadoop.util.HeapSort
-
- heartbeat(TaskTrackerStatus, boolean, boolean, short) -
Method in class org.apache.hadoop.mapred.JobTracker
- The periodic heartbeat mechanism between the
TaskTracker
and
the JobTracker
.
- HEARTBEAT_INTERVAL -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- heartbeats -
Variable in class org.apache.hadoop.dfs.datanode.metrics.DataNodeMetrics
-
- height -
Variable in class org.apache.hadoop.examples.dancing.Pentomino
-
- height -
Static variable in class org.apache.hadoop.mapred.StatusHttpServer.TaskGraphServlet
- height of the graph w/o margins
- hexchars -
Static variable in class org.apache.hadoop.record.Utils
-
- hexStringToByte(String) -
Static method in class org.apache.hadoop.util.StringUtils
- Given a hex string this will return the byte array corresponding to the
string
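A minimal sketch of the hex-to-bytes conversion described here, assuming two hex digits per output byte (the `Hex` class name is hypothetical and error handling is omitted):

```java
// Sketch: decode a hex string into the corresponding byte array,
// consuming two hex characters per output byte.
public class Hex {
    public static byte[] hexStringToByte(String hex) {
        byte[] bytes = new byte[hex.length() / 2];
        for (int i = 0; i < bytes.length; i++) {
            bytes[i] = (byte) Integer.parseInt(hex.substring(2 * i, 2 * i + 2), 16);
        }
        return bytes;
    }
}
```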
- HftpFileSystem - Class in org.apache.hadoop.dfs
- An implementation of a protocol for accessing filesystems over HTTP.
- HftpFileSystem() -
Constructor for class org.apache.hadoop.dfs.HftpFileSystem
-
- hostName -
Variable in class org.apache.hadoop.dfs.DatanodeInfo
- HostName as supplied by the datanode during registration as its
name.
- HostsFileReader - Class in org.apache.hadoop.util
-
- HostsFileReader(String, String) -
Constructor for class org.apache.hadoop.util.HostsFileReader
-
- HsftpFileSystem - Class in org.apache.hadoop.dfs
- An implementation of a protocol for accessing filesystems over HTTPS.
- HsftpFileSystem() -
Constructor for class org.apache.hadoop.dfs.HsftpFileSystem
-
- HTML_TAIL -
Static variable in class org.apache.hadoop.util.ServletUtil
-
- htmlFooter() -
Static method in class org.apache.hadoop.util.ServletUtil
- HTML footer to be added in the jsps.
- humanReadableInt(long) -
Static method in class org.apache.hadoop.util.StringUtils
- Given an integer, return a string that is in an approximate but
human-readable format.
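One way to sketch this "approximate, human readable" rendering, assuming binary (1024-based) units with one decimal place; the exact suffixes and rounding of the shipped StringUtils may differ:

```java
import java.util.Locale;

// Sketch: render a long as an approximate human-readable string,
// assuming 1024-based units with suffixes k, m, g, t.
public class Human {
    public static String humanReadableInt(long number) {
        String[] suffixes = {"", "k", "m", "g", "t"};
        double value = number;
        int unit = 0;
        while (value >= 1024 && unit < suffixes.length - 1) {
            value /= 1024;
            unit++;
        }
        return unit == 0 ? String.valueOf(number)
                         : String.format(Locale.ROOT, "%.1f%s", value, suffixes[unit]);
    }
}
```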
I
- ID - Class in org.apache.hadoop.mapred
- A general identifier, which internally stores the id
as an integer.
- ID(int) -
Constructor for class org.apache.hadoop.mapred.ID
- constructs an ID object from the given int
- ID() -
Constructor for class org.apache.hadoop.mapred.ID
-
- id -
Variable in class org.apache.hadoop.mapred.ID
-
- id() -
Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
- Return the position in the collector this class occupies.
- id() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Return the position in the collector this class occupies.
- id -
Variable in class org.apache.hadoop.mapred.join.Parser.Node
-
- id() -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
- Return the position in the collector this class occupies.
- ident -
Variable in class org.apache.hadoop.mapred.join.Parser.Node
-
- IDENT_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- IdentityLocalAnalysis - Class in org.apache.hadoop.contrib.index.example
- Identity local analysis maps inputs directly into outputs.
- IdentityLocalAnalysis() -
Constructor for class org.apache.hadoop.contrib.index.example.IdentityLocalAnalysis
-
- IdentityMapper<K,V> - Class in org.apache.hadoop.mapred.lib
- Implements the identity function, mapping inputs directly to outputs.
- IdentityMapper() -
Constructor for class org.apache.hadoop.mapred.lib.IdentityMapper
-
- IdentityReducer<K,V> - Class in org.apache.hadoop.mapred.lib
- Performs no reduction, writing all input values directly to the output.
- IdentityReducer() -
Constructor for class org.apache.hadoop.mapred.lib.IdentityReducer
-
- IDistributionPolicy - Interface in org.apache.hadoop.contrib.index.mapred
- A distribution policy decides, given a document with a document id, which
one shard the request should be sent to if the request is an insert, and
which shard(s) the request should be sent to if the request is a delete.
- idWithinJob() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- ifExists(String, Configuration) -
Method in class org.apache.hadoop.fs.LocalDirAllocator
- We search through all the configured dirs for the file's existence
and return true when we find it.
- ifmt(double) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- IIndexUpdater - Interface in org.apache.hadoop.contrib.index.mapred
- A class implementing an index updater interface should create a Map/Reduce job
configuration and run the Map/Reduce job to analyze documents and update
Lucene instances in parallel.
- ILLEGAL_ARGS -
Static variable in class org.apache.hadoop.dfs.Balancer
-
- ILocalAnalysis<K extends WritableComparable,V extends Writable> - Interface in org.apache.hadoop.contrib.index.mapred
- Application specific local analysis.
- image -
Variable in class org.apache.hadoop.record.compiler.generated.Token
- The string image of the token.
- implies(FsAction) -
Method in enum org.apache.hadoop.fs.permission.FsAction
- Return true if this action implies that action.
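The "implies" relation on permission actions falls out of the octal encoding: each action is an rwx bit pattern, and A implies B exactly when every bit of B is also set in A. A sketch with a standalone enum that mirrors (rather than reuses) FsAction, where declaration order makes each constant's ordinal equal its octal value:

```java
// Sketch: permission actions as rwx bit patterns; declaration order makes
// each constant's ordinal equal its octal value (NONE=0 ... ALL=7).
public enum Action {
    NONE, EXECUTE, WRITE, WRITE_EXECUTE, READ, READ_EXECUTE, READ_WRITE, ALL;

    public boolean implies(Action that) {
        // A implies B when B's bits are a subset of A's bits.
        return (this.ordinal() & that.ordinal()) == that.ordinal();
    }
}
```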
- in -
Variable in class org.apache.hadoop.io.compress.CompressionInputStream
- The input stream holding the compressed data.
- inBuf -
Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- inc(int) -
Method in class org.apache.hadoop.metrics.util.MetricsIntValue
- Increment metrics by the given value
- inc() -
Method in class org.apache.hadoop.metrics.util.MetricsIntValue
- Inc metrics by one
- inc(long) -
Method in class org.apache.hadoop.metrics.util.MetricsLongValue
- Increment metrics by the given value
- inc() -
Method in class org.apache.hadoop.metrics.util.MetricsLongValue
- Inc metrics by one
- inc(int) -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
- Increment metrics by the given value
- inc() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
- Inc metrics by one
- inc(int, long) -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
- Increment the metrics for numOps operations
- inc(long) -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
- Increment the metrics for one operation
- incDfsUsed(long) -
Method in class org.apache.hadoop.fs.DU
- Increase how much disk space we use.
- Include() -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- INCLUDE_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- incr() -
Method in interface org.apache.hadoop.record.Index
-
- incrAllCounters(Counters) -
Method in class org.apache.hadoop.mapred.Counters
- Increments multiple counters by their amounts in another Counters
instance.
- incrCounter(Enum, long) -
Method in class org.apache.hadoop.mapred.Counters
- Increments the specified counter by the specified amount, creating it if
it didn't already exist.
- incrCounter(String, String, long) -
Method in class org.apache.hadoop.mapred.Counters
- Increments the specified counter by the specified amount, creating it if
it didn't already exist.
- incrCounter(Enum, long) -
Method in interface org.apache.hadoop.mapred.Reporter
- Increments the counter identified by the key, which can be of
any
Enum
type, by the specified amount.
- incrCounter(String, String, long) -
Method in interface org.apache.hadoop.mapred.Reporter
- Increments the counter identified by the group and counter name
by the specified amount.
- increment(long) -
Method in class org.apache.hadoop.mapred.Counters.Counter
- Increment this counter by the given value
- INCREMENT -
Static variable in class org.apache.hadoop.metrics.spi.MetricValue
-
- incrementBytesRead(long) -
Method in class org.apache.hadoop.fs.FileSystem.Statistics
- Increment the bytes read in the statistics
- incrementBytesWritten(long) -
Method in class org.apache.hadoop.fs.FileSystem.Statistics
- Increment the bytes written in the statistics
- incrMetric(String, int) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Increments the named metric by the specified value.
- incrMetric(String, long) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Increments the named metric by the specified value.
- incrMetric(String, short) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Increments the named metric by the specified value.
- incrMetric(String, byte) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Increments the named metric by the specified value.
- incrMetric(String, float) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Increments the named metric by the specified value.
- incrMetric(String, int) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Increments the named metric by the specified value.
- incrMetric(String, long) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Increments the named metric by the specified value.
- incrMetric(String, short) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Increments the named metric by the specified value.
- incrMetric(String, byte) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Increments the named metric by the specified value.
- incrMetric(String, float) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Increments the named metric by the specified value.
- INDEX -
Variable in enum org.apache.hadoop.fs.permission.FsAction
- Octal representation
- Index - Interface in org.apache.hadoop.record
- Interface that acts as an iterator for deserializing maps.
- INDEX_FILE_NAME -
Static variable in class org.apache.hadoop.io.MapFile
- The name of the index file.
- IndexedSortable - Interface in org.apache.hadoop.util
- Interface for collections capable of being sorted by
IndexedSorter
algorithms.
- IndexedSorter - Interface in org.apache.hadoop.util
- Interface for sort algorithms accepting
IndexedSortable
items.
- IndexUpdateCombiner - Class in org.apache.hadoop.contrib.index.mapred
- This combiner combines multiple intermediate forms into one intermediate
form.
- IndexUpdateCombiner() -
Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateCombiner
-
- IndexUpdateConfiguration - Class in org.apache.hadoop.contrib.index.mapred
- This class provides the getters and the setters to a number of parameters.
- IndexUpdateConfiguration(Configuration) -
Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Constructor
- IndexUpdateMapper<K extends WritableComparable,V extends Writable> - Class in org.apache.hadoop.contrib.index.mapred
- This class applies local analysis on a key-value pair and then convert the
result docid-operation pair to a shard-and-intermediate form pair.
- IndexUpdateMapper() -
Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
-
- IndexUpdateOutputFormat - Class in org.apache.hadoop.contrib.index.mapred
- The record writer of this output format simply puts a message in an output
path when a shard update is done.
- IndexUpdateOutputFormat() -
Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateOutputFormat
-
- IndexUpdatePartitioner - Class in org.apache.hadoop.contrib.index.mapred
- This partitioner class puts the values of the same key - in this case the
same shard - in the same partition.
- IndexUpdatePartitioner() -
Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdatePartitioner
-
- IndexUpdater - Class in org.apache.hadoop.contrib.index.mapred
- An implementation of an index updater interface which creates a Map/Reduce
job configuration and runs the Map/Reduce job to analyze documents and update
Lucene instances in parallel.
- IndexUpdater() -
Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdater
-
- IndexUpdateReducer - Class in org.apache.hadoop.contrib.index.mapred
- This reducer applies to a shard the changes for it.
- IndexUpdateReducer() -
Constructor for class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
-
- infoPort -
Variable in class org.apache.hadoop.dfs.DatanodeID
-
- init(Shard[]) -
Method in class org.apache.hadoop.contrib.index.example.HashingDistributionPolicy
-
- init(Shard[]) -
Method in class org.apache.hadoop.contrib.index.example.RoundRobinDistributionPolicy
-
- init(Shard[]) -
Method in interface org.apache.hadoop.contrib.index.mapred.IDistributionPolicy
- Initialization.
- init() -
Method in class org.apache.hadoop.fs.FsShell
-
- init(JobConf) -
Method in class org.apache.hadoop.mapred.JobClient
- Connect to the default
JobTracker
.
- init(JobConf, String) -
Static method in class org.apache.hadoop.mapred.JobHistory
- Initialize JobHistory files.
- init() -
Method in class org.apache.hadoop.mapred.JobShell
-
- init(String, ContextFactory) -
Method in class org.apache.hadoop.metrics.file.FileContext
-
- init(String, ContextFactory) -
Method in class org.apache.hadoop.metrics.ganglia.GangliaContext
-
- init(String, String) -
Static method in class org.apache.hadoop.metrics.jvm.JvmMetrics
-
- init(String, ContextFactory) -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Initializes the context.
- init(String, ContextFactory) -
Method in class org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
-
- init() -
Method in class org.apache.hadoop.streaming.StreamJob
-
- init() -
Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
-
- initHTML(ServletResponse, String) -
Static method in class org.apache.hadoop.util.ServletUtil
- Initial HTML header
- initialize(URI, Configuration) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
-
- initialize(URI, Configuration) -
Method in class org.apache.hadoop.dfs.HftpFileSystem
-
- initialize(URI, Configuration) -
Method in class org.apache.hadoop.fs.FileSystem
- Called after a new FileSystem instance is constructed.
- initialize(URI, Configuration) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Called after a new FileSystem instance is constructed.
- initialize(URI, Configuration) -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- initialize(URI, Configuration) -
Method in class org.apache.hadoop.fs.HarFileSystem
- Initialize a Har filesystem per har archive.
- initialize(URI, Configuration) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- initialize(URI, Configuration) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- initialize(URI, Configuration) -
Method in interface org.apache.hadoop.fs.s3.FileSystemStore
-
- initialize(URI) -
Method in class org.apache.hadoop.fs.s3.MigrationTool
-
- initialize(URI, Configuration) -
Method in class org.apache.hadoop.fs.s3.S3Credentials
-
- initialize(URI, Configuration) -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- initialize(URI, Configuration) -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- initialize(int) -
Method in class org.apache.hadoop.util.PriorityQueue
- Subclass constructors must call this.
- initializePieces() -
Method in class org.apache.hadoop.examples.dancing.OneSidedPentomino
- Define the one sided pieces.
- initializePieces() -
Method in class org.apache.hadoop.examples.dancing.Pentomino
- Fill in the pieces list.
- InMemoryFileSystem - Class in org.apache.hadoop.fs
- Deprecated.
- InMemoryFileSystem() -
Constructor for class org.apache.hadoop.fs.InMemoryFileSystem
- Deprecated.
- InMemoryFileSystem(URI, Configuration) -
Constructor for class org.apache.hadoop.fs.InMemoryFileSystem
- Deprecated.
- InnerJoinRecordReader<K extends WritableComparable> - Class in org.apache.hadoop.mapred.join
- Full inner join.
- INode - Class in org.apache.hadoop.fs.s3
- Holds file metadata including type (regular file, or directory),
and the list of blocks that are pointers to the data.
- INode(INode.FileType, Block[]) -
Constructor for class org.apache.hadoop.fs.s3.INode
-
- inodeExists(Path) -
Method in interface org.apache.hadoop.fs.s3.FileSystemStore
-
- Input() -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- input_stream -
Variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- InputBuffer - Class in org.apache.hadoop.io
- A reusable
InputStream
implementation that reads from an in-memory
buffer.
- InputBuffer() -
Constructor for class org.apache.hadoop.io.InputBuffer
- Constructs a new empty buffer.
- inputFile -
Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
- inputFile -
Variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- InputFormat<K,V> - Interface in org.apache.hadoop.mapred
InputFormat
describes the input-specification for a
Map-Reduce job.
- inputFormatSpec_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- inputSpecs_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- InputSplit - Interface in org.apache.hadoop.mapred
InputSplit
represents the data to be processed by an
individual Mapper
.
- inputStream -
Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- inputTag -
Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
- inReaderSpec_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- INSERT -
Static variable in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp.Op
-
- insert(Object) -
Method in class org.apache.hadoop.util.PriorityQueue
- Adds element to the PriorityQueue in log(size) time if either
the PriorityQueue is not full, or not lessThan(element, top()).
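The insert contract above (accept when not full, or when the new element is not less than the current top) is the standard bounded top-N pattern. A sketch using `java.util.PriorityQueue` as the min-heap (the `TopN` helper is illustrative; Hadoop's PriorityQueue is its own array-backed class):

```java
import java.util.PriorityQueue;

// Sketch of bounded insert keeping the N largest elements: a min-heap
// whose top is the smallest retained element; a new element displaces
// the top only when it is larger.
public class TopN {
    public static boolean insert(PriorityQueue<Integer> heap, int maxSize, int element) {
        if (heap.size() < maxSize) {
            heap.add(element);          // not full yet: always accept
            return true;
        }
        if (element > heap.peek()) {    // beats the smallest retained element
            heap.poll();
            heap.add(element);
            return true;
        }
        return false;                   // too small to keep
    }
}
```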
- INT -
Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
-
- INT_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- IntermediateForm - Class in org.apache.hadoop.contrib.index.mapred
- An intermediate form for one or more parsed Lucene documents and/or
delete terms.
- IntermediateForm() -
Constructor for class org.apache.hadoop.contrib.index.mapred.IntermediateForm
- Constructor
- IntTypeID -
Static variable in class org.apache.hadoop.record.meta.TypeID
-
- IntWritable - Class in org.apache.hadoop.io
- A WritableComparable for ints.
- IntWritable() -
Constructor for class org.apache.hadoop.io.IntWritable
-
- IntWritable(int) -
Constructor for class org.apache.hadoop.io.IntWritable
-
- IntWritable.Comparator - Class in org.apache.hadoop.io
- A Comparator optimized for IntWritable.
- IntWritable.Comparator() -
Constructor for class org.apache.hadoop.io.IntWritable.Comparator
-
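IntWritable.Comparator is a raw comparator: it orders serialized ints without deserializing them into objects. Assuming big-endian serialization (the byte order DataOutput.writeInt produces), the idea can be sketched as follows; the class and helper names here are hypothetical, not Hadoop's code:

```java
// Sketch: compare two ints directly from their big-endian serialized form,
// avoiding per-record object deserialization.
public class RawIntCompare {
    static int readInt(byte[] b, int off) {
        return ((b[off] & 0xff) << 24) | ((b[off + 1] & 0xff) << 16)
             | ((b[off + 2] & 0xff) << 8) | (b[off + 3] & 0xff);
    }

    // Negative, zero, or positive, like Comparator.compare.
    public static int compare(byte[] b1, int s1, byte[] b2, int s2) {
        return Integer.compare(readInt(b1, s1), readInt(b2, s2));
    }
}
```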
- invalidate(Block[]) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
- Invalidates the specified blocks
- InvalidFileTypeException - Exception in org.apache.hadoop.mapred
- Used when file type differs from the desired file type.
- InvalidFileTypeException() -
Constructor for exception org.apache.hadoop.mapred.InvalidFileTypeException
-
- InvalidFileTypeException(String) -
Constructor for exception org.apache.hadoop.mapred.InvalidFileTypeException
-
- InvalidInputException - Exception in org.apache.hadoop.mapred
- This class wraps a list of problems with the input, so that the user
can get a list of problems together instead of finding and fixing them one
by one.
- InvalidInputException(List<IOException>) -
Constructor for exception org.apache.hadoop.mapred.InvalidInputException
- Create the exception with the given list.
- InvalidJobConfException - Exception in org.apache.hadoop.mapred
- This exception is thrown when the jobconf is missing some mandatory attributes,
or the value of some attributes is invalid.
- InvalidJobConfException() -
Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
-
- InvalidJobConfException(String) -
Constructor for exception org.apache.hadoop.mapred.InvalidJobConfException
-
- InverseMapper<K,V> - Class in org.apache.hadoop.mapred.lib
- A
Mapper
that swaps keys and values. - InverseMapper() -
Constructor for class org.apache.hadoop.mapred.lib.InverseMapper
-
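The key/value swap performed by InverseMapper is simple enough to show in isolation. This standalone sketch swaps one pair; the Pair type is an illustrative helper, not a Hadoop class:

```java
// Standalone illustration of the InverseMapper idea: emit (value, key)
// for each input (key, value). Pair is a hypothetical helper, not Hadoop API.
public class Inverse {
    public static final class Pair<A, B> {
        public final A first;
        public final B second;
        public Pair(A first, B second) { this.first = first; this.second = second; }
    }

    public static <K, V> Pair<V, K> invert(Pair<K, V> in) {
        return new Pair<>(in.second, in.first);
    }
}
```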
- IO_EXCEPTION -
Static variable in class org.apache.hadoop.dfs.Balancer
-
- IOUtils - Class in org.apache.hadoop.io
- A utility class for I/O-related functionality.
- IOUtils() -
Constructor for class org.apache.hadoop.io.IOUtils
-
- IOUtils.NullOutputStream - Class in org.apache.hadoop.io
- /dev/null of OutputStreams.
- IOUtils.NullOutputStream() -
Constructor for class org.apache.hadoop.io.IOUtils.NullOutputStream
-
- ipcPort -
Variable in class org.apache.hadoop.dfs.DatanodeID
-
- isAbsolute() -
Method in class org.apache.hadoop.fs.Path
- True if the directory of this path is absolute.
- isAbsolute() -
Method in class org.apache.hadoop.metrics.spi.MetricValue
-
- isAlive -
Variable in class org.apache.hadoop.dfs.DatanodeDescriptor
-
- isBlockCompressed() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Returns true if records are block-compressed.
- isChecksumFile(Path) -
Static method in class org.apache.hadoop.fs.ChecksumFileSystem
- Return true iff file is a checksum file name.
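ChecksumFileSystem keeps a sidecar checksum file next to each data file; by convention the checksum file's name is the data file's name prefixed with "." and suffixed with ".crc". A minimal name test, assuming that convention:

```java
// True iff the name follows the ".<name>.crc" sidecar convention used by
// checksummed filesystems (the convention is assumed here, not read from Hadoop).
public class ChecksumNames {
    public static boolean isChecksumFileName(String name) {
        return name.startsWith(".") && name.endsWith(".crc");
    }
}
```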
- isComplete() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Check if the job is finished or not.
- isCompleted() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- isCompressed() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Returns true if values are compressed.
- isContextValid(String) -
Static method in class org.apache.hadoop.fs.LocalDirAllocator
- Method to check whether a context is valid
- isCygwin() -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- isDir() -
Method in class org.apache.hadoop.fs.FileStatus
- Is this a directory?
- isDirectory(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Deprecated. Use getFileStatus() instead
- isDirectory(Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
- Deprecated.
- isDirectory() -
Method in class org.apache.hadoop.fs.s3.INode
-
- isDisableHistory() -
Static method in class org.apache.hadoop.mapred.JobHistory
- Returns history disable status.
- isEmpty() -
Method in class org.apache.hadoop.io.MapWritable
-
- isEmpty() -
Method in class org.apache.hadoop.io.SortedMapWritable
-
- isFile(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- True iff the named path is a regular file.
- isFile(Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
- Deprecated.
- isFile() -
Method in class org.apache.hadoop.fs.s3.INode
-
- isFile(Path) -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- isFinalized() -
Method in class org.apache.hadoop.dfs.UpgradeStatusReport
- Is current upgrade finalized.
- isHealthy() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
- DFS is considered healthy if there are no missing blocks.
- isIdle() -
Method in class org.apache.hadoop.mapred.TaskTracker
- Is this task tracker idle?
- isIncluded(int) -
Method in class org.apache.hadoop.conf.Configuration.IntegerRanges
- Is the given value in the set of ranges
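Configuration.IntegerRanges parses range expressions such as "2-3,5,7-10" and answers membership queries. A self-contained sketch of the same idea (simplified relative to Hadoop's class: closed ranges only, no open-ended bounds):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of range-set membership: parse "2-3,5,7-10" into [lo, hi] pairs
// and test inclusion.
public class Ranges {
    private final List<int[]> ranges = new ArrayList<>();

    public Ranges(String spec) {
        for (String part : spec.split(",")) {
            String[] bounds = part.split("-");
            int lo = Integer.parseInt(bounds[0].trim());
            int hi = bounds.length > 1 ? Integer.parseInt(bounds[1].trim()) : lo;
            ranges.add(new int[] { lo, hi });
        }
    }

    public boolean isIncluded(int value) {
        for (int[] r : ranges) {
            if (value >= r[0] && value <= r[1]) return true;
        }
        return false;
    }
}
```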
- isIncrement() -
Method in class org.apache.hadoop.metrics.spi.MetricValue
-
- isInSafeMode() -
Method in class org.apache.hadoop.dfs.NameNode
- Is the cluster currently in safe mode?
- isLocalHadoop() -
Method in class org.apache.hadoop.streaming.StreamJob
-
- isLocalJobTracker(JobConf) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- isMap() -
Method in class org.apache.hadoop.mapred.TaskAttemptID
- Returns whether this TaskAttemptID is a map ID
- isMap() -
Method in class org.apache.hadoop.mapred.TaskID
- Returns whether this TaskID is a map ID
- isMapTask() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- isMonitoring() -
Method in interface org.apache.hadoop.metrics.MetricsContext
- Returns true if monitoring is currently in progress.
- isMonitoring() -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Returns true if monitoring is currently in progress.
- isNativeCodeLoaded() -
Static method in class org.apache.hadoop.util.NativeCodeLoader
- Check if native-hadoop code is loaded for this platform.
- isNativeLzoLoaded() -
Static method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
- Check if lzo compressors are loaded and initialized.
- isNativeLzoLoaded() -
Static method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
- Check if lzo decompressors are loaded and initialized.
- isNativeLzoLoaded(Configuration) -
Static method in class org.apache.hadoop.io.compress.LzoCodec
- Check if native-lzo library is loaded & initialized.
- isNativeZlibLoaded(Configuration) -
Static method in class org.apache.hadoop.io.compress.zlib.ZlibFactory
- Check if native-zlib code is loaded & initialized correctly and
can be loaded for this job.
- isNegativeVInt(byte) -
Static method in class org.apache.hadoop.io.WritableUtils
- Given the first byte of a vint/vlong, determine the sign
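Hadoop's variable-length integer encoding packs sign and length information into the first byte, so the sign can be read without decoding the rest. A sketch of the test, assuming the commonly documented thresholds (first-byte values in [-112, 127] stand for themselves; more negative bytes encode multi-byte lengths) - treat the exact constants as an assumption:

```java
// First-byte sign test for Hadoop-style vints. Assumed layout: values in
// [-112, 127] are literal; -113..-120 introduce positive multi-byte values;
// -121..-128 introduce negative multi-byte values.
public class VIntSign {
    public static boolean isNegativeVInt(byte value) {
        return value < -120 || (value >= -112 && value < 0);
    }
}
```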
- IsolationRunner - Class in org.apache.hadoop.mapred
-
- IsolationRunner() -
Constructor for class org.apache.hadoop.mapred.IsolationRunner
-
- isOnSameRack(Node, Node) -
Method in class org.apache.hadoop.net.NetworkTopology
- Check if two nodes are on the same rack
- isOpen() -
Method in class org.apache.hadoop.net.SocketInputStream
-
- isOpen() -
Method in class org.apache.hadoop.net.SocketOutputStream
-
- isReady() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- isSegmentsFile(String) -
Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
- Check if the file is a segments_N file
- isSegmentsGenFile(String) -
Static method in class org.apache.hadoop.contrib.index.lucene.LuceneUtil
- Check if the file is the segments.gen file
- isSplitable(FileSystem, Path) -
Method in class org.apache.hadoop.mapred.FileInputFormat
- Is the given filename splitable? Usually true, but if the file is
stream compressed, it will not be.
- isSplitable(FileSystem, Path) -
Method in class org.apache.hadoop.mapred.KeyValueTextInputFormat
-
- isSplitable(FileSystem, Path) -
Method in class org.apache.hadoop.mapred.TextInputFormat
-
- isSuccessful() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Check if the job completed successfully.
- isUnderConstruction() -
Method in class org.apache.hadoop.dfs.LocatedBlocks
- Return true if the file was under construction when
this LocatedBlocks was constructed, false otherwise.
- isValidBlock(Block) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
- Is the block valid?
- iterator() -
Method in class org.apache.hadoop.conf.Configuration
- Get an
Iterator
to go through the list of String
key-value pairs in the configuration.
- iterator() -
Method in class org.apache.hadoop.mapred.Counters.Group
-
- iterator() -
Method in class org.apache.hadoop.mapred.Counters
-
- iterator() -
Method in class org.apache.hadoop.mapred.join.TupleWritable
- Return an iterator over the elements in this tuple.
J
- jar_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- JarBuilder - Class in org.apache.hadoop.streaming
- This class is the main class for generating job.jar
for Hadoop Streaming jobs.
- JarBuilder() -
Constructor for class org.apache.hadoop.streaming.JarBuilder
-
- JavaSerialization - Class in org.apache.hadoop.io.serializer
-
An experimental
Serialization
for Java Serializable
classes. - JavaSerialization() -
Constructor for class org.apache.hadoop.io.serializer.JavaSerialization
-
- JavaSerializationComparator<T extends Serializable & Comparable<T>> - Class in org.apache.hadoop.io.serializer
-
A
RawComparator
that uses a JavaSerialization
Deserializer
to deserialize objects that are then compared via
their Comparable
interfaces. - JavaSerializationComparator() -
Constructor for class org.apache.hadoop.io.serializer.JavaSerializationComparator
-
- JBoolean - Class in org.apache.hadoop.record.compiler
-
- JBoolean() -
Constructor for class org.apache.hadoop.record.compiler.JBoolean
- Creates a new instance of JBoolean
- JBuffer - Class in org.apache.hadoop.record.compiler
- Code generator for "buffer" type.
- JBuffer() -
Constructor for class org.apache.hadoop.record.compiler.JBuffer
- Creates a new instance of JBuffer
- JByte - Class in org.apache.hadoop.record.compiler
- Code generator for "byte" type.
- JByte() -
Constructor for class org.apache.hadoop.record.compiler.JByte
-
- jc -
Variable in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
- jc_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- JDouble - Class in org.apache.hadoop.record.compiler
-
- JDouble() -
Constructor for class org.apache.hadoop.record.compiler.JDouble
- Creates a new instance of JDouble
- JField<T> - Class in org.apache.hadoop.record.compiler
- A thin wrapper around a record field.
- JField(String, T) -
Constructor for class org.apache.hadoop.record.compiler.JField
- Creates a new instance of JField
- JFile - Class in org.apache.hadoop.record.compiler
- Container for the Hadoop Record DDL.
- JFile(String, ArrayList<JFile>, ArrayList<JRecord>) -
Constructor for class org.apache.hadoop.record.compiler.JFile
- Creates a new instance of JFile
- JFloat - Class in org.apache.hadoop.record.compiler
-
- JFloat() -
Constructor for class org.apache.hadoop.record.compiler.JFloat
- Creates a new instance of JFloat
- JInt - Class in org.apache.hadoop.record.compiler
- Code generator for "int" type
- JInt() -
Constructor for class org.apache.hadoop.record.compiler.JInt
- Creates a new instance of JInt
- jj_nt -
Variable in class org.apache.hadoop.record.compiler.generated.Rcc
-
- jjFillToken() -
Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- jjnewLexState -
Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- jjstrLiteralImages -
Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- JLong - Class in org.apache.hadoop.record.compiler
- Code generator for "long" type
- JLong() -
Constructor for class org.apache.hadoop.record.compiler.JLong
- Creates a new instance of JLong
- JMap - Class in org.apache.hadoop.record.compiler
-
- JMap(JType, JType) -
Constructor for class org.apache.hadoop.record.compiler.JMap
- Creates a new instance of JMap
- job -
Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
- job -
Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- Job - Class in org.apache.hadoop.mapred.jobcontrol
- This class encapsulates a MapReduce job and its dependency.
- Job(JobConf, ArrayList<Job>) -
Constructor for class org.apache.hadoop.mapred.jobcontrol.Job
- Construct a job.
- Job(JobConf) -
Constructor for class org.apache.hadoop.mapred.jobcontrol.Job
- Construct a job.
- JobBase - Class in org.apache.hadoop.contrib.utils.join
- A common base implementing some statistics-collecting mechanisms that are
commonly used in a typical map/reduce job.
- JobBase() -
Constructor for class org.apache.hadoop.contrib.utils.join.JobBase
-
- JobClient - Class in org.apache.hadoop.mapred
JobClient
is the primary interface for the user-job to interact
with the JobTracker
.- JobClient() -
Constructor for class org.apache.hadoop.mapred.JobClient
- Create a job client.
- JobClient(JobConf) -
Constructor for class org.apache.hadoop.mapred.JobClient
- Build a job client with the given
JobConf
, and connect to the
default JobTracker
.
- JobClient(InetSocketAddress, Configuration) -
Constructor for class org.apache.hadoop.mapred.JobClient
- Build a job client, connect to the indicated job tracker.
- JobClient.TaskStatusFilter - Enum in org.apache.hadoop.mapred
-
- JobConf - Class in org.apache.hadoop.mapred
- A map/reduce job configuration.
- JobConf() -
Constructor for class org.apache.hadoop.mapred.JobConf
- Construct a map/reduce job configuration.
- JobConf(Class) -
Constructor for class org.apache.hadoop.mapred.JobConf
- Construct a map/reduce job configuration.
- JobConf(Configuration) -
Constructor for class org.apache.hadoop.mapred.JobConf
- Construct a map/reduce job configuration.
- JobConf(Configuration, Class) -
Constructor for class org.apache.hadoop.mapred.JobConf
- Construct a map/reduce job configuration.
- JobConf(String) -
Constructor for class org.apache.hadoop.mapred.JobConf
- Construct a map/reduce configuration.
- JobConf(Path) -
Constructor for class org.apache.hadoop.mapred.JobConf
- Construct a map/reduce configuration.
- jobConf_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- JobConfigurable - Interface in org.apache.hadoop.mapred
- Something that may be configured with a JobConf.
- JobControl - Class in org.apache.hadoop.mapred.jobcontrol
- This class encapsulates a set of MapReduce jobs and their dependencies.
- JobControl(String) -
Constructor for class org.apache.hadoop.mapred.jobcontrol.JobControl
- Construct a job control for a group of jobs.
- JobEndNotifier - Class in org.apache.hadoop.mapred
-
- JobEndNotifier() -
Constructor for class org.apache.hadoop.mapred.JobEndNotifier
-
- JobHistory - Class in org.apache.hadoop.mapred
- Provides methods for writing to and reading from job history.
- JobHistory() -
Constructor for class org.apache.hadoop.mapred.JobHistory
-
- JobHistory.HistoryCleaner - Class in org.apache.hadoop.mapred
- Delete history files older than one month.
- JobHistory.HistoryCleaner() -
Constructor for class org.apache.hadoop.mapred.JobHistory.HistoryCleaner
-
- JobHistory.JobInfo - Class in org.apache.hadoop.mapred
- Helper class for logging or reading back events related to job start, finish or failure.
- JobHistory.JobInfo(String) -
Constructor for class org.apache.hadoop.mapred.JobHistory.JobInfo
- Create new JobInfo
- JobHistory.Keys - Enum in org.apache.hadoop.mapred
- Job history files contain key="value" pairs, where keys belong to this enum.
- JobHistory.Listener - Interface in org.apache.hadoop.mapred
- Callback interface for reading back log events from JobHistory.
- JobHistory.MapAttempt - Class in org.apache.hadoop.mapred
- Helper class for logging or reading back events related to start, finish or failure of
a Map Attempt on a node.
- JobHistory.MapAttempt() -
Constructor for class org.apache.hadoop.mapred.JobHistory.MapAttempt
-
- JobHistory.RecordTypes - Enum in org.apache.hadoop.mapred
- Record types are identifiers for each line of log in history files.
- JobHistory.ReduceAttempt - Class in org.apache.hadoop.mapred
- Helper class for logging or reading back events related to start, finish or failure of
a Reduce Attempt on a node.
- JobHistory.ReduceAttempt() -
Constructor for class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
-
- JobHistory.Task - Class in org.apache.hadoop.mapred
- Helper class for logging or reading back events related to Task's start, finish or failure.
- JobHistory.Task() -
Constructor for class org.apache.hadoop.mapred.JobHistory.Task
-
- JobHistory.TaskAttempt - Class in org.apache.hadoop.mapred
- Base class for Map and Reduce TaskAttempts.
- JobHistory.TaskAttempt() -
Constructor for class org.apache.hadoop.mapred.JobHistory.TaskAttempt
-
- JobHistory.Values - Enum in org.apache.hadoop.mapred
- This enum contains some of the values commonly used by history log events.
- JobID - Class in org.apache.hadoop.mapred
- JobID represents the immutable and unique identifier for
the job.
- JobID(String, int) -
Constructor for class org.apache.hadoop.mapred.JobID
- Constructs a JobID object
- jobId_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- jobInfo() -
Method in class org.apache.hadoop.streaming.StreamJob
-
- JobPriority - Enum in org.apache.hadoop.mapred
- Used to describe the priority of the running job.
- JobProfile - Class in org.apache.hadoop.mapred
- A JobProfile is a MapReduce primitive.
- JobProfile() -
Constructor for class org.apache.hadoop.mapred.JobProfile
- Construct an empty
JobProfile
.
- JobProfile(String, JobID, String, String, String) -
Constructor for class org.apache.hadoop.mapred.JobProfile
- Construct a
JobProfile
from the userid, jobid,
job config-file, job-details URL and job name.
- JobProfile(String, String, String, String, String) -
Constructor for class org.apache.hadoop.mapred.JobProfile
- Deprecated. use JobProfile(String, JobID, String, String, String) instead
- JobShell - Class in org.apache.hadoop.mapred
- Provide command line parsing for job submission; an invocation looks like
hadoop jar -libjars -archives -files inputjar args - JobShell() -
Constructor for class org.apache.hadoop.mapred.JobShell
-
- JobShell(Configuration) -
Constructor for class org.apache.hadoop.mapred.JobShell
-
- JobStatus - Class in org.apache.hadoop.mapred
- Describes the current status of a job.
- JobStatus() -
Constructor for class org.apache.hadoop.mapred.JobStatus
-
- JobStatus(String, float, float, int) -
Constructor for class org.apache.hadoop.mapred.JobStatus
- Deprecated.
- JobStatus(JobID, float, float, int) -
Constructor for class org.apache.hadoop.mapred.JobStatus
- Create a job status object for a given jobid.
- jobsToComplete() -
Method in class org.apache.hadoop.mapred.JobClient
- Get the jobs that are not completed and not failed.
- jobsToComplete() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- JobTracker - Class in org.apache.hadoop.mapred
- JobTracker is the central location for submitting and
tracking MR jobs in a network environment.
- JobTracker.IllegalStateException - Exception in org.apache.hadoop.mapred
- A client tried to submit a job before the Job Tracker was ready.
- JobTracker.IllegalStateException(String) -
Constructor for exception org.apache.hadoop.mapred.JobTracker.IllegalStateException
-
- JobTracker.State - Enum in org.apache.hadoop.mapred
-
- JOBTRACKER_START_TIME -
Static variable in class org.apache.hadoop.mapred.JobHistory
-
- join() -
Method in class org.apache.hadoop.dfs.NameNode
- Wait for service to finish.
- Join - Class in org.apache.hadoop.examples
- This is the trivial map/reduce program that does absolutely nothing
other than use the framework to fragment and sort the input values.
- Join() -
Constructor for class org.apache.hadoop.examples.Join
-
- join() -
Method in class org.apache.hadoop.ipc.Server
- Wait for the server to be stopped.
- JoinRecordReader<K extends WritableComparable> - Class in org.apache.hadoop.mapred.join
- Base class for Composite joins returning Tuples of arbitrary Writables.
- JoinRecordReader(int, JobConf, int, Class<? extends WritableComparator>) -
Constructor for class org.apache.hadoop.mapred.join.JoinRecordReader
-
- JoinRecordReader.JoinDelegationIterator - Class in org.apache.hadoop.mapred.join
- Since the JoinCollector is effecting our operation, we need only
provide an iterator proxy wrapping its operation.
- JoinRecordReader.JoinDelegationIterator() -
Constructor for class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
-
- JRecord - Class in org.apache.hadoop.record.compiler
-
- JRecord(String, ArrayList<JField<JType>>) -
Constructor for class org.apache.hadoop.record.compiler.JRecord
- Creates a new instance of JRecord
- JspHelper - Class in org.apache.hadoop.dfs
-
- JspHelper() -
Constructor for class org.apache.hadoop.dfs.JspHelper
-
- JString - Class in org.apache.hadoop.record.compiler
-
- JString() -
Constructor for class org.apache.hadoop.record.compiler.JString
- Creates a new instance of JString
- JType - Class in org.apache.hadoop.record.compiler
- Abstract Base class for all types supported by Hadoop Record I/O.
- JType() -
Constructor for class org.apache.hadoop.record.compiler.JType
-
- JVector - Class in org.apache.hadoop.record.compiler
-
- JVector(JType) -
Constructor for class org.apache.hadoop.record.compiler.JVector
- Creates a new instance of JVector
- JvmMetrics - Class in org.apache.hadoop.metrics.jvm
- Singleton class which reports Java Virtual Machine metrics to the metrics API.
K
- key() -
Method in class org.apache.hadoop.io.ArrayFile.Reader
- Returns the key associated with the most recent call to
ArrayFile.Reader.seek(long)
, ArrayFile.Reader.next(Writable)
, or ArrayFile.Reader.get(long,Writable)
.
- key() -
Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
- Return the key this RecordReader would supply on a call to next(K,V)
- key(K) -
Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
- Clone the key at the head of this RecordReader into the object provided.
- key() -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Return the key for the current join or the value at the top of the
RecordReader heap.
- key(K) -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Clone the key at the top of this RR into the given object.
- key() -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
- Return the key at the head of this RR.
- key(K) -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
- Clone the key at the head of this RR into the object supplied.
- KeyFieldBasedPartitioner<K2,V2> - Class in org.apache.hadoop.mapred.lib
-
- KeyFieldBasedPartitioner() -
Constructor for class org.apache.hadoop.mapred.lib.KeyFieldBasedPartitioner
-
- keySerializer -
Variable in class org.apache.hadoop.io.SequenceFile.Writer
-
- keySet() -
Method in class org.apache.hadoop.io.MapWritable
-
- keySet() -
Method in class org.apache.hadoop.io.SortedMapWritable
-
- KeyValueLineRecordReader - Class in org.apache.hadoop.mapred
- This class treats a line in the input as a key/value pair separated by a
separator character.
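The split described above can be sketched in isolation: the key is the text before the first separator and the value is the rest; assuming (as a simplification) that when the separator is absent the whole line becomes the key and the value is empty:

```java
// Split a line at the first occurrence of sep; if sep is absent,
// the whole line becomes the key and the value is empty.
public class KeyValueSplit {
    public static String[] split(String line, char sep) {
        int pos = line.indexOf(sep);
        if (pos < 0) return new String[] { line, "" };
        return new String[] { line.substring(0, pos), line.substring(pos + 1) };
    }
}
```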
- KeyValueLineRecordReader(Configuration, FileSplit) -
Constructor for class org.apache.hadoop.mapred.KeyValueLineRecordReader
-
- KeyValueTextInputFormat - Class in org.apache.hadoop.mapred
- An
InputFormat
for plain text files. - KeyValueTextInputFormat() -
Constructor for class org.apache.hadoop.mapred.KeyValueTextInputFormat
-
- kids -
Variable in class org.apache.hadoop.mapred.join.CompositeRecordReader
-
- killJob(String) -
Method in class org.apache.hadoop.mapred.JobTracker
- Deprecated.
- killJob(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- killJob() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Kill the running job.
- killTask(String, boolean) -
Method in class org.apache.hadoop.mapred.JobTracker
- Deprecated.
- killTask(TaskAttemptID, boolean) -
Method in class org.apache.hadoop.mapred.JobTracker
- Mark a Task to be killed
- killTask(TaskAttemptID, boolean) -
Method in interface org.apache.hadoop.mapred.RunningJob
- Kill indicated task attempt.
- killTask(String, boolean) -
Method in interface org.apache.hadoop.mapred.RunningJob
- Deprecated. Applications should rather use
RunningJob.killTask(TaskAttemptID, boolean)
- kind -
Variable in class org.apache.hadoop.record.compiler.generated.Token
- An integer that describes the kind of this token.
- KosmosFileSystem - Class in org.apache.hadoop.fs.kfs
- A FileSystem backed by KFS.
- KosmosFileSystem() -
Constructor for class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
L
- largestNumOfValues -
Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- lastKey() -
Method in class org.apache.hadoop.io.SortedMapWritable
-
- lastUpdate -
Variable in class org.apache.hadoop.dfs.DatanodeInfo
-
- LAYOUT_VERSION -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- LBRACE_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- LEASE_HARDLIMIT_PERIOD -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- LEASE_SOFTLIMIT_PERIOD -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- LeaseExpiredException - Exception in org.apache.hadoop.dfs
- The lease that was being used to create this file has expired.
- LeaseExpiredException(String) -
Constructor for exception org.apache.hadoop.dfs.LeaseExpiredException
-
- lessThan(Object, Object) -
Method in class org.apache.hadoop.util.PriorityQueue
- Determines the ordering of objects in this priority queue.
- level -
Variable in class org.apache.hadoop.net.NodeBase
-
- LexicalError(boolean, int, int, int, String, char) -
Static method in error org.apache.hadoop.record.compiler.generated.TokenMgrError
- Returns a detailed message for the Error when it is thrown by the
token manager to indicate a lexical error.
- lexStateNames -
Static variable in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- limitDecimalTo2(double) -
Static method in class org.apache.hadoop.fs.FsShell
-
- line -
Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- LineDocInputFormat - Class in org.apache.hadoop.contrib.index.example
- An InputFormat for LineDoc for plain text files where each line is a doc.
- LineDocInputFormat() -
Constructor for class org.apache.hadoop.contrib.index.example.LineDocInputFormat
-
- LineDocLocalAnalysis - Class in org.apache.hadoop.contrib.index.example
- Convert LineDocTextAndOp to DocumentAndOp as required by ILocalAnalysis.
- LineDocLocalAnalysis() -
Constructor for class org.apache.hadoop.contrib.index.example.LineDocLocalAnalysis
-
- LineDocRecordReader - Class in org.apache.hadoop.contrib.index.example
- A simple RecordReader for LineDoc for plain text files where each line is a
doc.
- LineDocRecordReader(Configuration, FileSplit) -
Constructor for class org.apache.hadoop.contrib.index.example.LineDocRecordReader
- Constructor
- LineDocTextAndOp - Class in org.apache.hadoop.contrib.index.example
- This class represents an operation.
- LineDocTextAndOp() -
Constructor for class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
- Constructor
- LineRecordReader - Class in org.apache.hadoop.mapred
- Treats keys as offset in file and value as line.
- LineRecordReader(Configuration, FileSplit) -
Constructor for class org.apache.hadoop.mapred.LineRecordReader
-
- LineRecordReader(InputStream, long, long) -
Constructor for class org.apache.hadoop.mapred.LineRecordReader
- Deprecated.
- LineRecordReader(InputStream, long, long, int) -
Constructor for class org.apache.hadoop.mapred.LineRecordReader
-
- LineRecordReader(InputStream, long, long, Configuration) -
Constructor for class org.apache.hadoop.mapred.LineRecordReader
-
- LineRecordReader.LineReader - Class in org.apache.hadoop.mapred
- A class that provides a line reader from an input stream.
- LineRecordReader.LineReader(InputStream, Configuration) -
Constructor for class org.apache.hadoop.mapred.LineRecordReader.LineReader
- Create a line reader that reads from the given stream using the
io.file.buffer.size
specified in the given
Configuration
.
- LINK_URI -
Static variable in class org.apache.hadoop.streaming.StreamJob
-
- list() -
Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
-
- listDeepSubPaths(Path) -
Method in interface org.apache.hadoop.fs.s3.FileSystemStore
-
- listJobConfProperties() -
Method in class org.apache.hadoop.streaming.StreamJob
- Prints out the jobconf properties on stdout
when verbose is specified.
- listPaths(JobConf) -
Method in class org.apache.hadoop.mapred.FileInputFormat
- Deprecated. Use
FileInputFormat.listStatus(JobConf)
instead.
- listPaths(JobConf) -
Method in class org.apache.hadoop.mapred.SequenceFileInputFormat
-
- ListPathsServlet - Class in org.apache.hadoop.dfs
- Obtain meta-information about a filesystem.
- ListPathsServlet() -
Constructor for class org.apache.hadoop.dfs.ListPathsServlet
-
- listStatus(Path) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
-
- listStatus(Path) -
Method in class org.apache.hadoop.dfs.HftpFileSystem
-
- listStatus(Path) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
- List the statuses of the files/directories in the given path if the path is
a directory.
- listStatus(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- List the statuses of the files/directories in the given path if the path is
a directory.
- listStatus(Path, PathFilter) -
Method in class org.apache.hadoop.fs.FileSystem
- Filter files/directories in the given path using the user-supplied path
filter.
- listStatus(Path[]) -
Method in class org.apache.hadoop.fs.FileSystem
- Filter files/directories in the given list of paths using default
path filter.
- listStatus(Path[], PathFilter) -
Method in class org.apache.hadoop.fs.FileSystem
- Filter files/directories in the given list of paths using user-supplied
path filter.
- listStatus(Path) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- List files in a directory.
- listStatus(Path) -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- listStatus(Path) -
Method in class org.apache.hadoop.fs.HarFileSystem
- listStatus returns the children of a directory
after looking up the index files.
- listStatus(Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- listStatus(Path) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- listStatus(Path) -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- listStatus(Path) -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
If
f
is a file, this method will make a single call to S3.
- listStatus(JobConf) -
Method in class org.apache.hadoop.mapred.FileInputFormat
- List input directories.
- listSubPaths(Path) -
Method in interface org.apache.hadoop.fs.s3.FileSystemStore
-
- ljustify(String, int) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- load(Configuration, String, Class<K>) -
Static method in class org.apache.hadoop.io.DefaultStringifier
- Restores the object from the configuration.
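DefaultStringifier stores objects in the configuration as base64-encoded serialized bytes. The round trip can be sketched with plain Java serialization and java.util.Base64; note Hadoop actually goes through its pluggable Serialization framework, so this is a simplified stand-in:

```java
import java.io.*;
import java.util.Base64;

// Sketch of a base64 stringifier: serialize -> base64 string -> deserialize.
// Plain Java serialization stands in for Hadoop's Serialization API.
public class Base64Stringifier {
    public static String store(Serializable obj) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(obj);
            }
            return Base64.getEncoder().encodeToString(bytes.toByteArray());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static Object load(String encoded) {
        byte[] raw = Base64.getDecoder().decode(encoded);
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw))) {
            return in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```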
- loadArray(Configuration, String, Class<K>) -
Static method in class org.apache.hadoop.io.DefaultStringifier
- Restores the array of objects from the configuration.
- LocalDirAllocator - Class in org.apache.hadoop.fs
- An implementation of a round-robin scheme for disk allocation for creating
files.
- LocalDirAllocator(String) -
Constructor for class org.apache.hadoop.fs.LocalDirAllocator
- Create an allocator object
- LocalFileSystem - Class in org.apache.hadoop.fs
- Implement the FileSystem API for the checksumed local filesystem.
- LocalFileSystem() -
Constructor for class org.apache.hadoop.fs.LocalFileSystem
-
- LocalFileSystem(FileSystem) -
Constructor for class org.apache.hadoop.fs.LocalFileSystem
-
- localHadoop_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- localizeBin(String) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- localRunnerNotification(JobConf, JobStatus) -
Static method in class org.apache.hadoop.mapred.JobEndNotifier
-
- locatedBlockCount() -
Method in class org.apache.hadoop.dfs.LocatedBlocks
- Get number of located blocks.
- LocatedBlocks - Class in org.apache.hadoop.dfs
- Collection of blocks with their locations and the file length.
- location -
Variable in class org.apache.hadoop.dfs.DatanodeInfo
-
- location -
Variable in class org.apache.hadoop.net.NodeBase
-
- lock(Path, boolean) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
- Deprecated.
- lock(Path, boolean) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
- Deprecated.
- LOG -
Static variable in class org.apache.hadoop.contrib.index.main.UpdateIndex
-
- LOG -
Static variable in class org.apache.hadoop.contrib.index.mapred.IndexUpdater
-
- LOG -
Static variable in class org.apache.hadoop.contrib.utils.join.JobBase
-
- LOG -
Static variable in class org.apache.hadoop.dfs.DataBlockScanner
-
- LOG -
Static variable in class org.apache.hadoop.dfs.DataNode
-
- LOG -
Static variable in class org.apache.hadoop.dfs.NameNode
-
- LOG -
Static variable in class org.apache.hadoop.dfs.NamenodeFsck
-
- LOG -
Static variable in class org.apache.hadoop.dfs.SecondaryNameNode
-
- LOG -
Static variable in class org.apache.hadoop.fs.FileSystem
-
- LOG -
Static variable in class org.apache.hadoop.fs.FSInputChecker
-
- LOG -
Static variable in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- LOG -
Static variable in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- LOG -
Static variable in class org.apache.hadoop.io.compress.CompressionCodecFactory
-
- LOG -
Static variable in class org.apache.hadoop.ipc.Client
-
- LOG -
Static variable in class org.apache.hadoop.ipc.Server
-
- log(Log) -
Method in class org.apache.hadoop.mapred.Counters
- Logs the current counter values.
- LOG -
Static variable in class org.apache.hadoop.mapred.FileInputFormat
-
- LOG -
Static variable in class org.apache.hadoop.mapred.JobTracker
-
- LOG -
Static variable in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
-
- LOG -
Static variable in class org.apache.hadoop.mapred.TaskTracker
-
- LOG -
Static variable in class org.apache.hadoop.net.NetworkTopology
-
- LOG -
Static variable in class org.apache.hadoop.security.UserGroupInformation
-
- LOG -
Static variable in class org.apache.hadoop.streaming.PipeMapRed
-
- LOG -
Static variable in class org.apache.hadoop.streaming.StreamBaseRecordReader
-
- LOG -
Static variable in class org.apache.hadoop.streaming.StreamJob
-
- LOG -
Static variable in class org.apache.hadoop.util.Shell
-
- logFailed(String, long, int, int) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Deprecated.
- logFailed(JobID, long, int, int) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Logs job failed event.
- logFailed(String, String, String, long, String, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
- Deprecated.
- logFailed(TaskAttemptID, long, String, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
- Log task attempt failed event.
- logFailed(String, String, String, long, String, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
- Deprecated.
- logFailed(TaskAttemptID, long, String, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
- Log failed reduce task attempt.
- logFailed(String, String, String, long, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.Task
- Deprecated.
- logFailed(TaskID, String, long, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.Task
- Log task failed event.
- logFinished(String, long, int, int, int, int, Counters) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Deprecated.
- logFinished(JobID, long, int, int, int, int, Counters) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Log job finished.
- logFinished(String, String, String, long, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
- Deprecated.
- logFinished(TaskAttemptID, long, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
- Log finish time of map task attempt.
- logFinished(String, String, String, long, long, long, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
- Deprecated.
- logFinished(TaskAttemptID, long, long, long, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
- Log finished event of this task.
- logFinished(String, String, String, long, Counters) -
Static method in class org.apache.hadoop.mapred.JobHistory.Task
- Deprecated.
- logFinished(TaskID, String, long, Counters) -
Static method in class org.apache.hadoop.mapred.JobHistory.Task
- Log finish time of task.
- login() -
Static method in class org.apache.hadoop.security.UnixUserGroupInformation
- Get the current user's name and the names of all of the user's groups from Unix.
- login(Configuration) -
Static method in class org.apache.hadoop.security.UnixUserGroupInformation
- Equivalent to login(conf, false).
- login(Configuration, boolean) -
Static method in class org.apache.hadoop.security.UnixUserGroupInformation
- Get a user's name and group names from the given configuration; if they are
not defined in the configuration, get the current user's
information from Unix.
- login(Configuration) -
Static method in class org.apache.hadoop.security.UserGroupInformation
- Login and return a UserGroupInformation object.
- logKilled(String, String, String, long, String, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
- Deprecated.
- logKilled(TaskAttemptID, long, String, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
- Log task attempt killed event.
- logKilled(String, String, String, long, String, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
- Deprecated.
- logKilled(TaskAttemptID, long, String, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
- Log killed reduce task attempt.
- LogLevel - Class in org.apache.hadoop.log
- Change the log level at runtime.
- LogLevel() -
Constructor for class org.apache.hadoop.log.LogLevel
-
- LogLevel.Servlet - Class in org.apache.hadoop.log
- A servlet implementation.
- LogLevel.Servlet() -
Constructor for class org.apache.hadoop.log.LogLevel.Servlet
-
- logSpec() -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- logStarted(String, long, int, int) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Deprecated.
- logStarted(JobID, long, int, int) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Logs launch time of job.
- logStarted(String, String, String, long, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
- Deprecated.
- logStarted(TaskAttemptID, long, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.MapAttempt
- Log start time of this map task attempt.
- logStarted(String, String, String, long, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
- Deprecated.
- logStarted(TaskAttemptID, long, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.ReduceAttempt
- Log start time of Reduce task attempt.
- logStarted(String, String, String, long) -
Static method in class org.apache.hadoop.mapred.JobHistory.Task
- Deprecated.
- logStarted(TaskID, String, long, String) -
Static method in class org.apache.hadoop.mapred.JobHistory.Task
- Log start time of task (TIP).
- logSubmitted(String, JobConf, String, long) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Deprecated.
- logSubmitted(JobID, JobConf, String, long) -
Static method in class org.apache.hadoop.mapred.JobHistory.JobInfo
- Log job submitted event to history.
- logThreadInfo(Log, String, long) -
Static method in class org.apache.hadoop.util.ReflectionUtils
- Log the current thread stacks at INFO level.
- LONG -
Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
-
- LONG_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- LONG_VALUE_MAX -
Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- LONG_VALUE_MIN -
Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- LONG_VALUE_SUM -
Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- LongSumReducer<K> - Class in org.apache.hadoop.mapred.lib
- A
Reducer
that sums long values.
- LongSumReducer() -
Constructor for class org.apache.hadoop.mapred.lib.LongSumReducer
-
- LongTypeID -
Static variable in class org.apache.hadoop.record.meta.TypeID
-
- LongValueMax - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements a value aggregator that maintains the maximum of
a sequence of long values.
- LongValueMax() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
- The default constructor.
- LongValueMin - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements a value aggregator that maintains the minimum of
a sequence of long values.
- LongValueMin() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
- The default constructor.
- LongValueSum - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements a value aggregator that sums up
a sequence of long values.
- LongValueSum() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
- The default constructor.
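The contract of the LongValueSum aggregator is simple: values arrive one at a time and the aggregator maintains their running sum. A minimal sketch of that contract in plain Java (method names mirror the aggregate library's addNextValue/getReport style, but this is an illustration, not the Hadoop class):

```java
// Sketch of a long-sum value aggregator: accepts values one at a time,
// either as longs or as numeric strings, and keeps their running total.
public class LongSum {
    private long sum = 0;

    // Add a primitive long directly.
    public void addNextValue(long val) {
        sum += val;
    }

    // Add any object whose string form parses as a long.
    public void addNextValue(Object val) {
        sum += Long.parseLong(val.toString());
    }

    public long getSum() {
        return sum;
    }

    // Report the aggregated value as a string, as the aggregate
    // framework's reducers emit text output.
    public String getReport() {
        return String.valueOf(sum);
    }
}
```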
- LongWritable - Class in org.apache.hadoop.io
- A WritableComparable for longs.
- LongWritable() -
Constructor for class org.apache.hadoop.io.LongWritable
-
- LongWritable(long) -
Constructor for class org.apache.hadoop.io.LongWritable
-
- LongWritable.Comparator - Class in org.apache.hadoop.io
- A Comparator optimized for LongWritable.
- LongWritable.Comparator() -
Constructor for class org.apache.hadoop.io.LongWritable.Comparator
-
- LongWritable.DecreasingComparator - Class in org.apache.hadoop.io
- A decreasing Comparator optimized for LongWritable.
- LongWritable.DecreasingComparator() -
Constructor for class org.apache.hadoop.io.LongWritable.DecreasingComparator
-
- LT_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- LuceneUtil - Class in org.apache.hadoop.contrib.index.lucene
- This class copies some methods from Lucene's SegmentInfos since that class
is not public.
- LuceneUtil() -
Constructor for class org.apache.hadoop.contrib.index.lucene.LuceneUtil
-
- LzoCodec - Class in org.apache.hadoop.io.compress
- A
CompressionCodec
for a streaming
lzo compression/decompression pair.
- LzoCodec() -
Constructor for class org.apache.hadoop.io.compress.LzoCodec
-
- LzoCompressor - Class in org.apache.hadoop.io.compress.lzo
- A
Compressor
based on the lzo algorithm.
- LzoCompressor(LzoCompressor.CompressionStrategy, int) -
Constructor for class org.apache.hadoop.io.compress.lzo.LzoCompressor
- Creates a new compressor using the specified
LzoCompressor.CompressionStrategy
.
- LzoCompressor() -
Constructor for class org.apache.hadoop.io.compress.lzo.LzoCompressor
- Creates a new compressor with the default lzo1x_1 compression.
- LzoCompressor.CompressionStrategy - Enum in org.apache.hadoop.io.compress.lzo
- The compression algorithm for lzo library.
- LzoDecompressor - Class in org.apache.hadoop.io.compress.lzo
- A
Decompressor
based on the lzo algorithm.
- LzoDecompressor(LzoDecompressor.CompressionStrategy, int) -
Constructor for class org.apache.hadoop.io.compress.lzo.LzoDecompressor
- Creates a new lzo decompressor.
- LzoDecompressor() -
Constructor for class org.apache.hadoop.io.compress.lzo.LzoDecompressor
- Creates a new lzo decompressor.
- LzoDecompressor.CompressionStrategy - Enum in org.apache.hadoop.io.compress.lzo
-
M
- main(String[]) -
Static method in class org.apache.hadoop.conf.Configuration
- For debugging.
- main(String[]) -
Static method in class org.apache.hadoop.contrib.index.main.UpdateIndex
- The main() method.
- main(String[]) -
Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
-
- main(String[]) -
Static method in class org.apache.hadoop.dfs.Balancer
- Run a balancer.
- main(String[]) -
Static method in class org.apache.hadoop.dfs.DataNode
-
- main(String[]) -
Static method in class org.apache.hadoop.dfs.DFSAdmin
- main() has some simple utility methods.
- main(String[]) -
Static method in class org.apache.hadoop.dfs.DFSck
-
- main(String[]) -
Static method in class org.apache.hadoop.dfs.NameNode
-
- main(String[]) -
Static method in class org.apache.hadoop.dfs.SecondaryNameNode
- main() has some simple utility methods.
- main(String[]) -
Static method in class org.apache.hadoop.examples.AggregateWordCount
- The main driver for word count map/reduce program.
- main(String[]) -
Static method in class org.apache.hadoop.examples.AggregateWordHistogram
- The main driver for word count map/reduce program.
- main(String[]) -
Static method in class org.apache.hadoop.examples.dancing.DistributedPentomino
- Launch the solver on the 9x10 board and the one-sided pentominoes.
- main(String[]) -
Static method in class org.apache.hadoop.examples.dancing.OneSidedPentomino
- Solve the 3x30 puzzle.
- main(String[]) -
Static method in class org.apache.hadoop.examples.dancing.Pentomino
- Solve the 6x10 pentomino puzzle.
- main(String[]) -
Static method in class org.apache.hadoop.examples.dancing.Sudoku
- Solves a set of sudoku puzzles.
- main(String[]) -
Static method in class org.apache.hadoop.examples.ExampleDriver
-
- main(String[]) -
Static method in class org.apache.hadoop.examples.Grep
-
- main(String[]) -
Static method in class org.apache.hadoop.examples.Join
-
- main(String[]) -
Static method in class org.apache.hadoop.examples.MultiFileWordCount
-
- main(String[]) -
Static method in class org.apache.hadoop.examples.PiEstimator
-
- main(String[]) -
Static method in class org.apache.hadoop.examples.RandomTextWriter
-
- main(String[]) -
Static method in class org.apache.hadoop.examples.RandomWriter
-
- main(String[]) -
Static method in class org.apache.hadoop.examples.SleepJob
-
- main(String[]) -
Static method in class org.apache.hadoop.examples.Sort
-
- main(String[]) -
Static method in class org.apache.hadoop.examples.WordCount
-
- main(String[]) -
Static method in class org.apache.hadoop.fs.DF
-
- main(String[]) -
Static method in class org.apache.hadoop.fs.DU
-
- main(String[]) -
Static method in class org.apache.hadoop.fs.FsShell
- main() has some simple utility methods.
- main(String[]) -
Static method in class org.apache.hadoop.fs.s3.MigrationTool
-
- main(String[]) -
Static method in class org.apache.hadoop.fs.Trash
- Run an emptier.
- main(String[]) -
Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
- A little test program.
- main(String[]) -
Static method in class org.apache.hadoop.io.MapFile
-
- main(String[]) -
Static method in class org.apache.hadoop.log.LogLevel
- A command-line implementation.
- main(String[]) -
Static method in class org.apache.hadoop.mapred.IsolationRunner
- Run a single task.
- main(String[]) -
Static method in class org.apache.hadoop.mapred.JobClient
-
- main(String[]) -
Static method in class org.apache.hadoop.mapred.JobShell
-
- main(String[]) -
Static method in class org.apache.hadoop.mapred.JobTracker
- Start the JobTracker process.
- main(String[]) -
Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
- Create and run an Aggregate-based map/reduce job.
- main(String[]) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Submit a pipes job based on the command line arguments.
- main(String[]) -
Static method in class org.apache.hadoop.mapred.TaskTracker.Child
-
- main(String[]) -
Static method in class org.apache.hadoop.mapred.TaskTracker
- Start the TaskTracker, pointing toward the indicated JobTracker.
- main(String[]) -
Static method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- main(String[]) -
Static method in class org.apache.hadoop.streaming.HadoopStreaming
-
- main(String[]) -
Static method in class org.apache.hadoop.streaming.JarBuilder
- Test program.
- main(String[]) -
Static method in class org.apache.hadoop.streaming.PathFinder
-
- main(String[]) -
Static method in class org.apache.hadoop.util.PlatformName
-
- main(String[]) -
Static method in class org.apache.hadoop.util.PrintJarMainClass
-
- main(String[]) -
Static method in class org.apache.hadoop.util.RunJar
- Run a Hadoop job jar.
- main(String[]) -
Static method in class org.apache.hadoop.util.VersionInfo
-
- makeCompactString() -
Method in class org.apache.hadoop.mapred.Counters
- Convert a counters object into a single line that is easy to parse.
- makeJavaCommand(Class, String[]) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- makeLock(String) -
Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
-
- makeQualified(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Make sure that a path specifies a FileSystem.
- makeQualified(Path) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Make sure that a path specifies a FileSystem.
- makeQualified(Path) -
Method in class org.apache.hadoop.fs.HarFileSystem
-
- makeQualified(FileSystem) -
Method in class org.apache.hadoop.fs.Path
- Returns a qualified path object.
- makeRelative(URI, Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
-
- makeShellPath(String) -
Static method in class org.apache.hadoop.fs.FileUtil
- Convert an OS-native filename to a path that works for the shell.
- makeShellPath(File) -
Static method in class org.apache.hadoop.fs.FileUtil
- Convert an OS-native filename to a path that works for the shell.
- map(DocumentID, DocumentAndOp, OutputCollector<DocumentID, DocumentAndOp>, Reporter) -
Method in class org.apache.hadoop.contrib.index.example.IdentityLocalAnalysis
-
- map(DocumentID, LineDocTextAndOp, OutputCollector<DocumentID, DocumentAndOp>, Reporter) -
Method in class org.apache.hadoop.contrib.index.example.LineDocLocalAnalysis
-
- map(K, V, OutputCollector<Shard, IntermediateForm>, Reporter) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateMapper
- Map a key-value pair to a shard-and-intermediate form pair.
- map(Object, Object, OutputCollector, Reporter) -
Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
- map(Object, Object, OutputCollector, Reporter) -
Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- map(WritableComparable, Text, OutputCollector<Text, Text>, Reporter) -
Method in class org.apache.hadoop.examples.dancing.DistributedPentomino.PentMap
- Break the prefix string into moves (a sequence of integer row ids that
will be selected for each column in order).
- map(MultiFileWordCount.WordOffset, Text, OutputCollector<Text, IntWritable>, Reporter) -
Method in class org.apache.hadoop.examples.MultiFileWordCount.MapClass
-
- map(LongWritable, Writable, OutputCollector<LongWritable, LongWritable>, Reporter) -
Method in class org.apache.hadoop.examples.PiEstimator.PiMapper
- Map method.
- map(IntWritable, IntWritable, OutputCollector<IntWritable, IntWritable>, Reporter) -
Method in class org.apache.hadoop.examples.SleepJob
-
- map(LongWritable, Text, OutputCollector<Text, IntWritable>, Reporter) -
Method in class org.apache.hadoop.examples.WordCount.MapClass
-
- map(K1, V1, OutputCollector<Text, Text>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
- Do nothing.
- map(K1, V1, OutputCollector<Text, Text>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
- the map function.
- map(K1, V1, OutputCollector<Text, Text>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
- Do nothing.
- map(K, V, OutputCollector<Text, Text>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
- The identity function.
- map(K, V, OutputCollector<K, V>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.IdentityMapper
- The identity function.
- map(K, V, OutputCollector<V, K>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.InverseMapper
- The inverse function.
- map(K, Text, OutputCollector<Text, LongWritable>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.RegexMapper
-
- map(K, Text, OutputCollector<Text, LongWritable>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.TokenCountMapper
-
- map(K1, V1, OutputCollector<K2, V2>, Reporter) -
Method in interface org.apache.hadoop.mapred.Mapper
- Maps a single input key/value pair into an intermediate key/value pair.
- Map() -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- MAP -
Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
-
- map(Object, Object, OutputCollector, Reporter) -
Method in class org.apache.hadoop.streaming.PipeMapper
-
- MAP_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- mapCmd_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- mapDebugSpec_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- MapFile - Class in org.apache.hadoop.io
- A file-based map from keys to values.
- MapFile() -
Constructor for class org.apache.hadoop.io.MapFile
-
- MapFile.Reader - Class in org.apache.hadoop.io
- Provide access to an existing map.
- MapFile.Reader(FileSystem, String, Configuration) -
Constructor for class org.apache.hadoop.io.MapFile.Reader
- Construct a map reader for the named map.
- MapFile.Reader(FileSystem, String, WritableComparator, Configuration) -
Constructor for class org.apache.hadoop.io.MapFile.Reader
- Construct a map reader for the named map using the named comparator.
- MapFile.Reader(FileSystem, String, WritableComparator, Configuration, boolean) -
Constructor for class org.apache.hadoop.io.MapFile.Reader
- Hook to allow subclasses to defer opening streams until further
initialization is complete.
- MapFile.Writer - Class in org.apache.hadoop.io
- Writes a new map.
- MapFile.Writer(Configuration, FileSystem, String, Class, Class) -
Constructor for class org.apache.hadoop.io.MapFile.Writer
- Create the named map for keys of the named class.
- MapFile.Writer(Configuration, FileSystem, String, Class, Class, SequenceFile.CompressionType, Progressable) -
Constructor for class org.apache.hadoop.io.MapFile.Writer
- Create the named map for keys of the named class.
- MapFile.Writer(Configuration, FileSystem, String, Class, Class, SequenceFile.CompressionType, CompressionCodec, Progressable) -
Constructor for class org.apache.hadoop.io.MapFile.Writer
- Create the named map for keys of the named class.
- MapFile.Writer(Configuration, FileSystem, String, Class, Class, SequenceFile.CompressionType) -
Constructor for class org.apache.hadoop.io.MapFile.Writer
- Create the named map for keys of the named class.
- MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class) -
Constructor for class org.apache.hadoop.io.MapFile.Writer
- Create the named map using the named key comparator.
- MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType) -
Constructor for class org.apache.hadoop.io.MapFile.Writer
- Create the named map using the named key comparator.
- MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, Progressable) -
Constructor for class org.apache.hadoop.io.MapFile.Writer
- Create the named map using the named key comparator.
- MapFile.Writer(Configuration, FileSystem, String, WritableComparator, Class, SequenceFile.CompressionType, CompressionCodec, Progressable) -
Constructor for class org.apache.hadoop.io.MapFile.Writer
- Create the named map using the named key comparator.
- MapFileOutputFormat - Class in org.apache.hadoop.mapred
- An
OutputFormat
that writes MapFile
s.
- MapFileOutputFormat() -
Constructor for class org.apache.hadoop.mapred.MapFileOutputFormat
-
- mapOutputFieldSeparator -
Variable in class org.apache.hadoop.streaming.PipeMapRed
-
- mapOutputLost(String, String) -
Method in class org.apache.hadoop.mapred.TaskTracker
- Deprecated.
- mapOutputLost(TaskAttemptID, String) -
Method in class org.apache.hadoop.mapred.TaskTracker
- A completed map task's output has been lost.
- Mapper<K1,V1,K2,V2> - Interface in org.apache.hadoop.mapred
- Maps input key/value pairs to a set of intermediate key/value pairs.
- mapProgress() -
Method in class org.apache.hadoop.mapred.JobStatus
-
- mapProgress() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Get the progress of the job's map-tasks, as a float between 0.0
and 1.0.
- mapRedFinished() -
Method in class org.apache.hadoop.streaming.PipeMapRed
-
- MapReduceBase - Class in org.apache.hadoop.mapred
- Base class for
Mapper
and Reducer
implementations.
- MapReduceBase() -
Constructor for class org.apache.hadoop.mapred.MapReduceBase
-
- MapRunnable<K1,V1,K2,V2> - Interface in org.apache.hadoop.mapred
- Expert: Generic interface for
Mapper
s.
- MapRunner&lt;K1,V1,K2,V2&gt; - Class in org.apache.hadoop.mapred
- Default
MapRunnable
implementation.
- MapRunner() -
Constructor for class org.apache.hadoop.mapred.MapRunner
-
- MapTypeID - Class in org.apache.hadoop.record.meta
- Represents the typeID for a Map.
- MapTypeID(TypeID, TypeID) -
Constructor for class org.apache.hadoop.record.meta.MapTypeID
-
- MapWritable - Class in org.apache.hadoop.io
- A Writable Map.
- MapWritable() -
Constructor for class org.apache.hadoop.io.MapWritable
- Default constructor.
- MapWritable(MapWritable) -
Constructor for class org.apache.hadoop.io.MapWritable
- Copy constructor.
- mark(int) -
Method in class org.apache.hadoop.fs.FSInputChecker
-
- mark(int) -
Method in class org.apache.hadoop.fs.ftp.FTPInputStream
-
- markSupported() -
Method in class org.apache.hadoop.fs.FSInputChecker
-
- markSupported() -
Method in class org.apache.hadoop.fs.ftp.FTPInputStream
-
- matches(String) -
Static method in class org.apache.hadoop.fs.shell.Count
- Check if a command is the count command.
- MAX_PATH_DEPTH -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- MAX_PATH_LENGTH -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- maxNextCharInd -
Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- mayExit_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- MBeanUtil - Class in org.apache.hadoop.metrics.util
- This util class provides a method to register an MBean using
our standard naming convention, as described in the doc for
MBeanUtil.registerMBean(String, String, Object).
- MBeanUtil() -
Constructor for class org.apache.hadoop.metrics.util.MBeanUtil
-
- MD5_LEN -
Static variable in class org.apache.hadoop.io.MD5Hash
-
- MD5_LEN -
Static variable in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
-
- MD5Hash - Class in org.apache.hadoop.io
- A Writable for MD5 hash values.
- MD5Hash() -
Constructor for class org.apache.hadoop.io.MD5Hash
- Constructs an MD5Hash.
- MD5Hash(String) -
Constructor for class org.apache.hadoop.io.MD5Hash
- Constructs an MD5Hash from a hex string.
- MD5Hash(byte[]) -
Constructor for class org.apache.hadoop.io.MD5Hash
- Constructs an MD5Hash with a specified value.
- MD5Hash.Comparator - Class in org.apache.hadoop.io
- A WritableComparator optimized for MD5Hash keys.
- MD5Hash.Comparator() -
Constructor for class org.apache.hadoop.io.MD5Hash.Comparator
-
- merge(List<SequenceFile.Sorter.SegmentDescriptor>, Path) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Merges the list of segments of type
SegmentDescriptor.
- merge(Path[], boolean, Path) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Merges the contents of files passed in Path[] using a max factor value
that is already set.
- merge(Path[], boolean, int, Path) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Merges the contents of files passed in Path[].
- merge(Path[], Path, boolean) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Merges the contents of files passed in Path[].
- merge(Path[], Path) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Merge the provided files.
- merge(List, List, String) -
Method in class org.apache.hadoop.streaming.JarBuilder
-
- MergeSort - Class in org.apache.hadoop.util
- An implementation of the core algorithm of MergeSort.
- MergeSort(Comparator<IntWritable>) -
Constructor for class org.apache.hadoop.util.MergeSort
-
- mergeSort(int[], int[], int, int) -
Method in class org.apache.hadoop.util.MergeSort
-
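The core algorithm that MergeSort implements can be shown standalone: recursively sort each half, then merge the two sorted halves. The sketch below sorts an int[] directly, whereas Hadoop's class works through a Comparator&lt;IntWritable&gt; over index arrays; the algorithm is the same, but the class and method names here are illustrative:

```java
// Classic two-buffer merge sort. sort() clones the input once; the
// recursion then alternates the roles of the two arrays so each merge
// reads from one buffer and writes into the other.
public class SimpleMergeSort {

    public static void sort(int[] a) {
        if (a.length > 1) {
            mergeSort(a, a.clone(), 0, a.length);
        }
    }

    // Sort dest[lo..hi) using src as the source buffer.
    private static void mergeSort(int[] dest, int[] src, int lo, int hi) {
        if (hi - lo <= 1) {
            return; // a single element is already sorted
        }
        int mid = (lo + hi) >>> 1;
        mergeSort(src, dest, lo, mid);  // sort each half with roles swapped
        mergeSort(src, dest, mid, hi);
        // Merge the two sorted halves of src into dest.
        for (int i = lo, p = lo, q = mid; i < hi; i++) {
            if (q >= hi || (p < mid && src[p] <= src[q])) {
                dest[i] = src[p++];
            } else {
                dest[i] = src[q++];
            }
        }
    }
}
```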
- metaFileExists(Block) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
- Does the meta file exist for this block?
- metaSave(String) -
Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
-
- metaSave(String[], int) -
Method in class org.apache.hadoop.dfs.DFSAdmin
- Dumps DFS data structures into the specified file.
- metaSave(String) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
-
- metaSave(String) -
Method in class org.apache.hadoop.dfs.NameNode
- Dumps namenode state into the specified file.
- MetricsContext - Interface in org.apache.hadoop.metrics
- The main interface to the metrics package.
- MetricsException - Exception in org.apache.hadoop.metrics
- General-purpose, unchecked metrics exception.
- MetricsException() -
Constructor for exception org.apache.hadoop.metrics.MetricsException
- Creates a new instance of MetricsException.
- MetricsException(String) -
Constructor for exception org.apache.hadoop.metrics.MetricsException
- Creates a new instance of MetricsException.
- MetricsIntValue - Class in org.apache.hadoop.metrics.util
- The MetricsIntValue class is for a metric that is not time varied
but changes only when it is set.
- MetricsIntValue(String) -
Constructor for class org.apache.hadoop.metrics.util.MetricsIntValue
- Constructor - creates a new metric.
- metricsList -
Variable in class org.apache.hadoop.ipc.metrics.RpcMetrics
-
- MetricsLongValue - Class in org.apache.hadoop.metrics.util
- The MetricsLongValue class is for a metric that is not time varied
but changes only when it is set.
- MetricsLongValue(String) -
Constructor for class org.apache.hadoop.metrics.util.MetricsLongValue
- Constructor - creates a new metric.
- MetricsRecord - Interface in org.apache.hadoop.metrics
- A named and optionally tagged set of records to be sent to the metrics
system.
- MetricsRecordImpl - Class in org.apache.hadoop.metrics.spi
- An implementation of MetricsRecord.
- MetricsRecordImpl(String, AbstractMetricsContext) -
Constructor for class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Creates a new instance of MetricsRecordImpl.
- MetricsTimeVaryingInt - Class in org.apache.hadoop.metrics.util
- The MetricsTimeVaryingInt class is for a metric that naturally
varies over time (e.g.
- MetricsTimeVaryingInt(String) -
Constructor for class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
- Constructor - creates a new metric.
- MetricsTimeVaryingRate - Class in org.apache.hadoop.metrics.util
- The MetricsTimeVaryingRate class is for a rate based metric that
naturally varies over time (e.g.
- MetricsTimeVaryingRate(String) -
Constructor for class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
- Constructor - creates a new metric.
- MetricsUtil - Class in org.apache.hadoop.metrics
- Utility class to simplify creation and reporting of hadoop metrics.
- MetricValue - Class in org.apache.hadoop.metrics.spi
- A Number that is either an absolute or an incremental amount.
- MetricValue(Number, boolean) -
Constructor for class org.apache.hadoop.metrics.spi.MetricValue
- Creates a new instance of MetricValue
- midKey() -
Method in class org.apache.hadoop.io.MapFile.Reader
- Get the key at approximately the middle of the file.
- MigrationTool - Class in org.apache.hadoop.fs.s3
-
This class is a tool for migrating data from an older to a newer version
of an S3 filesystem.
- MigrationTool() -
Constructor for class org.apache.hadoop.fs.s3.MigrationTool
-
- MIN_BLOCKS_FOR_WRITE -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- minRecWrittenToEnableSkip_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- mkdirs(Path, FsPermission) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
-
- mkdirs(Path, FsPermission) -
Method in class org.apache.hadoop.dfs.HftpFileSystem
-
- mkdirs(String, FsPermission) -
Method in class org.apache.hadoop.dfs.NameNode
- Create a directory (or hierarchy of directories) with the given
name and permission.
- mkdirs(Path) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- mkdirs(FileSystem, Path, FsPermission) -
Static method in class org.apache.hadoop.fs.FileSystem
Create a directory with the provided permission.
The permission of the directory is set to the provided permission, as in
setPermission, not permission&~umask
- mkdirs(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Call
FileSystem.mkdirs(Path, FsPermission)
with default permission.
- mkdirs(Path, FsPermission) -
Method in class org.apache.hadoop.fs.FileSystem
- Make the given file and all non-existent parents into
directories.
- mkdirs(Path, FsPermission) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Make the given file and all non-existent parents into
directories.
- mkdirs(Path, FsPermission) -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- mkdirs(Path, FsPermission) -
Method in class org.apache.hadoop.fs.HarFileSystem
- not implemented.
- mkdirs(Path, FsPermission) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- mkdirs(Path) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
- Creates the specified directory hierarchy.
- mkdirs(Path, FsPermission) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
- Make the given file and all non-existent parents into
directories.
- mkdirs(Path, FsPermission) -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- mkdirs(Path, FsPermission) -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- mkdirsWithExistsCheck(File) -
Static method in class org.apache.hadoop.util.DiskChecker
The semantics of the mkdirsWithExistsCheck method differ from those of the
mkdirs method in Sun's java.io.File class in the following way:
While creating the non-existent parent directories, this method checks for
the existence of those directories if the mkdir fails at any point (since
that directory might have just been created by some other process).
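The entry above describes the key difference: a failed mkdir is tolerated whenever the directory turns out to exist, because another process may have created it concurrently. A minimal sketch of that semantics in plain Java (illustrative only, not the Hadoop source; the class and method shape here are assumptions):

```java
import java.io.File;
import java.io.IOException;

public class MkdirsWithCheck {
    // Sketch of the semantics described above, not the Hadoop source:
    // a failed mkdir is tolerated if the directory exists afterwards,
    // since another process may have created it in the meantime.
    public static boolean mkdirsWithExistsCheck(File dir) {
        if (dir.mkdir() || dir.exists()) {
            return true;
        }
        File canon;
        try {
            canon = dir.getCanonicalFile();
        } catch (IOException e) {
            return false;
        }
        // Recursively ensure the parent exists, then retry this level.
        File parent = canon.getParentFile();
        return parent != null
            && mkdirsWithExistsCheck(parent)
            && (canon.mkdir() || canon.exists());
    }

    public static void main(String[] args) {
        File nested = new File(System.getProperty("java.io.tmpdir"),
                "mkchk-" + System.nanoTime() + "/a/b/c");
        if (!mkdirsWithExistsCheck(nested)) throw new AssertionError("create failed");
        if (!nested.isDirectory()) throw new AssertionError("not a directory");
        // A second call succeeds because the hierarchy already exists.
        if (!mkdirsWithExistsCheck(nested)) throw new AssertionError("re-check failed");
    }
}
```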
- modifFmt -
Static variable in class org.apache.hadoop.fs.FsShell
-
- Module() -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- MODULE_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- ModuleName() -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- moveFromLocalFile(Path[], Path) -
Method in class org.apache.hadoop.fs.FileSystem
- The src files are on the local disk.
- moveFromLocalFile(Path, Path) -
Method in class org.apache.hadoop.fs.FileSystem
- The src file is on the local disk.
- moveFromLocalFile(Path, Path) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- moveToLocalFile(Path, Path) -
Method in class org.apache.hadoop.fs.FileSystem
- The src file is under FS, and the dst is on the local disk.
- moveToTrash(Path) -
Method in class org.apache.hadoop.fs.Trash
- Move a file or directory to the current trash directory.
- msg(String) -
Method in class org.apache.hadoop.streaming.StreamJob
-
- MultiFileInputFormat<K,V> - Class in org.apache.hadoop.mapred
- An abstract
InputFormat
that returns MultiFileSplit
's
in MultiFileInputFormat.getSplits(JobConf, int)
method. - MultiFileInputFormat() -
Constructor for class org.apache.hadoop.mapred.MultiFileInputFormat
-
- MultiFileSplit - Class in org.apache.hadoop.mapred
- A sub-collection of input files.
- MultiFileSplit(JobConf, Path[], long[]) -
Constructor for class org.apache.hadoop.mapred.MultiFileSplit
-
- MultiFileWordCount - Class in org.apache.hadoop.examples
- MultiFileWordCount is an example to demonstrate the usage of
MultiFileInputFormat.
- MultiFileWordCount() -
Constructor for class org.apache.hadoop.examples.MultiFileWordCount
-
- MultiFileWordCount.MapClass - Class in org.apache.hadoop.examples
- This Mapper is similar to the one in
WordCount.MapClass
. - MultiFileWordCount.MapClass() -
Constructor for class org.apache.hadoop.examples.MultiFileWordCount.MapClass
-
- MultiFileWordCount.MultiFileLineRecordReader - Class in org.apache.hadoop.examples
- A RecordReader is responsible for extracting records from the InputSplit.
- MultiFileWordCount.MultiFileLineRecordReader(Configuration, MultiFileSplit) -
Constructor for class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
-
- MultiFileWordCount.MyInputFormat - Class in org.apache.hadoop.examples
- To use
MultiFileInputFormat
, one should extend it, to return a
(custom) RecordReader
. - MultiFileWordCount.MyInputFormat() -
Constructor for class org.apache.hadoop.examples.MultiFileWordCount.MyInputFormat
-
- MultiFileWordCount.WordOffset - Class in org.apache.hadoop.examples
- This record keeps <filename,offset> pairs.
- MultiFileWordCount.WordOffset() -
Constructor for class org.apache.hadoop.examples.MultiFileWordCount.WordOffset
-
- MultiFilterRecordReader<K extends WritableComparable,V extends Writable> - Class in org.apache.hadoop.mapred.join
- Base class for Composite join returning values derived from multiple
sources, but generally not tuples.
- MultiFilterRecordReader(int, JobConf, int, Class<? extends WritableComparator>) -
Constructor for class org.apache.hadoop.mapred.join.MultiFilterRecordReader
-
- MultiFilterRecordReader.MultiFilterDelegationIterator - Class in org.apache.hadoop.mapred.join
- Proxy the JoinCollector, but include callback to emit.
- MultiFilterRecordReader.MultiFilterDelegationIterator() -
Constructor for class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
-
- MultipleIOException - Exception in org.apache.hadoop.io
- Encapsulate a list of
IOException
into an IOException
- MultipleOutputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
- This abstract class extends the OutputFormatBase, allowing the output
data to be written to different output files.
- MultipleOutputFormat() -
Constructor for class org.apache.hadoop.mapred.lib.MultipleOutputFormat
-
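The idea behind MultipleOutputFormat, as described above, is that each record is routed to an output file whose name is derived from the record. A toy sketch of that routing (the file-name rule and record shape here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class MultiFileRouteSketch {
    // Toy illustration of the idea above: route each (key, value) record to
    // an output "file" whose name is derived from the key. The naming rule
    // is an assumption for the example, not Hadoop's.
    static Map<String, List<String>> route(List<String[]> records) {
        Map<String, List<String>> files = new TreeMap<>();
        for (String[] kv : records) {
            String fileName = "part-" + kv[0];   // e.g. one file per key
            files.computeIfAbsent(fileName, f -> new ArrayList<>()).add(kv[1]);
        }
        return files;
    }

    public static void main(String[] args) {
        List<String[]> recs = Arrays.asList(
            new String[]{"us", "a"}, new String[]{"uk", "b"}, new String[]{"us", "c"});
        Map<String, List<String>> out = route(recs);
        if (out.size() != 2) throw new AssertionError();
        if (!out.get("part-us").equals(Arrays.asList("a", "c"))) throw new AssertionError();
    }
}
```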
- MultipleSequenceFileOutputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
- This class extends the MultipleOutputFormat, allowing the output data to be
written to different output files in sequence file output format.
- MultipleSequenceFileOutputFormat() -
Constructor for class org.apache.hadoop.mapred.lib.MultipleSequenceFileOutputFormat
-
- MultipleTextOutputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
- This class extends the MultipleOutputFormat, allowing the output data to be
written to different output files in Text output format.
- MultipleTextOutputFormat() -
Constructor for class org.apache.hadoop.mapred.lib.MultipleTextOutputFormat
-
- MultithreadedMapRunner<K1,V1,K2,V2> - Class in org.apache.hadoop.mapred.lib
- Multithreaded implementation for org.apache.hadoop.mapred.MapRunnable.
- MultithreadedMapRunner() -
Constructor for class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
-
N
- name -
Variable in class org.apache.hadoop.dfs.DatanodeID
-
- NAME -
Static variable in class org.apache.hadoop.fs.shell.Count
-
- name -
Variable in class org.apache.hadoop.net.NodeBase
-
- NameNode - Class in org.apache.hadoop.dfs
- NameNode serves as both directory namespace manager and
"inode table" for the Hadoop DFS.
- NameNode(Configuration) -
Constructor for class org.apache.hadoop.dfs.NameNode
- Start NameNode.
- NameNode(String, Configuration) -
Constructor for class org.apache.hadoop.dfs.NameNode
- Create a NameNode at the specified location and start it.
- NamenodeFsck - Class in org.apache.hadoop.dfs
- This class provides rudimentary checking of DFS volumes for errors and
sub-optimal conditions.
- NamenodeFsck(Configuration, NameNode, Map<String, String[]>, HttpServletResponse) -
Constructor for class org.apache.hadoop.dfs.NamenodeFsck
- Filesystem checker.
- NamenodeFsck.FsckResult - Class in org.apache.hadoop.dfs
- FsckResult of checking, plus overall DFS statistics.
- NamenodeFsck.FsckResult() -
Constructor for class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- NameNodeMetrics - Class in org.apache.hadoop.dfs
- This class is for maintaining the various NameNode statistics
and publishing them through the metrics interfaces.
- NameNodeStatistics - Class in org.apache.hadoop.dfs.namenode.metrics
- This is the implementation of the Name Node JMX MBean
- NameNodeStatistics(NameNodeMetrics) -
Constructor for class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
- This constructs and registers the NameNodeStatisticsMBean
- NameNodeStatisticsMBean - Interface in org.apache.hadoop.dfs.namenode.metrics
- This is the JMX management interface for getting runtime statistics of
the name node.
- NativeCodeLoader - Class in org.apache.hadoop.util
- A helper to load the native hadoop code i.e.
- NativeCodeLoader() -
Constructor for class org.apache.hadoop.util.NativeCodeLoader
-
- NativeS3FileSystem - Class in org.apache.hadoop.fs.s3native
-
A
FileSystem
for reading and writing files stored on
Amazon S3. - NativeS3FileSystem() -
Constructor for class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- NativeS3FileSystem(NativeFileSystemStore) -
Constructor for class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- needChecksum() -
Method in class org.apache.hadoop.fs.FSInputChecker
- Return true if there is a need for checksum verification
- needsDictionary() -
Method in interface org.apache.hadoop.io.compress.Decompressor
- Returns
true
if a preset dictionary is needed for decompression.
- needsDictionary() -
Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
-
- needsDictionary() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
-
- needsInput() -
Method in interface org.apache.hadoop.io.compress.Compressor
- Returns true if the input data buffer is empty and
setInput() should be called to provide more input.
- needsInput() -
Method in interface org.apache.hadoop.io.compress.Decompressor
- Returns true if the input data buffer is empty and
setInput() should be called to provide more input.
- needsInput() -
Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
- Returns true if the input data buffer is empty and
setInput() should be called to provide more input.
- needsInput() -
Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
-
- needsInput() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
-
- needsInput() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
-
- NetUtils - Class in org.apache.hadoop.net
-
- NetUtils() -
Constructor for class org.apache.hadoop.net.NetUtils
-
- NetworkTopology - Class in org.apache.hadoop.net
- The class represents a cluster of computers with a tree-structured,
hierarchical network topology.
- NetworkTopology() -
Constructor for class org.apache.hadoop.net.NetworkTopology
-
- newDataChecksum(int, int) -
Static method in class org.apache.hadoop.dfs.DataChecksum
-
- newDataChecksum(byte[], int) -
Static method in class org.apache.hadoop.dfs.DataChecksum
- Creates a DataChecksum from HEADER_LEN bytes from arr[offset].
- newDataChecksum(DataInputStream) -
Static method in class org.apache.hadoop.dfs.DataChecksum
- This constructs a DataChecksum by reading HEADER_LEN bytes from the
input stream in
- newInstance(Class, Configuration) -
Static method in class org.apache.hadoop.io.WritableFactories
- Create a new instance of a class with a defined factory.
- newInstance(Class) -
Static method in class org.apache.hadoop.io.WritableFactories
- Create a new instance of a class with a defined factory.
- newInstance() -
Method in interface org.apache.hadoop.io.WritableFactory
- Return a new instance.
- newInstance(Class<?>, Configuration) -
Static method in class org.apache.hadoop.util.ReflectionUtils
- Create an object for the given class and initialize it from conf
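The ReflectionUtils.newInstance entry above describes reflective construction followed by configuration injection. A stripped-down sketch of the reflective part (the configuration-injection step is omitted, so this is only an illustration of the pattern, not the real helper):

```java
public class NewInstanceSketch {
    // Minimal reflective factory in the spirit of the entry above.
    // The real helper also initializes the object from the Configuration;
    // that part is left out here, so this is a sketch, not the real API.
    static <T> T newInstance(Class<T> cls) {
        try {
            // Invoke the no-argument constructor reflectively.
            return cls.getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        StringBuilder sb = newInstance(StringBuilder.class);
        sb.append("ok");
        if (!sb.toString().equals("ok")) throw new AssertionError();
    }
}
```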
- newKey() -
Method in class org.apache.hadoop.io.WritableComparator
- Construct a new
WritableComparable
instance.
- newRecord(String) -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Subclasses should override this if they subclass MetricsRecordImpl.
- newToken(int) -
Static method in class org.apache.hadoop.record.compiler.generated.Token
- Returns a new Token object, by default.
- next(DocumentID, LineDocTextAndOp) -
Method in class org.apache.hadoop.contrib.index.example.LineDocRecordReader
-
- next() -
Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
-
- next(MultiFileWordCount.WordOffset, Text) -
Method in class org.apache.hadoop.examples.MultiFileWordCount.MultiFileLineRecordReader
-
- next(Writable) -
Method in class org.apache.hadoop.io.ArrayFile.Reader
- Read and return the next value in the file.
- next(WritableComparable, Writable) -
Method in class org.apache.hadoop.io.MapFile.Reader
- Read the next key/value pair in the map into
key
and
val
.
- next(Writable) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Read the next key in the file into
key
, skipping its
value.
- next(Writable, Writable) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Read the next key/value pair in the file into
key
and
val
.
- next(DataOutputBuffer) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Deprecated. Call
SequenceFile.Reader.nextRaw(DataOutputBuffer,SequenceFile.ValueBytes)
.
- next(Object) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Read the next key in the file, skipping its
value.
- next() -
Method in interface org.apache.hadoop.io.SequenceFile.Sorter.RawKeyValueIterator
- Sets up the current key and value (for getKey and getValue)
- next(WritableComparable) -
Method in class org.apache.hadoop.io.SetFile.Reader
- Read the next key in a set into
key
.
- next(X) -
Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
-
- next(TupleWritable) -
Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
-
- next(K, TupleWritable) -
Method in class org.apache.hadoop.mapred.join.JoinRecordReader
- Emit the next set of key, value pairs as defined by the child
RecordReaders and operation associated with this composite RR.
- next(V) -
Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
-
- next(K, V) -
Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader
- Reads the next key/value pair from the input for processing.
- next(U) -
Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
-
- next(T) -
Method in interface org.apache.hadoop.mapred.join.ResetableIterator
- Assign next value to actual.
- next(X) -
Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
-
- next() -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
- Read the next k,v pair into the head of this object; return true iff
the RR and this are exhausted.
- next(K, U) -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
- Write key-value pair at the head of this stream to the objects provided;
get next key-value pair from proxied RR.
- next(Text, Text) -
Method in class org.apache.hadoop.mapred.KeyValueLineRecordReader
- Read key/value pair in a line.
- next(LongWritable, Text) -
Method in class org.apache.hadoop.mapred.LineRecordReader
- Read a line.
- next(K, V) -
Method in interface org.apache.hadoop.mapred.RecordReader
- Reads the next key/value pair from the input for processing.
- next(BytesWritable, BytesWritable) -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
- Read raw bytes from a SequenceFile.
- next(Text, Text) -
Method in class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
- Read key/value pair in a line.
- next(K, V) -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- next(K) -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- next -
Variable in class org.apache.hadoop.record.compiler.generated.Token
- A reference to the next regular (non-special) token from the input
stream.
- next(Text, Text) -
Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
- Read a record.
- next(Text, Text) -
Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
-
- nextGenerationStamp(Block) -
Method in class org.apache.hadoop.dfs.NameNode
-
- nextRaw(DataOutputBuffer, SequenceFile.ValueBytes) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Read 'raw' records.
- nextRawKey(DataOutputBuffer) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Read 'raw' keys.
- nextRawKey() -
Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
- Fills up the rawKey object with the key returned by the Reader
- nextRawValue(SequenceFile.ValueBytes) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Read 'raw' values.
- nextRawValue(SequenceFile.ValueBytes) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
- Fills up the passed rawValue with the value corresponding to the key
read earlier
- NLineInputFormat - Class in org.apache.hadoop.mapred.lib
- NLineInputFormat which splits N lines of input as one split.
- NLineInputFormat() -
Constructor for class org.apache.hadoop.mapred.lib.NLineInputFormat
-
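The NLineInputFormat entry above says each split holds N lines of input. The grouping itself can be sketched without any Hadoop types (class and method names here are invented for the example):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class NLineSplitSketch {
    // Illustrative only: group a file's lines into splits of up to N lines,
    // mirroring the NLineInputFormat description above.
    static List<List<String>> split(List<String> lines, int n) {
        List<List<String>> splits = new ArrayList<>();
        for (int i = 0; i < lines.size(); i += n) {
            // The last split may hold fewer than n lines.
            splits.add(lines.subList(i, Math.min(i + n, lines.size())));
        }
        return splits;
    }

    public static void main(String[] args) {
        List<String> lines = Arrays.asList("l1", "l2", "l3", "l4", "l5");
        List<List<String>> s = split(lines, 2);
        if (s.size() != 3) throw new AssertionError();
        if (s.get(2).size() != 1) throw new AssertionError();
    }
}
```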
- nnAddr -
Variable in class org.apache.hadoop.dfs.HftpFileSystem
-
- NO_MOVE_BLOCK -
Static variable in class org.apache.hadoop.dfs.Balancer
-
- NO_MOVE_PROGRESS -
Static variable in class org.apache.hadoop.dfs.Balancer
-
- Node - Interface in org.apache.hadoop.net
- The interface defines a node in a network topology.
- NodeBase - Class in org.apache.hadoop.net
- A base class that implements interface Node
- NodeBase() -
Constructor for class org.apache.hadoop.net.NodeBase
- Default constructor
- NodeBase(String) -
Constructor for class org.apache.hadoop.net.NodeBase
- Construct a node from its path
- NodeBase(String, String) -
Constructor for class org.apache.hadoop.net.NodeBase
- Construct a node from its name and its location
- NodeBase(String, String, Node, int) -
Constructor for class org.apache.hadoop.net.NodeBase
- Construct a node from its name and its location
- normalize(String) -
Static method in class org.apache.hadoop.net.NodeBase
- Normalize a path
- normalizePath(String) -
Static method in class org.apache.hadoop.contrib.index.mapred.Shard
-
- not() -
Method in enum org.apache.hadoop.fs.permission.FsAction
- NOT operation.
- NotReplicatedYetException - Exception in org.apache.hadoop.dfs
- The file has not finished being written to enough datanodes yet.
- NotReplicatedYetException(String) -
Constructor for exception org.apache.hadoop.dfs.NotReplicatedYetException
-
- NULL -
Static variable in interface org.apache.hadoop.mapred.Reporter
- A constant of Reporter type that does nothing.
- NullContext - Class in org.apache.hadoop.metrics.spi
- Null metrics context: a metrics context which does nothing.
- NullContext() -
Constructor for class org.apache.hadoop.metrics.spi.NullContext
- Creates a new instance of NullContext
- NullContextWithUpdateThread - Class in org.apache.hadoop.metrics.spi
- A null context which runs a periodic update thread while
monitoring is started.
- NullContextWithUpdateThread() -
Constructor for class org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
- Creates a new instance of NullContextWithUpdateThread
- NullOutputFormat<K,V> - Class in org.apache.hadoop.mapred.lib
- Consume all outputs and put them in /dev/null.
- NullOutputFormat() -
Constructor for class org.apache.hadoop.mapred.lib.NullOutputFormat
-
- NullWritable - Class in org.apache.hadoop.io
- Singleton Writable with no data.
- NullWritable.Comparator - Class in org.apache.hadoop.io
- A Comparator "optimized" for NullWritable.
- NullWritable.Comparator() -
Constructor for class org.apache.hadoop.io.NullWritable.Comparator
-
- NUM_OF_VALUES_FIELD -
Static variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- numAddBlockOps -
Variable in class org.apache.hadoop.dfs.NameNodeMetrics
-
- numBlocksCorrupted -
Variable in class org.apache.hadoop.dfs.NameNodeMetrics
-
- numCreateFileOps -
Variable in class org.apache.hadoop.dfs.NameNodeMetrics
-
- numDeadDataNodes() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
- Number of dead data nodes
- numDeleteFileOps -
Variable in class org.apache.hadoop.dfs.NameNodeMetrics
-
- numFilesCreated -
Variable in class org.apache.hadoop.dfs.NameNodeMetrics
-
- numFilesRenamed -
Variable in class org.apache.hadoop.dfs.NameNodeMetrics
-
- numGetBlockLocations -
Variable in class org.apache.hadoop.dfs.NameNodeMetrics
-
- numGetListingOps -
Variable in class org.apache.hadoop.dfs.NameNodeMetrics
-
- numLiveDataNodes() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.FSNamesystemMBean
- Number of Live data nodes
- numOfMapOutputKeyFields -
Variable in class org.apache.hadoop.streaming.PipeMapRed
-
- numOfMapOutputPartitionFields -
Variable in class org.apache.hadoop.streaming.PipeMapRed
-
- numOfReduceOutputKeyFields -
Variable in class org.apache.hadoop.streaming.PipeMapRed
-
- numOfValues -
Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- numReduceTasksSpec_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
O
- ObjectWritable - Class in org.apache.hadoop.io
- A polymorphic Writable that writes an instance with its class name.
- ObjectWritable() -
Constructor for class org.apache.hadoop.io.ObjectWritable
-
- ObjectWritable(Object) -
Constructor for class org.apache.hadoop.io.ObjectWritable
-
- ObjectWritable(Class, Object) -
Constructor for class org.apache.hadoop.io.ObjectWritable
-
- offerService() -
Method in class org.apache.hadoop.dfs.DataNode
- Main loop for the DataNode.
- offerService() -
Method in class org.apache.hadoop.mapred.JobTracker
- Run forever
- ONE -
Static variable in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
-
- oneRotation -
Static variable in class org.apache.hadoop.examples.dancing.Pentomino
- Is the piece fixed under rotation?
- OneSidedPentomino - Class in org.apache.hadoop.examples.dancing
- Of the "normal" 12 pentominos, 6 of them have distinct shapes when flipped.
- OneSidedPentomino() -
Constructor for class org.apache.hadoop.examples.dancing.OneSidedPentomino
-
- OneSidedPentomino(int, int) -
Constructor for class org.apache.hadoop.examples.dancing.OneSidedPentomino
-
- OP_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_BLOCKRECEIVED -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_BLOCKREPORT -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_ABANDONBLOCK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_ABANDONBLOCK_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_ADDBLOCK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_ADDBLOCK_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_COMPLETEFILE -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_COMPLETEFILE_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_DATANODE_HINTS -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_DATANODE_HINTS_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_DATANODEREPORT -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_DATANODEREPORT_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_DELETE -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_DELETE_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_EXISTS -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_EXISTS_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_ISDIR -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_ISDIR_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_LISTING -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_LISTING_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_MKDIRS -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_MKDIRS_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_OBTAINLOCK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_OBTAINLOCK_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_OPEN -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_OPEN_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_RAWSTATS -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_RAWSTATS_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_RELEASELOCK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_RELEASELOCK_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_RENAMETO -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_RENAMETO_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_RENEW_LEASE -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_RENEW_LEASE_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_STARTFILE -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_STARTFILE_ACK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_CLIENT_TRYAGAIN -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_COPY_BLOCK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_ERROR -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_FAILURE -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_HEARTBEAT -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_INVALIDATE_BLOCKS -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_READ_BLOCK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_READ_METADATA -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_REPLACE_BLOCK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_STATUS_CHECKSUM_OK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_STATUS_ERROR -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_STATUS_ERROR_CHECKSUM -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_STATUS_ERROR_EXISTS -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_STATUS_ERROR_INVALID -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_STATUS_SUCCESS -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_TRANSFERBLOCKS -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_TRANSFERDATA -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- OP_WRITE_BLOCK -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- open(Path, int) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
-
- open(Path, int) -
Method in class org.apache.hadoop.dfs.HftpFileSystem
-
- open(Path, int) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
- Opens an FSDataInputStream at the indicated Path.
- open(Path, int) -
Method in class org.apache.hadoop.fs.FileSystem
- Opens an FSDataInputStream at the indicated Path.
- open(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Opens an FSDataInputStream at the indicated Path.
- open(Path, int) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Opens an FSDataInputStream at the indicated Path.
- open(Path, int) -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- open(Path, int) -
Method in class org.apache.hadoop.fs.HarFileSystem
- Returns a har input stream which fakes end of
file.
- open(Path, int) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- open(Path, int) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- open(Path, int) -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- open(Path, int) -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- open(FileSystem, String, WritableComparator, Configuration) -
Method in class org.apache.hadoop.io.MapFile.Reader
-
- open(InputStream) -
Method in interface org.apache.hadoop.io.serializer.Deserializer
- Prepare the deserializer for reading.
- open(OutputStream) -
Method in interface org.apache.hadoop.io.serializer.Serializer
- Prepare the serializer for writing.
- openConnection(String, String) -
Method in class org.apache.hadoop.dfs.HftpFileSystem
- Open an HTTP connection to the namenode to read file data and metadata.
- openConnection(String, String) -
Method in class org.apache.hadoop.dfs.HsftpFileSystem
-
- openFile(FileSystem, Path, int, long) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Override this method to specialize the type of
FSDataInputStream
returned.
- openInput(String) -
Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
-
- openInput(String, int) -
Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
-
- OPERATION_FAILED -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- or(FsAction) -
Method in enum org.apache.hadoop.fs.permission.FsAction
- OR operation.
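The FsAction entries in this index (NOT above at its alphabetical position, OR here) describe set-style operations on rwx permission actions. One common way to model this is a 3-bit mask; the representation below is an assumption for illustration, not the FsAction source:

```java
public class FsActionSketch {
    // Sketch of rwx permission actions as a 3-bit mask, matching the
    // NOT/OR operations listed for FsAction. The bit layout is assumed.
    static final int READ = 4, WRITE = 2, EXECUTE = 1, ALL = 7;

    // NOT operation: the complement within the rwx bits.
    static int not(int a) { return ALL & ~a; }

    // OR operation: the union of two action sets.
    static int or(int a, int b) { return a | b; }

    public static void main(String[] args) {
        if (not(READ) != (WRITE | EXECUTE)) throw new AssertionError();
        if (or(READ, WRITE) != (READ | WRITE)) throw new AssertionError();
        if (not(ALL) != 0) throw new AssertionError();
    }
}
```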
- org.apache.hadoop - package org.apache.hadoop
-
- org.apache.hadoop.conf - package org.apache.hadoop.conf
- Configuration of system parameters.
- org.apache.hadoop.contrib.index.example - package org.apache.hadoop.contrib.index.example
-
- org.apache.hadoop.contrib.index.lucene - package org.apache.hadoop.contrib.index.lucene
-
- org.apache.hadoop.contrib.index.main - package org.apache.hadoop.contrib.index.main
-
- org.apache.hadoop.contrib.index.mapred - package org.apache.hadoop.contrib.index.mapred
-
- org.apache.hadoop.contrib.utils.join - package org.apache.hadoop.contrib.utils.join
-
- org.apache.hadoop.dfs - package org.apache.hadoop.dfs
- A distributed implementation of
FileSystem
. - org.apache.hadoop.dfs.datanode.metrics - package org.apache.hadoop.dfs.datanode.metrics
-
- org.apache.hadoop.dfs.namenode.metrics - package org.apache.hadoop.dfs.namenode.metrics
-
- org.apache.hadoop.examples - package org.apache.hadoop.examples
- Hadoop example code.
- org.apache.hadoop.examples.dancing - package org.apache.hadoop.examples.dancing
- This package is a distributed implementation of Knuth's dancing links
algorithm that can run under Hadoop.
- org.apache.hadoop.filecache - package org.apache.hadoop.filecache
-
- org.apache.hadoop.fs - package org.apache.hadoop.fs
- An abstract file system API.
- org.apache.hadoop.fs.ftp - package org.apache.hadoop.fs.ftp
-
- org.apache.hadoop.fs.kfs - package org.apache.hadoop.fs.kfs
- A client for the Kosmos filesystem (KFS)
- org.apache.hadoop.fs.permission - package org.apache.hadoop.fs.permission
-
- org.apache.hadoop.fs.s3 - package org.apache.hadoop.fs.s3
- A distributed, block-based implementation of
FileSystem
that uses Amazon S3
as a backing store. - org.apache.hadoop.fs.s3native - package org.apache.hadoop.fs.s3native
-
A distributed implementation of
FileSystem
for reading and writing files on
Amazon S3. - org.apache.hadoop.fs.shell - package org.apache.hadoop.fs.shell
-
- org.apache.hadoop.io - package org.apache.hadoop.io
- Generic I/O code for reading and writing data to the network,
to databases, and to files.
- org.apache.hadoop.io.compress - package org.apache.hadoop.io.compress
-
- org.apache.hadoop.io.compress.lzo - package org.apache.hadoop.io.compress.lzo
-
- org.apache.hadoop.io.compress.zlib - package org.apache.hadoop.io.compress.zlib
-
- org.apache.hadoop.io.retry - package org.apache.hadoop.io.retry
-
A mechanism for selectively retrying methods that throw exceptions under certain circumstances.
- org.apache.hadoop.io.serializer - package org.apache.hadoop.io.serializer
-
This package provides a mechanism for using different serialization frameworks
in Hadoop.
- org.apache.hadoop.ipc - package org.apache.hadoop.ipc
- Tools to help define network clients and servers.
- org.apache.hadoop.ipc.metrics - package org.apache.hadoop.ipc.metrics
-
- org.apache.hadoop.log - package org.apache.hadoop.log
-
- org.apache.hadoop.mapred - package org.apache.hadoop.mapred
- A software framework for easily writing applications that process vast
amounts of data (multi-terabyte datasets) in parallel on large clusters
(thousands of nodes) of commodity hardware in a reliable, fault-tolerant
manner.
- org.apache.hadoop.mapred.jobcontrol - package org.apache.hadoop.mapred.jobcontrol
- Utilities for managing dependent jobs.
- org.apache.hadoop.mapred.join - package org.apache.hadoop.mapred.join
- Given a set of sorted datasets keyed with the same class and yielding equal
partitions, it is possible to effect a join of those datasets prior to the map.
- org.apache.hadoop.mapred.lib - package org.apache.hadoop.mapred.lib
- Library of generally useful mappers, reducers, and partitioners.
- org.apache.hadoop.mapred.lib.aggregate - package org.apache.hadoop.mapred.lib.aggregate
- Classes for performing various counting and aggregations.
- org.apache.hadoop.mapred.pipes - package org.apache.hadoop.mapred.pipes
- Hadoop Pipes allows C++ code to use Hadoop DFS and map/reduce.
- org.apache.hadoop.metrics - package org.apache.hadoop.metrics
- This package defines an API for reporting performance metric information.
- org.apache.hadoop.metrics.file - package org.apache.hadoop.metrics.file
- Implementation of the metrics package that writes the metrics to a file.
- org.apache.hadoop.metrics.ganglia - package org.apache.hadoop.metrics.ganglia
- Implementation of the metrics package that sends metric data to
Ganglia.
- org.apache.hadoop.metrics.jvm - package org.apache.hadoop.metrics.jvm
-
- org.apache.hadoop.metrics.spi - package org.apache.hadoop.metrics.spi
- The Service Provider Interface for the Metrics API.
- org.apache.hadoop.metrics.util - package org.apache.hadoop.metrics.util
-
- org.apache.hadoop.net - package org.apache.hadoop.net
- Network-related classes.
- org.apache.hadoop.record - package org.apache.hadoop.record
- Hadoop record I/O contains classes and a record description language
translator for simplifying serialization and deserialization of records in a
language-neutral manner.
- org.apache.hadoop.record.compiler - package org.apache.hadoop.record.compiler
- This package contains classes needed for code generation
from the hadoop record compiler.
- org.apache.hadoop.record.compiler.ant - package org.apache.hadoop.record.compiler.ant
-
- org.apache.hadoop.record.compiler.generated - package org.apache.hadoop.record.compiler.generated
- This package contains code generated by JavaCC from the
Hadoop record syntax file rcc.jj.
- org.apache.hadoop.record.meta - package org.apache.hadoop.record.meta
-
- org.apache.hadoop.security - package org.apache.hadoop.security
-
- org.apache.hadoop.streaming - package org.apache.hadoop.streaming
- Hadoop Streaming is a utility that allows users to create and run
Map-Reduce jobs with any executables (e.g. shell utilities) as the mapper and/or the reducer.
- org.apache.hadoop.util - package org.apache.hadoop.util
- Common utilities.
- out -
Variable in class org.apache.hadoop.io.compress.CompressionOutputStream
- The output stream to be compressed.
- OuterJoinRecordReader<K extends WritableComparable> - Class in org.apache.hadoop.mapred.join
- Full outer join.
- outerrThreadsThrowable -
Variable in class org.apache.hadoop.streaming.PipeMapRed
-
- output_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- OutputBuffer - Class in org.apache.hadoop.io
- A reusable
OutputStream
implementation that writes to an in-memory
buffer. - OutputBuffer() -
Constructor for class org.apache.hadoop.io.OutputBuffer
- Constructs a new empty buffer.
- OutputCollector<K,V> - Interface in org.apache.hadoop.mapred
- Collects the <key, value> pairs output by Mappers and Reducers.
- OutputFormat<K,V> - Interface in org.apache.hadoop.mapred
- OutputFormat describes the output-specification for a Map-Reduce job.
- OutputFormatBase<K,V> - Class in org.apache.hadoop.mapred
- Deprecated. Use
FileOutputFormat
- OutputFormatBase() -
Constructor for class org.apache.hadoop.mapred.OutputFormatBase
- Deprecated.
- outputFormatSpec_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- OutputLogFilter - Class in org.apache.hadoop.mapred
- This class filters log files from the given directory;
it doesn't accept paths containing _logs.
- OutputLogFilter() -
Constructor for class org.apache.hadoop.mapred.OutputLogFilter
-
- OutputRecord - Class in org.apache.hadoop.metrics.spi
- Represents a record of metric data to be sent to a metrics system.
- outputSingleNode_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- OverrideRecordReader<K extends WritableComparable,V extends Writable> - Class in org.apache.hadoop.mapred.join
- Prefer the "rightmost" data source for this key.
P
- packageFiles_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- packageJobJar() -
Method in class org.apache.hadoop.streaming.StreamJob
-
- parent -
Variable in class org.apache.hadoop.net.NodeBase
-
- parse(String[], int) -
Method in class org.apache.hadoop.fs.shell.CommandFormat
- Parse parameters starting from the given position
- parse(String, int) -
Static method in class org.apache.hadoop.metrics.spi.Util
- Parses a space- and/or comma-separated sequence of server specifications
of the form hostname or hostname:port.
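Parsing such a server list can be sketched as follows; the class name, method shape, and default-port handling here are illustrative assumptions, not the actual org.apache.hadoop.metrics.spi.Util API:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative parser for "host1:8020, host2 host3:9000"-style lists:
// tokens are separated by spaces and/or commas; a token without an
// explicit port gets the supplied default.
public class ServerSpecParser {
    public static List<String> parse(String specs, int defaultPort) {
        List<String> result = new ArrayList<>();
        for (String token : specs.split("[,\\s]+")) {
            if (token.isEmpty()) continue;           // skip empty tokens
            result.add(token.contains(":") ? token : token + ":" + defaultPort);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parse("nn1:8020, nn2 nn3", 8020));
    }
}
```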
- parseArgs(String[], int, Configuration) -
Static method in class org.apache.hadoop.fs.FileSystem
- Parse the cmd-line args, starting at i.
- ParseException - Exception in org.apache.hadoop.record.compiler.generated
- This exception is thrown when parse errors are encountered.
- ParseException(Token, int[][], String[]) -
Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
- This constructor is used by the method "generateParseException"
in the generated parser.
- ParseException() -
Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
- The following constructors are for use by you for whatever
purpose you can think of.
- ParseException(String) -
Constructor for exception org.apache.hadoop.record.compiler.generated.ParseException
-
- parseExecResult(BufferedReader) -
Method in class org.apache.hadoop.fs.DF
-
- parseExecResult(BufferedReader) -
Method in class org.apache.hadoop.fs.DU
-
- parseExecResult(BufferedReader) -
Method in class org.apache.hadoop.util.Shell
- Parse the execution result
- parseExecResult(BufferedReader) -
Method in class org.apache.hadoop.util.Shell.ShellCommandExecutor
-
- parseHistoryFromFS(String, JobHistory.Listener, FileSystem) -
Static method in class org.apache.hadoop.mapred.JobHistory
- Parses history file and invokes Listener.handle() for
each line of history.
- parseJobTasks(String, JobHistory.JobInfo, FileSystem) -
Static method in class org.apache.hadoop.mapred.DefaultJobHistoryParser
- Populates a JobInfo object from the job's history log file.
- Parser - Class in org.apache.hadoop.mapred.join
- Very simple shift-reduce parser for join expressions.
- Parser() -
Constructor for class org.apache.hadoop.mapred.join.Parser
-
- Parser.Node - Class in org.apache.hadoop.mapred.join
-
- Parser.Node(String) -
Constructor for class org.apache.hadoop.mapred.join.Parser.Node
-
- Parser.NodeToken - Class in org.apache.hadoop.mapred.join
-
- Parser.NumToken - Class in org.apache.hadoop.mapred.join
-
- Parser.NumToken(double) -
Constructor for class org.apache.hadoop.mapred.join.Parser.NumToken
-
- Parser.StrToken - Class in org.apache.hadoop.mapred.join
-
- Parser.StrToken(Parser.TType, String) -
Constructor for class org.apache.hadoop.mapred.join.Parser.StrToken
-
- Parser.Token - Class in org.apache.hadoop.mapred.join
- Tagged-union type for tokens from the join expression.
- Parser.TType - Enum in org.apache.hadoop.mapred.join
-
- Partitioner<K2,V2> - Interface in org.apache.hadoop.mapred
- Partitions the key space.
- partitionerSpec_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- Path - Class in org.apache.hadoop.fs
- Names a file or directory in a
FileSystem
. - Path(String, String) -
Constructor for class org.apache.hadoop.fs.Path
- Resolve a child path against a parent path.
- Path(Path, String) -
Constructor for class org.apache.hadoop.fs.Path
- Resolve a child path against a parent path.
- Path(String, Path) -
Constructor for class org.apache.hadoop.fs.Path
- Resolve a child path against a parent path.
- Path(Path, Path) -
Constructor for class org.apache.hadoop.fs.Path
- Resolve a child path against a parent path.
- Path(String) -
Constructor for class org.apache.hadoop.fs.Path
- Construct a path from a String.
- Path(String, String, String) -
Constructor for class org.apache.hadoop.fs.Path
- Construct a Path from components.
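Resolving a child path against a parent path is essentially URI reference resolution. A sketch of the behavior using java.net.URI from the JDK, not the actual Path implementation:

```java
import java.net.URI;

// Sketch: resolving a child path against a parent, as the
// Path(parent, child) constructors do conceptually.
public class PathResolveSketch {
    public static String resolve(String parent, String child) {
        // Treat the parent as a directory before resolving the child.
        if (!parent.endsWith("/")) parent = parent + "/";
        return URI.create(parent).resolve(child).toString();
    }

    public static void main(String[] args) {
        System.out.println(resolve("/user/hadoop", "input")); // prints /user/hadoop/input
    }
}
```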
- PATH_SEPARATOR -
Static variable in class org.apache.hadoop.net.NodeBase
-
- PATH_SEPARATOR_STR -
Static variable in class org.apache.hadoop.net.NodeBase
-
- PathFilter - Interface in org.apache.hadoop.fs
-
- PathFinder - Class in org.apache.hadoop.streaming
- Maps a relative pathname to an absolute pathname using the
PATH environment variable.
- PathFinder() -
Constructor for class org.apache.hadoop.streaming.PathFinder
- Construct a PathFinder object using the path from
java.class.path
- PathFinder(String) -
Constructor for class org.apache.hadoop.streaming.PathFinder
- Construct a PathFinder object using the path from
the specified system environment variable.
- pathToFile(Path) -
Method in class org.apache.hadoop.fs.LocalFileSystem
- Convert a path to a File.
- pathToFile(Path) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
- Convert a path to a File.
- pendingReplicationBlocks -
Variable in class org.apache.hadoop.dfs.FSNamesystemMetrics
-
- Pentomino - Class in org.apache.hadoop.examples.dancing
-
- Pentomino(int, int) -
Constructor for class org.apache.hadoop.examples.dancing.Pentomino
- Create the model for a given pentomino set of pieces and board size.
- Pentomino() -
Constructor for class org.apache.hadoop.examples.dancing.Pentomino
- Create the object without initialization.
- Pentomino.ColumnName - Interface in org.apache.hadoop.examples.dancing
- This interface is just a marker for the types expected to come back
as column names.
- Pentomino.Piece - Class in org.apache.hadoop.examples.dancing
- Maintain information about a puzzle piece.
- Pentomino.Piece(String, String, boolean, int[]) -
Constructor for class org.apache.hadoop.examples.dancing.Pentomino.Piece
-
- Pentomino.SolutionCategory - Enum in org.apache.hadoop.examples.dancing
-
- percentageGraph(int, int) -
Static method in class org.apache.hadoop.dfs.JspHelper
-
- percentageGraph(float, int) -
Static method in class org.apache.hadoop.dfs.JspHelper
-
- PERIOD_PROPERTY -
Static variable in class org.apache.hadoop.metrics.file.FileContext
-
- PermissionStatus - Class in org.apache.hadoop.fs.permission
- Store permission related information.
- PermissionStatus(String, String, FsPermission) -
Constructor for class org.apache.hadoop.fs.permission.PermissionStatus
- Constructor
- phase() -
Method in class org.apache.hadoop.util.Progress
- Returns the current sub-node executing.
- pieces -
Variable in class org.apache.hadoop.examples.dancing.Pentomino
-
- PiEstimator - Class in org.apache.hadoop.examples
- A Map-Reduce program to estimate the value of Pi using the Monte Carlo
method.
- PiEstimator() -
Constructor for class org.apache.hadoop.examples.PiEstimator
-
- PiEstimator.PiMapper - Class in org.apache.hadoop.examples
- Mapper class for Pi estimation.
- PiEstimator.PiMapper() -
Constructor for class org.apache.hadoop.examples.PiEstimator.PiMapper
-
- PiEstimator.PiReducer - Class in org.apache.hadoop.examples
-
- PiEstimator.PiReducer() -
Constructor for class org.apache.hadoop.examples.PiEstimator.PiReducer
-
- ping(String) -
Method in class org.apache.hadoop.mapred.TaskTracker
- Deprecated.
- ping(TaskAttemptID) -
Method in class org.apache.hadoop.mapred.TaskTracker
- Child checking to see if we're alive.
- PipeMapper - Class in org.apache.hadoop.streaming
- A generic Mapper bridge.
- PipeMapper() -
Constructor for class org.apache.hadoop.streaming.PipeMapper
-
- PipeMapRed - Class in org.apache.hadoop.streaming
- Shared functionality for PipeMapper, PipeReducer.
- PipeMapRed() -
Constructor for class org.apache.hadoop.streaming.PipeMapRed
-
- PipeReducer - Class in org.apache.hadoop.streaming
- A generic Reducer bridge.
- PipeReducer() -
Constructor for class org.apache.hadoop.streaming.PipeReducer
-
- PlatformName - Class in org.apache.hadoop.util
- A helper class for getting build information about the Java VM.
- PlatformName() -
Constructor for class org.apache.hadoop.util.PlatformName
-
- pop() -
Method in class org.apache.hadoop.util.PriorityQueue
- Removes and returns the least element of the PriorityQueue in log(size)
time.
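The log(size) bound for pop comes from the binary-heap structure: remove the root, move the last element to the root, and sift it down. A stdlib illustration of the same least-element contract (this uses java.util.PriorityQueue, not Hadoop's abstract PriorityQueue, which subclasses specialize with their own comparison):

```java
import java.util.PriorityQueue;

// Illustrates the least-element contract: insertion and removal in
// O(log n), with the minimum always available at the head of the heap.
public class MinHeapDemo {
    // Removes and returns the least element, mirroring pop().
    public static int popMin(PriorityQueue<Integer> q) {
        return q.poll();
    }

    public static void main(String[] args) {
        PriorityQueue<Integer> q = new PriorityQueue<>();
        q.add(5); q.add(1); q.add(3);
        System.out.println(popMin(q)); // prints 1
    }
}
```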
- PositionedReadable - Interface in org.apache.hadoop.fs
- Stream that permits positional reading.
- PREP -
Static variable in class org.apache.hadoop.mapred.JobStatus
-
- prependPathComponent(String) -
Method in class org.apache.hadoop.streaming.PathFinder
- Prepends the specified component to the path list
- preserveInput(boolean) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
- Whether to delete the files when no longer needed
- prevCharIsCR -
Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- prevCharIsLF -
Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- printGenericCommandUsage(PrintStream) -
Static method in class org.apache.hadoop.util.GenericOptionsParser
- Print the usage message for generic command-line options supported.
- printGenericCommandUsage(PrintStream) -
Static method in class org.apache.hadoop.util.ToolRunner
- Prints generic command-line arguments and usage information.
- printGotoForm(JspWriter, int, String) -
Static method in class org.apache.hadoop.dfs.JspHelper
-
- PrintJarMainClass - Class in org.apache.hadoop.util
- A micro-application that prints the main class name out of a jar file.
- PrintJarMainClass() -
Constructor for class org.apache.hadoop.util.PrintJarMainClass
-
- printPathWithLinks(String, JspWriter, int) -
Static method in class org.apache.hadoop.dfs.JspHelper
-
- printStatistics() -
Static method in class org.apache.hadoop.fs.FileSystem
-
- printThreadInfo(PrintWriter, String) -
Static method in class org.apache.hadoop.util.ReflectionUtils
- Print all of the thread's information and stack traces.
- PriorityQueue - Class in org.apache.hadoop.util
- A PriorityQueue maintains a partial ordering of its elements such that the
least element can always be found in constant time.
- PriorityQueue() -
Constructor for class org.apache.hadoop.util.PriorityQueue
-
- process(IntermediateForm) -
Method in class org.apache.hadoop.contrib.index.lucene.ShardWriter
- Process an intermediate form by carrying out, on the Lucene instance of
the shard, the deletes and the inserts (a ram index) in the form.
- process(DocumentAndOp, Analyzer) -
Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
- This method is used by the index update mapper and processes a document
operation into the current intermediate form.
- process(IntermediateForm) -
Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
- This method is used by the index update combiner and processes an
intermediate form into the current intermediate form.
- processDeleteOnExit() -
Method in class org.apache.hadoop.fs.FileSystem
- Delete all files that were marked as delete-on-exit.
- processUpgradeCommand(UpgradeCommand) -
Method in class org.apache.hadoop.dfs.NameNode
-
- ProgramDriver - Class in org.apache.hadoop.util
- A driver used to run programs added to it.
- ProgramDriver() -
Constructor for class org.apache.hadoop.util.ProgramDriver
-
- Progress - Class in org.apache.hadoop.util
- Utility to assist with generation of progress reports.
- Progress() -
Constructor for class org.apache.hadoop.util.Progress
- Creates a new root node.
- progress() -
Method in interface org.apache.hadoop.util.Progressable
- Report progress to the Hadoop framework.
- Progressable - Interface in org.apache.hadoop.util
- A facility for reporting progress.
- pseudoSortByDistance(Node, Node[]) -
Method in class org.apache.hadoop.net.NetworkTopology
- Sort the nodes array by their distance to the reader:
the array is scanned linearly and, if a local node is found,
it is swapped with the first element of the array.
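The scan-and-swap described above can be sketched as follows, under the simplifying assumption that "local" just means equal to the reader's name (NetworkTopology's real distance logic compares rack paths):

```java
// Sketch of pseudo-sorting by distance: scan linearly and, if a node
// "local" to the reader is found, swap it to the front of the array.
public class PseudoSortSketch {
    public static void moveLocalFirst(String reader, String[] nodes) {
        for (int i = 0; i < nodes.length; i++) {
            if (nodes[i].equals(reader)) {   // "local" = same node, by assumption
                String tmp = nodes[0];
                nodes[0] = nodes[i];
                nodes[i] = tmp;
                return;                      // only the first local node moves
            }
        }
    }

    public static void main(String[] args) {
        String[] nodes = {"h2", "h3", "h1"};
        moveLocalFirst("h1", nodes);
        System.out.println(nodes[0]); // prints h1
    }
}
```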
- purge() -
Method in interface org.apache.hadoop.fs.s3.FileSystemStore
- Delete everything.
- purgeCache(Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Clear the entire contents of the cache and delete the backing files.
- pushMetric(MetricsRecord) -
Method in class org.apache.hadoop.metrics.util.MetricsIntValue
- Push the metric to the metrics record.
- pushMetric(MetricsRecord) -
Method in class org.apache.hadoop.metrics.util.MetricsLongValue
- Push the metric to the metrics record.
- pushMetric(MetricsRecord) -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingInt
- Push the delta metrics to the metrics record.
- pushMetric(MetricsRecord) -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
- Push the delta metrics to the metrics record.
- put(Writable, Writable) -
Method in class org.apache.hadoop.io.MapWritable
-
- put(WritableComparable, Writable) -
Method in class org.apache.hadoop.io.SortedMapWritable
-
- put(Object) -
Method in class org.apache.hadoop.util.PriorityQueue
- Adds an Object to a PriorityQueue in log(size) time.
- putAll(Map<? extends Writable, ? extends Writable>) -
Method in class org.apache.hadoop.io.MapWritable
-
- putAll(Map<? extends WritableComparable, ? extends Writable>) -
Method in class org.apache.hadoop.io.SortedMapWritable
-
Q
- quarterDigest() -
Method in class org.apache.hadoop.io.MD5Hash
- Return a 32-bit digest of the MD5.
- QuickSort - Class in org.apache.hadoop.util
- An implementation of the core algorithm of QuickSort.
- QuickSort() -
Constructor for class org.apache.hadoop.util.QuickSort
-
- QuotaExceededException - Exception in org.apache.hadoop.dfs
- This class is for the error when an attempt to add an inode to namespace
violates the quota restriction of any inode on the path to the newly added
inode.
- QuotaExceededException(String) -
Constructor for exception org.apache.hadoop.dfs.QuotaExceededException
-
- QuotaExceededException(long, long) -
Constructor for exception org.apache.hadoop.dfs.QuotaExceededException
-
R
- RAMDirectoryUtil - Class in org.apache.hadoop.contrib.index.lucene
- A utility class that writes an index in a RAM directory to a DataOutput
and reads an index from a DataInput into a RAM directory.
- RAMDirectoryUtil() -
Constructor for class org.apache.hadoop.contrib.index.lucene.RAMDirectoryUtil
-
- randomNode() -
Method in class org.apache.hadoop.dfs.JspHelper
-
- RandomTextWriter - Class in org.apache.hadoop.examples
- This program uses map/reduce to run a distributed job in which there is
no interaction between the tasks and each task writes a large unsorted
random sequence of words.
- RandomTextWriter() -
Constructor for class org.apache.hadoop.examples.RandomTextWriter
-
- RandomWriter - Class in org.apache.hadoop.examples
- This program uses map/reduce to run a distributed job in which there is
no interaction between the tasks and each task writes a large unsorted
random binary sequence file of BytesWritable.
- RandomWriter() -
Constructor for class org.apache.hadoop.examples.RandomWriter
-
- RawComparator<T> - Interface in org.apache.hadoop.io
-
A
Comparator
that operates directly on byte representations of
objects. - RawLocalFileSystem - Class in org.apache.hadoop.fs
- Implement the FileSystem API for the raw local filesystem.
- RawLocalFileSystem() -
Constructor for class org.apache.hadoop.fs.RawLocalFileSystem
-
- RBRACE_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- Rcc - Class in org.apache.hadoop.record.compiler.generated
-
- Rcc(InputStream) -
Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
-
- Rcc(InputStream, String) -
Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
-
- Rcc(Reader) -
Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
-
- Rcc(RccTokenManager) -
Constructor for class org.apache.hadoop.record.compiler.generated.Rcc
-
- RccConstants - Interface in org.apache.hadoop.record.compiler.generated
-
- RccTask - Class in org.apache.hadoop.record.compiler.ant
- Hadoop record compiler Ant task.
- RccTask() -
Constructor for class org.apache.hadoop.record.compiler.ant.RccTask
- Creates a new instance of RccTask
- RccTokenManager - Class in org.apache.hadoop.record.compiler.generated
-
- RccTokenManager(SimpleCharStream) -
Constructor for class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- RccTokenManager(SimpleCharStream, int) -
Constructor for class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- read(long, byte[], int, int) -
Method in class org.apache.hadoop.fs.BufferedFSInputStream
-
- read(long, byte[], int, int) -
Method in class org.apache.hadoop.fs.FSDataInputStream
-
- read() -
Method in class org.apache.hadoop.fs.FSInputChecker
- Read one checksum-verified byte
- read(byte[], int, int) -
Method in class org.apache.hadoop.fs.FSInputChecker
- Read checksum verified bytes from this byte-input stream into
the specified byte array, starting at the given offset.
- read(long, byte[], int, int) -
Method in class org.apache.hadoop.fs.FSInputStream
-
- read() -
Method in class org.apache.hadoop.fs.ftp.FTPInputStream
-
- read(byte[], int, int) -
Method in class org.apache.hadoop.fs.ftp.FTPInputStream
-
- read(DataInput) -
Static method in class org.apache.hadoop.fs.permission.FsPermission
- Create and initialize a
FsPermission
from DataInput
.
- read(DataInput) -
Static method in class org.apache.hadoop.fs.permission.PermissionStatus
- Create and initialize a
PermissionStatus
from DataInput
.
- read(long, byte[], int, int) -
Method in interface org.apache.hadoop.fs.PositionedReadable
- Read up to the specified number of bytes, from a given
position within a file, and return the number of bytes read.
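A positional read takes an absolute file offset rather than reading from the stream's current position. A sketch of the pattern with java.io.RandomAccessFile, which is illustrative only and not FSInputStream's implementation; a common additional guarantee, assumed here, is that the stream's own position is left undisturbed:

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch of positional read: read up to `length` bytes starting at an
// absolute `position`, returning the number of bytes actually read.
public class PositionalReadSketch {
    public static int read(RandomAccessFile file, long position,
                           byte[] buffer, int offset, int length) throws IOException {
        long saved = file.getFilePointer(); // remember the current position
        try {
            file.seek(position);
            return file.read(buffer, offset, length);
        } finally {
            file.seek(saved);               // restore, so the read has no side effect
        }
    }
}
```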
- read(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.CompressionInputStream
- Read bytes from the stream.
- read() -
Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
-
- read(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
-
- read(DataInput) -
Static method in class org.apache.hadoop.io.MD5Hash
- Constructs, reads and returns an instance.
- read(DataInput) -
Static method in class org.apache.hadoop.mapred.ID
-
- read(DataInput) -
Static method in class org.apache.hadoop.mapred.JobID
-
- read(DataInput) -
Static method in class org.apache.hadoop.mapred.TaskAttemptID
-
- read(DataInput) -
Static method in class org.apache.hadoop.mapred.TaskID
-
- read() -
Method in class org.apache.hadoop.net.SocketInputStream
-
- read(byte[], int, int) -
Method in class org.apache.hadoop.net.SocketInputStream
-
- read(ByteBuffer) -
Method in class org.apache.hadoop.net.SocketInputStream
-
- READ_TIMEOUT -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- readBlockOp -
Variable in class org.apache.hadoop.dfs.datanode.metrics.DataNodeMetrics
-
- readBool(String) -
Method in class org.apache.hadoop.record.BinaryRecordInput
-
- readBool(String) -
Method in class org.apache.hadoop.record.CsvRecordInput
-
- readBool(String) -
Method in interface org.apache.hadoop.record.RecordInput
- Read a boolean from serialized record.
- readBool(String) -
Method in class org.apache.hadoop.record.XmlRecordInput
-
- readBuffer(String) -
Method in class org.apache.hadoop.record.BinaryRecordInput
-
- readBuffer(String) -
Method in class org.apache.hadoop.record.CsvRecordInput
-
- readBuffer(String) -
Method in interface org.apache.hadoop.record.RecordInput
- Read byte array from serialized record.
- readBuffer(String) -
Method in class org.apache.hadoop.record.XmlRecordInput
-
- readByte(String) -
Method in class org.apache.hadoop.record.BinaryRecordInput
-
- readByte(String) -
Method in class org.apache.hadoop.record.CsvRecordInput
-
- readByte(String) -
Method in interface org.apache.hadoop.record.RecordInput
- Read a byte from serialized record.
- readByte(String) -
Method in class org.apache.hadoop.record.XmlRecordInput
-
- readChar() -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- readChunk(long, byte[], int, int, byte[]) -
Method in class org.apache.hadoop.fs.FSInputChecker
- Reads the next chunk of checksummed data into buf at the given offset,
and its checksum into checksum.
- readCompressedByteArray(DataInput) -
Static method in class org.apache.hadoop.io.WritableUtils
-
- readCompressedString(DataInput) -
Static method in class org.apache.hadoop.io.WritableUtils
-
- readCompressedStringArray(DataInput) -
Static method in class org.apache.hadoop.io.WritableUtils
-
- readDouble(byte[], int) -
Static method in class org.apache.hadoop.io.WritableComparator
- Parse a double from a byte array.
- readDouble(String) -
Method in class org.apache.hadoop.record.BinaryRecordInput
-
- readDouble(String) -
Method in class org.apache.hadoop.record.CsvRecordInput
-
- readDouble(String) -
Method in interface org.apache.hadoop.record.RecordInput
- Read a double-precision number from serialized record.
- readDouble(byte[], int) -
Static method in class org.apache.hadoop.record.Utils
- Parse a double from a byte array.
- readDouble(String) -
Method in class org.apache.hadoop.record.XmlRecordInput
-
- readEnum(DataInput, Class<T>) -
Static method in class org.apache.hadoop.io.WritableUtils
- Read an Enum value from DataInput, Enums are read and written
using String values.
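Serializing an enum by its String name can be sketched with the JDK's DataInput/DataOutput streams; this mirrors the described read-by-name behavior, but WritableUtils' exact wire format is not shown and the helper names here are illustrative:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Enums serialized by name: write the String, read it back,
// then recover the constant with Enum.valueOf.
public class EnumIoSketch {
    enum State { RUNNING, SUCCEEDED, FAILED }

    public static byte[] writeEnum(Enum<?> value) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        new DataOutputStream(bytes).writeUTF(value.name());
        return bytes.toByteArray();
    }

    public static <T extends Enum<T>> T readEnum(byte[] data, Class<T> type) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(data));
        return Enum.valueOf(type, in.readUTF());
    }

    public static void main(String[] args) throws IOException {
        State s = readEnum(writeEnum(State.FAILED), State.class);
        System.out.println(s); // prints FAILED
    }
}
```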
- readFields(DataInput) -
Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
-
- readFields(DataInput) -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
-
- readFields(DataInput) -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
-
- readFields(DataInput) -
Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
-
- readFields(DataInput) -
Method in class org.apache.hadoop.contrib.index.mapred.Shard
-
- readFields(DataInput) -
Method in class org.apache.hadoop.dfs.DatanodeID
- Deserialize the fields of this object from
in
.
- readFields(DataInput) -
Method in class org.apache.hadoop.dfs.DatanodeInfo
- Deserialize the fields of this object from
in
.
For efficiency, implementations should attempt to re-use storage in the
existing object where possible.
- readFields(DataInput) -
Method in class org.apache.hadoop.dfs.LocatedBlocks
-
- readFields(DataInput) -
Method in class org.apache.hadoop.dfs.UpgradeStatusReport
-
- readFields(DataInput) -
Method in class org.apache.hadoop.examples.MultiFileWordCount.WordOffset
-
- readFields(DataInput) -
Method in class org.apache.hadoop.fs.BlockLocation
- Implement readFields of Writable
- readFields(DataInput) -
Method in class org.apache.hadoop.fs.ContentSummary
- Deserialize the fields of this object from
in
.
- readFields(DataInput) -
Method in class org.apache.hadoop.fs.FileStatus
-
- readFields(DataInput) -
Method in class org.apache.hadoop.fs.permission.FsPermission
- Deserialize the fields of this object from
in
.
- readFields(DataInput) -
Method in class org.apache.hadoop.fs.permission.PermissionStatus
- Deserialize the fields of this object from
in
.
- readFields(DataInput) -
Method in class org.apache.hadoop.io.AbstractMapWritable
- Deserialize the fields of this object from
in
.
- readFields(DataInput) -
Method in class org.apache.hadoop.io.ArrayWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.BooleanWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.BytesWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.ByteWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.CompressedWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.DoubleWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.FloatWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.GenericWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.IntWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.LongWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.MapWritable
- Deserialize the fields of this object from
in
.
For efficiency, implementations should attempt to re-use storage in the
existing object where possible.
- readFields(DataInput) -
Method in class org.apache.hadoop.io.MD5Hash
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.NullWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.ObjectWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.SequenceFile.Metadata
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.SortedMapWritable
- Deserialize the fields of this object from
in
.
For efficiency, implementations should attempt to re-use storage in the
existing object where possible.
- readFields(DataInput) -
Method in class org.apache.hadoop.io.Text
- deserialize
- readFields(DataInput) -
Method in class org.apache.hadoop.io.TwoDArrayWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.UTF8
- Deprecated.
- readFields(DataInput) -
Method in class org.apache.hadoop.io.VersionedWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.VIntWritable
-
- readFields(DataInput) -
Method in class org.apache.hadoop.io.VLongWritable
-
- readFields(DataInput) -
Method in interface org.apache.hadoop.io.Writable
- Deserialize the fields of this object from
in
.
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.ClusterStatus
-
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.Counters.Counter
- Read the binary representation of the counter
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.Counters.Group
-
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.Counters
- Read a set of groups.
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.FileSplit
-
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.ID
-
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.JobID
-
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.JobProfile
-
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.JobStatus
-
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
- Deserialize the fields of this object from
in
.
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.join.TupleWritable
- Deserialize the fields of this object from
in
.
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.MultiFileSplit
-
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.TaskID
-
- readFields(DataInput) -
Method in class org.apache.hadoop.mapred.TaskReport
-
- readFields(DataInput) -
Method in class org.apache.hadoop.record.Record
-
- readFields(DataInput) -
Method in class org.apache.hadoop.security.UnixUserGroupInformation
- Deserialize this object.
First check if this is a UGI in the string format.
- readFieldsCompressed(DataInput) -
Method in class org.apache.hadoop.io.CompressedWritable
- Subclasses implement this instead of
CompressedWritable.readFields(DataInput)
.
- readFloat(byte[], int) -
Static method in class org.apache.hadoop.io.WritableComparator
- Parse a float from a byte array.
- readFloat(String) -
Method in class org.apache.hadoop.record.BinaryRecordInput
-
- readFloat(String) -
Method in class org.apache.hadoop.record.CsvRecordInput
-
- readFloat(String) -
Method in interface org.apache.hadoop.record.RecordInput
- Read a single-precision float from serialized record.
- readFloat(byte[], int) -
Static method in class org.apache.hadoop.record.Utils
- Parse a float from a byte array.
- readFloat(String) -
Method in class org.apache.hadoop.record.XmlRecordInput
-
- readFrom(Configuration) -
Static method in class org.apache.hadoop.security.UserGroupInformation
- Read a
UserGroupInformation
from conf
- readFromConf(Configuration, String) -
Static method in class org.apache.hadoop.security.UnixUserGroupInformation
- Read a UGI from the given
conf
.
The object is expected to be stored under the property name attr
as a comma-separated string that starts
with the user name followed by the group names.
- readFully(long, byte[], int, int) -
Method in class org.apache.hadoop.fs.BufferedFSInputStream
-
- readFully(long, byte[]) -
Method in class org.apache.hadoop.fs.BufferedFSInputStream
-
- readFully(long, byte[], int, int) -
Method in class org.apache.hadoop.fs.FSDataInputStream
-
- readFully(long, byte[]) -
Method in class org.apache.hadoop.fs.FSDataInputStream
-
- readFully(InputStream, byte[], int, int) -
Static method in class org.apache.hadoop.fs.FSInputChecker
- A utility function that tries to read up to
len
bytes from
stm
- readFully(long, byte[], int, int) -
Method in class org.apache.hadoop.fs.FSInputStream
-
- readFully(long, byte[]) -
Method in class org.apache.hadoop.fs.FSInputStream
-
- readFully(long, byte[], int, int) -
Method in interface org.apache.hadoop.fs.PositionedReadable
- Read the specified number of bytes, from a given
position within a file.
- readFully(long, byte[]) -
Method in interface org.apache.hadoop.fs.PositionedReadable
- Read a number of bytes equal to the length of the buffer, from a given
position within a file.
- readFully(InputStream, byte[], int, int) -
Static method in class org.apache.hadoop.io.IOUtils
- Reads len bytes in a loop.
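The readFully helpers indexed above all share the same loop shape: keep calling read() until len bytes have arrived, failing on premature EOF. A self-contained sketch of that loop (an illustration, not the Hadoop source):

```java
import java.io.*;

public class ReadFullySketch {
    // Read exactly len bytes from stm into buf[off..off+len), looping
    // because a single read() may return fewer bytes than requested.
    static void readFully(InputStream stm, byte[] buf, int off, int len)
            throws IOException {
        while (len > 0) {
            int n = stm.read(buf, off, len);
            if (n < 0) {
                throw new EOFException("Premature EOF: " + len + " bytes missing");
            }
            off += n;
            len -= n;
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] src = "hello world".getBytes("UTF-8");
        byte[] dst = new byte[5];
        readFully(new ByteArrayInputStream(src), dst, 0, 5);
        System.out.println(new String(dst, "UTF-8"));  // hello
    }
}
```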
- readInt(byte[], int) -
Static method in class org.apache.hadoop.io.WritableComparator
- Parse an integer from a byte array.
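Parsing a big-endian integer out of a raw byte buffer, as the comparator helpers above do, is a shift-and-or over four bytes; floats reuse the same parse via their IEEE-754 bit pattern. A sketch of the idea (not the Hadoop source itself):

```java
public class ParseSketch {
    // Big-endian int at the given offset: most significant byte first.
    static int readInt(byte[] bytes, int off) {
        return ((bytes[off]     & 0xFF) << 24)
             | ((bytes[off + 1] & 0xFF) << 16)
             | ((bytes[off + 2] & 0xFF) << 8)
             |  (bytes[off + 3] & 0xFF);
    }

    // Floats are stored as their IEEE-754 bit pattern.
    static float readFloat(byte[] bytes, int off) {
        return Float.intBitsToFloat(readInt(bytes, off));
    }

    public static void main(String[] args) {
        byte[] buf = {0, 0, 1, 44};          // 300, big-endian
        System.out.println(readInt(buf, 0)); // 300
    }
}
```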
- readInt(String) -
Method in class org.apache.hadoop.record.BinaryRecordInput
-
- readInt(String) -
Method in class org.apache.hadoop.record.CsvRecordInput
-
- readInt(String) -
Method in interface org.apache.hadoop.record.RecordInput
- Read an integer from serialized record.
- readInt(String) -
Method in class org.apache.hadoop.record.XmlRecordInput
-
- readLine(Text, int, int) -
Method in class org.apache.hadoop.mapred.LineRecordReader.LineReader
- Read from the InputStream into the given Text.
- readLine(Text, int) -
Method in class org.apache.hadoop.mapred.LineRecordReader.LineReader
- Read from the InputStream into the given Text.
- readLine(Text) -
Method in class org.apache.hadoop.mapred.LineRecordReader.LineReader
- Read from the InputStream into the given Text.
- readLine(LineRecordReader.LineReader, Text) -
Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
- Read a utf8 encoded line from a data input stream.
- readLong(byte[], int) -
Static method in class org.apache.hadoop.io.WritableComparator
- Parse a long from a byte array.
- readLong(String) -
Method in class org.apache.hadoop.record.BinaryRecordInput
-
- readLong(String) -
Method in class org.apache.hadoop.record.CsvRecordInput
-
- readLong(String) -
Method in interface org.apache.hadoop.record.RecordInput
- Read a long integer from serialized record.
- readLong(String) -
Method in class org.apache.hadoop.record.XmlRecordInput
-
- readMetadataOp -
Variable in class org.apache.hadoop.dfs.datanode.metrics.DataNodeMetrics
-
- readObject(DataInput, Configuration) -
Static method in class org.apache.hadoop.io.ObjectWritable
- Read a
Writable
, String
, primitive type, or an array of
the preceding.
- readObject(DataInput, ObjectWritable, Configuration) -
Static method in class org.apache.hadoop.io.ObjectWritable
- Read a
Writable
, String
, primitive type, or an array of
the preceding.
- readRAMFiles(DataInput, RAMDirectory) -
Static method in class org.apache.hadoop.contrib.index.lucene.RAMDirectoryUtil
- Read a number of files from a data input to a ram directory.
- readsFromLocalClient -
Variable in class org.apache.hadoop.dfs.datanode.metrics.DataNodeMetrics
-
- readsFromRemoteClient -
Variable in class org.apache.hadoop.dfs.datanode.metrics.DataNodeMetrics
-
- readString(DataInput) -
Static method in class org.apache.hadoop.io.Text
- Read a UTF8 encoded string from in
- readString(DataInput) -
Static method in class org.apache.hadoop.io.UTF8
- Deprecated. Read a UTF-8 encoded string.
- readString(DataInput) -
Static method in class org.apache.hadoop.io.WritableUtils
-
- readString(String) -
Method in class org.apache.hadoop.record.BinaryRecordInput
-
- readString(String) -
Method in class org.apache.hadoop.record.CsvRecordInput
-
- readString(String) -
Method in interface org.apache.hadoop.record.RecordInput
- Read a UTF-8 encoded string from serialized record.
- readString(String) -
Method in class org.apache.hadoop.record.XmlRecordInput
-
- readStringArray(DataInput) -
Static method in class org.apache.hadoop.io.WritableUtils
-
- readUnsignedShort(byte[], int) -
Static method in class org.apache.hadoop.io.WritableComparator
- Parse an unsigned short from a byte array.
- readVInt(byte[], int) -
Static method in class org.apache.hadoop.io.WritableComparator
- Reads a zero-compressed encoded integer from a byte array and returns it.
- readVInt(DataInput) -
Static method in class org.apache.hadoop.io.WritableUtils
- Reads a zero-compressed encoded integer from input stream and returns it.
- readVInt(byte[], int) -
Static method in class org.apache.hadoop.record.Utils
- Reads a zero-compressed encoded integer from a byte array and returns it.
- readVInt(DataInput) -
Static method in class org.apache.hadoop.record.Utils
- Reads a zero-compressed encoded integer from a stream and returns it.
- readVLong(byte[], int) -
Static method in class org.apache.hadoop.io.WritableComparator
- Reads a zero-compressed encoded long from a byte array and returns it.
- readVLong(DataInput) -
Static method in class org.apache.hadoop.io.WritableUtils
- Reads a zero-compressed encoded long from input stream and returns it.
- readVLong(byte[], int) -
Static method in class org.apache.hadoop.record.Utils
- Reads a zero-compressed encoded long from a byte array and returns it.
- readVLong(DataInput) -
Static method in class org.apache.hadoop.record.Utils
- Reads a zero-compressed encoded long from a stream and returns it.
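The zero-compressed encoding behind the readVInt/readVLong entries above stores small values in a single byte and larger ones as a header byte (encoding sign and length) followed by the magnitude's bytes, most significant first. The sketch below is an illustrative reimplementation modeled on the WritableUtils scheme; treat the exact byte layout as an assumption rather than a guarantee of the Hadoop wire format, though the round-trip is self-consistent either way.

```java
import java.io.*;

public class VLongSketch {
    // Values in [-112, 127] take one byte; otherwise the first byte
    // encodes sign and byte count, then the magnitude follows big-endian.
    static void writeVLong(DataOutput out, long i) throws IOException {
        if (i >= -112 && i <= 127) {
            out.writeByte((byte) i);
            return;
        }
        int len = -112;
        if (i < 0) {
            i ^= -1L;          // flip bits; the sign lives in the header byte
            len = -120;
        }
        long tmp = i;
        while (tmp != 0) {     // count magnitude bytes
            tmp >>= 8;
            len--;
        }
        out.writeByte((byte) len);
        len = (len < -120) ? -(len + 120) : -(len + 112);
        for (int idx = len; idx != 0; idx--) {
            int shift = (idx - 1) * 8;
            out.writeByte((byte) ((i >> shift) & 0xFF));
        }
    }

    static long readVLong(DataInput in) throws IOException {
        byte first = in.readByte();
        if (first >= -112) return first;   // one-byte case
        boolean negative = first < -120;
        int len = negative ? (-119 - first) : (-111 - first);
        long i = 0;
        for (int idx = 0; idx < len - 1; idx++) {
            i = (i << 8) | (in.readByte() & 0xFF);
        }
        return negative ? (i ^ -1L) : i;
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        writeVLong(new DataOutputStream(bos), 300);
        System.out.println("300 took " + bos.size() + " bytes");  // 3 bytes
    }
}
```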
- READY -
Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
-
- Record() -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- Record - Class in org.apache.hadoop.record
- Abstract class that is extended by generated classes.
- Record() -
Constructor for class org.apache.hadoop.record.Record
-
- RECORD_INPUT -
Static variable in class org.apache.hadoop.record.compiler.Consts
-
- RECORD_OUTPUT -
Static variable in class org.apache.hadoop.record.compiler.Consts
-
- RECORD_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- RecordComparator - Class in org.apache.hadoop.record
- A raw record comparator base class
- RecordComparator(Class) -
Constructor for class org.apache.hadoop.record.RecordComparator
- Construct a raw
Record
comparison implementation.
- RecordInput - Interface in org.apache.hadoop.record
- Interface that all the Deserializers have to implement.
- RecordList() -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- RecordOutput - Interface in org.apache.hadoop.record
- Interface that all the serializers have to implement.
- RecordReader<K,V> - Interface in org.apache.hadoop.mapred
RecordReader
reads <key, value> pairs from an
InputSplit
.
- RecordTypeInfo - Class in org.apache.hadoop.record.meta
- A record's Type Information object which can read/write itself.
- RecordTypeInfo() -
Constructor for class org.apache.hadoop.record.meta.RecordTypeInfo
- Create an empty RecordTypeInfo object.
- RecordTypeInfo(String) -
Constructor for class org.apache.hadoop.record.meta.RecordTypeInfo
- Create a RecordTypeInfo object representing a record with the given name
- RecordWriter<K,V> - Interface in org.apache.hadoop.mapred
RecordWriter
writes the output <key, value> pairs
to an output file.
- recoverBlock(Block, DatanodeInfo[]) -
Method in class org.apache.hadoop.dfs.DataNode
- Start generation-stamp recovery for specified block
- redCmd_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- reduce(Shard, Iterator<IntermediateForm>, OutputCollector<Shard, IntermediateForm>, Reporter) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateCombiner
-
- reduce(Shard, Iterator<IntermediateForm>, OutputCollector<Shard, Text>, Reporter) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateReducer
-
- reduce(Object, Iterator, OutputCollector, Reporter) -
Method in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
- reduce(Object, Iterator, OutputCollector, Reporter) -
Method in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- reduce(LongWritable, Iterator<LongWritable>, OutputCollector<WritableComparable, Writable>, Reporter) -
Method in class org.apache.hadoop.examples.PiEstimator.PiReducer
- Reduce method.
- reduce(IntWritable, Iterator<IntWritable>, OutputCollector<IntWritable, IntWritable>, Reporter) -
Method in class org.apache.hadoop.examples.SleepJob
-
- reduce(Text, Iterator<IntWritable>, OutputCollector<Text, IntWritable>, Reporter) -
Method in class org.apache.hadoop.examples.WordCount.Reduce
-
- reduce(Text, Iterator<Text>, OutputCollector<Text, Text>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
- Combines values for a given key.
- reduce(Text, Iterator<Text>, OutputCollector<Text, Text>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
- Do nothing.
- reduce(Text, Iterator<Text>, OutputCollector<Text, Text>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
-
- reduce(Text, Iterator<Text>, OutputCollector<Text, Text>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.FieldSelectionMapReduce
-
- reduce(K, Iterator<V>, OutputCollector<K, V>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.IdentityReducer
- Writes all keys and values directly to output.
- reduce(K, Iterator<LongWritable>, OutputCollector<K, LongWritable>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.LongSumReducer
-
- reduce(K2, Iterator<V2>, OutputCollector<K3, V3>, Reporter) -
Method in interface org.apache.hadoop.mapred.Reducer
- Reduces values for a given key.
- reduce(Object, Iterator, OutputCollector, Reporter) -
Method in class org.apache.hadoop.streaming.PipeReducer
-
- reduceDebugSpec_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- reduceOutFieldSeparator -
Variable in class org.apache.hadoop.streaming.PipeMapRed
-
- reduceProgress() -
Method in class org.apache.hadoop.mapred.JobStatus
-
- reduceProgress() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Get the progress of the job's reduce-tasks, as a float between 0.0
and 1.0.
- Reducer<K2,V2,K3,V3> - Interface in org.apache.hadoop.mapred
- Reduces a set of intermediate values which share a key to a smaller set of
values.
- ReflectionUtils - Class in org.apache.hadoop.util
- General reflection utils
- ReflectionUtils() -
Constructor for class org.apache.hadoop.util.ReflectionUtils
-
- refresh() -
Method in class org.apache.hadoop.util.HostsFileReader
-
- refreshNodes() -
Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
-
- refreshNodes() -
Method in class org.apache.hadoop.dfs.DFSAdmin
- Command to ask the namenode to reread the hosts and excluded hosts
file.
- refreshNodes() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
-
- refreshNodes() -
Method in class org.apache.hadoop.dfs.NameNode
-
- RegexMapper<K> - Class in org.apache.hadoop.mapred.lib
- A
Mapper
that extracts text matching a regular expression.
- RegexMapper() -
Constructor for class org.apache.hadoop.mapred.lib.RegexMapper
-
- regexpEscape(String) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- register(DatanodeRegistration) -
Method in class org.apache.hadoop.dfs.NameNode
-
- registerMBean(String, String, Object) -
Static method in class org.apache.hadoop.metrics.util.MBeanUtil
- Register the mbean using our standard MBeanName format
"hadoop.dfs:service=&lt;serviceName&gt;,name=&lt;nameName&gt;",
where &lt;serviceName&gt; and &lt;nameName&gt; are the supplied parameters.
- registerNotification(JobConf, JobStatus) -
Static method in class org.apache.hadoop.mapred.JobEndNotifier
-
- registerUpdater(Updater) -
Method in interface org.apache.hadoop.metrics.MetricsContext
- Registers a callback to be called at regular time intervals, as
determined by the implementation-class specific configuration.
- registerUpdater(Updater) -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Registers a callback to be called at time intervals determined by
the configuration.
- ReInit(InputStream) -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- ReInit(InputStream, String) -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- ReInit(Reader) -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- ReInit(RccTokenManager) -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- ReInit(SimpleCharStream) -
Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- ReInit(SimpleCharStream, int) -
Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- ReInit(Reader, int, int, int) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- ReInit(Reader, int, int) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- ReInit(Reader) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- ReInit(InputStream, String, int, int, int) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- ReInit(InputStream, int, int, int) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- ReInit(InputStream, String) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- ReInit(InputStream) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- ReInit(InputStream, String, int, int) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- ReInit(InputStream, int, int) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- release(Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
- Deprecated.
- release(Path) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
- Deprecated.
- releaseCache(URI, Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- This is the opposite of getlocalcache.
- remaining -
Variable in class org.apache.hadoop.dfs.DatanodeInfo
-
- RemoteException - Exception in org.apache.hadoop.ipc
-
- RemoteException(String, String) -
Constructor for exception org.apache.hadoop.ipc.RemoteException
-
- remove() -
Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
-
- remove(Object) -
Method in class org.apache.hadoop.io.MapWritable
-
- remove(Object) -
Method in class org.apache.hadoop.io.SortedMapWritable
-
- remove() -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Removes, from the buffered data table, all rows having tags
that equal the tags that have been set on this record.
- remove(MetricsRecordImpl) -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Called by MetricsRecordImpl.remove().
- remove() -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Removes the row, if it exists, in the buffered data table having tags
that equal the tags that have been set on this record.
- remove(MetricsRecordImpl) -
Method in class org.apache.hadoop.metrics.spi.NullContext
- Do-nothing version of remove
- remove(MetricsRecordImpl) -
Method in class org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
- Do-nothing version of remove
- remove(Node) -
Method in class org.apache.hadoop.net.NetworkTopology
- Remove a node
Update node counter &amp; rack counter if necessary
- removeAttribute(String) -
Method in class org.apache.hadoop.metrics.ContextFactory
- Removes the named attribute if it exists.
- removeSuffix(String, String) -
Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
- Removes a suffix from a filename, if it has it.
- removeTag(String) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Removes any tag of the specified name.
- removeTag(String) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Removes any tag of the specified name.
- rename(Path, Path) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
- Rename files/dirs
- rename(Path, Path) -
Method in class org.apache.hadoop.dfs.HftpFileSystem
-
- rename(String, String) -
Method in class org.apache.hadoop.dfs.NameNode
-
- rename(Path, Path) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
- Rename files/dirs
- rename(Path, Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Renames Path src to Path dst.
- rename(Path, Path) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Renames Path src to Path dst.
- rename(Path, Path) -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- rename(Path, Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- rename(Path, Path) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- rename(Path, Path) -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- rename(Path, Path) -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
-
- rename(FileSystem, String, String) -
Static method in class org.apache.hadoop.io.MapFile
- Renames an existing map directory.
- renameFile(String, String) -
Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
-
- renewLease(String) -
Method in class org.apache.hadoop.dfs.NameNode
-
- replaceBlockOp -
Variable in class org.apache.hadoop.dfs.datanode.metrics.DataNodeMetrics
-
- replaceFile(File, File) -
Static method in class org.apache.hadoop.fs.FileUtil
- Move the src file to the name specified by target.
- replay(X) -
Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
-
- replay(TupleWritable) -
Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
-
- replay(V) -
Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
-
- replay(U) -
Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
-
- replay(T) -
Method in interface org.apache.hadoop.mapred.join.ResetableIterator
- Assign last value returned to actual.
- replay(X) -
Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
-
- report() -
Method in class org.apache.hadoop.contrib.utils.join.JobBase
- log the counters
- report() -
Method in class org.apache.hadoop.dfs.DFSAdmin
- Gives a report on how the FileSystem is doing.
- reportBadBlocks(LocatedBlock[]) -
Method in class org.apache.hadoop.dfs.NameNode
- The client has detected an error on the specified located blocks
and is reporting them to the server.
- reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) -
Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
- We need to find the blocks that didn't match.
- reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
- We need to find the blocks that didn't match.
- reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
- Report a checksum error to the file system.
- reportChecksumFailure(Path, FSDataInputStream, long, FSDataInputStream, long) -
Method in class org.apache.hadoop.fs.LocalFileSystem
- Moves files to a bad file directory on the same device, so that their
storage will not be reused.
- reportDiagnosticInfo(String, String) -
Method in class org.apache.hadoop.mapred.TaskTracker
- Deprecated.
- reportDiagnosticInfo(TaskAttemptID, String) -
Method in class org.apache.hadoop.mapred.TaskTracker
- Called when the task dies before completion, and we want to report back
diagnostic info
- reporter -
Variable in class org.apache.hadoop.contrib.utils.join.DataJoinMapperBase
-
- reporter -
Variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- Reporter - Interface in org.apache.hadoop.mapred
- A facility for Map-Reduce applications to report progress and update
counters, status information etc.
- reportTaskTrackerError(String, String, String) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- requiresLayout() -
Method in class org.apache.hadoop.metrics.jvm.EventCounter
-
- reserveSpaceWithCheckSum(Path, long) -
Method in class org.apache.hadoop.fs.InMemoryFileSystem
- Deprecated. Register a file with its size.
- reset() -
Method in class org.apache.hadoop.contrib.utils.join.ArrayListBackedIterator
-
- reset() -
Method in interface org.apache.hadoop.contrib.utils.join.ResetableIterator
-
- reset() -
Method in class org.apache.hadoop.dfs.DataChecksum
-
- reset() -
Method in class org.apache.hadoop.fs.FSInputChecker
-
- reset() -
Method in class org.apache.hadoop.fs.ftp.FTPInputStream
-
- reset() -
Method in interface org.apache.hadoop.io.compress.Compressor
- Resets compressor so that a new set of input data can be processed.
- reset() -
Method in interface org.apache.hadoop.io.compress.Decompressor
- Resets decompressor so that a new set of input data can be processed.
- reset() -
Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
-
- reset() -
Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
-
- reset() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
-
- reset() -
Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
-
- reset(byte[], int) -
Method in class org.apache.hadoop.io.DataInputBuffer
- Resets the data that the buffer reads.
- reset(byte[], int, int) -
Method in class org.apache.hadoop.io.DataInputBuffer
- Resets the data that the buffer reads.
- reset() -
Method in class org.apache.hadoop.io.DataOutputBuffer
- Resets the buffer to empty.
- reset(byte[], int) -
Method in class org.apache.hadoop.io.InputBuffer
- Resets the data that the buffer reads.
- reset(byte[], int, int) -
Method in class org.apache.hadoop.io.InputBuffer
- Resets the data that the buffer reads.
- reset() -
Method in class org.apache.hadoop.io.MapFile.Reader
- Re-positions the reader before its first key.
- reset() -
Method in class org.apache.hadoop.io.OutputBuffer
- Resets the buffer to empty.
- reset() -
Method in class org.apache.hadoop.mapred.join.ArrayListBackedIterator
-
- reset() -
Method in class org.apache.hadoop.mapred.join.JoinRecordReader.JoinDelegationIterator
-
- reset() -
Method in class org.apache.hadoop.mapred.join.MultiFilterRecordReader.MultiFilterDelegationIterator
-
- reset() -
Method in class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
-
- reset() -
Method in interface org.apache.hadoop.mapred.join.ResetableIterator
- Set iterator to return to the start of its range.
- reset() -
Method in class org.apache.hadoop.mapred.join.StreamBackedIterator
-
- reset() -
Method in class org.apache.hadoop.mapred.lib.aggregate.DoubleValueSum
- reset the aggregator
- reset() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMax
- reset the aggregator
- reset() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueMin
- reset the aggregator
- reset() -
Method in class org.apache.hadoop.mapred.lib.aggregate.LongValueSum
- reset the aggregator
- reset() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
- reset the aggregator
- reset() -
Method in class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
- reset the aggregator
- reset() -
Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
- reset the aggregator
- reset() -
Method in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregator
- reset the aggregator
- reset() -
Method in class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
- reset the aggregator
- reset(BytesWritable) -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- reset() -
Method in class org.apache.hadoop.record.Buffer
- Reset the buffer to 0 size
- ResetableIterator - Interface in org.apache.hadoop.contrib.utils.join
- This interface defines an iterator interface that will help the reducer class
for re-grouping the values in the values iterator of the reduce method
according the their source tags.
- ResetableIterator<T extends Writable> - Interface in org.apache.hadoop.mapred.join
- This defines an interface to a stateful Iterator that can replay elements
added to it directly.
- ResetableIterator.EMPTY<U extends Writable> - Class in org.apache.hadoop.mapred.join
-
- ResetableIterator.EMPTY() -
Constructor for class org.apache.hadoop.mapred.join.ResetableIterator.EMPTY
-
- resetAllMinMax() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeMetrics
-
- resetAllMinMax() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
-
- resetAllMinMax() -
Method in interface org.apache.hadoop.dfs.datanode.metrics.DataNodeStatisticsMBean
- Reset all min max times
- resetAllMinMax() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
-
- resetAllMinMax() -
Method in interface org.apache.hadoop.dfs.namenode.metrics.NameNodeStatisticsMBean
- Reset all min max times
- resetAllMinMax() -
Method in class org.apache.hadoop.dfs.NameNodeMetrics
-
- resetAllMinMax() -
Method in interface org.apache.hadoop.ipc.metrics.RpcMgtMBean
- Reset all min max times
- resetMinMax() -
Method in class org.apache.hadoop.metrics.util.MetricsTimeVaryingRate
- Reset the min max values
- resetState() -
Method in class org.apache.hadoop.io.compress.CompressionInputStream
- Reset the decompressor to its initial state and discard any buffered data,
as the underlying stream may have been repositioned.
- resetState() -
Method in class org.apache.hadoop.io.compress.CompressionOutputStream
- Reset the compression to the initial state.
- resetState() -
Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
-
- resetState() -
Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
-
- resolve(List<String>) -
Method in interface org.apache.hadoop.net.DNSToSwitchMapping
- Resolves a list of DNS-names/IP-addresses and returns back a list of
switch information (network paths).
- resolve(List<String>) -
Method in class org.apache.hadoop.net.ScriptBasedMapping
-
- resolveAndAddToTopology(String) -
Method in class org.apache.hadoop.mapred.JobTracker
-
- resume() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
- resume the suspended thread
- retrieveBlock(Block, long) -
Method in interface org.apache.hadoop.fs.s3.FileSystemStore
-
- retrieveINode(Path) -
Method in interface org.apache.hadoop.fs.s3.FileSystemStore
-
- RETRY_FOREVER -
Static variable in class org.apache.hadoop.io.retry.RetryPolicies
-
Keep trying forever.
- retryByException(RetryPolicy, Map<Class<? extends Exception>, RetryPolicy>) -
Static method in class org.apache.hadoop.io.retry.RetryPolicies
-
Set a default policy with some explicit handlers for specific exceptions.
- retryByRemoteException(RetryPolicy, Map<Class<? extends Exception>, RetryPolicy>) -
Static method in class org.apache.hadoop.io.retry.RetryPolicies
-
A retry policy for RemoteException
Set a default policy with some explicit handlers for specific exceptions.
- RetryPolicies - Class in org.apache.hadoop.io.retry
-
A collection of useful implementations of
RetryPolicy
.
- RetryPolicies() -
Constructor for class org.apache.hadoop.io.retry.RetryPolicies
-
- RetryPolicy - Interface in org.apache.hadoop.io.retry
-
Specifies a policy for retrying method failures.
- RetryProxy - Class in org.apache.hadoop.io.retry
-
A factory for creating retry proxies.
- RetryProxy() -
Constructor for class org.apache.hadoop.io.retry.RetryProxy
-
- retryUpToMaximumCountWithFixedSleep(int, long, TimeUnit) -
Static method in class org.apache.hadoop.io.retry.RetryPolicies
-
Keep trying a limited number of times, waiting a fixed time between attempts,
and then fail by re-throwing the exception.
- retryUpToMaximumCountWithProportionalSleep(int, long, TimeUnit) -
Static method in class org.apache.hadoop.io.retry.RetryPolicies
-
Keep trying a limited number of times, waiting a growing amount of time between attempts,
and then fail by re-throwing the exception.
- retryUpToMaximumTimeWithFixedSleep(long, long, TimeUnit) -
Static method in class org.apache.hadoop.io.retry.RetryPolicies
-
Keep trying for a maximum time, waiting a fixed time between attempts,
and then fail by re-throwing the exception.
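The fixed-sleep policies above amount to a bounded retry loop: attempt the call, sleep on transient failure, and rethrow the last exception once attempts are exhausted. A minimal sketch of that shape (using `java.util.concurrent.Callable` for the task; this is not Hadoop's actual RetryPolicy/RetryProxy machinery):

```java
import java.util.concurrent.Callable;

public class RetrySketch {
    // Invoke task up to maxRetries+1 times, sleeping sleepMillis between
    // attempts; rethrow the last failure once attempts are exhausted.
    static <T> T retryWithFixedSleep(Callable<T> task, int maxRetries,
                                     long sleepMillis) throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxRetries) Thread.sleep(sleepMillis);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Fails twice, then succeeds on the third attempt.
        String result = retryWithFixedSleep(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient");
            return "ok";
        }, 5, 1L);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```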
- returnCompressor(Compressor) -
Static method in class org.apache.hadoop.io.compress.CodecPool
- Return the
Compressor
to the pool.
- returnDecompressor(Decompressor) -
Static method in class org.apache.hadoop.io.compress.CodecPool
- Return the
Decompressor
to the pool.
- reverseDns(InetAddress, String) -
Static method in class org.apache.hadoop.net.DNS
- Returns the hostname associated with the specified IP address by the
provided nameserver.
- RIO_PREFIX -
Static variable in class org.apache.hadoop.record.compiler.Consts
-
- rjustify(String, int) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- rollEditLog() -
Method in class org.apache.hadoop.dfs.NameNode
- Roll the edit log.
- rollFsImage() -
Method in class org.apache.hadoop.dfs.NameNode
- Roll the image
- ROOT -
Static variable in class org.apache.hadoop.net.NodeBase
-
- RoundRobinDistributionPolicy - Class in org.apache.hadoop.contrib.index.example
- Choose a shard for each insert in a round-robin fashion.
- RoundRobinDistributionPolicy() -
Constructor for class org.apache.hadoop.contrib.index.example.RoundRobinDistributionPolicy
-
- RPC - Class in org.apache.hadoop.ipc
- A simple RPC mechanism.
- RPC.Server - Class in org.apache.hadoop.ipc
- An RPC Server.
- RPC.Server(Object, Configuration, String, int) -
Constructor for class org.apache.hadoop.ipc.RPC.Server
- Construct an RPC server.
- RPC.Server(Object, Configuration, String, int, int, boolean) -
Constructor for class org.apache.hadoop.ipc.RPC.Server
- Construct an RPC server.
- RPC.VersionMismatch - Exception in org.apache.hadoop.ipc
- A version mismatch for the RPC protocol.
- RPC.VersionMismatch(String, long, long) -
Constructor for exception org.apache.hadoop.ipc.RPC.VersionMismatch
- Create a version mismatch exception
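A rough sketch of serving a versioned protocol over Hadoop RPC. The getServer/getProxy helpers are assumptions based on this era's RPC API; the EchoProtocol interface is a placeholder. A client whose protocol version disagrees with the server's would see RPC.VersionMismatch.

```java
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;
import org.apache.hadoop.ipc.VersionedProtocol;

public class RpcExample {
  // Protocols carry a version number checked on every connection.
  public interface EchoProtocol extends VersionedProtocol {
    long VERSION = 1L;
    String echo(String message);
  }

  public static class EchoImpl implements EchoProtocol {
    public String echo(String message) { return message; }
    public long getProtocolVersion(String protocol, long clientVersion) {
      return VERSION;
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    RPC.Server server = RPC.getServer(new EchoImpl(), "localhost", 9900, conf);
    server.start();
    EchoProtocol proxy = (EchoProtocol) RPC.getProxy(
        EchoProtocol.class, EchoProtocol.VERSION,
        new InetSocketAddress("localhost", 9900), conf);
    System.out.println(proxy.echo("ping"));
    server.stop();
  }
}
```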
- RpcMetrics - Class in org.apache.hadoop.ipc.metrics
- This class is for maintaining the various RPC statistics
and publishing them through the metrics interfaces.
- RpcMetrics(String, String, Server) -
Constructor for class org.apache.hadoop.ipc.metrics.RpcMetrics
-
- rpcMetrics -
Variable in class org.apache.hadoop.ipc.Server
-
- RpcMgtMBean - Interface in org.apache.hadoop.ipc.metrics
- This is the JMX management interface for the RPC layer.
- rpcProcessingTime -
Variable in class org.apache.hadoop.ipc.metrics.RpcMetrics
-
- rpcQueueTime -
Variable in class org.apache.hadoop.ipc.metrics.RpcMetrics
- The metrics variables are public: they can be set directly by calling their set/inc methods, and they can also be read directly - e.g.
- rrCstrMap -
Static variable in class org.apache.hadoop.mapred.join.Parser.Node
-
- RTI_FILTER -
Static variable in class org.apache.hadoop.record.compiler.Consts
-
- RTI_FILTER_FIELDS -
Static variable in class org.apache.hadoop.record.compiler.Consts
-
- RTI_VAR -
Static variable in class org.apache.hadoop.record.compiler.Consts
-
- run(Configuration, Path[], Path, int, Shard[]) -
Method in interface org.apache.hadoop.contrib.index.mapred.IIndexUpdater
- Create a Map/Reduce job configuration and run the Map/Reduce job to
analyze documents and update Lucene instances in parallel.
- run(Configuration, Path[], Path, int, Shard[]) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdater
-
- run(String[]) -
Method in class org.apache.hadoop.dfs.Balancer
- main method of Balancer
- run() -
Method in class org.apache.hadoop.dfs.DataBlockScanner
-
- run() -
Method in class org.apache.hadoop.dfs.DataNode
- No matter what kind of exception we get, keep retrying to offerService().
- run(String[]) -
Method in class org.apache.hadoop.dfs.DFSAdmin
-
- run(String[]) -
Method in class org.apache.hadoop.dfs.DFSck
-
- run(String[]) -
Method in class org.apache.hadoop.dfs.NamenodeFsck
-
- run() -
Method in class org.apache.hadoop.dfs.SecondaryNameNode
-
- run(String[]) -
Method in class org.apache.hadoop.examples.dancing.DistributedPentomino
-
- run(String[]) -
Method in class org.apache.hadoop.examples.Grep
-
- run(String[]) -
Method in class org.apache.hadoop.examples.Join
- The main driver for sort program.
- run(String[]) -
Method in class org.apache.hadoop.examples.MultiFileWordCount
-
- run(String[]) -
Method in class org.apache.hadoop.examples.PiEstimator
- Launches all the tasks in order.
- run(String[]) -
Method in class org.apache.hadoop.examples.RandomTextWriter
- This is the main routine for launching a distributed random write job.
- run(String[]) -
Method in class org.apache.hadoop.examples.RandomWriter
- This is the main routine for launching a distributed random write job.
- run(int, int, long, long, long, long) -
Method in class org.apache.hadoop.examples.SleepJob
-
- run(String[]) -
Method in class org.apache.hadoop.examples.SleepJob
-
- run(String[]) -
Method in class org.apache.hadoop.examples.Sort
- The main driver for sort program.
- run(String[]) -
Method in class org.apache.hadoop.examples.WordCount
- The main driver for word count map/reduce program.
- run(String[]) -
Method in class org.apache.hadoop.fs.FsShell
- run
- run(String[]) -
Method in class org.apache.hadoop.fs.s3.MigrationTool
-
- run(Path) -
Method in class org.apache.hadoop.fs.shell.Command
- Execute the command on the input path
- run(Path) -
Method in class org.apache.hadoop.fs.shell.Count
-
- run(String[]) -
Method in class org.apache.hadoop.mapred.JobClient
-
- run() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
- The main loop for the thread.
- run() -
Method in class org.apache.hadoop.mapred.JobHistory.HistoryCleaner
- Cleans up history data.
- run(String[]) -
Method in class org.apache.hadoop.mapred.JobShell
- run method from Tool
- run(RecordReader<K1, V1>, OutputCollector<K2, V2>, Reporter) -
Method in class org.apache.hadoop.mapred.lib.MultithreadedMapRunner
-
- run(RecordReader<K1, V1>, OutputCollector<K2, V2>, Reporter) -
Method in interface org.apache.hadoop.mapred.MapRunnable
- Start mapping input <key, value> pairs.
- run(RecordReader<K1, V1>, OutputCollector<K2, V2>, Reporter) -
Method in class org.apache.hadoop.mapred.MapRunner
-
- run() -
Method in class org.apache.hadoop.mapred.TaskTracker
- The server retry loop.
- run() -
Method in class org.apache.hadoop.util.Shell
- Check to see if a command needs to be executed, and execute it if needed.
- run(String[]) -
Method in interface org.apache.hadoop.util.Tool
- Execute the command with the given arguments.
- run(Configuration, Tool, String[]) -
Static method in class org.apache.hadoop.util.ToolRunner
- Runs the given
Tool
by Tool.run(String[])
, after
parsing with the given generic arguments.
- run(Tool, String[]) -
Static method in class org.apache.hadoop.util.ToolRunner
- Runs the
Tool
with its Configuration
.
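The Tool/ToolRunner entries above follow a standard driver pattern: ToolRunner parses the generic options (-D key=value, -conf, -fs, ...) into the Configuration before delegating to run(). A minimal sketch (MyTool and example.key are placeholders):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyTool extends Configured implements Tool {
  public int run(String[] args) throws Exception {
    // getConf() is already populated with any generic options.
    Configuration conf = getConf();
    System.out.println("example.key = " + conf.get("example.key", "unset"));
    return 0; // exit status
  }

  public static void main(String[] args) throws Exception {
    int status = ToolRunner.run(new Configuration(), new MyTool(), args);
    System.exit(status);
  }
}
```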
- runAll() -
Method in class org.apache.hadoop.fs.shell.Command
- For each source path, execute the command
- RunJar - Class in org.apache.hadoop.util
- Run a Hadoop job jar.
- RunJar() -
Constructor for class org.apache.hadoop.util.RunJar
-
- runJob(JobConf) -
Static method in class org.apache.hadoop.contrib.utils.join.DataJoinJob
- Submit/run a map/reduce job.
- runJob(JobConf) -
Static method in class org.apache.hadoop.mapred.JobClient
- Utility that submits a job, then polls for progress until the job is
complete.
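The blocking submit-and-poll call above is the usual way a driver launches a job. A sketch under the old mapred API; the input/output paths and the driver class passed in are placeholders:

```java
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.FileInputFormat;
import org.apache.hadoop.mapred.FileOutputFormat;
import org.apache.hadoop.mapred.JobClient;
import org.apache.hadoop.mapred.JobConf;

public class SubmitExample {
  public static void submit(Class<?> driver) throws Exception {
    JobConf conf = new JobConf(driver);      // jar is located via the driver class
    conf.setJobName("example");
    conf.setOutputKeyClass(Text.class);
    conf.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(conf, new Path("in"));
    FileOutputFormat.setOutputPath(conf, new Path("out"));
    // Submits the job, polls for progress until it is complete,
    // and throws an exception if the job fails.
    JobClient.runJob(conf);
  }
}
```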
- RUNNING -
Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
-
- RUNNING -
Static variable in class org.apache.hadoop.mapred.JobStatus
-
- running_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- RunningJob - Interface in org.apache.hadoop.mapred
RunningJob
is the user-interface to query for details on a
running Map-Reduce job.
- runningJobs() -
Method in class org.apache.hadoop.mapred.JobTracker
-
S
- S3Credentials - Class in org.apache.hadoop.fs.s3
-
Extracts AWS credentials from the filesystem URI or configuration.
- S3Credentials() -
Constructor for class org.apache.hadoop.fs.s3.S3Credentials
-
- S3Exception - Exception in org.apache.hadoop.fs.s3
- Thrown if there is a problem communicating with Amazon S3.
- S3Exception(Throwable) -
Constructor for exception org.apache.hadoop.fs.s3.S3Exception
-
- S3FileSystem - Class in org.apache.hadoop.fs.s3
-
A block-based
FileSystem
backed by
Amazon S3. - S3FileSystem() -
Constructor for class org.apache.hadoop.fs.s3.S3FileSystem
-
- S3FileSystem(FileSystemStore) -
Constructor for class org.apache.hadoop.fs.s3.S3FileSystem
-
- S3FileSystemException - Exception in org.apache.hadoop.fs.s3
- Thrown when there is a fatal exception while using
S3FileSystem
. - S3FileSystemException(String) -
Constructor for exception org.apache.hadoop.fs.s3.S3FileSystemException
-
- safeGetCanonicalPath(File) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- SafeModeException - Exception in org.apache.hadoop.dfs
- This exception is thrown when the name node is in safe mode.
- SafeModeException(String, FSNamesystem.SafeModeInfo) -
Constructor for exception org.apache.hadoop.dfs.SafeModeException
-
- safeModeTime -
Variable in class org.apache.hadoop.dfs.NameNodeMetrics
-
- saveToConf(Configuration, String, UnixUserGroupInformation) -
Static method in class org.apache.hadoop.security.UnixUserGroupInformation
- Store the given
ugi
as a comma separated string in
conf
as a property attr.
The String starts with the user name, followed by the default group names
and other group names.
- scheduleBlockReport(long) -
Method in class org.apache.hadoop.dfs.DataNode
- This method arranges for the data node to send the block report at the next heartbeat.
- scheduledReplicationBlocks -
Variable in class org.apache.hadoop.dfs.FSNamesystemMetrics
-
- ScriptBasedMapping - Class in org.apache.hadoop.net
- This class implements the
DNSToSwitchMapping
interface using a
script configured via topology.script.file.name . - ScriptBasedMapping() -
Constructor for class org.apache.hadoop.net.ScriptBasedMapping
-
- SecondaryNameNode - Class in org.apache.hadoop.dfs
- The Secondary NameNode is a helper to the primary NameNode.
- seek(long) -
Method in class org.apache.hadoop.fs.BufferedFSInputStream
-
- seek(long) -
Method in class org.apache.hadoop.fs.FSDataInputStream
-
- seek(long) -
Method in class org.apache.hadoop.fs.FSInputChecker
- Seek to the given position in the stream.
- seek(long) -
Method in class org.apache.hadoop.fs.FSInputStream
- Seek to the given offset from the start of the file.
- seek(long) -
Method in class org.apache.hadoop.fs.ftp.FTPInputStream
-
- seek(long) -
Method in interface org.apache.hadoop.fs.Seekable
- Seek to the given offset from the start of the file.
- seek(long) -
Method in class org.apache.hadoop.io.ArrayFile.Reader
- Positions the reader before its nth value.
- seek(WritableComparable) -
Method in class org.apache.hadoop.io.MapFile.Reader
- Positions the reader at the named key, or if none such exists, at the
first entry after the named key.
- seek(long) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Set the current byte position in the input file.
- seek(WritableComparable) -
Method in class org.apache.hadoop.io.SetFile.Reader
-
- seek(long) -
Method in class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- Seekable - Interface in org.apache.hadoop.fs
- Stream that permits seeking.
- seekNextRecordBoundary() -
Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
- Implementation should seek forward in_ to the first byte of the next record.
- seekNextRecordBoundary() -
Method in class org.apache.hadoop.streaming.StreamXmlRecordReader
-
- seekToNewSource(long) -
Method in class org.apache.hadoop.fs.BufferedFSInputStream
-
- seekToNewSource(long) -
Method in class org.apache.hadoop.fs.FSDataInputStream
-
- seekToNewSource(long) -
Method in class org.apache.hadoop.fs.FSInputStream
- Seeks a different copy of the data.
- seekToNewSource(long) -
Method in class org.apache.hadoop.fs.ftp.FTPInputStream
-
- seekToNewSource(long) -
Method in interface org.apache.hadoop.fs.Seekable
- Seeks a different copy of the data.
- seenPrimary_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- SEMICOLON_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- sendHeartbeat(DatanodeRegistration, long, long, long, int, int) -
Method in class org.apache.hadoop.dfs.NameNode
- The data node notifies the name node that it is alive.
Returns a block-oriented command for the datanode to execute.
- SEPARATOR -
Static variable in class org.apache.hadoop.fs.Path
- The directory separator, a slash.
- SEPARATOR_CHAR -
Static variable in class org.apache.hadoop.fs.Path
-
- SequenceFile - Class in org.apache.hadoop.io
SequenceFile
s are flat files consisting of binary key/value
pairs.- SequenceFile.CompressionType - Enum in org.apache.hadoop.io
- The compression type used to compress key/value pairs in the
SequenceFile
. - SequenceFile.Metadata - Class in org.apache.hadoop.io
- The class encapsulating the metadata of a file.
- SequenceFile.Metadata() -
Constructor for class org.apache.hadoop.io.SequenceFile.Metadata
-
- SequenceFile.Metadata(TreeMap<Text, Text>) -
Constructor for class org.apache.hadoop.io.SequenceFile.Metadata
-
- SequenceFile.Reader - Class in org.apache.hadoop.io
- Reads key/value pairs from a sequence-format file.
- SequenceFile.Reader(FileSystem, Path, Configuration) -
Constructor for class org.apache.hadoop.io.SequenceFile.Reader
- Open the named file.
- SequenceFile.Sorter - Class in org.apache.hadoop.io
- Sorts key/value pairs in a sequence-format file.
- SequenceFile.Sorter(FileSystem, Class, Class, Configuration) -
Constructor for class org.apache.hadoop.io.SequenceFile.Sorter
- Sort and merge files containing the named classes.
- SequenceFile.Sorter(FileSystem, RawComparator, Class, Class, Configuration) -
Constructor for class org.apache.hadoop.io.SequenceFile.Sorter
- Sort and merge using an arbitrary
RawComparator
.
- SequenceFile.Sorter.RawKeyValueIterator - Interface in org.apache.hadoop.io
- The interface to iterate over raw keys/values of SequenceFiles.
- SequenceFile.Sorter.SegmentDescriptor - Class in org.apache.hadoop.io
- This class defines a merge segment.
- SequenceFile.Sorter.SegmentDescriptor(long, long, Path) -
Constructor for class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
- Constructs a segment
- SequenceFile.ValueBytes - Interface in org.apache.hadoop.io
- The interface to 'raw' values of SequenceFiles.
- SequenceFile.Writer - Class in org.apache.hadoop.io
- Write key/value pairs to a sequence-format file.
- SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class) -
Constructor for class org.apache.hadoop.io.SequenceFile.Writer
- Create the named file.
- SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class, Progressable, SequenceFile.Metadata) -
Constructor for class org.apache.hadoop.io.SequenceFile.Writer
- Create the named file with write-progress reporter.
- SequenceFile.Writer(FileSystem, Configuration, Path, Class, Class, int, short, long, Progressable, SequenceFile.Metadata) -
Constructor for class org.apache.hadoop.io.SequenceFile.Writer
- Create the named file with write-progress reporter.
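The Reader and Writer constructors above pair naturally; a sketch of a write-then-read round trip on the local file system (the file name is a placeholder):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;

public class SeqFileExample {
  public static int roundTrip() throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.getLocal(conf);
    Path file = new Path("example.seq");

    SequenceFile.Writer writer =
        new SequenceFile.Writer(fs, conf, file, Text.class, IntWritable.class);
    writer.append(new Text("apples"), new IntWritable(3));
    writer.append(new Text("pears"), new IntWritable(5));
    writer.close();

    SequenceFile.Reader reader = new SequenceFile.Reader(fs, file, conf);
    Text key = new Text();
    IntWritable value = new IntWritable();
    int records = 0;
    while (reader.next(key, value)) {  // returns false at end of file
      records++;
    }
    reader.close();
    return records;
  }
}
```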
- SequenceFileAsBinaryInputFormat - Class in org.apache.hadoop.mapred
- InputFormat reading keys, values from SequenceFiles in binary (raw)
format.
- SequenceFileAsBinaryInputFormat() -
Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat
-
- SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader - Class in org.apache.hadoop.mapred
- Read records from a SequenceFile as binary (raw) bytes.
- SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader(Configuration, FileSplit) -
Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryInputFormat.SequenceFileAsBinaryRecordReader
-
- SequenceFileAsBinaryOutputFormat - Class in org.apache.hadoop.mapred
- An
OutputFormat
that writes keys, values to
SequenceFile
s in binary (raw) format. - SequenceFileAsBinaryOutputFormat() -
Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
-
- SequenceFileAsBinaryOutputFormat.WritableValueBytes - Class in org.apache.hadoop.mapred
- Inner class used for appendRaw
- SequenceFileAsBinaryOutputFormat.WritableValueBytes() -
Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- SequenceFileAsBinaryOutputFormat.WritableValueBytes(BytesWritable) -
Constructor for class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- SequenceFileAsTextInputFormat - Class in org.apache.hadoop.mapred
- This class is similar to SequenceFileInputFormat, except it generates SequenceFileAsTextRecordReader
which converts the input keys and values to their String forms by calling the toString() method.
- SequenceFileAsTextInputFormat() -
Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextInputFormat
-
- SequenceFileAsTextRecordReader - Class in org.apache.hadoop.mapred
- This class converts the input keys and values to their String forms by calling the toString()
method.
- SequenceFileAsTextRecordReader(Configuration, FileSplit) -
Constructor for class org.apache.hadoop.mapred.SequenceFileAsTextRecordReader
-
- SequenceFileInputFilter<K,V> - Class in org.apache.hadoop.mapred
- A class that allows a map/red job to work on a sample of sequence files.
- SequenceFileInputFilter() -
Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter
-
- SequenceFileInputFilter.Filter - Interface in org.apache.hadoop.mapred
- Filter interface.
- SequenceFileInputFilter.FilterBase - Class in org.apache.hadoop.mapred
- Base class for Filters.
- SequenceFileInputFilter.FilterBase() -
Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.FilterBase
-
- SequenceFileInputFilter.MD5Filter - Class in org.apache.hadoop.mapred
- This class returns a set of records by examining the MD5 digest of its
key against a filtering frequency f.
- SequenceFileInputFilter.MD5Filter() -
Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
-
- SequenceFileInputFilter.PercentFilter - Class in org.apache.hadoop.mapred
- This class returns a percentage of records.
The percentage is determined by a filtering frequency f using
the criterion record# % f == 0.
- SequenceFileInputFilter.PercentFilter() -
Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
-
- SequenceFileInputFilter.RegexFilter - Class in org.apache.hadoop.mapred
- Filters records by matching keys against a regex.
- SequenceFileInputFilter.RegexFilter() -
Constructor for class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
-
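The filter classes above let a job sample its sequence-file input without touching the map code. A sketch using PercentFilter, which keeps records where record# % f == 0:

```java
import org.apache.hadoop.mapred.JobConf;
import org.apache.hadoop.mapred.SequenceFileInputFilter;

public class FilterExample {
  public static void configureSampling(JobConf conf) {
    conf.setInputFormat(SequenceFileInputFilter.class);
    SequenceFileInputFilter.setFilterClass(
        conf, SequenceFileInputFilter.PercentFilter.class);
    // keep every 10th record
    SequenceFileInputFilter.PercentFilter.setFrequency(conf, 10);
  }
}
```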
- SequenceFileInputFormat<K,V> - Class in org.apache.hadoop.mapred
- An
InputFormat
for SequenceFile
s. - SequenceFileInputFormat() -
Constructor for class org.apache.hadoop.mapred.SequenceFileInputFormat
-
- SequenceFileOutputFormat<K,V> - Class in org.apache.hadoop.mapred
- An
OutputFormat
that writes SequenceFile
s. - SequenceFileOutputFormat() -
Constructor for class org.apache.hadoop.mapred.SequenceFileOutputFormat
-
- SequenceFileRecordReader<K,V> - Class in org.apache.hadoop.mapred
- A
RecordReader
for SequenceFile
s. - SequenceFileRecordReader(Configuration, FileSplit) -
Constructor for class org.apache.hadoop.mapred.SequenceFileRecordReader
-
- Serialization<T> - Interface in org.apache.hadoop.io.serializer
-
Encapsulates a
Serializer
/Deserializer
pair. - SerializationFactory - Class in org.apache.hadoop.io.serializer
-
A factory for
Serialization
s. - SerializationFactory(Configuration) -
Constructor for class org.apache.hadoop.io.serializer.SerializationFactory
-
Serializations are found by reading the
io.serializations
property from conf
, which is a comma-delimited list of
classnames.
- serialize() -
Method in class org.apache.hadoop.fs.s3.INode
-
- serialize(T) -
Method in interface org.apache.hadoop.io.serializer.Serializer
- Serialize
t
to the underlying output stream.
- serialize(RecordOutput, String) -
Method in class org.apache.hadoop.record.meta.RecordTypeInfo
- Serialize the type information for a record
- serialize(RecordOutput, String) -
Method in class org.apache.hadoop.record.Record
- Serialize a record with a tag (usually the field name).
- serialize(RecordOutput) -
Method in class org.apache.hadoop.record.Record
- Serialize a record without a tag
- Serializer<T> - Interface in org.apache.hadoop.io.serializer
-
Provides a facility for serializing objects of type T
to an
OutputStream
. - Server - Class in org.apache.hadoop.ipc
- An abstract IPC service.
- Server(String, int, Class, int, Configuration) -
Constructor for class org.apache.hadoop.ipc.Server
-
- Server(String, int, Class<?>, int, Configuration, String) -
Constructor for class org.apache.hadoop.ipc.Server
- Constructs a server listening on the named port and address.
- ServletUtil - Class in org.apache.hadoop.util
-
- ServletUtil() -
Constructor for class org.apache.hadoop.util.ServletUtil
-
- set(String, String) -
Method in class org.apache.hadoop.conf.Configuration
- Set the
value
of the name
property.
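A small sketch of round-tripping properties through a Configuration with set/setBoolean and their typed getters (the property names are placeholders):

```java
import org.apache.hadoop.conf.Configuration;

public class ConfExample {
  public static boolean demo() {
    Configuration conf = new Configuration();
    conf.set("example.name", "demo");          // plain string property
    conf.setBoolean("example.enabled", true);  // stored as the string "true"
    return conf.getBoolean("example.enabled", false)
        && conf.get("example.name").equals("demo");
  }
}
```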
- set(Checksum, int, int) -
Method in class org.apache.hadoop.fs.FSInputChecker
- Set the checksum related parameters
- set(Writable[]) -
Method in class org.apache.hadoop.io.ArrayWritable
-
- set(boolean) -
Method in class org.apache.hadoop.io.BooleanWritable
- Set the value of the BooleanWritable
- set(BytesWritable) -
Method in class org.apache.hadoop.io.BytesWritable
- Set the BytesWritable to the contents of the given newData.
- set(byte[], int, int) -
Method in class org.apache.hadoop.io.BytesWritable
- Set the value to a copy of the given byte range
- set(byte) -
Method in class org.apache.hadoop.io.ByteWritable
- Set the value of this ByteWritable.
- set(double) -
Method in class org.apache.hadoop.io.DoubleWritable
-
- set(float) -
Method in class org.apache.hadoop.io.FloatWritable
- Set the value of this FloatWritable.
- set(Writable) -
Method in class org.apache.hadoop.io.GenericWritable
- Set the instance that is wrapped.
- set(int) -
Method in class org.apache.hadoop.io.IntWritable
- Set the value of this IntWritable.
- set(long) -
Method in class org.apache.hadoop.io.LongWritable
- Set the value of this LongWritable.
- set(MD5Hash) -
Method in class org.apache.hadoop.io.MD5Hash
- Copy the contents of another instance into this instance.
- set(Object) -
Method in class org.apache.hadoop.io.ObjectWritable
- Reset the instance.
- set(Text, Text) -
Method in class org.apache.hadoop.io.SequenceFile.Metadata
-
- set(String) -
Method in class org.apache.hadoop.io.Text
- Set to contain the contents of a string.
- set(byte[]) -
Method in class org.apache.hadoop.io.Text
- Set to a utf8 byte array
- set(Text) -
Method in class org.apache.hadoop.io.Text
- copy a text.
- set(byte[], int, int) -
Method in class org.apache.hadoop.io.Text
- Set the Text to range of bytes
- set(Writable[][]) -
Method in class org.apache.hadoop.io.TwoDArrayWritable
-
- set(String) -
Method in class org.apache.hadoop.io.UTF8
- Deprecated. Set to contain the contents of a string.
- set(UTF8) -
Method in class org.apache.hadoop.io.UTF8
- Deprecated. Set to contain the contents of a string.
- set(int) -
Method in class org.apache.hadoop.io.VIntWritable
- Set the value of this VIntWritable.
- set(long) -
Method in class org.apache.hadoop.io.VLongWritable
- Set the value of this VLongWritable.
- set(int) -
Method in class org.apache.hadoop.metrics.util.MetricsIntValue
- Set the value
- set(long) -
Method in class org.apache.hadoop.metrics.util.MetricsLongValue
- Set the value
- set(byte[]) -
Method in class org.apache.hadoop.record.Buffer
- Use the specified bytes array as underlying sequence.
- set(float) -
Method in class org.apache.hadoop.util.Progress
- Called during execution on a leaf node to set its progress.
- SET_GROUP_COMMAND -
Static variable in class org.apache.hadoop.util.Shell
-
- SET_OWNER_COMMAND -
Static variable in class org.apache.hadoop.util.Shell
- a Unix command to set owner
- SET_PERMISSION_COMMAND -
Static variable in class org.apache.hadoop.util.Shell
- a Unix command to set permission
- setAggregatorDescriptors(JobConf, Class<? extends ValueAggregatorDescriptor>[]) -
Static method in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- setArchiveTimestamps(Configuration, String) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- This is to check the timestamp of the archives to be localized
- setAssignedJobID(JobID) -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
- Set the mapred ID for this job as assigned by the
mapred framework.
- setAttribute(String, Object) -
Method in class org.apache.hadoop.mapred.StatusHttpServer
- Set a value in the webapp context.
- setAttribute(String, Object) -
Method in class org.apache.hadoop.metrics.ContextFactory
- Sets the named factory attribute to the specified value, creating it
if it did not already exist.
- setBoolean(String, boolean) -
Method in class org.apache.hadoop.conf.Configuration
- Set the value of the
name
property to a boolean
.
- setCacheArchives(URI[], Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Set the configuration with the given set of archives
- setCacheFiles(URI[], Configuration) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Set the configuration with the given set of files
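A sketch of shipping side files and archives to every task via the setCacheFiles/setCacheArchives calls above; the HDFS paths are placeholders:

```java
import java.net.URI;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.mapred.JobConf;

public class CacheExample {
  public static void attach(JobConf conf) throws Exception {
    // Files are localized as-is on each task node.
    DistributedCache.setCacheFiles(
        new URI[] { new URI("/lookup/terms.txt") }, conf);
    // Archives (zip/jar) are unpacked on each task node.
    DistributedCache.setCacheArchives(
        new URI[] { new URI("/lookup/dict.zip") }, conf);
  }
}
```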
- setCapacity(int) -
Method in class org.apache.hadoop.io.BytesWritable
- Change the capacity of the backing storage.
- setCapacity(int) -
Method in class org.apache.hadoop.record.Buffer
- Change the capacity of the backing storage.
- setChannelPosition(Block, FSDatasetInterface.BlockWriteStreams, long, long) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
- Sets the file pointer of the data stream and checksum stream to
the specified values.
- setClass(String, Class<?>, Class<?>) -
Method in class org.apache.hadoop.conf.Configuration
- Set the value of the
name
property to the name of a
theClass
implementing the given interface xface
.
- setClassLoader(ClassLoader) -
Method in class org.apache.hadoop.conf.Configuration
- Set the class loader that will be used to load the various objects.
- setCodecClasses(Configuration, List<Class>) -
Static method in class org.apache.hadoop.io.compress.CompressionCodecFactory
- Sets a list of codec classes in the configuration.
- setCombineOnceOnly(boolean) -
Method in class org.apache.hadoop.mapred.JobConf
- Deprecated.
- setCombinerClass(Class<? extends Reducer>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the user-defined combiner class used to combine map-outputs
before being sent to the reducers.
- setCompressionType(Configuration, SequenceFile.CompressionType) -
Static method in class org.apache.hadoop.io.SequenceFile
- Deprecated. Use the one of the many SequenceFile.createWriter methods to specify
the
SequenceFile.CompressionType
while creating the SequenceFile
or
JobConf.setMapOutputCompressionType(org.apache.hadoop.io.SequenceFile.CompressionType)
to specify the SequenceFile.CompressionType
for intermediate map-outputs or
SequenceFileOutputFormat.setOutputCompressionType(org.apache.hadoop.mapred.JobConf, org.apache.hadoop.io.SequenceFile.CompressionType)
to specify the SequenceFile.CompressionType
for job-outputs.
- setCompressMapOutput(boolean) -
Method in class org.apache.hadoop.mapred.JobConf
- Set whether the map outputs should be compressed before transfer,
using SequenceFile compression.
- setCompressOutput(JobConf, boolean) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
- Set whether the output of the job is compressed.
- setCompressOutput(JobConf, boolean) -
Static method in class org.apache.hadoop.mapred.OutputFormatBase
- Deprecated. Set whether the output of the job is compressed.
- setConf(Configuration) -
Method in interface org.apache.hadoop.conf.Configurable
- Set the configuration to be used by this object.
- setConf(Configuration) -
Method in class org.apache.hadoop.conf.Configured
-
- setConf(Configuration) -
Method in class org.apache.hadoop.dfs.Balancer
- set this balancer's configuration
- setConf(Configuration) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- setConf(Configuration) -
Method in class org.apache.hadoop.io.AbstractMapWritable
-
- setConf(Configuration) -
Method in class org.apache.hadoop.io.compress.DefaultCodec
-
- setConf(Configuration) -
Method in class org.apache.hadoop.io.compress.LzoCodec
-
- setConf(Configuration) -
Method in class org.apache.hadoop.io.GenericWritable
-
- setConf(Configuration) -
Method in class org.apache.hadoop.io.ObjectWritable
-
- setConf(Configuration) -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Set the configuration to be used by this object.
- setConf(Configuration) -
Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
- configure the filter according to configuration
- setConf(Configuration) -
Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
- configure the filter by checking the configuration
- setConf(Configuration) -
Method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
- configure the Filter by checking the configuration
- setConf(Configuration) -
Method in class org.apache.hadoop.net.ScriptBasedMapping
-
- setConf(Configuration) -
Method in class org.apache.hadoop.net.SocksSocketFactory
-
- setConf(Object, Configuration) -
Static method in class org.apache.hadoop.util.ReflectionUtils
- Check and set 'configuration' if necessary.
- setContentionTracing(boolean) -
Static method in class org.apache.hadoop.util.ReflectionUtils
-
- setCorruptFiles(long) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- setCurrentUGI(UserGroupInformation) -
Static method in class org.apache.hadoop.security.UserGroupInformation
- Set the
UserGroupInformation
for the current thread
- setDebugStream(PrintStream) -
Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- setDefaultUri(Configuration, URI) -
Static method in class org.apache.hadoop.fs.FileSystem
- Set the default filesystem URI in a configuration.
- setDefaultUri(Configuration, String) -
Static method in class org.apache.hadoop.fs.FileSystem
- Set the default filesystem URI in a configuration.
- setDelete(Term) -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
- Set the instance to be a delete operation.
- setDestdir(File) -
Method in class org.apache.hadoop.record.compiler.ant.RccTask
- Sets directory where output files will be generated
- setDictionary(byte[], int, int) -
Method in interface org.apache.hadoop.io.compress.Compressor
- Sets preset dictionary for compression.
- setDictionary(byte[], int, int) -
Method in interface org.apache.hadoop.io.compress.Decompressor
- Sets preset dictionary for compression.
- setDictionary(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
-
- setDictionary(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
-
- setDictionary(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
-
- setDictionary(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
-
- setDigest(String) -
Method in class org.apache.hadoop.io.MD5Hash
- Sets the digest value from a hex string.
- setDisableHistory(boolean) -
Static method in class org.apache.hadoop.mapred.JobHistory
- Enable/disable history logging.
- setDistributionPolicyClass(Class<? extends IDistributionPolicy>) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Set the distribution policy class.
- setDocumentAnalyzerClass(Class<? extends Analyzer>) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Set the analyzer class.
- setDoubleValue(Object, double) -
Method in class org.apache.hadoop.contrib.utils.join.JobBase
- Set the given counter to the given value
- setEnvironment(Map<String, String>) -
Method in class org.apache.hadoop.util.Shell
- set the environment for the command
- setEventId(int) -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
- set event Id.
- setExcessiveReplicas(long) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- setExecutable(JobConf, String) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Set the URI for the application's executable.
- setFactor(int) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Set the number of streams to merge at once.
- setFactory(Class, WritableFactory) -
Static method in class org.apache.hadoop.io.WritableFactories
- Define a factory for a class.
- setFailonerror(boolean) -
Method in class org.apache.hadoop.record.compiler.ant.RccTask
- Given multiple files (via fileset), set the error handling behavior
- SetFile - Class in org.apache.hadoop.io
- A file-based set of keys.
- SetFile() -
Constructor for class org.apache.hadoop.io.SetFile
-
- setFile(File) -
Method in class org.apache.hadoop.record.compiler.ant.RccTask
- Sets the record definition file attribute
- SetFile.Reader - Class in org.apache.hadoop.io
- Provide access to an existing set file.
- SetFile.Reader(FileSystem, String, Configuration) -
Constructor for class org.apache.hadoop.io.SetFile.Reader
- Construct a set reader for the named set.
- SetFile.Reader(FileSystem, String, WritableComparator, Configuration) -
Constructor for class org.apache.hadoop.io.SetFile.Reader
- Construct a set reader for the named set using the named comparator.
- SetFile.Writer - Class in org.apache.hadoop.io
- Write a new set file.
- SetFile.Writer(FileSystem, String, Class) -
Constructor for class org.apache.hadoop.io.SetFile.Writer
- Deprecated. pass a Configuration too
- SetFile.Writer(Configuration, FileSystem, String, Class, SequenceFile.CompressionType) -
Constructor for class org.apache.hadoop.io.SetFile.Writer
- Create a set naming the element class and compression type.
- SetFile.Writer(Configuration, FileSystem, String, WritableComparator, SequenceFile.CompressionType) -
Constructor for class org.apache.hadoop.io.SetFile.Writer
- Create a set naming the element comparator and compression type.
- setFileTimestamps(Configuration, String) -
Static method in class org.apache.hadoop.filecache.DistributedCache
Set the timestamps of the files to be localized.
- setFilterClass(Configuration, Class) -
Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter
Set the filter class.
- setFormat(JobConf) -
Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
- Interpret a given string as a composite expression.
- setFrequency(Configuration, int) -
Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.MD5Filter
Set the filtering frequency in the configuration.
- setFrequency(Configuration, int) -
Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.PercentFilter
Set the frequency and store it in the conf.
- setGroup(String) -
Method in class org.apache.hadoop.fs.FileStatus
- Sets group.
- setHostName(String) -
Method in class org.apache.hadoop.dfs.DatanodeInfo
-
- setHosts(String[]) -
Method in class org.apache.hadoop.fs.BlockLocation
- Set the hosts hosting this block
- setID(int) -
Method in class org.apache.hadoop.mapred.join.Parser.Node
-
- setIndexInputFormatClass(Class<? extends InputFormat>) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Set the index input format class.
- setIndexInterval(int) -
Method in class org.apache.hadoop.io.MapFile.Writer
- Sets the index interval.
- setIndexInterval(Configuration, int) -
Static method in class org.apache.hadoop.io.MapFile.Writer
- Sets the index interval and stores it in conf
- setIndexMaxFieldLength(int) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Set the max field length for a Lucene instance.
- setIndexMaxNumSegments(int) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Set the max number of segments for a Lucene instance.
- setIndexShards(String) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Set the string representation of a number of shards.
- setIndexShards(IndexUpdateConfiguration, Shard[]) -
Static method in class org.apache.hadoop.contrib.index.mapred.Shard
-
- setIndexUpdaterClass(Class<? extends IIndexUpdater>) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Set the index updater class.
- setIndexUseCompoundFile(boolean) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
Set whether to use the compound file format for a Lucene instance.
- setInput(byte[], int, int) -
Method in interface org.apache.hadoop.io.compress.Compressor
- Sets input data for compression.
- setInput(byte[], int, int) -
Method in interface org.apache.hadoop.io.compress.Decompressor
- Sets input data for decompression.
- setInput(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.lzo.LzoCompressor
-
- setInput(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.lzo.LzoDecompressor
-
- setInput(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.zlib.ZlibCompressor
-
- setInput(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
-
- setInputFormat(Class<? extends InputFormat>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the
InputFormat
implementation for the map-reduce job.
- setInputPath(Path) -
Method in class org.apache.hadoop.mapred.JobConf
- Deprecated. Use
FileInputFormat.setInputPaths(JobConf, Path...)
or
FileInputFormat.setInputPaths(JobConf, String)
- setInputPathFilter(JobConf, Class<? extends PathFilter>) -
Static method in class org.apache.hadoop.mapred.FileInputFormat
- Set a PathFilter to be applied to the input paths for the map-reduce job.
- setInputPaths(JobConf, String) -
Static method in class org.apache.hadoop.mapred.FileInputFormat
- Sets the given comma separated paths as the list of inputs
for the map-reduce job.
- setInputPaths(JobConf, Path...) -
Static method in class org.apache.hadoop.mapred.FileInputFormat
- Set the array of
Path
s as the list of inputs
for the map-reduce job.
- setInsert(Document) -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
- Set the instance to be an insert operation.
- setInt(String, int) -
Method in class org.apache.hadoop.conf.Configuration
- Set the value of the
name
property to an int
.
- setIOSortMB(int) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Set the IO sort space in MB.
- setIsJavaMapper(JobConf, boolean) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Set whether the Mapper is written in Java.
- setIsJavaRecordReader(JobConf, boolean) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Set whether the job is using a Java RecordReader.
- setIsJavaRecordWriter(JobConf, boolean) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Set whether the job will use a Java RecordWriter.
- setIsJavaReducer(JobConf, boolean) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Set whether the Reducer is written in Java.
- setJar(String) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the user jar for the map-reduce job.
- setJarByClass(Class) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the job's jar file by finding an example class location.
- setJobConf(JobConf) -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
- Set the mapred job conf for this job.
- setJobConf() -
Method in class org.apache.hadoop.streaming.StreamJob
-
- setJobEndNotificationURI(String) -
Method in class org.apache.hadoop.mapred.JobConf
Set the URI to be invoked in order to send a notification after the job
has completed (success/failure).
- setJobID(String) -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
- Set the job ID for this job.
- setJobName(String) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the user-specified job name.
- setJobName(String) -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
- Set the job name for this job.
- setJobPriority(JobPriority) -
Method in class org.apache.hadoop.mapred.JobConf
- Set
JobPriority
for this job.
- setKeepCommandFile(JobConf, boolean) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Set whether to keep the command file for debugging
- setKeepFailedTaskFiles(boolean) -
Method in class org.apache.hadoop.mapred.JobConf
- Set whether the framework should keep the intermediate files for
failed tasks.
- setKeepTaskFilesPattern(String) -
Method in class org.apache.hadoop.mapred.JobConf
- Set a regular expression for task names that should be kept.
- setKeyComparator(Class<? extends WritableComparator>) -
Method in class org.apache.hadoop.mapred.join.Parser.Node
-
- setLanguage(String) -
Method in class org.apache.hadoop.record.compiler.ant.RccTask
- Sets the output language option
- setLength(long) -
Method in class org.apache.hadoop.fs.BlockLocation
Set the length of the block.
- setLevel(int) -
Method in class org.apache.hadoop.dfs.DatanodeInfo
-
- setLevel(int) -
Method in interface org.apache.hadoop.net.Node
- Set this node's level in the tree.
- setLevel(int) -
Method in class org.apache.hadoop.net.NodeBase
- Set this node's level in the tree
- setLoadNativeLibraries(JobConf, boolean) -
Method in class org.apache.hadoop.util.NativeCodeLoader
Set whether native Hadoop libraries, if present, can be used for this job.
- setLocalAnalysisClass(Class<? extends ILocalAnalysis>) -
Method in class org.apache.hadoop.contrib.index.mapred.IndexUpdateConfiguration
- Set the local analysis class.
- setLocalArchives(Configuration, String) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Set the conf to contain the location for localized archives
- setLocalFiles(Configuration, String) -
Static method in class org.apache.hadoop.filecache.DistributedCache
- Set the conf to contain the location for localized files
- setLong(String, long) -
Method in class org.apache.hadoop.conf.Configuration
- Set the value of the
name
property to a long
.
- setLongValue(Object, long) -
Method in class org.apache.hadoop.contrib.utils.join.JobBase
- Set the given counter to the given value
- setMapDebugScript(String) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the debug script to run when the map tasks fail.
- setMapOutputCompressionType(SequenceFile.CompressionType) -
Method in class org.apache.hadoop.mapred.JobConf
- Deprecated.
SequenceFile.CompressionType
is no longer valid for intermediate
map-outputs.
- setMapOutputCompressorClass(Class<? extends CompressionCodec>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the given class as the
CompressionCodec
for the map outputs.
- setMapOutputKeyClass(Class<?>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the key class for the map output data.
- setMapOutputValueClass(Class<?>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the value class for the map output data.
- setMapperClass(Class<? extends Mapper>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the
Mapper
class for the job.
- setMapredJobID(String) -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
Deprecated. Use
Job.setAssignedJobID(JobID)
instead
- setMapRunnerClass(Class<? extends MapRunnable>) -
Method in class org.apache.hadoop.mapred.JobConf
- Expert: Set the
MapRunnable
class for the job.
- setMapSpeculativeExecution(boolean) -
Method in class org.apache.hadoop.mapred.JobConf
- Turn speculative execution on or off for this job for map tasks.
- setMaxItems(long) -
Method in class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
- Set the limit on the number of unique values
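The cap that setMaxItems places on unique-value counting can be sketched in plain Java as follows; this is an illustrative stand-in for the documented behavior, not Hadoop's UniqValueCount source, and all names here are invented for the example.

```java
import java.util.HashSet;
import java.util.Set;

public class UniqCountSketch {
    public static void main(String[] args) {
        // Illustrative only: track distinct values up to a fixed limit,
        // as UniqValueCount.setMaxItems is documented to do.
        long maxItems = 3;
        Set<String> seen = new HashSet<>();
        for (String v : new String[] {"a", "b", "a", "c", "d"}) {
            if (seen.size() < maxItems) {
                seen.add(v);  // stop admitting new values once the cap is hit
            }
        }
        System.out.println(seen.size()); // prints 3
    }
}
```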
- setMaxMapAttempts(int) -
Method in class org.apache.hadoop.mapred.JobConf
- Expert: Set the number of maximum attempts that will be made to run a
map task.
- setMaxMapTaskFailuresPercent(int) -
Method in class org.apache.hadoop.mapred.JobConf
- Expert: Set the maximum percentage of map tasks that can fail without the
job being aborted.
- setMaxReduceAttempts(int) -
Method in class org.apache.hadoop.mapred.JobConf
- Expert: Set the number of maximum attempts that will be made to run a
reduce task.
- setMaxReduceTaskFailuresPercent(int) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the maximum percentage of reduce tasks that can fail without the job
being aborted.
- setMaxTaskFailuresPerTracker(int) -
Method in class org.apache.hadoop.mapred.JobConf
Set the maximum number of failures of a given job per tasktracker.
- setMemory(int) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Set the total amount of buffer memory, in bytes.
- setMessage(String) -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
- Set the message for this job.
- setMetric(String, int) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Sets the named metric to the specified value.
- setMetric(String, long) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Sets the named metric to the specified value.
- setMetric(String, short) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Sets the named metric to the specified value.
- setMetric(String, byte) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Sets the named metric to the specified value.
- setMetric(String, float) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Sets the named metric to the specified value.
- setMetric(String, int) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Sets the named metric to the specified value.
- setMetric(String, long) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Sets the named metric to the specified value.
- setMetric(String, short) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Sets the named metric to the specified value.
- setMetric(String, byte) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Sets the named metric to the specified value.
- setMetric(String, float) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Sets the named metric to the specified value.
- setMinSplitSize(long) -
Method in class org.apache.hadoop.mapred.FileInputFormat
-
- setMissingReplicas(long) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- setMissingSize(long) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- setName(Class, String) -
Static method in class org.apache.hadoop.io.WritableName
- Set the name that a class should be known as to something other than the
class name.
- setName(String) -
Method in class org.apache.hadoop.record.meta.RecordTypeInfo
Set the name of the record.
- setNames(String[]) -
Method in class org.apache.hadoop.fs.BlockLocation
- Set the names (host:port) hosting this block
- setNetworkLocation(String) -
Method in class org.apache.hadoop.dfs.DatanodeInfo
- Sets the rack name
- setNetworkLocation(String) -
Method in interface org.apache.hadoop.net.Node
- Set the node's network location
- setNetworkLocation(String) -
Method in class org.apache.hadoop.net.NodeBase
- Set this node's network location
- setNumMapTasks(int) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the number of map tasks for this job.
- setNumReduceTasks(int) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the requisite number of reduce tasks for this job.
- setOffset(long) -
Method in class org.apache.hadoop.fs.BlockLocation
Set the start offset of the file associated with this block.
- setOp(DocumentAndOp.Op) -
Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
- Set the type of the operation.
- setOutputCompressionType(JobConf, SequenceFile.CompressionType) -
Static method in class org.apache.hadoop.mapred.SequenceFileOutputFormat
- Set the
SequenceFile.CompressionType
for the output SequenceFile
.
- setOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
- Set the
CompressionCodec
to be used to compress job outputs.
- setOutputCompressorClass(JobConf, Class<? extends CompressionCodec>) -
Static method in class org.apache.hadoop.mapred.OutputFormatBase
- Deprecated. Set the
CompressionCodec
to be used to compress job outputs.
- setOutputFormat(Class<? extends OutputFormat>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the
OutputFormat
implementation for the map-reduce job.
- setOutputKeyClass(Class<?>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the key class for the job output data.
- setOutputKeyComparatorClass(Class<? extends RawComparator>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the
RawComparator
comparator used to compare keys.
- setOutputPath(JobConf, Path) -
Static method in class org.apache.hadoop.mapred.FileOutputFormat
- Set the
Path
of the output directory for the map-reduce job.
- setOutputPath(Path) -
Method in class org.apache.hadoop.mapred.JobConf
- Deprecated. Use
FileOutputFormat.setOutputPath(JobConf, Path)
Set the Path
of the output directory for the map-reduce job.
- setOutputValueClass(Class<?>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the value class for job outputs.
- setOutputValueGroupingComparator(Class<? extends RawComparator>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the user defined
RawComparator
comparator for
grouping keys in the input to the reduce.
- setOwner(Path, String, String) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
Set owner of a path (i.e. a file or a directory).
- setOwner(String, String, String) -
Method in class org.apache.hadoop.dfs.NameNode
Set owner of a path (i.e. a file or a directory).
- setOwner(String) -
Method in class org.apache.hadoop.fs.FileStatus
- Sets owner.
- setOwner(Path, String, String) -
Method in class org.apache.hadoop.fs.FileSystem
Set owner of a path (i.e. a file or a directory).
- setOwner(Path, String, String) -
Method in class org.apache.hadoop.fs.FilterFileSystem
Set owner of a path (i.e. a file or a directory).
- setOwner(Path, String, String) -
Method in class org.apache.hadoop.fs.HarFileSystem
- not implemented.
- setOwner(Path, String, String) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
- Use the command chown to set owner.
- setParent(Node) -
Method in class org.apache.hadoop.dfs.DatanodeInfo
-
- setParent(Node) -
Method in interface org.apache.hadoop.net.Node
- Set this node's parent
- setParent(Node) -
Method in class org.apache.hadoop.net.NodeBase
- Set this node's parent
- setPartitionerClass(Class<? extends Partitioner>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the
Partitioner
class used to partition
Mapper
-outputs to be sent to the Reducer
s.
- setPathName(String) -
Method in exception org.apache.hadoop.dfs.QuotaExceededException
-
- setPattern(Configuration, String) -
Static method in class org.apache.hadoop.mapred.SequenceFileInputFilter.RegexFilter
Define the filtering regex and store it in the conf.
- setPeriod(int) -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Sets the timer period
- setPermission(Path, FsPermission) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
- Set permission of a path.
- setPermission(String, FsPermission) -
Method in class org.apache.hadoop.dfs.NameNode
- Set permissions for an existing file/directory.
- setPermission(FsPermission) -
Method in class org.apache.hadoop.fs.FileStatus
- Sets permission.
- setPermission(Path, FsPermission) -
Method in class org.apache.hadoop.fs.FileSystem
- Set permission of a path.
- setPermission(Path, FsPermission) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Set permission of a path.
- setPermission(Path, FsPermission) -
Method in class org.apache.hadoop.fs.HarFileSystem
- Not implemented.
- setPermission(Path, FsPermission) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
- Use the command chmod to set permission.
- setPingInterval(Configuration, int) -
Static method in class org.apache.hadoop.ipc.Client
Set the ping interval value in the configuration.
- setPrinter(DancingLinks.SolutionAcceptor<Pentomino.ColumnName>) -
Method in class org.apache.hadoop.examples.dancing.Pentomino
- Set the printer for the puzzle.
- setProfileEnabled(boolean) -
Method in class org.apache.hadoop.mapred.JobConf
Set whether the system should collect profiler information for some of
the tasks in this job. The information is stored in the user log
directory.
- setProfileParams(String) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the profiler configuration arguments.
- setProfileTaskRange(boolean, String) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the ranges of maps or reduces to profile.
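The "ranges" string this setter takes is a comma-separated list of task indexes and index ranges (e.g. "0-2,5"). A minimal parser for that syntax can be sketched as follows; the names and code are illustrative assumptions, not Hadoop's internal implementation.

```java
import java.util.ArrayList;
import java.util.List;

public class TaskRangeSketch {
    // Expand a range string like "0-2,5" into the task indexes it covers.
    static List<Integer> expand(String ranges) {
        List<Integer> ids = new ArrayList<>();
        for (String part : ranges.split(",")) {
            String[] bounds = part.split("-");
            int lo = Integer.parseInt(bounds[0].trim());
            // A bare index like "5" is a one-element range.
            int hi = bounds.length > 1 ? Integer.parseInt(bounds[1].trim()) : lo;
            for (int i = lo; i <= hi; i++) {
                ids.add(i);
            }
        }
        return ids;
    }

    public static void main(String[] args) {
        System.out.println(expand("0-2,5")); // prints [0, 1, 2, 5]
    }
}
```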
- setProgressable(Progressable) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Set the progressable object in order to report progress.
- setQuietMode(boolean) -
Method in class org.apache.hadoop.conf.Configuration
Set the quietness-mode.
- setQuota(String, long) -
Method in class org.apache.hadoop.dfs.NameNode
- Set the quota for a directory.
- setReduceDebugScript(String) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the debug script to run when the reduce tasks fail.
- setReducerClass(Class<? extends Reducer>) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the
Reducer
class for the job.
- setReduceSpeculativeExecution(boolean) -
Method in class org.apache.hadoop.mapred.JobConf
- Turn speculative execution on or off for this job for reduce tasks.
- setReplication(Path, short) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
-
- setReplication(String, short) -
Method in class org.apache.hadoop.dfs.NameNode
- Set replication for an existing file.
- setReplication(int) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- setReplication(Path, short) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
- Set replication for an existing file.
- setReplication(Path, short) -
Method in class org.apache.hadoop.fs.FileSystem
- Set replication for an existing file.
- setReplication(Path, short) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Set replication for an existing file.
- setReplication(Path, short) -
Method in class org.apache.hadoop.fs.HarFileSystem
- Not implemented.
- setReplication(Path, short) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- setRunState(int) -
Method in class org.apache.hadoop.mapred.JobStatus
- Change the current run state of the job.
- setSafeMode(FSConstants.SafeModeAction) -
Method in class org.apache.hadoop.dfs.ChecksumDistributedFileSystem
- Enter, leave or get safe mode.
- setSafeMode(String[], int) -
Method in class org.apache.hadoop.dfs.DFSAdmin
- Safe mode maintenance command.
- setSafeMode(FSConstants.SafeModeAction) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
- Enter, leave or get safe mode.
- setSafeMode(FSConstants.SafeModeAction) -
Method in class org.apache.hadoop.dfs.NameNode
-
- setSequenceFileOutputKeyClass(JobConf, Class<?>) -
Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
- Set the key class for the
SequenceFile
- setSequenceFileOutputValueClass(JobConf, Class<?>) -
Static method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat
- Set the value class for the
SequenceFile
- setSessionId(String) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the user-specified session identifier.
- setSize(int) -
Method in class org.apache.hadoop.io.BytesWritable
- Change the size of the buffer.
- setSocketSendBufSize(int) -
Method in class org.apache.hadoop.ipc.Server
- Sets the socket buffer size used for responding to RPCs
- setSpeculativeExecution(boolean) -
Method in class org.apache.hadoop.mapred.JobConf
- Turn speculative execution on or off for this job.
- setState(int) -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
- Set the state for this job.
- setStatus(String) -
Method in interface org.apache.hadoop.mapred.Reporter
- Set the status description for the task.
- setStatus(String) -
Method in class org.apache.hadoop.util.Progress
-
- setStrings(String, String...) -
Method in class org.apache.hadoop.conf.Configuration
- Set the array of string values for the
name
property as
comma delimited values.
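The comma-delimited encoding that setStrings is documented to use round-trips a string array through a single property value. The sketch below shows that encoding in plain Java; it illustrates the documented format only and is not Hadoop's Configuration code.

```java
public class CommaListSketch {
    public static void main(String[] args) {
        // Multiple values stored under one property name,
        // joined and split on commas.
        String[] values = {"alpha", "beta", "gamma"};
        String stored = String.join(",", values);   // "alpha,beta,gamma"
        String[] recovered = stored.split(",");     // recovers the array
        System.out.println(stored);
        System.out.println(recovered.length); // prints 3
    }
}
```

Note the obvious caveat of any such encoding: values containing commas would need escaping before being joined.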
- setTabSize(int) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- setTag(Text) -
Method in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
-
- setTag(String, String) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Sets the named tag to the specified value.
- setTag(String, int) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Sets the named tag to the specified value.
- setTag(String, long) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Sets the named tag to the specified value.
- setTag(String, short) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Sets the named tag to the specified value.
- setTag(String, byte) -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Sets the named tag to the specified value.
- setTag(String, String) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Sets the named tag to the specified value.
- setTag(String, int) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Sets the named tag to the specified value.
- setTag(String, long) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Sets the named tag to the specified value.
- setTag(String, short) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Sets the named tag to the specified value.
- setTag(String, byte) -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Sets the named tag to the specified value.
- setTaskId(String) -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
Deprecated. Use
TaskCompletionEvent.setTaskID(TaskAttemptID)
instead.
- setTaskID(TaskAttemptID) -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
- Sets task id.
- setTaskId(String) -
Method in class org.apache.hadoop.mapred.TaskLogAppender
-
- setTaskOutputFilter(JobClient.TaskStatusFilter) -
Method in class org.apache.hadoop.mapred.JobClient
- Deprecated.
- setTaskOutputFilter(JobConf, JobClient.TaskStatusFilter) -
Static method in class org.apache.hadoop.mapred.JobClient
- Modify the JobConf to set the task output filter.
- setTaskRunTime(int) -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
- Set the task completion time
- setTaskStatus(TaskCompletionEvent.Status) -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
- Set task status.
- setTaskTrackerHttp(String) -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
- Set task tracker http location.
- setThreads(int, int) -
Method in class org.apache.hadoop.mapred.StatusHttpServer
-
- setTimeout(int) -
Method in class org.apache.hadoop.ipc.Server
- Deprecated.
- setTotalBlocks(long) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- setTotalDirs(long) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- setTotalFiles(long) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- setTotalLogFileSize(long) -
Method in class org.apache.hadoop.mapred.TaskLogAppender
-
- setTotalOpenFiles(long) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
- Set total number of open files encountered during this scan.
- setTotalOpenFilesBlocks(long) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- setTotalOpenFilesSize(long) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- setTotalSize(long) -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- setUMask(Configuration, FsPermission) -
Static method in class org.apache.hadoop.fs.permission.FsPermission
- Set the user file creation mask (umask)
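The effect of a file creation mask is purely bitwise: the umask clears permission bits from the requested mode and never grants any. The arithmetic can be shown in plain Java; the values below are conventional examples, not anything mandated by the Hadoop API.

```java
public class UmaskSketch {
    public static void main(String[] args) {
        // Illustrative arithmetic only (octal literals).
        int requested = 0666;              // rw-rw-rw- requested at creation
        int umask = 0022;                  // a typical default umask
        int effective = requested & ~umask; // umask bits are removed
        System.out.println(Integer.toOctalString(effective)); // prints 644
    }
}
```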
- setUpdate(Document, Term) -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
- Set the instance to be an update operation.
- setUser(String) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the reported username for this job.
- setUserJobConfProps(boolean) -
Method in class org.apache.hadoop.streaming.StreamJob
This method sets the user jobconf variables specified
by the user via -jobconf key=value.
- setVerbose(boolean) -
Method in class org.apache.hadoop.streaming.JarBuilder
-
- setVerifyChecksum(boolean) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
-
- setWorkingDirectory(Path) -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
-
- setWorkingDirectory(Path) -
Method in class org.apache.hadoop.dfs.HftpFileSystem
-
- setWorkingDirectory(Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Set the current working directory for the given file system.
- setWorkingDirectory(Path) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Set the current working directory for the given file system.
- setWorkingDirectory(Path) -
Method in class org.apache.hadoop.fs.ftp.FTPFileSystem
-
- setWorkingDirectory(Path) -
Method in class org.apache.hadoop.fs.HarFileSystem
-
- setWorkingDirectory(Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- setWorkingDirectory(Path) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
- Set the working directory to the given directory.
- setWorkingDirectory(Path) -
Method in class org.apache.hadoop.fs.s3.S3FileSystem
-
- setWorkingDirectory(Path) -
Method in class org.apache.hadoop.fs.s3native.NativeS3FileSystem
- Set the working directory to the given directory.
- setWorkingDirectory(Path) -
Method in class org.apache.hadoop.mapred.JobConf
- Set the current working directory for the default file system.
- setWorkingDirectory(File) -
Method in class org.apache.hadoop.util.Shell
Set the working directory.
- Shard - Class in org.apache.hadoop.contrib.index.mapred
- This class represents the metadata of a shard.
- Shard() -
Constructor for class org.apache.hadoop.contrib.index.mapred.Shard
- Constructor.
- Shard(long, String, long) -
Constructor for class org.apache.hadoop.contrib.index.mapred.Shard
Construct a shard from a version number, a directory and a generation
number.
- Shard(Shard) -
Constructor for class org.apache.hadoop.contrib.index.mapred.Shard
- Construct using a shard object.
- ShardWriter - Class in org.apache.hadoop.contrib.index.lucene
- The initial version of an index is stored in the perm dir.
- ShardWriter(FileSystem, Shard, String, IndexUpdateConfiguration) -
Constructor for class org.apache.hadoop.contrib.index.lucene.ShardWriter
- Constructor
- Shell - Class in org.apache.hadoop.util
- A base class for running a Unix command.
- Shell() -
Constructor for class org.apache.hadoop.util.Shell
-
- Shell(long) -
Constructor for class org.apache.hadoop.util.Shell
-
- Shell.ExitCodeException - Exception in org.apache.hadoop.util
- This is an IOException with exit code added.
- Shell.ExitCodeException(int, String) -
Constructor for exception org.apache.hadoop.util.Shell.ExitCodeException
-
- Shell.ShellCommandExecutor - Class in org.apache.hadoop.util
- A simple shell command executor.
- Shell.ShellCommandExecutor(String[]) -
Constructor for class org.apache.hadoop.util.Shell.ShellCommandExecutor
-
- Shell.ShellCommandExecutor(String[], File) -
Constructor for class org.apache.hadoop.util.Shell.ShellCommandExecutor
-
- Shell.ShellCommandExecutor(String[], File, Map<String, String>) -
Constructor for class org.apache.hadoop.util.Shell.ShellCommandExecutor
-
- ShellCommand - Class in org.apache.hadoop.fs
- Deprecated. Use
Shell
instead.
- ShellCommand() -
Constructor for class org.apache.hadoop.fs.ShellCommand
- Deprecated.
- shippedCanonFiles_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- shouldPreserveInput() -
Method in class org.apache.hadoop.io.SequenceFile.Sorter.SegmentDescriptor
-
- shouldRetry(Exception, int) -
Method in interface org.apache.hadoop.io.retry.RetryPolicy
-
Determines whether the framework should retry a
method for the given exception, and the number
of retries that have been made for that operation
so far.
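The contract described for shouldRetry — decide, per failed attempt, whether to try again given the exception and the retry count so far — can be sketched with a small stand-in interface. This mirrors the intent of RetryPolicy only; the classes and names below are invented for the example and are not Hadoop's.

```java
public class RetrySketch {
    // Hypothetical stand-in for the retry-policy idea.
    interface Policy {
        boolean shouldRetry(RuntimeException e, int retriesSoFar);
    }

    static int attempts = 0;

    static String flaky() {
        // Fails twice, then succeeds, to exercise the retry loop.
        if (++attempts < 3) throw new RuntimeException("transient failure");
        return "ok";
    }

    static String callWithRetry(Policy policy) {
        int retries = 0;
        while (true) {
            try {
                return flaky();
            } catch (RuntimeException e) {
                // The policy decides whether this failure is worth retrying.
                if (!policy.shouldRetry(e, retries++)) throw e;
            }
        }
    }

    public static void main(String[] args) {
        Policy upToFive = (e, n) -> n < 5; // retry at most 5 times
        System.out.println(callWithRetry(upToFive)); // prints ok
    }
}
```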
- shuffleError(String, String) -
Method in class org.apache.hadoop.mapred.TaskTracker
- Deprecated.
- shuffleError(TaskAttemptID, String) -
Method in class org.apache.hadoop.mapred.TaskTracker
- A reduce-task failed to shuffle the map-outputs.
- shutdown() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeMetrics
-
- shutdown() -
Method in class org.apache.hadoop.dfs.datanode.metrics.DataNodeStatistics
Shuts down the statistics - unregisters the mbean.
- shutdown() -
Method in class org.apache.hadoop.dfs.DataNode
- Shut down this instance of the datanode.
- shutdown() -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
- Shutdown the FSDataset
- shutdown() -
Method in class org.apache.hadoop.dfs.namenode.metrics.NameNodeStatistics
Shuts down the statistics - unregisters the mbean.
- shutdown() -
Method in class org.apache.hadoop.dfs.NameNodeMetrics
-
- shutdown() -
Method in class org.apache.hadoop.dfs.SecondaryNameNode
Shut down this instance of the secondary namenode.
- shutdown() -
Method in class org.apache.hadoop.fs.DU
- Shut down the refreshing thread.
- shutdown() -
Method in class org.apache.hadoop.ipc.metrics.RpcMetrics
-
- shutdown() -
Method in class org.apache.hadoop.mapred.TaskTracker
-
- SimpleCharStream - Class in org.apache.hadoop.record.compiler.generated
- An implementation of interface CharStream, where the stream is assumed to
contain only ASCII characters (without unicode processing).
- SimpleCharStream(Reader, int, int, int) -
Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- SimpleCharStream(Reader, int, int) -
Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- SimpleCharStream(Reader) -
Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- SimpleCharStream(InputStream, String, int, int, int) -
Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- SimpleCharStream(InputStream, int, int, int) -
Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- SimpleCharStream(InputStream, String, int, int) -
Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- SimpleCharStream(InputStream, int, int) -
Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- SimpleCharStream(InputStream, String) -
Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- SimpleCharStream(InputStream) -
Constructor for class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- simpleHostname(String) -
Static method in class org.apache.hadoop.util.StringUtils
- Given a full hostname, return the word up to the first dot.
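The entry above describes a one-line transformation. A minimal sketch of the same behavior using only JDK calls (the class and method names here are hypothetical stand-ins, not the Hadoop implementation itself):

```java
public class SimpleHostnameSketch {
    // Hypothetical re-implementation of the behavior documented for
    // StringUtils.simpleHostname: return the portion of a fully-qualified
    // hostname up to (but not including) the first dot.
    static String simpleHostname(String fullHostname) {
        int dot = fullHostname.indexOf('.');
        return (dot < 0) ? fullHostname : fullHostname.substring(0, dot);
    }

    public static void main(String[] args) {
        System.out.println(simpleHostname("node17.rack2.example.com"));
    }
}
```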
- size() -
Method in class org.apache.hadoop.io.MapWritable
-
- size() -
Method in class org.apache.hadoop.io.SortedMapWritable
-
- size() -
Method in class org.apache.hadoop.mapred.Counters.Group
- Returns the number of counters in this group.
- size() -
Method in class org.apache.hadoop.mapred.Counters
- Returns the total number of counters, by summing the number of counters
in each group.
- size() -
Method in class org.apache.hadoop.mapred.join.TupleWritable
- The number of children in this Tuple.
- size() -
Method in class org.apache.hadoop.util.PriorityQueue
- Returns the number of elements currently stored in the PriorityQueue.
- SIZE_OF_INTEGER -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- skip(long) -
Method in class org.apache.hadoop.fs.BufferedFSInputStream
-
- skip(long) -
Method in class org.apache.hadoop.fs.FSInputChecker
- Skips over and discards
n
bytes of data from the
input stream.
- skip(long) -
Method in class org.apache.hadoop.io.compress.GzipCodec.GzipInputStream
-
- skip(DataInput) -
Static method in class org.apache.hadoop.io.Text
- Skips over one Text in the input.
- skip(DataInput) -
Static method in class org.apache.hadoop.io.UTF8
- Deprecated. Skips over one UTF8 in the input.
- skip(K) -
Method in interface org.apache.hadoop.mapred.join.ComposableRecordReader
- Skip key-value pairs with keys less than or equal to the key provided.
- skip(K) -
Method in class org.apache.hadoop.mapred.join.CompositeRecordReader
- Pass skip key to child RRs.
- skip(K) -
Method in class org.apache.hadoop.mapred.join.WrappedRecordReader
- Skip key-value pairs with keys less than or equal to the key provided.
- skip(RecordInput, String, TypeID) -
Static method in class org.apache.hadoop.record.meta.Utils
- Read or skip bytes from the stream based on the given type.
- skipCompressedByteArray(DataInput) -
Static method in class org.apache.hadoop.io.WritableUtils
-
- skipFully(InputStream, long) -
Static method in class org.apache.hadoop.io.IOUtils
- Similar to readFully().
- skipFully(DataInput, int) -
Static method in class org.apache.hadoop.io.WritableUtils
- Skip len bytes in the input stream in.
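A skip of this kind cannot rely on a single `skipBytes` call, since `DataInput.skipBytes` may skip fewer bytes than requested. A hedged sketch of the loop such a utility typically needs (class and method names are illustrative, not the Hadoop source):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class SkipFullySketch {
    // Keep calling skipBytes until len bytes have been consumed,
    // failing with EOFException on a premature end of stream.
    static void skipFully(DataInputStream in, int len) throws IOException {
        while (len > 0) {
            int skipped = in.skipBytes(len);
            if (skipped <= 0) {
                throw new EOFException("Premature EOF, " + len + " bytes left to skip");
            }
            len -= skipped;
        }
    }

    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(new byte[]{1, 2, 3, 4, 5}));
        skipFully(in, 3);
        System.out.println(in.read()); // stream now positioned at the 4th byte
    }
}
```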
- SleepJob - Class in org.apache.hadoop.examples
- Dummy class for testing the MR framework.
- SleepJob() -
Constructor for class org.apache.hadoop.examples.SleepJob
-
- SMALL_BUFFER_SIZE -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- SocketInputStream - Class in org.apache.hadoop.net
- This implements an input stream that can have a timeout while reading.
- SocketInputStream(ReadableByteChannel, long) -
Constructor for class org.apache.hadoop.net.SocketInputStream
- Create a new input stream with the given timeout.
- SocketInputStream(Socket, long) -
Constructor for class org.apache.hadoop.net.SocketInputStream
- Same as SocketInputStream(socket.getChannel(), timeout):
Create a new input stream with the given timeout.
- SocketInputStream(Socket) -
Constructor for class org.apache.hadoop.net.SocketInputStream
- Same as SocketInputStream(socket.getChannel(), socket.getSoTimeout())
:
Create a new input stream with the given timeout.
- SocketOutputStream - Class in org.apache.hadoop.net
- This implements an output stream that can have a timeout while writing.
- SocketOutputStream(WritableByteChannel, long) -
Constructor for class org.apache.hadoop.net.SocketOutputStream
- Create a new output stream with the given timeout.
- SocketOutputStream(Socket, long) -
Constructor for class org.apache.hadoop.net.SocketOutputStream
- Same as SocketOutputStream(socket.getChannel(), timeout):
Create a new output stream with the given timeout.
- SocksSocketFactory - Class in org.apache.hadoop.net
- Specialized SocketFactory to create sockets with a SOCKS proxy
- SocksSocketFactory() -
Constructor for class org.apache.hadoop.net.SocksSocketFactory
- Default empty constructor (for use with the reflection API).
- SocksSocketFactory(Proxy) -
Constructor for class org.apache.hadoop.net.SocksSocketFactory
- Constructor with a supplied Proxy
- solution(List<List<ColumnName>>) -
Method in interface org.apache.hadoop.examples.dancing.DancingLinks.SolutionAcceptor
- A callback to return a solution to the application.
- solve(int[], DancingLinks.SolutionAcceptor<ColumnName>) -
Method in class org.apache.hadoop.examples.dancing.DancingLinks
- Given a prefix, find solutions under it.
- solve(DancingLinks.SolutionAcceptor<ColumnName>) -
Method in class org.apache.hadoop.examples.dancing.DancingLinks
- Solve a complete problem
- solve(int[]) -
Method in class org.apache.hadoop.examples.dancing.Pentomino
- Find all of the solutions that start with the given prefix.
- solve() -
Method in class org.apache.hadoop.examples.dancing.Pentomino
- Find all of the solutions to the puzzle.
- solve() -
Method in class org.apache.hadoop.examples.dancing.Sudoku
-
- Sort - Class in org.apache.hadoop.examples
- This is the trivial map/reduce program that does absolutely nothing
other than use the framework to fragment and sort the input values.
- Sort() -
Constructor for class org.apache.hadoop.examples.Sort
-
- sort(Path[], Path, boolean) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Perform a file sort from a set of input files into an output file.
- sort(Path, Path) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- The backwards compatible interface to sort.
- sort(IndexedSortable, int, int) -
Method in class org.apache.hadoop.util.HeapSort
- Sort the given range of items using heap sort.
- sort(IndexedSortable, int, int, Progressable) -
Method in class org.apache.hadoop.util.HeapSort
- Same as
IndexedSorter.sort(IndexedSortable,int,int)
, but indicate progress
periodically.
- sort(IndexedSortable, int, int) -
Method in interface org.apache.hadoop.util.IndexedSorter
- Sort the items accessed through the given IndexedSortable over the given
range of logical indices.
- sort(IndexedSortable, int, int, Progressable) -
Method in interface org.apache.hadoop.util.IndexedSorter
- Same as
IndexedSorter.sort(IndexedSortable,int,int)
, but indicate progress
periodically.
- sort(IndexedSortable, int, int) -
Method in class org.apache.hadoop.util.QuickSort
- Sort the given range of items using quick sort.
- sort(IndexedSortable, int, int, Progressable) -
Method in class org.apache.hadoop.util.QuickSort
- Same as
IndexedSorter.sort(IndexedSortable,int,int)
, but indicate progress
periodically.
- sortAndIterate(Path[], Path, boolean) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Perform a file sort from a set of input files and return an iterator.
- SortedMapWritable - Class in org.apache.hadoop.io
- A Writable SortedMap.
- SortedMapWritable() -
Constructor for class org.apache.hadoop.io.SortedMapWritable
- Default constructor.
- SortedMapWritable(SortedMapWritable) -
Constructor for class org.apache.hadoop.io.SortedMapWritable
- Copy constructor.
- sortNodeList(ArrayList<DatanodeDescriptor>, String, String) -
Method in class org.apache.hadoop.dfs.JspHelper
-
- SOURCE_TAGS_FIELD -
Static variable in class org.apache.hadoop.contrib.utils.join.DataJoinReducerBase
-
- specialConstructor -
Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
- This variable determines which constructor was used to create
this object and thereby affects the semantics of the
"getMessage" method (see below).
- specialToken -
Variable in class org.apache.hadoop.record.compiler.generated.Token
- This field is used to access special tokens that occur prior to this
token, but after the immediately preceding regular (non-special) token.
- split(int) -
Method in class org.apache.hadoop.examples.dancing.DancingLinks
- Generate a list of row choices to cover the first moves.
- split(String) -
Static method in class org.apache.hadoop.util.StringUtils
- Split a string using the default separator
- split(String, char, char) -
Static method in class org.apache.hadoop.util.StringUtils
- Split a string using the given separator
- splitKeyVal(byte[], int, int, Text, Text, int) -
Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
- Split a UTF-8 byte array into key and value,
assuming that the delimiter is at splitPos.
- splitKeyVal(byte[], Text, Text, int) -
Static method in class org.apache.hadoop.streaming.UTF8ByteArrayUtils
- Split a UTF-8 byte array into key and value,
assuming that the delimiter is at splitPos.
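Given the index of the delimiter byte (in streaming jobs, typically a tab), the split reduces to two substring decodes. A hedged sketch with hypothetical names, using only JDK classes rather than the Hadoop Text type:

```java
import java.nio.charset.StandardCharsets;

public class SplitKeyValSketch {
    // Copy the bytes before splitPos into the key and the bytes after it
    // into the value; the delimiter byte itself is dropped.
    static String[] splitKeyVal(byte[] utf8, int splitPos) {
        String key = new String(utf8, 0, splitPos, StandardCharsets.UTF_8);
        String val = new String(utf8, splitPos + 1, utf8.length - splitPos - 1,
                StandardCharsets.UTF_8);
        return new String[]{key, val};
    }

    public static void main(String[] args) {
        byte[] line = "apple\t3".getBytes(StandardCharsets.UTF_8);
        String[] kv = splitKeyVal(line, 5); // the '\t' is at index 5
        System.out.println(kv[0] + " -> " + kv[1]);
    }
}
```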
- StandardSocketFactory - Class in org.apache.hadoop.net
- Default SocketFactory that creates standard, directly connected sockets.
- StandardSocketFactory() -
Constructor for class org.apache.hadoop.net.StandardSocketFactory
- Default empty constructor (for use with the reflection API).
- start() -
Method in class org.apache.hadoop.fs.DU
- Start the disk usage checking thread.
- start() -
Method in class org.apache.hadoop.ipc.Server
- Starts the service.
- start() -
Method in class org.apache.hadoop.mapred.StatusHttpServer
- Start the server.
- startLocalOutput(Path, Path) -
Method in class org.apache.hadoop.fs.ChecksumFileSystem
-
- startLocalOutput(Path, Path) -
Method in class org.apache.hadoop.fs.FileSystem
- Returns a local File that the user can write output to.
- startLocalOutput(Path, Path) -
Method in class org.apache.hadoop.fs.FilterFileSystem
- Returns a local File that the user can write output to.
- startLocalOutput(Path, Path) -
Method in class org.apache.hadoop.fs.HarFileSystem
- not implemented.
- startLocalOutput(Path, Path) -
Method in class org.apache.hadoop.fs.kfs.KosmosFileSystem
-
- startLocalOutput(Path, Path) -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- startMap(String) -
Method in class org.apache.hadoop.record.BinaryRecordInput
-
- startMap(TreeMap, String) -
Method in class org.apache.hadoop.record.BinaryRecordOutput
-
- startMap(String) -
Method in class org.apache.hadoop.record.CsvRecordInput
-
- startMap(TreeMap, String) -
Method in class org.apache.hadoop.record.CsvRecordOutput
-
- startMap(String) -
Method in interface org.apache.hadoop.record.RecordInput
- Check the mark for start of the serialized map.
- startMap(TreeMap, String) -
Method in interface org.apache.hadoop.record.RecordOutput
- Mark the start of a map to be serialized.
- startMap(String) -
Method in class org.apache.hadoop.record.XmlRecordInput
-
- startMap(TreeMap, String) -
Method in class org.apache.hadoop.record.XmlRecordOutput
-
- startMonitoring() -
Method in class org.apache.hadoop.metrics.file.FileContext
- Starts or restarts monitoring, by opening in append-mode, the
file specified by the
fileName
attribute,
if specified.
- startMonitoring() -
Method in interface org.apache.hadoop.metrics.MetricsContext
- Starts or restarts monitoring, the emitting of metrics records as they are
updated.
- startMonitoring() -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Starts or restarts monitoring, the emitting of metrics records.
- startMonitoring() -
Method in class org.apache.hadoop.metrics.spi.NullContext
- Do-nothing version of startMonitoring
- startNextPhase() -
Method in class org.apache.hadoop.util.Progress
- Called during execution to move to the next phase at this level in the
tree.
- startNotifier() -
Static method in class org.apache.hadoop.mapred.JobEndNotifier
-
- startRecord(String) -
Method in class org.apache.hadoop.record.BinaryRecordInput
-
- startRecord(Record, String) -
Method in class org.apache.hadoop.record.BinaryRecordOutput
-
- startRecord(String) -
Method in class org.apache.hadoop.record.CsvRecordInput
-
- startRecord(Record, String) -
Method in class org.apache.hadoop.record.CsvRecordOutput
-
- startRecord(String) -
Method in interface org.apache.hadoop.record.RecordInput
- Check the mark for start of the serialized record.
- startRecord(Record, String) -
Method in interface org.apache.hadoop.record.RecordOutput
- Mark the start of a record to be serialized.
- startRecord(String) -
Method in class org.apache.hadoop.record.XmlRecordInput
-
- startRecord(Record, String) -
Method in class org.apache.hadoop.record.XmlRecordOutput
-
- startTracker(JobConf) -
Static method in class org.apache.hadoop.mapred.JobTracker
- Start the JobTracker with given configuration.
- startUpgrade() -
Method in interface org.apache.hadoop.dfs.Upgradeable
- Prepare for the upgrade.
- startupShutdownMessage(Class, String[], Log) -
Static method in class org.apache.hadoop.util.StringUtils
- Print a log message for starting up and shutting down
- startVector(String) -
Method in class org.apache.hadoop.record.BinaryRecordInput
-
- startVector(ArrayList, String) -
Method in class org.apache.hadoop.record.BinaryRecordOutput
-
- startVector(String) -
Method in class org.apache.hadoop.record.CsvRecordInput
-
- startVector(ArrayList, String) -
Method in class org.apache.hadoop.record.CsvRecordOutput
-
- startVector(String) -
Method in interface org.apache.hadoop.record.RecordInput
- Check the mark for start of the serialized vector.
- startVector(ArrayList, String) -
Method in interface org.apache.hadoop.record.RecordOutput
- Mark the start of a vector to be serialized.
- startVector(String) -
Method in class org.apache.hadoop.record.XmlRecordInput
-
- startVector(ArrayList, String) -
Method in class org.apache.hadoop.record.XmlRecordOutput
-
- stat2Paths(FileStatus[]) -
Static method in class org.apache.hadoop.fs.FileUtil
- Convert an array of FileStatus to an array of Path.
- stat2Paths(FileStatus[], Path) -
Static method in class org.apache.hadoop.fs.FileUtil
- Convert an array of FileStatus to an array of Path.
- stateChangeLog -
Static variable in class org.apache.hadoop.dfs.NameNode
-
- staticFlag -
Static variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- statistics -
Variable in class org.apache.hadoop.fs.FileSystem
- The statistics for this file system.
- StatusHttpServer - Class in org.apache.hadoop.mapred
- Create a Jetty embedded server to answer http requests.
- StatusHttpServer(String, String, int, boolean) -
Constructor for class org.apache.hadoop.mapred.StatusHttpServer
- Create a status server on the given port.
- StatusHttpServer.StackServlet - Class in org.apache.hadoop.mapred
- A very simple servlet to serve up a text representation of the current
stack traces.
- StatusHttpServer.StackServlet() -
Constructor for class org.apache.hadoop.mapred.StatusHttpServer.StackServlet
-
- StatusHttpServer.TaskGraphServlet - Class in org.apache.hadoop.mapred
- The servlet that outputs svg graphics for map / reduce task
statuses
- StatusHttpServer.TaskGraphServlet() -
Constructor for class org.apache.hadoop.mapred.StatusHttpServer.TaskGraphServlet
-
- statusUpdate(String, TaskStatus) -
Method in class org.apache.hadoop.mapred.TaskTracker
- Deprecated.
- statusUpdate(TaskAttemptID, TaskStatus) -
Method in class org.apache.hadoop.mapred.TaskTracker
- Called periodically to report Task progress, from 0.0 to 1.0.
- STILL_WAITING -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- stop() -
Method in class org.apache.hadoop.dfs.NameNode
- Stop all NameNode threads and wait for all to finish.
- stop() -
Method in class org.apache.hadoop.ipc.Client
- Stop all threads related to this client.
- stop() -
Method in class org.apache.hadoop.ipc.Server
- Stops the service.
- stop() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
- Set the thread state to STOPPING so that the
thread stops when it wakes up.
- stop() -
Method in class org.apache.hadoop.mapred.StatusHttpServer
- Stop the server.
- stopMonitoring() -
Method in class org.apache.hadoop.metrics.file.FileContext
- Stops monitoring, closing the file.
- stopMonitoring() -
Method in interface org.apache.hadoop.metrics.MetricsContext
- Stops monitoring.
- stopMonitoring() -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Stops monitoring.
- stopNotifier() -
Static method in class org.apache.hadoop.mapred.JobEndNotifier
-
- stopProxy(VersionedProtocol) -
Static method in class org.apache.hadoop.ipc.RPC
- Stop this proxy and release its invoker's resources.
- stopTracker() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- storageID -
Variable in class org.apache.hadoop.dfs.DatanodeID
-
- store(Configuration, K, String) -
Static method in class org.apache.hadoop.io.DefaultStringifier
- Stores the item in the configuration with the given keyName.
- storeArray(Configuration, K[], String) -
Static method in class org.apache.hadoop.io.DefaultStringifier
- Stores the array of items in the configuration with the given keyName.
- storeBlock(Block, File) -
Method in interface org.apache.hadoop.fs.s3.FileSystemStore
-
- storeINode(Path, INode) -
Method in interface org.apache.hadoop.fs.s3.FileSystemStore
-
- StreamBackedIterator<X extends Writable> - Class in org.apache.hadoop.mapred.join
- This class provides an implementation of ResetableIterator.
- StreamBackedIterator() -
Constructor for class org.apache.hadoop.mapred.join.StreamBackedIterator
-
- StreamBaseRecordReader - Class in org.apache.hadoop.streaming
- Shared functionality for hadoopStreaming formats.
- StreamBaseRecordReader(FSDataInputStream, FileSplit, Reporter, JobConf, FileSystem) -
Constructor for class org.apache.hadoop.streaming.StreamBaseRecordReader
-
- streamBlockInAscii(InetSocketAddress, long, long, long, long, long, JspWriter) -
Method in class org.apache.hadoop.dfs.JspHelper
-
- StreamFile - Class in org.apache.hadoop.dfs
-
- StreamFile() -
Constructor for class org.apache.hadoop.dfs.StreamFile
-
- StreamInputFormat - Class in org.apache.hadoop.streaming
- An input format that selects a RecordReader based on a JobConf property.
- StreamInputFormat() -
Constructor for class org.apache.hadoop.streaming.StreamInputFormat
-
- StreamJob - Class in org.apache.hadoop.streaming
- All the client-side work happens here.
- StreamJob(String[], boolean) -
Constructor for class org.apache.hadoop.streaming.StreamJob
-
- StreamUtil - Class in org.apache.hadoop.streaming
- Utilities not available elsewhere in Hadoop.
- StreamUtil() -
Constructor for class org.apache.hadoop.streaming.StreamUtil
-
- StreamXmlRecordReader - Class in org.apache.hadoop.streaming
- A way to interpret XML fragments as Mapper input records.
- StreamXmlRecordReader(FSDataInputStream, FileSplit, Reporter, JobConf, FileSystem) -
Constructor for class org.apache.hadoop.streaming.StreamXmlRecordReader
-
- STRING -
Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
-
- STRING_VALUE_MAX -
Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- STRING_VALUE_MIN -
Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- Stringifier<T> - Interface in org.apache.hadoop.io
- Stringifier interface offers two methods to convert an object
to a string representation and restore the object given its
string representation.
- stringifyException(Throwable) -
Static method in class org.apache.hadoop.util.StringUtils
- Make a string representation of the exception.
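Stringifying an exception in this way usually means capturing its full stack trace into a String, the standard idiom being a StringWriter-backed PrintWriter. A sketch of that idiom (names are illustrative; this is not the Hadoop source):

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class StringifyExceptionSketch {
    // Capture a Throwable's stack trace into a String so it can be
    // logged or embedded in a status message.
    static String stringifyException(Throwable t) {
        StringWriter sw = new StringWriter();
        t.printStackTrace(new PrintWriter(sw, true));
        return sw.toString();
    }

    public static void main(String[] args) {
        String s = stringifyException(new IllegalStateException("demo failure"));
        System.out.println(s.startsWith("java.lang.IllegalStateException: demo failure"));
    }
}
```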
- stringifySolution(int, int, List<List<Pentomino.ColumnName>>) -
Static method in class org.apache.hadoop.examples.dancing.Pentomino
- Convert a solution to the puzzle returned by the model into a string
that represents the placement of the pieces onto the board.
- stringToPath(String[]) -
Static method in class org.apache.hadoop.util.StringUtils
-
- stringToURI(String[]) -
Static method in class org.apache.hadoop.util.StringUtils
-
- StringTypeID -
Static variable in class org.apache.hadoop.record.meta.TypeID
-
- StringUtils - Class in org.apache.hadoop.util
- General string utilities.
- StringUtils() -
Constructor for class org.apache.hadoop.util.StringUtils
-
- StringValueMax - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements a value aggregator that maintains the largest of
a sequence of strings.
- StringValueMax() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMax
- The default constructor.
- StringValueMin - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements a value aggregator that maintains the smallest of
a sequence of strings.
- StringValueMin() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.StringValueMin
- The default constructor.
- STRUCT -
Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
-
- StructTypeID - Class in org.apache.hadoop.record.meta
- Represents typeID for a struct
- StructTypeID(RecordTypeInfo) -
Constructor for class org.apache.hadoop.record.meta.StructTypeID
- Create a StructTypeID based on the RecordTypeInfo of some record
- subMap(WritableComparable, WritableComparable) -
Method in class org.apache.hadoop.io.SortedMapWritable
-
- submit() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
- Submit this job to mapred.
- submitAndMonitorJob() -
Method in class org.apache.hadoop.streaming.StreamJob
-
- submitJob(String) -
Method in class org.apache.hadoop.mapred.JobClient
- Submit a job to the MR system.
- submitJob(JobConf) -
Method in class org.apache.hadoop.mapred.JobClient
- Submit a job to the MR system.
- submitJob(String) -
Method in class org.apache.hadoop.mapred.JobTracker
- Deprecated.
- submitJob(JobID) -
Method in class org.apache.hadoop.mapred.JobTracker
- JobTracker.submitJob() kicks off a new job.
- submitJob(JobConf) -
Static method in class org.apache.hadoop.mapred.pipes.Submitter
- Submit a job to the map/reduce cluster.
- Submitter - Class in org.apache.hadoop.mapred.pipes
- The main entry point and job submitter.
- Submitter() -
Constructor for class org.apache.hadoop.mapred.pipes.Submitter
-
- SUCCEEDED -
Static variable in class org.apache.hadoop.mapred.JobStatus
-
- SUCCESS -
Static variable in class org.apache.hadoop.dfs.Balancer
-
- SUCCESS -
Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
-
- Sudoku - Class in org.apache.hadoop.examples.dancing
- This class uses the dancing links algorithm from Knuth to solve sudoku
puzzles.
- Sudoku(InputStream) -
Constructor for class org.apache.hadoop.examples.dancing.Sudoku
- Set up a puzzle board to the given size.
- Sudoku.ColumnName - Interface in org.apache.hadoop.examples.dancing
- This interface is a marker class for the columns created for the
Sudoku solver.
- suffix(String) -
Method in class org.apache.hadoop.fs.Path
- Adds a suffix to the final name in the path.
- sum(Counters, Counters) -
Static method in class org.apache.hadoop.mapred.Counters
- Convenience method for computing the sum of two sets of counters.
- suspend() -
Method in class org.apache.hadoop.mapred.jobcontrol.JobControl
- Suspend the running thread.
- swap(int, int) -
Method in interface org.apache.hadoop.util.IndexedSortable
- Swap items at the given addresses.
- SwitchTo(int) -
Method in class org.apache.hadoop.record.compiler.generated.RccTokenManager
-
- SYMBOL -
Variable in enum org.apache.hadoop.fs.permission.FsAction
- Symbolic representation
- symLink(String, String) -
Static method in class org.apache.hadoop.fs.FileUtil
- Create a soft link between a src and destination
only on a local disk.
- sync() -
Method in class org.apache.hadoop.fs.FSDataOutputStream
- Synchronize all buffered data with the underlying device.
- sync() -
Method in interface org.apache.hadoop.fs.Syncable
- Synchronize all buffered data with the underlying device.
- sync(long) -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Seek to the next sync mark past a given position.
- sync() -
Method in class org.apache.hadoop.io.SequenceFile.Writer
- Create a sync point.
- SYNC_INTERVAL -
Static variable in class org.apache.hadoop.io.SequenceFile
- The number of bytes between sync points.
- Syncable - Interface in org.apache.hadoop.fs
- This interface declare the sync() operation.
- syncs -
Variable in class org.apache.hadoop.dfs.NameNodeMetrics
-
- syncSeen() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Returns true iff the previous call to next passed a sync mark.
T
- tabSize -
Variable in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- tag -
Variable in class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
-
- TAG -
Static variable in class org.apache.hadoop.record.compiler.Consts
-
- TaggedMapOutput - Class in org.apache.hadoop.contrib.utils.join
- This abstract class serves as the base class for the values that
flow from the mappers to the reducers in a data join job.
- TaggedMapOutput() -
Constructor for class org.apache.hadoop.contrib.utils.join.TaggedMapOutput
-
- tailMap(WritableComparable) -
Method in class org.apache.hadoop.io.SortedMapWritable
-
- TaskAttemptID - Class in org.apache.hadoop.mapred
- TaskAttemptID represents the immutable and unique identifier for
a task attempt.
- TaskAttemptID(TaskID, int) -
Constructor for class org.apache.hadoop.mapred.TaskAttemptID
- Constructs a TaskAttemptID object from given
TaskID
.
- TaskAttemptID(String, int, boolean, int, int) -
Constructor for class org.apache.hadoop.mapred.TaskAttemptID
- Constructs a TaskId object from given parts.
- TaskCompletionEvent - Class in org.apache.hadoop.mapred
- This is used to track task completion events on
job tracker.
- TaskCompletionEvent() -
Constructor for class org.apache.hadoop.mapred.TaskCompletionEvent
- Default constructor for Writable.
- TaskCompletionEvent(int, String, int, boolean, TaskCompletionEvent.Status, String) -
Constructor for class org.apache.hadoop.mapred.TaskCompletionEvent
- Deprecated.
- TaskCompletionEvent(int, TaskAttemptID, int, boolean, TaskCompletionEvent.Status, String) -
Constructor for class org.apache.hadoop.mapred.TaskCompletionEvent
- Constructor.
- TaskCompletionEvent.Status - Enum in org.apache.hadoop.mapred
-
- TaskID - Class in org.apache.hadoop.mapred
- TaskID represents the immutable and unique identifier for
a Map or Reduce Task.
- TaskID(JobID, boolean, int) -
Constructor for class org.apache.hadoop.mapred.TaskID
- Constructs a TaskID object from given
JobID
.
- TaskID(String, int, boolean, int) -
Constructor for class org.apache.hadoop.mapred.TaskID
- Constructs a TaskInProgressId object from given parts.
- TaskLog - Class in org.apache.hadoop.mapred
- A simple logger to handle the task-specific user logs.
- TaskLog() -
Constructor for class org.apache.hadoop.mapred.TaskLog
-
- TaskLog.LogName - Enum in org.apache.hadoop.mapred
- The filter for userlogs.
- TaskLogAppender - Class in org.apache.hadoop.mapred
- A simple log4j-appender for the task child's
map-reduce system logs.
- TaskLogAppender() -
Constructor for class org.apache.hadoop.mapred.TaskLogAppender
-
- TaskLogServlet - Class in org.apache.hadoop.mapred
- A servlet that is run by the TaskTrackers to provide the task logs via http.
- TaskLogServlet() -
Constructor for class org.apache.hadoop.mapred.TaskLogServlet
-
- TaskReport - Class in org.apache.hadoop.mapred
- A report on the state of a task.
- TaskReport() -
Constructor for class org.apache.hadoop.mapred.TaskReport
-
- TaskTracker - Class in org.apache.hadoop.mapred
- TaskTracker is a process that starts and tracks MR Tasks
in a networked environment.
- TaskTracker(JobConf) -
Constructor for class org.apache.hadoop.mapred.TaskTracker
- Start with the local machine name, and the default JobTracker
- TaskTracker.Child - Class in org.apache.hadoop.mapred
- The main() for child processes.
- TaskTracker.Child() -
Constructor for class org.apache.hadoop.mapred.TaskTracker.Child
-
- TaskTracker.MapOutputServlet - Class in org.apache.hadoop.mapred
- This class is used in TaskTracker's Jetty to serve the map outputs
to other nodes.
- TaskTracker.MapOutputServlet() -
Constructor for class org.apache.hadoop.mapred.TaskTracker.MapOutputServlet
-
- TaskTracker.TaskTrackerMetrics - Class in org.apache.hadoop.mapred
-
- taskTrackers() -
Method in class org.apache.hadoop.mapred.JobTracker
-
- Text - Class in org.apache.hadoop.io
- This class stores text using standard UTF8 encoding.
- Text() -
Constructor for class org.apache.hadoop.io.Text
-
- Text(String) -
Constructor for class org.apache.hadoop.io.Text
- Construct from a string.
- Text(Text) -
Constructor for class org.apache.hadoop.io.Text
- Construct from another text.
- Text(byte[]) -
Constructor for class org.apache.hadoop.io.Text
- Construct from a byte array.
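As the class entry above notes, Text stores its contents as standard UTF-8 bytes, which is why it can be constructed from either a String or a raw byte array. A sketch of that round trip using plain JDK classes (the Hadoop Text class itself is not assumed to be on the classpath here):

```java
import java.nio.charset.StandardCharsets;

public class TextEncodingSketch {
    // Encode a String to the UTF-8 bytes a Text object would hold.
    static byte[] toUtf8(String s) {
        return s.getBytes(StandardCharsets.UTF_8);
    }

    // Decode UTF-8 bytes back into a String.
    static String fromUtf8(byte[] b) {
        return new String(b, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String original = "héllo"; // 'é' occupies two bytes in UTF-8
        byte[] utf8 = toUtf8(original);
        System.out.println(utf8.length + " bytes, round-trips: "
                + fromUtf8(utf8).equals(original));
    }
}
```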
- Text.Comparator - Class in org.apache.hadoop.io
- A WritableComparator optimized for Text keys.
- Text.Comparator() -
Constructor for class org.apache.hadoop.io.Text.Comparator
-
- TextInputFormat - Class in org.apache.hadoop.mapred
- An
InputFormat
for plain text files. - TextInputFormat() -
Constructor for class org.apache.hadoop.mapred.TextInputFormat
-
- TextOutputFormat<K,V> - Class in org.apache.hadoop.mapred
- An
OutputFormat
that writes plain text files. - TextOutputFormat() -
Constructor for class org.apache.hadoop.mapred.TextOutputFormat
-
- TextOutputFormat.LineRecordWriter<K,V> - Class in org.apache.hadoop.mapred
-
- TextOutputFormat.LineRecordWriter(DataOutputStream, String) -
Constructor for class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
-
- TextOutputFormat.LineRecordWriter(DataOutputStream) -
Constructor for class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
-
- toArray() -
Method in class org.apache.hadoop.io.ArrayWritable
-
- toArray() -
Method in class org.apache.hadoop.io.TwoDArrayWritable
-
- toArray(Class<T>, List<T>) -
Static method in class org.apache.hadoop.util.GenericsUtil
- Converts the given
List<T>
to an array of
T[]
.
- toArray(List<T>) -
Static method in class org.apache.hadoop.util.GenericsUtil
- Converts the given
List<T>
to an array of
T[]
.
- token -
Variable in class org.apache.hadoop.record.compiler.generated.Rcc
-
- Token - Class in org.apache.hadoop.record.compiler.generated
- Describes the input token stream.
- Token() -
Constructor for class org.apache.hadoop.record.compiler.generated.Token
-
- token_source -
Variable in class org.apache.hadoop.record.compiler.generated.Rcc
-
- TokenCountMapper<K> - Class in org.apache.hadoop.mapred.lib
- A
Mapper
that maps text values into <token, freq> pairs. - TokenCountMapper() -
Constructor for class org.apache.hadoop.mapred.lib.TokenCountMapper
-
- tokenImage -
Variable in exception org.apache.hadoop.record.compiler.generated.ParseException
- This is a reference to the "tokenImage" array of the generated
parser within which the parse error occurred.
- tokenImage -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- TokenMgrError - Error in org.apache.hadoop.record.compiler.generated
-
- TokenMgrError() -
Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
-
- TokenMgrError(String, int) -
Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
-
- TokenMgrError(boolean, int, int, int, String, char, int) -
Constructor for error org.apache.hadoop.record.compiler.generated.TokenMgrError
-
- toMap() -
Method in class org.apache.hadoop.streaming.Environment
-
- Tool - Interface in org.apache.hadoop.util
- A tool interface that supports handling of generic command-line options.
- ToolRunner - Class in org.apache.hadoop.util
- A utility to help run
Tool
s. - ToolRunner() -
Constructor for class org.apache.hadoop.util.ToolRunner
-
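The `Tool`/`ToolRunner` pair above separates generic command-line option handling from the tool's own logic: the runner strips generic options, then hands the remaining arguments to `Tool.run`. A hypothetical mimic of that pattern (these types are stand-ins, not Hadoop's API):

```java
import java.util.ArrayList;
import java.util.List;

public class ToolPattern {
    // Stand-in for the Tool contract: run with whatever args are
    // left after the runner has consumed generic options.
    interface Tool { int run(String[] args) throws Exception; }

    // Minimal runner: consume "-D key=value" pairs, pass the rest through.
    static int run(Tool tool, String[] args) throws Exception {
        List<String> rest = new ArrayList<>();
        for (int i = 0; i < args.length; i++) {
            if (args[i].equals("-D")) { i++; continue; } // skip flag and its value
            rest.add(args[i]);
        }
        return tool.run(rest.toArray(new String[0]));
    }

    public static void main(String[] args) throws Exception {
        int rc = run(a -> { System.out.println("args: " + a.length); return 0; },
                     new String[]{"-D", "k=v", "input", "output"});
        System.out.println(rc); // 0
    }
}
```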
- top() -
Method in class org.apache.hadoop.util.PriorityQueue
- Returns the least element of the PriorityQueue in constant time.
- toShort() -
Method in class org.apache.hadoop.fs.permission.FsPermission
- Encode the object to a short.
- toString() -
Method in class org.apache.hadoop.conf.Configuration.IntegerRanges
-
- toString() -
Method in class org.apache.hadoop.conf.Configuration
-
- toString() -
Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
-
- toString() -
Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
-
- toString() -
Method in class org.apache.hadoop.contrib.index.lucene.ShardWriter
-
- toString() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp.Op
-
- toString() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
-
- toString() -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
-
- toString() -
Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
-
- toString() -
Method in class org.apache.hadoop.contrib.index.mapred.Shard
-
- toString() -
Method in class org.apache.hadoop.dfs.DataNode
-
- toString() -
Method in class org.apache.hadoop.dfs.DatanodeID
-
- toString() -
Method in class org.apache.hadoop.dfs.DistributedFileSystem
-
- toString() -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
- Stringifies the name of the storage
- toString() -
Method in class org.apache.hadoop.dfs.NamenodeFsck.FsckResult
-
- toString() -
Method in class org.apache.hadoop.dfs.UpgradeStatusReport
- Print basic upgradeStatus details.
- toString() -
Method in class org.apache.hadoop.fs.BlockLocation
-
- toString() -
Method in class org.apache.hadoop.fs.ContentSummary
-
- toString(boolean) -
Method in class org.apache.hadoop.fs.ContentSummary
- Return the string representation of the object in the output format.
- toString() -
Method in class org.apache.hadoop.fs.DF
-
- toString() -
Method in class org.apache.hadoop.fs.DU
-
- toString() -
Method in class org.apache.hadoop.fs.FileSystem.Statistics
-
- toString() -
Method in class org.apache.hadoop.fs.Path
-
- toString() -
Method in class org.apache.hadoop.fs.permission.FsPermission
-
- toString() -
Method in class org.apache.hadoop.fs.permission.PermissionStatus
-
- toString() -
Method in class org.apache.hadoop.fs.RawLocalFileSystem
-
- toString() -
Method in class org.apache.hadoop.fs.s3.Block
-
- toString() -
Method in class org.apache.hadoop.io.BooleanWritable
-
- toString() -
Method in class org.apache.hadoop.io.BytesWritable
- Generate the stream of bytes as hex pairs separated by ' '.
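The hex-pair rendering described for `BytesWritable.toString()` can be sketched in a few lines; the helper name below is hypothetical:

```java
public class HexPairs {
    // Format each byte as two lowercase hex digits, separated by ' '.
    // Masking with 0xff keeps negative bytes in the 00..ff range.
    public static String toHex(byte[] bytes) {
        StringBuilder sb = new StringBuilder(bytes.length * 3);
        for (int i = 0; i < bytes.length; i++) {
            if (i > 0) sb.append(' ');
            sb.append(String.format("%02x", bytes[i] & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(toHex(new byte[]{0x0a, (byte) 0xff})); // 0a ff
    }
}
```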
- toString() -
Method in class org.apache.hadoop.io.ByteWritable
-
- toString() -
Method in class org.apache.hadoop.io.compress.CompressionCodecFactory
- Print the extension map out as a string.
- toString(T) -
Method in class org.apache.hadoop.io.DefaultStringifier
-
- toString() -
Method in class org.apache.hadoop.io.DoubleWritable
-
- toString() -
Method in class org.apache.hadoop.io.FloatWritable
-
- toString() -
Method in class org.apache.hadoop.io.GenericWritable
-
- toString() -
Method in class org.apache.hadoop.io.IntWritable
-
- toString() -
Method in class org.apache.hadoop.io.LongWritable
-
- toString() -
Method in class org.apache.hadoop.io.MD5Hash
- Returns a string representation of this object.
- toString() -
Method in class org.apache.hadoop.io.NullWritable
-
- toString() -
Method in class org.apache.hadoop.io.ObjectWritable
-
- toString() -
Method in class org.apache.hadoop.io.SequenceFile.Metadata
-
- toString() -
Method in class org.apache.hadoop.io.SequenceFile.Reader
- Returns the name of the file.
- toString(T) -
Method in interface org.apache.hadoop.io.Stringifier
- Converts the object to a string representation
- toString() -
Method in class org.apache.hadoop.io.Text
- Convert text back to string
- toString() -
Method in class org.apache.hadoop.io.UTF8
- Deprecated. Convert to a String.
- toString() -
Method in exception org.apache.hadoop.io.VersionMismatchException
- Returns a string representation of this object.
- toString() -
Method in class org.apache.hadoop.io.VIntWritable
-
- toString() -
Method in class org.apache.hadoop.io.VLongWritable
-
- toString() -
Method in class org.apache.hadoop.mapred.Counters
- Return textual representation of the counter values.
- toString() -
Method in class org.apache.hadoop.mapred.FileSplit
-
- toString() -
Method in class org.apache.hadoop.mapred.ID
-
- toString() -
Method in class org.apache.hadoop.mapred.jobcontrol.Job
-
- toString() -
Method in class org.apache.hadoop.mapred.JobID
-
- toString() -
Method in class org.apache.hadoop.mapred.join.TupleWritable
- Convert Tuple to String as in the following.
- toString() -
Method in class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
- toString() -
Method in class org.apache.hadoop.mapred.MultiFileSplit
-
- toString() -
Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- toString() -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- toString() -
Method in class org.apache.hadoop.mapred.TaskID
-
- toString() -
Method in enum org.apache.hadoop.mapred.TaskLog.LogName
-
- toString() -
Method in class org.apache.hadoop.net.NetworkTopology
- convert a network tree to a string
- toString() -
Method in class org.apache.hadoop.net.NodeBase
- Return this node's string representation
- toString() -
Method in class org.apache.hadoop.record.Buffer
-
- toString(String) -
Method in class org.apache.hadoop.record.Buffer
- Convert the byte buffer to a string using a specific character encoding
- toString() -
Method in class org.apache.hadoop.record.compiler.CodeBuffer
-
- toString() -
Method in class org.apache.hadoop.record.compiler.generated.Token
- Returns the image.
- toString() -
Method in class org.apache.hadoop.record.Record
-
- toString() -
Method in class org.apache.hadoop.security.UnixUserGroupInformation
- Convert this object to a string
- toString() -
Method in class org.apache.hadoop.util.Progress
-
- toStrings() -
Method in class org.apache.hadoop.io.ArrayWritable
-
- totalLoad -
Variable in class org.apache.hadoop.dfs.FSNamesystemMetrics
-
- touch(File) -
Static method in class org.apache.hadoop.streaming.StreamUtil
-
- touchFile(String) -
Method in class org.apache.hadoop.contrib.index.lucene.FileSystemDirectory
-
- toUri() -
Method in class org.apache.hadoop.fs.Path
- Convert this to a URI.
- transactions -
Variable in class org.apache.hadoop.dfs.NameNodeMetrics
-
- transferToFully(FileChannel, long, int) -
Method in class org.apache.hadoop.net.SocketOutputStream
- Transfers data from FileChannel using
FileChannel.transferTo(long, long, WritableByteChannel)
.
- transform(InputStream, InputStream, Writer) -
Static method in class org.apache.hadoop.util.XMLUtils
- Transform input xml given a stylesheet.
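Applying a stylesheet to an XML input, as `transform` above does, is also available directly in the JDK via `javax.xml.transform`. A self-contained sketch (the stylesheet and `transform` helper here are illustrative, not Hadoop's `XMLUtils`):

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class XslDemo {
    // A tiny stylesheet that extracts the text of the <msg> root element.
    private static final String XSL =
        "<xsl:stylesheet version='1.0' "
      + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output method='text'/>"
      + "<xsl:template match='/'><xsl:value-of select='/msg'/></xsl:template>"
      + "</xsl:stylesheet>";

    // Transform the input XML with the stylesheet, writing to a Writer.
    public static String transform(String xml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(XSL)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(xml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(transform("<msg>hello</msg>")); // hello
    }
}
```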
- Trash - Class in org.apache.hadoop.fs
- Provides a trash feature.
- Trash(Configuration) -
Constructor for class org.apache.hadoop.fs.Trash
- Construct a trash can accessor.
- truncate() -
Method in class org.apache.hadoop.record.Buffer
- Change the capacity of the backing store to be the same as the current
count of the buffer.
- TRY_ONCE_DONT_FAIL -
Static variable in class org.apache.hadoop.io.retry.RetryPolicies
-
Try once, and fail silently for
void
methods, or by
re-throwing the exception for non-void
methods.
- TRY_ONCE_THEN_FAIL -
Static variable in class org.apache.hadoop.io.retry.RetryPolicies
-
Try once, and fail by re-throwing the exception.
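The two retry policies above capture the simplest contracts: try once and either swallow or re-throw the failure. The general shape of a bounded retry loop can be sketched as follows (this generic helper is hypothetical, not Hadoop's `RetryPolicies` API):

```java
import java.util.concurrent.Callable;

public class Retry {
    // Invoke the task up to maxAttempts times; once attempts are
    // exhausted, re-throw the last failure ("try N then fail").
    // With maxAttempts == 1 this behaves like TRY_ONCE_THEN_FAIL.
    public static <T> T call(Callable<T> task, int maxAttempts) throws Exception {
        Exception last = null;
        for (int i = 0; i < maxAttempts; i++) {
            try {
                return task.call();
            } catch (Exception e) {
                last = e;
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Fails twice, then succeeds on the third attempt.
        String r = call(() -> {
            if (++calls[0] < 3) throw new RuntimeException("transient");
            return "ok";
        }, 5);
        System.out.println(r + " after " + calls[0] + " attempts"); // ok after 3 attempts
    }
}
```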
- TupleWritable - Class in org.apache.hadoop.mapred.join
- Writable type storing multiple
Writable
s. - TupleWritable() -
Constructor for class org.apache.hadoop.mapred.join.TupleWritable
- Create an empty tuple with no allocated storage for writables.
- TupleWritable(Writable[]) -
Constructor for class org.apache.hadoop.mapred.join.TupleWritable
- Initialize tuple with storage; unknown whether any of them contain
"written" values.
- TwoDArrayWritable - Class in org.apache.hadoop.io
- A Writable for 2D arrays containing a matrix of instances of a class.
- TwoDArrayWritable(Class) -
Constructor for class org.apache.hadoop.io.TwoDArrayWritable
-
- TwoDArrayWritable(Class, Writable[][]) -
Constructor for class org.apache.hadoop.io.TwoDArrayWritable
-
- twoRotations -
Static variable in class org.apache.hadoop.examples.dancing.Pentomino
- Is the piece identical if rotated 180 degrees?
- Type() -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- TYPE_SEPARATOR -
Static variable in interface org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorDescriptor
-
- TypeID - Class in org.apache.hadoop.record.meta
- Represents typeID for basic types.
- TypeID.RIOType - Class in org.apache.hadoop.record.meta
- constants representing the IDL types we support
- TypeID.RIOType() -
Constructor for class org.apache.hadoop.record.meta.TypeID.RIOType
-
- typeVal -
Variable in class org.apache.hadoop.record.meta.TypeID
-
U
- ugi -
Variable in class org.apache.hadoop.dfs.HftpFileSystem
-
- UGI_PROPERTY_NAME -
Static variable in class org.apache.hadoop.security.UnixUserGroupInformation
-
- UMASK_LABEL -
Static variable in class org.apache.hadoop.fs.permission.FsPermission
- umask property label
- uncompressedValSerializer -
Variable in class org.apache.hadoop.io.SequenceFile.Writer
-
- underReplicatedBlocks -
Variable in class org.apache.hadoop.dfs.FSNamesystemMetrics
-
- unEscapeString(String) -
Static method in class org.apache.hadoop.util.StringUtils
- Unescape commas in the string using the default escape char
- unEscapeString(String, char, char) -
Static method in class org.apache.hadoop.util.StringUtils
- Unescape
charToEscape
in the string
with the escape char escapeChar
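Unescaping, as in the `unEscapeString` entries above, means dropping the escape character that precedes an escaped character. A simplified sketch (Hadoop's real implementation also handles escaped escape characters; this helper is illustrative only):

```java
public class Unescape {
    // Remove escapeChar when it directly precedes charToEscape;
    // any other character is copied through unchanged.
    public static String unEscape(String s, char escapeChar, char charToEscape) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c == escapeChar && i + 1 < s.length()
                    && s.charAt(i + 1) == charToEscape) {
                continue; // drop the escape; the escaped char is kept next pass
            }
            sb.append(c);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(unEscape("a\\,b,c", '\\', ',')); // a,b,c
    }
}
```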
- unfinalizeBlock(Block) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
- Unfinalizes the block previously opened for writing using writeToBlock.
- UNIQ_VALUE_COUNT -
Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- UniqValueCount - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements a value aggregator that dedupes a sequence of objects.
- UniqValueCount() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
- the default constructor
- UniqValueCount(long) -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.UniqValueCount
- constructor
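A value aggregator that dedupes a sequence, as `UniqValueCount` above does, can be modeled with a size-capped set. A sketch of the idea (this class is a stand-in with an assumed cap parameter, not the Hadoop implementation):

```java
import java.util.HashSet;
import java.util.Set;

public class UniqCount {
    private final Set<String> seen = new HashSet<>();
    private final long maxNumItems;

    public UniqCount(long maxNumItems) { this.maxNumItems = maxNumItems; }

    // Add a value: duplicates never grow the count, and the set
    // stops growing once the cap is reached.
    public void addNextValue(Object val) {
        if (seen.size() < maxNumItems) {
            seen.add(val.toString());
        }
    }

    public long getUniqCount() { return seen.size(); }

    public static void main(String[] args) {
        UniqCount u = new UniqCount(100);
        for (String s : new String[]{"a", "b", "a", "c", "b"}) u.addNextValue(s);
        System.out.println(u.getUniqCount()); // 3
    }
}
```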
- UnixUserGroupInformation - Class in org.apache.hadoop.security
- An implementation of UserGroupInformation in the Unix system
- UnixUserGroupInformation() -
Constructor for class org.apache.hadoop.security.UnixUserGroupInformation
- Default constructor
- UnixUserGroupInformation(String, String[]) -
Constructor for class org.apache.hadoop.security.UnixUserGroupInformation
- Constructor with parameters user name and its group names.
- UnixUserGroupInformation(String[]) -
Constructor for class org.apache.hadoop.security.UnixUserGroupInformation
- Constructor with parameter user/group names
- unJar(File, File) -
Static method in class org.apache.hadoop.util.RunJar
- Unpack a jar file into a directory.
- unregisterMBean(ObjectName) -
Static method in class org.apache.hadoop.metrics.util.MBeanUtil
-
- unregisterUpdater(Updater) -
Method in interface org.apache.hadoop.metrics.MetricsContext
- Removes a callback, if it exists.
- unregisterUpdater(Updater) -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Removes a callback, if it exists.
- UNRESOLVED -
Static variable in class org.apache.hadoop.net.NetworkTopology
-
- unTar(File, File) -
Static method in class org.apache.hadoop.fs.FileUtil
- Given a tar file as input, untar it into the untar directory
passed as the second parameter.
This utility will untar ".tar" files and ".tar.gz","tgz" files.
- unwrapRemoteException(Class<?>...) -
Method in exception org.apache.hadoop.ipc.RemoteException
- If this remote exception wraps up one of the lookupTypes
then return this exception.
- unwrapRemoteException() -
Method in exception org.apache.hadoop.ipc.RemoteException
- Instantiate and return the exception wrapped up by this remote exception.
- unZip(File, File) -
Static method in class org.apache.hadoop.fs.FileUtil
- Given a zip file as input, unzip it into the unzip directory
passed as the second parameter
- UPDATE -
Static variable in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp.Op
-
- update(byte[], int, int) -
Method in class org.apache.hadoop.dfs.DataChecksum
-
- update(int) -
Method in class org.apache.hadoop.dfs.DataChecksum
-
- update() -
Method in interface org.apache.hadoop.metrics.MetricsRecord
- Updates the table of buffered data which is to be sent periodically.
- update(MetricsRecordImpl) -
Method in class org.apache.hadoop.metrics.spi.AbstractMetricsContext
- Called by MetricsRecordImpl.update().
- update() -
Method in class org.apache.hadoop.metrics.spi.MetricsRecordImpl
- Updates the table of buffered data which is to be sent periodically.
- update(MetricsRecordImpl) -
Method in class org.apache.hadoop.metrics.spi.NullContext
- Do-nothing version of update
- update(MetricsRecordImpl) -
Method in class org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
- Do-nothing version of update
- updateBlock(Block, Block, boolean) -
Method in class org.apache.hadoop.dfs.DataNode
- Update the block to the new generation stamp and length.
- updateBlock(Block, Block) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
- Update the block to the new generation stamp and length.
- UpdateIndex - Class in org.apache.hadoop.contrib.index.main
- A distributed "index" is partitioned into "shards".
- UpdateIndex() -
Constructor for class org.apache.hadoop.contrib.index.main.UpdateIndex
-
- UpdateLineColumn(char) -
Method in class org.apache.hadoop.record.compiler.generated.SimpleCharStream
-
- Updater - Interface in org.apache.hadoop.metrics
- Call-back interface.
- Upgradeable - Interface in org.apache.hadoop.dfs
- Common interface for distributed upgrade objects.
- upgradeProgress(String[], int) -
Method in class org.apache.hadoop.dfs.DFSAdmin
- Command to request current distributed upgrade status,
a detailed status, or to force the upgrade to proceed.
- upgradeStatus -
Variable in class org.apache.hadoop.dfs.UpgradeStatusReport
-
- UpgradeStatusReport - Class in org.apache.hadoop.dfs
- Base upgrade status class.
- UpgradeStatusReport() -
Constructor for class org.apache.hadoop.dfs.UpgradeStatusReport
-
- UpgradeStatusReport(int, short, boolean) -
Constructor for class org.apache.hadoop.dfs.UpgradeStatusReport
-
- uriToString(URI[]) -
Static method in class org.apache.hadoop.util.StringUtils
-
- USAGE -
Static variable in class org.apache.hadoop.fs.shell.Count
-
- usage() -
Static method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- USAGES -
Static variable in class org.apache.hadoop.log.LogLevel
-
- USER_NAME_COMMAND -
Static variable in class org.apache.hadoop.util.Shell
- a Unix command to get the current user's name
- UserDefinedValueAggregatorDescriptor - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements a wrapper for a user defined value aggregator descriptor.
- UserDefinedValueAggregatorDescriptor(String, JobConf) -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.UserDefinedValueAggregatorDescriptor
-
- UserGroupInformation - Class in org.apache.hadoop.security
- A
Writable
abstract class for storing user and groups information. - UserGroupInformation() -
Constructor for class org.apache.hadoop.security.UserGroupInformation
-
- userJobConfProps_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- USTRING_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- UTF8 - Class in org.apache.hadoop.io
- Deprecated. replaced by Text
- UTF8() -
Constructor for class org.apache.hadoop.io.UTF8
- Deprecated.
- UTF8(String) -
Constructor for class org.apache.hadoop.io.UTF8
- Deprecated. Construct from a given string.
- UTF8(UTF8) -
Constructor for class org.apache.hadoop.io.UTF8
- Deprecated. Construct from a given string.
- UTF8.Comparator - Class in org.apache.hadoop.io
- Deprecated. A WritableComparator optimized for UTF8 keys.
- UTF8.Comparator() -
Constructor for class org.apache.hadoop.io.UTF8.Comparator
- Deprecated.
- UTF8ByteArrayUtils - Class in org.apache.hadoop.streaming
- General utilities for byte arrays containing UTF-8 encoded strings
- UTF8ByteArrayUtils() -
Constructor for class org.apache.hadoop.streaming.UTF8ByteArrayUtils
-
- utf8Length(String) -
Static method in class org.apache.hadoop.io.Text
- For the given string, returns the number of UTF-8 bytes
required to encode the string.
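The byte count that `Text.utf8Length(String)` computes can be reproduced with the standard library, which makes the variable-width encoding concrete: ASCII characters take 1 byte, while a character like 'é' takes 2.

```java
import java.nio.charset.StandardCharsets;

public class Utf8Len {
    // UTF-8 byte length of a string via the JDK encoder.
    public static int utf8Length(String s) {
        return s.getBytes(StandardCharsets.UTF_8).length;
    }

    public static void main(String[] args) {
        System.out.println(utf8Length("hello")); // 5
        System.out.println(utf8Length("héllo")); // 6 ('é' needs 2 bytes)
    }
}
```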
- Util - Class in org.apache.hadoop.metrics.spi
- Static utility methods
- Utils - Class in org.apache.hadoop.record.meta
- Various utility functions for the Hadoop record I/O platform.
- Utils - Class in org.apache.hadoop.record
- Various utility functions for the Hadoop record I/O runtime.
V
- validateInput(JobConf) -
Method in class org.apache.hadoop.mapred.FileInputFormat
- Deprecated.
- validateInput(JobConf) -
Method in interface org.apache.hadoop.mapred.InputFormat
- Deprecated. getSplits is called in the client and can perform any
necessary validation of the input
- validateInput(JobConf) -
Method in class org.apache.hadoop.mapred.join.CompositeInputFormat
- Verify that this composite has children and that all its children
can validate their input.
- validateInput(JobConf) -
Method in class org.apache.hadoop.streaming.StreamBaseRecordReader
- This implementation always returns true.
- validateUTF8(byte[]) -
Static method in class org.apache.hadoop.io.Text
- Check if a byte array contains valid utf-8
- validateUTF8(byte[], int, int) -
Static method in class org.apache.hadoop.io.Text
- Check to see if a byte array is valid utf-8
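Checking whether a byte array is valid UTF-8, as the `validateUTF8` methods above do, can be sketched with a strict JDK decoder (this helper is illustrative, not the `Text` implementation, which validates without decoding):

```java
import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.StandardCharsets;

public class Utf8Check {
    // A decoder from newDecoder() reports malformed input by throwing
    // instead of substituting replacement characters, so a failed
    // decode means the bytes are not valid UTF-8.
    public static boolean isValidUtf8(byte[] bytes) {
        try {
            StandardCharsets.UTF_8.newDecoder().decode(ByteBuffer.wrap(bytes));
            return true;
        } catch (CharacterCodingException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isValidUtf8(new byte[]{'h', 'i'}));    // true
        System.out.println(isValidUtf8(new byte[]{(byte) 0xC3})); // false (truncated sequence)
    }
}
```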
- VALUE_HISTOGRAM -
Static variable in class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- ValueAggregator - Interface in org.apache.hadoop.mapred.lib.aggregate
- This interface defines the minimal protocol for value aggregators.
- ValueAggregatorBaseDescriptor - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements the common functionalities of
the subclasses of ValueAggregatorDescriptor class.
- ValueAggregatorBaseDescriptor() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorBaseDescriptor
-
- ValueAggregatorCombiner<K1 extends WritableComparable,V1 extends Writable> - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements the generic combiner of Aggregate.
- ValueAggregatorCombiner() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorCombiner
-
- ValueAggregatorDescriptor - Interface in org.apache.hadoop.mapred.lib.aggregate
- This interface defines the contract a value aggregator descriptor must
support.
- ValueAggregatorJob - Class in org.apache.hadoop.mapred.lib.aggregate
- This is the main class for creating a map/reduce job using Aggregate
framework.
- ValueAggregatorJob() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJob
-
- ValueAggregatorJobBase<K1 extends WritableComparable,V1 extends Writable> - Class in org.apache.hadoop.mapred.lib.aggregate
- This abstract class implements some common functionalities of the
generic mapper, reducer and combiner classes of Aggregate.
- ValueAggregatorJobBase() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorJobBase
-
- ValueAggregatorMapper<K1 extends WritableComparable,V1 extends Writable> - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements the generic mapper of Aggregate.
- ValueAggregatorMapper() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorMapper
-
- ValueAggregatorReducer<K1 extends WritableComparable,V1 extends Writable> - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements the generic reducer of Aggregate.
- ValueAggregatorReducer() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueAggregatorReducer
-
- ValueHistogram - Class in org.apache.hadoop.mapred.lib.aggregate
- This class implements a value aggregator that computes the
histogram of a sequence of strings.
- ValueHistogram() -
Constructor for class org.apache.hadoop.mapred.lib.aggregate.ValueHistogram
-
- valueOf(String) -
Static method in enum org.apache.hadoop.dfs.DatanodeInfo.AdminStates
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.dfs.FSConstants.CheckpointStates
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.dfs.FSConstants.DatanodeReportType
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.dfs.FSConstants.NodeType
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.dfs.FSConstants.SafeModeAction
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.dfs.FSConstants.StartupOption
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.dfs.FSConstants.UpgradeAction
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.examples.dancing.Pentomino.SolutionCategory
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.fs.permission.FsAction
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in class org.apache.hadoop.fs.permission.FsPermission
- Create a FsPermission from a Unix symbolic permission string
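Parsing a Unix symbolic permission string, as `FsPermission.valueOf` above does, amounts to reading one permission bit per character. A simplified sketch that handles a plain 9-character `rwx` triple (the real parser also accepts the leading file-type character from `ls` output; this helper is hypothetical):

```java
public class SymbolicPerm {
    // Each of the 9 positions contributes one mode bit, most
    // significant first; '-' contributes 0.
    public static short toMode(String rwx) {
        if (rwx.length() != 9) throw new IllegalArgumentException(rwx);
        short mode = 0;
        for (int i = 0; i < 9; i++) {
            mode <<= 1;
            if (rwx.charAt(i) != '-') mode |= 1;
        }
        return mode;
    }

    public static void main(String[] args) {
        System.out.println(Integer.toOctalString(toMode("rwxr-xr-x"))); // 755
        System.out.println(Integer.toOctalString(toMode("rw-r--r--"))); // 644
    }
}
```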
- valueOf(String) -
Static method in enum org.apache.hadoop.io.compress.lzo.LzoCompressor.CompressionStrategy
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.io.compress.lzo.LzoDecompressor.CompressionStrategy
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionLevel
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionStrategy
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.io.SequenceFile.CompressionType
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.mapred.JobClient.TaskStatusFilter
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.mapred.JobHistory.Keys
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.mapred.JobHistory.RecordTypes
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.mapred.JobHistory.Values
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.mapred.JobPriority
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.mapred.JobTracker.State
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.mapred.join.Parser.TType
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.mapred.TaskCompletionEvent.Status
- Returns the enum constant of this type with the specified name.
- valueOf(String) -
Static method in enum org.apache.hadoop.mapred.TaskLog.LogName
- Returns the enum constant of this type with the specified name.
- values() -
Static method in enum org.apache.hadoop.dfs.DatanodeInfo.AdminStates
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.dfs.FSConstants.CheckpointStates
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.dfs.FSConstants.DatanodeReportType
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.dfs.FSConstants.NodeType
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.dfs.FSConstants.SafeModeAction
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.dfs.FSConstants.StartupOption
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.dfs.FSConstants.UpgradeAction
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.examples.dancing.Pentomino.SolutionCategory
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.fs.permission.FsAction
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.io.compress.lzo.LzoCompressor.CompressionStrategy
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.io.compress.lzo.LzoDecompressor.CompressionStrategy
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionLevel
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionStrategy
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Method in class org.apache.hadoop.io.MapWritable
-
- values() -
Static method in enum org.apache.hadoop.io.SequenceFile.CompressionType
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Method in class org.apache.hadoop.io.SortedMapWritable
-
- values() -
Static method in enum org.apache.hadoop.mapred.JobClient.TaskStatusFilter
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.mapred.JobHistory.Keys
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.mapred.JobHistory.RecordTypes
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.mapred.JobHistory.Values
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.mapred.JobPriority
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.mapred.JobTracker.State
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.mapred.join.Parser.TType
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.mapred.TaskCompletionEvent.Status
- Returns an array containing the constants of this enum type, in
the order they're declared.
- values() -
Static method in enum org.apache.hadoop.mapred.TaskLog.LogName
- Returns an array containing the constants of this enum type, in
the order they're declared.
- Vector() -
Method in class org.apache.hadoop.record.compiler.generated.Rcc
-
- VECTOR -
Static variable in class org.apache.hadoop.record.meta.TypeID.RIOType
-
- VECTOR_TKN -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- VectorTypeID - Class in org.apache.hadoop.record.meta
- Represents typeID for vector.
- VectorTypeID(TypeID) -
Constructor for class org.apache.hadoop.record.meta.VectorTypeID
-
- verbose -
Variable in class org.apache.hadoop.streaming.JarBuilder
-
- verbose_ -
Variable in class org.apache.hadoop.streaming.StreamJob
-
- verifyRequest(DatanodeRegistration) -
Method in class org.apache.hadoop.dfs.NameNode
- Verify request.
- verifyVersion(int) -
Method in class org.apache.hadoop.dfs.NameNode
- Verify version.
- version -
Variable in class org.apache.hadoop.dfs.UpgradeStatusReport
-
- VERSION -
Static variable in class org.apache.hadoop.fs.HarFileSystem
-
- VersionedProtocol - Interface in org.apache.hadoop.ipc
- Superclass of all protocols that use Hadoop RPC.
- VersionedWritable - Class in org.apache.hadoop.io
- A base class for Writables that provides version checking.
- VersionedWritable() -
Constructor for class org.apache.hadoop.io.VersionedWritable
-
- VersionInfo - Class in org.apache.hadoop.util
- This class finds the package info for Hadoop and the HadoopVersionAnnotation
information.
- VersionInfo() -
Constructor for class org.apache.hadoop.util.VersionInfo
-
- VersionMismatchException - Exception in org.apache.hadoop.fs.s3
- Thrown when Hadoop cannot read the version of the data stored
in
S3FileSystem
. - VersionMismatchException(String, String) -
Constructor for exception org.apache.hadoop.fs.s3.VersionMismatchException
-
- VersionMismatchException - Exception in org.apache.hadoop.io
- Thrown by
VersionedWritable.readFields(DataInput)
when the
version of an object being read does not match the current implementation
version as returned by VersionedWritable.getVersion()
. - VersionMismatchException(byte, byte) -
Constructor for exception org.apache.hadoop.io.VersionMismatchException
-
- versionRequest() -
Method in class org.apache.hadoop.dfs.NameNode
-
- VIntWritable - Class in org.apache.hadoop.io
- A WritableComparable for integer values stored in variable-length format.
- VIntWritable() -
Constructor for class org.apache.hadoop.io.VIntWritable
-
- VIntWritable(int) -
Constructor for class org.apache.hadoop.io.VIntWritable
-
- VLongWritable - Class in org.apache.hadoop.io
- A WritableComparable for longs in a variable-length format.
- VLongWritable() -
Constructor for class org.apache.hadoop.io.VLongWritable
-
- VLongWritable(long) -
Constructor for class org.apache.hadoop.io.VLongWritable
-
W
- waitForCompletion() -
Method in interface org.apache.hadoop.mapred.RunningJob
- Blocks until the job is complete.
- waitForProxy(Class, long, InetSocketAddress, Configuration) -
Static method in class org.apache.hadoop.ipc.RPC
-
- waitForReadable() -
Method in class org.apache.hadoop.net.SocketInputStream
- Waits for the underlying channel to be ready for reading.
- waitForWritable() -
Method in class org.apache.hadoop.net.SocketOutputStream
- Waits for the underlying channel to be ready for writing.
- WAITING -
Static variable in class org.apache.hadoop.mapred.jobcontrol.Job
-
- WEB_UGI_PROPERTY_NAME -
Static variable in class org.apache.hadoop.dfs.JspHelper
-
- webUGI -
Static variable in class org.apache.hadoop.dfs.JspHelper
-
- width -
Variable in class org.apache.hadoop.examples.dancing.Pentomino
-
- width -
Static variable in class org.apache.hadoop.mapred.StatusHttpServer.TaskGraphServlet
- width of the graph w/o margins
- windowBits() -
Method in enum org.apache.hadoop.io.compress.zlib.ZlibCompressor.CompressionHeader
-
- windowBits() -
Method in enum org.apache.hadoop.io.compress.zlib.ZlibDecompressor.CompressionHeader
-
- WINDOWS -
Static variable in class org.apache.hadoop.util.Shell
- Set to true on Windows platforms
- WithinMultiLineComment -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- WithinOneLineComment -
Static variable in interface org.apache.hadoop.record.compiler.generated.RccConstants
-
- WordCount - Class in org.apache.hadoop.examples
- This is an example Hadoop Map/Reduce application.
- WordCount() -
Constructor for class org.apache.hadoop.examples.WordCount
-
- WordCount.MapClass - Class in org.apache.hadoop.examples
- Counts the words in each line.
- WordCount.MapClass() -
Constructor for class org.apache.hadoop.examples.WordCount.MapClass
-
- WordCount.Reduce - Class in org.apache.hadoop.examples
- A reducer class that just emits the sum of the input values.
- WordCount.Reduce() -
Constructor for class org.apache.hadoop.examples.WordCount.Reduce
-
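The WordCount entries above describe the classic example: WordCount.MapClass emits a (word, 1) pair per token and WordCount.Reduce sums the values per word. As a sketch of the underlying logic, the pure-JDK class below (a hypothetical name, not part of Hadoop) collapses both phases into one in-memory pass; the real job distributes map and reduce steps across tasks.

```java
import java.util.HashMap;
import java.util.Map;

public class WordCountSketch {
    /** Counts word occurrences in the given text, mimicking map-then-reduce. */
    public static Map<String, Integer> count(String text) {
        Map<String, Integer> sums = new HashMap<>();
        // "map" phase: tokenize the input into words
        for (String word : text.split("\\s+")) {
            if (word.isEmpty()) continue;
            // "reduce" phase: sum the 1s emitted for each word
            sums.merge(word, 1, Integer::sum);
        }
        return sums;
    }

    public static void main(String[] args) {
        System.out.println(count("the quick fox jumps over the lazy dog the"));
    }
}
```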
- WrappedRecordReader<K extends WritableComparable,U extends Writable> - Class in org.apache.hadoop.mapred.join
- Proxy class for a RecordReader participating in the join framework.
- Writable - Interface in org.apache.hadoop.io
- A serializable object which implements a simple, efficient, serialization
protocol, based on
DataInput
and DataOutput
.
- WritableComparable<T> - Interface in org.apache.hadoop.io
- A
Writable
which is also Comparable
.
- WritableComparator - Class in org.apache.hadoop.io
- A Comparator for
WritableComparable
s.
- WritableComparator(Class) -
Constructor for class org.apache.hadoop.io.WritableComparator
- Construct for a
WritableComparable
implementation.
- WritableComparator(Class, boolean) -
Constructor for class org.apache.hadoop.io.WritableComparator
-
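The Writable contract described above is small: write(DataOutput) serializes the fields, and readFields(DataInput) restores them in the same order. The sketch below uses only the JDK's DataInput/DataOutput and redefines a local stand-in for the interface so it runs without Hadoop on the classpath; the class and field names are illustrative, not from the library.

```java
import java.io.*;

public class WritableSketch {
    // Local stand-in for the org.apache.hadoop.io.Writable interface.
    interface Writable {
        void write(DataOutput out) throws IOException;
        void readFields(DataInput in) throws IOException;
    }

    public static class PairWritable implements Writable {
        public long offset;
        public String name;

        public void write(DataOutput out) throws IOException {
            out.writeLong(offset);      // fields are written in a fixed order...
            out.writeUTF(name);
        }

        public void readFields(DataInput in) throws IOException {
            offset = in.readLong();     // ...and read back in the same order
            name = in.readUTF();
        }
    }

    /** Round-trips a PairWritable through an in-memory byte buffer. */
    public static PairWritable roundTrip(PairWritable src) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        src.write(new DataOutputStream(bytes));
        PairWritable dst = new PairWritable();
        dst.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));
        return dst;
    }

    public static void main(String[] args) throws IOException {
        PairWritable p = new PairWritable();
        p.offset = 42L;
        p.name = "block-0001";
        PairWritable copy = roundTrip(p);
        System.out.println(copy.offset + " " + copy.name); // prints "42 block-0001"
    }
}
```

WritableComparator can then order such records by deserializing and comparing them, or by comparing the serialized bytes directly.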
- WritableFactories - Class in org.apache.hadoop.io
- Factories for non-public writables.
- WritableFactory - Interface in org.apache.hadoop.io
- A factory for a class of Writable.
- WritableName - Class in org.apache.hadoop.io
- Utility to permit renaming of Writable implementation classes without
invalidating files that contain their class name.
- WritableSerialization - Class in org.apache.hadoop.io.serializer
- A
Serialization
for Writable
s that delegates to
Writable.write(java.io.DataOutput)
and
Writable.readFields(java.io.DataInput)
.
- WritableSerialization() -
Constructor for class org.apache.hadoop.io.serializer.WritableSerialization
-
- WritableUtils - Class in org.apache.hadoop.io
-
- WritableUtils() -
Constructor for class org.apache.hadoop.io.WritableUtils
-
- write(OutputStream) -
Method in class org.apache.hadoop.conf.Configuration
- Write out the non-default properties in this configuration to the given
OutputStream
.
- write(DataOutput) -
Method in class org.apache.hadoop.contrib.index.example.LineDocTextAndOp
-
- write(DataOutput) -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentAndOp
-
- write(DataOutput) -
Method in class org.apache.hadoop.contrib.index.mapred.DocumentID
-
- write(DataOutput) -
Method in class org.apache.hadoop.contrib.index.mapred.IntermediateForm
-
- write(DataOutput) -
Method in class org.apache.hadoop.contrib.index.mapred.Shard
-
- write(DataOutput) -
Method in class org.apache.hadoop.dfs.DatanodeID
- Serialize the fields of this object to
out
.
- write(DataOutput) -
Method in class org.apache.hadoop.dfs.DatanodeInfo
- Serialize the fields of this object to
out
.
- write(DataOutput) -
Method in class org.apache.hadoop.dfs.LocatedBlocks
-
- write(DataOutput) -
Method in class org.apache.hadoop.dfs.UpgradeStatusReport
-
- write(DataOutput) -
Method in class org.apache.hadoop.examples.MultiFileWordCount.WordOffset
-
- write(DataOutput) -
Method in class org.apache.hadoop.fs.BlockLocation
- Implements write of Writable.
- write(DataOutput) -
Method in class org.apache.hadoop.fs.ContentSummary
- Serialize the fields of this object to
out
.
- write(DataOutput) -
Method in class org.apache.hadoop.fs.FileStatus
-
- write(int) -
Method in class org.apache.hadoop.fs.FSOutputSummer
- Write one byte
- write(byte[], int, int) -
Method in class org.apache.hadoop.fs.FSOutputSummer
- Writes
len
bytes from the specified byte array
starting at offset off
and generates a checksum for
each data chunk.
- write(DataOutput) -
Method in class org.apache.hadoop.fs.permission.FsPermission
- Serialize the fields of this object to
out
.
- write(DataOutput) -
Method in class org.apache.hadoop.fs.permission.PermissionStatus
- Serialize the fields of this object to
out
.
- write(DataOutput, String, String, FsPermission) -
Static method in class org.apache.hadoop.fs.permission.PermissionStatus
- Serialize a
PermissionStatus
from its base components.
- write(DataOutput) -
Method in class org.apache.hadoop.io.AbstractMapWritable
- Serialize the fields of this object to
out
.
- write(DataOutput) -
Method in class org.apache.hadoop.io.ArrayWritable
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.BooleanWritable
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.BytesWritable
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.ByteWritable
-
- write(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.CompressionOutputStream
- Write compressed bytes to the stream.
- write(int) -
Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
-
- write(byte[], int, int) -
Method in class org.apache.hadoop.io.compress.GzipCodec.GzipOutputStream
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.CompressedWritable
-
- write(DataInput, int) -
Method in class org.apache.hadoop.io.DataOutputBuffer
- Writes bytes from a DataInput directly into the buffer.
- write(DataOutput) -
Method in class org.apache.hadoop.io.DoubleWritable
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.FloatWritable
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.GenericWritable
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.IntWritable
-
- write(byte[], int, int) -
Method in class org.apache.hadoop.io.IOUtils.NullOutputStream
-
- write(int) -
Method in class org.apache.hadoop.io.IOUtils.NullOutputStream
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.LongWritable
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.MapWritable
- Serialize the fields of this object to
out
.
- write(DataOutput) -
Method in class org.apache.hadoop.io.MD5Hash
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.NullWritable
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.ObjectWritable
-
- write(InputStream, int) -
Method in class org.apache.hadoop.io.OutputBuffer
- Writes bytes from an InputStream directly into the buffer.
- write(DataOutput) -
Method in class org.apache.hadoop.io.SequenceFile.Metadata
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.SortedMapWritable
- Serialize the fields of this object to
out
.
- write(DataOutput) -
Method in class org.apache.hadoop.io.Text
- Serialize: write this object to out; the length uses zero-compressed encoding.
- write(DataOutput) -
Method in class org.apache.hadoop.io.TwoDArrayWritable
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.UTF8
- Deprecated.
- write(DataOutput) -
Method in class org.apache.hadoop.io.VersionedWritable
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.VIntWritable
-
- write(DataOutput) -
Method in class org.apache.hadoop.io.VLongWritable
-
- write(DataOutput) -
Method in interface org.apache.hadoop.io.Writable
- Serialize the fields of this object to
out
.
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.ClusterStatus
-
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.Counters.Counter
- Write the binary representation of the counter
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.Counters.Group
-
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.Counters
- Write the set of groups.
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.FileSplit
-
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.ID
-
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.JobID
-
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.JobProfile
-
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.JobStatus
-
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.join.CompositeInputSplit
- Write splits in the following format.
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.join.TupleWritable
- Writes each Writable to
out
.
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.MultiFileSplit
-
- write(K, V) -
Method in interface org.apache.hadoop.mapred.RecordWriter
- Writes a key/value pair.
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.TaskAttemptID
-
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.TaskCompletionEvent
-
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.TaskID
-
- write(DataOutput) -
Method in class org.apache.hadoop.mapred.TaskReport
-
- write(K, V) -
Method in class org.apache.hadoop.mapred.TextOutputFormat.LineRecordWriter
-
- write(int) -
Method in class org.apache.hadoop.net.SocketOutputStream
-
- write(byte[], int, int) -
Method in class org.apache.hadoop.net.SocketOutputStream
-
- write(ByteBuffer) -
Method in class org.apache.hadoop.net.SocketOutputStream
-
- write(DataOutput) -
Method in class org.apache.hadoop.record.Record
-
- write(DataOutput) -
Method in class org.apache.hadoop.security.UnixUserGroupInformation
- Serialize this object: first write a string marking that this is a UGI in the string format, then write this object's serialized form to the given data output.
- WRITE_TIMEOUT -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- WRITE_TIMEOUT_EXTENSION -
Static variable in interface org.apache.hadoop.dfs.FSConstants
-
- writeBlockOp -
Variable in class org.apache.hadoop.dfs.datanode.metrics.DataNodeMetrics
-
- writeBool(boolean, String) -
Method in class org.apache.hadoop.record.BinaryRecordOutput
-
- writeBool(boolean, String) -
Method in class org.apache.hadoop.record.CsvRecordOutput
-
- writeBool(boolean, String) -
Method in interface org.apache.hadoop.record.RecordOutput
- Write a boolean to serialized record.
- writeBool(boolean, String) -
Method in class org.apache.hadoop.record.XmlRecordOutput
-
- writeBuffer(Buffer, String) -
Method in class org.apache.hadoop.record.BinaryRecordOutput
-
- writeBuffer(Buffer, String) -
Method in class org.apache.hadoop.record.CsvRecordOutput
-
- writeBuffer(Buffer, String) -
Method in interface org.apache.hadoop.record.RecordOutput
- Write a buffer to serialized record.
- writeBuffer(Buffer, String) -
Method in class org.apache.hadoop.record.XmlRecordOutput
-
- writeByte(byte, String) -
Method in class org.apache.hadoop.record.BinaryRecordOutput
-
- writeByte(byte, String) -
Method in class org.apache.hadoop.record.CsvRecordOutput
-
- writeByte(byte, String) -
Method in interface org.apache.hadoop.record.RecordOutput
- Write a byte to serialized record.
- writeByte(byte, String) -
Method in class org.apache.hadoop.record.XmlRecordOutput
-
- writeChunk(byte[], int, int, byte[]) -
Method in class org.apache.hadoop.fs.FSOutputSummer
-
- writeCompressed(DataOutput) -
Method in class org.apache.hadoop.io.CompressedWritable
- Subclasses implement this instead of
CompressedWritable.write(DataOutput)
.
- writeCompressedByteArray(DataOutput, byte[]) -
Static method in class org.apache.hadoop.io.WritableUtils
-
- writeCompressedBytes(DataOutputStream) -
Method in interface org.apache.hadoop.io.SequenceFile.ValueBytes
- Write compressed bytes to outStream.
- writeCompressedBytes(DataOutputStream) -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- writeCompressedString(DataOutput, String) -
Static method in class org.apache.hadoop.io.WritableUtils
-
- writeCompressedStringArray(DataOutput, String[]) -
Static method in class org.apache.hadoop.io.WritableUtils
-
- writeDouble(double, String) -
Method in class org.apache.hadoop.record.BinaryRecordOutput
-
- writeDouble(double, String) -
Method in class org.apache.hadoop.record.CsvRecordOutput
-
- writeDouble(double, String) -
Method in interface org.apache.hadoop.record.RecordOutput
- Write a double precision floating point number to serialized record.
- writeDouble(double, String) -
Method in class org.apache.hadoop.record.XmlRecordOutput
-
- writeEnum(DataOutput, Enum) -
Static method in class org.apache.hadoop.io.WritableUtils
- Writes the String value of the enum to DataOutput.
- writeFile(SequenceFile.Sorter.RawKeyValueIterator, SequenceFile.Writer) -
Method in class org.apache.hadoop.io.SequenceFile.Sorter
- Writes records from RawKeyValueIterator into a file represented by the
passed writer
- writeFloat(float, String) -
Method in class org.apache.hadoop.record.BinaryRecordOutput
-
- writeFloat(float, String) -
Method in class org.apache.hadoop.record.CsvRecordOutput
-
- writeFloat(float, String) -
Method in interface org.apache.hadoop.record.RecordOutput
- Write a single-precision float to serialized record.
- writeFloat(float, String) -
Method in class org.apache.hadoop.record.XmlRecordOutput
-
- writeHeader(DataOutputStream) -
Method in class org.apache.hadoop.dfs.DataChecksum
- Writes the checksum header to the output stream out.
- writeInt(int, String) -
Method in class org.apache.hadoop.record.BinaryRecordOutput
-
- writeInt(int, String) -
Method in class org.apache.hadoop.record.CsvRecordOutput
-
- writeInt(int, String) -
Method in interface org.apache.hadoop.record.RecordOutput
- Write an integer to serialized record.
- writeInt(int, String) -
Method in class org.apache.hadoop.record.XmlRecordOutput
-
- writeLong(long, String) -
Method in class org.apache.hadoop.record.BinaryRecordOutput
-
- writeLong(long, String) -
Method in class org.apache.hadoop.record.CsvRecordOutput
-
- writeLong(long, String) -
Method in interface org.apache.hadoop.record.RecordOutput
- Write a long integer to serialized record.
- writeLong(long, String) -
Method in class org.apache.hadoop.record.XmlRecordOutput
-
- writeObject(DataOutput, Object, Class, Configuration) -
Static method in class org.apache.hadoop.io.ObjectWritable
- Write a
Writable
, String
, primitive type, or an array of
the preceding.
- writeRAMFiles(DataOutput, RAMDirectory, String[]) -
Static method in class org.apache.hadoop.contrib.index.lucene.RAMDirectoryUtil
- Write a number of files from a ram directory to a data output.
- writesFromLocalClient -
Variable in class org.apache.hadoop.dfs.datanode.metrics.DataNodeMetrics
-
- writesFromRemoteClient -
Variable in class org.apache.hadoop.dfs.datanode.metrics.DataNodeMetrics
-
- writeString(DataOutput, String) -
Static method in class org.apache.hadoop.io.Text
- Write a UTF-8 encoded string to out.
- writeString(DataOutput, String) -
Static method in class org.apache.hadoop.io.UTF8
- Deprecated. Write a UTF-8 encoded string.
- writeString(DataOutput, String) -
Static method in class org.apache.hadoop.io.WritableUtils
-
- writeString(String, String) -
Method in class org.apache.hadoop.record.BinaryRecordOutput
-
- writeString(String, String) -
Method in class org.apache.hadoop.record.CsvRecordOutput
-
- writeString(String, String) -
Method in interface org.apache.hadoop.record.RecordOutput
- Write a unicode string to serialized record.
- writeString(String, String) -
Method in class org.apache.hadoop.record.XmlRecordOutput
-
- writeStringArray(DataOutput, String[]) -
Static method in class org.apache.hadoop.io.WritableUtils
-
- writeToBlock(Block, boolean) -
Method in interface org.apache.hadoop.dfs.FSDatasetInterface
- Creates the block and returns output streams to write data and CRC.
- writeUncompressedBytes(DataOutputStream) -
Method in interface org.apache.hadoop.io.SequenceFile.ValueBytes
- Writes the uncompressed bytes to the outStream.
- writeUncompressedBytes(DataOutputStream) -
Method in class org.apache.hadoop.mapred.SequenceFileAsBinaryOutputFormat.WritableValueBytes
-
- writeValue(DataOutputStream, boolean) -
Method in class org.apache.hadoop.dfs.DataChecksum
- Writes the current checksum to the stream.
- writeValue(byte[], int, boolean) -
Method in class org.apache.hadoop.dfs.DataChecksum
- Writes the current checksum to a buffer.
- writeVInt(DataOutput, int) -
Static method in class org.apache.hadoop.io.WritableUtils
- Serializes an integer to a binary stream with zero-compressed encoding.
- writeVInt(DataOutput, int) -
Static method in class org.apache.hadoop.record.Utils
- Serializes an int to a binary stream with zero-compressed encoding.
- writeVLong(DataOutput, long) -
Static method in class org.apache.hadoop.io.WritableUtils
- Serializes a long to a binary stream with zero-compressed encoding.
- writeVLong(DataOutput, long) -
Static method in class org.apache.hadoop.record.Utils
- Serializes a long to a binary stream with zero-compressed encoding.
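The writeVInt/writeVLong entries above share one idea: small magnitudes should occupy few bytes because leading zero bytes are not written. The sketch below illustrates that principle with a base-128 varint plus zigzag sign folding; Hadoop's actual WritableUtils layout differs (it uses a sign-and-length prefix byte), and the class name here is illustrative.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

public class VarLongSketch {
    public static void writeVLong(DataOutput out, long v) throws IOException {
        long z = (v << 1) ^ (v >> 63);                    // zigzag: fold sign into the low bit
        while ((z & ~0x7FL) != 0) {
            out.writeByte((int) ((z & 0x7F) | 0x80));     // 7 payload bits + continuation bit
            z >>>= 7;
        }
        out.writeByte((int) z);                           // final byte has no continuation bit
    }

    public static long readVLong(DataInput in) throws IOException {
        long z = 0;
        int shift = 0, b;
        do {
            b = in.readByte() & 0xFF;
            z |= (long) (b & 0x7F) << shift;              // reassemble 7 bits at a time
            shift += 7;
        } while ((b & 0x80) != 0);
        return (z >>> 1) ^ -(z & 1);                      // undo the zigzag folding
    }

    /** Encodes a single long and returns its variable-length byte form. */
    public static byte[] encode(long v) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        writeVLong(new DataOutputStream(bytes), v);
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(encode(1L).length);            // prints "1"
        System.out.println(encode(1_000_000L).length);    // prints "3"
    }
}
```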
X
- xceiverCount -
Variable in class org.apache.hadoop.dfs.DatanodeInfo
-
- xmargin -
Static variable in class org.apache.hadoop.mapred.StatusHttpServer.TaskGraphServlet
- margin space on x axis
- XmlRecordInput - Class in org.apache.hadoop.record
- XML Deserializer.
- XmlRecordInput(InputStream) -
Constructor for class org.apache.hadoop.record.XmlRecordInput
- Creates a new instance of XmlRecordInput
- XmlRecordOutput - Class in org.apache.hadoop.record
- XML Serializer.
- XmlRecordOutput(OutputStream) -
Constructor for class org.apache.hadoop.record.XmlRecordOutput
- Creates a new instance of XmlRecordOutput
- XMLUtils - Class in org.apache.hadoop.util
- General XML utilities.
- XMLUtils() -
Constructor for class org.apache.hadoop.util.XMLUtils
-
Y
- ymargin -
Static variable in class org.apache.hadoop.mapred.StatusHttpServer.TaskGraphServlet
- margin space on y axis
Z
- ZlibCompressor - Class in org.apache.hadoop.io.compress.zlib
- A
Compressor
based on the popular
zlib compression algorithm.
- ZlibCompressor(ZlibCompressor.CompressionLevel, ZlibCompressor.CompressionStrategy, ZlibCompressor.CompressionHeader, int) -
Constructor for class org.apache.hadoop.io.compress.zlib.ZlibCompressor
- Creates a new compressor using the specified compression level.
- ZlibCompressor() -
Constructor for class org.apache.hadoop.io.compress.zlib.ZlibCompressor
- Creates a new compressor with the default compression level.
- ZlibCompressor.CompressionHeader - Enum in org.apache.hadoop.io.compress.zlib
- The type of header for compressed data.
- ZlibCompressor.CompressionLevel - Enum in org.apache.hadoop.io.compress.zlib
- The compression level for zlib library.
- ZlibCompressor.CompressionStrategy - Enum in org.apache.hadoop.io.compress.zlib
- The compression strategy for the zlib library.
- ZlibDecompressor - Class in org.apache.hadoop.io.compress.zlib
- A
Decompressor
based on the popular
zlib compression algorithm.
- ZlibDecompressor(ZlibDecompressor.CompressionHeader, int) -
Constructor for class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
- Creates a new decompressor.
- ZlibDecompressor() -
Constructor for class org.apache.hadoop.io.compress.zlib.ZlibDecompressor
-
- ZlibDecompressor.CompressionHeader - Enum in org.apache.hadoop.io.compress.zlib
- The headers to detect from compressed data.
- ZlibFactory - Class in org.apache.hadoop.io.compress.zlib
- A collection of factories to create the right
zlib/gzip compressor/decompressor instances.
- ZlibFactory() -
Constructor for class org.apache.hadoop.io.compress.zlib.ZlibFactory
-
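The ZlibCompressor/ZlibDecompressor classes above wrap the native zlib library. The JDK's java.util.zip.Deflater/Inflater expose the same algorithm, so a compress/decompress round trip can be sketched without Hadoop on the classpath; the class name and buffer-sizing choices here are illustrative, not from the library.

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;
import static java.nio.charset.StandardCharsets.UTF_8;

public class ZlibSketch {
    /** Compresses the input with zlib at the highest compression level. */
    public static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        byte[] buf = new byte[input.length + 64];   // large enough for one deflate call on small inputs
        int n = deflater.deflate(buf);
        deflater.end();
        return Arrays.copyOf(buf, n);
    }

    /** Decompresses zlib data; the caller supplies the original length. */
    public static byte[] decompress(byte[] compressed, int originalLength) {
        try {
            Inflater inflater = new Inflater();
            inflater.setInput(compressed);
            byte[] out = new byte[originalLength];
            int n = inflater.inflate(out);
            inflater.end();
            return Arrays.copyOf(out, n);
        } catch (DataFormatException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        byte[] data = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa".getBytes(UTF_8);
        byte[] packed = compress(data);
        System.out.println(packed.length + " vs " + data.length);       // repetitive input shrinks
        System.out.println(Arrays.equals(decompress(packed, data.length), data)); // prints "true"
    }
}
```

ZlibFactory plays the role of choosing between such native and built-in implementations at runtime.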
Copyright © 2008 The Apache Software Foundation