java.lang.Object
  org.apache.hadoop.dfs.NameNode

public class NameNode
NameNode serves as both the directory namespace manager and the "inode table" for the Hadoop DFS. There is a single NameNode running in any DFS deployment (well, except when there is a second backup/failover NameNode).

The NameNode controls two critical tables:

1) filename -> blocksequence (the namespace)
2) block -> machinelist (the "inodes")

The first table is stored on disk and is very precious. The second table is rebuilt every time the NameNode comes up.

"NameNode" refers to both this class and the NameNode server. The FSNamesystem class actually performs most of the filesystem management; the majority of the NameNode class itself is concerned with exposing the IPC interface to the outside world, plus some configuration management.

NameNode implements the ClientProtocol interface, which allows clients to ask for DFS services. ClientProtocol is not designed for direct use by authors of DFS client code; end users should instead use the org.apache.hadoop.fs.FileSystem class.

NameNode also implements the DatanodeProtocol interface, used by DataNode programs that actually store DFS data blocks. These methods are invoked repeatedly and automatically by all the DataNodes in a DFS deployment.

NameNode also implements the NamenodeProtocol interface, used by secondary NameNodes or rebalancing processes to obtain parts of the NameNode's state, for example a partial blocksMap.
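As a rough illustration of the two tables above, here is a self-contained sketch using plain Java collections. This is not Hadoop code; the file name, block IDs, and datanode addresses are hypothetical.

```java
import java.util.*;

// Toy model of the NameNode's two critical tables (illustrative only).
public class NameNodeTables {
    // Table 1: filename -> block sequence (the namespace; persisted on disk).
    static final Map<String, List<Long>> namespace = new HashMap<>();
    // Table 2: block -> machine list (rebuilt from block reports at startup).
    static final Map<Long, List<String>> blockLocations = new HashMap<>();

    // Resolve a file to the datanodes holding each of its blocks, in block
    // order -- the two-step lookup a DFS client read performs.
    static List<List<String>> locate(String file) {
        List<List<String>> result = new ArrayList<>();
        for (long blockId : namespace.get(file)) {
            result.add(blockLocations.get(blockId));
        }
        return result;
    }

    public static void main(String[] args) {
        namespace.put("/user/data/part-0", Arrays.asList(1001L, 1002L));
        blockLocations.put(1001L, Arrays.asList("dn1:50010", "dn2:50010"));
        blockLocations.put(1002L, Arrays.asList("dn2:50010", "dn3:50010"));
        System.out.println(locate("/user/data/part-0"));
    }
}
```

Note how only table 1 would need to be durable: table 2 can always be reconstructed from the datanodes' block reports, which is exactly why the NameNode rebuilds it at startup.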
Nested Class Summary

Nested classes/interfaces inherited from interface org.apache.hadoop.dfs.FSConstants:
FSConstants.CheckpointStates, FSConstants.DatanodeReportType, FSConstants.NodeType, FSConstants.SafeModeAction, FSConstants.StartupOption, FSConstants.UpgradeAction
Field Summary

static int DEFAULT_PORT
static int DISK_ERROR
static int DNA_BLOCKREPORT
static int DNA_FINALIZE
static int DNA_INVALIDATE
static int DNA_RECOVERBLOCK
static int DNA_REGISTER
static int DNA_SHUTDOWN
static int DNA_TRANSFER
static int DNA_UNKNOWN
    The DNA_* constants determine the action a datanode should perform when it receives a datanode command.
static int INVALID_BLOCK
static org.apache.commons.logging.Log LOG
static int NOTIFY
static org.apache.commons.logging.Log stateChangeLog
static long versionID
    Compared to the previous version the following changes have been introduced (only the latest change is reflected).
static long versionID
    16: Block parameter added to nextGenerationStamp().
static long versionID
    1: changed the serialization in DatanodeInfo.
    (Each of the three implemented protocol interfaces declares its own versionID constant, hence the three entries.)
Constructor Summary

NameNode(Configuration conf)
    Start NameNode.
NameNode(String bindAddress, Configuration conf)
    Create a NameNode at the specified location and start it.
Method Summary

void abandonBlock(org.apache.hadoop.dfs.Block b, String src, String holder)
    The client needs to give up on the block.
org.apache.hadoop.dfs.LocatedBlock addBlock(String src, String clientName)
    A client that wants to write an additional block to the indicated filename (which must currently be open for writing) should call addBlock().
void blockReceived(org.apache.hadoop.dfs.DatanodeRegistration nodeReg, org.apache.hadoop.dfs.Block[] blocks, String[] delHints)
    blockReceived() allows the DataNode to tell the NameNode about recently-received block data, with a hint for the preferred replica to delete when there are excess blocks.
org.apache.hadoop.dfs.DatanodeCommand blockReport(org.apache.hadoop.dfs.DatanodeRegistration nodeReg, long[] blocks)
    blockReport() tells the NameNode about all the locally stored blocks.
void clearQuota(String path)
    Remove the quota for a directory.
void commitBlockSynchronization(org.apache.hadoop.dfs.Block block, long newgenerationstamp, long newlength, boolean closeFile, boolean deleteblock, DatanodeID[] newtargets)
    Commit block synchronization in lease recovery.
boolean complete(String src, String clientName)
    The client is done writing data to the given filename and would like to complete it.
void create(String src, FsPermission masked, String clientName, boolean overwrite, short replication, long blockSize)
    Create a new file entry in the namespace.
boolean delete(String src)
    Deprecated.
boolean delete(String src, boolean recursive)
    Delete the given file or directory from the file system.
UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action)
    Report distributed upgrade progress or force the current upgrade to proceed.
void errorReport(org.apache.hadoop.dfs.DatanodeRegistration nodeReg, int errorCode, String msg)
    errorReport() tells the NameNode about something that has gone awry.
void finalizeUpgrade()
    Finalize the previous upgrade.
static void format(Configuration conf)
    Format a new filesystem.
void fsync(String src, String clientName)
    Write all metadata for this file into persistent storage.
LocatedBlocks getBlockLocations(String src, long offset, long length)
    Get the locations of the blocks of the specified file within the specified range.
org.apache.hadoop.dfs.BlocksWithLocations getBlocks(DatanodeInfo datanode, long size)
    Return a list of blocks and their locations on the given datanode, whose total size is size.
ContentSummary getContentSummary(String path)
    Get the ContentSummary rooted at the specified directory.
DatanodeInfo[] getDatanodeReport(FSConstants.DatanodeReportType type)
    Get a report on the system's current datanodes.
long getEditLogSize()
    Returns the size of the current edit log.
org.apache.hadoop.dfs.DFSFileInfo getFileInfo(String src)
    Get the file info for a specific file.
File getFsImageName()
    Returns the name of the fsImage file.
File[] getFsImageNameCheckpoint()
    Returns the names of the fsImage files uploaded by periodic checkpointing.
org.apache.hadoop.dfs.DFSFileInfo[] getListing(String src)
    Get a listing of the indicated directory.
InetSocketAddress getNameNodeAddress()
    Returns the address on which the NameNode is listening.
static NameNodeMetrics getNameNodeMetrics()
long getPreferredBlockSize(String filename)
    Get the block size for the given file.
long getProtocolVersion(String protocol, long clientVersion)
    Return the protocol version corresponding to the protocol interface.
long[] getStats()
    Get a set of statistics about the filesystem.
boolean isInSafeMode()
    Is the cluster currently in safe mode?
void join()
    Wait for service to finish.
static void main(String[] argv)
void metaSave(String filename)
    Dumps NameNode state into the specified file.
boolean mkdirs(String src, FsPermission masked)
    Create a directory (or hierarchy of directories) with the given name and permission.
long nextGenerationStamp(org.apache.hadoop.dfs.Block block)
org.apache.hadoop.dfs.UpgradeCommand processUpgradeCommand(org.apache.hadoop.dfs.UpgradeCommand comm)
    A general way to send a command to the name-node during the distributed upgrade process.
void refreshNodes()
    Tells the NameNode to reread the hosts and exclude files.
org.apache.hadoop.dfs.DatanodeRegistration register(org.apache.hadoop.dfs.DatanodeRegistration nodeReg)
    Register a datanode.
boolean rename(String src, String dst)
    Rename an item in the file system namespace.
void renewLease(String clientName)
    Client programs can cause stateful changes in the NameNode that affect other clients.
void reportBadBlocks(org.apache.hadoop.dfs.LocatedBlock[] blocks)
    The client has detected an error on the specified located blocks and is reporting them to the server.
org.apache.hadoop.dfs.CheckpointSignature rollEditLog()
    Roll the edit log.
void rollFsImage()
    Roll the image.
org.apache.hadoop.dfs.DatanodeCommand sendHeartbeat(org.apache.hadoop.dfs.DatanodeRegistration nodeReg, long capacity, long dfsUsed, long remaining, int xmitsInProgress, int xceiverCount)
    A datanode notifies the NameNode that it is alive; returns a block-oriented command for the datanode to execute.
void setOwner(String src, String username, String groupname)
    Set the owner of a path.
void setPermission(String src, FsPermission permissions)
    Set permissions for an existing file/directory.
void setQuota(String path, long quota)
    Set the quota for a directory.
boolean setReplication(String src, short replication)
    Set replication for an existing file.
boolean setSafeMode(FSConstants.SafeModeAction action)
    Enter, leave, or get safe mode.
void stop()
    Stop all NameNode threads and wait for all to finish.
void verifyRequest(org.apache.hadoop.dfs.DatanodeRegistration nodeReg)
    Verify a request.
void verifyVersion(int version)
    Verify the version.
org.apache.hadoop.dfs.NamespaceInfo versionRequest()
Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Field Detail
public static final int DEFAULT_PORT
public static final org.apache.commons.logging.Log LOG
public static final org.apache.commons.logging.Log stateChangeLog
public static final long versionID
public static final long versionID
public static final int NOTIFY
public static final int DISK_ERROR
public static final int INVALID_BLOCK
public static final int DNA_UNKNOWN
public static final int DNA_TRANSFER
public static final int DNA_INVALIDATE
public static final int DNA_SHUTDOWN
public static final int DNA_REGISTER
public static final int DNA_FINALIZE
public static final int DNA_BLOCKREPORT
public static final int DNA_RECOVERBLOCK
public static final long versionID
Constructor Detail

public NameNode(Configuration conf) throws IOException

    Start NameNode. The name-node can be started with one of the following startup options:
        REGULAR - normal startup
        FORMAT - format the name node
        UPGRADE - start the cluster upgrade and create a snapshot of the current file system state
        ROLLBACK - roll the cluster back to the previous state
    The conf will be modified to reflect the actual ports on which the NameNode is up and running, if the user passes the port as zero in the conf.
    Parameters:
        conf - configuration
    Throws:
        IOException

public NameNode(String bindAddress, Configuration conf) throws IOException

    Create a NameNode at the specified location and start it. The conf will be modified to reflect the actual ports on which the NameNode is up and running, if the user passes the port as zero.
    Throws:
        IOException
Method Detail

public long getProtocolVersion(String protocol, long clientVersion) throws IOException

    Description copied from interface: VersionedProtocol.
    Parameters:
        protocol - the classname of the protocol interface
        clientVersion - the version of the protocol that the client speaks
    Throws:
        IOException

public static void format(Configuration conf) throws IOException

    Throws:
        IOException

public static NameNodeMetrics getNameNodeMetrics()

public void join()

public void stop()

public org.apache.hadoop.dfs.BlocksWithLocations getBlocks(DatanodeInfo datanode, long size) throws IOException

    Return a list of blocks and their locations on the given datanode, whose total size is size.
    Parameters:
        datanode - the datanode on which the blocks are located
        size - total size of the blocks
    Throws:
        IOException

public LocatedBlocks getBlockLocations(String src, long offset, long length) throws IOException

    Get the locations of the blocks of the specified file within the specified range. Returns LocatedBlocks, which contains the file length, the blocks, and their locations. The DataNode locations for each block are sorted by distance to the client's address. The client will then have to contact one of the indicated DataNodes to obtain the actual data.
    Parameters:
        src - file name
        offset - range start offset
        length - range length
    Throws:
        IOException
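The distance-based ordering of a block's locations can be sketched as follows. This is illustrative only, not Hadoop's actual network-topology logic; the node names and distance values are hypothetical, standing in for a topology-derived metric (same host < same rack < different rack).

```java
import java.util.*;

// Sort a block's datanode locations by a precomputed distance to the client,
// so the client tries the nearest replica first.
public class LocationSorter {
    static List<String> sortByDistance(Map<String, Integer> distanceByNode) {
        List<String> nodes = new ArrayList<>(distanceByNode.keySet());
        nodes.sort(Comparator.comparingInt(distanceByNode::get));
        return nodes;
    }

    public static void main(String[] args) {
        Map<String, Integer> d = new HashMap<>();
        d.put("dn-remote:50010", 4);  // different rack
        d.put("dn-local:50010", 0);   // same host as the client
        d.put("dn-rack:50010", 2);    // same rack
        // Nearest-first ordering for the client to walk through.
        System.out.println(sortByDistance(d));
    }
}
```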
public void create(String src, FsPermission masked, String clientName, boolean overwrite, short replication, long blockSize) throws IOException

    Create a new file entry in the namespace. This will create an empty file specified by the source path. The path should be a full path originating at the root; the name-node has no notion of a "current" directory for a client.
    Once created, the file is visible to and available for reading by other clients. Other clients, however, cannot delete(String), re-create, or rename(String, String) it until the file is completed or its lease expires.
    Blocks have a maximum size. Clients that intend to create multi-block files must also use addBlock(String, String).
    Parameters:
        src - path of the file being created
        masked - masked permission
        clientName - name of the current client
        overwrite - indicates whether the file should be overwritten if it already exists
        replication - block replication factor
        blockSize - maximum block size
    Throws:
        AccessControlException - if permission to create the file is denied by the system; on the client side the exception will usually be wrapped in a RemoteException
        QuotaExceededException - if the file creation violates any quota restriction
        IOException - if other errors occur

public boolean setReplication(String src, short replication) throws IOException

    The NameNode sets replication to the new value and returns. The actual block replication is not expected to be performed during this method call; the blocks will be populated or removed in the background as a result of the routine block maintenance procedures.
    Parameters:
        src - file name
        replication - new replication
    Throws:
        IOException
public void setPermission(String src, FsPermission permissions) throws IOException

    Throws:
        IOException

public void setOwner(String src, String username, String groupname) throws IOException

    Parameters:
        username - if null, the original username remains unchanged
        groupname - if null, the original groupname remains unchanged
    Throws:
        IOException

public org.apache.hadoop.dfs.LocatedBlock addBlock(String src, String clientName) throws IOException

    Throws:
        IOException

public void abandonBlock(org.apache.hadoop.dfs.Block b, String src, String holder) throws IOException

    Throws:
        IOException

public boolean complete(String src, String clientName) throws IOException

    Throws:
        IOException

public void reportBadBlocks(org.apache.hadoop.dfs.LocatedBlock[] blocks) throws IOException

    Parameters:
        blocks - array of located blocks to report
    Throws:
        IOException

public long nextGenerationStamp(org.apache.hadoop.dfs.Block block) throws IOException

    Throws:
        IOException
public void commitBlockSynchronization(org.apache.hadoop.dfs.Block block, long newgenerationstamp, long newlength, boolean closeFile, boolean deleteblock, DatanodeID[] newtargets) throws IOException

    Commit block synchronization in lease recovery.
    Throws:
        IOException

public long getPreferredBlockSize(String filename) throws IOException

    Parameters:
        filename - the name of the file
    Throws:
        IOException

public boolean rename(String src, String dst) throws IOException

    Parameters:
        src - existing file or directory name
        dst - new name
    Throws:
        IOException - if the new name is invalid
        QuotaExceededException - if the rename would violate any quota restriction

@Deprecated
public boolean delete(String src) throws IOException

    Any blocks belonging to the deleted files will be garbage-collected.
    Parameters:
        src - existing name
    Throws:
        IOException

public boolean delete(String src, boolean recursive) throws IOException

    Same as delete(String), but provides a way to avoid accidentally deleting non-empty directories programmatically.
    Parameters:
        src - existing name
        recursive - if true, deletes a non-empty directory recursively; otherwise an exception is thrown
    Throws:
        IOException

public boolean mkdirs(String src, FsPermission masked) throws IOException

    Parameters:
        src - the path of the directory being created
        masked - the masked permission of the directory being created
    Throws:
        QuotaExceededException - if the operation would violate any quota restriction
        IOException

public void renewLease(String clientName) throws IOException

    Client programs can cause stateful changes in the NameNode that affect other clients, so the NameNode will revoke the locks and live file-creates of clients that it thinks have died. A client tells the NameNode that it is still alive by periodically calling renewLease(). If a certain amount of time passes since the last call to renewLease(), the NameNode assumes the client has died.
    Throws:
        IOException
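The lease bookkeeping described for renewLease() can be sketched as follows. This is a minimal model, not Hadoop's LeaseManager; the expiry period here is a hypothetical constant standing in for the configured lease timeout.

```java
import java.util.*;

// Track the last renewLease() time per client; a client whose lease has not
// been renewed within EXPIRY_MS is presumed dead and its leases revocable.
public class LeaseTable {
    static final long EXPIRY_MS = 60_000;  // hypothetical lease period
    static final Map<String, Long> lastRenewal = new HashMap<>();

    static void renewLease(String clientName, long nowMs) {
        lastRenewal.put(clientName, nowMs);
    }

    static boolean isExpired(String clientName, long nowMs) {
        Long last = lastRenewal.get(clientName);
        return last == null || nowMs - last > EXPIRY_MS;
    }

    public static void main(String[] args) {
        renewLease("client-1", 0L);
        System.out.println(isExpired("client-1", 30_000L));  // within the period
        System.out.println(isExpired("client-1", 61_000L));  // lease lapsed
    }
}
```

Passing the clock in as a parameter (rather than calling System.currentTimeMillis() inside) keeps the expiry logic deterministic and testable.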
public org.apache.hadoop.dfs.DFSFileInfo[] getListing(String src) throws IOException

    Throws:
        IOException

public org.apache.hadoop.dfs.DFSFileInfo getFileInfo(String src) throws IOException

    Parameters:
        src - the string representation of the path to the file
    Throws:
        IOException - if permission to access the file is denied by the system

public long[] getStats() throws IOException

    Throws:
        IOException

public DatanodeInfo[] getDatanodeReport(FSConstants.DatanodeReportType type) throws IOException

    Throws:
        IOException

public boolean setSafeMode(FSConstants.SafeModeAction action) throws IOException

    Enter, leave, or get safe mode. Safe mode is a name node state in which the name node does not accept changes to the namespace (it is read-only) and does not replicate or delete blocks.
    Safe mode is entered automatically at name node startup. It can also be entered manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER).
    At startup the name node accepts datanode reports, collecting information about block locations. In order to leave safe mode it needs to collect a configurable percentage, called the threshold, of blocks that satisfy the minimal replication condition. The minimal replication condition is that each block must have at least dfs.replication.min replicas. When the threshold is reached, the name node extends safe mode for a configurable amount of time to let the remaining datanodes check in before it starts replicating missing blocks. Then the name node leaves safe mode.
    If safe mode is turned on manually using setSafeMode(SafeModeAction.SAFEMODE_ENTER), the name node stays in safe mode until it is manually turned off using setSafeMode(SafeModeAction.SAFEMODE_LEAVE).
    The current state of the name node can be verified using setSafeMode(SafeModeAction.SAFEMODE_GET).
    Parameters:
        action
    Throws:
        IOException
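The threshold rule for leaving safe mode reduces to a single predicate, sketched below. This is illustrative only; the threshold constant is hypothetical and stands in for the configured percentage, and the extension period that follows is omitted.

```java
// Safe-mode exit condition: the fraction of blocks satisfying the minimal
// replication condition ("safe" blocks) must reach the configured threshold.
public class SafeModeCheck {
    static final double THRESHOLD = 0.999;  // hypothetical configured fraction

    static boolean thresholdReached(long safeBlocks, long totalBlocks) {
        if (totalBlocks == 0) return true;  // empty namespace: nothing to wait for
        return (double) safeBlocks / totalBlocks >= THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(thresholdReached(990, 1000));  // still waiting
        System.out.println(thresholdReached(999, 1000));  // threshold reached
    }
}
```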
public boolean isInSafeMode()

public void refreshNodes() throws IOException

    Throws:
        IOException

public long getEditLogSize() throws IOException

    Throws:
        IOException

public org.apache.hadoop.dfs.CheckpointSignature rollEditLog() throws IOException

    Throws:
        IOException

public void rollFsImage() throws IOException

    Throws:
        IOException

public void finalizeUpgrade() throws IOException

    Throws:
        IOException

public UpgradeStatusReport distributedUpgradeProgress(FSConstants.UpgradeAction action) throws IOException

    Parameters:
        action - the FSConstants.UpgradeAction to perform
    Throws:
        IOException

public void metaSave(String filename) throws IOException

    Throws:
        IOException
public ContentSummary getContentSummary(String path) throws IOException

    Get the ContentSummary rooted at the specified directory.
    Parameters:
        path - the string representation of the path
    Throws:
        IOException

public void setQuota(String path, long quota) throws IOException

    Parameters:
        path - the string representation of the path to the directory
        quota - the limit on the number of names in the tree rooted at the directory
    Throws:
        FileNotFoundException - if the path is a file or does not exist
        QuotaExceededException - if the directory size is greater than the given quota
        IOException
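The name quota set by setQuota() amounts to an upper bound on the number of names (files and directories) under a directory. A minimal sketch of that check, not Hadoop's accounting code, with illustrative numbers:

```java
// Would adding `newNames` names to a tree that already holds `namesInTree`
// names stay within the directory's quota?
public class QuotaCheck {
    static boolean withinQuota(long namesInTree, long newNames, long quota) {
        return namesInTree + newNames <= quota;
    }

    public static void main(String[] args) {
        // A directory already holding 98 names under a quota of 100.
        System.out.println(withinQuota(98, 1, 100));  // fits
        System.out.println(withinQuota(98, 5, 100));  // would exceed the quota
    }
}
```

In the real API the violation surfaces as a QuotaExceededException on the offending create/mkdirs/rename rather than a boolean.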
public void clearQuota(String path) throws IOException

    Parameters:
        path - the string representation of the path to the directory
    Throws:
        FileNotFoundException - if the path is not a directory
        IOException

public void fsync(String src, String clientName) throws IOException

    Parameters:
        src - the string representation of the path
        clientName - the string representation of the client
    Throws:
        IOException
public org.apache.hadoop.dfs.DatanodeRegistration register(org.apache.hadoop.dfs.DatanodeRegistration nodeReg) throws IOException

    Returns an updated DatanodeRegistration, which contains a new storageID if the datanode did not have one, and a registration ID for further communication.
    Throws:
        IOException
    See Also:
        DataNode.register(), FSNamesystem.registerDatanode(DatanodeRegistration)

public org.apache.hadoop.dfs.DatanodeCommand sendHeartbeat(org.apache.hadoop.dfs.DatanodeRegistration nodeReg, long capacity, long dfsUsed, long remaining, int xmitsInProgress, int xceiverCount) throws IOException

    Throws:
        IOException

public org.apache.hadoop.dfs.DatanodeCommand blockReport(org.apache.hadoop.dfs.DatanodeRegistration nodeReg, long[] blocks) throws IOException

    Parameters:
        blocks - the block list as an array of longs. Each block is represented as 2 longs. This is done instead of Block[] to reduce the memory used by block reports.
    Throws:
        IOException
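The two-longs-per-block encoding of the blocks parameter can be sketched as a simple pack/unpack pair. This is illustrative only; exactly which two fields of a block are packed is an assumption here (shown as block ID and length), and the real report wire format may differ.

```java
// Flatten a block list into one long[] (2 longs per block) and back,
// avoiding a Block[] of per-block objects in large reports.
public class BlockReportCodec {
    // blocks[i] = {blockId, numBytes} (hypothetical field choice)
    static long[] encode(long[][] blocks) {
        long[] out = new long[blocks.length * 2];
        for (int i = 0; i < blocks.length; i++) {
            out[2 * i] = blocks[i][0];
            out[2 * i + 1] = blocks[i][1];
        }
        return out;
    }

    static long[][] decode(long[] report) {
        long[][] out = new long[report.length / 2][2];
        for (int i = 0; i < out.length; i++) {
            out[i][0] = report[2 * i];
            out[i][1] = report[2 * i + 1];
        }
        return out;
    }

    public static void main(String[] args) {
        long[] report = encode(new long[][] {{1001L, 64L}, {1002L, 128L}});
        System.out.println(java.util.Arrays.toString(report));
    }
}
```

For a datanode holding millions of blocks, a flat long[] costs two words per block, versus an object header plus fields for every element of a Block[], which is the memory saving the parameter description alludes to.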
public void blockReceived(org.apache.hadoop.dfs.DatanodeRegistration nodeReg, org.apache.hadoop.dfs.Block[] blocks, String[] delHints) throws IOException

    Throws:
        IOException

public void errorReport(org.apache.hadoop.dfs.DatanodeRegistration nodeReg, int errorCode, String msg) throws IOException

    Throws:
        IOException

public org.apache.hadoop.dfs.NamespaceInfo versionRequest() throws IOException

    Throws:
        IOException

public org.apache.hadoop.dfs.UpgradeCommand processUpgradeCommand(org.apache.hadoop.dfs.UpgradeCommand comm) throws IOException

    Throws:
        IOException

public void verifyRequest(org.apache.hadoop.dfs.DatanodeRegistration nodeReg) throws IOException

    Parameters:
        nodeReg - datanode registration
    Throws:
        IOException

public void verifyVersion(int version) throws IOException

    Parameters:
        version
    Throws:
        IOException

public File getFsImageName() throws IOException

    Throws:
        IOException

public File[] getFsImageNameCheckpoint() throws IOException

    Throws:
        IOException
public InetSocketAddress getNameNodeAddress()
public static void main(String[] argv) throws Exception
    Throws:
        Exception