| Package | Description |
|---|---|
| org.apache.hadoop.hdfs | A distributed implementation of FileSystem. |
| org.apache.hadoop.hdfs.protocolPB | |
| org.apache.hadoop.hdfs.server.balancer | |
| org.apache.hadoop.hdfs.server.blockmanagement | |
| org.apache.hadoop.hdfs.server.datanode.fsdataset | |
| org.apache.hadoop.hdfs.server.protocol | |
**Fields declared as StorageType**

| Modifier and Type | Field and Description |
|---|---|
| static StorageType | StorageType.DEFAULT |
| static StorageType[] | StorageType.EMPTY_ARRAY |
**Methods that return StorageType**

| Modifier and Type | Method and Description |
|---|---|
| static StorageType | StorageType.valueOf(String name)<br>Returns the enum constant of this type with the specified name. |
| static StorageType[] | StorageType.values()<br>Returns an array containing the constants of this enum type, in the order they are declared. |
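Since StorageType is an enum, valueOf(String) and values() behave as they do for any Java enum. The sketch below illustrates this with a local stand-in enum rather than the Hadoop class itself; the constant set (RAM_DISK, SSD, DISK, ARCHIVE) and the modeling of DEFAULT as DISK are assumptions based on the real enum, not guaranteed by this page.

```java
// Stand-in for the StorageType enum, used only to demonstrate the
// inherited enum methods valueOf(String) and values(). This is a
// sketch, not the Hadoop class.
public class StorageTypeDemo {
    public enum StorageType { RAM_DISK, SSD, DISK, ARCHIVE }

    // In the real class DEFAULT is a static field; modeling it as DISK
    // here is an assumption.
    public static final StorageType DEFAULT = StorageType.DISK;

    public static void main(String[] args) {
        // valueOf resolves a constant by its exact name (case-sensitive);
        // an unknown name throws IllegalArgumentException.
        StorageType t = StorageType.valueOf("SSD");
        System.out.println(t);

        // values() returns the constants in declaration order.
        for (StorageType s : StorageType.values()) {
            System.out.println(s);
        }
    }
}
```

Because valueOf throws on unrecognized input, callers parsing user-supplied storage-type names typically catch IllegalArgumentException rather than pre-validating the string.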
**Methods that return types with arguments of type StorageType**

| Modifier and Type | Method and Description |
|---|---|
| static List<StorageType> | StorageType.asList() |
| static List<StorageType> | StorageType.getMovableTypes() |
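asList() and getMovableTypes() both return lists of constants; the plausible distinction, sketched below with a stand-in enum, is that movable types exclude transient (memory-backed) storage such as RAM_DISK, which the balancer and mover cannot migrate blocks onto or off of. The transient flag and constant set here are assumptions modeled on the real enum.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of how asList() and getMovableTypes() plausibly differ:
// all constants vs. only non-transient ones. Assumption: RAM_DISK is
// the only transient storage type.
public class MovableTypesSketch {
    public enum StorageType {
        RAM_DISK(true), SSD(false), DISK(false), ARCHIVE(false);

        private final boolean isTransient;
        StorageType(boolean isTransient) { this.isTransient = isTransient; }

        // Every constant, in declaration order.
        public static List<StorageType> asList() {
            List<StorageType> all = new ArrayList<>();
            for (StorageType t : values()) all.add(t);
            return all;
        }

        // Only types whose blocks can be moved, i.e. non-transient ones.
        public static List<StorageType> getMovableTypes() {
            List<StorageType> movable = new ArrayList<>();
            for (StorageType t : values()) {
                if (!t.isTransient) movable.add(t);
            }
            return movable;
        }
    }

    public static void main(String[] args) {
        System.out.println(StorageType.asList());          // all constants
        System.out.println(StorageType.getMovableTypes()); // excludes RAM_DISK
    }
}
```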
**Methods in org.apache.hadoop.hdfs.protocolPB that return StorageType**

| Modifier and Type | Method and Description |
|---|---|
| static StorageType | PBHelper.convertStorageType(org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageTypeProto type) |
| static StorageType[] | PBHelper.convertStorageTypes(List<org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageTypeProto> storageTypesList, int expectedSize) |
**Methods in org.apache.hadoop.hdfs.protocolPB with parameters of type StorageType**

| Modifier and Type | Method and Description |
|---|---|
| static org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageTypesProto | PBHelper.convert(StorageType[] types) |
| static org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageTypeProto | PBHelper.convertStorageType(StorageType type) |
| static List<org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageTypeProto> | PBHelper.convertStorageTypes(StorageType[] types) |
| static List<org.apache.hadoop.hdfs.protocol.proto.HdfsProtos.StorageTypeProto> | PBHelper.convertStorageTypes(StorageType[] types, int startIdx) |
**Methods in org.apache.hadoop.hdfs.server.balancer that return StorageType**

| Modifier and Type | Method and Description |
|---|---|
| StorageType | Dispatcher.DDatanode.StorageGroup.getStorageType() |
**Methods in org.apache.hadoop.hdfs.server.balancer with parameters of type StorageType**

| Modifier and Type | Method and Description |
|---|---|
| Dispatcher.Source | Dispatcher.DDatanode.addSource(StorageType storageType, long maxSize2Move, org.apache.hadoop.hdfs.server.balancer.Dispatcher d) |
| Dispatcher.DDatanode.StorageGroup | Dispatcher.DDatanode.addTarget(StorageType storageType, long maxSize2Move) |
| G | Dispatcher.StorageGroupMap.get(String datanodeUuid, StorageType storageType) |
**Methods in org.apache.hadoop.hdfs.server.blockmanagement that return StorageType**

| Modifier and Type | Method and Description |
|---|---|
| static StorageType[] | DatanodeStorageInfo.toStorageTypes(DatanodeStorageInfo[] storages) |
**Method parameters in org.apache.hadoop.hdfs.server.blockmanagement with type arguments of type StorageType**

| Modifier and Type | Method and Description |
|---|---|
| protected DatanodeStorageInfo | BlockPlacementPolicyWithNodeGroup.chooseLocalRack(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes) |
| protected DatanodeStorageInfo | BlockPlacementPolicyWithNodeGroup.chooseLocalStorage(org.apache.hadoop.net.Node localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxNodesPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes, boolean fallbackToLocalRack)<br>Choose local node of localMachine as the target. |
| protected void | BlockPlacementPolicyWithNodeGroup.chooseRemoteRack(int numOfReplicas, org.apache.hadoop.hdfs.server.blockmanagement.DatanodeDescriptor localMachine, Set<org.apache.hadoop.net.Node> excludedNodes, long blocksize, int maxReplicasPerRack, List<DatanodeStorageInfo> results, boolean avoidStaleNodes, EnumMap<StorageType,Integer> storageTypes)<br>Choose numOfReplicas nodes from the racks that localMachine is NOT on. |
**Methods in org.apache.hadoop.hdfs.server.datanode.fsdataset that return StorageType**

| Modifier and Type | Method and Description |
|---|---|
| StorageType | FsVolumeSpi.getStorageType() |
**Methods in org.apache.hadoop.hdfs.server.protocol that return StorageType**

| Modifier and Type | Method and Description |
|---|---|
| StorageType | DatanodeStorage.getStorageType() |
**Constructors in org.apache.hadoop.hdfs.server.protocol with parameters of type StorageType**

| Constructor and Description |
|---|
| DatanodeStorage(String sid, DatanodeStorage.State s, StorageType sm) |
Copyright © 2016 Apache Software Foundation. All Rights Reserved.