Hadoop 2.6.0 + cloud CentOS + pseudo-distributed ---> deployment only

  1. Hadoop 3.0.3 was a pain to work with, so upload hadoop-2.6.0.tar.gz to /usr, run chown -R hadoop:hadoop hadoop-2.6.0, and rm the 3.0.3 directory.




2. Configure the Java environment variables in /etc/profile, and the Hadoop ones as well.

Set up passwordless SSH login (see my earlier notes).
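For reference, the passwordless setup is usually just the following, run as the hadoop user; this assumes a single node that SSHes to itself, as in this pseudo-distributed install:

```shell
# Generate an RSA key pair with an empty passphrase.
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa

# Authorize the public key for logins to this same machine.
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Verify: this should open a shell without asking for a password.
ssh localhost
```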



3. Configuration files

Configure the Java environment in hadoop-env.sh.
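Concretely, that is one line in etc/hadoop/hadoop-env.sh. The JDK path below is an example; point it at wherever your JDK actually lives (the format log later shows Java 1.8.0_152):

```shell
# etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_152   # example path, adjust to your JDK install
```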



core-site.xml



The official site doesn't call out this port-9000 setting, but if you leave it out, start-dfs.sh fails with the following error:

Incorrect configuration: namenode address dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not configured.
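A minimal core-site.xml that supplies that address looks like this. The hostname and tmp directory are assumptions on my part, so pick values matching your own box; the format log below does show the name directory ending up under /usr/hadoop-2.6.0/data/tmp:

```xml
<configuration>
  <!-- The NameNode RPC address; without this, start-dfs.sh reports the error above. -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <!-- Base directory for HDFS working files. -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop-2.6.0/data/tmp</value>
  </property>
</configuration>
```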



hdfs-site.xml


Parameter: dfs.name.dir
Description: Directories for the NameNode's metadata, comma-separated; HDFS keeps a redundant copy of the metadata in each of them. These are usually on different block devices, and directories that don't exist are ignored.
Default: ${hadoop.tmp.dir}/dfs/name
Config file: hdfs-site.xml
Example value: /hadoop/hdfs/name

Parameter: dfs.name.edits.dir
Description: Directories for the NameNode's transaction (edits) files, comma-separated; HDFS keeps a redundant copy in each of them. Usually on different block devices, and directories that don't exist are ignored.
Default: ${dfs.name.dir}/current??
Config file: hdfs-site.xml
Example value: ${
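For a single-node setup, hdfs-site.xml can be as small as one property; with only one DataNode the replication factor should be 1 (this is the stock pseudo-distributed setting, not something specific to this box):

```xml
<configuration>
  <!-- One DataNode, so one replica per block. -->
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
```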













4. Format the filesystem

# hadoop namenode -format


[root@zui hadoop]# hadoop namenode -format     (since the root user is used here, namenode / datanode and secondarynamenode won't start unless start-dfs.sh is also run as root; yarn is unaffected)

DEPRECATED: Use of this script to execute hdfs command is deprecated.

Instead use the hdfs command for it.


18/07/23 17:03:28 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG:   host = zui/182.61.17.191

STARTUP_MSG:   args = [-format]

STARTUP_MSG:   version = 2.6.0

STARTUP_MSG:   classpath =/***********各類jar包的path/

STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z

STARTUP_MSG:   java = 1.8.0_152

************************************************************/

18/07/23 17:03:29 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]

18/07/23 17:03:29 INFO namenode.NameNode: createNameNode [-format]

Formatting using clusterid: CID-cb98355b-6a1d-47a2-964c-48dc32752b55

18/07/23 17:03:30 INFO namenode.FSNamesystem: No KeyProvider found.

18/07/23 17:03:30 INFO namenode.FSNamesystem: fsLock is fair:true

18/07/23 17:03:30 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000

18/07/23 17:03:30 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true

18/07/23 17:03:30 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000

18/07/23 17:03:30 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jul 23 17:03:30

18/07/23 17:03:30 INFO util.GSet: Computing capacity for map BlocksMap

18/07/23 17:03:30 INFO util.GSet: VM type       = 64-bit

18/07/23 17:03:30 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB

18/07/23 17:03:30 INFO util.GSet: capacity      = 2^21 = 2097152 entries

18/07/23 17:03:30 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false

18/07/23 17:03:30 INFO blockmanagement.BlockManager: defaultReplication= 1

18/07/23 17:03:30 INFO blockmanagement.BlockManager: maxReplication= 512

18/07/23 17:03:30 INFO blockmanagement.BlockManager: minReplication= 1

18/07/23 17:03:30 INFO blockmanagement.BlockManager: maxReplicationStreams= 2

18/07/23 17:03:30 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = false

18/07/23 17:03:30 INFO blockmanagement.BlockManager: replicationRecheckInterval= 3000

18/07/23 17:03:30 INFO blockmanagement.BlockManager: encryptDataTransfer= false

18/07/23 17:03:30 INFO blockmanagement.BlockManager: maxNumBlocksToLog= 1000

18/07/23 17:03:30 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)

18/07/23 17:03:30 INFO namenode.FSNamesystem: supergroup          = supergroup

18/07/23 17:03:30 INFO namenode.FSNamesystem: isPermissionEnabled = true

18/07/23 17:03:30 INFO namenode.FSNamesystem: HA Enabled: false

18/07/23 17:03:30 INFO namenode.FSNamesystem: Append Enabled: true

18/07/23 17:03:31 INFO util.GSet: Computing capacity for map INodeMap

18/07/23 17:03:31 INFO util.GSet: VM type       = 64-bit

18/07/23 17:03:31 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB

18/07/23 17:03:31 INFO util.GSet: capacity      = 2^20 = 1048576 entries

18/07/23 17:03:31 INFO namenode.NameNode: Caching file names occuring more than 10 times

18/07/23 17:03:31 INFO util.GSet: Computing capacity for map cachedBlocks

18/07/23 17:03:31 INFO util.GSet: VM type       = 64-bit

18/07/23 17:03:31 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB

18/07/23 17:03:31 INFO util.GSet: capacity      = 2^18 = 262144 entries

18/07/23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033

18/07/23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0

18/07/23 17:03:31 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension= 30000

18/07/23 17:03:31 INFO namenode.FSNamesystem: Retry cache on namenode is enabled

18/07/23 17:03:31 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis

18/07/23 17:03:31 INFO util.GSet: Computing capacity for map NameNodeRetryCache

18/07/23 17:03:31 INFO util.GSet: VM type       = 64-bit

18/07/23 17:03:31 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB

18/07/23 17:03:31 INFO util.GSet: capacity      = 2^15 = 32768 entries

18/07/23 17:03:31 INFO namenode.NNConf: ACLs enabled? false

18/07/23 17:03:31 INFO namenode.NNConf: XAttrs enabled? true

18/07/23 17:03:31 INFO namenode.NNConf: Maximum size of an xattr: 16384

18/07/23 17:03:31 INFO namenode.FSImage: Allocated new BlockPoolId: BP-702429615-182.61.17.191-1532336611838

18/07/23 17:03:31 INFO common.Storage: Storage directory /usr/hadoop-2.6.0/data/tmp/dfs/name has been successfully formatted.

18/07/23 17:03:32 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0

18/07/23 17:03:32 INFO util.ExitUtil: Exiting with status 0

18/07/23 17:03:32 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at zui/182.61.17.191

************************************************************/



Formatting succeeded. I've pasted the printed output here; studying it in depth requires analyzing it.


5. Run start-dfs.sh, then check the result with jps.
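The check amounts to two commands; the process list in the comment is what I'd expect from a healthy pseudo-distributed HDFS:

```shell
start-dfs.sh
jps   # expect NameNode, DataNode, SecondaryNameNode (plus Jps itself)
```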



6. Visit http://<public-ip>:50070/ in a browser.

A big screenshot of the NameNode web UI, for the satisfaction of it.



Full reference: http://www.javashuo.com/article/p-oclthixm-hx.html

Any resemblance to other posts is outright plagiarism.



2018 07 23








Resource scheduling in Hadoop: YARN


mapred-site.xml

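The usual content here is a single property telling MapReduce to run on YARN (in 2.6.0 this file is created by copying mapred-site.xml.template):

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```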


yarn-site.xml

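For yarn-site.xml, the standard pseudo-distributed setting enables the MapReduce shuffle service on the NodeManager:

```xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```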


Switch to the hadoop user and run start-yarn.sh. The passwordless SSH setup was done under the hadoop user, so running as root would prompt for the password over and over.


Because the earlier start-dfs.sh was run as root, the log files give Permission denied for the hadoop user.


Check as follows:



Assign the user and group of logs to hadoop. (Tip: whatever user you set up passwordless login under, do every later Hadoop operation as that same user. 1. Under any other user, who knows how many password prompts you'd face; a hundred operations each asking for the password will drive you mad. 2. If you start out as root and only later see the light and switch back to the hadoop user, some generated files will be owned by root; if the hadoop user also needs those directories, it plainly lacks permission. If a check turns up 100 such files, with luck a single chown -R fixes it; with bad luck, try typing chown 100 times.)

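Here the reassignment is one command; the path assumes the install location from step 1:

```shell
chown -R hadoop:hadoop /usr/hadoop-2.6.0/logs
```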


Run start-yarn.sh again.



Checking again: why are the NameNode and DataNode processes not listed, even though http://182.61.**.***:50070 is still reachable???? (Most likely because jps only lists JVMs owned by the current user; the HDFS daemons were started under root, so the hadoop user's jps doesn't see them.)




Type the address into the browser; OK, seeing the result below means the pseudo-distributed setup is complete.

