Recovering Virtual Machine Disks Lost After a Power Failure on XenServer 5.5

1. Symptoms

Our company's cloud platform runs on XenServer 5.5, a fairly old release. A recent datacenter refit cut power to the whole cloud environment. After rebooting, two machines could not be pinged, so I rebooted again and logged in to repair the network interfaces. I then found that XenCenter could not find the Local Storage on some hosts: the storage entry was empty, and several important virtual machines could not start.

2. Diagnosis

I logged in to the affected host remotely and ran lvscan to inspect the LVM logical volumes:

[root@host202 backup]# lvscan
inactive          '/dev/VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa/MGT' [4.00 MB] inherit

lvdisplay showed only this single logical volume; the original volumes were all gone. I concluded that the LVM metadata had been lost, and checked the /etc/lvm/backup directory, which contained two backup files:

[root@host202 backup]# ls -al /etc/lvm/backup/
total 16
drwx------ 2 root root 4096 Jan 22 16:48 .
drwxr-xr-x 5 root root 4096 Sep 19 2011 ..
-rw------- 1 root root 4072 Jan 21 15:24 VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
-rw------- 1 root root 1259 Jan 22 16:48 VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa

cat-ing both files showed that the larger one described the volume group I needed, so I prepared to restore it.

3. Experiment

The plan was to rehearse the recovery first: install CentOS in a VMware guest, create a few logical volumes, back up the metadata, delete everything, create a new logical volume over the old data, and then attempt the restore to see whether it succeeds. The test VM had two disks: a 10 GB system disk with CentOS installed, and a 2 GB scratch disk for the experiment.

3.1 Setting up the experiment

The environment was initialized with the following commands:

pvcreate /dev/sdb1    ## create the LVM physical volume
vgcreate lvmfix /dev/sdb1  ## create a volume group on it, i.e. combine disks into one logical disk
lvcreate -n fix01-10M -L 10M lvmfix ## create a 10 MB logical volume; Linux treats it like a disk partition
lvcreate -n fix01-101M -L 101M lvmfix ## create several in a row; this one is 101 MB
lvcreate -n fix01-502M -L 502M lvmfix ## and a 502 MB one

Format each volume and put some recognizable files on it; repeat the following steps for every logical volume:

mkfs -j /dev/lvmfix/fix01-10M  ## format as ext3 (-j adds a journal)
mkdir /root/f10m
mount /dev/lvmfix/fix01-10M /root/f10m
echo "abc 10m hello" > /root/f10m/f10m-readme

After treating the other two volumes the same way, back up the LVM layout:

vgcfgbackup -f %s-20140124 ## %s in the file name is replaced by the volume group name, giving lvmfix-20140124

Now simulate the failure: first delete the volumes completely, then create a new volume group that overwrites part of the data:

vgremove lvmfix ## remove the volume group; answer yes to any prompts
pvremove /dev/sdb1 ## remove the physical volume as well

Create a temporary replacement that overwrites the original metadata:

pvcreate /dev/sdb1
vgcreate vg-fix-2 /dev/sdb1
lvcreate -n wrong-op -L 1G vg-fix-2
vgcfgbackup -f %s-after-wrong-op ## back up the damaged layout as well (optional)

3.2 Recovery procedure

To begin the recovery, first delete the temporary volumes:

vgremove vg-fix-2
pvremove /dev/sdb1

Then examine the earlier metadata backup, /root/lvmfix-20140124, and extract the PV uuid and VG uuid:

grep "id =" /root/lvmfix-20140124

The second `id =` line is the PV uuid; note it down (written as {pvuuid} below). Then recreate a physical volume with that same uuid:

pvcreate --restorefile /root/lvmfix-20140124 --uuid {pvuuid} /dev/sdb1 ## note: older LVM versions may not accept the --restorefile option; omit it there
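Fishing the uuid out of the grep output by eye is error-prone; awk can grab the pv0 uuid directly. A sketch, exercised here on an inlined sample in the backup-file format (for a real run, replace the heredoc with `< /root/lvmfix-20140124`; the sample values are mock data taken from this article):

```shell
# Print the uuid of pv0: the first 'id =' line after the pv0 block opens.
pvuuid=$(awk -F'"' '/pv0 \{/ { inpv = 1 } inpv && /id =/ { print $2; exit }' <<'EOF'
lvmfix {
id = "vcm98B-U8Ii-rB2z-Z0hP-0svE-DiM7-lsHXSe"
        physical_volumes {
                pv0 {
id = "OfQbfY-Fbvf-p5KW-8s8x-iyrx-VZ4F-ogDpIv"
                }
        }
}
EOF
)
echo "$pvuuid"    # the value to pass to pvcreate --uuid
```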

Then restore the volume group:

vgcfgrestore --test --file /root/lvmfix-20140124 lvmfix ## restore the original volume group; test first
vgcfgrestore --file /root/lvmfix-20140124 lvmfix ## then drop --test and do it for real
lvscan ## afterwards, check whether the logical volumes are back
vgchange -ay lvmfix ## remember to activate the volume group so the LVs become ACTIVE
mount -t ext3 /dev/lvmfix/fix01-10M /root/f10m ## re-mount; a filesystem check will be needed

The mount failed with `mount: wrong fs type, bad option, bad superblock`, so the filesystem was damaged and needed repair:

e2fsck /dev/lvmfix/fix01-10M ## repair the filesystem

Mount and check every logical volume, repairing wherever errors appear. In my experience only the first one or two volumes get damaged; the later ones are usually intact. Note, however, that this repair method must not be used on XenServer's VHD volumes.
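The per-volume mount check can likewise be scripted. A dry-run sketch that prints, for each experiment LV, a mount attempt falling back to e2fsck on failure (/mnt/check is a hypothetical scratch mount point):

```shell
# Dry run: emit one "mount || repair" line per experiment LV.
checks=$(for lv in fix01-10M fix01-101M fix01-502M; do
  echo "mount -t ext3 /dev/lvmfix/$lv /mnt/check || e2fsck -y /dev/lvmfix/$lv"
done)
echo "$checks"
```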

4.實際操做

With the experiment successful, it was time to operate on the damaged hosts. The process threw up many other surprises and was genuinely painful; to borrow the handle of a foreigner I ran across while searching for material: I Hate Xen!

4.1 Phase 1: cleanup

First find the physical volume's uuid:

[root@host202 backup]# head -50 /etc/lvm/backup/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
# Generated by LVM2 version 2.02.56(1)-RHEL5 (2010-04-22): Tue Jan 21 15:24:17 2014
contents = "Text Format Volume Group"
version = 1
description = "Created *after* executing '/usr/sbin/pvresize /dev/disk/by-id/scsi-3600605b00283629017a39a1525dc3ec8-part3'"
creation_host = "host202"       # Linux host202 2.6.32.12-0.7.1.xs1.1.0.327.170596xen #1 SMP Fri Sep 16 17:45:00 EDT 2011 i686
creation_time = 1390289057 # Tue Jan 21 15:24:17 2014
VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923 { ## note this volume group name; it is recreated below
id = "vcm98B-U8Ii-rB2z-Z0hP-0svE-DiM7-lsHXSe"
seqno = 18
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192 # 4 Megabytes
max_lv = 0
max_pv = 0
        physical_volumes {
                pv0 {
id = "OfQbfY-Fbvf-p5KW-8s8x-iyrx-VZ4F-ogDpIv" ## this is the physical volume uuid (pvuuid) we are after
device = "/dev/sda3" # Hint only
                        status = ["ALLOCATABLE"]
flags = []

Check which volume group actually exists, in preparation for deleting the one XenServer created automatically during its recovery:

[root@host202 backup]# vgscan
  Reading all physical volumes.  This may take a while...
Found volume group "VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa" using metadata type lvm2

Following the experiment's steps, remove the useless volume group:

[root@host202 backup]# vgremove VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa
Do you really want to remove volume group "VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa" containing 1 logical volumes? [y/n]: y
Logical volume "MGT" successfully removed
Volume group "VG_XenStorage-4883c621-cad8-e6db-7d17-b33ac4eb1aaa" successfully removed

Now look at the physical volume before deleting it. pvscan reports the PV as belonging to no VG, i.e. empty, but don't panic: as long as nothing has been written to it, the original contents can still be recovered. When deleting, mind the device path in the command below; your disk partition will differ from the /dev/sda3 on my machine.

[root@host202 backup]# pvscan
PV /dev/sda3 lvm2 [456.73 GB]
Total: 1 [456.73 GB] / in use: 0 [0 ] / in no VG: 1 [456.73 GB]

Delete the physical volume:

[root@host202 backup]# pvremove /dev/sda3
Labels on physical volume "/dev/sda3" successfully wiped

4.2 Phase 2: restoring the LVM metadata

Following the experiment, recreate a physical volume with the same name and uuid, passing the pvuuid noted earlier. And never forget the /dev/sda3 argument: my machine and yours differ, so use whichever partition you ran pvremove on; create the PV on the wrong device and everything is gone. (Note in the transcript below that the first attempt failed because --uuid was typed with a single dash.)

[root@host202 backup]# pvcreate --restorefile ./VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923  -uuid OfQbfY-Fbvf-p5KW-8s8x-iyrx-VZ4F-ogDpIv /dev/sda3
Can only set uuid on one volume at once
Run `pvcreate --help' for more information.
[root@host202 backup]# pvcreate --uuid OfQbfY-Fbvf-p5KW-8s8x-iyrx-VZ4F-ogDpIv /dev/sda3
Physical volume "/dev/sda3" successfully created

Then restore the volume group; its name is the one noted in the first step of phase 1. Test first, then write for real:

[root@host202 backup]# vgcfgrestore --test --file VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923 VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
Test mode: Metadata will NOT be updated.
Restored volume group VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923

Now write it for real and restore the LVM layout (again, mind the volume group name):

[root@host202 backup]# vgcfgrestore  --file VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923 VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
Restored volume group VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923

The results were encouraging. First the logical volumes; note that they are all in the inactive state:

[root@host202 backup]# lvscan
inactive '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT' [4.00 MB] inherit
inactive '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0' [8.02 GB] inherit
inactive '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-5ceec995-26ec-4986-931f-3d1804807650' [192.38 GB] inherit
inactive '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-3a3a681d-c1c2-4636-a656-f9901343d33d' [92.19 GB] inherit
inactive '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-a69ae385-924c-42e7-af38-2e38ffeaf851' [8.02 GB] inherit
inactive '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-a3e49a56-2326-44d4-a136-3e4a28beded7' [6.02 GB] inherit
inactive '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-2b1a8fca-90d7-4ff4-b12a-aa2c8b589ba0' [6.02 GB] inherit
inactive '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-5e734d3c-2669-432d-8d38-4099d320375d' [8.00 MB] inherit
inactive '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-db2d7fd2-018a-4719-ae73-046d402224c6' [6.02 GB] inherit

The volume group looks good too:

[root@host202 backup]# vgscan
Reading all physical volumes. This may take a while...
Found volume group "VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923" using metadata type lvm2

The physical volume also looks healthy, and as the pvscan totals show, our disk space is back:

[root@host202 ~]# pvscan
PV /dev/sda3 VG VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923 lvm2 [456.71 GB / 138.02 GB free]
Total: 1 [456.71 GB] / in use: 1 [456.71 GB] / in no VG: 0 [0 ]

As in the experiment, activate the whole volume group:

[root@host202 backup]# vgchange -ay VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
9 logical volume(s) in volume group "VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923" now active
[root@host202 backup]# lvscan
ACTIVE '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT' [4.00 MB] inherit
ACTIVE '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0' [8.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-5ceec995-26ec-4986-931f-3d1804807650' [192.38 GB] inherit
ACTIVE '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-3a3a681d-c1c2-4636-a656-f9901343d33d' [92.19 GB] inherit
ACTIVE '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-a69ae385-924c-42e7-af38-2e38ffeaf851' [8.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-a3e49a56-2326-44d4-a136-3e4a28beded7' [6.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-2b1a8fca-90d7-4ff4-b12a-aa2c8b589ba0' [6.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-5e734d3c-2669-432d-8d38-4099d320375d' [8.00 MB] inherit
ACTIVE '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-db2d7fd2-018a-4719-ae73-046d402224c6' [6.02 GB] inherit

4.3 Phase 3: disk checks

The disk checks here are completely different from the experiment: XenServer stores its volumes in Microsoft's VHD format, so you must NOT run e2fsck on them, or the data is permanently lost!

Instead, use the dedicated tool vhd-util. If there are many volumes, the references include a check script; with only ten or so and TAB completion, typing the commands by hand is fast enough:

[root@host202 backup]# vhd-util check -n /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-db2d7fd2-018a-4719-ae73-046d402224c6
/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-db2d7fd2-018a-4719-ae73-046d402224c6 is valid
 
vhd-util check -n /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT
/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT appears invalid; dumping headers
VHD Footer Summary:
-------------------
Cookie : XSSMc
Features : (0x01000000)
File format version : Major: 15423, Minor: 30829
Data offset : 77913575334348
Timestamp : Tue Jul 4 22:41:33 1922
Creator Application : '.0" '
Creator version : Major: 16190, Minor: 2620
Creator OS : Unknown!
Original disk size : 7997602797382 MB (83860943508677 Bytes)
Current disk size : 634683573958 MB (66551396324759 Bytes)
Geometry : Cyl: 29801, Hds: 111, Sctrs: 110
: = 177671 MB (186301547520 Bytes)
Disk type : Unknown type!
Checksum            : 0x74686963|0xffffe4c8 (Bad!)
UUID : 6b0a093c-2f61-6c6c-6f63-6174696f6e3e
Saved state : Yes
Hidden : 60
VHD Header Summary:
-------------------
Cookie :
Data offset (unusd) : 0
Table offset : 0
Header version : 0x00000000
Max BAT size : 0
Block size : 0 (0 MB)
Parent name :
Parent UUID : 00000000-0000-0000-0000-000000000000
Parent timestamp : Sat Jan 1 00:00:00 2000
Checksum : 0x0|0xffffffff (Bad!)

The results were not encouraging: on my machine the first two volumes were damaged, MGT and one data disk of about 8 GB. Check further to see whether the partition table survived:

[root@host202 ~]# fdisk -l /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT

Disk /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT: 4 MB, 4194304 bytes
255 heads, 63 sectors/track, 0 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT doesn't contain a valid partition table

The partition table was gone. My research showed that MGT is created automatically when a VDI is plugged in, so I decided to recreate MGT and abandon the 8 GB disk, reasoning that the many 8 GB and 6 GB VHDs looked like automatically generated snapshots.
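With a dozen or more volumes, the check commands are easier to generate than to type. A sketch of the core of such a check script: it turns lvscan output into vhd-util check commands (exercised here on two captured lvscan lines from this host; on the real machine, pipe lvscan straight into the awk):

```shell
# Build one 'vhd-util check' command per VHD-* volume in lvscan output.
# Field 2 (split on single quotes) is the device path; MGT and renamed
# old-VHD-* volumes do not match the /\/VHD-/ pattern and are skipped.
vhd_checks=$(awk -F"'" '$2 ~ /\/VHD-/ { print "vhd-util check -n " $2 }' <<'EOF'
ACTIVE '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/MGT' [4.00 MB] inherit
ACTIVE '/dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0' [8.02 GB] inherit
EOF
)
echo "$vhd_checks"
```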

4.4 Phase 4: re-detecting the VHDs

The plan for re-detecting the disks is simple: forget the local storage repository, then introduce it again. Xen scans for disk images named VHD-*, so any unreadable VHD can be skipped simply by renaming it to old-VHD-*.
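The renaming itself is mechanical, so it can be generated rather than typed. A sketch that prints one lvrename per bad VHD uuid (the VG name is this host's; review the output before piping it to sh):

```shell
# Emit lvrename commands that move unreadable VHDs out of Xen's
# VHD-* scan pattern. List the damaged uuids in BAD.
VG=VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923
BAD="b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0"
renames=$(for u in $BAD; do
  echo "lvrename /dev/$VG/VHD-$u /dev/$VG/old-VHD-$u"
done)
echo "$renames"
```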

First gather the needed identifiers. Use pvscan to find the storage repository's uuid (the VG_XenStorage-… suffix below) and note it down:

[root@host202 ~]# pvscan
PV /dev/sda3 VG VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923 lvm2 [456.71 GB / 138.02 GB free]
Total: 1 [456.71 GB] / in use: 1 [456.71 GB] / in no VG: 0 [0 ]

Then find the disk id corresponding to /dev/sda3 on this host. Once more: mine is /dev/sda3, but yours may well not be; don't mix them up.

ls -al /dev/disk/by-id

Then find the host's uuid with the xe command; it is the uuid ( RO) field below:

[root@host202 ~]# xe host-list
uuid ( RO)                : 0bb221af-3f0b-44ff-9dba-2564fd7b8a11
name-label ( RW): host202
name-description ( RW): Default install of XenServer

Now check whether the host's SR is correct. The SR uuid shown below (4883c621-…) belongs to the auto-created volume group, not our restored one (0b3d830f-…), so the SR has to be rebuilt:

[root@host202 ~]# xe sr-list type=lvm
uuid ( RO)                : 4883c621-cad8-e6db-7d17-b33ac4eb1aaa
name-label ( RW): Local Storage
name-description ( RW):
host ( RO): host202
type ( RO): lvm
content-type ( RO):

The rebuild plan: unplug the PBD associated with the SR, forget the SR, create a new SR with the correct name (plugging in the VDIs regenerates a fresh MGT automatically), and then let XenServer scan out the remaining good VHDs.

First find the PBD associated with the SR:

xe pbd-list sr-uuid=4883c621-cad8-e6db-7d17-b33ac4eb1aaa

Then forget the SR:

 xe sr-forget uuid=4883c621-cad8-e6db-7d17-b33ac4eb1aaa

Then create the SR; this single step cost me a long time:

xe sr-create host-uuid=0bb221af-3f0b-44ff-9dba-2564fd7b8a11 content-type=user name-label="Local Storage" shared=false device-config:device=/dev/disk/by-id/scsi-3600605b00283629017a39a1525dc3ec8-part3 type=lvm

With the SR back, the unreadable VHD was renamed out of the scanner's way and the SR rescanned:
[root@host202 ~]# lvrename /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0 /dev/VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923/old-VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0
Renamed "VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0" to "old-VHD-b4df3ed3-d6fd-4276-832b-a3a0f1c70bd0" in volume group "VG_XenStorage-0b3d830f-b140-3fdf-f384-7c56f1e72923"

[root@host202 ~]# xe sr-scan uuid=0b3d830f-b140-3fdf-f384-7c56f1e72923

The same recovery was then carried out on a second host, host204, this time re-registering the existing volume group with xe sr-introduce instead of creating a new SR. The full transcript:
[root@host204 backup]# pvscan
PV /dev/sda3 VG VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23 lvm2 [456.71 GB / 135.02 GB free]
Total: 1 [456.71 GB] / in use: 1 [456.71 GB] / in no VG: 0 [0 ]
[root@host204 backup]# xe pbd-list sr-uuid=df81f6b1-22ae-3fad-8f24-7654baa4f385
uuid ( RO) : 4a8f5318-98b0-f932-2f98-950198ab6e28
host-uuid ( RO): 78c36865-1129-45f1-98ae-e0428625652e
sr-uuid ( RO): df81f6b1-22ae-3fad-8f24-7654baa4f385
device-config (MRO): device: /dev/disk/by-id/scsi-3600605b00281e90017a3c8ab1eaa9739-part3
currently-attached ( RO): true


[root@host204 backup]# xe host-list
uuid ( RO) : 92d731ad-3936-4cfd-8584-ecc16b425114
name-label ( RW): host205
name-description ( RW): avm


uuid ( RO) : 78c36865-1129-45f1-98ae-e0428625652e
name-label ( RW): host204
name-description ( RW): Default install of XenServer


uuid ( RO) : 0bb221af-3f0b-44ff-9dba-2564fd7b8a11
name-label ( RW): host202
name-description ( RW): Default install of XenServer

[root@host204 backup]# xe pbd-unplug uuid=4a8f5318-98b0-f932-2f98-950198ab6e28

[root@host204 backup]# xe sr-list host=host204
uuid ( RO) : df81f6b1-22ae-3fad-8f24-7654baa4f385
name-label ( RW): Local storage
name-description ( RW):
host ( RO): host204
type ( RO): lvm
content-type ( RO): user


uuid ( RO) : 04509a62-85b7-b5b0-95fe-6fcbfb14323f
name-label ( RW): DVD drives
name-description ( RW): Physical DVD drives
host ( RO): host204
type ( RO): udev
content-type ( RO): iso


uuid ( RO) : c44f02e6-5717-211a-eed0-f2ef74ee6e0d
name-label ( RW): Removable storage
name-description ( RW):
host ( RO): host204
type ( RO): udev
content-type ( RO): disk

[root@host204 backup]# xe sr-forget uuid=df81f6b1-22ae-3fad-8f24-7654baa4f385

[root@host204 backup]# xe sr-introduce uuid=844f33b1-36ce-a8a1-699f-6e53c2ca3a23 type=lvm name-label="Local Storage"
844f33b1-36ce-a8a1-699f-6e53c2ca3a23

[root@host204 backup]# lvscan
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/MGT' [4.00 MB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-e5163350-7a65-4424-9e98-91ed74b1771b' [8.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-3ad95f97-cc0a-4033-b832-ceeaac19ddf6' [192.38 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-e163f2b5-0d1a-4e2a-8bc9-0d9ab467a01a' [50.11 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-74bc6f50-c8a4-4f50-af0f-db463d2d0cad' [8.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-c6ad7774-1419-49aa-a984-0348e4848683' [6.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-52429a66-a0bf-410a-8858-f9e45c1e700a' [6.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-bb78fd95-7746-46a6-ab6a-fab578b7d64e' [6.02 GB] inherit
ACTIVE '/dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-5f27bce7-6cf5-4cce-a8d6-c77fbfa51774' [45.09 GB] inherit
[root@host204 backup]# lvrename /dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/MGT /dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/oldMGT
Renamed "MGT" to "oldMGT" in volume group "VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23"
[root@host204 backup]# lvrename /dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/VHD-e5163350-7a65-4424-9e98-91ed74b1771b /dev/VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23/bad-VHD-e5163350-7a65-4424-9e98-91ed74b1771b
Renamed "VHD-e5163350-7a65-4424-9e98-91ed74b1771b" to "bad-VHD-e5163350-7a65-4424-9e98-91ed74b1771b" in volume group "VG_XenStorage-844f33b1-36ce-a8a1-699f-6e53c2ca3a23"

[root@host204 backup]# xe pbd-create sr-uuid=844f33b1-36ce-a8a1-699f-6e53c2ca3a23 host-uuid=78c36865-1129-45f1-98ae-e0428625652e device-config:device=/dev/disk/by-id/scsi-3600605b00281e90017a3c8ab1eaa9739-part3
e45ba036-c59e-e3e3-d8b5-19be0cbfe336
[root@host204 backup]# xe pbd-plug uuid=e45ba036-c59e-e3e3-d8b5-19be0cbfe336

[root@host204 backup]# xe sr-scan uuid=844f33b1-36ce-a8a1-699f-6e53c2ca3a23

References

  1. Repairing a damaged ext2/3 superblock (experiment) http://blog.sina.com.cn/s/blog_4b51d4690100ndhm.html
  2. Recovering a Lost LVM Volume Disk http://www.novell.com/coolsolutions/appnote/19386.html
  3. XenServer Database Tool http://support.citrix.com/article/CTX121564
  4. VDI Metadata Corruption http://discussions.citrix.com/topic/300932-vdi-metadata-corruption/
  5. XenServer Metadata Corrupt Workaround http://virtualdesktopninja.com/VDINinja/2012/xenserver-metadata-corrupt-workaround/
  6. Check for consistency in the VHD metadata http://www.ganomi.com/wiki/index.php?title=Check_for_consistency_in_the_VHD_metadata
  7. Recover LVM volume groups and logical volumes without backups http://blog.adamsbros.org/2009/05/30/recover-lvm-volume-groups-and-logical-volumes-without-backups/
  8. VDI is not available (XenServer 5.6 FP1) http://discussions.citrix.com/topic/282493-vdi-is-not-available-xenserver-56fp1/page-2
  9. http://rritw.com/a/bianchengyuyan/C__/20130814/411428.html
  10. http://support.citrix.com/article/CTX136342
  11. http://help.31dns.net/index.php/category/xenserver/
  12. How to reinstall XenServer and preserve virtual machines on a local disk http://golrizs.com/2012/01/how-to-reinstall-xenserver-and-preserve-virtual-machines-on-a-local-disk/
  13. http://www.xenme.com/1796
  14. OpenStack: XenServer-type image to volume http://blogs.citrix.com/2013/06/27/openstack-xenserver-type-image-to-volume/
  15. Data recovery: finding VHD files http://natesbox.com/blog/data-recovery-finding-vhd-files/
  16. Recovering files from an LVM or ext3 partition with TestDisk http://itknowledgeexchange.techtarget.com/linux-lotus-domino/recovering-files-from-an-lvm-or-ext3-partition-with-testdisk/
  17. http://zhangyu.blog.51cto.com/197148/1095637
  18. MBR and GPT partition structures explained http://dengqi.blog.51cto.com/5685776/1348951
  19. The FAT32 file system in detail http://dengqi.blog.51cto.com/5685776/1349327
  20. NTFS file system internals http://dengqi.blog.51cto.com/5685776/1351300
  21. NTFS data recovery: parsing the partition structure http://blog.csdn.net/jha334201553/article/details/9088921
  22. Troubleshooting Disks and File Systems http://technet.microsoft.com/en-us/library/bb457122.aspx
  23. http://support.microsoft.com/kb/234048
  24. Logical Disk Management http://www.ntfs.com/ldm.htm
  25. Windows spanned disks (LDM) restoration with Linux https://stackoverflow.com/questions/8427372/windows-spanned-disks-ldm-restoration-with-linux
  26. http://uranus.chrysocome.net/explore2fs/es2fs.htm
  27. http://blog.csdn.net/ljianhui/article/details/8604140
  28. How to recover partitions from an external hard disk https://superuser.com/questions/693045/how-to-recover-partitions-from-an-external-hard-disk
  29. http://www.r-tt.com/Articles/External_Disk_Recovery/
  30. Mounting a raw partition file made with dd or dd_rescue in Linux http://major.io/2010/12/14/mounting-a-raw-partition-file-made-with-dd-or-dd_rescue-in-linux/