Ubuntu Documentation
Introduction
RAID is a method of using multiple hard drives to act as one. There are two purposes of RAID:
- Expand drive capacity: RAID 0. If you have 2 x 500 GB HDDs, the total space becomes 1 TB.
- Prevent data loss in case of drive failure: for example RAID 1, RAID 5, RAID 6, and RAID 10.
There are three ways to create a RAID:
- Software-RAID: the RAID is created by software.
- Hardware-RAID: a dedicated controller is used to build the RAID. Hardware RAID is generally faster, does not place load on the CPU, and can be used with any OS.
- FakeRAID: since RAID hardware is very expensive, many motherboard manufacturers use multi-channel controllers with special BIOS features to perform RAID. This is a form of software RAID using special drivers, and it is not necessarily faster than true software RAID. Read FakeRaidHowto for details.
Requirements
- If you're building a server, the server install ISO includes the necessary options.
- If you're building a desktop, you need the "Alternate" install ISO for Ubuntu. Read Getting Ubuntu Alternate Install disk, How to do an Ubuntu Alternate Install, and How to Burn an ISO.
Installing via the GUI
Install Ubuntu until you get to partitioning the disks
Partitioning the disk
Warning: the /boot filesystem cannot use any softRAID level other than 1 with the stock Ubuntu bootloader. If you want to use some other RAID level for most things, you’ll need to create separate partitions and make a RAID1 device for /boot.
Warning: this will remove all data on hard drives.
1. Select "Manual" as your partition method
2. Select your hard drive, and agree to "Create a new empty partition table on this device?"
3. Select the "FREE SPACE" on the first drive, then select "Automatically partition the free space"
4. Ubuntu will create two partitions: / and swap
5. On the / partition, select "bootable flag" and set it to "on"
6. Repeat steps 2 to 5 for the other hard drive
Configuring the RAID
- Once you have completed your partitioning, select "Configure Software RAID" on the main "Partition Disks" page
- Select "Yes"
- Select "Create new MD drive"
- Select RAID type: RAID 0, RAID 1, RAID 5 or RAID 6
- Number of devices: RAID 0 and RAID 1 need at least 2 drives, RAID 5 at least 3, and RAID 6 at least 4.
- Number of spare devices. Enter 0 if you have no spare drive.
- Select which partitions to use.
- Repeat the "Create new MD drive" steps for each pair of partitions you have created.
- Filesystem and mount points will need to be specified for each RAID device. By default they are set to "do not use".
- Once done, select "Finish".
Boot Loader
In case your second HDD won't boot on its own, simply install GRUB on the other drive too:
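For example, assuming the second drive is /dev/sdb:
sudo grub-install /dev/sdb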
Boot from Degraded Disk
If the default HDD fails, the system will ask whether you want to boot from a degraded disk. If your server is located in a remote area, the best practice may be to configure this to occur automatically:
- edit /etc/initramfs-tools/conf.d/mdadm
- change "BOOT_DEGRADED=false" to "BOOT_DEGRADED=true" (a sketch of this edit follows after the list). Note: this option is reported to no longer be supported from mdadm 3.2.5-5ubuntu3 / Ubuntu 14.04 onwards.
- Additionally, this can be specified on the kernel boot line with the bootdegraded=[true|false] option.
- You can also run dpkg-reconfigure mdadm instead of editing the file by hand.
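A minimal sketch of that change from a shell, assuming the file exists with its default contents; regenerate the initramfs afterwards so the setting takes effect:
sudo sed -i 's/BOOT_DEGRADED=false/BOOT_DEGRADED=true/' /etc/initramfs-tools/conf.d/mdadm
sudo update-initramfs -u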
Verify the RAID
- shut-down your server
- remove the power and data cables from your first drive
- start your server and see if your server can boot from a degraded disk.
Troubleshooting
Swap space doesn’t come up, error message in dmesg
Provided the RAID is working fine this can be fixed with:
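One fix that usually works is to regenerate the initramfs for all installed kernels so that it picks up the current array information (a hedged suggestion; check the arrays with cat /proc/mdstat first):
sudo update-initramfs -u -k all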
Using the mdadm CLI
For those that want full control over the RAID configuration, the mdadm CLI provides this.
Checking the status of your RAID
Two useful commands to check the status are:
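cat /proc/mdstat
sudo mdadm --detail /dev/md*
(Both are standard commands; the second is shown again with an explanation further below.) On a machine like the one described below, cat /proc/mdstat produces output roughly of this shape (an illustrative reconstruction rather than a capture from a real system; the block counts for md0 and md6 are placeholders, while the md5 values follow the description):
Personalities : [raid1] [raid6] [raid4] [raid5]
md5 : active raid1 sda7[0] sdb7[1]
      62685504 blocks [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
      497856 blocks [2/2] [UU]
md6 : active raid5 sdc1[0] sde1[1] sdd1[2]
      488382976 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>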
From this information you can see that the available personalities on this machine are raid1, raid6, raid4 and raid5, which means this machine is set up to use RAID devices in any of those configurations.
You can also see from the three example md devices that there are two raid1 mirrored devices, md0 and md5. md5 is a raid1 array made up of /dev/sda partition 7 and /dev/sdb partition 7, containing 62685504 blocks, with 2 out of 2 disks available and both in sync.
The same can be said of md0, only it is smaller (as you can see from the blocks value) and is made up of /dev/sda1 and /dev/sdb1.
md6 is different: it is a raid5 array, striped across three disks, /dev/sdc1, /dev/sde1 and /dev/sdd1, with a 64k "chunk" (write) size. Algorithm 2 describes the write pattern, which is "left disk to right disk" across the array. You can see that all three disks are present and in sync.
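The second command prints per-array detail, for example:
sudo mdadm --detail /dev/md*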
Replace * with the number of the md device you want to inspect.
Disk Array Operation
Note: You can add or remove disks, or mark them as faulty, without stopping an array.
1. To stop an array, type:
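For example (the array must be unmounted and not otherwise in use):
sudo mdadm --stop /dev/md0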
Where /dev/md0 is the array device.
2. To remove a disk from an array:
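For example (adjust the device names; a disk that is still healthy must first be marked faulty with mdadm --fail before mdadm will let you remove it):
sudo mdadm --remove /dev/md0 /dev/sda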
Where /dev/md0 is the array device and /dev/sda is the faulty disk.
3. Add a disk to an array:
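For example (adjust the device names to your setup):
sudo mdadm --add /dev/md0 /dev/sda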
Where /dev/md0 is the array device and /dev/sda is the new disk. Note: This is not the same as «growing» the array!
4. Start an Array, to reassemble (start) an array that was previously created:
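A typical invocation; it assembles the arrays defined in /etc/mdadm/mdadm.conf, or scans all partitions if none are defined:
sudo mdadm --assemble --scan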
mdadm will scan for defined arrays and start assembling them.
5. To track the status of the array as it gets started:
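For example (watch re-runs the command every two seconds until you interrupt it):
watch cat /proc/mdstat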
Known bugs
Ubuntu releases starting with 12.04 do not support nested RAID levels such as 1+0 or 5+0, due to an unresolved issue: https://bugs.launchpad.net/ubuntu/+source/mdadm/+bug/1171945
Resources
https://wiki.ubuntu.com/HotplugRaid Keeping your data synced and mirrored on external drives.
Install Linux Mint 18.3 on software raid (mdraid) device
Published on May 26th 2018 — Listed in Hardware Linux
When I re-vamped my computer (bought in 2011) a few days ago, I replaced the two internal 750GB hard drives with three SSD’s.
- Drive 1 (/dev/sda) 500GB Samsung 850 Evo
- Drive 2 (/dev/sdb) 500GB Samsung 850 Evo
- Drive 3 (/dev/sdc) 120GB ADATA SSD S510
I wanted to use both Samsung drives as a raid-1 for my main installation, Linux Mint 18.3.
The standalone SSD would be used for a Windows 7 installation for dual booting.
When I launched the Linux Mint 18.3 installation, I couldn't find any options to create a software raid. So I created the arrays manually (with mdadm) and restarted the installer. At the end of the installation the installer asks to reboot. That's what I did, only to end up at the grub loader, which couldn't find any operating system. Great :-/
After some trial and error, I finally found a way that works. If you want to install Linux Mint 18.3 on a software raid, follow these steps. Make sure you are using the correct device names; in my case they were /dev/sda and /dev/sdb.
1) Create the partitions on /dev/sda
I chose a very simple approach here with two partitions. The main partition fills almost the whole disk, leaving only 4GB for the second partition (swap).
# sfdisk -l /dev/sda
Disk /dev/sda: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4f1db047
Device Boot Start End Sectors Size Id Type
/dev/sda1 * 2048 968384511 968382464 461.8G 83 Linux
/dev/sda2 968384512 976773119 8388608 4G 82 Linux swap / Solaris
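The listing above shows the end result; the tool used to create it is not shown. If you want to script this step instead of partitioning interactively, one way to get an equivalent layout is parted (a sketch assuming an empty disk; adjust the device name and swap size to your hardware):
# parted -s /dev/sda mklabel msdos
# parted -s /dev/sda mkpart primary ext4 1MiB -4GiB
# parted -s /dev/sda mkpart primary linux-swap -4GiB 100%
# parted -s /dev/sda set 1 boot on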
2) Copy the partition table from SDA to SDB
The following command dumps (-d) the partition table from /dev/sda and inserts it into /dev/sdb:
# sfdisk -d /dev/sda | sfdisk /dev/sdb
Checking that no-one is using this disk right now ... OK
Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x16a9c579
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x4f1db047.
Created a new partition 1 of type ‘Linux’ and of size 461.8 GiB.
/dev/sdb2: Created a new partition 2 of type ‘Linux swap / Solaris’ and of size 4 GiB.
/dev/sdb3:
New situation:
Device Boot Start End Sectors Size Id Type
/dev/sdb1 * 2048 968384511 968382464 461.8G 83 Linux
/dev/sdb2 968384512 976773119 8388608 4G 82 Linux swap / Solaris
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
3) Create the raid devices
First I created /dev/md0, which will hold the Linux Mint installation:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: /dev/sda1 appears to contain an ext2fs file system
size=484191232K mtime=Fri May 25 15:31:47 2018
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/sdb1 appears to be part of a raid array:
level=raid1 devices=2 ctime=Fri May 25 12:50:09 2018
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Then create /dev/md1 which will be used as swap partition:
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: /dev/sdb2 appears to be part of a raid array:
level=raid1 devices=2 ctime=Fri May 25 12:50:23 2018
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
4) Wait for the sync to be completed
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
4190208 blocks super 1.2 [2/2] [UU]
resync=DELAYED
md0 : active raid1 sdb1[1] sda1[0]
484060160 blocks super 1.2 [2/2] [UU]
[>....................]  resync =  0.7% (3421824/484060160) finish=39.7min speed=201283K/sec
bitmap: 4/4 pages [16KB], 65536KB chunk
Yes, patience you must have.
# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sdb2[1] sda2[0]
4190208 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sdb1[1] sda1[0]
484060160 blocks super 1.2 [2/2] [UU]
bitmap: 0/4 pages [0KB], 65536KB chunk
5) Format the raid device /dev/md0
I will be using an ext4 filesystem, so:
# mkfs.ext4 /dev/md0
mke2fs 1.42.13 (17-May-2015)
Discarding device blocks: done
Creating filesystem with 121015040 4k blocks and 30261248 inodes
Filesystem UUID: 8f662d46-4759-4b81-b879-eb60dd643f41
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
6) Launch the installer
But launch it from the command line; the -b flag tells the Ubiquity installer to skip installing the boot loader (we will install GRUB manually later):
# ubiquity -b
7) In the installer
When the installer asks about the installation type, select "Something else":
The raid devices /dev/md0 (with ext4 as type) and /dev/md1 (with swap as type) should be shown in the list:
Double-click on the /dev/md0 row and choose to use the partition as ext4, mounted as /.
Double-click on the /dev/md1 row and choose to use the partition as swap.
Important: Make sure the other swap partitions (on the SDA and SDB drives) are set to "do not use this partition". Otherwise the installer will fail and crash.
Make sure the /dev/md0 row is selected, then click on "Install now":
8) At the end of the installation
Very important: DO NOT click on "Restart Now". Click on "Continue Testing" instead. Otherwise you will run into the same boot failure I described at the beginning of this article.
9) Prepare the Linux Mint installation to chroot into
Launch a terminal window and mount /dev/md0:
# mount /dev/md0 /mnt
Also make sure you are mounting sys and proc file systems as bind mounts into /mnt:
# for i in /dev /dev/pts /sys /proc; do mount --bind $i /mnt/$i; done
In case the resolv.conf inside the Linux Mint installation is empty, enter a nameserver manually:
# cat /mnt/etc/resolv.conf
mint
# echo "nameserver 1.1.1.1" > /mnt/etc/resolv.conf
Now chroot into your Linux Mint installation, mounted as /mnt:
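For example (assuming bash, which Linux Mint provides in the installed system):
# chroot /mnt /bin/bash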
10) Fix grub in the terminal
Now install the package mdadm into the Linux Mint installation. I will show the full output here:
mint / # apt-get install mdadm
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
default-mta | mail-transport-agent dracut-core
The following NEW packages will be installed:
mdadm
0 upgraded, 1 newly installed, 0 to remove and 326 not upgraded.
Need to get 394 kB of archives.
After this operation, 1,208 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 mdadm amd64 3.3-2ubuntu7.6 [394 kB]
Fetched 394 kB in 0s (1,491 kB/s)
Preconfiguring packages ...
Selecting previously unselected package mdadm.
(Reading database ... 199757 files and directories currently installed.)
Preparing to unpack .../mdadm_3.3-2ubuntu7.6_amd64.deb ...
Unpacking mdadm (3.3-2ubuntu7.6) ...
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for doc-base (0.10.7) ...
Processing 4 added doc-base files...
Registering documents with scrollkeeper...
Processing triggers for man-db (2.7.5-1) ...
Setting up mdadm (3.3-2ubuntu7.6) ...
Generating mdadm.conf... done.
update-initramfs: deferring update (trigger activated)
Generating grub configuration file ...
Warning: Setting GRUB_TIMEOUT to a non-zero value when GRUB_HIDDEN_TIMEOUT is set is no longer supported.
Found linux image: /boot/vmlinuz-4.10.0-38-generic
Found initrd image: /boot/initrd.img-4.10.0-38-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
ERROR: isw: Could not find disk /dev/sdd in the metadata
ERROR: isw: Could not find disk /dev/sdd in the metadata
ERROR: isw: Could not find disk /dev/sdd in the metadata
ERROR: isw: Could not find disk /dev/sdd in the metadata
ERROR: isw: Could not find disk /dev/sdd in the metadata
ERROR: isw: Could not find disk /dev/sdd in the metadata
ERROR: isw: Could not find disk /dev/sdd in the metadata
ERROR: isw: Could not find disk /dev/sdd in the metadata
File descriptor 3 (pipe:[1799227]) leaked on lvs invocation. Parent PID 29155: /bin/sh
/run/lvm/lvmetad.socket: connect failed: No such file or directory
WARNING: Failed to connect to lvmetad. Falling back to internal scanning.
Found Windows 7 (loader) on /dev/sdd1
done
Running in chroot, ignoring request.
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Processing triggers for systemd (229-4ubuntu21) ...
Processing triggers for ureadahead (0.100.0-19) ...
Processing triggers for initramfs-tools (0.122ubuntu8.9) ...
update-initramfs: Generating /boot/initrd.img-4.10.0-38-generic
Warning: No support for locale: en_US.utf8
You can ignore the errors about drive /dev/sdd (it was an additional USB drive, nothing to do with the installation).
Very important here: when mdadm was installed into the Linux Mint installation, a new kernel initramfs was created, the grub config was generated, and mdadm.conf was written. This step (installing mdadm into the target system so that it is aware of living on a Linux software raid) was obviously skipped by the installer.
11) Verification
After mdadm was installed, a bunch of necessary files were created. Let’s start with grub:
mint / # ll /boot/grub/
total 2368
drwxr-xr-x 2 root root 4096 May 26 08:08 ./
drwxr-xr-x 3 root root 4096 May 26 08:08 ../
-rw-r--r-- 1 root root 712 Nov 24 2017 gfxblacklist.txt
-r--r--r-- 1 root root 9734 May 26 08:08 grub.cfg
-rw-r--r-- 1 root root 2398585 Nov 24 2017 unicode.pf2
grub.cfg was only created once mdadm was installed. No wonder booting was not possible without this manual fix.
What does it contain?
mint / # cat /boot/grub/grub.cfg
#
# DO NOT EDIT THIS FILE
#
# It is automatically generated by grub-mkconfig using templates
# from /etc/grub.d and settings from /etc/default/grub
#
[...]
insmod part_msdos
insmod part_msdos
insmod diskfilter
insmod mdraid1x
insmod ext2
set root='mduuid/650864013e2d41cf1f2acfeafc5c2bd7'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='mduuid/650864013e2d41cf1f2acfeafc5c2bd7' 8f662d46-4759-4b81-b879-eb60dd643f41
else
search --no-floppy --fs-uuid --set=root 8f662d46-4759-4b81-b879-eb60dd643f41
fi
font="/usr/share/grub/unicode.pf2"
fi
[...]
menuentry 'Linux Mint 18.3 Cinnamon 64-bit' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-8f662d46-4759-4b81-b879-eb60dd643f41' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_msdos
insmod part_msdos
insmod diskfilter
insmod mdraid1x
insmod ext2
set root='mduuid/650864013e2d41cf1f2acfeafc5c2bd7'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='mduuid/650864013e2d41cf1f2acfeafc5c2bd7' 8f662d46-4759-4b81-b879-eb60dd643f41
else
search --no-floppy --fs-uuid --set=root 8f662d46-4759-4b81-b879-eb60dd643f41
fi
linux /boot/vmlinuz-4.10.0-38-generic root=UUID=8f662d46-4759-4b81-b879-eb60dd643f41 ro quiet splash $vt_handoff
initrd /boot/initrd.img-4.10.0-38-generic
}
submenu 'Advanced options for Linux Mint 18.3 Cinnamon 64-bit' $menuentry_id_option 'gnulinux-advanced-8f662d46-4759-4b81-b879-eb60dd643f41' {
menuentry 'Linux Mint 18.3 Cinnamon 64-bit, with Linux 4.10.0-38-generic' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.10.0-38-generic-advanced-8f662d46-4759-4b81-b879-eb60dd643f41' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_msdos
insmod part_msdos
insmod diskfilter
insmod mdraid1x
insmod ext2
set root='mduuid/650864013e2d41cf1f2acfeafc5c2bd7'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='mduuid/650864013e2d41cf1f2acfeafc5c2bd7' 8f662d46-4759-4b81-b879-eb60dd643f41
else
search --no-floppy --fs-uuid --set=root 8f662d46-4759-4b81-b879-eb60dd643f41
fi
echo 'Loading Linux 4.10.0-38-generic ...'
linux /boot/vmlinuz-4.10.0-38-generic root=UUID=8f662d46-4759-4b81-b879-eb60dd643f41 ro quiet splash $vt_handoff
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-4.10.0-38-generic
}
menuentry 'Linux Mint 18.3 Cinnamon 64-bit, with Linux 4.10.0-38-generic (upstart)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.10.0-38-generic-init-upstart-8f662d46-4759-4b81-b879-eb60dd643f41' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_msdos
insmod part_msdos
insmod diskfilter
insmod mdraid1x
insmod ext2
set root='mduuid/650864013e2d41cf1f2acfeafc5c2bd7'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='mduuid/650864013e2d41cf1f2acfeafc5c2bd7' 8f662d46-4759-4b81-b879-eb60dd643f41
else
search --no-floppy --fs-uuid --set=root 8f662d46-4759-4b81-b879-eb60dd643f41
fi
echo 'Loading Linux 4.10.0-38-generic ...'
linux /boot/vmlinuz-4.10.0-38-generic root=UUID=8f662d46-4759-4b81-b879-eb60dd643f41 ro quiet splash $vt_handoff init=/sbin/upstart
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-4.10.0-38-generic
}
menuentry 'Linux Mint 18.3 Cinnamon 64-bit, with Linux 4.10.0-38-generic (recovery mode)' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-4.10.0-38-generic-recovery-8f662d46-4759-4b81-b879-eb60dd643f41' {
recordfail
load_video
insmod gzio
if [ x$grub_platform = xxen ]; then insmod xzio; insmod lzopio; fi
insmod part_msdos
insmod part_msdos
insmod diskfilter
insmod mdraid1x
insmod ext2
set root='mduuid/650864013e2d41cf1f2acfeafc5c2bd7'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint='mduuid/650864013e2d41cf1f2acfeafc5c2bd7' 8f662d46-4759-4b81-b879-eb60dd643f41
else
search --no-floppy --fs-uuid --set=root 8f662d46-4759-4b81-b879-eb60dd643f41
fi
echo 'Loading Linux 4.10.0-38-generic ...'
linux /boot/vmlinuz-4.10.0-38-generic root=UUID=8f662d46-4759-4b81-b879-eb60dd643f41 ro recovery nomodeset
echo 'Loading initial ramdisk ...'
initrd /boot/initrd.img-4.10.0-38-generic
}
}
As the root device (set root), an mdraid UUID (mduuid/650864013e2d41cf1f2acfeafc5c2bd7) is used. Let's double-check that against the entries in mdadm.conf:
mint / # cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md/0 metadata=1.2 UUID=65086401:3e2d41cf:1f2acfea:fc5c2bd7 name=mint:0
ARRAY /dev/md/1 metadata=1.2 UUID=ad052a0a:eb1f9198:ec842848:215b650b name=mint:1
ARRAY metadata=imsm UUID=0b24ad7f:9b251541:a98a3748:f6333faa
ARRAY /dev/md/RAID1 container=0b24ad7f:9b251541:a98a3748:f6333faa member=0 UUID=aaa62640:f0d57fc8:6c097c8f:547b9c8f
# This file was auto-generated on Sat, 26 May 2018 08:08:42 +0200
# by mkconf $Id$
The UUID for /dev/md/0 looks familiar ;-). It’s the same UUID as used in the grub config. So far so good.
Let’s check /etc/fstab, too:
mint / # cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use ‘blkid’ to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/md0 during installation
UUID=8f662d46-4759-4b81-b879-eb60dd643f41 / ext4 errors=remount-ro 0 1
# swap was on /dev/md1 during installation
UUID=b8277371-01fb-4aa3-bec1-9c0a4295deea none swap sw 0 0
Here another UUID is used (because it is a filesystem UUID, not an mdraid UUID). We can verify these by checking /dev/disk/by-uuid:
mint / # ls -la /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 180 May 26 08:05 .
drwxr-xr-x 6 root root 120 May 25 18:59 ..
lrwxrwxrwx 1 root root 10 May 26 08:05 101EC9371EC9171E -> ../../sdd2
lrwxrwxrwx 1 root root 10 May 26 08:05 1EF881D5F881AB99 -> ../../sdc1
lrwxrwxrwx 1 root root 9 May 26 08:05 2017-11-24-13-25-42-00 -> ../../sr0
lrwxrwxrwx 1 root root 9 May 26 08:08 8f662d46-4759-4b81-b879-eb60dd643f41 -> ../../md0
lrwxrwxrwx 1 root root 9 May 26 08:08 b8277371-01fb-4aa3-bec1-9c0a4295deea -> ../../md1
lrwxrwxrwx 1 root root 10 May 26 08:05 CEAABB7CAABB601F -> ../../sdd1
lrwxrwxrwx 1 root root 10 May 26 08:05 E646DD2A46DCFBED -> ../../sdd3
Both UUIDs used in fstab (for the root partition and for swap) appear here.
12) Grub install on the physical drives
Now that everything looks in order, we can install grub to /dev/sda and /dev/sdb.
mint / # grub-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.
mint / # grub-install /dev/sdb
Installing for i386-pc platform.
Installation finished. No error reported.
Very good, no errors.
Now I exited the chroot environment and rebooted the machine.
mint / # exit
# reboot
13) Booting
And finally, success: Linux Mint 18.3 now boots from the software raid-1 /dev/md0 device.
Update June 11th 2018:
After I installed Linux Mint on the raid device, I discovered that Windows 7 (on /dev/sdc) did not boot anymore and failed with the following error:
BOOTMGR is missing
Press Ctrl+Alt+Del to restart
All attempts to repair the Windows boot with the Windows 7 DVD failed. I had to physically unplug the two Samsung SSDs and then boot from the Windows 7 DVD again.
This time the Windows 7 repair was able to fix the boot loader. It seems the Windows 7 repair requires the Windows drive to be detected as the first drive (/dev/sda); only then does the repair work.
After I was able to boot into my Windows 7 installation on the ADATA SSD again, I replugged the two Samsung SSDs. This made the Windows drive /dev/sdc again, but with the fixed boot loader I can now boot into Windows 7 from the Grub menu.
Comments (newest first)
anonymous wrote on Apr 15th, 2020:
thank you very much for your guide, it is very helpful!
Junior SMS from Goiânia/GO — Brazil wrote on Mar 21st, 2020:
Dear Claudio, after hours of searching and a number of wasted installations, your guide just solved my problem! So, thank you SO MUCH for your time writing and publishing it.
Claudio from Sweden wrote on Apr 17th, 2019:
Júnior, become root (sudo su -) and try it without the sudo command. Or install the "sudo" package.
Júnior Lourenção from Foz do Iguaçu/Paraná/Brasil wrote on Apr 17th, 2019:
The command: # for i in /dev /dev/pts /sys /proc; do mount --bind $i /mnt/$i; done
returned an error:
sudo for i in /dev /dev/pts /sys /proc; o mount --bind $i /mnt/$i;
sudo: for: command not found
o: command not found
How should I proceed?
I look forward to your reply.
Thank you.