Today I am working on setting up a BackupPC server to take centralized remote backups of some of our other internal servers on the cheap.
I already had BackupPC installed and the basics configured, but I needed to add a new drive to the system (for additional backup data storage) and I also needed to set up a new NIC connection. My Ubuntu server is running on Microsoft Hyper-V 3.0 on a Server 2012 host machine, so adding all the new hardware was as simple as a few clicks.
Normally I am a command-line guy, but this server is going to be managed on an ongoing basis by folks who are less Linux savvy, so I wanted to install some additional software that would make their lives easier. To that end, I am using Webmin.
While adding the additional storage to my VM, I ran into some headaches related to formatting GPT disks larger than 2 TB on a Linux guest under Hyper-V.
Sounds like a very specific use case? I think it is quickly becoming more common as A.) storage gets cheaper and therefore larger, and B.) Microsoft Hyper-V sees more adoption, since it is now decently featured and attractively priced for people with existing Windows infrastructure. Hopefully this article will help you avoid the trouble I ran into when setting up a new large disk on an Ubuntu Hyper-V VM…
First, I am using Webmin to partition my newly attached drive, and I suggest you do the same if you are following along. Not everyone where I work is as thrilled about a CLI-only server as I am, and I would guess it is a similar story where you work. So, as a courtesy to your co-workers :), take a look at how to install Webmin here:
http://www.ubuntugeek.com/how-to-install-webmin-on-ubuntu-13-04-raring-ringtail-server.html
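If you would rather not click through, the gist at the time was to add the Webmin apt repository and install the package. The repository line and key URL below are what Webmin published back then, so double-check them against the current instructions before running anything:

echo "deb http://download.webmin.com/download/repository sarge contrib" | sudo tee -a /etc/apt/sources.list
wget http://www.webmin.com/jcameron-key.asc
sudo apt-key add jcameron-key.asc
sudo apt-get update
sudo apt-get install webmin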
If you are partitioning your hard drive and want a high-level overview of the differences between the EXT2, EXT3, and EXT4 file systems, then I suggest you take a look at this very good and succinct treatment of the subject here:
http://www.thegeekstuff.com/2011/05/ext2-ext3-ext4/
Of course life wasn’t simple when I tried to work with my new attached storage… Webmin kept failing to create a file system on my new disk. Ugh!
After digging around I found out why here:
http://social.technet.microsoft.com/Forums/…
Apparently, VHDX files larger than 2 TB (which have to use GPT because of the size) will not work with a normal mkfs run; you have to add a flag to make it work. The -K flag tells mkfs.ext4 not to attempt to discard blocks on the device at format time, which is apparently what the default run chokes on with these large virtual disks. So, back to the command line… once I had the drive partitioned as one big partition, I ran the following command to create my file system:
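sudo mkfs.ext4 -K /dev/sdb1

(Substitute your own partition's device file for /dev/sdb1; that is just the name from my setup, as noted in the step-by-step below.)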
Once that was done, I could return to Webmin and mount my new volume. I created a new root-level folder called /backupdata, as that would be obvious and simple enough for BackupPC to reference. Once it was mounted in Webmin, I hit the command line again and checked my mounted volume list with the following command:
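df -h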
And it showed up as 2TB! Excellent!
Most of you are probably scratching your heads, as I went through a lot before I got to that point. So here is the full procedure, using Webmin and the command line, to get a new drive larger than 2 TB attached and formatted:
- Create a new VHDX File in Hyper-V > 2TB
- Attach the Disk to your Linux VM and start the VM
- Open the Webmin Console and go to Hardware –> Partitions on Local Disks
- You should see a new SCSI device with 0 partitions. Click on the device
- Click “Wipe Partitions”
- Select “GPT (For 2T or larger disks)” and click “Wipe and Re-Label”
- Click “Add Primary Partition”
- Select type “Linux,” enter a Partition Name and click “create”
- From your server’s command line run the command “sudo partprobe”
- Then, back in Webmin, you should still be on the “Edit Disk Partitions” screen. Click on the new disk partition and note the information next to “Device File” (e.g. /dev/sdb1)
- From the command line, run the command “sudo mkfs.ext4 -K /dev/sdb1”, substituting your partition’s “Device file” name
- Once the command finishes running, go back to Webmin and, under “Hardware” on the left, click on “Partitions on Local Disks” to refresh the page. Then click on your device and, finally, click on your partition number to get back to your partition screen.
- Next to “Mount Partition On,” enter the folder location where you want the storage mounted (e.g. “/backupdata”) and set the file system type to EXT4. Then click the “Mount Partition On:” button.
- On the next screen, you can review everything (I left it all default) and click “create”
- If everything was correct, it should drop you onto the “Disk and Network Filesystems” screen, and you should see your new storage mounted at the location you selected. You can now use it! (An example of the resulting /etc/fstab entry is shown just after this list.)
- From the command line, you can double-check with the command “df -h” to make sure it shows up
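For reference, Webmin records the mount in /etc/fstab so that it persists across reboots. The entry it writes should look roughly like the line below; this assumes the /dev/sdb1 device and /backupdata mount point from my example, and Webmin may record a UUID or label instead of the raw device name:

/dev/sdb1   /backupdata   ext4   defaults   0   2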
During the mkfs.ext4 run, my dynamically expanding VHDX file grew to roughly 35 GB for a 2.2 TB maximum-size disk.
—–
After the above was done, I needed to move my existing BackupPC data folder to the new drive. I came across some really erroneous advice on how to do this, which amounted to an over-simplified use of symlinks that wouldn’t actually accomplish anything. After some further searching, however, I came across a page that was quite helpful:
I had several gigs’ worth of backups already in existence that had to be moved, and this procedure was the way to go. Before I started, though, I stopped the backuppc service with the following command:
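On my Ubuntu install, BackupPC runs from the stock init script, so stopping it was something along the lines of:

sudo service backuppc stop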
I then dove into copying my existing files over:
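The exact commands came from the page linked above; the important part is preserving ownership, permissions and, critically for the BackupPC pool, hard links. Assuming the default Ubuntu data directory of /var/lib/backuppc and my new /backupdata mount point, that looks something like:

sudo cp -a /var/lib/backuppc/. /backupdata/

(rsync -aH would also work here and can be restarted if it gets interrupted.)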
Additional References: