Server Fault: Asked by ALittleStitiousPuppet on December 27, 2020
I am trying to mount a second disk on a running Linux (RHEL 7.8) AWS instance, install a customized bootable Linux environment onto it, and then convert that disk into an AMI so we can boot new Linux instances from it. Since this is in the cloud, I can't boot an ISO or run a kickstart to follow the standard installation procedures. I realize this is a roundabout way to do it, but let's just say it's a requirement.
I create a partition on the disk, create the XFS filesystem, mount it, and install the Base and Core package groups to it, along with the kernel and grub2 packages. No issues there. I run grub2-install against the new disk, then chroot into its mount path and run grub2-mkconfig -o /boot/grub2/grub.cfg. I made sure to use the new disk's UUID in its fstab, and verified that the same UUID appears in the generated grub configuration.
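For reference, the build steps looked roughly like the following; the device name /dev/xvdf and the mount point /mnt/newroot are just placeholders for illustration:

    # Partition the new disk and create the XFS filesystem
    parted -s /dev/xvdf mklabel msdos mkpart primary xfs 1MiB 100%
    mkfs.xfs /dev/xvdf1
    mount /dev/xvdf1 /mnt/newroot

    # Install the package groups plus the kernel and bootloader
    yum groupinstall -y --installroot=/mnt/newroot Base Core
    yum install -y --installroot=/mnt/newroot kernel grub2

    # Bind-mount the pseudo-filesystems so grub2-mkconfig works in the chroot
    for d in dev proc sys; do mount --bind /$d /mnt/newroot/$d; done

    # Install GRUB to the new disk's MBR and generate its config from the chroot
    grub2-install --boot-directory=/mnt/newroot/boot /dev/xvdf
    chroot /mnt/newroot grub2-mkconfig -o /boot/grub2/grub.cfg

    # This UUID is what I put in the root entry of /mnt/newroot/etc/fstab
    blkid /dev/xvdf1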
I then shut down the instance, snapshot the volume, and turn that snapshot into an AMI. No issues there. Booting an instance from it, I can get to a login prompt, but that's as far as I've gotten. It seems sshd never starts, so I can't connect; the system log just shows me a login prompt. No matter what I do, I can't get the OpenSSH server to start and respond. I did verify that the symlink systemd uses to enable services is in place for sshd.

I'm assuming I'm missing some configuration, or additional packages, needed to get a functioning Linux instance. There doesn't seem to be much information out there on how to do something like this, but it seems like it should be possible with the right mix of packages and configurations.
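For completeness, the snapshot-to-AMI conversion and the sshd verification went roughly like this (all IDs and the image name below are placeholders):

    # Snapshot the volume after shutting the instance down
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
        --description "custom RHEL 7.8 root"

    # Register the snapshot as an AMI
    aws ec2 register-image --name custom-rhel-7.8 \
        --architecture x86_64 --virtualization-type hvm \
        --root-device-name /dev/sda1 \
        --block-device-mappings 'DeviceName=/dev/sda1,Ebs={SnapshotId=snap-0123456789abcdef0}'

    # Inside the build chroot, sshd was enabled the usual way,
    # and the multi-user.target.wants symlink is present
    chroot /mnt/newroot systemctl enable sshd
    ls -l /mnt/newroot/etc/systemd/system/multi-user.target.wants/sshd.service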