Super User question, asked by user3728501 on February 18, 2021
I rebooted my server after creating three RAID 1 arrays with mdadm, and after rebooting, systemd reported errors about the RAID devices timing out while trying to assemble them at boot.
I was given the option to enter the root password for maintenance, which I have done.
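If the exact messages would help, I believe the standard way to pull them from the maintenance shell is via the journal and kernel log (I have not attached that output here):
journalctl -b | grep -i -e raid -e md
dmesg | grep -i -e raid -e md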
I tried running
mdadm --assemble --scan
but this produced no output.
I’m not sure what else I should check.
This is a Debian 10 system running inside a KVM guest (libvirt) on another Debian 10 host. There is an HBA passed through to the VM as a PCI device. This may or may not be relevant.
As far as I know, mdadm.conf is set up correctly and fstab is set correctly.
But, again, I might be mistaken.
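For reference, this is roughly what I expect those files to contain; the UUID, mount point, and filesystem below are placeholders rather than my real values:
# /etc/mdadm/mdadm.conf
ARRAY /dev/md2 metadata=1.2 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd
# /etc/fstab
/dev/md2  /mnt/data  ext4  defaults  0  2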
What should I do now?
The data is probably not particularly important, since we only copied it over last night. Still, it would be nice not to lose everything, as the copy took a whole day.
This is the output of
mdadm --examine --verbose /dev/sda
/dev/sda:
MBR Magic: aa55
Partition[0]: 1953525167 sectors at 1 (type ee)
Is this helpful?
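Given the type ee entry (a GPT protective MBR), /dev/sda may just be a GPT-partitioned disk rather than an array member, so perhaps the more telling check is examining the disks the arrays were actually created on, e.g.:
mdadm --examine --verbose /dev/sdc /dev/sdd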
cat /proc/mdstat
Personalities : [linear] .... etc ....
unused devices: <none>
That's it; there is no list of RAID devices there (sorry, I should have said this earlier).
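To check whether mdadm can find any superblocks at all, independent of mdadm.conf, my understanding is that a scan of all devices can be done with:
mdadm --examine --scan --verbose
which should print an ARRAY line for anything it detects.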
Commands run:
For example, after my arrays failed on the first reboot, I rebuilt them with the following commands, without losing any data as far as I can tell:
sudo mdadm --create --verbose /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
cat /proc/mdstat
.... syncing
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo reboot
sudo mdadm --assemble --scan
cat /proc/mdstat
.... nothing there
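One step that seems to be missing from that sequence: on Debian, the arrays are assembled by mdadm inside the initramfs at boot, so changes to /etc/mdadm/mdadm.conf only take effect at boot time after the initramfs is rebuilt. If that is right, the sequence should presumably have ended like this (the update-initramfs line is the part I never ran):
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
sudo reboot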