Ubuntu 18.04 VM in emergency/maintenance mode due to a failed/corrupted RAID disk

I have a VM with an attached RAID device and the following fstab entry:



/dev/md127 /mnt/blah ext4 nofail 0 2


The RAID disks are corrupted, and during startup the VM entered emergency/maintenance mode, which means only a user on the local host could exit this mode and continue the boot normally. During startup the following occurred in syslog:



systemd-fsck[1272]: /dev/md127 contains a file system with errors, check forced.
systemd-fsck[1272]: /dev/md127: Inodes that were part of a corrupted orphan linked list found.
systemd-fsck[1272]: /dev/md127: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
systemd-fsck[1272]: #011(i.e., without -a or -p options)
systemd-fsck[1272]: fsck failed with exit status 4.
systemd-fsck[1272]: Running request emergency.target/start/replace
systemd[1]: systemd-fsck@dev-md127.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: systemd-fsck@dev-md127.service: Failed with result 'exit-code'.
systemd[1]: Failed to start File System Check on /dev/md127.
systemd[1]: Dependency failed for /mnt/blah.
systemd[1]: Dependency failed for Provisioner client daemon.


My guess is that the OS goes to emergency/maintenance mode because of the corrupt RAID disks:



systemctl --state=failed
UNIT LOAD ACTIVE SUB DESCRIPTION
● systemd-fsck@dev-md127.service loaded failed failed File System Check on /dev/md127


What I want is for the VM to start up regardless of whether the RAID drives are corrupt or unmountable, so it shouldn't go to emergency/maintenance mode. I followed these posts to try to disable emergency/maintenance mode:




  • How to disable systemd agressive emergency shell behaviour?

  • How to determine exactly why Systemd enters emergency mode

  • Emergency Mode and Local Disk


I first had to create the directory local-fs.target.d in /etc/systemd/system/, which felt wrong. I then created /etc/systemd/system/local-fs.target.d/nofail.conf containing:



[Unit]
OnFailure=

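For reference, this is roughly the shell equivalent of what I did (just a sketch; running sudo systemctl edit local-fs.target instead would create the drop-in directory and file automatically, which is probably the more idiomatic route):

sudo mkdir -p /etc/systemd/system/local-fs.target.d
printf '[Unit]\nOnFailure=\n' | sudo tee /etc/systemd/system/local-fs.target.d/nofail.conf
sudo systemctl daemon-reload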

After reloading systemd, I was able to confirm that the drop-in was picked up by local-fs.target:



sudo systemctl status local-fs.target 
● local-fs.target - Local File Systems
Loaded: loaded (/lib/systemd/system/local-fs.target; static; vendor preset: enabled)
Drop-In: /etc/systemd/system/local-fs.target.d
└─nofail.conf
Active: active since Tue 2019-01-08 12:36:41 UTC; 3h 55min ago
Docs: man:systemd.special(7)


BUT, after rebooting, the VM still ended up in emergency/maintenance mode. Have I missed something? Does the nofail.conf approach not work with RAID disks?





EDIT: I was able to get a printout of the logs when the system booted into emergency mode (sorry it's a screenshot, since I don't have access to the host and had to ask the owner for it):



[screenshot: boot log showing a failed dependency and the drop into emergency mode]



Here's the output from systemctl for systemd-fsck@dev-md127:



 sudo systemctl status --no-pager --full systemd-fsck@dev-md127
● systemd-fsck@dev-md127.service - File System Check on /dev/md127
Loaded: loaded (/lib/systemd/system/systemd-fsck@.service; static; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2019-01-10 12:05:44 UTC; 2h 57min ago
Docs: man:systemd-fsck@.service(8)
Process: 1025 ExecStart=/lib/systemd/systemd-fsck /dev/md127 (code=exited, status=1/FAILURE)
Main PID: 1025 (code=exited, status=1/FAILURE)

systemd[1]: Starting File System Check on /dev/md127...
systemd-fsck[1025]: /dev/md127 contains a file system with errors, check forced.
systemd-fsck[1025]: /dev/md127: Inodes that were part of a corrupted orphan linked list found.
systemd-fsck[1025]: /dev/md127: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
systemd-fsck[1025]: (i.e., without -a or -p options)
systemd-fsck[1025]: fsck failed with exit status 4.
systemd-fsck[1025]: Running request emergency.target/start/replace
systemd[1]: systemd-fsck@dev-md127.service: Main process exited, code=exited, status=1/FAILURE
systemd[1]: systemd-fsck@dev-md127.service: Failed with result 'exit-code'.
systemd[1]: Failed to start File System Check on /dev/md127.


As I pointed out earlier, I have nofail set in /etc/fstab. Now the questions are:




  1. What is the dependency that failed in the screenshot? (A diagnostic sketch follows after this list.)

  2. If fsck fails on /dev/md127, why does the system enter emergency mode, and how do I disable that behaviour?
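
A hedged sketch of the kind of commands that could answer question 1 once the system is up (assuming the generated mount unit is named mnt-blah.mount, which is what systemd-escape produces for /mnt/blah):

systemctl list-dependencies --reverse --all emergency.target
systemctl show -p Requires,Wants,After mnt-blah.mount
journalctl -b -u systemd-fsck@dev-md127.service -u mnt-blah.mount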


EDIT 2:



A couple of other things I can add are:




  1. The VM is a KVM VM.

  2. It's a software RAID (an md array); a sketch of how its state can be checked follows below.
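
In case it helps, a hedged sketch of the standard md tooling I would use to check the array's health (these are not outputs from the affected system, just the commands):

cat /proc/mdstat
sudo mdadm --detail /dev/md127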


Kind regards,
Ankur

Tags: ubuntu, kvm, startup, raid, corruption

1 Answer

Your system tried to repair the corruption and boot automatically, but it could not. That is why it went to emergency mode.


This is not something that you can just override, because the filesystem cannot be mounted in this state, and you specified to mount the filesystem on boot.


You can try one of the following:


  • Don't corrupt the filesystem. Unmount it normally instead of hard powering off the VM while data are still being written to it.

  • Don't mount the filesystem automatically (set noauto in /etc/fstab, as sketched below). The VM will start, but you will still have to manually come in later and mount the filesystem.

  • Switch to a more resilient filesystem, such as XFS.
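
For the noauto option, a minimal sketch of what the changed fstab line and the later manual steps could look like (same device and mount point as in the question; an illustration, not a tested configuration):

/dev/md127 /mnt/blah ext4 noauto,nofail 0 2

sudo fsck /dev/md127   # repair by hand first if the filesystem is corrupted
sudo mount /mnt/blah   # then mount it manually once the VM is up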

• Hi Michael, thanks for the reply. This isn't a critical drive, and I've added the nofail option in fstab, so it should boot normally. If it doesn't even boot far enough for the ssh daemon to start, then it becomes extremely difficult to fix this problem in a timely manner. I believe that this option should work. I've tried the same setup with VirtualBox and the VM boots up normally even if there are disk errors or the disk is not present, so now it seems that it could be a KVM issue.

– Ankur22, 10 hours ago

• @Ankur22 That is not what nofail does. It means only that the device will be ignored if it does not exist. If the device exists, this does nothing.

– Michael Hampton, 10 hours ago

• That's interesting. The wording on the man page for systemd.mount suggests something else for nofail: "With nofail, this mount will be only wanted, not required, by local-fs.target or remote-fs.target. This means that the boot will continue even if this mount point is not mounted successfully." Whereas fstab(5) does indeed suggest it only covers a missing device: "nofail — do not report errors for this device if it does not exist." I know that there used to be a nobootwait option in 12.04; is there no similar option to avoid emergency mode?

– Ankur22, 10 hours ago
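
A quick way to check which reading applies on a given system might be to inspect the units the fstab generator actually produced — a hedged sketch, assuming the /mnt/blah mount point from the question:

systemctl cat mnt-blah.mount                        # shows the generated unit, including Options=...,nofail
systemctl show -p Wants,Requires local-fs.target    # shows whether mnt-blah.mount is wanted or required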