mount.nfs: Stale file handle error - cannot umount
Every time I try to mount an NFS share I get this:

>> mount -t nfs gitlab-replica-storage.blah.com:/export/registry-gitlab-prod-data-vol /mnt/test
mount.nfs: Stale file handle

The problem is that I cannot umount it either:

>> umount -f -l /mnt/test
umount: /mnt/test: not mounted

I checked whether any process was using the mountpoint, but that is not the case. Is there any other way to troubleshoot this?

As clarification:
- I can mount the share on another machine.
- I cannot mount it at another mountpoint on the affected machine.

Tags: mount, nfs, unmounting
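For reference, a minimal sketch of double-checking what the client kernel itself records for the mountpoint (/mnt/test is the path from the question):

```shell
# If umount says "not mounted", verify the kernel really has no entry:
findmnt /mnt/test || echo "kernel has no record of /mnt/test"
grep ' /mnt/test ' /proc/mounts || echo "/mnt/test absent from /proc/mounts"
```

If both checks come up empty, the "Stale file handle" is coming from the mount attempt itself, not from a leftover mount, so the fix belongs on the server side.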
Are you positive the /export/registry-gitlab-prod-data-vol directory exists and has the correct permissions? – Jaken551, Mar 23 '18 at 12:56

@Jaken551 Yes, it is accessible by someone else. In fact I can mount it on another machine. – djuarez, Mar 23 '18 at 13:00

Try adding -v to the mount -t command, and check dmesg and /var/log/messages as well; additional information may show up there. Have you tried rebooting the machine? – Yurij Goncharuk, Mar 23 '18 at 15:00
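The suggestion in the last comment can be run roughly as follows (a sketch; server and path are from the question, and the commands need root):

```shell
# Re-run the failing mount verbosely to see the NFS negotiation details:
mount -v -t nfs gitlab-replica-storage.blah.com:/export/registry-gitlab-prod-data-vol /mnt/test \
  || echo "mount failed; check the log lines below"
# Kernel-side NFS/RPC messages:
dmesg 2>/dev/null | tail -n 20
# Syslog, on systems that still write /var/log/messages:
tail -n 20 /var/log/messages 2>/dev/null || true
```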
edited Jul 19 '18 at 19:34 by Jeff Schaller; asked Mar 23 '18 at 12:51 by djuarez
4 Answers
The error, ESTALE, was originally introduced to handle the situation
where a file handle, which NFS uses to uniquely identify a file on the
server, no longer refers to a valid file on the server. This can
happen when the file is removed on the server, either by an
application on the server, some other client accessing the server, or
sometimes even by another mounted file system from the same client.
The NFS server also returns this error when the file resides upon a
file system which is no longer exported. Additionally, some NFS
servers even change the file handle when a file is renamed, although
this practice is discouraged.
This error occurs even if a file or directory, with the same name, is
recreated on the server without the client being aware of it. The
file handle refers to a specific instance of a file and deleting the
file and then recreating it creates a new instance of the file.
The error, ESTALE, is usually seen when cached directory information
is used to convert a pathname to a dentry/inode pair. The information
is discovered to be out of date or stale when a subsequent operation
is sent to the NFS server. This can easily happen in system calls
such as stat(2) when the pathname is converted to a dentry/inode pair
using cached information, but then a subsequent GETATTR call to the
server discovers that the file handle is no longer valid.
This error can also occur when a change is made on the server in
between looking up different components of the pathname to be looked
up or between a successful lookup and a subsequent operation.
Original article about ESTALE: ESTALE (LWN).
I suggest checking the files and directories on the NFS server, or asking the NFS server's admin to do so. Stale pagecache, inode, and dentry cache entries may still exist on the NFS server; you can drop them with:
# To free pagecache
echo 1 > /proc/sys/vm/drop_caches
# To free dentries and inodes
echo 2 > /proc/sys/vm/drop_caches
# To free pagecache, dentries and inodes
echo 3 > /proc/sys/vm/drop_caches
answered Mar 23 '18 at 13:41, edited Mar 23 '18 at 14:34, by Yurij Goncharuk
A mount -t nfs fails with "Stale file handle" if the server has stale exports entries for that client.
Example scenario: this might happen when the server reboots without the client umounting the NFS volumes first. When the server is back up and the client then umounts and tries to mount the NFS volume again, the server might respond with:
mount.nfs: Stale file handle
You can check for this by looking at /proc/fs/nfs/exports or /proc/fs/nfsd/exports. If there is an entry for the client, it might be a stale one.
You can fix this by explicitly un-exporting and re-exporting the relevant exports on the server. For example, to do this with all exports:
# exportfs -ua
# cat /proc/fs/nfs/exports
# exportfs -a
After this, the client's mount -t nfs ... should succeed.
Note that mount yielding ESTALE is quite different from some other system call (like open/readdir/unlink/chdir ...) returning ESTALE: it is an export being stale vs. a file handle being stale. A stale file handle happens easily with NFS (e.g. a client has a file handle, but the file got deleted on the server).
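A quick way to perform that server-side check (a sketch; the export path is the one from the question, so substitute your own):

```shell
# On the server: look for a (possibly stale) kernel export entry.
# grep -s silences errors if one of the proc files does not exist.
EXPORT=/export/registry-gitlab-prod-data-vol
grep -s "$EXPORT" /proc/fs/nfsd/exports /proc/fs/nfs/exports \
  || echo "no kernel export entry for $EXPORT"
```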
answered Jun 3 '18 at 7:56 by maxschlepzig
Check whether the export is actually mounted:
# cat /proc/mounts | grep nfs
A "Stale file handle" error means that the NFS server holds an old version of the files in its export path. An NFS server restart can sometimes help.
But with older OSes (RHEL/CentOS 6.9) it is sometimes better to fall back to NFSv3 instead of NFSv4. In my experience, older NFSv4 clients sometimes have difficulties with newer NFSv4.1 servers. This is especially true for file locking.
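The fallback can be tried with an explicit vers= mount option (a sketch, assuming the server also exports NFSv3; hostname and path are from the question):

```shell
# Force NFSv3 on the client to rule out NFSv4-specific problems:
mount -t nfs -o vers=3 \
  gitlab-replica-storage.blah.com:/export/registry-gitlab-prod-data-vol /mnt/test \
  || echo "v3 mount failed too; the problem is likely not version-specific"
# Verify which version was actually negotiated:
grep ' /mnt/test ' /proc/mounts || true
```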
answered Mar 23 '18 at 13:41 by monkeywrench
Find the stale mount entry on the NFS server:
showmount -a | grep ip_address_of_nfs_client
If you see lines matching the IP address of the NFS client and the share you are trying to mount, remove the stale entries from the rmtab:
vi /var/lib/nfs/rmtab
Then restart rpc.mountd so it re-reads the new rmtab:
killall rpc.mountd ; /usr/sbin/rpc.mountd
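The manual rmtab edit can also be done non-interactively; a sketch, to be run as root on the NFS server, where CLIENT is a hypothetical client address:

```shell
# rmtab lines have the form "client:/export/path:0x00000001".
CLIENT=192.0.2.10
RMTAB=/var/lib/nfs/rmtab
cp "$RMTAB" "$RMTAB.bak"            # back up before editing
sed -i "/^${CLIENT}:/d" "$RMTAB"    # drop that client's lines
killall rpc.mountd ; /usr/sbin/rpc.mountd   # restart so mountd re-reads rmtab
```

Note the dots in the IP are regex wildcards here; for a strict match, escape them in the sed pattern.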
answered 2 hours ago by JFP (new contributor)