Permissions and Ownerships Get Masked for Volume Mounts Issue

Posted by admin

So, following on from my Synology HA thread: I'm still tinkering with the Synology, but I have reservations about support and IOPS. It has gotten me very interested in using NFS for our VMDK-based stuff though - the key point here is 'interested'; I'm not sure there's a single business justification for going NFS over iSCSI. 'Plan A' was a pair of boxes (rackmount ML350s most likely) with a shitload of local storage in a stretched cluster, each running two HP VSAs, giving around 15-20TB usable storage after hardware RAID 10 and Network RAID 10.


That setup doesn't do NFS, but it does do failover. Let's say I wanted to bring NFS into the mix but still have failover between locations - what would you suggest I look at?

We spoke to NetApp, and the price for a pair of 2240s with support and the licenses just about finished me off - and that doesn't even do failover. Nobody seems to make a unified VSA (my ideal solution) that does NFS, iSCSI, and failover.

The failover between locations is the problem in my mind. Local failover is easier, since the cluster will use a shared IP address that is passed between the nodes in the cluster. You can set up a cluster across a WAN link, but sharing the cluster IP address is an issue, and if you have any WAN issues your NAS cluster can frequently go into a split-brain condition (where both nodes think they are the master node). We are looking at a local NAS cluster as a target for Veeam, with replication to another site for DR purposes.

Hutchingsp wrote: It has gotten me very interested in using NFS for our VMDK-based stuff though - the key point here is 'interested'; I'm not sure there's a single business justification for going NFS over iSCSI.

That's the easy one. NFS is broadly supported (iSCSI is too, but not as broadly - for example, every enterprise NFS server works with VMware, but not every iSCSI implementation can safely be used with it, like the one in OpenFiler), is super easy to configure correctly ('correctly' being the key word), and is very easy to hand over to someone else for support. NFS is about lowering the risk of you getting something wrong and the risk of someone else getting something wrong. It's about safety through simplicity. Plus it's about doing less work for the same results, but that part is pretty trivial. NFS keeps you from doing bad things.
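To put the 'super easy' claim in concrete terms: the entire NFS datastore configuration is a server, an export path, and a name. A rough pyVmomi sketch - every host name and credential below is made up, and this assumes a lab host you can point it at:

    # How little there is to get wrong with NFS: the whole datastore
    # "configuration" is a server, an export path, and a name.
    # All names below are hypothetical.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab shortcut; use real certs in production
    si = SmartConnect(host="vcenter.example.local", user="admin",
                      pwd="secret", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        # Grab the first ESXi host in the inventory (single-host lab assumption).
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]

        spec = vim.host.NasVolume.Specification(
            remoteHost="filer01.example.local",  # the NFS server
            remotePath="/vol/vmware_ds1",        # the export
            localPath="nfs_ds1",                 # datastore name vSphere will show
            accessMode="readWrite",
        )
        host.configManager.datastoreSystem.CreateNasDatastore(spec)
    finally:
        Disconnect(si)

There is essentially nothing else to decide - no initiators, no CHAP, no LUN masking - which is the whole "fewer ways to get it wrong" argument.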

Hutchingsp wrote: Nobody seems to make a unified VSA (my ideal solution) that does NFS and iSCSI and failover.

That's because the term VSA refers only to doing NFS, not SAN - so the term itself is the problem there. There will be an appliance that does both soon; I can say no more ;) But really, at the VSA level, doing both isn't valuable.

You really only want NFS for that in a VMware scenario, and only want iSCSI for that in a Hyper-V scenario, so you'd get a special product for each case: the HP or VMware VSA for VMware, and StarWind for Hyper-V. There's just no market for something that does both at once.

And in the Xen world you would never, ever do either, because both approaches are just attempts to mimic what Xen does under the hood without needing to add the heavy protocols on top.

Hutchingsp wrote: The only reason I'm throwing iSCSI into the mix is that we have some servers - our file server being the best example - where there are 5x 2TB LUNs provisioned to it via iSCSI and mounted within the guest. I'd love to hear some opinions on how you a) get around the 2TB VMDK limit if you ever needed a volume over 2TB, and b) what the pros/cons of large VMDK files are. The DRBD solution is a bit too home-brew :)

VMFS-5 can do massive datastores (64TB) without problems.

VMFS-3 can do them with extents, FYI (yuck!). NTFS is terrible over 2TB - just don't do it. If you need a file server's file system to be 60TB, switch to a proprietary filer (I like BlueArc, but they are not cheap). I'd also smack whoever failed to find a reasonable way to vault or divide the data off, as indexes at that size require SSDs to function well no matter what you're using. (Isilon and BlueArc can at least tier the index this way into cache.)
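For reference, the limits being thrown around, as a quick sanity check - the NTFS number is the poster's rule of thumb, not a hard limit:

    # The thread's size limits as a sanity check, nothing official:
    # VMFS-5 datastores up to 64 TB, classic VMDKs capped just under 2 TB
    # on pre-5.5 releases, and (per the post above) don't let NTFS volumes
    # grow past ~2 TB even though the file system technically allows more.
    TB = 2**40

    LIMITS = {
        "vmfs5_datastore": 64 * TB,
        "vmdk_classic": 2 * TB - 512,  # pre-5.5 VMDK ceiling
        "ntfs_comfort": 2 * TB,        # opinion from the thread, not a hard limit
    }

    def check(kind: str, requested_bytes: int) -> bool:
        limit = LIMITS[kind]
        ok = requested_bytes <= limit
        print(f"{kind}: {requested_bytes / TB:.1f} TB requested, "
              f"{limit / TB:.1f} TB limit -> {'ok' if ok else 'too big'}")
        return ok

    check("vmdk_classic", 5 * TB)     # the 5x 2TB file server won't fit in one VMDK
    check("vmfs5_datastore", 5 * TB)  # but the datastore itself has plenty of headroom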

Hutchingsp wrote: Mentioning archiving and 'data life-cycle' will get you some funny looks around here, because disk space is cheap, right?

(Actually, it is - backing it up isn't.) The trick is to keep old stuff technically available, but tier the hell out of it, replicate it once, and be done with it. A couple of tricks to deal with this behavior:

Get a modular SAN with auto-tiering, and put 80% of the storage on massive bulk RAID 60s on NL-SAS (think 20+2). Replicate it once and ignore it. (Running scale-out with lots of ice-cold data is a waste of money on controllers/switch ports.)
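For scale, '20+2' means each underlying RAID 6 group is 20 data disks plus 2 parity disks, striped together as RAID 60. Rough capacity math, with a made-up disk size and group count:

    # Each RAID 6 group: 20 data + 2 parity disks; RAID 60 stripes the groups.
    def raid60_usable_tb(groups: int, data_per_group: int, disk_tb: float) -> float:
        return groups * data_per_group * disk_tb  # parity disks add no capacity

    spindles = 4 * (20 + 2)                        # 88 disks total
    print(spindles, raid60_usable_tb(4, 20, 3.0))  # 88 disks -> 240.0 TB usable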

If you want to go extreme: get a NAS head that can sub-tier transparently to an external object store or even an NFS source (BlueArc's do this). Put an NFS-to-LTO6 gateway in (Crossroads makes one) and laugh as people complain that old data takes 5 minutes to recall (Crossroads at least has a few TB of cache, so you don't shoeshine). Pennies for storage (LTO6 tapes), PBs of density per rack with no power usage, and budget saved. (Well, at scale - this is a six-figure setup to start with, but it scales insanely, as even the baby 3080s can do 2PB each, with 256TB file systems that can be grouped behind a common namespace.)

I suspect you do this for companies a wee bit larger than ours, John :-) I'm trying to focus on the cause, which is people being so bloody disorganised, stashing stuff in random folders on their group's drive, etc. The way I see it - maybe a little simplistically - is that if we can get to a point where each project has a folder and we know when a project has finished, it's actually a very simple process at our size of business to periodically move the closed/old projects to a slower/less redundant online 'Archive' area. Where it gets fun is when you have a whole bunch of '\\server\share1\My Stuff' and '\\server\share2\Projects Do Not Delete' that's been there so long that nobody knows what half of it is, and the guy who does probably left 5 years ago.
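That periodic sweep really is simple at this scale. A rough sketch, assuming each finished project folder gets an empty 'CLOSED' marker file dropped in it - all paths below are hypothetical:

    # Periodic "move closed projects to the archive area" sweep.
    # Assumes the archive share already exists and each finished project
    # folder contains an empty CLOSED marker file. Paths are made up.
    import shutil
    from pathlib import Path

    LIVE = Path(r"\\server\share2\Projects")
    ARCHIVE = Path(r"\\archive\slow\Projects")

    for project in LIVE.iterdir():
        if project.is_dir() and (project / "CLOSED").exists():
            print(f"archiving {project.name}")
            # shutil.move copies across shares, then deletes the source
            shutil.move(str(project), str(ARCHIVE / project.name))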


Hutchingsp wrote: Let's say I wanted to bring NFS into the mix but still have failover between locations - what would you suggest I look at? ... Nobody seems to make a unified VSA (my ideal solution) that does NFS and iSCSI and failover.

This is not true. We layer failover NFS and CIFS/SMB on top of HA iSCSI volumes, so you can use both block- and file-level access at the same time. But I see a HUGE issue with building a failover setup between locations: when the uplinks from one site go down, there's no way to know whether the second site is down or not.

So either one of the sites holds a 'primary' token (and the 'secondary' one goes offline automatically, so it's not true redundancy), or you'll have a pretty classic split-brain issue, with no way to sync the data when both sites are up and running again. You need to keep the storage on ONE site, or keep a THIRD site with storage.
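The third site is effectively a tie-breaking vote. A toy sketch of the quorum rule, showing why two sites on their own can't tell a dead peer from a dead link:

    # With two voters, "I can't see my peer" is ambiguous: dead node or dead
    # WAN link? A strict-majority rule plus a third-site witness disambiguates.
    def may_serve_storage(votes_visible: int, total_votes: int) -> bool:
        """Serve data only while this node sees a strict majority of votes."""
        return votes_visible > total_votes // 2

    # Two-node cluster, WAN cut: each side sees only itself.
    print(may_serve_storage(1, 2))  # False - both stop (or, without quorum, split-brain)

    # Add a witness at a third site: whichever node still reaches it wins.
    print(may_serve_storage(2, 3))  # True  - this side keeps serving
    print(may_serve_storage(1, 3))  # False - the isolated side takes itself offline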

Hutchingsp wrote: One of our vendors mentioned Nexsan - I know of them but nothing more. The NST series looks to do quite a bit. I was thinking: why couldn't I do a 'hybrid' with the vSphere VSA for NFS and, if I needed/wanted iSCSI, run the P4000 as well, since they're both just running as VMs - clustered NFS and clustered iSCSI with auto-failover, surely? Thing is, I thought the vSphere VSA demanded all your datastores?


Nice company (I'm lucky to know people from their management in person), but they do things old school - hardware appliances. You've mentioned a VSA, so I'm a bit confused: what do you actually want?

Program                           File Bit Set   Description of Usage
/usr/bin/chage                    setuid         Finds out password aging information (via the -l option).
/usr/bin/chfn                     setuid         Changes finger information.
/usr/bin/chsh                     setuid         Changes the login shell.
/usr/bin/crontab                  setuid         Edits, lists, or removes a crontab file.
/usr/bin/wall                     setgid         Sends a system-wide message.
/usr/bin/write                    setgid         Sends a message to another user.
/usr/bin/Xorg                     setuid         Invokes the X Window server.
/usr/libexec/openssh/ssh-keysign  setuid         Runs the SSH helper program for host-based authentication.
/usr/sbin/mount.nfs               setuid         Mounts an NFS file system.
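If you want to see which binaries on a given box carry these bits, a short stdlib sweep does it (same spirit as find /usr -perm /6000 -type f):

    # Audit for setuid/setgid programs like the ones listed above.
    import os
    import stat

    def find_setid(root: str = "/usr") -> None:
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.lstat(path).st_mode  # lstat: don't follow symlinks
                except OSError:
                    continue  # unreadable or vanished mid-walk
                if stat.S_ISREG(mode) and mode & (stat.S_ISUID | stat.S_ISGID):
                    bits = []
                    if mode & stat.S_ISUID:
                        bits.append("setuid")
                    if mode & stat.S_ISGID:
                        bits.append("setgid")
                    print(f"{path}\t{'+'.join(bits)}")

    find_setid()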