16:01:46 <jgriffith> #startmeeting cinder
16:01:47 <openstack> Meeting started Wed Sep  5 16:01:46 2012 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:48 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:49 <openstack> The meeting name has been set to 'cinder'
16:01:58 <jgriffith> Sorry I'm a few minutes late
16:02:02 <jgriffith> busy busy
16:02:11 <jgriffith> Who do we have?
16:02:19 <winston-d> hi, john
16:02:24 <jgriffith> winston-d: Hello!
16:02:30 <rongze> hi, everyone
16:02:46 <winston-d> hi, rongze
16:02:48 <jgriffith> rongze: Hi ya
16:03:00 <jgriffith> Just the three of us?
16:03:27 <clayg> heheh
16:03:30 <winston-d> :)
16:03:40 <jgriffith> :)
16:03:44 <rongze> hehe
16:03:53 <jgriffith> So I had a couple of things I wanted to talk about
16:04:08 <dtynan> hello
16:04:10 * jgriffith rifling desk for notes
16:04:34 <DuncanT> Hey, sorry
16:04:36 <jgriffith> #topic snapshot deletes
16:04:42 <jgriffith> https://bugs.launchpad.net/cinder/+bug/1023755
16:04:44 <uvirtbot> Launchpad bug 1023755 in nova "Unable to delete the volume snapshot" [Undecided,New]
16:04:57 <jgriffith> So this thorn in the side is starting to make a bit more sense
16:05:21 <jgriffith> It seems that performing the dd to LVM snapshot volumes >= 1G tends to result in a kernel hang
16:05:33 <jgriffith> In the cases that it doesn't hang, unfortunately it's DOG slow
16:06:16 <jgriffith> It appears based on a little bit of googling that this is a kernel bug
16:06:29 <jgriffith> So, I've been trying to find other solutions
16:06:40 <jgriffith> But ultimately I'm wondering:
16:06:51 <jgriffith> 1. Do we need to zero out the snapshot LVM volume at all?
16:07:36 <DuncanT> The problem with not zeroing is you risk leaking your data to the next users of the space
16:07:37 <jgriffith> 2. If we do, any suggestions on another method that might be a bit less intensive?
16:07:46 <jgriffith> DuncanT: Yes, understood
16:08:05 <jgriffith> DuncanT: However, being a snap, it's only the COW blocks that are actually there, right?
16:08:05 <DuncanT> We do our scrubbing out-of-line, but that is kind of tricky with LVM
16:08:09 <bswartz> is that a limitation of LVM?
16:08:18 <winston-d> maybe we can do that in an async way
16:08:24 <jgriffith> DuncanT: With LVM it's not possible that I can see
16:08:30 <jgriffith> winston-d: How do you mean?
16:08:48 <jgriffith> winston-d: The problem is the kernel hangs if you try to dd the entire volume
16:09:37 <winston-d> the kernel hang is another problem; async is to deal with the DOG slow part.
16:09:39 <DuncanT> It would be an easy enough hack to break the dd into chunks?
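A minimal sketch of the chunked dd DuncanT suggests (Python, since that's what cinder drivers are written in). The helper name and chunk size are illustrative, and oflag=direct is only a guess at sidestepping the buffered-IO hang, not a confirmed fix:

    import subprocess

    def zero_in_chunks(dev_path, size_mb, chunk_mb=256):
        """Zero a block device one chunk at a time instead of one huge dd."""
        for offset_mb in range(0, size_mb, chunk_mb):
            count = min(chunk_mb, size_mb - offset_mb)
            subprocess.check_call([
                'dd', 'if=/dev/zero', 'of=%s' % dev_path,
                'bs=1M', 'count=%d' % count, 'seek=%d' % offset_mb,
                'oflag=direct'])  # direct IO may avoid page-cache stalls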
16:09:43 <clayg> jgriffith: probably because it can't fit all the exceptions
16:09:53 <jgriffith> winston-d: Ohh... we already background it so that's not a killer anyway
16:10:07 <jgriffith> clayg: ?
16:10:16 <clayg> how many extents do we allocate for the snapshot (same size as original volume?)
16:10:27 <jgriffith> clayg: Yes (same size)
16:10:42 <jgriffith> clayg: Oh, I think I know where you're going
16:10:51 <jgriffith> clayg: I don't think that's the problem though
16:10:58 <clayg> so when you write to a snapshot you have to track a) the new data and b) the exception metadata
16:11:10 <jgriffith> clayg: Ok... keep going :)
16:11:12 <clayg> you can overflow a snapshot
16:11:22 <clayg> either with writes into the snapshot, or writes into the origin
16:11:29 <jgriffith> But here's the thing...
16:11:39 <jgriffith> You can repro this by:
16:11:42 <jgriffith> 1. create volume
16:11:49 <jgriffith> 2. create snapshot
16:11:52 <jgriffith> 3. delete snapshot
16:12:05 <jgriffith> Never mounted/wrote data etc
16:12:16 <clayg> delete snapshot == write full size of snapshot data into snapshot (zero it out)
16:12:25 <bswartz> who designed LVM to expose unwritten blocks as containing previously-written data instead of zeros?
16:12:32 <clayg> in my experience - it won't fit
16:12:51 <jgriffith> clayg: Hmmm... my understanding was different but you make a good point
16:13:18 <jgriffith> clayg: The trouble is it's not like we get some percentage in and it pukes
16:13:31 <clayg> bswartz: instead of making raw volumes, you could make sparse volumes, you get an empty exception chain (no way to read existing data) - but there is a performance hit (reads and writes)
16:13:38 <jgriffith> clayg: It's unbelievably slow from the start and hits all sorts of kernel time outs etc
16:14:02 <clayg> jgriffith: once the exception list overflows an extent, it's expensive to write new blocks (or read old ones really)
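For reference, the sparse volumes clayg mentions are created in LVM with a virtual size larger than the allocated size; reads of unwritten blocks then return zeros, so no wipe is needed to hide prior data, at some read/write cost. A sketch with hypothetical volume/VG names and sizes:

    import subprocess

    # Sparse LV: 10G visible, only 100M actually allocated up front.
    subprocess.check_call([
        'lvcreate', '--name', 'vol-0001',
        '--virtualsize', '10G', '--size', '100M',
        'cinder-volumes'])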
16:14:24 <jgriffith> clayg: makes sense.. kinda
16:14:27 <bswartz> We could zero on creation instead of deletion
16:14:39 <jgriffith> bswartz: Same problem and wouldn't solve the issue
16:14:39 <bswartz> that would sidestep the problem
16:14:47 <jgriffith> bswartz: It is the same problem
16:14:54 <jgriffith> bswartz: Remember in this case we never wrote anything
16:14:55 <clayg> jgriffith: well delete just becomes an lvremove
16:14:59 <DuncanT> Can you get the underlying disk range and blank it directly? Not familiar enough with lvm to know, sorry
16:15:03 <jgriffith> bswartz: So in essence we're doing exactly what you state
16:15:12 <jgriffith> clayg: yes
16:15:17 <clayg> jgriffith: and on create - since it's a raw volume, the dd performs more like you would expect
16:15:18 <bswartz> if you zero on creation instead of deletion then you never need to zero snapshots ever
16:15:34 <clayg> DuncanT: absolutely, ls /dev/mapper and a bit of dmsetup
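A sketch of the dmsetup approach DuncanT and clayg are discussing: parse the device-mapper table to find the physical ranges behind an LV, which could then be blanked directly. Names are illustrative:

    import subprocess

    def underlying_ranges(dm_name):
        """Yield (backing_dev, start_sector, length) for linear targets."""
        table = subprocess.check_output(['dmsetup', 'table', dm_name])
        for line in table.decode().splitlines():
            # linear targets look like: "<start> <len> linear <maj:min> <off>"
            fields = line.split()
            if len(fields) == 5 and fields[2] == 'linear':
                yield fields[3], int(fields[4]), int(fields[1])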
16:15:50 <jgriffith> bswartz: Wait... are you saying zero on volume creation?
16:15:56 <clayg> [16:14]      bswartz | We could zero on creation instead of deletion
16:15:58 <jgriffith> bswartz: I'm confused by what you're suggesting?
16:16:09 <DuncanT> Except that a) customers tend to expect their data to be gone once they delete the volume b) create then becomes slow
16:16:13 <jgriffith> clayg: in essence that's what we're doing anyway!
16:16:25 <bswartz> if you zero newly created volumes then nobody will ever see old data in their volumes
16:16:28 <jgriffith> clayg: I'm saying in this particular case I never used the volume or the snap
16:16:47 <jgriffith> clayg: So I don't see the difference, it's basically creating then zeroing already
16:16:52 <jgriffith> zeroing is the problem
16:16:59 <winston-d> jgriffith, do you have link to kernel bug?
16:17:08 <bswartz> yeah but zeroing new volumes allows you to not bother zeroing snapshots
16:17:12 <bswartz> that's all I was getting at
16:17:14 <DuncanT> clayg: I thought you could read the tables out of the dev/mapper devices but I haven't tried... might end up making a nasty mess of the lvm metadata though
16:17:34 <clayg> DuncanT: yes absolutely on both counts
16:17:37 <jgriffith> winston-d: Nope, haven't tracked it down but found a bit of info from other folks having similar issues with 3.2 and dev/mapper files
16:17:54 <jgriffith> bswartz: How?
16:18:25 <jgriffith> bswartz: users aren't likely to snapshot an empty volume
16:18:42 <jgriffith> bswartz: I kinda see what you're saying but don't think it solves the secure data issue
16:19:07 <clayg> jgriffith: if during 'create' you write zeros to the entire volume - you never "expose" data across volumes
16:19:08 <jgriffith> The struggle I'm having is this... you can ensure the security leakage problem, or you can actually be able to delete snapshots :)
16:19:27 <jgriffith> clayg: bswartz: AHHHH
16:19:36 <jgriffith> clayg: bswartz: Finally, I see what you're saying :)
16:19:50 <clayg> jgriffith: np, I think DuncanT already made the valid critiques of why it's not a good idea
16:19:58 <clayg> but you have to admit it would *work* :P
16:20:15 <jgriffith> :)  Yes, I believe it would
16:20:37 <clayg> s/a good idea/the ideal solution/
16:20:45 <clayg> bswartz: it's a quite good idea
16:20:47 <DuncanT> If the other option is 'not working', take the slow option :-)
16:21:16 <DuncanT> We (HP) don't particularly care, we don't use LVM...
16:21:48 <rongze> it is an lvm issue...
16:22:19 <jgriffith> rongze: Yes, it's lvm only
16:22:33 <jgriffith> So it's ugly but a work around:
16:22:42 <winston-d> nobody uses lvm iscsi in production, right?
16:22:42 <jgriffith> 1. zero out newly created volumes
16:22:57 <jgriffith> 2. still zero out deleted volumes as well (try to meet expectations)
16:23:28 <jgriffith> 3. Skip zero out on snapshot delete
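The agreed three steps in driver terms, as a rough sketch; the class and helper names (_create_lv, _remove_lv, _zero_device) are hypothetical stand-ins for lvcreate, lvremove, and the dd wipe, not cinder's actual code:

    class LVMDriverSketch(object):
        def create_volume(self, volume):
            self._create_lv(volume)
            self._zero_device(volume)     # 1. zero newly created volumes

        def delete_volume(self, volume):
            self._zero_device(volume)     # 2. still zero deleted volumes
            self._remove_lv(volume)

        def delete_snapshot(self, snapshot):
            # 3. skip the zero: dd into a COW snapshot is what hangs/crawls
            self._remove_lv(snapshot)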
16:23:38 <clayg> jgriffith: seems quite reasonable to me
16:23:53 <clayg> step 4. Make it better later
16:24:01 <jgriffith> clayg: Yes!
16:24:03 <winston-d> jgriffith, make sense for bug fix
16:24:07 <jgriffith> Step 4 is very important :)
16:24:09 <DuncanT> Seems reasonable now - a flag to turn off all the zeroing (or only zero the first meg of new volumes, more sensibly) for test deployments might be appreciated too
16:24:11 <clayg> lol
16:24:23 <jgriffith> I suppose we could even do this intelligently based on kernel version?
16:24:39 <jgriffith> DuncanT: That's my next topic! :)
16:25:18 <jgriffith> Ok... seems like we have a plan for this one
16:25:26 <jgriffith> Any last thoughts before I move on?
16:25:29 <DuncanT> Not sure if it is a kernel version issue - fixed size snapshots that aren't bigger than the origin will always overflow if you fill them (which is what the basic dd does)
16:25:51 <jgriffith> DuncanT: Here's a proposal
16:26:03 <jgriffith> DuncanT: I'll try the same test with a snapshot > volume size
16:26:18 <jgriffith> DuncanT: My theory is that it doesn't have anything to do with the size
16:26:26 <DuncanT> ok
16:26:29 <jgriffith> DuncanT: But maybe/hopefully I'll be wrong
16:26:48 <jgriffith> DuncanT: If I come across something different I'll send a note out to everyone
16:26:59 <jgriffith> Sound good to everyone?
16:27:04 <DuncanT> Sounds good to me
16:27:17 <winston-d> yes
16:27:21 <jgriffith> clayg: thoughts?
16:27:36 <clayg> jgriffith: sounds good to me
16:27:43 <jgriffith> cool!  Moving on
16:27:53 <clayg> I think "filling up the snapshot exception chain with zeros" is a bad idea ripe to be abandoned
16:28:40 <jgriffith> clayg: Well last night at about 11:00 I definitely agreed with you on that!
16:28:52 <clayg> ya, moving on
16:28:55 <jgriffith> :)
16:29:08 <jgriffith> #topic configurable zeroing of volumes on delete
16:29:39 <jgriffith> We talked about setting this via flags and where it should live/be implemented
16:29:45 <jgriffith> here are my opinions:
16:29:54 <jgriffith> 1. It should be implemented in the driver
16:30:08 <jgriffith> That way it can be implemented/handled specifically however is needed
16:30:18 <jgriffith> Vendors that have more clever options can use them
16:30:22 <jgriffith> etc etc
16:30:44 <jgriffith> 2. Rather than using a flag and making it a global setting, make it an optional argument to the delete call
16:31:15 <jgriffith> This makes the most sense to me, an admin can use a flag to set an override policy for all tenants/volumes
16:31:23 <DuncanT> 1. Tentatively agree 2. Disagree
16:31:39 <jgriffith> But if they want to provide an option to the tenant on a case by case basis they can
16:31:50 <jgriffith> DuncanT: Ok... Let's start with the tentative:
16:32:10 <jgriffith> Reasons not to implement in the driver?
16:32:22 <jgriffith> And where would you implement it?
16:33:13 <winston-d> are we talking about where the flag should be defined or implemented?
16:33:18 <DuncanT> I raised the point about it being something many volume providers might well want, meaning it is maybe better done as a library call, but I might well be wrong about the number of drivers that actually want the basic version, so I'm entirely happy to be told I was worrying excessively - it isn't like it is tricky code
16:33:34 <bswartz> I prefer (1)
16:33:55 <rongze> I support implementing it in the driver
16:33:56 <jgriffith> DuncanT: I guess my reasoning for having it in the driver is to achieve exactly what you describe
16:34:22 <DuncanT> Yeah, ok, I'm convinced, keep it in the drive, I withdraw the tentative
16:34:34 <jgriffith> The third party drivers override the delete operations anyway so they can do their magic however they see fit and the rest of the world doesn't have to know/care about it
16:34:41 <jgriffith> Sweet!
16:34:48 <clayg> jgriffith: only way that makes sense to me
16:34:56 <jgriffith> clayg: agreed
16:34:59 <DuncanT> I have a small worry that some 3rd party drivers might not consider it, but I'm not prepared to worry overly :-)
16:35:08 <clayg> ... but I don't really see your point on not having it as a driver specific flag?
16:35:11 <jgriffith> DuncanT: That's the beauty of it
16:35:16 <jgriffith> DuncanT: They don't have to
16:35:31 <clayg> DuncanT: operators would not deploy those drivers?  :P
16:35:36 <jgriffith> DuncanT: Then it's just not supported and they do whatever they *normally* do on a delete
16:36:02 <jgriffith> DuncanT: They have to implement delete_volume right?
16:36:19 <jgriffith> DuncanT: So they just ignore options regarding zeroing out etc
16:36:30 <jgriffith> Seems like the right way to go to me
16:36:50 <jgriffith> Or as clayg states, those drivers don't get to play :)
16:36:56 <jgriffith> Just kidding
16:37:02 <DuncanT> :-)
16:37:20 <DuncanT> Ok, shall we consider 2. ?
16:37:29 <clayg> so - if it's implemented in the driver - why isn't it a driver specific flag?
16:37:33 <jgriffith> Yes, if everybody is comfortable with item 1
16:37:35 <bswartz> I'd just like to point out that leaving it up to the drivers is effectively the situation we have right now
16:37:53 <jgriffith> bswartz: True.. but
16:37:54 <bswartz> so this isn't a change except for the LVM-based driver
16:38:05 <rongze> I think only lvm cares about the flag...
16:38:08 <jgriffith> bswartz: The reality is the driver is the one who has to implement/do the work anyway
16:38:39 <jgriffith> bswartz: It may or may not be, depends on what the driver/vendor is capable of
16:38:53 <rongze> other drivers can do nothing
16:39:05 <winston-d> rongze, why is that?
16:39:10 <jgriffith> rongze: Yeah, but some devices may have options here
16:39:35 <jgriffith> rongze: other than just LVM... for example an HP array has a shred method
16:39:43 <clayg> or it may not apply to a sparse volume, or file based backing store.
16:40:00 <jgriffith> rongze: And it may also have a DOD compliant shred
16:40:07 <clayg> customer could always zero the volume before calling delete
16:40:13 <jgriffith> So this would allow those to be selected/implemented
16:40:20 <rongze> yes
16:40:39 <jgriffith> clayg: yes, I personally prefer the customer do what they want up front :)
16:40:51 <creiht> hah
16:40:55 <clayg> so back to... you were suggesting something about an additive addition to the api?
16:41:04 <jgriffith> clayg: Ahh... right
16:41:14 <jgriffith> So I'm not a huge fan of flags
16:41:22 <clayg> I LOVE FLAGS!
16:41:25 <clayg> oh wait..
16:41:28 <jgriffith> They're global for every tenant, every volume etc etc
16:41:38 * jgriffith slaps clayg upside the head
16:42:09 <jgriffith> So say for example a tenant has two volumes...
16:42:19 <jgriffith> One has credit card and billing info data stored on it
16:42:33 <jgriffith> The other has pictures of kittens
16:42:44 <clayg> is there a bug for this?  who actually raised the issue?  I think it's totally reasonable that a deployer/operator would say (these are your volume options, this is our security policy - deal with it)
16:43:15 <DuncanT> I think, in general, zeroing volumes is a good and necessary thing
16:43:16 <jgriffith> clayg: So the issue was raised in a bug... lemme find it
16:43:20 <clayg> and then they just choose the impl (and flags) that best match the service level they want to apply
16:43:36 <DuncanT> I *really* don't think relying on the customer to do it is reasonable
16:43:41 <jgriffith> The reasoning was they wanted the ability to speed things up; this is mostly only applicable to the LVM case
16:44:02 <DuncanT> The only time you might want to turn it off is a test build, where speed is more useful
16:44:24 <jgriffith> DuncanT: Fair, but really it sounds like maybe it's not even a necessary option any more?
16:44:27 <DuncanT> Getting data security right is hard, don't let the customer get it wrong where at all possible
16:44:38 <jgriffith> DuncanT: I can see your point
16:44:59 <DuncanT> jgriffith: 'I built my openstack cloud and it leaked all my data to other users' is not a good headline
16:45:00 <clayg> DuncanT: yeah well, write zeros once, or some other silly "shred" business is where I tell the customer they're welcome to whatever.  I think a simple wipe over the drive with zeros is all a deployer would want to do (but that's assuming it's a raw device; vhd's and other file-based backends don't ever really "wipe", they just do their append thing)
16:45:03 <rongze> I agree with DuncanT
16:45:25 <DuncanT> So even the devstack default should be 'safe'
16:45:36 <DuncanT> But a flag for power users might be appreciated
16:45:50 <jgriffith> clayg: DuncanT Ok, so I'm wondering if this is even something we should mess with then
16:45:56 <DuncanT> (simple wipe of zeros is as safe as a shred)
16:46:03 <clayg> DuncanT: +1
16:46:27 <jgriffith> I mean really we don't want to do this anywhere else except maybe in testing, but even then it's not that big of a deal is it?
16:46:35 <bswartz> so are we proposing wiping with zeros on creation or deletion (for LVM volumes)?
16:46:46 <clayg> I would suggest serious developers not even do it in testing - who raised the bug?
16:47:03 <clayg> bswartz: that seems to be the way we're going
16:47:29 <bswartz> but we're already wiping on deletion -- and that is the root cause of the bug (unless I misunderstand)
16:47:38 <jgriffith> https://bugs.launchpad.net/cinder/+bug/1022511
16:47:39 <uvirtbot> Launchpad bug 1022511 in nova "Allow for configurable policy for wiping data when deleting volumes" [Undecided,In progress]
16:48:07 <DuncanT> bswartz: wiping snapshots is the (hang) problem, not normal volumes
16:48:11 <bswartz> my suggestion for addressing the bug was to move the zero operation from delete time to create time, and to no longer zero deleted snapshots
16:48:13 <jgriffith> bswartz: Correct... but the bug is another issue and it's ONLY snapshot LVM's
16:48:41 <jgriffith> bswartz: yes, and that's the way we're going with the bug
16:49:02 <jgriffith> bswartz: This topic was about the bug raised for folks wanting to be able to configure various methods of wiping data
16:49:14 <jgriffith> but it's sounding like maybe this is a non-issue
16:49:30 <jgriffith> We stick with zeroing on LVM and let the third party vendors do whatever they do
16:49:59 <clayg> yeah I'd mark the bug as "opinion" and ask the submitter to file a blueprint
16:50:03 <clayg> ^ for grizzly!
16:50:03 <uvirtbot> clayg: Error: "for" is not a valid command.
16:50:07 <bswartz> On NetApp, newly created volumes are sparse (all zeros) so it doesn't really affect us either way
16:50:08 <jgriffith> the initial reasoning behind the bug does make some sense
16:50:10 <clayg> uvirtbot: I hate you
16:50:10 <uvirtbot> clayg: Error: "I" is not a valid command.
16:50:23 <jgriffith> bswartz: Right but we have to think of the base case as well
16:50:47 <clayg> "environments where security is not a concern" should hopefully be very few
16:51:11 <winston-d> clayg, haha, uvirtbot hits back really soon
16:51:12 <jgriffith> Ok... so based on our conversation I'm going to nix this for now
16:51:33 <DuncanT> I'd suggest a) leave it up to the driver b) default the lvm/iscsi driver to zero as agreed c) have a flag for power users who don't care about zeroing and just want to quickly test stuff on devstack
16:51:41 <jgriffith> clayg and virtbot are always entertaining
16:52:17 <jgriffith> DuncanT: a = yes, b = yes, c = I don't see the point
16:52:27 <jgriffith> I don't want devstack tests running without this
16:52:36 <jgriffith> The snapshot delete bug is the perfect example
16:52:49 <bswartz> I agree with jgriffith here
16:52:56 <DuncanT> jgriffith: I sometimes do load testing on devstack, and the dd is the biggest time consumer, by a large factor
16:53:23 <DuncanT> I can always continue to hack this by hand if nobody agrees with me :-)
16:53:27 <bswartz> if you want the default driver to not zero for performance reasons, you can hack the driver in your environment
16:53:47 <jgriffith> DuncanT: I see your point, I really do and I don't disagree entirely
16:54:03 <DuncanT> but I'm far from the only person who wants this: https://lists.launchpad.net/openstack/msg14333.html
16:54:13 <jgriffith> The problem is that if everybody gets in the habit of doing this in their devstack tests we never see bugs like the LVM one above until it's too late
16:54:25 <clayg> maybe just have a flag for the lvm driver where you can choose which command to use (instead of dd, you could give it "echo")
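A sketch of clayg's flag idea; VOLUME_CLEAR_CMD is a hypothetical name (the real implementation would hang off the LVM driver's config options), and 'echo' turns the wipe into a near no-op for devstack-style load testing:

    import subprocess

    VOLUME_CLEAR_CMD = 'dd'  # hypothetical flag; 'echo' for test deployments

    def clear_volume(dev_path, size_mb):
        if VOLUME_CLEAR_CMD == 'echo':
            subprocess.check_call(['echo', dev_path])  # skip the real wipe
        else:
            subprocess.check_call(['dd', 'if=/dev/zero', 'of=%s' % dev_path,
                                   'bs=1M', 'count=%d' % size_mb])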
16:54:33 <jgriffith> DuncanT: yes, lots of people state they want it, that's why I initially thought it would be good
16:55:08 <clayg> jgriffith: even if everyone else runs quick dd in their devsetups, me and you don't have to :)
16:55:08 <jgriffith> clayg: Yeah, that was my initial thought on all of this, but now I'm concerned about the test matrix :(
16:55:15 <jgriffith> :)
16:55:24 <jgriffith> clayg: Ok, that's a good compromise for me
16:55:33 <DuncanT> clayg++
16:55:35 <jgriffith> I'll concede
16:55:54 <jgriffith> Ok... so we move forward with implementing this:
16:56:00 <jgriffith> 1. It's a flag set by admin/power user
16:56:14 <jgriffith> 2. Probably not Folsom but Grizzly time frame
16:56:17 <clayg> either way, I think the bug as written is an opinion, and w/o a blueprint all that other stuff doesn't belong in Folsom.
16:56:31 <jgriffith> clayg: agreed...
16:56:56 <jgriffith> Ok... anybody have anything else on this topic?
16:57:09 <jgriffith> Last minute pleas to change our minds and get it in Folsom etc?
16:57:46 <jgriffith> #topic docs
16:57:56 <jgriffith> Ok... I need help!
16:58:01 * DuncanT hides under the table
16:58:10 <clayg> ehhe - I acctually really do have to leave
16:58:14 <clayg> that's funny
16:58:16 <jgriffith> Anybody and everybody we need to come up with good documentation
16:58:22 <jgriffith> clayg: Not funny at all :(
16:58:33 <jgriffith> Alright... I guess that closes that topic
16:58:37 <jgriffith> I'll deal with it
16:58:38 <clayg> but maybe I can help... I've been working on the api fixes - maybe I could start there?
16:59:07 <jgriffith> So really I'd even be happy if folks just wrote up a google doc on their driver and I'll convert it to the xml etc
16:59:09 <clayg> do the api docs go into sphinx or that other openstack-docs project that anne deals with?
16:59:28 <jgriffith> Or even better any feature/change you implemented do a short write up on it and send it to me
16:59:29 <bswartz> I'm joining another meeting in 1 minute -- I do have plans to document the netapp drivers
16:59:32 <jgriffith> sphinx
16:59:36 <jgriffith> Oh... no
16:59:40 <jgriffith> openstack-docs
16:59:46 <clayg> yeah... I'll have to look into that
16:59:48 <jgriffith> I sent the link in the last meeting
17:00:07 <clayg> last thought, I have to bolt, jenkins is all up in my reviews blowing up my change sets?
17:00:23 <clayg> it *really* looks like a environment problem and not a valid failure
17:00:27 <jgriffith> clayg: I'll have a look, been having issues the past few days just doing rechecks
17:00:39 <jgriffith> I'll check them out and run recheck if I dont see something in there
17:00:58 <jgriffith> #topic open discussion
17:01:07 <jgriffith> alright, we have 30 seconds :)
17:01:12 <clayg> jgriffith: thanx
17:01:16 <jgriffith> clayg: NP
17:01:41 <jgriffith> Anybody have anything pressing they want to bring up?
17:01:52 <jgriffith> Keep in mind RC1 cut off at the end of this week
17:02:14 <jgriffith> Please keep an eye on reviews (I still have a couple large ones that I need eyes on)
17:02:17 <creiht> I have a quick question
17:02:19 <jgriffith> And there are others rolling in
17:02:22 <jgriffith> creiht: Go
17:02:44 <creiht> What should the expected behavior be for someone who tries to attach/detach a volume to an instance that has been shutdown?
17:03:04 <jgriffith> creiht: Hmmm... that's interesting
17:03:12 <creiht> indeed :)
17:03:15 <jgriffith> creiht: TBH I hadn't thought of it
17:03:31 <winston-d> successful i guess?
17:03:32 <jgriffith> creiht: First thought is that since the instances are ephemeral it should fail
17:03:37 <jgriffith> :)
17:03:44 <jgriffith> So much for my first thought
17:03:47 <winston-d> just like you install a new hd drive into your PC?
17:04:26 <jgriffith> yeah, but libvirt won't have a way to make the connection
17:04:40 <jgriffith> creiht: Have you tried this?
17:05:05 <jgriffith> Of course I'm not even familiar with how you have a *shutdown* instance
17:05:17 <jgriffith> But that's my own ignorance I have to deal with :)
17:05:28 <DuncanT> When are we supposed to have the backports to nova-volume done?
17:05:42 <jgriffith> DuncanT: I think that will start next week
17:06:08 <jgriffith> DuncanT: I looked a bit yesterday and we've been fairly good about doing this as we go anyway so it may not be so bad
17:06:23 <DuncanT> You're right, there isn't much
17:06:24 <jgriffith> I'll probably do a great big meld on the /volume directory  :)
17:06:24 <winston-d> can i restart a 'shutdown' instance? if so, then attach should be successful.
17:06:48 <jgriffith> winston-d: If you can then I can see your point
17:06:59 <jgriffith> winston-d: I just don't know how that works with libvirt
17:07:20 <jgriffith> Really, that becomes more of a nova-compute question and I'll have to play around with it
17:07:37 <jgriffith> #action jgriffith Look into creiht request about attach to shutdown instance
17:07:57 <winston-d> libvirt can track the change in the instance xml configuration, i think
17:08:12 <creiht> winston-d: yes
17:08:14 <jgriffith> winston-d: yes, I think you're correct
17:08:32 <jgriffith> winston-d: In which case it should just execute when the instance starts
17:08:39 <creiht> The more common use case is someone shuts down their instance
17:08:48 <creiht> and wants to detach a volume, so they can attach elsewhere
17:09:18 <jgriffith> creiht: In that case would it matter?
17:09:36 <jgriffith> creiht: In the case of LVM would the device still be considered as mounted?
17:09:41 <winston-d> creiht, so shutting down an instance doesn't automatically detach the volume?
17:10:18 <jgriffith> err... that last part didn't really make sense I don't think
17:10:26 <creiht> winston-d: it doesn't
17:10:44 <jgriffith> creiht: winston-d Would that be something we should implement?
17:10:45 <creiht> jgriffith: when a volume is attached, it can't be attached to another instance
17:10:58 <jgriffith> creiht: Yeah, understand
17:11:01 <creiht> jgriffith: I don't think we want to auto-detach on shutdown
17:11:14 <winston-d> creiht, then i think we should allow detach from shutdown instance.
17:11:15 <creiht> as you want volumes to persist if you reboot
17:11:23 <creiht> that's what I was thinking
17:11:36 <winston-d> creiht, i agree
17:11:38 <creiht> I just don't think anyone else has really thought it through
17:12:01 <creiht> it is one of those edge cases you don't think about until you have a customer trying to do it :)
17:12:09 <jgriffith> creiht: winston-d Seems like a good case to me
17:12:22 <jgriffith> Grizzly, we'll do a blueprint
17:12:33 <jgriffith> Seems like the best answer
17:12:44 <winston-d> what if someone live migrates an instance with volume attached? how are we dealing with this case?
17:12:56 <jgriffith> although I still have to understand the case of shutting down an instance and restarting it
17:13:05 <jgriffith> Didn't know/think you could even do that
17:13:22 <creiht> winston-d: in that case they are detached and re-attached
17:13:39 <winston-d> jgriffith, hotplug
17:13:50 <winston-d> creiht, yeah, that's what i guess.
17:13:51 <rongze> jgriffith, what blueprint?
17:14:07 <jgriffith> rongze: I was suggesting that a blueprint should be created for:
17:14:19 <jgriffith> allowing detach from an instance that is shutdown
17:14:41 <winston-d> as well as attach to an instance that is shutdown?
17:14:51 <jgriffith> winston-d: Ooops.. yes, forgot that part
17:14:57 <rongze> nice blueprint
17:15:02 <jgriffith> :)
17:15:07 <jgriffith> Pretty cool actually
17:15:31 <rongze> using a mobile phone to log in to irc is really bad....
17:15:36 <jgriffith> LOL
17:15:44 <jgriffith> I tried that once... no fun
17:16:08 <jgriffith> creiht: does that sum up what you were thinking?
17:16:16 <winston-d> if you're just lurking... it might be ok
17:16:20 <creiht> jgriffith: I believe so
17:16:46 <jgriffith> cool, we can add additional thoughts/ideas once we get a blueprint posted
17:16:57 <creiht> thx
17:17:10 <jgriffith> creiht: Grizzly-1
17:17:34 <jgriffith> Alright... I've gotta run unfortunately
17:17:54 <jgriffith> Thanks everyone!
17:17:58 <jgriffith> #endmeeting