Wednesday, 2012-09-05

16:01 <winston-d> hi guys
16:01 <jgriffith> Hello!
16:01 <jgriffith> #startmeeting cinder
16:01 <openstack> Meeting started Wed Sep  5 16:01:46 2012 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01 <openstack> The meeting name has been set to 'cinder'
16:01 <jgriffith> Sorry I'm a few minutes late
16:02 <jgriffith> busy busy
16:02 *** ncode has joined #openstack-meeting
16:02 <jgriffith> Who do we have?
16:02 <winston-d> hi, john
16:02 <jgriffith> winston-d: Hello!
16:02 <rongze> hi, everyone
16:02 <winston-d> hi, rongze
16:02 <jgriffith> rongze: Hi ya
16:03 <jgriffith> Just the three of us?
16:03 *** clayg has joined #openstack-meeting
16:03 <clayg> heheh
16:03 <winston-d> :)
16:03 <jgriffith> :)
16:03 <rongze> hehe
16:03 <jgriffith> So I had a couple of things I wanted to talk about
16:04 <dtynan> hello
16:04 * jgriffith rifling desk for notes
16:04 <DuncanT> Hey, sorry
16:04 <jgriffith> #topic snapshot deletes
16:04 *** openstack changes topic to "snapshot deletes"
16:04 <jgriffith> https://bugs.launchpad.net/cinder/+bug/1023755
16:04 <uvirtbot> Launchpad bug 1023755 in nova "Unable to delete the volume snapshot" [Undecided,New]
16:04 <jgriffith> So this thorn in the side is starting to make a bit more sense
16:05 *** al-maisan is now known as almaisan-away
16:05 <jgriffith> It seems that performing the dd to LVM snapshot volumes >= 1G tends to result in a kernel hang
16:05 <jgriffith> In the cases that it doesn't hang, unfortunately it's DOG slow
16:06 <jgriffith> It appears, based on a little bit of googling, that this is a kernel bug
16:06 *** ryanpetrello has quit IRC
16:06 <jgriffith> So, I've been trying to find other solutions
16:06 <jgriffith> But ultimately I'm wondering:
16:06 <jgriffith> 1. Do we need to zero out the snapshot LVM volume at all?
16:07 <DuncanT> The problem with not zeroing is you risk leaking your data to the next users of the space
16:07 <jgriffith> 2. If we do, any suggestions on another method that might be a bit less intensive?
16:07 <jgriffith> DuncanT: Yes, understood
16:07 *** lloydde has joined #openstack-meeting
16:08 <jgriffith> DuncanT: However, being a snap, it's only the COW blocks that are actually there, right?
16:08 <DuncanT> We do our scrubbing out-of-line, but that is kind of tricky with LVM
16:08 <bswartz> is that a limitation of LVM?
16:08 <winston-d> maybe we can do that in an async way
16:08 <jgriffith> DuncanT: With LVM it's not possible that I can see
16:08 <jgriffith> winston-d: How do you mean?
16:08 *** Dr_Who has quit IRC
16:08 *** lloydde has quit IRC
16:08 <jgriffith> winston-d: The problem is the kernel hangs if you try to dd the entire volume
16:09 <winston-d> the kernel hang is another problem; async is to deal with the DOG slow part.
16:09 <DuncanT> It would be an easy enough hack to break the dd into chunks?
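DuncanT's chunked-write idea can be sketched in Python (cinder's implementation language). This is a hypothetical helper, not cinder code: instead of one enormous dd, it zeros the device in small writes so the kernel never has a single huge I/O request outstanding. The function name and chunk size are illustrative assumptions.

```python
import os

def zero_device_in_chunks(path, size_bytes, chunk_bytes=4 * 1024 * 1024):
    """Write zeros over a block device (or file) in fixed-size chunks.

    Chunking keeps each individual write small, giving the kernel a
    chance to schedule other I/O between chunks rather than servicing
    one dd-sized write of the whole volume.
    """
    zeros = b"\x00" * chunk_bytes
    written = 0
    with open(path, "r+b") as dev:
        while written < size_bytes:
            n = min(chunk_bytes, size_bytes - written)
            dev.write(zeros[:n])
            written += n
        dev.flush()
        os.fsync(dev.fileno())  # make sure the zeros actually hit the media
    return written
```

Note this only changes the I/O pattern; on an LVM snapshot every chunk still allocates COW exceptions, so it would not by itself avoid the overflow problem clayg describes below.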
16:09 <clayg> jgriffith: probably because it can't fit all the exceptions
16:09 <jgriffith> winston-d: Ohh... we already background it so that's not a killer anyway
16:10 <jgriffith> clayg: ?
16:10 <clayg> how many extents do we allocate for the snapshot (same size as original volume?)
16:10 <jgriffith> clayg: Yes (same size)
16:10 <jgriffith> clayg: Oh, I think I know where you're going
16:10 <jgriffith> clayg: I don't think that's the problem though
16:10 <clayg> so when you write to a snapshot you have to track a) the new data and b) the exception metadata
16:11 <jgriffith> clayg: Ok... keep going :)
16:11 <clayg> you can overflow a snapshot
16:11 <clayg> either with writes into the snapshot, or writes into the origin
16:11 <jgriffith> But here's the thing...
16:11 <jgriffith> You can repro this by:
16:11 <jgriffith> 1. create volume
16:11 <jgriffith> 2. create snapshot
16:11 <jgriffith> 3. delete snapshot
16:11 *** jdurgin has joined #openstack-meeting
16:12 <jgriffith> Never mounted/wrote data etc
16:12 <clayg> delete snapshot == write full size of snapshot data into snapshot (zero it out)
16:12 <bswartz> who designed LVM to expose unwritten blocks as containing previously-written data instead of zeros?
16:12 *** spn has quit IRC
16:12 <clayg> in my experience - it won't fit
16:12 <jgriffith> clayg: Hmmm... my understanding was different but you make a good point
16:13 <jgriffith> clayg: The trouble is it's not like we get some percentage in and it pukes
16:13 <clayg> bswartz: instead of making raw volumes, you could make sparse volumes, you get an empty exception chain (no way to read existing data) - but there is a performance hit (reads and writes)
16:13 <jgriffith> clayg: It's unbelievably slow from the start and hits all sorts of kernel timeouts etc
16:14 <clayg> jgriffith: once the exception list overflows an extent it's expensive to write new blocks (or read old ones really)
16:14 <jgriffith> clayg: makes sense.. kinda
16:14 <bswartz> We could zero on creation instead of deletion
16:14 <jgriffith> bswartz: Same problem and wouldn't solve the issue
16:14 <bswartz> that would sidestep the problem
16:14 <jgriffith> bswartz: It is the same problem
16:14 <jgriffith> bswartz: Remember in this case we never wrote anything
16:14 <clayg> jgriffith: well delete just becomes an lvremove
16:14 <DuncanT> Can you get the underlying disk range and blank it directly? Not familiar enough with lvm to know, sorry
16:15 <jgriffith> bswartz: So in essence we're doing exactly what you state
16:15 <jgriffith> clayg: yes
16:15 <clayg> jgriffith: and on create - since it's a raw volume, the dd performs more like you would expect
16:15 <bswartz> if you zero on creation instead of deletion then you never need to zero snapshots ever
16:15 *** dolphm_ has quit IRC
16:15 <clayg> DuncanT: absolutely, ls /dev/mapper and a bit of dmsetup
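DuncanT's question (find the underlying disk range and blank it directly) is what `dmsetup table` exposes: each linear target line has the form `<start> <length> linear <major:minor> <offset>`, with start/length/offset in 512-byte sectors. A hypothetical Python sketch of the parsing step — the helper name is invented, and it only handles linear targets:

```python
def linear_extent_ranges(dmsetup_table, sector_bytes=512):
    """Parse `dmsetup table` output and return, for each linear target,
    the byte range it occupies on its backing device as a
    (backing_device, offset_bytes, length_bytes) tuple.
    """
    ranges = []
    for line in dmsetup_table.strip().splitlines():
        fields = line.split()
        # linear target line: "<start> <length> linear <major:minor> <offset>"
        if len(fields) >= 5 and fields[2] == "linear":
            length, backing, offset = int(fields[1]), fields[3], int(fields[4])
            ranges.append((backing, offset * sector_bytes, length * sector_bytes))
    return ranges
```

Blanking those ranges on the backing device directly would bypass the snapshot COW path entirely, but as DuncanT notes below, writing behind LVM's back risks making a mess of the lvm metadata.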
16:15 <jgriffith> bswartz: Wait... are you saying zero on volume creation?
16:15 *** ryanpetrello has joined #openstack-meeting
16:15 <clayg> [16:14]      bswartz | We could zero on creation instead of deletion
16:15 <jgriffith> bswartz: I'm confused by what you're suggesting
16:16 <DuncanT> Except that a) customers tend to expect their data to be gone once they delete the volume b) create then becomes slow
16:16 <jgriffith> clayg: in essence that's what we're doing anyway!
16:16 <bswartz> if you zero newly created volumes then nobody will ever see old data in their volumes
16:16 <jgriffith> clayg: I'm saying in this particular case I never used the volume or the snap
16:16 *** hemna has joined #openstack-meeting
16:16 <jgriffith> clayg: So I don't see the difference, it's basically creating then zeroing already
16:16 *** Mandell has joined #openstack-meeting
16:16 <jgriffith> zeroing is the problem
16:16 <winston-d> jgriffith, do you have a link to the kernel bug?
16:17 <bswartz> yeah but zeroing new volumes allows you to not bother zeroing snapshots
16:17 <bswartz> that's all I was getting at
16:17 <DuncanT> clayg: I thought you could read the tables out of the /dev/mapper devices but I haven't tried... might end up making a nasty mess of the lvm metadata though
16:17 <clayg> DuncanT: yes, absolutely on both counts
16:17 <jgriffith> winston-d: Nope, haven't tracked it down but found a bit of info from other folks having similar issues with 3.2 and dev/mapper files
16:17 *** rhagarty has quit IRC
16:17 <jgriffith> bswartz: How?
16:18 <jgriffith> bswartz: users aren't likely to snapshot an empty volume
16:18 <jgriffith> bswartz: I kinda see what you're saying but don't think it solves the secure data issue
16:19 <clayg> jgriffith: if during 'creating' you write zeros to the entire volume - you never "expose" data across volumes
16:19 <jgriffith> The struggle I'm having is this... you can avoid the security leakage problem, or you can actually be able to delete snapshots :)
16:19 <jgriffith> clayg: bswartz: AHHHH
16:19 <jgriffith> clayg: bswartz: Finally, I see what you're saying :)
16:19 <clayg> jgriffith: np, I think DuncanT already made the valid critiques of why it's not a good idea
16:19 <clayg> but you have to admit it would *work* :P
16:20 <jgriffith> :)  Yes, I believe it would
16:20 <clayg> s/a good idea/the ideal solution/
16:20 <clayg> bswartz: it's a quite good idea
16:20 <DuncanT> If the other option is 'not working', take the slow option :-)
16:21 <DuncanT> We (HP) don't particularly care, we don't use LVM...
16:21 <rongze> it is an lvm issue...
16:21 *** dolphm_ has joined #openstack-meeting
16:22 <jgriffith> rongze: Yes, it's lvm only
16:22 <jgriffith> So it's ugly, but a workaround:
16:22 <winston-d> nobody uses lvm iscsi in production, right?
16:22 <jgriffith> 1. zero out newly created volumes
16:22 <jgriffith> 2. zero out deleted volumes still as well (try to meet expectations)
16:23 <jgriffith> 3. skip zero out on snapshot delete
16:23 <clayg> jgriffith: seems quite reasonable to me
16:23 <clayg> step 4. Make it better later
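The three-step workaround agreed above can be sketched as driver logic. This is a hypothetical Python sketch, not the actual cinder driver: `create_lv`, `clear_device`, and `remove_lv` stand in for whatever lvcreate/dd/lvremove wrappers a backend provides, and are injected so vendor drivers can substitute their own scrubbing.

```python
def create_volume(name, create_lv, clear_device):
    """Step 1: zero out newly created volumes, so a tenant can never
    read a previous tenant's data from freshly allocated extents."""
    create_lv(name)
    clear_device(name)

def delete_volume(name, is_snapshot, clear_device, remove_lv):
    """Steps 2 and 3: still scrub regular volumes on delete (meeting
    the expectation that deleted data is gone), but skip the scrub for
    LVM snapshots -- dd-ing a snapshot's COW device is what hangs the
    kernel -- so snapshot delete becomes just an lvremove."""
    if not is_snapshot:
        clear_device(name)
    remove_lv(name)
```

Because step 1 already guarantees new volumes start as zeros, skipping the scrub on snapshot delete does not leak data across tenants.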
16:24 <jgriffith> clayg: Yes!
16:24 <winston-d> jgriffith, makes sense as a bug fix
16:24 <jgriffith> Step 4 is very important :)
16:24 <DuncanT> Seems reasonable now - a flag to turn off all the zeroing (or only zero the first meg of new volumes, more sensibly) for test deployments might be appreciated too
16:24 <clayg> lol
16:24 *** Gordonz has quit IRC
16:24 <jgriffith> I suppose we could even do this intelligently based on kernel version?
16:24 <jgriffith> DuncanT: That's my next topic! :)
16:24 *** Gordonz has joined #openstack-meeting
16:25 <jgriffith> Ok... seems like we have a plan for this one
16:25 <jgriffith> Any last thoughts before I move on?
16:25 <DuncanT> Not sure if it is a kernel version issue - fixed size snapshots that aren't bigger than the origin will always overflow if you fill them (which is what the basic dd does)
16:25 <jgriffith> DuncanT: Here's a proposal
16:26 <jgriffith> DuncanT: I'll try the same test with a snapshot > volume size
16:26 <jgriffith> DuncanT: My theory is that it doesn't have anything to do with the size
16:26 <DuncanT> ok
16:26 <jgriffith> DuncanT: But maybe/hopefully I'll be wrong
16:26 <jgriffith> DuncanT: If I come across something different I'll send a note out to everyone
16:26 <jgriffith> Sound good to everyone?
16:27 <DuncanT> Sounds good to me
16:27 <winston-d> yes
16:27 <jgriffith> clayg: thoughts?
16:27 <clayg> jgriffith: sounds good to me
16:27 <jgriffith> cool!  Moving on
16:27 <clayg> I think "filling up the snapshot exception chain with zeros" is a bad idea ripe to be abandoned
16:28 <jgriffith> clayg: Well last night at about 11:00 I definitely agreed with you on that!
16:28 <clayg> ya, moving on
16:28 <jgriffith> :)
16:29 <jgriffith> #topic configurable zeroing of volumes on delete
16:29 *** openstack changes topic to "configurable zeroing of volumes on delete"
16:29 <jgriffith> We talked about setting this via flags and where it should live/be implemented
16:29 <jgriffith> here are my opinions:
16:29 <jgriffith> 1. It should be implemented in the driver
16:30 <jgriffith> That way it can be implemented/handled specifically however is needed
16:30 <jgriffith> Vendors that have more clever options can use them
16:30 <jgriffith> etc etc
16:30 *** Dr_Who has joined #openstack-meeting
16:30 <jgriffith> 2. Rather than using a flag and making it a global setting, make it an optional argument to the delete call
16:31 <jgriffith> This makes the most sense to me, an admin can use a flag to set an override policy for all tenants/volumes
16:31 <DuncanT> 1. Tentatively agree 2. Disagree
16:31 <jgriffith> But if they want to provide an option to the tenant on a case by case basis they can
16:31 <jgriffith> DuncanT: Ok... Let's start with the tentative:
16:32 <jgriffith> Reasons not to implement in the driver?
16:32 <jgriffith> And where would you implement it?
16:33 <winston-d> are we talking about where the flag should be defined or implemented?
16:33 <DuncanT> I raised the point about it being something many volume providers might well want, meaning it is maybe better done as a library call, but I might well be wrong about the number of drivers that actually want the basic version, so I'm entirely happy to be told I was worrying excessively - it isn't like it is tricky code
16:33 *** kindaopsdevy has joined #openstack-meeting
16:33 <bswartz> I prefer (1)
16:33 <rongze> I support implementing it in the driver
16:33 <jgriffith> DuncanT: I guess my reasoning for having it in the driver is to achieve exactly what you describe
16:34 <DuncanT> Yeah, ok, I'm convinced, keep it in the driver, I withdraw the tentative
16:34 <jgriffith> The third party drivers override the delete operations anyway, so they can do their magic however they see fit and the rest of the world doesn't have to know/care about it
16:34 <jgriffith> Sweet!
16:34 <clayg> jgriffith: only way that makes sense to me
16:34 <jgriffith> clayg: agreed
16:34 <DuncanT> I have a small worry that some 3rd party drives might not consider it, but I'm not prepared to worry overly :-)
16:35 <clayg> ... but I don't really see your point on not having it as a driver specific flag?
16:35 <jgriffith> DuncanT: That's the beauty of it
16:35 <DuncanT> s/drives/drivers/
16:35 <jgriffith> DuncanT: They don't have to
16:35 <clayg> DuncanT: operators would not deploy those drivers?  :P
16:35 <jgriffith> DuncanT: Then it's just not supported and they do whatever they *normally* do on a delete
16:36 <jgriffith> DuncanT: They have to implement delete_volume right?
16:36 <jgriffith> DuncanT: So they just ignore options regarding zeroing out etc
16:36 <jgriffith> Seems like the right way to go to me
16:36 <jgriffith> Or as clayg states, those drivers don't get to play :)
16:36 <jgriffith> Just kidding
16:37 <DuncanT> L-)
16:37 <DuncanT> :-)
16:37 <DuncanT> Gah, can't type
16:37 <DuncanT> Ok, shall we consider 2. ?
16:37 <clayg> so - if it's implemented in the driver - why isn't it a driver specific flag?
16:37 <jgriffith> Yes, if everybody is comfortable with item 1
16:37 <bswartz> I'd just like to point out that leaving it up to the drivers is effectively the situation we have right now
16:37 <jgriffith> bswartz: True.. but
16:37 <bswartz> so this isn't a change except for the LVM-based driver
16:38 <rongze> I think only lvm cares about the flag...
16:38 <jgriffith> bswartz: The reality is the driver is the one who has to implement/do the work anyway
16:38 <jgriffith> bswartz: It may or may not be, depends on what the driver/vendor is capable of
16:38 <rongze> other drivers can do nothing
16:39 *** dolphm_ has quit IRC
16:39 <winston-d> rongze, why is that?
16:39 <jgriffith> rongze: Yeah, but some devices may have options here
16:39 <jgriffith> rongze: other than just LVM... for example an HP array has a shred method
16:39 <clayg> or it may not apply to a sparse volume, or file based backing store.
16:40 <jgriffith> rongze: And it may also have a DOD compliant shred
16:40 <clayg> customer could always zero the volume before calling delete
16:40 <jgriffith> So this would allow those to be selected/implemented
16:40 <rongze> yes
16:40 *** danwent has joined #openstack-meeting
16:40 <jgriffith> clayg: yes, I personally prefer the customer do what they want up front :)
16:40 <creiht> hah
16:40 <clayg> so back to... you were suggesting something about an additive addition to the api?
16:41 <jgriffith> clayg: Ahh... right
16:41 <jgriffith> So I'm not a huge fan of flags
16:41 <clayg> I LOVE FLAGS!
16:41 <clayg> oh wait..
16:41 <jgriffith> They're global for every tenant, every volume etc etc
16:41 * jgriffith slaps clayg upside the head
16:42 <jgriffith> So say for example a tenant has two volumes...
16:42 <jgriffith> One has credit card and billing info data stored on it
16:42 <jgriffith> The other has pictures of kittens
16:42 <clayg> is there a bug for this?  who actually raised the issue?  I think it's totally reasonable that a deployer/operator would say (these are your volume options, this is our security policy - deal with it)
16:43 <DuncanT> I think, in general, zeroing volumes is a good and necessary thing
16:43 <jgriffith> clayg: So the issue was raised in a bug... lemme find it
16:43 <clayg> and then they just choose the impl (and flags) that best match the service level they want to apply
16:43 <DuncanT> I *really* don't think relying on the customer to do it is reasonable
16:43 <jgriffith> The reasoning was they wanted the ability to speed things up; this is mostly only applicable to the LVM case
16:43 *** matiu has quit IRC
16:44 <DuncanT> The only time you might want to turn it off is a test build, where speed is more useful
16:44 <jgriffith> DuncanT: Fair, but really it sounds like maybe it's not even a necessary option any more?
16:44 <DuncanT> Getting data security right is hard, don't let the customer get it wrong where at all possible
16:44 *** dolphm_ has joined #openstack-meeting
16:44 <jgriffith> DuncanT: I can see your point
16:44 *** anniec has quit IRC
16:44 *** darraghb has quit IRC
16:44 *** garyk has quit IRC
16:44 <DuncanT> jgriffith: 'I built my openstack cloud and it leaked all my data to other users' is not a good headline
16:45 <clayg> DuncanT: yeah well, write zeros once, or some other silly "shred" business is where I tell the customer they're welcome to whatever.  I think a simple wipe over the drive with zeros is all a deployer would want to do (but that's assuming it's a raw device; vhd's and other file based backends don't ever really "wipe", they just do their append thing)
16:45 <rongze> I agree with DuncanT
16:45 <DuncanT> So even the devstack default should be 'safe'
16:45 <DuncanT> But a flag for power users might be appreciated
16:45 <jgriffith> clayg: DuncanT: Ok, so I'm wondering if this is even something we should mess with then
16:45 <DuncanT> (a simple wipe of zeros is as safe as a shred)
16:46 <clayg> DuncanT: +1
16:46 <jgriffith> I mean really we don't want to do this anywhere else except maybe in testing, but even then it's not that big of a deal is it?
16:46 <bswartz> so are we proposing wiping with zeros on creation or deletion (for LVM volumes)
16:46 <clayg> I would suggest serious developers not even do it in testing - who raised the bug?
16:47 <clayg> bswartz: that seems to be the way we're going
16:47 <bswartz> but we're already wiping on deletion -- and that is the root cause of the bug (unless I misunderstand)
16:47 <jgriffith> https://bugs.launchpad.net/cinder/+bug/1022511
16:47 <uvirtbot> Launchpad bug 1022511 in nova "Allow for configurable policy for wiping data when deleting volumes" [Undecided,In progress]
16:48 <DuncanT> bswartz: wiping snapshots is the (hang) problem, not normal volumes
16:48 <bswartz> my suggestion for addressing the bug was to move the zero operation from delete time to create time, and to no longer zero deleted snapshots
16:48 <jgriffith> bswartz: Correct... but the bug is another issue and it's ONLY snapshot LVM's
16:48 <jgriffith> bswartz: yes, and that's the way we're going with the bug
16:49 <jgriffith> bswartz: This topic was about the bug raised for folks wanting to be able to configure various methods of wiping data
16:49 <jgriffith> but it's sounding like maybe this is a non-issue
16:49 <jgriffith> We stick with zeroing on LVM and let the third-party vendors do whatever they do
16:49 <clayg> yeah I'd mark the bug as "opinion" and ask the submitter to file a blueprint
16:50 <clayg> ^ for grizzly!
16:50 <uvirtbot> clayg: Error: "for" is not a valid command.
16:50 <bswartz> On NetApp, newly created volumes are sparse (all zeros) so it doesn't really affect us either way
16:50 <jgriffith> the initial reasoning behind the bug does make some sense
16:50 <clayg> uvirtbot: I hate you
16:50 <uvirtbot> clayg: Error: "I" is not a valid command.
16:50 <jgriffith> bswartz: Right, but we have to think of the base case as well
16:50 <clayg> "environments where security is not a concern" should hopefully be very few
16:50 *** s0mik has joined #openstack-meeting
16:51 <winston-d> clayg, haha, uvirtbot hits back really soon
16:51 <jgriffith> Ok... so based on our conversation I'm going to nix this for now
16:51 <DuncanT> I'd suggest a) leave it up to the driver b) default the lvm/iscsi driver to zero as agreed c) have a flag for power users who don't care about zeroing and just want to quickly test stuff on devstack
16:51 <jgriffith> clayg and uvirtbot are always entertaining
16:52 <jgriffith> DuncanT: a = yes, b = yes, c = I don't see the point
16:52 <jgriffith> I don't want devstack tests running without this
16:52 <jgriffith> The snapshot delete bug is the perfect example
16:52 <bswartz> I agree with jgriffith here
16:52 <DuncanT> jgriffith: I sometimes do load testing on devstack, and the dd is the biggest time consumer, by a large factor
16:53 <DuncanT> I can always continue to hack this by hand if nobody agrees with me :-)
16:53 <bswartz> if you want the default driver to not zero for performance reasons, you can hack the driver in your environment
16:53 *** ryanpetrello has quit IRC
16:53 <jgriffith> DuncanT: I see your point, I really do and I don't disagree entirely
16:54 <DuncanT> but I'm far from the only person who wants this: https://lists.launchpad.net/openstack/msg14333.html
16:54 <jgriffith> The problem is that if everybody gets in the habit of doing this in their devstack tests we never see bugs like the LVM one above until it's too late
16:54 <clayg> maybe just have a flag for the lvm driver where you can choose which command to use (instead of dd, you could give it "echo")
16:54 <jgriffith> DuncanT: yes, lots of people state they want it, that's why I initially thought it would be good
16:55 <clayg> jgriffith: even if everyone else runs quick dd in their devsetups, me and you don't have to :)
16:55 <jgriffith> clayg: Yeah, that was my initial thought on all of this, but now I'm concerned about the test matrix :(
16:55 <jgriffith> :)
16:55 <jgriffith> clayg: Ok, that's a good compromise for me
16:55 <DuncanT> clayg++
16:55 <jgriffith> I'll concede
16:55 <jgriffith> Ok... so we move forward with implementing this:
16:56 <jgriffith> 1. It's a flag set by admin/power user
16:56 <jgriffith> 2. Probably not Folsom but Grizzly time frame
16:56 <clayg> either way, I think the bug as written is an opinion, and w/o a blueprint all that other stuff doesn't belong in Folsom.
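The flag clayg and DuncanT converge on could look something like the following sketch. This is hypothetical: the option name `volume_clear` and its values ('zero', 'shred', 'none') are illustrative assumptions, not an agreed design. It maps the configured policy to the command the LVM driver would run, so power users can trade safety for speed on test deployments.

```python
def clear_command(volume_clear, dev_path, size_mb):
    """Map a hypothetical `volume_clear` option to the scrub command the
    LVM driver would run: 'zero' writes zeros with dd, 'shred' does
    multiple overwrite passes, 'none' skips scrubbing entirely (only
    sensible for throwaway test deployments)."""
    if volume_clear == "none":
        return None  # caller goes straight to lvremove
    if volume_clear == "zero":
        return ["dd", "if=/dev/zero", "of=%s" % dev_path,
                "bs=1M", "count=%d" % size_mb, "oflag=direct"]
    if volume_clear == "shred":
        return ["shred", "-n", "3", "-s", "%dM" % size_mb, dev_path]
    raise ValueError("unsupported volume_clear value: %s" % volume_clear)
```

Per the agreement above, the safe 'zero' behaviour would remain the default; 'none' exists only so DuncanT's devstack load tests don't spend all their time in dd.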
*** jog0 has joined #openstack-meeting16:56
jgriffithclayg: agreed...16:56
*** jog0 has joined #openstack-meeting16:56
jgriffithOk... anybody have anything else on this topic?16:56
jgriffithLast minute pleas to change our minds and get it in Folsom etc?16:57
jgriffiths/pleas/plees/16:57
jgriffith#topic docs16:57
*** openstack changes topic to "docs"16:57
<jgriffith> Ok... I need help!  16:57
* DuncanT hides under the table  16:58
<clayg> hehe - I actually really do have to leave  16:58
<clayg> that's funny  16:58
<jgriffith> Anybody and everybody: we need to come up with good documentation  16:58
<jgriffith> clayg: Not funny at all :(  16:58
<jgriffith> Alright... I guess that closes that topic  16:58
<jgriffith> I'll deal with it  16:58
<clayg> but maybe I can help... I've been working on the api fixes - maybe I could start there?  16:58
<jgriffith> So really I'd even be happy if folks just wrote up a google doc on their driver and I'll convert it to the xml etc  16:59
<clayg> do the api docs go into sphinx or that other openstack-docs project that anne deals with?  16:59
<jgriffith> Or even better any feature/change you implemented do a short write up on it and send it to me  16:59
<bswartz> I'm joining another meeting in 1 minute -- I do have plans to document the netapp drivers  16:59
<jgriffith> sphinx  16:59
<jgriffith> Oh... no  16:59
<jgriffith> openstack-docs  16:59
<clayg> yeah... I'll have to look into that  16:59
<jgriffith> I sent the link in the last meeting  16:59
*** ywu_ has joined #openstack-meeting17:00
<clayg> last thought, I have to bolt, jenkins is all up in my reviews blowing up my change sets?  17:00
<clayg> it *really* looks like an environment problem and not a valid failure  17:00
<jgriffith> clayg: I'll have a look, been having issues the past few days just doing rechecks  17:00
<jgriffith> I'll check them out and run recheck if I don't see something in there  17:00
<jgriffith> #topic open discussion  17:00
*** openstack changes topic to "open discussion"17:00
<jgriffith> alright, we have 30 seconds :)  17:01
<clayg> jgriffith: thanx  17:01
<jgriffith> clayg: NP  17:01
*** clayg has left #openstack-meeting17:01
<jgriffith> Anybody have anything pressing they want to bring up?  17:01
<jgriffith> Keep in mind the RC1 cutoff is at the end of this week  17:01
<jgriffith> Please keep an eye on reviews (I still have a couple large ones that I need eyes on)  17:02
<creiht> I have a quick question  17:02
<jgriffith> And there are others rolling in  17:02
<jgriffith> creiht: Go  17:02
<creiht> What should the expected behavior be for someone who tries to attach/detach a volume to an instance that has been shut down?  17:02
*** derekh has quit IRC17:02
<jgriffith> creiht: Hmmm... that's interesting  17:03
<creiht> indeed :)  17:03
<jgriffith> creiht: TBH I hadn't thought of it  17:03
<winston-d> successful, I guess?  17:03
<jgriffith> creiht: First thought is that since the instances are ephemeral it should fail  17:03
<jgriffith> :)  17:03
<jgriffith> So much for my first thought  17:03
<winston-d> just like you install a new hard drive into your PC?  17:03
*** lloydde has joined #openstack-meeting17:04
<jgriffith> yeah, but libvirt won't have a way to make the connection  17:04
<jgriffith> creiht: Have you tried this?  17:04
*** ywu_ has quit IRC17:04
<jgriffith> Of course I'm not even familiar with how you have a *shutdown* instance  17:05
<jgriffith> But that's my own ignorance I have to deal with :)  17:05
<DuncanT> When are we supposed to have the backports to nova-volume done?  17:05
<jgriffith> DuncanT: I think that will start next week  17:05
<jgriffith> DuncanT: I looked a bit yesterday and we've been fairly good about doing this as we go anyway so it may not be so bad  17:06
*** kindaopsdevy has left #openstack-meeting17:06
<DuncanT> You're right, there isn't much  17:06
<jgriffith> I'll probably do a great big meld on the /volume directory :)  17:06
<winston-d> can I restart a 'shutdown' instance? if so, then attach should be successful.  17:06
<jgriffith> winston-d: If you can then I can see your point  17:06
<jgriffith> winston-d: I just don't know how that works with libvirt  17:06
<jgriffith> Really, that becomes more of a nova-compute question and I'll have to play around with it  17:07
<jgriffith> #action jgriffith Look into creiht request about attach to shutdown instance  17:07
<winston-d> libvirt can track the change in the instance XML configuration, I think  17:07
<creiht> winston-d: yes  17:08
<jgriffith> winston-d: yes, I think you're correct  17:08
<jgriffith> winston-d: In which case it should just execute when the instance starts  17:08
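[Editor's note: a toy model of what winston-d describes — the attach lands in the persistent config (analogous to libvirt's domain XML) and only becomes a live device at the next start. This is an illustration, not libvirt's or nova's actual API.]

```python
# Toy model: attaching to a stopped instance only records the device in the
# persistent config; the device shows up live when the instance starts.
class ToyInstance:
    def __init__(self):
        self.state = "shutoff"
        self.persistent_disks = []  # stands in for the libvirt domain XML
        self.live_disks = []        # devices visible to the running guest

    def attach(self, dev):
        self.persistent_disks.append(dev)
        if self.state == "running":
            self.live_disks.append(dev)  # hotplug path

    def start(self):
        self.state = "running"
        self.live_disks = list(self.persistent_disks)
```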
<creiht> The more common use case is someone shuts down their instance  17:08
<creiht> and wants to detach a volume, so they can attach elsewhere  17:08
<jgriffith> creiht: In that case would it matter?  17:09
<jgriffith> creiht: In the case of LVM would the device still be considered as mounted?  17:09
<winston-d> creiht, so shutting down an instance doesn't automatically detach the volume?  17:09
<jgriffith> err... that last part didn't really make sense I don't think  17:10
<creiht> winston-d: it doesn't  17:10
<jgriffith> creiht: winston-d Would that be something we should implement?  17:10
<creiht> jgriffith: when a volume is attached, it can't be attached to another instance  17:10
<jgriffith> creiht: Yeah, understood  17:10
<creiht> jgriffith: I don't think we want to auto-detach on shutdown  17:11
<winston-d> creiht, then I think we should allow detach from a shutdown instance.  17:11
<creiht> as you want volumes to persist if you reboot  17:11
<creiht> that's what I was thinking  17:11
*** GheRivero has quit IRC17:11
<winston-d> creiht, I agree  17:11
<creiht> I just don't think anyone else has really thought it through  17:11
<creiht> it is one of those edge cases you don't think about until you have a customer trying to do it :)  17:12
<jgriffith> creiht: winston-d Seems like a good case to me  17:12
<jgriffith> Grizzly, we'll do a blueprint  17:12
<jgriffith> Seems like the best answer  17:12
<winston-d> what if someone live migrates an instance with a volume attached? how are we dealing with that case?  17:12
<jgriffith> although I still have to understand the case of shutting down an instance and restarting it  17:12
<jgriffith> Didn't know/think you could even do that  17:13
<creiht> winston-d: in that case they are detached and re-attached  17:13
<winston-d> jgriffith, hotplug  17:13
<winston-d> creiht, yeah, that's what I guessed.  17:13
<rongze> jgriffith, what blueprint?  17:13
*** lloydde has quit IRC17:13
<jgriffith> rongze: I was suggesting that a blueprint should be created for:  17:14
*** markmcclain has quit IRC17:14
<jgriffith> allowing detach from an instance that is shut down  17:14
<winston-d> as well as attach to an instance that is shut down?  17:14
<jgriffith> winston-d: Oops... yes, forgot that part  17:14
<rongze> nice blueprint  17:14
<jgriffith> :)  17:15
<jgriffith> Pretty cool actually  17:15
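[Editor's note: the blueprint being proposed boils down to widening the set of instance states in which attach/detach is allowed. A hypothetical check — state and status names are illustrative, not the actual nova/cinder values — could look like this.]

```python
# Hypothetical attach/detach policy: allow both operations against running
# and shut-down instances, per the discussion above.
ATTACH_STATES = {"running", "shutoff"}
DETACH_STATES = {"running", "shutoff"}

def can_attach(instance_state, volume_status):
    """A volume may be attached only if it isn't attached elsewhere."""
    return instance_state in ATTACH_STATES and volume_status == "available"

def can_detach(instance_state, volume_status):
    """Detach is allowed even when the instance is shut off."""
    return instance_state in DETACH_STATES and volume_status == "in-use"
```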
<rongze> using a mobile phone to log in to IRC is too bad....  17:15
<jgriffith> LOL  17:15
<jgriffith> I tried that once... no fun  17:15
<jgriffith> creiht: does that sum up what you were thinking?  17:16
<winston-d> if you're just lurking... it might be ok  17:16
<creiht> jgriffith: I believe so  17:16
<jgriffith> cool, we can add additional thoughts/ideas once we get a blueprint posted  17:16
<creiht> thx  17:16
<jgriffith> creiht: Grizzly-1  17:17
<jgriffith> Alright... I've gotta run unfortunately  17:17
<jgriffith> Thanks everyone!  17:17
<jgriffith> #endmeeting  17:17
*** openstack changes topic to "cidner-usecases"17:18
<openstack> Meeting ended Wed Sep  5 17:17:58 2012 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  17:18
<winston-d> thx, john  17:18
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/cinder/2012/cinder.2012-09-05-16.01.html  17:18
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/cinder/2012/cinder.2012-09-05-16.01.txt  17:18
<openstack> Log:            http://eavesdrop.openstack.org/meetings/cinder/2012/cinder.2012-09-05-16.01.log.html  17:18
*** dtynan has left #openstack-meeting17:18
<rongze> thx, john  17:18

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!