15:59:50 #startmeeting cinder
15:59:50 Meeting started Wed Nov 25 15:59:50 2015 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:59:53 The meeting name has been set to 'cinder'
16:00:10 Hi!
16:00:12 o/
16:00:12 Hi o/
16:00:12 hi
16:00:15 Hi
16:00:17 hello
16:00:18 o/
16:00:19 Hi
16:00:27 Hey everyone.
16:00:48 hi
16:00:52 hi
16:01:50 #Announcements
16:01:59 #topic Announcements
16:02:04 Sorry, slow today. :)
16:02:17 Just want to point out M-1 is coming up quickly.
16:02:24 hi
16:02:43 Based on the new deadlines, that's not a major point, but it's good to realize we are already that far into the cycle.
16:03:02 #link https://launchpad.net/cinder/+milestone/mitaka-1
16:03:30 Pretty decent list already there. :)
16:03:34 #topic Bug Status
16:03:53 Bugs are looking good. The numbers have actually gone down.
16:04:05 Thanks to those spending some time cleaning that up.
16:04:19 I've seen a lot of good updates there.
16:04:46 It looks like a good number of them could just be from the linkage not working and the bug not being changed to Fix Committed when patches have gone through.
16:04:46 hi
16:04:55 My hope is a lot of them can just be closed out at this point.
16:05:26 #info Cinder: 473 bugs, CinderClient: 43 bugs, OS-Brick: 15 bugs
16:05:37 mtanino: You've done quite a few. Thanks!
16:05:50 Still several nova volume-related bugs.
16:05:54 hello?
16:06:09 If anyone can, please take a look through the link and see if we can provide any input.
16:06:22 mtanino: Hi! Just thanking you. Nothing to worry about. :)
16:06:32 oops, meeting. sure :)
16:06:40 #link https://bugs.launchpad.net/nova/+bugs?field.status:list=NEW&field.tag=volumes
16:07:13 OK, enough boring administrative stuff. :)
16:07:24 #topic Copy encryptors from Nova to os-brick
16:07:31 lixiaoy1: Hi
16:07:58 hi
16:08:09 For reference, I added this to the nova meeting agenda too.
16:08:15 The meeting is tomorrow.
16:08:27 DuncanT: Awesome, thanks. I'll try to be there too.
16:08:54 I am concerned about how to move forward, as Nova doesn't want Cinder to decrypt volumes.
16:09:35 lixiaoy1: So if I remember right, some of our use cases are being questioned, but there is general agreement that copying a volume to an image will need us to be able to decrypt.
16:09:38 it'll be a problem if we implement attach without nova
16:09:44 lixiaoy1: Is that right?
16:09:58 e0ne: Oh yeah, that too.
16:10:18 nova is meeting tomorrow?
16:10:36 tomorrow is a US holiday
16:10:39 Two cases: 1. create an encrypted volume from an image, 2. retype a volume with different encryption. For example, retype an unencrypted volume to encrypted.
16:10:39 #link http://lists.openstack.org/pipermail/openstack-dev/2015-November/079964.html
16:10:47 DuncanT: I don't see a link to the nova meeting in our agenda.
16:10:55 bswartz: Oh yeah, good point.
16:11:05 What about this one? https://review.openstack.org/#/c/247577/ Integrate Castellan for Key Management
16:11:05 #link https://review.openstack.org/#/c/247372/
16:11:12 #link https://review.openstack.org/#/c/248593/
16:11:16 bswartz: November 26th 2015, 1400 UTC, #openstack-meeting (http://www.timeanddate.com/worldclock/fixedtime.html?iso=20151126T140000)
16:11:37 bswartz: At least according to https://wiki.openstack.org/wiki/Meetings/Nova
16:11:39 xyang: That just moves the key manager to a common library. keymgr is under cinder.
16:11:43 interesting -- I bet many US-based people will be absent
16:11:54 Well, John is in the UK, so maybe they don't care.
16:12:04 I think john still wanted to run the Nova meeting; there was a mail on the list today.
16:12:17 sorry, still digging for the link
16:13:14 I think putting up the patch to move the existing nova code into brick is worthwhile. There's a great deal of designing of new security models going on, but not a lot of solving existing problems, so I suspect nova will move to the brick code without too much trouble.
16:13:28 xyang: https://github.com/openstack/cinder/tree/master/cinder/keymgr
16:13:41 lixiaoy1: thanks
16:14:08 So, not to be a jerk, but nova can't block us from adding it to os-brick and cinder. Whether they want to use it or not is up to them.
16:14:21 Definitely not a stance I want to take, but that's the reality.
16:14:31 I think we've clearly identified the need for it.
16:15:05 smcginnis: +1
16:15:18 #link http://lists.openstack.org/pipermail/openstack-dev/2015-November/080388.html
16:15:22 And saying it's a security issue when anyone can go out to the nova source and get the code to decrypt a volume anyway kind of makes it an invalid argument IMO.
16:15:30 smcginnis: I think once the code is in brick, the patch to change nova to use that will go in - it doesn't actually change anything for them.
16:15:35 smcginnis: I agree. I think we can't leave cinder things to nova. That takes us back to nova-volume, and it blocks our future goal of an independent SDS.
16:15:58 DuncanT: I hope so.
16:16:36 Any other thoughts on this from anyone else?
16:17:05 So is it safe to say lixiaoy1 can move ahead with the work?
16:17:10 +1
16:17:26 yeah I want to offer a general +1 for that approach
16:17:48 the discussion on the ML made me a bit worried, but perhaps that's been resolved
16:18:06 Great. lixiaoy1, do you need any other input to be able to proceed?
16:18:30 bswartz: Maybe not resolved, but close enough. ;)
16:18:35 smcginnis: thanks, that is enough.
16:18:58 smcginnis: +1
16:19:31 lixiaoy1: Great. Thanks for pushing on with this. The patches and ML posts have been good for getting the discussion out there.
16:19:55 #topic CI Documentation
16:20:02 #link https://wiki.openstack.org/wiki/Documentation/VendorDrivers
16:20:09 I've added a column to the CI table.
16:20:23 I have three points for bringing this up.
16:20:51 First, we haven't really standardized on a recheck trigger for everyone, so there is confusion about how to do that when we see a CI failing and want a result.
16:21:13 If everyone operating a CI can update this table with the string(s) that will trigger theirs, we at least have a reference.
16:21:20 I like DIE DIE DIE, can we build on that?
16:21:22 it's nice.
16:21:33 Ideally I would like to move to a common pattern, but this is a start.
16:21:41 dulek: Yep. :)
16:21:53 Second point is just to remind everyone that this page exists.
16:22:12 So new driver submitters and current operators, please make sure this is up to date with your info.
16:22:32 smcginnis: Not to sidetrack, but what are the reasons for duplicating the work here in this wiki with driverlog?
16:22:40 I think we've had a case once or twice where the operator contact changed but this page still referenced someone who was no longer with the company.
16:23:09 thingee: Easy reference and update, I think. But I had wondered the same thing at the time.
16:23:22 I think for a little while there we had 3-4 places duplicating the same info.
16:23:35 And DuncanT did some work to try to make this automated.
16:23:56 With Cinder getting more serious about defcore, I think it's important for us to be more focused on driverlog
16:24:03 since it'll be the source of truth
16:24:08 thingee: can you add a new column in driverlog for the recheck command?
16:24:10 according to the marketplace
16:24:56 thingee: Just to make sure, you're talking about this one? http://stackalytics.com/report/driverlog?project_id=openstack%2Fcinder
16:25:48 smcginnis: I have no affiliation with stackalytics, nor do I trust it.
16:25:53 smcginnis: I'm talking about https://github.com/openstack/driverlog/blob/master/etc/default_data.json
16:25:55 thingee: :)
16:26:09 So there are at least three places to look. :[
16:26:30 xyang: There is a ci field. Not familiar with it, though: https://github.com/openstack/driverlog/blob/master/etc/default_data.json#L164
16:26:30 But I think stackalytics pulls from the driverlog.
16:26:54 yeah, I thought those were one and the same
16:27:02 But I see driverlog and this wiki as slightly different.
16:27:19 There are companies where the contact for the driver is different from the contact for the CI.
16:27:40 fyi stackalytics will soon be hosted/supported directly by openstack.org
16:27:44 So in that respect, I'm fine keeping two different places.
16:27:47 smcginnis: because they were updated by different people :)
16:28:12 IIRC a CI should always post a link to its wiki page when commenting on a change. Shouldn't the recheck command be documented in there? (see http://docs.openstack.org/infra/system-config/third_party.html#requirements)
16:28:24 smcginnis: the wiki was probably set up initially by the doc team and is now updated by others
16:28:44 kaisers: True, that is listed as a requirement.
16:29:01 I don't think it's widely followed currently, though.
16:29:07 The advantage is that the link should normally be posted with each comment from the CI.
16:29:11 smcginnis: ok :)
16:29:29 Not that they shouldn't. :)
16:30:22 smcginnis: should not? "All comments from your CI system must contain a link to the wiki page for your CI system."
16:30:28 The official third party documentation does point here: https://wiki.openstack.org/wiki/ThirdPartySystems
16:30:37 smcginnis: oops :)
16:30:42 kaisers: Not saying they shouldn't. Just pointing out they currently don't.
16:30:57 maybe we should enforce it?
16:30:57 smcginnis: ack
16:31:06 Yeah, we probably should.
16:31:18 CI does need some attention for sure.
16:31:47 So how about this: I'll put a note in the first wiki page pointing everyone to https://wiki.openstack.org/wiki/ThirdPartySystems
16:32:04 And we start making sure at least that one has all the info we need.
16:32:11 And start deprecating the other one.
16:32:23 And maybe driverlog can be updated to point there for more details at some point.
16:32:52 #action CI maintainers to update CI info with recheck trigger details.
16:32:55 #link https://wiki.openstack.org/wiki/ThirdPartySystems
16:33:28 Make sense?
16:33:33 +1
16:33:37 sounds good to me
16:33:47 +1
16:34:15 Good. Then hopefully we can get rid of one of the multiple locations.
16:34:32 I'll update the Cinder wiki that leads to the old one.
16:34:45 #action smcginnis to update third party CI links on Cinder wiki.
16:35:24 #topic Open discussion
16:35:31 The floor is open...
16:36:08 we've set up a block of rooms for the January mid-cycle
16:36:18 Sorry!
16:36:24 you can reserve via this link:
16:36:26 tbarron: Any deadline for booking?
16:36:27 I meant to cover that in announcements!
16:36:35 http://hiltongardeninn.hilton.com/en/gi/groups/personalized/R/RDUSPGI-NET-20160125/index.jhtml
16:36:42 tbarron: Thank you for arranging that.
16:36:42 deadline is January 4
16:37:10 after that, the negotiated rate of $130/night may not be available.
16:37:50 Also a reminder for folks to add your name to the etherpad if you are planning on attending.
16:38:00 Physical or virtual, just so we know what to expect.
16:38:02 #link https://etherpad.openstack.org/p/mitaka-cinder-midcycle
16:38:51 Currently they are blocking off 30 rooms, but we can adjust the quota if it starts filling up :-)
16:38:51 And thank you Pure and IBM for signing up to host dinners a couple of the nights.
16:40:27 Any other topics?
16:40:57 So I recently started contributing and haven't been around for a midcycle; what does attending virtually look like?
16:41:09 dulek: and anyone coming from abroad: I think you can reserve and then cancel later (up to one day prior) without penalty.
16:41:12 ntpttr: We usually get a google hangout going.
16:41:13 that's all
16:41:29 tbarron: Oh, that's cool. I'll talk with my management.
16:41:32 ntpttr: We might video stream too, like we did at the summit.
16:41:41 smcginnis: all right, sounds good, thanks
16:41:55 ntpttr: Not as good as being there, but I think it works OK.
16:41:56 +1 for video streaming -- google hangouts does NOT scale well
16:41:57 smcginnis: yeah, the youtube channel was definitely nice
16:42:08 ntpttr: https://www.youtube.com/channel/UCJ8Koy4gsISMy0qW3CWZmaQ
16:42:10 If hemnafk or I can attend, we'll bring the recording devices, etc.
16:42:26 OK, good feedback. I'll see if our AV club can set something up again. (hemna) :)
16:42:39 kmartin: Awesome, thank you.
16:43:03 Alright, guess we're done here. Thanks everyone.
16:43:12 Thanks
16:43:14 #endmeeting