15:59:32 #startmeeting kolla
15:59:33 Meeting started Wed Sep 27 15:59:32 2017 UTC and is due to finish in 60 minutes. The chair is inc0. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:35 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:59:37 The meeting name has been set to 'kolla'
15:59:49 #topic w00t!
15:59:56 \o/ wooo!
16:00:01 wO0t!
16:00:10 w00t!\0/
16:00:17 o/
16:00:24 woot
16:00:58 o/
16:01:10 morning
16:01:12 \o\ /o/ -o-
16:01:19 evening guys
16:01:56 0/
16:03:05 #topic announcements
16:03:14 I don't have any :)
16:03:18 community?
16:03:21 o/
16:03:28 o/
16:03:37 guess not
16:03:45 #topic ptg recap continued
16:03:54 afair we didn't do kolla-k8s last time
16:04:24 that's what I remember
16:04:31 so the biggest one was https://etherpad.openstack.org/p/kolla-queens-ptg-k8s-release-roadmap
16:04:37 we revised the 1.0 requirements
16:05:31 I won't dig into that today
16:06:06 Should we make each of those main bullets into a blueprint? or?
16:06:21 we also met with the tripleo guys and discussed cooperation - there seems to be lots of potential there, but tripleo needs to design their k8s model
16:06:36 britthouser, well, we don't have to
16:06:48 sounds good.
16:07:00 the process is for us, not we for it
16:07:17 I, for one, like the lightweight form of etherpads
16:07:30 anyway, that's what we need to do over the next 6 months
16:07:43 since there is no value in releasing 1.0 prior to the release of openstack
16:07:51 (well, less than 6 months)
16:08:00 inc0, so, we cannot share our design with tripleo?
16:08:01 bottom line, we're aiming for Rocky
16:08:05 Queens*
16:08:25 duonghq, well, some of it yes, but they aren't interested in helm
16:08:55 how about our k8s resources?
16:09:14 helm renders them
16:09:27 our k8s resources are represented as helm charts
16:09:47 understood
16:09:48 they can use this knowledge tho, and I think they will
16:09:55 anyway, let's move on
16:10:29 #topic ansible-become and keystone upgrade (duonghq)
16:10:34 shoot duonghq
16:10:47 ah, nice, thanks inc0
16:11:12 first, about the ansible-become bp, it's from the last 2 cycles
16:11:39 so, since it's the beginning of Q, can somebody let it merge, and we will fix issues (if any) soon
16:11:52 since it's not a small change
16:11:59 #link https://review.openstack.org/#/c/398682/
16:12:02 it's the 1st ps
16:13:26 can I get some opinions?
16:14:12 I'll review it first thing after the meeting
16:14:23 thanks inc0
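[editor's note: for readers without context, the ansible-become blueprint moves kolla-ansible toward connecting as an unprivileged user and escalating privileges only where needed. A minimal sketch of the pattern; the task shown is illustrative and not taken from the actual patch:

    # Connect as a regular user; escalate only for the tasks that need root.
    - name: Ensure the kolla configuration directory exists
      file:
        path: /etc/kolla
        state: directory
        owner: root
        group: root
        mode: "0755"
      become: true    # per-task privilege escalation instead of connecting as root

]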
16:14:27 about my 2nd topic,
16:14:53 implementing the upgrade mechanism for Keystone
16:14:54 https://review.openstack.org/#/c/425446/
16:15:19 this ps does not actually reach zero downtime (due to containers restarting at the same time)
16:15:42 but it's the 1st step to implement the mechanism for keystone,
16:16:06 the 2nd step is to create a new repo for ansible plugins
16:16:11 and apply a new strategy
16:16:27 I hope that I can push it soon in Q
16:16:28 or document using --forks 1
16:16:55 inc0, or integrate it in our CLI?
16:16:58 as a default option
16:17:06 forks 1 should never be default
16:17:22 it will make deployment at scale incredibly long
16:17:46 is there a way to make it default for only upgrade playbooks?
16:18:02 we don't want to make it default there too
16:18:05 so, I think step 1.5 is to add documentation to use --forks 1 (before we can figure out a better solution)
16:18:16 because for this single use case it will make upgrades well... incredibly long
16:18:16 it will make compute upgrades long too
16:18:34 and time is even more critical in an upgrade scenario
16:18:40 yeah I guess not all upgrades need to be sequential like that.
16:19:00 only this single service really
16:19:03 sure
16:19:10 also the downtime we're looking at is sub-second
16:19:32 hmm, we need it for any service which implements a zero-downtime mechanism by itself
16:19:42 technically we can just drain connections for this task
16:20:03 if we implement haproxy connection draining
16:20:03 inc0, you mean we do it at the haproxy layer?
16:20:06 right
16:20:16 that will be apparent zero downtime
16:20:34 rabbitmq will handle non-api services
16:20:53 so we need connection draining, a little buffering, and we get zero downtime?
16:21:01 between draining and graceful shutdown we should be good
16:21:18 we still have small time windows when we do not have any api service available
16:21:30 and haproxy doesn't have retry ability
16:21:34 Yeah, but we need to allow the admin socket in haproxy
16:21:49 Sec guys will not like that
16:22:11 during service restarts you'll always have this risk
16:22:38 well, let's do research ok?
16:22:47 nice
16:22:57 I mean for full zero downtime we'll need that anyway
16:23:12 to make sure haproxy won't forward requests to an api while it's restarting
16:23:36 there's always lag between a service going down and haproxy noticing that
16:23:49 in other words, we need haproxy to hold the request for a while
16:24:06 and with our speed the whole upgrade of a container can be faster than this notice period
16:24:47 so, I'll try and get you some measurements
16:24:55 thanks Duong
16:25:24 btw, from the Boston summit, I have a small demo of a buffering layer
16:25:47 ok, that's all from me
16:25:49 :)
16:26:09 after finishing with keystone, I'll move to neutron
16:26:44 the keystone change worked, duonghq and I tested it during the PTG
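[editor's note: a sketch of the rolling-restart idea discussed above, combining one-host-at-a-time execution with HAProxy connection draining. The backend/server names, container name, and admin socket path are illustrative assumptions, not kolla's actual configuration:

    - hosts: keystone
      serial: 1    # restart one API host at a time; a similar effect to --forks 1, but scoped to this play
      tasks:
        - name: Drain this host's keystone backend in haproxy
          shell: >
            echo "set server keystone_back/{{ inventory_hostname }} state drain"
            | socat stdio /var/lib/kolla/haproxy/haproxy.sock
          delegate_to: "{{ item }}"
          with_items: "{{ groups['haproxy'] }}"

        - name: Restart the keystone container
          command: docker restart keystone

        - name: Put the backend back into rotation
          shell: >
            echo "set server keystone_back/{{ inventory_hostname }} state ready"
            | socat stdio /var/lib/kolla/haproxy/haproxy.sock
          delegate_to: "{{ item }}"
          with_items: "{{ groups['haproxy'] }}"

"set server ... state drain" needs haproxy 1.6+ with the stats socket exposed at admin level, which is the security concern raised above.]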
16:26:57 #topic Doc restructure (krtaylor)
16:27:07 krtaylor you have the floor
16:27:11 Thanks inc0
16:27:36 Quick update: I created an etherpad and blueprint for the doc restructure work this cycle
16:27:46 #link https://etherpad.openstack.org/p/kolla-queens-doc-restructure
16:27:48 I think it's good to merge as it is now
16:27:59 #link https://blueprints.launchpad.net/kolla/+spec/queens-doc-restructure
16:28:11 I also sent an email to the list last week with all this info
16:28:33 and there is a patch that includes all the ToC changes that we discussed at the PTG
16:28:43 #link https://review.openstack.org/#/c/504801
16:29:02 that is looking pretty good
16:29:25 next step is to get interested folks to jump into the etherpad and put their name next to a section they'd like to rework/refresh or add a section
16:29:38 Else I'll just keep chipping away at it
16:30:06 right, let's all help Kurt, I can't stress enough how important it is
16:30:14 count me in.
16:31:13 anything else krtaylor_ ?
16:31:13 +1
16:31:19 Cool!
Thanks everyone - there is some good work that already exists
16:31:28 +1
16:31:37 so a few more remarks from me
16:31:59 as soon as we get our images published to dockerhub
16:32:02 That we need to go through, but if everyone can review and get what we have closed out so we can move forward, or put notes in the etherpad
16:32:20 Anyway, that's about all I have
16:32:21 I'll rework the quickstart to skip the build part entirely (for kolla-ansible)
16:32:46 I haven't touched koala-ansible at all yet
16:33:09 makes sense.
16:33:23 kolla-ansible that is
16:33:41 ok, can we move on?
16:33:55 Fine with me, thanks inc0
16:34:04 #topic rabbitmq clustering crisis
16:34:13 krtaylor_: for kolla-ansible https://review.openstack.org/#/c/507004/
16:34:15 I took the liberty of injecting my own topic ;)
16:34:22 so we need to fix rabbitmq
16:34:30 it was misbehaving a lot lately
16:34:38 and rabbitmq clusterer is deprecated upstream
16:34:53 clustering is now part of rabbitmq core
16:35:04 which means we should rework our rabbitmq deployment
16:35:18 can I have a volunteer to take this one?
16:35:53 :(
16:36:36 I will look into it
16:36:46 our current state https://www.youtube.com/watch?v=tgj3nZWtOfA
16:36:53 thanks coolsvap
16:37:36 ok, that's it from me :)
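[editor's note: the replacement for the deprecated rabbitmq_clusterer plugin is RabbitMQ's built-in peer discovery, part of core since 3.7. A sketch of what the reworked deployment could drop into rabbitmq.conf; the node names and destination path are illustrative assumptions, not kolla's actual layout:

    - name: Configure built-in classic-config peer discovery
      copy:
        dest: /etc/kolla/rabbitmq/rabbitmq.conf
        content: |
          cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
          cluster_formation.classic_config.nodes.1 = rabbit@control01
          cluster_formation.classic_config.nodes.2 = rabbit@control02
          cluster_formation.classic_config.nodes.3 = rabbit@control03

]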
16:37:44 #topic open discussion
16:37:50 spsurya__, I added that to the etherpad, thanks!
16:38:04 I have one thing.
16:38:07 * jamesbenson run away! ...grabs the holy grenade...
16:38:24 shoot britthouser
16:38:49 I remember a while back some teams moved their meetings into their team channel. I wondered what folks here thought about doing that? So we'd run this meeting in #openstack-kolla instead of #openstack-meeting-4
16:39:53 easier to not miss, or to read meeting notes from the logs
16:40:06 agreed, +1 from me
16:40:08 I log all channels/queries to a local hdd
16:40:21 which teams do that now?
16:40:36 so that way even if I was not in a meeting I can read the log the next day without checking where to find it
16:41:04 Does our channel have the meeting (logs) bot added?
16:41:07 I'd have to check @inc0. It was months ago, when the time slots in the meeting channels were getting full.
16:41:18 -1 cause it's not separate, but +1 cause I can catch up in slack if need be
16:41:18 please do britthouser
16:42:05 having a separate meeting room has its merits
16:42:11 I have a qq as well. I think last week or the week before we talked about external ceph. I'm working on an ansible script but wondering if anyone else was doing the same?
16:42:15 Ok will do. Something to think about, don't have to decide today. Just wanted to put it out there for our subconsciouses to mull over.
16:42:16 I would prefer to have a meeting log, but don't care what channel it is in
16:42:19 people won't join in the middle
16:42:20 britthouser: I think one of the advantages of having official meeting channels is people are kinda subscribed to them, so if we need someone during the meeting there is more chance we will find them in official meeting channels than looking for them
16:42:46 all meetings are logged
16:42:48 bye
16:43:16 that's one thing, but I am fine with it being held in the openstack-kolla channel
16:43:25 http://eavesdrop.openstack.org/meetings/kolla/
16:43:26 krtaylor_: http://eavesdrop.openstack.org/irclogs/%23openstack-meeting-4/
16:43:37 you always have
16:43:41 +1 as long as we can start meeting/endmeeting etc
16:43:52 you have this link in the meeting wiki
16:44:17 Yes, as long as we'd keep that in -kolla, I'm +1
16:44:34 I think we do
16:44:38 anyway, that's for later
16:44:53 * britthouser yields the floor
16:45:02 inc0: http://lists.openstack.org/pipermail/openstack-dev/2017-June/118899.html
16:45:03 fyui
16:45:19 fyi*
16:45:54 thanks coolsvap, that's helpful
16:46:16 anyone else have anything for open discussion?
16:46:46 I think last week or the week before we talked about external ceph. I'm working on an ansible script but wondering if anyone else was doing the same?
16:47:37 sorry, you said that before jamesbenson :(
16:47:41 well, there is ceph-ansible
16:47:43 no worries
16:47:59 https://github.com/ceph/ceph-ansible
16:48:17 okay, I know I had issues with ceph-ansible, I thought I remembered others mentioning the same. But if we are moving that way, no problem :-)
16:48:18 but people report mixed results
16:48:29 we don't know yet
16:48:40 current plan is that we upgrade to L
16:48:44 yeah, this is an ansible ceph-deploy method, not ceph's ansible deploy
16:48:45 so queens is L
16:49:06 and then, before Q is released, we decide to either keep ceph deployment or deprecate it
16:49:09 yeah, I have L currently as external, it's very nice ^_^
16:49:21 if we deprecate it, removal will coincide with the next LTS ceph release
16:49:30 so M wouldn't be in Kolla any more
16:50:18 on one hand I'd like not having ceph, on the other we need 1. an easy and reliable ceph deployment mechanism
16:50:26 2. a migration path from our current deployment
16:50:26 okay, so a few months before we figure out anything. Are the ceph guys working with us on incorporating it at all?
16:50:43 yes, we talked to them and we'll work together to figure it out
16:50:48 cool
16:51:03 Pike isn't going to get L at all though, correct?
16:51:11 no
16:51:18 well, since you use external
16:51:22 it doesn't really matter for you
16:51:30 we do lots of deploys. ;-)
16:51:32 but kolla-deployed pike is Jewel
16:51:44 ok, sorry ;)
16:51:56 what I wanted to say is the client side shouldn't really matter
16:52:03 yeah
16:52:24 that's all for me then
16:52:31 just wanted that status update
16:52:35 thanks
16:52:41 ceph, in my book, is as important as mariadb
16:52:46 or almost as important
16:53:00 both definitely play critical roles.
16:53:04 and I don't see us getting rid of mariadb deploy anytime soon
16:53:15 so I wouldn't be surprised if we actually keep the ceph
16:53:31 a streamlined deployment experience is important
16:54:13 ok, anyone else have a topic?
16:55:56 guess not
16:55:59 thank you all
16:56:03 #endmeeting kolla
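[editor's note: for reference, the external-ceph setup discussed above maps to kolla-ansible roughly as follows: disable the built-in ceph deployment in globals.yml and point the service backends at the existing cluster. A sketch based on the Pike-era external-ceph guide; verify option names against your release:

    enable_ceph: "no"
    glance_backend_ceph: "yes"
    cinder_backend_ceph: "yes"
    nova_backend_ceph: "yes"

The external cluster's ceph.conf and keyrings then go under /etc/kolla/config/<service>/ so the containers can reach it.]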