16:00:26 #startmeeting containers
16:00:26 Meeting started Tue Mar 28 16:00:26 2017 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:28 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:30 The meeting name has been set to 'containers'
16:00:33 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2017-03-28_1600_UTC Our Agenda
16:00:37 #topic Roll Call
16:00:41 Adrian Otto
16:00:42 o/
16:00:45 o/
16:00:47 Madhuri Kumari
16:00:49 Ton Ngo
16:00:50 Spyros Trigazis
16:00:51 Jaycen Grant
16:00:51 Corey O'Brien
16:01:10 hello vijendar_ hieulq_ mkrai_ tonanhngo strigazi jvgrant_ and coreyob
16:02:05 hello juggler
16:02:08 o/
16:02:12 o/
16:02:32 o/
16:02:41 hello randallburt and jasond
16:02:57 o
16:02:59 o/
16:03:01 o/
16:03:02 hello Drago
16:03:23 hi swatson_
16:03:40 let's begin.
16:03:43 #topic Announcements
16:03:45 (none)
16:03:45 yatin karel
16:04:01 any announcements from team members?
16:04:04 hi yatinkarel
16:04:33 I don't have an announcement but did want to ask about the OSC vote
16:05:02 swatson_: okay, we can touch base on that
16:05:04 #topic Action Items
16:05:04 (none)
16:05:21 #topic OSC command name discussion
16:05:25 swatson_ I don't like coe but it works for me and my team
16:05:43 swatson_ from my side it's +1
16:05:50 yikes
16:05:53 * adrian_otto looking for the link to the email thread about this
16:05:55 but better than nothing
16:06:02 randallburt exactly
16:06:18 kerberos \o/
16:06:36 feel the same way. I don't like it but I can't think of much better with the limitations we have
16:06:49 +1 for coe
16:07:02 #link http://lists.openstack.org/pipermail/openstack-dev/2017-March/114640.html My email yesterday asking for us to express a preference between two options
16:07:03 Sounds like a basic consensus then
16:07:13 coe +1 for me
16:07:25 +1 for coe
16:07:44 Going off the latest ML message, do we keep "cluster" for the commands too? Or drop it?
16:07:46 I have started hearing "container orchestration" from other contexts, so it may be becoming commonly used now
16:08:03 e.g. keep it as "openstack coe cluster create..." or go with a simplified "openstack coe create..."
16:08:21 swatson_, then what about ct
16:08:29 right, keep it imo
16:08:35 yatinkarel: "openstack coe template create..."
16:08:52 and "openstack coe ca show/sign", etc.
16:09:00 I'd rather it be explicit about the object being manipulated
16:09:07 I'm on the fence, as it's our main resource
16:09:09 I think we should keep resource names the same as we use now
16:09:25 yatinkarel: agreed
16:09:29 nova list is nice though
16:09:32 vs nova server list
16:09:41 openstack server list
16:09:45 the question is whether to keep the term "cluster" in the openstack command or not
16:09:56 my gut says yes, keep it in.
16:10:10 my gut shrugs
16:10:38 it's possible that we could start with the word cluster, and later drop it if we find it burdensome to use
16:10:46 +1
16:10:52 +1
16:11:04 +1
16:11:09 adrian_otto: sounds good
16:11:15 it could still alias back for compatibility if we make that decision down the road
16:11:17 +1
16:11:26 +1
16:11:31 +1
16:11:34 +1
16:11:45 +1
16:11:52 +1
16:11:58 +1
16:12:11 Alright, I'll update my reviews for "openstack coe cluster..." away from "openstack infra cluster..."
16:13:02 thanks everyone.
16:13:08 Any opposing viewpoints to consider before releasing swatson_ to change that?
16:13:43 ok, thanks.
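For reference, the command forms the team converged on above would look roughly like the following; this is only a sketch of the naming discussed in the log, and the placeholder arguments and flags are illustrative rather than confirmed syntax:

    openstack coe cluster create <name> --cluster-template <template>
    openstack coe cluster list
    openstack coe template create <name> ...
    openstack coe ca show <cluster>
    openstack coe ca sign <cluster> <csr>

The key decision is that "coe" replaces the earlier "infra" proposal, and the resource names (cluster, template, ca) stay explicit rather than being dropped.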
16:13:57 #topic Blueprints/Bugs/Reviews/Ideas
16:14:05 Essential Blueprints
16:14:11 #link https://blueprints.launchpad.net/magnum/+spec/flatten-attributes Flatten Attributes [strigazi]
16:15:07 I'm finishing, team, sorry for this delay. Cleaning UTs
16:15:32 strigazi: any input needed from the team on this?
16:15:57 team: any discussion or questions on this work item?
16:15:59 not right now
16:16:14 ok, will advance to the next...
16:16:20 #link https://blueprints.launchpad.net/magnum/+spec/nodegroups Nodegroups [Drago]
16:16:34 Still nothing from me. jvgrant_?
16:16:48 been on vacation the last week so no updates from me
16:17:02 ok, last one is...
16:17:04 #link https://blueprints.launchpad.net/magnum/+spec/cluster-upgrades Cluster Upgrades [strigazi]
16:17:26 I gave some input on this in the driver-resource spec
16:18:10 I don't have anything else
16:18:11 strigazi: thanks for the input, I'll take a look at that today
16:18:28 About NGs
16:19:01 Do you think we can start touching the heat backend and add a POC with jinja?
16:19:30 The most obvious use case for us is AZs
16:19:38 What do you think?
16:20:14 strigazi: to clarify, you're suggesting we pick a driver to do this in, as a first step toward Nodegroup implementation?
16:20:37 to support availability zones
16:20:46 yes,
16:21:02 There is a dedicated bp for this on me :)
16:21:36 https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones
16:21:49 #link https://blueprints.launchpad.net/magnum/+spec/magnum-availability-zones Availability Zones Feature
16:22:01 Does it make sense?
16:22:36 Probably with the use of labels
16:22:45 yes
16:23:02 so the swarm driver would be where you'd try it first?
16:23:20 adding the AZ list to the CT first
16:23:45 why predefine the list?
16:24:03 seems that could be a parameter with defaults supplied by the driver
16:24:14 or is that maybe what you meant?
16:24:18 yes, it requires some assumptions, like this % of nodes in one AZ and that in the other, or it can be a list
16:24:41 a list of percentages
16:25:36 in the basic implementation allow two AZs but start building resource groups with jinja
16:27:06 team?
16:27:07 sounds fine to me. My only guidance is to try not to bake the semantics into the CT. Keep that in the driver's config.
16:27:31 we can see about generalizing it after we're tried it in a single driver
16:27:43 s/we're/we've/
16:27:55 I could use some help because flatten-attrs is my priority
16:28:18 need help writing the spec?
16:29:10 I would start from a proof of concept implementation in parallel
16:29:27 in parallel with the spec
16:29:37 great
16:30:13 ok, when I have something I'll oing you
16:30:23 ok
16:30:31 s/oing/ping
16:31:02 Other Work Items
16:31:08 any others from the team?
16:31:30 if not, I'll advance to Open Discussion
16:31:38 #topic Open Discussion
16:31:53 I want your input on something
16:32:34 I started a swarm-mode driver some time ago which works fine, and we deployed it last week.
16:33:05 There is an experimental ci for it too, but no tests yet since it relies on the new python-docker client
16:33:12 The question is
16:34:07 Do we continue to maintain the old swarm? Maybe replace it with swarm-mode and make the old swarm -> swarm-legacy?
16:34:11 How can you have a ci without tests? I'm confused.
16:34:52 adrian_otto The ci runs and always fails since it tries to run tests that do not exist
16:35:02 is there a good reason to keep the legacy driver at all?
16:35:29 Not for me, but someone may have users of it?
16:36:02 is there an upgrade path from the current driver to the new one?
16:36:07 no
16:36:20 not that I know of
16:36:25 I'm thinking....
16:36:34 it should still work
16:37:02 because swarm mode should adopt the running containers
16:37:16 and the legacy swarm driver had no concept of services or anything swarm specific that I can think of
16:37:34 I haven't tried, and swarm mode doesn't even have etcd
16:37:57 I don't think that it will read etcd to import containers
16:38:00 that does not matter
16:38:15 etcd only holds the information about the cluster membership
16:38:27 the actual list of containers is still managed by docker
16:38:34 ok
16:38:41 say you had a two node cluster...
16:39:02 and in swarm you run "docker ps"
16:39:10 it will hit the apis of both servers, and combine the results in the resulting list
16:39:37 the etcd is used to determine where those API calls are routed.
16:40:13 so I think it could be upgradable
16:40:43 I'll have a look but not sure if it's worth it
16:41:02 in which case it could still be named "swarm"
16:41:32 if it's upgradable?
16:42:05 we might hit problems with different versions of devmapper somehow corrupting the /var/lib/docker contents
16:42:27 depending on how old the swarms are that we try to upgrade
16:43:02 I've had a few docker version upgrades that went horribly wrong
16:43:17 requiring me to discard /var/lib/docker and start over
16:43:38 That doesn't sound fun
16:43:48 no, but that has not happened recently
16:44:07 it's usually after a kernel upgrade that had a devmapper upgrade with it
16:44:14 after that, docker was busted.
16:44:27 the last few were fine
16:45:13 has anyone else seen problems going from swarm to 1.13+ with swarm mode?
16:46:15 strigazi: seems the rest of the team fell asleep ;-)
16:46:21 still here :)
16:46:49 :) me too
16:46:53 I'll continue with swarm-mode and we'll see how it goes then
16:47:23 are there any pre-planning etherpads out yet for:
16:47:26 #link https://www.openstack.org/summit/
16:47:27 maybe we can find a volunteer to assist with putting the tests together for that
16:47:45 juggler: not yet
16:47:45 I was writing about this just now
16:47:54 *the tests
16:47:56 * randallburt startles awake
16:48:03 adrian_otto: thanks
16:48:15 lol randallburt
16:48:25 :)
16:48:39 maybe we can wrap a little early today?
16:48:46 sure
16:48:56 no problem
16:49:33 Thanks everyone for attending today. Our next meeting will be on 2017-04-04 at 1600 UTC. See you then.
16:49:38 #endmeeting
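To make the availability-zone idea discussed under Blueprints more concrete, a rough illustration of the jinja-templated Heat approach strigazi floats is shown below. This is only a sketch under assumptions, not an agreed design; the parameter names (availability_zones, node_counts) and the nested template file (swarm-node.yaml) are hypothetical:

    {% for az in availability_zones %}
      swarm_nodes_{{ az }}:
        type: OS::Heat::ResourceGroup
        properties:
          count: {{ node_counts[az] }}      # e.g. derived from a list of percentages, per the discussion
          resource_def:
            type: swarm-node.yaml           # hypothetical nested template for one cluster node
            properties:
              availability_zone: {{ az }}
    {% endfor %}

The template itself would stay in the driver, with the AZ list supplied as a parameter (possibly via labels), keeping the semantics out of the cluster template as suggested in the meeting.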