16:00:58 <adrian_otto> #startmeeting containers
16:00:58 <openstack> Meeting started Tue Jan  6 16:00:58 2015 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:02 <openstack> The meeting name has been set to 'containers'
16:01:05 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-01-06_1600_UTC Our Agenda
16:01:11 <thomasem> o/
16:01:13 <adrian_otto> #topic Roll Call
16:01:16 <adrian_otto> Adrian Otto
16:01:18 <thomasem> Thomas Maddox
16:01:25 <adrian_otto> hi thomasem!
16:01:26 <apmelton> Andrew Melton
16:01:27 <sdake> containers containers bobainers \p\
16:01:27 <diga_> Digambar Patil
16:01:30 <jay-laua-513> jay-lau-513
16:01:38 <thomasem> Hey adrian_otto!!
16:01:47 <hongbin> Hongbin Lu
16:01:55 <sdake> what is with the 513 jay lau
16:01:55 <adrian_otto> good day everyone
16:02:15 <adrian_otto> my guess is he is a racer, and 513 is his race number
16:02:15 <sdake> why not "jay-lau" :)
16:02:29 <sdake> now that would make sense
16:02:50 <adrian_otto> the fastest Lau in the west
16:03:00 <jay-lau-513> sdake my lucky number ;-)
16:03:00 <adrian_otto> except he lives in the East
16:03:03 <sdake> or east as it may be
16:03:36 <adrian_otto> he had 512 and then he incremented it.
16:03:49 <adrian_otto> anyway, glad to have you all here. let's proceed to Announcements
16:03:50 <sdake> 513 is a prime number I think
16:04:02 <adrian_otto> #topic Announcements
16:04:11 <adrian_otto> 1) Welcome Jay Lau to magnum-core!
16:04:26 <adrian_otto> he is our most recent addition
16:04:26 <jay-lau-513> thanks, my honor to join this team
16:04:44 <sdake> team growing fast
16:04:47 <sdake> lots of interest in containers
16:04:48 <adrian_otto> we are honored to have you as well
16:05:05 <adrian_otto> we also added another core reviewer:
16:05:10 <adrian_otto> 2) Welcome Motohiro/Yuanying Otsuka to magnum-core!
16:05:43 <thomasem> Welcome!
16:05:55 <sdake> he may be sleeping
16:06:01 <sdake> I think its middle of night for him now
16:06:15 <adrian_otto> both of these new reviewers have been very active. Traditionally designating cores takes a few months, but in these early stages, I'm willing to propose additions sooner.
16:06:41 <adrian_otto> anyone else interested in serving as a core reviewer may see me for guidance
16:06:51 <adrian_otto> Any other announcements from team members?
16:07:43 <adrian_otto> ok, in December we set a January target date for our first tagged release. Who remembers the date?
16:08:04 <diga_> 13th jan
16:08:11 <adrian_otto> ding!
16:08:32 <adrian_otto> So there is one work week remaining before that date
16:08:49 <adrian_otto> during a later section I'll re-raise this for further discussion.
16:09:07 <adrian_otto> but one quick topic first
16:09:08 <adrian_otto> #topic Blueprint/Task Review
16:09:14 <adrian_otto> #link https://review.openstack.org/144203 Enable tests for db objects
16:09:32 <adrian_otto> do we have https://review.openstack.org/#/q/owner:abhishek%2540cloudscaling.com+status:open,n,z present today?
16:10:00 <adrian_otto> I think he's absent, so maybe I'll follow up on this later
16:10:14 <adrian_otto> So, on the topic of blueprints, tasks, and bugs
16:10:26 <adrian_otto> first of all, I want to compliment the team on remarkable progress
16:10:42 <sdake> yay - we almost got launching pods/services in a micro os in a bay :)
16:10:48 <adrian_otto> I feel like the commit throughput is way up, and there are lots of solid commits hitting the repo
16:11:45 <adrian_otto> so take a moment to look around you and recognize terrific progress from those around you, and know that we appreciate your efforts.
16:12:17 <adrian_otto> next, let's identify any must-have tasks that should be completed by Jan 13
16:12:31 <adrian_otto> and I will help to make sure any of them that need a bird dog have one
16:12:51 <adrian_otto> thoughts on must-have work that remains pending or in-progress?
16:13:01 <sdake> we need to be able to specify minion servers to kubectl commands
16:13:05 <sdake> I think there is a review up for that
16:13:21 <adrian_otto> thanks sdake. Let's take a moment to look for that.
16:13:24 <sdake> we need to be able to pass the pod/service data to kubectl commands
16:13:30 <sdake> I think there is a review up for that too
16:13:52 <adrian_otto> #link https://blueprints.launchpad.net/magnum Our Blueprint List
16:13:52 <sdake> I've just been operating under the principle we won't actually get container scheduling sorted out for milestone #1
16:14:10 <adrian_otto> sdake: yes, that's acceptable and appropriate
16:14:16 <sdake> but I think if we have kube working that should be good enough
16:14:52 <sdake> Ideally we need one of two things 1) a heat template that works for ironic based upon larsks repo or 2) updated documentation that shows how to deploy in virtual environments as opposed to ironic
16:15:06 <sdake> I think #1 is likely not to occur in the next week
16:15:41 <sdake> so I think we should set another limit then, which is we only intend milestone #1 to launch on virtual machines, not integrated with Ironic
16:15:51 <jay-lau-513> for #1, is it possible that we merge larsks' code to etc/magnum?
16:15:55 <adrian_otto> diga_: you are working on a containerized environment for Magnum that would suit #2 above, correct?
16:16:08 <diga_> yes
16:16:23 <sdake> larsks code is for virt only, not for ironic
16:16:30 <sdake> we need two templates, one for each environment type
16:16:31 <jay-lau-513> sdake or just write some readme telling end user where to get the template?
16:16:34 <sdake> the network is different
16:16:49 <sdake> jay-lau-513 I'd prefer to merge it into our repo and the license is compatible
16:16:54 <sdake> then we can just keep it up to date from larsks repo
16:17:22 <adrian_otto> can we identify two Stackers on the magnum team to co-own that responsibility?
16:17:34 <sdake> although atm it works great, I doubt there will be many changes, except possibly to handle ironic
16:17:39 <jay-lau-513> ok, I see,
16:17:55 <adrian_otto> as that will require watching the code in the original project
16:18:12 <sdake> I'll take on the docs part, and I'll take on merging new changes from larsks repo
16:18:20 <sdake> but need someone else to do the original copy :)
16:18:27 <adrian_otto> ok, any volunteers to aid sdake?
16:18:47 <diga_> I can help him
16:18:52 <jay-lau-513> I can help sdake
16:19:05 <jay-lau-513> two volunteers ;-)
16:19:15 <sdake> jay a copy with the correct install bits should do the trick
16:19:24 <jay-lau-513> yes
16:19:32 <adrian_otto> ok, perfect, we should be in good shape
16:19:55 <adrian_otto> I know sdake is planning some time away, so feel free to select diga_ or jay-lau-513 as a delegate for this accordingly.
16:20:13 <rprakash> ironic you mean not through nova but direct?
16:20:25 <sdake> I mean through nova, but the ironic network model is different
16:20:32 <sdake> the heat template larsks produced is based upon neutron
16:20:45 <sdake> ironic supports flat networking as 1 model and ovs enabled switches as another model
16:21:05 <sdake> in the case of #1, where most of our users are going to be, we need a template that does flat networking
16:21:12 <rprakash> I see - thanks for that clarification
16:21:55 <adrian_otto> #link https://github.com/larsks/heat-kubernetes heat-kubernetes
16:21:59 <sdake> I'm glad someone understands it, I sure dont :(
16:22:30 <adrian_otto> ^^ for jay-lau-513 and diga_ to reference
16:22:41 <adrian_otto> ok, any other must-haves for Jan 13?
16:22:43 <sdake> so ya, launching pods, launching services, inside a bay, that is a good set of features for milestone #1
16:22:51 <diga_> ok
16:22:57 <sdake> ideally we need a virtual interface to represent the cluster
16:23:04 <sdake> I'm not sure if we have time for that or not
16:23:08 <jay-lau-513> I think that we can also launch replication controllers ;)
16:23:19 <sdake> the way that works is we put a LB in front of every minion
16:23:44 <sdake> that probably requires a new heat template
16:23:49 <adrian_otto> jay-lau-513: indeed we can.
16:24:21 <adrian_otto> sdake, why have an lb in front of a minion?
16:24:34 <diga_> I have two responsibilities   1) setup magnum repo in container   2) heat-kubernetes
16:24:35 <sdake> that way you dont have to figure out which minion to talk to
16:24:37 <jay-lau-513> sdake why one cluster need a lb
16:24:47 <adrian_otto> diga_: yes
16:25:26 <diga_> yep
16:25:32 <adrian_otto> sdake, why do we care which minion is used?
16:25:33 <sdake> I think it would be handy to have 1 IP address represent the entire user-experience for the kubernetes cluster
16:25:52 <sdake> eg: minions 1.1.1.1 minion 2.2.2.2
16:25:53 <adrian_otto> this sounds to me like scheduling logic
16:25:56 <sdake> your app connects to 2.2.2.2
16:25:58 <sdake> 2.2.2.2 dies
16:26:03 <sdake> now your app is busted
16:26:06 <sdake> because it doesn't know about 1.1.1.1
16:26:21 <sdake> that is what the lb fixes
16:26:26 <sdake> 1 IP address for all the minions
16:26:45 <adrian_otto> ok, so you are not talking about a control plane, you are talking about the data plane for availability of apps
16:26:56 <sdake> ya, although to set it up is control plane :)
16:27:10 <adrian_otto> I see now, thanks.
16:27:26 <sdake> probably can wait until milestone #2
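The failover behavior sdake describes above can be sketched in a few lines of Python. This is an illustrative simulation only, assuming a health check and the example IPs from the discussion; none of these names are actual Magnum or Kubernetes APIs.

```python
# Sketch of the single-IP load-balancer idea: the app only knows the
# LB address, and the LB forwards to whichever minion is still alive,
# so a dead minion (2.2.2.2 in sdake's example) doesn't break the app.
MINIONS = ["1.1.1.1", "2.2.2.2"]


def pick_live_minion(minions, is_alive):
    """Return the first minion that passes the health check."""
    for ip in minions:
        if is_alive(ip):
            return ip
    raise RuntimeError("no live minions behind the LB")


# If 2.2.2.2 dies, traffic transparently shifts to 1.1.1.1:
alive = {"1.1.1.1": True, "2.2.2.2": False}
print(pick_live_minion(MINIONS, lambda ip: alive[ip]))  # 1.1.1.1
```

In practice this role would be filled by a real load balancer resource provisioned via a new heat template, as noted above.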
16:27:32 * dims__ says o/ a bit late :)
16:27:39 <rprakash> bays controls pods , so you mean lb->minion->bays to data plane is the flow?
16:27:42 <adrian_otto> hi dims__
16:28:02 <sdake> rprakash: a pod, service, or replication controller is launched in a bay
16:28:10 <rprakash> dp pods
16:28:16 <sdake> a bay is a collection of nodes running a micro os such as coreos or atomic
16:28:39 <sdake> one of the bay's nodes is a master (running etcd) the others are minions (running kubeproxy etc)
16:28:54 <sdake> to control the cluster, you use magnum which contacts the bay's master node
16:29:39 <rprakash> got it, it's orchestration from cp to dp pods - thanks
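The bay layout sdake just walked through (one master node running etcd, the rest minions running kube-proxy, with Magnum talking to the master) can be modeled as a small sketch. The `Node` and `Bay` classes and their fields are hypothetical, purely to illustrate the topology; they are not Magnum's actual object model.

```python
# Illustrative model of a bay: a collection of micro-OS nodes where
# exactly one node is the master and the others are minions.
class Node:
    def __init__(self, ip, role):
        self.ip = ip
        self.role = role  # "master" or "minion"


class Bay:
    def __init__(self, nodes):
        self.nodes = nodes

    @property
    def master(self):
        # Magnum controls the cluster by contacting the bay's master.
        return next(n for n in self.nodes if n.role == "master")

    @property
    def minions(self):
        return [n for n in self.nodes if n.role == "minion"]


bay = Bay([Node("10.0.0.1", "master"),
           Node("10.0.0.2", "minion"),
           Node("10.0.0.3", "minion")])
print(bay.master.ip)                    # the node Magnum contacts
print([n.ip for n in bay.minions])      # where pods actually run
```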
16:30:52 <adrian_otto> #link https://blueprints.launchpad.net/magnum/milestone-1 BPs for milestone-1
16:31:08 <adrian_otto> let's take a look at the Delivery column on the above link
16:31:21 <sdake> lots of green :)
16:31:28 <adrian_otto> Indeed!
16:31:36 <adrian_otto> for the ones that are blue, can any be updated?
16:31:49 <adrian_otto> should any be re-scoped?
16:32:20 <sdake> I updated https://blueprints.launchpad.net/magnum/+spec/implement-magnum-bays
16:32:38 <adrian_otto> ideally, I'd like the whiteboards in the open BP's to indicate what work is remaining so I can plan accordingly
16:33:05 <sdake> magnum-backend-docker-* can probably go to milestone #2 I suspect
16:33:10 <adrian_otto> thanks sdake
16:33:11 <sdake> unless they are working now
16:33:37 <adrian_otto> diga_: is there remaining implementation on that BP for milestone-1?
16:34:01 <sdake> I marked https://blueprints.launchpad.net/magnum/+spec/backend-bay-heat-kube as implemented
16:34:15 <diga_> work on both blueprints is completed, I guess
16:34:40 <sdake> can you control docker containers via magnum client?
16:34:41 <dims__> adrian_otto: all the magnum-container-* are implemented, but work only against the docker daemon specified in the magnum.conf
16:34:50 <dims__> sdake: yep
16:34:55 <sdake> sweet
16:35:08 <sdake> well I guess that doesn't need to be rescoped then
16:35:16 <adrian_otto> ok, so do we need a task filed to expand that to work on more daemons?
16:35:21 <dims__> right adrian_otto
16:35:49 <adrian_otto> ok, I'll file a BP for milestone-2
16:35:57 <diga_> https://blueprints.launchpad.net/magnum/+spec/magnum-agent-for-nova
16:36:00 <sdake> adrian_otto can we add to the agenda the creation of the release announcement in etherpad, please - in case I forget
16:36:08 <diga_> this is scoped for ml2
16:36:29 <adrian_otto> yes, let's make an etherpad for drafts now… one moment and I will do that and link it here
16:36:31 <diga_> are we finally going with zaqar or not ?
16:36:42 <sdake> I dont think we need zaqar
16:36:52 <sdake> we already have the list of minion ips
16:36:58 <diga_> ok
16:37:02 <sdake> we can round robin select for scheduling if necessary
16:37:09 <sdake> I think we need to think through the scheduling of the containers though
16:37:13 <sdake> its no easy task
16:37:21 <diga_> ok
16:37:25 <sdake> especially when you throw multi-node networking into the mix
16:37:34 <diga_> ok
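The fallback sdake mentions above, round-robin selection over the known minion IPs, is simple to sketch. This is a hedged illustration, not Magnum code; the `RoundRobinScheduler` name is invented for the example.

```python
from itertools import cycle


# Minimal round-robin selection over a bay's known minion IPs, as a
# stand-in until real container scheduling lands in a later milestone.
class RoundRobinScheduler:
    def __init__(self, minion_ips):
        self._minions = cycle(minion_ips)

    def pick_minion(self):
        """Return the next minion IP in rotation."""
        return next(self._minions)


scheduler = RoundRobinScheduler(["1.1.1.1", "2.2.2.2"])
print(scheduler.pick_minion())  # 1.1.1.1
print(scheduler.pick_minion())  # 2.2.2.2
print(scheduler.pick_minion())  # wraps back to 1.1.1.1
```

As noted in the discussion, real scheduling gets much harder once multi-node networking is in the mix, so this is only a placeholder strategy.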
16:37:47 <adrian_otto> #link https://etherpad.openstack.org/p/magnum-release Where we will draft our release announcement.
16:39:48 <sdake> feel free to help write it folks :)
16:40:01 <sdake> lets spend 5-10 mins - team effort :)
16:40:24 <adrian_otto> #topic Draft Magnum Release Announcement
16:42:44 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/magnum-docker-backend-selection Selection of multiple docker backends
16:42:50 <adrian_otto> ^^ dims__
16:43:00 <dims__> adrian_otto: thanks
16:43:39 <apmelton> adrian_otto: I should be able to help out with that piece
16:44:58 <dims__> apmelton: awesome
16:45:10 <apmelton> along with the magnum bay stuff as well
16:45:25 <apmelton> I'm still getting up to speed with everything that's been going on since I last attended
16:48:27 <thomasem> Same here
16:51:26 <thomasem> Going to have to look at Zaqar.
16:51:35 <dims__> sdake: any pointer to uOS we can add?
16:53:56 <adrian_otto> #topic Open Discussion
16:54:13 <apmelton> adrian_otto: will there be a magnum mid-cycle?
16:54:28 <apmelton> mid-cycle sprint/meetup*
16:56:27 <sdake> ok looking good to me
16:56:42 <sdake> uOS is Fedora Atomic or CoreOS
16:56:48 <sdake> I just call it uOS
16:56:54 <sdake> not sure if it has an official name - lets make one :)
16:58:44 <jay-lau-513> sdake what does the u mean in uOS?
16:58:45 <adrian_otto> ok, time is almost up
16:59:14 <adrian_otto> editing the release announcement will continue after we adjourn, and will be discussed in #openstack-containers
16:59:38 <sdake> well if you dont like the u, delete it ;)
16:59:42 <adrian_otto> our next team meeting will be 2015-01-13 at 2200 UTC
16:59:52 <sdake> thanks folks :)
16:59:56 <adrian_otto> thanks everyone for attending!
16:59:58 <thomasem> cheers!
17:00:04 <adrian_otto> #endmeeting