16:01:19 <strigazi> #startmeeting containers
16:01:20 <openstack> Meeting started Tue Sep 13 16:01:19 2016 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:23 <openstack> The meeting name has been set to 'containers'
16:01:29 <strigazi> #topic Roll Call
16:01:31 <Drago1> o/
16:01:33 <muralia> murali allada
16:01:33 <hongbin> o/
16:01:35 <tonanhngo> Ton Ngo
16:01:39 <jvgrant> Jaycen Grant
16:01:39 <eghobo> o/
16:01:43 <dane_leblanc__> o/
16:01:51 <rpothier> o/
16:01:56 <hongbin> I don't have a stable internet connection today, so strigazi will chair today's meeting
16:02:18 <strigazi> Thanks for joining the meeting Drago1 muralia hongbin tonanhngo jvgrant eghobo dane_leblanc__ rpothier
16:02:23 <strigazi> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-09-13_1600_UTC
16:02:34 <strigazi> #topic Announcements
16:02:44 <strigazi> I have two minor ones
16:03:04 <strigazi> magnum's debian packages have moved to gerrit now
16:03:10 <strigazi> deb-magnum
16:03:17 <strigazi> and deb-python-magnumclient
16:03:35 <strigazi> all contributions can be made there from now on
16:04:04 <strigazi> #link https://review.openstack.org/#/admin/projects/openstack/deb-magnum
16:04:18 <strigazi> #link https://review.openstack.org/#/admin/projects/openstack/deb-python-magnumclient
16:04:25 <strigazi> #topic Review Action Items
16:04:32 <strigazi> hongbin clean up the review queue (WIP: Adrian Otto left comments on the inactive reviews to prompt for actions) DONE
16:04:44 <strigazi> #topic Essential Blueprints Review
16:04:50 <strigazi> 1. Support baremetal container clusters (strigazi)
16:04:55 <strigazi> #link https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support
16:04:59 <strigazi> the split between vm and bm k8s-fedora is done
16:05:07 <muralia> nice
16:05:12 <strigazi> this week I expect to push an update of the drivers spec for the common dir structure and mesos baremetal (which I'm testing)
16:05:20 <strigazi> also I'll update the user-guide for adding a new driver, writing docs is always more difficult than you expect
16:05:33 <tonanhngo> +1 :)
16:05:47 <strigazi> and mkrai is working on bm for swarm
16:05:58 <adrian_otto> o/
16:06:02 <strigazi> questions?
16:06:11 <strigazi> hi adrian_otto
16:06:17 <adrian_otto> hi
16:06:36 <strigazi> next
16:06:40 <strigazi> 2. Magnum User Guide for Cloud Operator (tango || tonanhngo)
16:06:44 <strigazi> #link https://blueprints.launchpad.net/magnum/+spec/user-guide
16:07:02 <tonanhngo> The Scaling section was merged, thanks everyone for the helpful review.
16:07:31 <tonanhngo> I am still working on the Horizon and native client section, will upload patch shortly
16:07:36 <tonanhngo> that's all for now
16:08:00 <strigazi> thanks Ton
16:08:11 <strigazi> 3. COE Bay Drivers (muralia)
16:08:15 <strigazi> #link https://blueprints.launchpad.net/magnum/+spec/bay-drivers
16:08:50 <muralia> I'm still working on tests. unit tests are done. fixing functional tests. this is a lot of work.
16:09:37 <strigazi> Is there something specific that breaks?
16:09:58 <muralia> lots of tests are broken because we need to add a driver mock
16:10:15 <strigazi> ack
16:10:22 <muralia> there were close to 100 unit tests failing because of this
16:10:25 <muralia> i fixed those.
16:10:33 <muralia> now looking at functional tests
16:10:51 <muralia> that's all. just making progress slowly.
16:11:33 <strigazi> thanks
16:11:43 <strigazi> 4. Rename bay to cluster (jvgrant)
16:11:47 <strigazi> #link https://blueprints.launchpad.net/magnum/+spec/rename-bay-to-cluster
16:12:02 <jvgrant> All patches have now been merged!! :)
16:12:15 <adrian_otto> whoot!!
16:12:22 <muralia> nice!
16:12:24 <jvgrant> the only remaining references to bay/baymodel should be for history and backwards compatibility
16:12:24 <strigazi> great!
16:12:50 <jvgrant> Thanks to everyone who helped with the giant reviews
16:13:00 <strigazi> I haven't used bay for more than a week
16:13:03 <strigazi> :)
16:13:04 <tonanhngo> Worth announcing at the next summit
16:13:05 <jvgrant> and swatson who helped a ton with the client portion
16:13:48 <jvgrant> that is all
16:13:51 <strigazi> well done jvgrant
16:14:09 <strigazi> Do you want to mark the bp as complete?
16:14:17 <jvgrant> yeah
16:14:29 <adrian_otto> I know that was a hefty task, jvgrant and swatson. Thanks for banging that out.
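For anyone catching up on the rename, a rough before/after of the python-magnumclient commands; the flags shown are illustrative, not exhaustive:

    # old (kept only for backwards compatibility)
    magnum baymodel-create --name k8s-template --coe kubernetes ...
    magnum bay-create --name k8s --baymodel k8s-template --node-count 2

    # new
    magnum cluster-template-create --name k8s-template --coe kubernetes ...
    magnum cluster-create --name k8s --cluster-template k8s-template --node-count 2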
16:14:49 <strigazi> next topic
16:14:54 <strigazi> #topic Kuryr Integration Update (tonanhngo)
16:15:12 <tonanhngo> I attended the Kuryr meeting yesterday
16:15:37 <tonanhngo> The Release 1 is progressing, and they are working on release 2
16:16:04 <tonanhngo> However this only supports baremetal.  Container in VM requires more work
16:16:43 <tonanhngo> There is a proposal for a different implementation to support containers in VM, using IPVLAN
16:17:08 <tonanhngo> Looks like they will proceed with a POC to flesh out the pros/cons
16:17:53 <tonanhngo> I added a few more patches to round out the integration with the earlier Kuryr, but they are still marked as WIP.
16:18:18 <strigazi> Is there anything I could test on fake bm? (soon actual bm)
16:18:27 <tonanhngo> because of security concerns, and we expect them to change in a few weeks when they have release 2
16:19:17 <tonanhngo> Not really ATM, because we still need the REST server in release 2
16:19:32 <strigazi> ack
16:19:34 <tonanhngo> I think for now, we can pause for a little while
16:19:48 <tonanhngo> and track the development in Kuryr
16:20:22 <tonanhngo> That's all for now
16:20:40 <strigazi> thanks Ton
16:20:52 <strigazi> #topic Other blueprints/Bugs/Reviews/Ideas
16:21:06 <strigazi> Magnum Newton release
16:21:15 <strigazi> #link http://releases.openstack.org/newton/schedule.html Newton release schedule
16:21:35 <strigazi> The final release is on Sep 26-30
16:22:09 <strigazi> #topic Open Discussion
16:22:22 <strigazi> I have two topics
16:22:40 <hongbin> i have a question, after you
16:22:51 <tonanhngo> I want to bring up some issues with the k8s load balancer also.
16:23:14 <strigazi> 1. I tested fedora atomic 24 with  docker 1.10 and uploaded it to fedorapeople
16:23:35 <strigazi> so you can use it or build it with this change:
16:23:48 <strigazi> #link https://review.openstack.org/#/c/344779/
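For anyone who wants to try the image, registering it with glance follows the usual Magnum pattern; a minimal sketch (file and image names are placeholders, the os_distro property is what Magnum keys off):

    openstack image create fedora-atomic-24 \
      --disk-format qcow2 --container-format bare \
      --property os_distro='fedora-atomic' \
      --file fedora-atomic-24.qcow2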
16:23:58 <tonanhngo> No change required to the current templates/scripts?
16:24:39 <muralia> strigazi: why not update to docker 1.11?
16:24:43 <strigazi> haven't noticed anything and the functional tests pass. also we've been using docker 1.10 with f23 for more than 2 months
16:25:16 <strigazi> f24 ships with only 1.10, but
16:25:41 <strigazi> I did a custom build with docker 1.11 from f25
16:25:56 <strigazi> I'll publish instructions and the image tomorrow
16:26:14 <muralia> nice.
16:26:29 <strigazi> if we want to use what is upstream we must use 1.10
16:26:35 <strigazi> until november 25
16:26:54 <strigazi> I have also built with 1.12 from f26 :)
16:27:03 <strigazi> I was greedy I guess
16:27:22 <strigazi> and two:
16:27:42 <strigazi> What about updating mesos to 1.0 for newton
16:28:43 <strigazi> mesos is unstable in magnum in some sense anyway. If we update we might attract more users
16:28:47 <muralia> i think we should update as many images as we can for newton
16:28:58 <strigazi> team?
16:29:31 <jvgrant> +1
16:29:39 <muralia> +1
16:29:52 <tonanhngo> if we have the resources to carry it out
16:30:00 <strigazi> hongbin you have some concerns about updating
16:30:14 <strigazi> before the release
16:30:31 <hongbin> yes, we are at rc1 right now, which is close to the final release
16:30:42 <hongbin> i do have concerns about upgrading the COE version
16:30:48 <hongbin> which could be a big change
16:31:01 <strigazi> we could do it at driver level though, after the release
16:31:12 <muralia> yes, im ok with that too
16:31:45 <hongbin> ok, that means we will release the driver in the next cycle
16:32:13 <hongbin> or you want to backport the driver upgrade?
16:32:18 <muralia> hopefully not, but with the pace it's taking to fix tests, that might be possible
16:32:27 <muralia> if we can backport, we should do that
16:32:56 <hongbin> then, keep in mind the backport policy of openstack
16:33:19 <muralia> hmm, haven't looked at it. anything specific I should be aware of?
16:33:23 <hongbin> the reviewers should review the backport patch against the openstack backport policy
16:33:30 <muralia> ok
16:33:51 <hongbin> muralia: basically, it says don't backport anything besides bug fixes
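As a reminder, the mechanics of proposing a backport are the standard stable-branch workflow; a sketch, assuming a stable/newton branch exists and the fix is already merged on master:

    git fetch origin
    git checkout -b my-backport origin/stable/newton
    git cherry-pick -x <sha-of-the-merged-master-commit>
    git review stable/newton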
16:33:51 <strigazi> ok, so updates of COEs on the next release
16:34:05 <muralia> thanks
16:34:36 <strigazi> at CERN we'll update the drivers soon  anyway
16:35:09 <tonanhngo> User can always pull from master if they want the feature
16:35:11 <strigazi> I don't have anything else
16:35:23 <hongbin> i have a question for the team
16:35:45 <hongbin> are we ready to make a final release? any patch that hasn't been merged?
16:36:44 <hongbin> no?
16:36:46 <strigazi> I have some final updates on the install-guide
16:36:52 <tonanhngo> We might consider a fix for the kubernetes loadbalancer
16:36:59 <muralia> yes, we might be ready to do so. The driver work seems to be the only one remaining, but I'm concerned that such a big change at the last minute might not be fine.
16:37:34 <hongbin> ok
16:37:59 <strigazi> tonanhngo: do we have a bug for this?
16:38:24 <tonanhngo> Dane just reported the problem last week
16:38:52 <tonanhngo> let me open a bug
16:39:09 <dane_leblanc__> There was an old bug opened a while ago
16:39:22 <tonanhngo> the problem is broader than that bug, but we can start with a partial fix
16:39:41 <tonanhngo> I guess you are done Hongbin?
16:39:41 <dane_leblanc__> #link https://bugs.launchpad.net/magnum/+bug/1524025
16:39:42 <openstack> Launchpad bug 1524025 in Magnum "Kubernetes external loadbalancer is not getting created" [Undecided,In progress] - Assigned to Dane LeBlanc (leblancd)
16:39:54 <hongbin> tonanhngo: yes
16:40:21 <hongbin> sorry, final comment.
16:40:33 <hongbin> it looks like we are ready to freeze the repo now?
16:41:09 <muralia> yup.
16:41:09 <strigazi> Can we do it tomorrow, to update the install-guide?
16:41:26 <hongbin> strigazi: yes, will wait for your patch and ton's patch
16:41:40 <strigazi> Ton, you're up
16:41:45 <tonanhngo> So the minor problem is that the configuration for the k8s controller changed a bit because it is now a container instead of a process.  I have a simple patch for that.
16:41:46 <hongbin> but the rest of the patches should be frozen now
16:42:14 <hongbin> any other concern?
16:42:20 <strigazi> tonanhngo: I'm covered by your last message
16:43:12 <strigazi> #action hongbin to freeze the magnum service repo
16:43:58 <hongbin> thanks. that is from me
16:44:00 <dane_leblanc__> tonanhngo: Is there more needed than this patch that just merged: #link https://review.openstack.org/368996
16:44:02 <tonanhngo> However the larger problem is that the k8s plugin for OpenStack still uses LBaaS V1 and Keystone V2.
16:44:35 <tonanhngo> We can't even get LBaaS V1 on devstack anymore, and it's removed in Newton
16:45:10 <tonanhngo> There is support for LBaaS V2 in K8s release 1.3, but all our images still have 1.2
16:45:59 <tonanhngo> we can consider building a custom image with Fedora 24 and K8s 1.3
16:46:21 <tonanhngo> and update our scripts to work with this image
16:46:55 <tonanhngo> Support for Keystone V3 is apparently still being tested
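For reference, the switch tonanhngo describes would show up in the cloud provider config the k8s OpenStack plugin reads; a hedged sketch (option names assume the OpenStack provider as of k8s 1.3, all values are placeholders):

    [Global]
    auth-url=http://<keystone>:5000/v2.0   ; plugin still authenticates against Keystone V2
    username=<user>
    password=<password>
    tenant-name=<tenant>

    [LoadBalancer]
    lb-version=v2                          ; LBaaS V2, available from k8s 1.3
    subnet-id=<subnet-uuid>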
16:48:07 <strigazi> I'll give it a go with k8s 1.3, it's doable but requires a custom build
16:48:52 <tonanhngo> I am wondering whether it's OK to let the K8s load balancer be broken for the Newton release, and add a fix later
16:49:20 <tonanhngo> or should we try to get it working with custom image
16:49:36 <strigazi> we can fix it but document
16:49:51 <strigazi> that k8s lbaas requires a custom image
16:49:58 <tonanhngo> ok
16:50:34 <tonanhngo> sounds good, I will check out Spyros' new image tomorrow
16:51:03 <strigazi> ok
16:51:14 <tonanhngo> I also have a quick note to share with the team, and an open invitation
16:51:50 <tonanhngo> Some of us (myself, Spyros, Ricardo, my colleague Winnie) have been working on scalability for Magnum and COE
16:52:17 <tonanhngo> We requested a large cluster to run the Rally benchmarks we have been developing
16:52:45 <tonanhngo> We should be getting access to a 360-node cluster soon, from the CNCF lab (similar to OSIC)
16:53:08 <tonanhngo> Since this is a public resource, we want to keep it open to the team
16:53:40 <tonanhngo> It's also a major undertaking to install OpenStack there, manage it, and run benchmarks
16:54:04 <tonanhngo> If anyone is interested in joining the effort, you are quite welcome to give a hand
16:54:41 <muralia> awesome. thanks for that update
16:54:49 <tonanhngo> The result will be public, and we hope this will help with adoption
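For those curious what the benchmarks look like, Rally tasks are declarative YAML; a hypothetical sketch (the scenario and context names are assumptions based on the in-progress Magnum plugins, and the argument values are placeholders):

    ---
    MagnumClusters.create_and_list_clusters:
      - args:
          node_count: 1
        runner:
          type: "constant"
          times: 5
          concurrency: 1
        context:
          cluster_templates:
            image_id: "fedora-atomic-latest"
            coe: "kubernetes"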
16:55:22 <adrian_otto> tonanhngo: I can allocate some time to help out with it.
16:55:49 <tonanhngo> That would be awesome.  I am worried about installing OpenStack on 360 nodes
16:56:11 <rajiv__> can we use the openstack-ansible project?
16:56:20 <adrian_otto> rajiv__: yes.
16:56:23 <rajiv__> I am not sure about the stability of it.
16:56:28 <tonanhngo> I am thinking about ansible, or Kolla
16:56:37 <adrian_otto> that will help a lot. We recently did a lot of work to make that work well with Magnum
16:57:22 <strigazi> rdo-manager would be a good option but the nodes run ubuntu trusty
16:58:17 <adrian_otto> I want to discuss https://review.openstack.org/352358 which adds a more restricted security group for the cluster. I'm not sure about having essentially an "all closed" network security policy on all new clusters. From a security perspective it's a best practice to be secure by default. From a practical perspective, it means that every "real" application will require a custom COE driver.
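To make that trade-off concrete: under an "all closed" default, any port an application needs would have to be opened explicitly in the driver's Heat templates; a hedged sketch (the resource name follows the style of the existing Magnum templates, and the rule values are illustrative):

    secgroup_kube_minion:
      type: OS::Neutron::SecurityGroup
      properties:
        rules:
          - protocol: tcp
            port_range_min: 30000
            port_range_max: 32767   # e.g. the k8s NodePort range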
16:58:17 <rajiv__> does anyone have configuration information?
16:59:00 <adrian_otto> rajiv__: this is your patch
16:59:10 <strigazi> We can continue on #openstack-containers
16:59:19 <strigazi> time is up team
16:59:30 <adrian_otto> ok
16:59:36 <strigazi> #endmeeting