16:00:14 <hongbin> #startmeeting containers
16:00:16 <openstack> Meeting started Tue Sep 27 16:00:14 2016 UTC and is due to finish in 60 minutes.  The chair is hongbin. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:19 <openstack> The meeting name has been set to 'containers'
16:00:21 <hongbin> #topic Roll Call
16:00:27 <Drago> o/
16:00:29 <strigazi> Spyros Trigazis
16:00:35 <hieulq_> Hieu LE o/
16:00:40 <swatson> Stephen Watson o/
16:00:43 <eghobo> o/
16:00:43 <rpothier> Rob Pothier
16:00:44 <mkrai> Madhuri Kumari
16:00:48 <jvgrant> Jaycen Grant
16:00:48 <tonanhngo> Ton Ngo
16:01:13 <dane_leblanc__> o/
16:01:14 <vijendar> o/
16:01:18 <adrian_otto> Adrian Otto
16:01:37 <hongbin> Thanks for joining the meeting Drago strigazi hieulq_ swatson eghobo rpothier mkrai jvgrant tonanhngo dane_leblanc__ vijendar adrian_otto
16:01:42 <hongbin> #topic Announcements
16:02:02 <hongbin> 1. The Magnum PTL election has ended. adrian_otto is your PTL for the Ocata cycle
16:02:16 <adrian_otto> I would like to extend my most sincere thanks to hongbin for serving as our PTL over the Newton release cycle. We deeply appreciate your leadership during this time, and look forward to working together more.
16:02:18 <tonanhngo> Thanks Hongbin for leading Magnum through the Newton cycle!
16:02:39 <jvgrant> +1
16:02:41 <swatson> +1
16:02:52 <strigazi> Thanks hongbin
16:02:53 <hieulq_> thanks hongbin and congrats adrian_otto
16:02:54 <dane_leblanc__> +1
16:02:55 <hongbin> Thanks. Glad to work with you in Newton
16:02:57 <mkrai> Thanks hongbin :)
16:02:57 <strigazi> Congrats Adrian
16:03:05 <mkrai> Congratulations adrian_otto
16:03:19 <vijendar> Thanks hongbin and congrats adrian_otto
16:03:20 <hongbin> I will work with adrian_otto this week for the transition.
16:03:39 <hongbin> #topic Review Action Items
16:04:01 <hongbin> let me find the list of AIs
16:04:19 <hongbin> 1. hongbin talk to the Kuryr PTL to set up a joint session at the design summit
16:04:31 <hongbin> The Kuryr PTL agreed to have a joint session
16:04:51 <hongbin> I believe adrian_otto will follow up the details later
16:04:56 <adrian_otto> I will, thanks.
16:05:11 <adrian_otto> let's record an action for me to follow up with him
16:05:12 <hongbin> 2. hongbin contact the Heat team about a joint session for discussing Heat performance
16:05:31 <hongbin> #action adrian_otto follow up with Kuryr PTL to arrange a joint session
16:05:58 <hongbin> I believe strigazi is working on this?
16:06:05 <strigazi> Yes
16:06:22 <hongbin> strigazi: how is the session going?
16:06:42 <strigazi> I contacted the Heat PTL and he told me that they are doing a large-stacks session that we can join
16:06:51 <strigazi> with tripleo
16:07:08 <strigazi> There will be no specific
16:07:13 <strigazi> heat-magnum session
16:07:47 <strigazi> We can start the discussion on that
16:07:52 <adrian_otto> We should plan to attend that session.
16:07:52 <strigazi> and see how it goes
16:08:07 <tonanhngo> They should have an etherpad for planning, we should add our topics for discussion
16:08:15 <adrian_otto> the only concern I have is making sure it's placed on the schedule such that it does not conflict with other priorities of ours.
16:08:45 <adrian_otto> tonanhngo: can you help us track down a link to that etherpad?
16:08:46 <tonanhngo> hopefully this session would not conflict with our own session
16:08:56 <strigazi> I can continue on that and have more details
16:09:04 <tonanhngo> Sure, I will follow
16:09:07 <strigazi> I'm preparing a list of detailed subjects
16:09:13 <adrian_otto> hongbin: do you happen to know who's putting the schedule together this year?
16:09:32 <hongbin> adrian_otto: i think it is ttx
16:09:34 <strigazi> They also have a PTL change
16:09:39 <adrian_otto> thanks hongbin
16:09:58 <hongbin> ok, that is all for the AIs
16:10:09 <hongbin> #topic Essential Blueprints Review
16:10:17 <hongbin> 1. Support baremetal container clusters (strigazi)
16:10:23 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support
16:10:33 <hongbin> strigazi: ^^
16:11:00 <strigazi> While working on the mesos BM driver, I noticed the only differences between the VM and BM drivers
16:11:16 <strigazi> are the private network and the lack of Cinder support
16:11:54 <strigazi> With a colleague here we investigated how we can make the private network optional but enabled by default
16:12:16 <strigazi> and pass the selected network as a parameter (we have that option already)
16:12:30 <strigazi> So, for the mesos driver
16:13:04 <strigazi> I will only modify the current driver to support both flavors with the same heat templates
16:13:23 <strigazi> Eventually this can happen for the other drivers too
16:13:52 <strigazi> we are testing how the optional network goes at the moment
16:13:56 <strigazi> that's it
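
For context, the pattern strigazi describes (a private network created by default, with an existing network passable as a parameter) can be sketched with Heat template conditions. A minimal illustration, assuming the 2016-10-14 HOT version that introduced conditions; fixed_network echoes Magnum's existing option, while the other names are hypothetical:

    heat_template_version: 2016-10-14

    parameters:
      fixed_network:
        type: string
        default: ""
        description: Existing network to use; leave empty to create one

    conditions:
      create_private_network:
        equals: [{get_param: fixed_network}, ""]

    resources:
      # Only created when no existing network was passed in
      private_network:
        type: OS::Neutron::Net
        condition: create_private_network

    outputs:
      cluster_network:
        description: The network the cluster nodes attach to
        value:
          if:
            - create_private_network
            - {get_resource: private_network}
            - {get_param: fixed_network}

The same template could then serve both flavors: VM clusters keep the default private network, while baremetal clusters pass their provisioning network in fixed_network.
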
16:14:12 <hongbin> thanks strigazi
16:14:27 <hongbin> any comments regarding the Ironic integration?
16:14:39 <hongbin> 2. Magnum User Guide for Cloud Operator (tango)
16:14:47 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/user-guide
16:14:51 <hongbin> tonanhngo: ^^
16:14:53 <tonanhngo> The section on Horizon and native clients merged.  Thanks again everyone for the helpful comments.
16:15:08 <tonanhngo> With this, I think the key sections in the guide are in place
16:15:30 <tonanhngo> I will pause for a while to focus on the scalability work for the summit
16:15:44 <strigazi> tonanhngo: I have a suggestion
16:15:59 <tonanhngo> strigazi: Sure
16:16:01 <strigazi> to discuss between now and the summit, and at the summit
16:16:27 <strigazi> The user guide has a lot of good content but has become very big
16:16:47 <strigazi> also there is content for ops and end users
16:16:59 <strigazi> we could start splitting the guide
16:17:15 <strigazi> into architecture, ops, and user guides
16:17:30 <strigazi> what do you think?
16:17:42 <adrian_otto> that was our original intent as of the Tokyo summit
16:17:51 <tonanhngo> That's a good idea.  I was thinking of starting an operator guide.  The role distinction is basically who manages the infrastructure
16:18:12 <tonanhngo> It's true that some parts of the current user guide are really for operators
16:18:34 <adrian_otto> the service operator has one set of interests, and the service consumer has another set.
16:18:36 <tonanhngo> We have talked about tuning, which would belong in an operator guide
16:18:45 <adrian_otto> so it might be nice to split along that line.
16:18:46 <strigazi> yes
16:19:21 <strigazi> What I wanted to say is
16:19:37 <strigazi> I want to add content for the Ironic integration and I'm not sure where
16:19:51 <strigazi> Maybe Arch guide?
16:20:15 <tonanhngo> Who would be the audience for the arch guide?
16:20:18 <adrian_otto> seems to me that fits in the operator guide
16:20:31 <strigazi> Mostly ops but
16:20:52 <strigazi> won't have any instructions to follow or setup steps
16:21:11 <strigazi> it will be the description of the service
16:21:40 <strigazi> Without a strict definition, it's an ops guide
16:21:54 <tonanhngo> So maybe we can start the operator guide by pulling the appropriate sections from the current user guide, and add more.
16:22:08 <tonanhngo> Then see if a separate arch guide would be needed
16:22:18 <adrian_otto> seems sensible
16:22:27 <strigazi> makes sense
16:22:32 <strigazi> thanks Ton
16:22:39 <tonanhngo> Thanks Spyros
16:22:45 <adrian_otto> I worry that an arch guide standalone may be rather short
16:22:58 <adrian_otto> leaving the reader with an appetite for more specifics
16:23:11 <strigazi> if the ops guide grows too much we split later
16:23:43 <tonanhngo> It could start as a section in the operator guide, and we can direct the reader there
16:24:01 <strigazi> new file?
16:24:13 <tonanhngo> Yes definitely
16:24:23 <strigazi> great
16:24:54 <hongbin> ok, next topic
16:24:58 <hongbin> 3. COE Bay Drivers (muralia)
16:25:04 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/bay-drivers
16:25:10 <hongbin> muralia: ^^
16:25:12 <Drago> I have muralia's update
16:25:26 <Drago> "So I updated my driver patch: https://review.openstack.org/#/c/374906 . Not sure why some of the functional tests are failing at the gate; they pass for me in devstack."
16:25:37 <Drago> "I've addressed all comments. Just need more reviews while I fix those failing tests."
16:25:44 <Drago> That is all
16:26:12 <hongbin> Thanks Drago
16:26:25 <hongbin> #topic Kuryr Integration Update (tango)
16:26:32 <hongbin> tonanhngo: ^^
16:26:39 <tonanhngo> I attended the Kuryr meeting yesterday
16:26:53 <tonanhngo> The IPVLAN proposal seems to be moving along quickly
16:27:35 <tonanhngo> A POC was made available and they already started putting the implementation into the Kuryr lib
16:27:57 <tonanhngo> some parts will go into libnetwork for Swarm
16:28:35 <tonanhngo> The nested container support is coming slowly
16:28:50 <tonanhngo> similarly for the REST API driver
16:29:07 <tonanhngo> so nothing much for us to move on
16:30:06 <tonanhngo> As they implement the IPVLAN support, I think issues are starting to emerge
16:30:49 <tonanhngo> We will have to see how these are addressed.  If they cannot be solved, then they will just be limitations to be aware of
16:30:57 <tonanhngo> That's about all
16:30:59 <adrian_otto> so the implementation is not working completely yet?
16:31:21 <tonanhngo> They just have the bare minimum to show the concept
16:31:34 <tonanhngo> the full implementation is in progress now
16:31:37 <adrian_otto> ok
16:31:56 <hongbin> thanks tonanhngo
16:32:03 <hongbin> #topic Open Discussion
16:32:28 <hongbin> Anyone have a topic to discuss?
16:32:35 <tonanhngo> So there is a fix for the Kubernetes load balancer, at least for LBaaS v1
16:32:42 <strigazi> I have an image with k8s 1.3
16:33:06 <strigazi> That was for Ton ^^
16:34:23 <hongbin_> it looks like my nick just changed (not sure why)
16:34:45 <strigazi> tonanhngo, with k8s 1.3, lbaas v2 is expected to work right?
16:37:06 <hongbin_> it looks like folks keep disconnecting
16:37:15 <Drago1> Yeah, something wrong with freenode maybe
16:37:17 <swatson> It looks like IRC is going haywire
16:37:27 <Drago1> I got kicked and had to reconnect
16:37:39 <strigazi> I'm here
16:38:07 <jvgrant> same, but I see tons of notifications of people dropping and adding
16:38:22 <adrian_otto> sweet!!
16:38:34 <strigazi> I have filtered them all, so for me it's just weird
16:38:52 <adrian_otto1> IRC is having a complete fit
16:39:09 <tango> Lost my connection
16:39:26 <jvgrant> strange that some of us aren't impacted
16:40:06 <tonanhngo> probably just some servers
16:40:46 <hongbin> become stable now?
16:41:00 <strigazi> tonanhngo, about the fix
16:41:26 <strigazi> you set hostname-override for all cases
16:41:47 <strigazi> for clouds without proper dns that is a problem
16:42:10 <tonanhngo> Yes, we need that because the user can decide to use the load balancer at any time
16:42:28 <tonanhngo> The DNS issue also impacts some of the kubectl commands
16:42:35 <tonanhngo> like exec
16:42:35 <strigazi> yeap
16:42:41 <adrian_otto1> strigazi: I must have missed something because I don't understand the context of your remarks. Is hostname-override related to LBaaS? How?
16:43:01 <tonanhngo> The context is as follows
16:44:10 <strigazi> I assume Ton is typing
16:44:35 <tango> Yes I show up as both tango and tonanhngo :)
16:45:20 <tango> So to make this happen, we need to register the node with the kube-apiserver under the Nova name of its instance
16:45:39 <tango> We do this using the hostname-override option in kubelet
16:46:32 <tango> This is one of the requirements for any feature in Kubernetes that requires interaction with OpenStack
16:46:51 <hongbin_> I got disconnected again
16:47:15 <tango> So, this holds for the load balancer and cinder volume support, and likely other features in the future.
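
To make the mechanism concrete, here is a rough sketch in the style of Magnum's Heat software-config fragments: the Nova instance name is substituted into the kubelet sysconfig so the node registers under that name. The sysconfig path, the instance_name parameter, and the sed edit are illustrative assumptions, not the exact Magnum code:

    resources:
      configure_kubelet_hostname:
        type: OS::Heat::SoftwareConfig
        properties:
          group: ungrouped
          config:
            str_replace:
              template: |
                #!/bin/sh
                # Register the node with the kube-apiserver under its Nova
                # instance name, so OpenStack-backed features (load balancer,
                # Cinder volumes) can match Kubernetes nodes to Nova instances.
                sed -i 's|^KUBELET_HOSTNAME=.*|KUBELET_HOSTNAME="--hostname-override=INSTANCE_NAME"|' /etc/kubernetes/kubelet
                systemctl restart kubelet
              params:
                # instance_name is a hypothetical template parameter
                INSTANCE_NAME: {get_param: instance_name}
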
16:47:50 <tango> So back to the DNS question, it might be worth considering adding a DNS as part of the Kubernetes cluster
16:48:45 <strigazi> We can update /etc/hosts on all nodes; we need the heat-agents for that
16:49:35 <tango> That's another option
16:49:40 <tango> maybe easier
16:49:58 <strigazi> What do you mean by having a DNS as part of the cluster?
16:50:12 <tango> install a DNS service
16:50:20 <tango> and register the node names there
16:50:51 <strigazi> like SkyDNS? I thought it was only for the services
16:52:31 <tango> It would be good to try out
16:52:46 <hongbin_> strigazi: tango : you could use /etc/hosts as a DNS; it just needs to be populated with the IP addresses/hostnames of all the instances
16:53:08 <Drago1> Heat agents will definitely be on the nodes too
16:53:19 <Drago1> They're needed for lifecycle operations
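
A minimal sketch of the /etc/hosts idea under discussion, assuming the heat-agents are present on the nodes as Drago1 notes; cluster_hosts is a hypothetical input carrying "IP hostname" pairs, and kube_minions a hypothetical ResourceGroup of cluster nodes:

    resources:
      hosts_config:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          inputs:
            - name: cluster_hosts   # "IP hostname" pairs, one per line
          config: |
            #!/bin/sh
            # Rewrite the Magnum-managed block of /etc/hosts with the
            # current list of cluster members.
            sed -i '/^# magnum-begin$/,/^# magnum-end$/d' /etc/hosts
            {
              echo '# magnum-begin'
              echo "$cluster_hosts"
              echo '# magnum-end'
            } >> /etc/hosts

      hosts_deployment:
        type: OS::Heat::SoftwareDeploymentGroup
        properties:
          config: {get_resource: hosts_config}
          servers: {get_attr: [kube_minions, refs_map]}
          input_values:
            cluster_hosts: {get_param: cluster_hosts}

A stack update would re-run the deployment and push the refreshed list to every node, which is exactly where the consistency concerns raised next come in.
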
16:54:18 <adrian_otto> Docker tried an implementation recently where each node in a swarm had a mirrored copy of a hosts file, and that proved problematic.
16:54:41 <adrian_otto> probably because they did not have a way to handle inconsistencies in that distributed state
16:54:56 <hongbin_> i see
16:55:08 <adrian_otto> if you have a node that's responding too slowly, it can miss updates, and then the state is inconsistent
16:55:30 <adrian_otto> packet loss or other split-cluster conditions can lead to that
16:55:55 <adrian_otto> so what seems like a simple solution starts to become a complex distributed system once you try to address all the edge cases
16:56:07 <tango> True
16:56:24 <tango> it will be very hard to debug
16:56:46 <adrian_otto> so using something on top of a quarum data store (e.g. etcd, ZooKeeper, ...) tends to be better
16:58:03 <Drago1> Quaram? Is that quorum or is there something called quarum? I'm unfamiliar
16:58:04 <adrian_otto> SkyDNS uses etcd as a backing store for the resource records.
16:58:26 <adrian_otto> sorry, I meant "quorum" as a category
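
For reference, SkyDNS keeps its records in etcd under keys derived from the reversed DNS name, so registering a node is a single write to the quorum-backed store rather than a file mirrored onto every host. A hypothetical registration for a node in a cluster.local zone, again framed as a Heat software config (the zone, interface, and etcd v2 key layout are assumptions):

    resources:
      register_skydns:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          config: |
            #!/bin/sh
            # Publish this node's A record to SkyDNS's etcd backend; every
            # node then resolves the same answer, with no per-node hosts
            # file to drift out of sync.
            name=$(hostname -s)
            ip=$(ip -4 -o addr show eth0 | awk '{sub(/\/.*/, "", $4); print $4}')
            # node-1.cluster.local maps to the key /skydns/local/cluster/node-1
            etcdctl set "/skydns/local/cluster/${name}" "{\"host\":\"${ip}\"}"
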
16:58:44 <adrian_otto> we have only a couple of minutes remaining.
16:58:47 <strigazi> SkyDNS, isn't it only for registering the services?
16:59:00 <strigazi> let's go to our channel
16:59:01 <tango> This seems like a general problem that needs some investigation.
16:59:16 <tango> ok
16:59:20 <adrian_otto> yeah, we can continue this in #openstack-containers
16:59:50 <hongbin> ok. thanks for joining the meeting
16:59:55 <hongbin> #endmeeting