14:02:50 <apuimedo> #startmeeting kuryr
14:02:51 <openstack> Meeting started Mon Sep 12 14:02:50 2016 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:53 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:55 <openstack> The meeting name has been set to 'kuryr'
14:03:00 <devvesa> o/
14:03:00 <apuimedo> Hi and welcome to yet another Kuryr weekly meeting
14:03:05 <limao> o/
14:03:07 <apuimedo> who's here for the show?
14:03:14 <icoughla> hello
14:03:15 <garyloug> o/
14:03:18 <janonymous> o/
14:03:23 <lmdaly> o/
14:04:15 <ivc_> o7
14:04:45 <apuimedo> :-)
14:05:05 <apuimedo> #info devvesa limao icoughla garyloug lmdaly ivc_ and apuimedo present
14:05:22 <apuimedo> first of all, I want to apologize for putting the agenda up only today
14:05:29 <apuimedo> I'll do it earlier this week
14:06:05 <apuimedo> https://wiki.openstack.org/wiki/Meetings/Kuryr#Meeting_September_12th.2C_2016
14:06:14 <pablochacin> apuimedo: o/
14:06:17 <apuimedo> alright, let's get started
14:06:22 <apuimedo> welcome pablochacin!
14:06:36 <apuimedo> #topic kuryr: new config options
14:07:19 <apuimedo> #info Today the patch that marks the first openstack/kuryr release got merged https://review.openstack.org/316320
14:07:56 <apuimedo> #Now all the authentication options for talking to Neutron reside in the [neutron] config group
14:08:01 <apuimedo> arrgg
14:08:16 <apuimedo> #info Now all the authentication options for talking to Neutron reside in the [neutron] config group
14:08:41 <apuimedo> #info kuryr-libnetwork and kuryr-kubernetes can now use the new kuryr.lib.utils code to support keystone v3
14:09:23 <apuimedo> #info the options for authentication now are loaded and registered by openstack/keystoneauth
14:09:34 <apuimedo> Questions about the new config?
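For context, the new [neutron] group discussed above is loaded through keystoneauth's standard option registration. A minimal sketch of what such a kuryr.conf fragment might look like — every value below is an illustrative placeholder, not taken from the meeting:

```ini
# Hypothetical kuryr.conf fragment: Neutron authentication options now
# live in the [neutron] group and are registered via keystoneauth.
[neutron]
auth_type = v3password
auth_url = http://192.168.0.10/identity/v3
username = kuryr
password = secret
project_name = service
user_domain_name = Default
project_domain_name = Default
```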
14:11:53 <apuimedo> alright, moving on
14:12:02 <apuimedo> #topic kuryr: release
14:12:42 <apuimedo> #info We are changing the name of the openstack/kuryr repository to match the pypi package it produces https://review.openstack.org/368864
14:13:10 <apuimedo> #info a patch for cutting the first kuryr-lib release has been sent https://review.openstack.org/368622
14:13:35 <apuimedo> Now progress on adding support for the RPC/REST that Vikas is spearheading can resume in the project
14:13:41 <apuimedo> questions about the release?
14:15:13 <apuimedo> can people read me?
14:15:19 <ivc_> yup
14:15:22 <icoughla> yes
14:15:23 <apuimedo> ok, just checking
14:15:28 <banix> yes :)
14:15:41 <apuimedo> alright, moving on to the next topic
14:16:00 <apuimedo> #topic kuryr-libnetwork: keystone v3 support
14:17:04 <apuimedo> #info a patch has been sent to use the new kuryr-lib config facilities. It has been successfully manually tested to work with devstack and keystone v3 https://review.openstack.org/367033
14:17:47 <apuimedo> #info after the v3 patch gets merged, those of you that have kuryr.conf tuned should update to the new keystoneauth standard options
14:18:24 <apuimedo> an example can be found here:
14:18:27 <apuimedo> #link https://paste.fedoraproject.org/427103/47367050/
14:18:31 <banix> will using the v2 be still an option?
14:18:45 <apuimedo> banix: yes. It is specifically tested to work with both :-)
14:19:01 <banix> great
14:19:08 <apuimedo> #info it will support v2password v3password v2token and v3token
14:19:30 <apuimedo> additionally, now devstack creates a kuryr service in keystone and kuryr operates in service mode instead of admin
14:19:36 <apuimedo> (when run under devstack)
14:20:40 <apuimedo> the patch still needs some unit test fixes
14:20:47 <apuimedo> and once the kuryr-lib release is cut
14:21:01 <apuimedo> we'll move to depend on kuryr-lib 0.1.0 instead of git's version
14:21:15 <apuimedo> any questions?
14:22:22 <limao> Looks like kuryr-libnetwork Jenkins is broken after the keystone v3 code merged in kuryr-lib
14:23:14 <apuimedo> limao: yup, that belongs to the next topic
14:23:16 <apuimedo> :P
14:23:22 <limao> :)
14:23:23 <apuimedo> #topic kuryr-libnetwork testing fixes
14:23:58 <apuimedo> #info kuryr-lib's keystone v3 patch requires kuryr-libnetwork's keystone v3 patch to be merged to be working again
14:24:13 <apuimedo> it is possible that some additional fix may be required
14:24:31 <apuimedo> this should be a big focus this week, get all the gates green
14:24:54 <apuimedo> #action apuimedo, banix, vikas, irenab to get kuryr-libnetwork's v3 patch merged
14:25:10 <apuimedo> #action apuimedo to get gates working again
14:25:29 <apuimedo> I'd appreciate help with the gates
14:25:44 <apuimedo> since people like banix and limao have more experience with them :P
14:25:45 <janonymous> +1
14:25:58 <banix> sure
14:26:01 <limao> +1
14:26:28 <apuimedo> thanks
14:26:50 <apuimedo> #action janonymous banix and limao to work on gates and testing fixes too
14:27:54 <apuimedo> #info limao already sent a patch for fixing rally tests this week, so it is a continued focus https://review.openstack.org/#/c/365851/
14:29:11 <apuimedo> #info janonymous sent a fix for devstack that is important for ubuntu based tests https://review.openstack.org/#/c/367753/
14:29:33 <apuimedo> #action banix to review and hopefully merge https://review.openstack.org/#/c/367753/
14:30:09 <apuimedo> I'll have to check how we handle it now that the gates fail; maybe we'll need to disable their voting status temporarily
14:30:20 <apuimedo> since there's a few patches that need to get in
14:31:19 <apuimedo> anything else on kuryr-libnetwork's tests?
14:31:28 <apuimedo> oh, sorry
14:32:05 <apuimedo> #info: limao verified that before the keystone v3 kuryr-lib patch, only the fullstack gate was failing https://review.openstack.org/#/c/365683/
14:32:19 <apuimedo> limao: I think that test patch can be abandoned now
14:32:43 <apuimedo> #topic kuryr-libnetwork: container-in-vm
14:32:52 <limao> Yeah, I will abandon it, thanks
14:33:15 <apuimedo> this topic is about Vikas' work, but since he is not in the meeting today, let's move the update to the mailing list
14:33:33 <apuimedo> #action vikas to update the mailing list about the container-in-vm kuryr-libnetwork patches
14:33:46 <apuimedo> we'll tackle the ipvlan proposal in another topic
14:34:06 <apuimedo> anybody with questions/things to share on kuryr-libnetwork?
14:35:05 <tonanhngo> Quick question
14:35:21 <apuimedo> tonanhngo: go ahead, please
14:35:43 <tonanhngo> does the current release work in VM yet, or we need to wait for the container-in-vm just mentioned?
14:36:06 <apuimedo> tonanhngo: need to wait. It's the immediate focus now that we sent the release patch
14:36:18 <tonanhngo> ah ok, thanks for clarifying.
14:36:54 <irenab> apuimedo: I think it should work, but have different networking than VMs
14:37:45 <apuimedo> irenab: I believe tonanhngo inquires about having the containers in the VMs use kuryr networking
14:38:04 <tonanhngo> Yes
14:39:16 <apuimedo> alright, let's go ahead with the next topic
14:40:09 <apuimedo> #topic kuryr-kubernetes: asyncio style upstreaming
14:40:45 <apuimedo> #info devvesa got the first translator layer patch merged https://review.openstack.org/#/c/363758/
14:41:28 <apuimedo> devvesa: any update?
14:42:11 <apuimedo> #action apuimedo to address vikas' comments to  https://review.openstack.org/#/c/365600/
14:43:57 <apuimedo> I guess that's a 'no' ATM
14:43:59 <apuimedo> :-)
14:44:12 <apuimedo> #action apuimedo to upstream a first version of the CNI driver
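For readers unfamiliar with the CNI driver mentioned here: a CNI plugin is invoked by the container runtime with CNI_COMMAND set in the environment and the network config JSON on stdin, and replies with a JSON result. The toy handler below only illustrates that contract — the function name, fixed address, and echo behaviour are invented placeholders, not the actual kuryr-kubernetes code (a real driver would create a Neutron port and bind it into the pod's netns):

```python
import json

def cni_handle(command, stdin_conf):
    """Toy sketch of a CNI-style handler (CNI 0.2.0 result form).

    A real Kuryr CNI driver would look up/create a Neutron port and
    wire it into the pod's network namespace; this placeholder just
    echoes a fixed address.
    """
    conf = json.loads(stdin_conf)
    if command == "ADD":
        return {
            "cniVersion": conf.get("cniVersion", "0.2.0"),
            "ip4": {"ip": "10.10.0.5/24"},  # placeholder address
        }
    if command == "DEL":
        return {}  # nothing to report on delete
    raise ValueError("unsupported CNI_COMMAND: %r" % command)

result = cni_handle("ADD", '{"cniVersion": "0.2.0", "name": "kuryr"}')
print(result["ip4"]["ip"])  # → 10.10.0.5/24
```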
14:44:40 <apuimedo> #topic kuryr-kubernetes: py2/py3 eventlet style PoC
14:44:52 <apuimedo> #chair ivc_
14:44:53 <openstack> Current chairs: apuimedo ivc_
14:44:54 <ivc_> still wip
14:45:16 <apuimedo> ivc_: any update? Anything we can help with?
14:45:24 <devvesa> apuimedo, hi
14:45:33 <apuimedo> devvesa: hi
14:45:38 <devvesa> Well, not too much this week.
14:45:50 <devvesa> I started to watch the CNI patch
14:46:01 <ivc_> im gluing things together atm. but that turned out slightly more complicated than i'd expected
14:46:03 <devvesa> https://review.openstack.org/#/c/320545/
14:46:04 <apuimedo> devvesa: got it, anything the rest can help with? I offered to take over the CNI patch above
14:46:24 <apuimedo> ivc_: that's the normal course of integration software :P
14:46:51 <ivc_> also was exploring python-k8sclient
14:46:54 <apuimedo> devvesa: I can take it from there then ;-)
14:46:56 <devvesa> No... I hope things will go faster once we're able to bind containers and have the Pods/ports ready
14:47:07 <devvesa> apuimedo: thanks! Yes. I am not sure how much I'll be able to help
14:47:16 <apuimedo> ivc_: did you check the k8s client that midokura's PoC uses for testing?
14:47:19 <devvesa> all yours :)
14:47:24 <apuimedo> devvesa: I expect reviews!
14:47:36 <devvesa> I guarantee I'll do it
14:47:47 <apuimedo> thanks devvesa !
14:48:16 <apuimedo> ivc_: is there any particular thing that is causing difficulties that we could lend a hand with?
14:49:27 <ivc_> apuimedo not something particular. just trying to ensure active-active ha
14:50:02 <apuimedo> ivc_: would you like to have some ha videoconf meeting?
14:50:15 <irenab> ivc_: anything yiou would like to share as devref   ?
14:51:11 <ivc_> i'd like to, but first i need to figure out some things for myself
14:51:37 <ivc_> more specifically how k8s watch events are dispatched
14:52:09 <apuimedo> you mean order and such?
14:52:15 <apuimedo> ivc_: ^^
14:52:20 <ivc_> first approach was to fire-and-forget the event handler on a separate greenlet as soon as it arrived
14:52:33 <ivc_> but that does not help ordering/concurrency ^^
14:52:50 <ivc_> so right now i'm grouping events by their target
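To illustrate the grouping ivc_ describes — events for different targets handled concurrently, events for the same target strictly in arrival order — here is a hedged sketch using stdlib threads (the actual PoC used eventlet greenlets; PerTargetDispatcher and all names here are invented for illustration):

```python
import queue
import threading
from collections import defaultdict

class PerTargetDispatcher:
    """One worker and one FIFO queue per target: events for different
    targets run concurrently, events for the same target in order."""

    def __init__(self, handler):
        self._handler = handler
        self._queues = {}
        self._threads = []
        self._lock = threading.Lock()

    def dispatch(self, target, event):
        with self._lock:
            if target not in self._queues:
                q = queue.Queue()
                t = threading.Thread(target=self._drain, args=(q,))
                self._queues[target] = q
                self._threads.append(t)
                t.start()
        self._queues[target].put((target, event))

    def _drain(self, q):
        while True:
            item = q.get()
            if item is None:  # sentinel: shut this worker down
                return
            self._handler(*item)

    def stop(self):
        for q in self._queues.values():
            q.put(None)
        for t in self._threads:
            t.join()

handled = defaultdict(list)
d = PerTargetDispatcher(lambda tgt, ev: handled[tgt].append(ev))
for i in range(5):
    d.dispatch('pod-a', i)
    d.dispatch('pod-b', i)
d.stop()
# per-target order is preserved: handled['pod-a'] == [0, 1, 2, 3, 4]
```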
14:53:10 <apuimedo> ivc_: let's take the discussion, if you don't mind to #openstack-kuryr after this meeting. We have one more topic to cover, and I want more time for this ha discussion ;-)
14:53:20 <ivc_> sure
14:53:24 <apuimedo> #topic ipvlan address pairs proposal
14:53:28 <apuimedo> #chair icoughla
14:53:29 <openstack> Current chairs: apuimedo icoughla ivc_
14:53:55 <apuimedo> who has had time to check icoughla's proposal to the mailing list?
14:54:03 <hongbin> o/
14:54:40 <apuimedo> #link http://lists.openstack.org/pipermail/openstack-dev/2016-September/103461.html
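For readers following the link above: the gist of the proposal is to give each container an ipvlan sub-interface of the VM's Neutron-managed NIC and whitelist the container's IP on that port via allowed-address-pairs, so no Neutron credentials are needed inside the tenant VM. A rough sketch of the plumbing — interface names, netns, IP, and the helper itself are all assumptions, and commands are only constructed, not executed, so the sketch runs without root:

```python
# Hypothetical command plumbing for the ipvlan/allowed-address-pairs
# idea; all names and values below are illustrative placeholders.
def ipvlan_plumbing(vm_nic, cont_if, cont_ip, netns, neutron_port_id):
    return [
        # 1. create an ipvlan L2 sub-interface on the VM's main NIC
        ["ip", "link", "add", "link", vm_nic, "name", cont_if,
         "type", "ipvlan", "mode", "l2"],
        # 2. move it into the container's network namespace
        ["ip", "link", "set", cont_if, "netns", netns],
        # 3. whitelist the container IP on the VM's Neutron port so
        #    anti-spoofing rules let its traffic through
        ["neutron", "port-update", neutron_port_id,
         "--allowed-address-pairs", "type=dict", "list=true",
         "ip_address=" + cont_ip],
    ]

cmds = ipvlan_plumbing("eth0", "ipvl0", "10.0.0.42", "cont-ns", "PORT_UUID")
```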
14:54:49 <apuimedo> hongbin: very happy to read that
14:54:54 <apuimedo> what do you think about it?
14:55:11 <hongbin> it looks good, although I haven't looked into it in details
14:56:35 <apuimedo> personally I would like such a deployment option to take place
14:57:11 <apuimedo> icoughla: are you targeting kuryr-libnetwork first, kuryr-kubernetes first, only one of them, or both at the same time?
14:57:20 <apuimedo> (they need slightly different approaches)
14:57:28 <hongbin> me too. it seems this proposal is able to address the credential problem (requires to store neutron & rabbitmq credentials in tenant VMs)
14:57:57 <icoughla> kuryr-libnetwork first
14:58:10 <icoughla> although k8s is what we are really interested in
14:58:11 <apuimedo> icoughla: glad to hear that
14:58:21 <apuimedo> icoughla: better to move first in stable ground :P
14:58:30 <apuimedo> (or quasi stable)
14:58:31 <icoughla> we have a working poc in kuryr-libnetwork atm
14:58:42 <apuimedo> icoughla: can we get a link to that?
14:59:09 <apuimedo> #info icoughla and his team have an IPVLAN, address pairs working PoC for kuryr-libnetwork container-in-vm
14:59:28 <icoughla> sure
14:59:40 <icoughla> interested to hear the feedback
14:59:51 <apuimedo> we'll need the contributors, especially cores like irenab, banix and vikas, to weigh in
14:59:53 <hongbin> A question, how about the use cases of container-in-baremetal ?
14:59:56 <apuimedo> personally I welcome the effort
15:00:05 <apuimedo> hongbin: you mean with ironic?
15:00:16 <hongbin> yes, or non-ironic
15:00:49 <icoughla> we have not prioritised bare-metal use cases yet but plan to
15:01:04 <apuimedo> hongbin: well, bare-metal is what we already have working
15:01:24 <apuimedo> I think we may be able to cut a kuryr-libnetwork 0.1.0 release this week or at the beginning of the next one
15:01:32 <apuimedo> baremetal only
15:01:44 <hongbin> apuimedo: icoughla ack
15:01:49 <apuimedo> 1.0.0 should be with at least one container-in-vm solution
15:02:23 <apuimedo> icoughla: I'll answer in the mailing list that we'd like a link to the PoC and make a plan for upstreaming it
15:02:33 <sigmavirus> o/ are y'all going a bit over-time?
15:02:36 <icoughla> ok
15:02:38 <apuimedo> sigmavirus: we are
15:02:41 <apuimedo> sorry
15:02:43 <apuimedo> closing now
15:02:48 <jimbaker> thanks
15:02:48 <sigmavirus> thanks :)
15:02:50 <apuimedo> Thank you all for joining
15:03:00 <apuimedo> continued discussion in #openstack-kuryr
15:03:05 <apuimedo> #endmeeting