14:02:50 #startmeeting kuryr
14:02:51 Meeting started Mon Sep 12 14:02:50 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:53 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:55 The meeting name has been set to 'kuryr'
14:03:00 o/
14:03:00 Hi and welcome to yet another Kuryr weekly meeting
14:03:05 o/
14:03:07 who's here for the show?
14:03:14 hello
14:03:15 o/
14:03:18 o/
14:03:23 o/
14:04:15 o7
14:04:45 :-)
14:05:05 #info devvesa limao icoughla garyloug lmdaly ivc_ and apuimedo present
14:05:22 first of all, I want to apologize for putting the agenda up only today
14:05:29 I'll to it earlier this week
14:06:05 https://wiki.openstack.org/wiki/Meetings/Kuryr#Meeting_September_12th.2C_2016
14:06:14 apuimedo: o/
14:06:17 alright, let's get started
14:06:22 welcome pablochacin!
14:06:36 #topic kuryr: new config options
14:07:19 #info Today the patch that marks the first openstack/kuryr release got merged https://review.openstack.org/316320
14:07:56 #Now all the authentication options for talking to Neutron reside in the [neutron] config group
14:08:01 arrgg
14:08:16 #info Now all the authentication options for talking to Neutron reside in the [neutron] config group
14:08:41 #info kuryr-libnetwork and kuryr-kubernetes can now use the new kuryr.lib.utils code to support keystone v3
14:09:23 #info the options for authentication now are loaded and registered by openstack/keystoneauth
14:09:34 Questions about the new config?
14:11:53 alright, moving on
14:12:02 #topic kuryr: release
14:12:42 #info We are changing the name of the openstack/kuryr repository to match the pypi package it produces https://review.openstack.org/368864
14:13:10 #info a patch for cutting the first kuryr-lib release has been sent https://review.openstack.org/368622
14:13:35 Now progress on adding support for the RPC/REST that Vikas is spearheading can resume in the project
14:13:41 questions about the release?
14:15:13 can people read me?
14:15:19 yup
14:15:22 yes
14:15:23 ok, just checking
14:15:28 yed :)
14:15:41 alright, moving on to the next topic
14:16:00 #topic kuryr-libnetwork: keystone v3 support
14:17:04 #info a patch has been sent to use the new kuryr-lib config facilities. It has been successfully manually tested to work with devstack and keystone v3 https://review.openstack.org/367033
14:17:47 #info after the v3 patch gets merged, those of you that have kuryr.conf tuned should update to the new keystoneauth standard options
14:18:24 an example can be found here:
14:18:27 #link https://paste.fedoraproject.org/427103/47367050/
14:18:31 will using the v2 be still an option?
14:18:45 banix: yes. It is specifically tested to work with both :-)
14:19:01 great
14:19:08 #info it will support v2password v3password v2token and v3token
14:19:30 additionally, now devstack creates a kuryr service in keystone and kuryr operates in service mode instead of admin
14:19:36 (when run under devstack)
14:20:40 the patch still needs some unit test fixes
14:20:47 and once the kuryr-lib release is cut
14:21:01 we'll move to depend on kuryr-lib 0.1.0 instead of git's version
14:21:15 any questions?
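For readers updating their kuryr.conf, a minimal sketch of what the keystoneauth-style [neutron] section discussed above can look like with the v3password plugin; the values are illustrative placeholders and the paste linked at 14:18:27 is the example actually shared in the meeting:

    [neutron]
    auth_type = v3password
    auth_url = http://127.0.0.1:5000/v3
    username = kuryr
    password = secret
    user_domain_name = Default
    project_name = service
    project_domain_name = Default

The auth_type value selects the keystoneauth plugin, so swapping in v2password, v2token or v3token (with that plugin's options) should work the same way.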
14:22:22 Looks like kuryr-libnetwork Jenkins is broken after the keystone v3 code merged in kuryr-lib
14:23:14 limao: yup, that belongs to the next topic
14:23:16 :P
14:23:22 :)
14:23:23 #topic kuryr-libnetwork testing fixes
14:23:58 #info kuryr-lib's keystone v3 patch requires kuryr-libnetwork's keystone v3 patch to be merged to be working again
14:24:13 it is possible that some additional fix may be required
14:24:31 this should be a big focus this week: get all the gates green
14:24:54 #action apuimedo, banix, vikas, irenab to get kuryr-libnetwork's v3 patch merged
14:25:10 #action apuimedo to get gates working again
14:25:29 I'd appreciate help with the gates
14:25:44 since people like banix and limao have more experience with them :P
14:25:45 +1
14:25:58 sure
14:26:01 +1
14:26:28 thanks
14:26:50 #action janonymous banix and limao to work on gates and testing fixes too
14:27:54 #info limao already sent a patch for fixing rally tests this week, so it is a continued focus https://review.openstack.org/#/c/365851/
14:29:11 #info janonymous sent a fix for devstack that is important for ubuntu based tests https://review.openstack.org/#/c/367753/
14:29:33 #action banix to review and hopefully merge https://review.openstack.org/#/c/367753/
14:30:09 I'll have to check how we do it, now that the gates fail, maybe we'll need to disable the gate's voting status momentarily
14:30:20 since there's a few patches that need to get in
14:31:19 anything else on kuryr-libnetwork's tests?
14:31:28 oh, sorry
14:32:05 #info limao verified that before the keystone v3 kuryr-lib patch, only the fullstack gate was failing https://review.openstack.org/#/c/365683/
14:32:19 limao: I think that test patch can be abandoned now
14:32:43 #topic kuryr-libnetwork: container-in-vm
14:32:52 Yeah, I will abandon it, thanks
14:33:15 this topic is about Vikas' work, but since he is not in the meeting today, let's move the update to the mailing list
14:33:33 #action vikas to update the mailing list about the container-in-vm kuryr-libnetwork patches
14:33:46 we'll tackle the ipvlan proposal in another topic
14:34:06 anybody with questions/things to share on kuryr-libnetwork?
14:35:05 Quick question
14:35:21 tonanhngo: go ahead, pleas
14:35:24 *please
14:35:43 does the current release work in VM yet, or do we need to wait for the container-in-vm work just mentioned?
14:36:06 tonanhngo: need to wait. It's the immediate focus now that we sent the release patch
14:36:18 ah ok, thanks for clarifying.
14:36:54 apuimedo: I think it should work, but have different networking than VMs
14:37:45 irenab: I believe tonanhngo inquires about having the containers in the VMs use kuryr networking
14:38:04 Yes
14:39:16 alright, let's go ahead with the next topic
14:40:09 #topic kuryr-kubernetes: asyncio style upstreaming
14:40:45 #info devvesa got the first translator layer patch merged https://review.openstack.org/#/c/363758/
14:41:28 devvesa: any update?
14:42:11 #action apuimedo to address vikas' comments to https://review.openstack.org/#/c/365600/
14:43:57 I guess that's a no ATM
14:43:59 :-)
14:44:12 #action apuimedo to upstream a first version of the CNI driver
14:44:40 #topic kuryr-kubernetes: py2/py3 eventlet style PoC
14:44:52 #chair ivc_
14:44:53 Current chairs: apuimedo ivc_
14:44:54 still wip
14:45:16 ivc_: any update? Anything we can help with?
14:45:24 apuimedo, hi
14:45:33 devvesa: hi
14:45:38 Well, not too much this week.
14:45:50 I started to watch the CNI patch
14:46:01 im gluing things together atm. but that turned out slightly more complicated than i'd expected
14:46:03 https://review.openstack.org/#/c/320545/
14:46:04 devvesa: got it, anything the rest can help with? I offered to take over the CNI patch above
14:46:24 ivc_: that's the normal course of integration software :P
14:46:51 also was exploring python-k8sclient
14:46:54 devvesa: I can take it from there then ;-)
14:46:56 No... I hope things will be faster when we'll be able to bind containers and we'll have the Pods/ports ready
14:47:07 apuimedo: thanks! Yes. I am not sure how much I'll be able to help
14:47:16 ivc_: did you check the k8s client that midokura's PoC uses for testing?
14:47:19 all yours :)
14:47:24 devvesa: I expect reviews!
14:47:36 I guarantee I'll do it
14:47:47 thanks devvesa !
14:48:16 ivc_: is there any particular thing that is causing difficulties that we could lend a hand with?
14:49:27 apuimedo not something particular. just trying to ensure active-active ha
14:50:02 ivc_: would you like to have some ha videoconf meeting?
14:50:15 ivc_: anything you would like to share as devref?
14:51:11 i'd like to, but first i need to figure out some things for myself
14:51:37 more specifically how k8s watch events are dispatched
14:52:09 you mean order and such?
14:52:15 ivc_: ^^
14:52:20 first approach was to fire-and-forget the event handler on a separate greenlet as soon as it arrived
14:52:33 but that does not help ordering/concurrency ^^
14:52:50 so right now i'm grouping events by their target
14:53:10 ivc_: let's take the discussion, if you don't mind, to #openstack-kuryr after this meeting. We have one more topic to cover, and I want more time for this ha discussion ;-)
14:53:20 sure
14:53:24 #topic ipvlan address pairs proposal
14:53:28 #chair icoughla
14:53:29 Current chairs: apuimedo icoughla ivc_
14:53:55 who has had time to check icoughla's proposal on the mailing list?
14:54:03 o/
14:54:40 #link http://lists.openstack.org/pipermail/openstack-dev/2016-September/103461.html
14:54:49 hongbin: very happy to read that
14:54:54 what do you think about it?
14:55:11 it looks good, although I haven't looked into it in detail
14:56:35 personally I would like such a deployment option to take place
14:57:11 icoughla: are you targeting kuryr-libnetwork first, kuryr-kubernetes first, only one of them, or both at the same time?
14:57:20 (they need slightly different approaches)
14:57:28 me too. it seems this proposal is able to address the credential problem (having to store neutron & rabbitmq credentials in tenant VMs)
14:57:57 kuryr-libnetwork first
14:58:10 although k8s is what we are really interested in
14:58:11 icoughla: glad to hear that
14:58:21 icoughla: better to move first on stable ground :P
14:58:30 (or quasi stable)
14:58:31 we have a working poc in kuryr-libnetwork atm
14:58:42 icoughla: can we get a link to that?
14:59:09 #info icoughla and his team have a working IPVLAN + address pairs PoC for kuryr-libnetwork container-in-vm
14:59:28 sure
14:59:40 interested to hear the feedback
14:59:51 we'll need the contributors, especially cores like irenab, banix and vikas, to weigh in
14:59:53 A question: what about the use cases of container-in-baremetal?
14:59:56 personally I welcome the effort
15:00:05 hongbin: you mean with ironic?
15:00:16 yes, or non-ironic
15:00:49 we have not prioritised bare-metal use cases yet but plan to
15:01:04 hongbin: well, bare-metal is what we already have working
15:01:24 I think we may be able to cut a kuryr-libnetwork 0.1.0 release this week or the beginning of the next one
15:01:32 baremetal only
15:01:44 apuimedo: icoughla ack
15:01:49 1.0.0 should ship with at least one container-in-vm solution
15:02:23 icoughla: I'll answer on the mailing list that we'd like a link to the PoC and make a plan for upstreaming it
15:02:33 o/ are y'all going a bit over-time?
15:02:36 ok
15:02:38 sigmavirus: we are
15:02:41 sorry
15:02:43 closing now
15:02:48 thanks
15:02:48 thanks :)
15:02:50 Thank you all for joining
15:03:00 continued discussion in #openstack-kuryr
15:03:05 #endmeeting
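For the eventlet PoC discussion at 14:52, a minimal sketch of the pattern ivc_ describes: instead of fire-and-forget handlers on separate greenlets, events are grouped by their target so that events for the same Kubernetes object are handled in arrival order while different objects are still handled concurrently. This is an illustration under eventlet only; the class and helper names are hypothetical and not the actual kuryr-kubernetes code.

    # Illustrative sketch only: per-target serialization of k8s watch events.
    import eventlet
    from eventlet.queue import Queue


    class EventDispatcher(object):
        def __init__(self, handler):
            self._handler = handler
            self._queues = {}  # target key -> Queue of pending events

        def dispatch(self, event):
            key = self._target_key(event)
            queue = self._queues.get(key)
            if queue is None:
                # First event seen for this target: give it its own queue and a
                # worker greenthread that drains the queue in arrival order.
                queue = self._queues[key] = Queue()
                eventlet.spawn(self._worker, queue)
            queue.put(event)

        def _worker(self, queue):
            # Events for one target run strictly in order; a real implementation
            # would also reap idle workers and handle handler errors.
            while True:
                self._handler(queue.get())

        @staticmethod
        def _target_key(event):
            # Group events by the object they refer to, e.g. a Pod's namespace/name.
            meta = event['object']['metadata']
            return meta.get('namespace'), meta['name']

Calling dispatch(event) from the watch loop then preserves per-object ordering while many objects are processed in parallel, which is the ordering/concurrency concern raised in the meeting.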