14:00:12 #startmeeting kuryr
14:00:13 Meeting started Mon Oct 3 14:00:12 2016 UTC and is due to finish in 60 minutes. The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:17 The meeting name has been set to 'kuryr'
14:00:42 Hello and welcome to the weekly Kuryr IRC meeting
14:00:48 who's here today?
14:00:52 o/
14:00:57 o/
14:00:58 o/
14:01:02 o/
14:01:48 o/
14:01:59 #info vikasc tonanhngo lmdaly hongbin ivc_ and apuimedo present
14:02:04 o/
14:02:04 o/
14:02:30 welcome everybody
14:02:33 let's get this started
14:02:44 first, I want to apologize for failing to set an agenda today
14:03:12 #topic kuryr-lib
14:03:48 o/
14:04:09 #info tox coverage task was fixed and limao enabled an infra job for the coverage
14:05:00 #info we abandoned the rest/rpc approach we were pursuing
14:05:53 #info the patch for ipvlan/macvlan support (mostly for containers in Nova instances) is approved and ready for merging https://review.openstack.org/375864
14:06:43 #action apuimedo and vikasc to check the keystone/neutron possibilities to have limited-usage tokens or use neutron policy instead of the REST/RPC approach
14:07:14 this last action item is to address the Magnum deployments in which it would be good to minimize what gets stored on the worker nodes
14:07:26 questions? Other items?
14:07:37 Thanks for addressing this concern
14:07:59 tonanhngo: we still don't have a clear-cut solution though
14:08:14 > use neutron policy instead of the REST/RPC approach
14:08:22 It has been a major issue in other areas also, and we don't have a good solution either
14:08:34 i think we can outline that we are talking about a kuryr-specific neutron instance here, right?
14:08:48 s/neutron/neutron api server/
14:08:59 ivc_: sure we can
14:09:52 the worker nodes should only be able to perform actions with resources of a specific tenant, and only the neutron api endpoints that kuryr uses should be usable
14:09:58 (from the instances)
14:10:10 tonanhngo: hongbin: that should be it, right?
14:10:21 also, there is the problem of credential storage, iirc
14:10:36 yes that sounds about right
14:11:00 tonanhngo: for credential storage, what would be acceptable?
14:11:47 I think in Austin we discussed that keystone was adding some special restricted tokens, but I have not followed the progress of that
14:11:49 generally in production, we have to ensure that users cannot take the info we store on their VM to overstep their privilege
14:12:30 tonanhngo: should their own tenant token be fair game, then?
14:12:39 apuimedo tonanhngo, if we have a kuryr-specific instance that is only accessible from within the nova vm with containers, maybe we can just use no-auth and rely on firewall policies?
14:13:05 Yes, the approach we took so far with the Kubernetes plugin is to let the user enter their own credential
14:13:20 ivc_: when you say a kuryr-specific instance
14:13:34 then whatever resources they request would be within their privilege
14:13:37 you mean a neutron server instance only for the kuryr deployment, is that right?
14:13:47 apuimedo right
14:14:00 tonanhngo: that is most likely the first step we'll have to take
14:14:10 and probably also limited to just the tenant entitled to that server
14:14:16 ivc_: this specific instance would have to go to the same DB
14:14:23 as the rest of neutron, right?
14:14:34 yes
14:14:57 well, if we can constrain it adequately in no-auth mode, that is certainly one option
14:15:17 the only question is if it is possible in neutron
14:15:42 although we should take care that, in this case, if the containers get put on the same network as the instance (as is the case with macvlan/ipvlan), the containers themselves could perform neutron actions
14:16:42 we could probably have the neutron api server accessible with e.g. a floating ip from containers
14:16:57 maybe even run that api server inside the nova vm
14:17:08 like octavia does with haproxy in nova
14:17:09 ivc_: I don't think it should be accessible
14:17:17 from the containers
14:17:31 and the neutron server, with access to the DB, should not run in the instance
14:17:37 s/from containers/from containers host vm/
14:18:47 why not? we can have the vm connected to multiple networks - one for neutron db/rabbit and another for tenant containers
14:19:21 ivc_: it is owned by the tenant, I don't think tenant-owned VMs should have access to the DB
14:19:37 yes, it is undesirable
14:19:52 true. agreed
14:20:07 the vm belongs to the tenants, the neutron db belongs to the neutron control plane. they should not talk to each other
14:20:20 right
14:20:51 I think it is sensible for now to start with having Neutron accessible and have the kuryr agents configured with the tenant token
14:20:55 what i was thinking is an approach similar to how octavia/lbaasv2 runs haproxy inside the vm
14:21:31 hongbin: after we have this, we have to tackle hardening of access to resources in the same way for all of Magnum (be it for storage or networking)
14:21:37 so that would be neutron-api-server as-a-service :)
14:21:45 apuimedo: sure
14:21:54 ivc_: that VM is not managed by the tenants, is it?
14:22:07 apuimedo: however, the token will expire, not sure how you are going to deal with the expiration
14:22:17 i have not checked, but i think it's not
14:22:50 hongbin: true, probably then pki or user/pass
14:23:03 apuimedo: i see
14:23:10 apuimedo or cert?
14:23:17 oh that's pki
14:23:20 :P
14:23:33 It may be worth coming to the Keystone session in Barcelona and asking them. I have talked to their PTL but they don't have a good solution either.
14:23:49 tonanhngo: when did you talk with them last?
14:23:57 about a month ago
14:24:14 quite recent. Well, that saves us some work
14:24:23 but yes, we should talk with them
14:24:23 he suggested a few ideas
14:24:50 including some new work on x509
14:24:53 tonanhngo: could you bring them to a mailing list thread with the [magnum][keystone][kuryr] tags?
14:25:21 Sure, will do
14:25:33 and I think we should invite them to the joint fish bowl session we'll have in Barcelona
14:25:46 hongbin: tonanhngo: what do you think?
14:25:59 Sounds good
14:26:01 good idea
14:26:13 cool
14:26:43 #action apuimedo to invite the keystone people to the magnum/kuryr fish bowl session
14:26:59 #topic kuryr-libnetwork
14:27:25 #info we finished dropping netaddr in favor of ipaddress
14:28:17 #info dongcan continued cleaning up the codebase after the split by dropping duplicate constants
14:28:31 *yedongcan
14:29:36 #info apuimedo is working on adapting controllers.py with drivers for nested/baremetal to support both the current workflow and the ipvlan/macvlan-based one that lmdaly's PoC showed
14:30:00 the idea is to also have a python path that you can configure in kuryr.conf
14:30:27 that will determine whether you use neutron ports directly, neutron trunk/subports, neutron allowed address pairs
14:30:39 or third-party implementations
14:31:24 the interface for these implementations will probably be reduced and taken from kuryr-libnetwork
14:31:46 and when we add it to kuryr-kubernetes we can refactor as necessary and move it to kuryr-lib
14:31:50 questions?
14:32:24 apuimedo a suggestion for drivers
14:32:43 maybe we can follow the os-vif path and have drivers in separate packages from main kuryr
14:33:16 like in https://github.com/openstack/os-vif they have the os_vif pkg and vif_plug_* packages
14:33:39 ivc_: good point, when the time comes in the plan above to move it to kuryr-lib, we can instead put it in a separate package
14:33:57 let's not forget that option and bring it to the mailing list when the time comes
14:34:05 it can still be in the kuryr-lib repo though
14:34:18 ah, got it
14:35:24 the problem is that it is complicated, for infra, to generate different distributable python packages from a single repo
14:35:50 well, anyway, let's take that to the ml in a little while
14:35:57 #topic kuryr-kubernetes
14:37:07 #info ivc_'s k8s/neutron client patch got approved https://review.openstack.org/#/c/376042/
14:37:16 hopefully it will be merged soon
14:37:31 we will be removing the python3-only parts from the repo
14:37:46 #info we'll have kuryr-kubernetes py2 and py3 compatible
14:38:12 i'm hoping to lift the 'wip's from (some of) my other patches this week
14:38:56 ivc_: vikasc: (whoever wants to join) I'd like to propose a video meeting on Wednesday to discuss the effort
14:39:08 atm I'm at port/pod processing integration with os-vif models and event pipeline cleanup
14:39:19 apuimedo sure
14:39:25 thanks
14:39:25 apuimedo, sure
14:39:30 I'll send an invitation
14:40:47 we have to keep in mind to try to add fullstack tests as soon as possible, i.e., as soon as we have a patch that gets ports bound, we should have the fullstack test that checks that the ports are bound and ping works
14:41:17 apuimedo are you working on the cni driver or should i take care of it?
14:41:27 ivc_: I'm working on the kuryr-lib part of it
14:41:48 you should import parts of it for kuryr-kubernetes if it needs to talk to the local watcher
14:41:58 as I seem to recall you proposed
14:42:18 yup
14:42:48 is the kuryr-lib part under review?
14:43:24 ivc_: no, I'll try to submit tomorrow
14:43:43 ah ok. np
14:43:54 sorry about the delay
14:44:03 #action apuimedo to send the kuryr-lib cni part for review
14:44:14 anything else?
14:44:41 while we are at it, maybe we can use os-vif models in kuryr-lib binding drivers
14:45:04 ivc_: do we have a blueprint for that?
14:45:10 nope
14:45:20 we should do that
14:45:41 and then I guess, since it will be an internal detail, we'll put that after cni
14:45:51 it's really just a drop-in replacement for the neutron-client dict for port/subnet
14:45:53 cni will expect os-vif ovo
14:46:09 and later we can upgrade kuryr-lib bindings
14:46:35 #topic general
14:46:38 apuimedo: i have a comment for the k8s client implementation
14:46:47 hongbin: I just saw it on the phone
14:46:50 :-)
14:46:57 :)
14:47:16 hongbin: I think ivc_'s reason for not using python-k8sclient was that it lacks watching support
14:47:22 ivc_: is that right?
14:47:25 yes
14:47:29 i see
14:47:54 this is a good suggestion for dims
14:48:10 oh, it seems there is something for watch there https://github.com/openstack/python-k8sclient/search?utf8=✓&q=watch
14:48:21 it does not work :)
14:48:27 apuimedo: unless there's a test, there is no guarantee that it works
14:48:43 :-)
14:48:47 the api is there as it is generated from swagger, but the RESTClient does not support it
14:49:15 right
14:49:25 RESTClient is blocking and expects to http GET the result as a whole
14:49:28 alright then. let's first get this with the current approach and let's try to work with dims on getting it working on openstack/python-k8sclient
14:49:31 but we need a streaming result
14:50:08 yeah
14:50:59 * apuimedo not a big fan of code generation, but let's see what we can do
14:51:19 any other topic?
14:52:07 #action apuimedo to update about the proposed work sessions (the fish bowl will be for the joint magnum/kuryr)
14:52:11 where will the driver features be in controllers? will they each have a create_or_update_port function?
14:52:35 or a total change to controllers as it is now?
14:52:48 lmdaly: they will be in kuryr_libnetwork/drivers/
14:53:05 and controllers will be slimmed down a bit
14:53:19 and load the configured driver
14:53:46 lmdaly: unfortunately I didn't think about all the names of the interface yet
14:54:39 okay
14:55:05 probably quite limited to ports at the start
14:55:16 and we can add networks if anybody shows a use case for that
14:55:39 are you planning to pass the nested port for binding from the drivers?
14:56:01 ah okay, ports for now
14:56:32 lmdaly: how about I start with the current driver and you add the one for ipvlan?
14:57:05 apuimedo, sounds good
14:57:07 cool
14:57:09 :-)
14:57:26 thanks lmdaly
14:57:41 do you have any estimate of when you will have the current driver done?
14:58:01 I'm counting on Wednesday
14:58:03 just for reference
14:58:13 okay cool, thanks :)
14:58:38 good
14:58:54 alright, time to close the meeting. Today we'll be on time for a change!
14:58:59 thank you all for joining
14:59:02 #endmeeting
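
Regarding the watch limitation discussed above: python-k8sclient's generated RESTClient performs a single blocking HTTP GET and returns the whole body, while a Kubernetes watch keeps the connection open and streams one JSON event per line, so the result has to be consumed incrementally. The snippet below is only a minimal sketch of that streaming pattern, assuming the requests library and an unauthenticated API server on localhost:8080; it is not the kuryr-kubernetes client.

import json

import requests

# Hypothetical endpoint; a real deployment would use the configured
# Kubernetes API address and credentials.
K8S_API = 'http://localhost:8080/api/v1'


def watch_pods(namespace='default'):
    """Yield (event_type, pod) tuples from a Kubernetes watch."""
    # '?watch=true' keeps the HTTP connection open; events arrive as one
    # JSON document per line, so the response is read as a stream rather
    # than as a single GET result.
    url = '%s/namespaces/%s/pods?watch=true' % (K8S_API, namespace)
    resp = requests.get(url, stream=True)
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue
        event = json.loads(line.decode('utf-8'))
        yield event['type'], event['object']


if __name__ == '__main__':
    for event_type, pod in watch_pods():
        print(event_type, pod['metadata']['name'])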
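
On the kuryr-libnetwork driver plan: the meeting only states that a python path configured in kuryr.conf will select the port driver (plain Neutron ports, trunk/subports, allowed address pairs, or a third-party implementation) and that the interface names were not decided yet. Below is a sketch of how such an option and loader could look; the option name, default path, and class names are entirely hypothetical, and create_or_update_port is taken from lmdaly's question rather than from a settled interface.

from oslo_config import cfg
from oslo_utils import importutils

# Hypothetical option; the real kuryr.conf option name and default
# driver path were not decided in this meeting.
port_driver_opt = cfg.StrOpt(
    'port_driver',
    default='kuryr_libnetwork.drivers.veth.VethDriver',
    help='Python path of the driver that creates and binds ports: '
         'plain Neutron ports, trunk subports, allowed address pairs, '
         'or a third-party implementation.')

CONF = cfg.CONF
CONF.register_opts([port_driver_opt], group='libnetwork')


class BasePortDriver(object):
    """Sketch of the interface the slimmed-down controllers would call."""

    def create_or_update_port(self, network, subnets, endpoint_id):
        raise NotImplementedError()


def get_port_driver():
    # controllers.py would load whatever class kuryr.conf points at, e.g.
    #   [libnetwork]
    #   port_driver = my_vendor.kuryr.drivers.IpvlanDriver
    driver_cls = importutils.import_class(CONF.libnetwork.port_driver)
    return driver_cls()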