14:00:12 <apuimedo> #startmeeting kuryr
14:00:13 <openstack> Meeting started Mon Oct  3 14:00:12 2016 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:17 <openstack> The meeting name has been set to 'kuryr'
14:00:42 <apuimedo> Hello and welcome to the weekly Kuryr IRC meeting
14:00:48 <apuimedo> who's here today?
14:00:52 <vikasc> o/
14:00:57 <tonanhngo> o/
14:00:58 <lmdaly> o/
14:01:02 <hongbin> o/
14:01:48 <ivc_> o/
14:01:59 <apuimedo> #info vikasc tonanhngo lmdaly hongbin ivc_ and apuimedo present
14:02:04 <yedongcan> o/
14:02:04 <limao> o/
14:02:30 <apuimedo> welcome everybody
14:02:33 <apuimedo> let's get this started
14:02:44 <apuimedo> first, I want to apologize for failing to set an agenda today
14:03:12 <apuimedo> #topic kuryr-lib
14:03:48 <janonymous> o/
14:04:09 <apuimedo> #info tox coverage task was fixed and limao enabled an infra job for the coverage
14:05:00 <apuimedo> #info we abandoned the REST/RPC approach we were pursuing
14:05:53 <apuimedo> #info the patch for ipvlan/macvlan support (mostly for containers in Nova instances) is approved and ready for merging https://review.openstack.org/375864
14:06:43 <apuimedo> #action apuimedo and vikasc to check the keystone/neutron possibilities to have limited usage tokens or use neutron policy instead of the REST/RPC approach
14:07:14 <apuimedo> this last action item is to address the Magnum deployments in which it would be good to minimize what gets stored on the worker nodes
14:07:26 <apuimedo> questions? Other items?
14:07:37 <tonanhngo> Thanks for addressing this concern
14:07:59 <apuimedo> tonanhngo: we still don't have a clear cut solution though
14:08:14 <ivc_> > use neutron policy instead of the REST/RPC approach
14:08:22 <tonanhngo> It has been a major issue in other areas also, and we don't have a good solution either
14:08:34 <ivc_> i think we can outline that we are talking about kuryr-specific neutron instance here right?
14:08:48 <ivc_> s/neutron/neutron api server/
14:08:59 <apuimedo> ivc_: sure we can
14:09:52 <apuimedo> the worker nodes should only be able to perform actions on resources of a specific tenant, and only the neutron api endpoints that kuryr uses should be usable
14:09:58 <apuimedo> (from the instances)
14:10:10 <apuimedo> tonanhngo: hongbin: that should be it, right?
14:10:21 <apuimedo> also, there is the problem of credential storage, iirc
14:10:36 <tonanhngo> yes that sounds about right
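A rough sketch of what the "neutron policy" alternative could look like on a kuryr-dedicated neutron API server: tenant-owned resources stay scoped to their owner and endpoints kuryr does not need are closed off. The rules below are illustrative only, not an agreed-on policy.

    {
        "context_is_admin": "role:admin",
        "admin_or_owner": "rule:context_is_admin or tenant_id:%(tenant_id)s",

        "create_port": "rule:admin_or_owner",
        "update_port": "rule:admin_or_owner",
        "delete_port": "rule:admin_or_owner",
        "get_port": "rule:admin_or_owner",
        "get_subnet": "rule:admin_or_owner",

        "create_network": "rule:context_is_admin",
        "delete_network": "rule:context_is_admin"
    }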
14:11:00 <apuimedo> tonanhngo: for credential storage, what would be acceptable?
14:11:47 <apuimedo> I think in Austin we discussed that keystone was adding some special restricted tokens, but I have not followed the progress of that
14:11:49 <tonanhngo> generally in production, we have to ensure that users cannot use the info we store on their VM to overstep their privilege
14:12:30 <apuimedo> tonanhngo: should their own tenant token be fair game, then?
14:12:39 <ivc_> apuimedo tonanhngo, if we have a kuryr-specific instance that is only accessible from within the nova vm with containers, maybe we can just use no-auth and rely on firewall policies?
14:13:05 <tonanhngo> Yes, the approach we took so far with Kubernetes plugin is to let the user enter their own credential
14:13:20 <apuimedo> ivc_: when you say a kuryr-specific instance
14:13:34 <tonanhngo> then whatever resources they request would be within their privilege
14:13:37 <apuimedo> you mean a neutron server instance only for the kuryr deployment, is that right?
14:13:47 <ivc_> apuimedo right
14:14:00 <apuimedo> tonanhngo: that is most likely the first step we'll have to take
14:14:10 <ivc_> and probably also limited to just the tenant entitled to that server
14:14:16 <apuimedo> ivc_: this specific instance would have to go to the same DB
14:14:23 <apuimedo> as the rest of neutron, right?
14:14:34 <ivc_> yes
14:14:57 <apuimedo> well, if we can constrain it adequately in no-auth mode, that is certainly one option
14:15:17 <ivc_> the only question is if it is possible in neutron
14:15:42 <apuimedo> although we should take care that, in this case, if the containers get put on the same network as the instance (as is the case with macvlan/ipvlan), the containers themselves could perform neutron actions
14:16:42 <ivc_> we could probably have neutron api server accessible with e.g. floating ip from containers
14:16:57 <ivc_> maybe even run that api server inside the nova vm
14:17:08 <ivc_> like octavia does with haproxy in nova
14:17:09 <apuimedo> ivc_: I don't think it should be accessible
14:17:17 <apuimedo> from the containers
14:17:31 <apuimedo> and the neutron server, with access to the DB, should not run in the instance
14:17:37 <ivc_> s/from containers/from containers host vm/
14:18:47 <ivc_> why not? we can have the vm connected to multiple networks - 1 for neutron db/rabbit and other for tenant containers
14:19:21 <apuimedo> ivc_: it is owned by the tenant, I don't think tenant owned VMs should have access to the DB
14:19:37 <hongbin> yes, it is undesirable
14:19:52 <ivc_> true. agreed
14:20:07 <hongbin> vm belongs to the tenants, neutron db belongs to the neutron control plane. they should not talk to each other
14:20:20 <apuimedo> right
14:20:51 <apuimedo> I think it is sensible for now to start with having Neutron accessible and have the kuryr agents configured with the tenant token
14:20:55 <ivc_> what i was thinking is an approach similar to how octavia/lbaasv2 runs haproxy inside the vm
14:21:31 <apuimedo> hongbin: after we have this, we have to tackle hardening of access to resources in the same way for all Magnum (be it for storage or networking)
14:21:37 <ivc_> so that would be neutron-api-server as-a-service :)
14:21:45 <hongbin> apuimedo: sure
14:21:54 <apuimedo> ivc_: that VM is not managed by the tenants, is it?
14:22:07 <hongbin> apuimedo: however, the token will expire, not sure how you are going to deal with the expiration
14:22:17 <ivc_> i have not checked, but i think its not
14:22:50 <apuimedo> hongbin: true, probably then pki or user/pass
14:23:03 <hongbin> apuimedo: i see
14:23:10 <ivc_> apuimedo or cert?
14:23:17 <ivc_> oh thats pki
14:23:20 <apuimedo> :P
14:23:33 <tonanhngo> It may be worth coming to the Keystone session in Barcelona and asking them.  I have talked to their PTL but they don't have a good solution either.
14:23:49 <apuimedo> tonanhngo: when did you talk with them last?
14:23:57 <tonanhngo> about a month ago
14:24:14 <apuimedo> quite recent. Well, that saves us some work
14:24:23 <apuimedo> but yes, we should talk with them
14:24:23 <tonanhngo> he suggested a few ideas
14:24:50 <tonanhngo> including some new work on x509
14:24:53 <apuimedo> tonanhngo: could you bring them to a mailing list thread with the [magnum][keystone][kuryr] tags?
14:25:21 <tonanhngo> Sure, will do
14:25:33 <apuimedo> and I think we should invite them to the joint fish bowl session we'll have in Barcelona
14:25:46 <apuimedo> hongbin: tonanhngo: what do you think?
14:25:59 <tonanhngo> Sounds good
14:26:01 <hongbin> good idea
14:26:13 <apuimedo> cool
14:26:43 <apuimedo> #action apuimedo to invite the keystone people to the magnum/kuryr fish bowl session
14:26:59 <apuimedo> #topic kuryr-libnetwork
14:27:25 <apuimedo> #info we finished dropping netaddr in favor of ipaddress
14:28:17 <apuimedo> #info dongcan continued cleaning up the codebase after the split by dropping duplicate constants
14:28:31 <apuimedo> *yedongcan
14:29:36 <apuimedo> #info apuimedo is working on adapting controllers.py with drivers for nested/baremetal to support both the current workflow and the ipvlan/macvlan based one that lmdaly's PoC showed
14:30:00 <apuimedo> the idea is to have also a python path that you can configure in kuryr.conf
14:30:27 <apuimedo> that will determine if you use neutron ports directly, neutron trunk/subports, neutron allowed address pairs
14:30:39 <apuimedo> or third party implementations
14:31:24 <apuimedo> the interface for these implementations will probably be reduced and taken from kuryr-libnetwork
14:31:46 <apuimedo> and when we add it to kuryr-kubernetes we can refactor as necessary and move it to kuryr-lib
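A minimal sketch of the configurable python-path idea; the option name, group and driver module below are hypothetical, not the final interface:

    from oslo_config import cfg
    from oslo_utils import importutils

    CONF = cfg.CONF
    # Hypothetical option: a dotted python path selecting how ports are
    # provided (plain neutron ports, trunk/subports, allowed address
    # pairs, or a third-party implementation).
    CONF.register_opts([
        cfg.StrOpt('port_driver',
                   default='kuryr_libnetwork.port_driver.drivers.veth.Driver'),
    ], group='binding')

    # controllers.py would then just delegate to whatever got configured.
    driver = importutils.import_class(CONF.binding.port_driver)()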
14:31:50 <apuimedo> questions?
14:32:24 <ivc_> apuimedo a suggestion for drivers
14:32:43 <ivc_> maybe we can follow os-vif path and have drivers in separate packages from main kuryr
14:33:16 <ivc_> like in https://github.com/openstack/os-vif they have os_vif pkg and vif_plug_* packages
14:33:39 <apuimedo> ivc_: good point, when the time comes in the plan above to move it to kuryr-lib, we can instead put it in a separate package
14:33:57 <apuimedo> let's not forget that option and bring it to the mailing list when the time comes
14:34:05 <ivc_> it can still be in kuryr-lib repo though
14:34:18 <apuimedo> ah, got it
14:35:24 <apuimedo> the problem is that it is complicated, for infra, to generate different distributable python packages from a single repo
14:35:50 <apuimedo> well, anyway, let's take that to the ml in a little while
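For reference, os-vif's pattern is stevedore entry points, which lets drivers ship as separately distributable python packages (possibly still living in the kuryr-lib repo). The namespace and names below are hypothetical:

    # In the driver package's setup.cfg (hypothetical namespace):
    #
    #   [entry_points]
    #   kuryr.port_drivers =
    #       ipvlan = kuryr_driver_ipvlan.driver:IpvlanDriver

    from stevedore import driver as stv_driver

    # kuryr would then load the configured driver by name, much like
    # os-vif resolves its vif_plug_* plugins from its entry-point namespace.
    mgr = stv_driver.DriverManager(namespace='kuryr.port_drivers',
                                   name='ipvlan',
                                   invoke_on_load=True)
    ipvlan_driver = mgr.driver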
14:35:57 <apuimedo> #topic kuryr-kubernetes
14:37:07 <apuimedo> #info ivc's got k8s/neutron client patch approved https://review.openstack.org/#/c/376042/
14:37:16 <apuimedo> hopefully it will be merged soon
14:37:31 <apuimedo> we will be removing the python3-only parts from the repo
14:37:46 <apuimedo> #info we'll have kuryr-kubernetes py2 and py3 compatible
14:38:12 <ivc_> i'm hoping to lift 'wip's from (some of) my other patches this week
14:38:56 <apuimedo> ivc_: vikasc: (whoever wants to join) I'd like to propose a video meeting on Wednesday to discuss the effort
14:39:08 <ivc_> atm at port/pod processing integration with os-vif models and event pipeline cleanup
14:39:19 <ivc_> apuimedo sure
14:39:25 <apuimedo> thanks
14:39:25 <vikasc> apuimedo, sure
14:39:30 <apuimedo> I'll send an invitation
14:40:47 <apuimedo> we have to keep in mind to try to add fullstack tests as soon as possible, i.e., as soon as we have a patch that gets ports bound, we should have the fullstack test that checks that the ports are bound and ping works
14:41:17 <ivc_> apuimedo are you working on cni driver or should i take care of it?
14:41:27 <apuimedo> ivc_: I'm working on the kuryr-lib part of it
14:41:48 <apuimedo> you should import parts of it for kuryr-kubernetes if it needs to talk to the local watcher
14:41:58 <apuimedo> as I seem to recall you proposed
14:42:18 <ivc_> yup
14:42:48 <ivc_> is the kuryr-lib part under review?
14:43:24 <apuimedo> ivc_: no, I'll try to submit tomorrow
14:43:43 <ivc_> ah ok. np
14:43:54 <apuimedo> sorry about the delay
14:44:03 <apuimedo> #action apuimedo to send kuryr-lib cni
14:44:14 <apuimedo> anything else?
14:44:41 <ivc_> while we are at it, maybe we can use os-vif models in kuryr-lib binding drivers
14:45:04 <apuimedo> ivc_: do we have a blueprint for that?
14:45:10 <ivc_> nope
14:45:20 <apuimedo> we should do that
14:45:41 <apuimedo> and then I guess, since it will be an internal detail, we'll put that after cni
14:45:51 <ivc_> its really just a drop-in replacement for neutron-client dict for port/subnet
14:45:53 <apuimedo> cni will expect os-vif ovo
14:46:09 <apuimedo> and later we can upgrade kuryr-lib bindings
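A minimal sketch of the drop-in ivc_ describes, building an os-vif VIF object from the neutron port/subnet dicts; the field mapping here is illustrative and would be settled in the blueprint:

    from os_vif.objects import fixed_ip as ip_obj
    from os_vif.objects import network as net_obj
    from os_vif.objects import subnet as subnet_obj
    from os_vif.objects import vif as vif_obj

    def neutron_to_osvif(port, subnet):
        """Translate neutron-client dicts into an os-vif VIF object."""
        fixed_ips = [ip_obj.FixedIP(address=ip['ip_address'])
                     for ip in port['fixed_ips']]
        subnets = subnet_obj.SubnetList(objects=[
            subnet_obj.Subnet(cidr=subnet['cidr'],
                              gateway=subnet['gateway_ip'],
                              ips=ip_obj.FixedIPList(objects=fixed_ips))])
        network = net_obj.Network(id=port['network_id'], subnets=subnets)
        return vif_obj.VIFOpenVSwitch(id=port['id'],
                                      address=port['mac_address'],
                                      network=network)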
14:46:35 <apuimedo> #topic general
14:46:38 <hongbin> apuimedo: i have a comment for the k8s client implementation
14:46:47 <apuimedo> hongbin: I just saw it on the phone
14:46:50 <apuimedo> :-)
14:46:57 <hongbin> :)
14:47:16 <apuimedo> hongbin: I think ivc_ is not using python-k8sclient because it lacks watching support
14:47:22 <apuimedo> ivc_: is that right?
14:47:25 <ivc_> yes
14:47:29 <hongbin> i see
14:47:54 <hongbin> this is a good suggestion for dims
14:48:10 <apuimedo> oh, it seems there is something for watch there https://github.com/openstack/python-k8sclient/search?utf8=✓&q=watch
14:48:21 <ivc_> it does not work :)
14:48:27 <dims> apuimedo : unless there's a test, there is no guarantee that it works
14:48:43 <apuimedo> :-)
14:48:47 <ivc_> api is there as it is generated from swagger but the RESTClient does not support it
14:49:15 <dims> right
14:49:25 <ivc_> RESTClient is blocking and expects to http GET the result as a whole
14:49:28 <apuimedo> alright then. let's first get this with the current approach and let's try to work with dims on getting it working on openstack/python-k8sclient
14:49:31 <ivc_> but we need a streaming result
14:50:08 <apuimedo> yeah
14:50:59 * apuimedo not a big friend of code generation, but let's see what we can do
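For context, the streaming watch that python-k8sclient's blocking RESTClient cannot do is essentially the following: keep the HTTP connection open and decode one JSON event per line. The URL, resource and lack of auth below are illustrative assumptions.

    import json

    import requests

    K8S_API = 'http://127.0.0.1:8080'  # assumed local apiserver, no auth

    resp = requests.get(K8S_API + '/api/v1/pods',
                        params={'watch': 'true'},
                        stream=True)
    for line in resp.iter_lines():
        if not line:
            continue
        event = json.loads(line)
        print(event['type'], event['object']['metadata']['name'])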
14:51:19 <apuimedo> any other topic?
14:52:07 <apuimedo> #action apuimedo to update about the proposed work sessions (the fish bowl will be for the joint magnum/kuryr)
14:52:11 <lmdaly> where will the driver features be in controllers? They will each have a create_or_update_port function?
14:52:35 <lmdaly> or total change to controllers as it is now?
14:52:48 <apuimedo> lmdaly: they will be in kuryr_libnetwork/drivers/
14:53:05 <apuimedo> and controllers will be slimmed down a bit
14:53:19 <apuimedo> and load the configured driver
14:53:46 <apuimedo> lmdaly: unfortunately I didn't think about all the names of the interface yet
14:54:39 <lmdaly> okay
14:55:05 <apuimedo> probably quite limited to ports at the start
14:55:16 <apuimedo> and we can add networks if anybody shows a use case for that
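To make the shape of it concrete, the driver interface could end up looking roughly like the sketch below; the method names (including create_or_update_port) and signatures are only a guess, since as apuimedo notes the interface has not been settled yet.

    import abc

    import six

    @six.add_metaclass(abc.ABCMeta)
    class BasePortDriver(object):
        """Hypothetical slimmed-down interface loaded by controllers.py."""

        @abc.abstractmethod
        def create_or_update_port(self, endpoint_id, network_id, fixed_ips):
            """Return a neutron port (dict) bound for the given endpoint."""

        @abc.abstractmethod
        def delete_port(self, port):
            """Release whatever the driver allocated for the port."""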
14:55:39 <lmdaly> are you planning to pass the nested port for binding from the drivers?
14:56:01 <lmdaly> ah okay ports for now
14:56:32 <apuimedo> lmdaly: how about I start with the current driver and you add the one for ipvlan?
14:57:05 <lmdaly> apuimedo, sounds good
14:57:07 <apuimedo> cool
14:57:09 <apuimedo> :-)
14:57:26 <apuimedo> thanks lmdaly
14:57:41 <lmdaly> you have any estimation on when you will have the current driver done?
14:58:01 <apuimedo> I'm counting on Wednesday
14:58:03 <lmdaly> just for reference
14:58:13 <lmdaly> okay cool, thanks :)
14:58:38 <apuimedo> good
14:58:54 <apuimedo> alright, time to close the meeting. Today we'll be on time for a change!
14:58:59 <apuimedo> thank you all for joining
14:59:02 <apuimedo> #endmeeting