14:00:47 <apuimedo> #startmeeting kuryr
14:00:48 <openstack> Meeting started Mon Sep 26 14:00:47 2016 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:49 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:51 <openstack> The meeting name has been set to 'kuryr'
14:01:04 <apuimedo> Hello and welcome all to another kuryr meeting
14:01:08 <vikasc> o/
14:01:14 <apuimedo> who's here for the meeting?
14:01:18 <limao_> o/
14:01:20 <lmdaly> o/
14:01:42 <apuimedo> irenab: banix: are you in there?
14:01:54 <banix> o/
14:02:11 <yedongcan> o/
14:02:20 <tonanhngo> o/
14:02:35 <apuimedo> #info vikasc limao_ lmdaly banix yedongcan tonanhngo apuimedo present
14:02:43 <apuimedo> Thank you all for joining today!
14:02:58 <apuimedo> #topic kuryr: rest/rpc
14:03:17 <vikasc> i am sorry, could not update the patches.
14:03:27 <vikasc> targeting this week
14:03:35 <apuimedo> #info A deployment configuration patch from vikasc got merged https://review.openstack.org/362023
14:03:49 <apuimedo> vikasc: no worries, I just cherry-picked what was ready to be merged
14:03:57 <mchiappero> o/
14:04:00 <vikasc> apuimedo, thanks
14:04:03 <apuimedo> looking forward to review new versions of the other ones
14:04:23 <apuimedo> #action vikasc to rework the other patches
14:04:32 <apuimedo> #link https://review.openstack.org/362023
14:04:33 <ivc_> o/
14:04:42 <apuimedo> #link https://review.openstack.org/361993
14:04:49 <apuimedo> welcome mchiappero and ivc_
14:05:13 <apuimedo> vikasc: is there anything about the rest/rpc you want to bring up?
14:05:41 <vikasc> apuimedo, nothing specific.
14:06:11 <apuimedo> reminder to others: this is about having the kuryr daemons that run in instances communicate with a kuryr server outside of the nova instances to perform the neutron/keystone actions
14:06:16 <apuimedo> vikasc: ok
14:06:30 <apuimedo> #topic kuryr: port binding
14:07:28 <apuimedo> #info lmdaly contributed a PoC patch for ipvlan based kuryr networking in the mailing list http://lists.openstack.org/pipermail/openstack-dev/2016-September/104166.html
14:08:14 <apuimedo> #info apuimedo took the ipvlan part of it, added macvlan too and submitted a WiP patch against openstack/kuryr https://review.openstack.org/375864
14:08:25 <apuimedo> lmdaly: did you have time to take a look at the WIP patch?
14:08:51 <lmdaly> yeah I had a quick look - not too in depth look yet
14:08:52 <apuimedo> The idea is that if we add the ipvlan/macvlan code to kuryr-lib we'll be able to leverage it for both kuryr-libnetwork and kuryr-kubernetes
14:09:21 <apuimedo> so I took the liberty of doing that with part of what I saw on your patch
14:09:33 <apuimedo> I wonder if it would be usable for Ironic
14:09:39 * apuimedo has no idea, never tried ironic
14:10:04 <apuimedo> lmdaly: quick question, why do you set the L2_Mode for ipvlan instead of the L3_mode
14:10:25 <apuimedo> I am not familiar with the distinction between the modes in the kernel driver
14:12:14 <apuimedo> well, we can get back to it in a moment :-)
14:12:15 <mchiappero> L2 should allow broadcast frames to be received by the containers
14:12:41 <apuimedo> oh, that's interesting
14:12:49 <mchiappero> I can't tell about Ironic, but I guess it might work (to be verified)
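As mchiappero points out, L2 mode is what lets the container receive broadcast frames. A rough sketch of how such an ipvlan slave might be created with pyroute2 follows; the device names are illustrative, and the mode constants mirror the kernel's `enum ipvlan_mode` (verify against your kernel headers before relying on them):

```python
# Hypothetical sketch (not the PoC patch): building the parameters for an
# ipvlan slave device via pyroute2.  Mode values follow the kernel's
# enum ipvlan_mode; treat them as an assumption to verify.
IPVLAN_MODE_L2 = 0  # slaves receive broadcast/multicast (ARP, DHCP work)
IPVLAN_MODE_L3 = 1  # L3 processing only; no broadcast delivery to slaves


def ipvlan_link_request(ifname, parent_index, mode=IPVLAN_MODE_L2):
    """Build the kwargs that IPRoute.link('add', ...) would take."""
    return {
        "ifname": ifname,
        "kind": "ipvlan",
        "link": parent_index,   # ifindex of the instance's parent NIC
        "ipvlan_mode": mode,
    }


# Actually creating the device needs root privileges, roughly:
#   from pyroute2 import IPRoute
#   with IPRoute() as ipr:
#       idx = ipr.link_lookup(ifname="eth0")[0]
#       ipr.link("add", **ipvlan_link_request("ipvl0", idx))
print(ipvlan_link_request("ipvl0", 2))
```

Defaulting to L2 mode matches the reasoning above: containers keep working broadcast semantics on the instance's network.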
14:13:02 <apuimedo> #topic kuryr-libnetwork: dropping netaddr
14:13:58 <apuimedo> #info we are dropping netaddr as ipaddress is part of the core standard library in python3 and it has a backport in python2. The change has already landed in kuryr-lib and it is under review for kuryr-libnetwork. Please use ipaddress in new code
14:15:02 <apuimedo> if there is any question on that, let me know. I'm now verifying its operation
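For reference, `ipaddress` covers the common netaddr uses in the codebase (parsing, membership tests, host iteration). A minimal example; note the unicode literals, which the python2 backport requires:

```python
import ipaddress

# stdlib in python3; "pip install ipaddress" provides the python2 backport.
# The backport insists on unicode input, hence the u"" prefixes.
subnet = ipaddress.ip_network(u"10.0.0.0/29")
addr = ipaddress.ip_address(u"10.0.0.3")

assert addr in subnet

# hosts() yields usable addresses, excluding network and broadcast.
hosts = [str(h) for h in subnet.hosts()]
print(hosts)
# → ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.4', '10.0.0.5', '10.0.0.6']
```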
14:16:06 <apuimedo> #topic kuryr-libnetwork: container-in-vm
14:17:00 <apuimedo> #info Part of the PoC lmdaly submitted should be incorporated into the kuryr-libnetwork codebase http://lists.openstack.org/pipermail/openstack-dev/2016-September/104166.html
14:17:28 <apuimedo> #action apuimedo to find a way to pass the instance neutron port + the newly allocated IP for port-binding to happen
14:18:23 <apuimedo> lmdaly: limao_: does updating the neutron port with the extra addresses make the new addresses count as allocated, if you ask Neutron IPAM?
14:18:48 <limao_> no
14:19:06 <lmdaly> +1
14:19:53 <lmdaly> you should have the new ip address in create_or_update_port so should be able to pass to binding from there?
14:19:54 <limao_> :) in my mind, you have to create port to allocate ip
14:19:57 <apuimedo> alright, so in that case, I guess, in the patch to kuryr-libnetwork, we should create a Neutron port which we'll not really bind, and pass it as nested_neutron_port
14:20:05 <apuimedo> to kuryr-lib port binding method
14:20:14 <limao_> Yes, +1
14:20:41 <mchiappero> would it be possible?
14:20:47 <pablochacin> apuimedo: is this somehow related to the idea I proposed you the other day regarding enabling plugins for the CNIDriver?
14:21:04 <ivc_> does that mean we'll never see that port as 'ACTIVE' in neutron?
14:21:04 <apuimedo> pablochacin: no, no, we'll get to that later :P
14:21:23 <apuimedo> ivc_: that's the unfortunate side effect, yes
14:21:45 <apuimedo> ivc_: limao_ mchiappero lmdaly: is there a way to interface directly with Neutron's IPAM?
14:21:56 <limao_> Yes, it will be never ACTIVE, because we never "bind" this port in neutron
14:22:01 <ivc_> so we are using neutron port but neutron does not know about it ... can't say i like that idea
14:22:17 <apuimedo> IMHO it could be argued with Neutron folks that updating allowed_address_pairs could warrant an IPAM update on the neutron side
14:22:32 <lmdaly> briefly investigated but found that creating a port was the only way (that I could find) to reserve IP
14:22:58 <apuimedo> so how about we do two things in parallel:
14:23:31 <ivc_> apuimedo that would be a much cleaner approach imho. also are you sure that security groups / iptables work correctly?
14:23:42 <apuimedo> a) in kuryr-libnetwork for now we Create the ipam reservation port and pass it to kuryr-lib's port binding. Then update the instance port allowed address pairs
14:24:00 <apuimedo> b) Submit to Neutron the request to update the IPAM with allowed address pairs
14:25:04 <apuimedo> ivc_: IIRC, allowed address pairs cleans the iptables that would kill the communication
14:25:11 <apuimedo> s/cleans/tweaks/
14:25:39 <apuimedo> ivc_: security groups wise it won't matter, because all communication is allowed within a security group
14:25:53 <vikasc> apuimedo, how 'b' would be done?
14:26:18 <apuimedo> and since the instance port and the reservation port have the same security group, it should probably work
14:26:32 <apuimedo> vikasc: I'll start by asking some neutron folks if they think the idea is crazy
14:26:38 <apuimedo> then I'll update the ml
14:26:40 <lmdaly> There is also an issue with updating the allowed-address-pairs: the longer the list of IPs, the longer it takes to update the port - it would be much easier if the list could be appended to when updating the instance port
14:26:46 <apuimedo> #action apuimedo to update the mailing list
14:26:54 <vikasc> apuimedo, got it, thanks
14:27:17 <limao_> I believe b) will not be accepted by neutron, in my mind, as allowed address pairs is to add exceptions for anti-ip-spoofing...
14:27:22 <apuimedo> lmdaly: that's why I want to propose that updating the instance port triggers the ipam reservation
14:27:49 <apuimedo> limao_: I was hoping for some flag
14:28:07 <limao_> OK, let's have a try ~
14:28:10 <apuimedo> lmdaly: as I agree that paying iptables price plus an extra roundtrip is not a good price
14:29:04 <limao_> Is it possible to have option 3), disable port security when creating the network
14:29:23 <apuimedo> limao_: I'm not sure that would be acceptable in the Magnum context
14:29:36 <mchiappero> also, the 10-IP limit is a bit annoying, unless it can be overridden programmatically
14:29:39 <hongbin> o/
14:29:58 <apuimedo> mchiappero: I think it is reasonable to ask for configuration changes
14:30:26 <mchiappero> apuimedo: but I presume the limit is there for a reason
14:31:00 <limao_> I'm not sure how to use security group since all the containers on the vms will share the vm port security group
14:31:01 <apuimedo> mchiappero: I can only imagine it is to prevent abuse
14:31:17 <mchiappero> apuimedo: overriding is usually not great, and can be confusing when troubleshooting, but in this use case it's definitely too low
14:31:52 <apuimedo> limao_: this ipvlan proposal assumes all the containers are on the same network as the instance, so it feels right that they implicitly receive the same security group policing as the instance port
14:32:03 <apuimedo> mchiappero: agreed
14:32:05 <mchiappero> apuimedo: it would be interesting to hear from the neutron folks
14:32:38 <hongbin> limao_: I guess it is OK, since security group is openstack-specific. Docker users won't care about the security group too much
14:32:43 <apuimedo> mchiappero: in the future, ocata/pike, I hope we can use the "vlan" aware VMs with IP/mac segmentation instead of Vlans
14:32:58 <apuimedo> which will allow us to have containers in different nets than the instances
14:33:28 <apuimedo> but I feel like this proposal from lmdaly's team is a good short term compromise while we get there
14:33:41 <vikasc> limao_, sorry, i think i missed context somewhere. Why do we want to disable security groups?
14:33:41 <apuimedo> and it will provide us with a lot of information about the path forward
14:34:24 <apuimedo> hongbin: limao_: you'd have to disable port security for the instance port too, I'm afraid
14:34:29 <apuimedo> not just the containers
14:35:06 <limao_> vikasc:  port security is not only security group, but also anti-ip-spoofing , anti-arp-spoofing, anti-mac-spoofing
14:35:28 <hongbin> apuimedo: how about to have all containers to share a security group
14:35:30 <limao_> allowed address pair is to add exception for anti-ip-spoofing
14:35:40 <apuimedo> hongbin: that's what the proposal says
14:35:46 <hongbin> apuimedo: ok
14:35:58 <apuimedo> hongbin: as I said, I think it is a good first step
14:36:13 <apuimedo> but it definitely has its caveats as you can see
14:36:23 <limao_> apuimedo: port-security can be done at port level and at network level I believe
14:36:46 <apuimedo> limao_: can you elaborate on the network level?
14:36:58 <apuimedo> (we should take this to #openstack-kuryr after the meeting)
14:37:05 <apuimedo> (other topics to tackle)
14:37:10 <vikasc> limao_, is it like undoing the effects of creating neutron port?
14:37:56 <limao_> Let's discuss after meeting vikasc, apuimedo
14:38:01 <apuimedo> agreed
14:38:04 <apuimedo> thanks limao_
14:38:07 <vikasc> limao_, sure +1
14:38:40 <apuimedo> #action discuss port creation/security group in container-in-vm at #openstack-kuryr
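Steps (a) and (b) that apuimedo outlined could be sketched as python-neutronclient request bodies. All identifiers here (the `device_owner` tag, network/port ids) are illustrative, not taken from an actual patch:

```python
# Sketch of the container-in-vm flow discussed above: (a) create an
# unbound "reservation" port so Neutron IPAM allocates the IP, then
# (b) add that IP to the instance port's allowed_address_pairs.


def reservation_port_body(network_id, security_groups):
    # This port is never bound, so Neutron will never report it ACTIVE;
    # its only job is to hold the IPAM allocation for the container.
    return {"port": {
        "network_id": network_id,
        "device_owner": "kuryr:container",   # assumed tag, illustrative
        "security_groups": security_groups,
    }}


def allowed_pairs_update(existing_pairs, container_ip, vm_mac):
    # allowed_address_pairs has no append API: the whole list must be
    # resent, which is the update cost lmdaly raised above.  With ipvlan
    # the container shares the parent NIC's MAC.
    pairs = list(existing_pairs)
    pairs.append({"ip_address": container_ip, "mac_address": vm_mac})
    return {"port": {"allowed_address_pairs": pairs}}


# With a neutronclient instance, usage would look roughly like:
#   port = neutron.create_port(reservation_port_body(net_id, sgs))["port"]
#   ip = port["fixed_ips"][0]["ip_address"]
#   neutron.update_port(vm_port_id, allowed_pairs_update(cur_pairs, ip, mac))
print(allowed_pairs_update([], "10.0.0.5", "fa:16:3e:00:00:01"))
```

The reservation port then gets passed to kuryr-lib's port binding as the nested_neutron_port, per the agreement above.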
14:38:52 <apuimedo> #topic kuryr-libnetwork: outstanding bugs
14:39:23 <apuimedo> #info yedongcan has done some much needed bug scrubbing on the kuryr-libnetwork project
14:40:07 <apuimedo> I would request active contributors who have reported bugs to check if their open bugs are still unaddressed, if so, please, mark them as triaged, otherwise mark them as resolved
14:40:48 <apuimedo> in case it's not clear, please, bring it up on IRC or on the mailing list, we have a long list of bugs and we need to do a better job at cleaning up the queue
14:41:15 <apuimedo> I would like to propose that each week somebody takes the new bug triaging duty
14:41:43 <apuimedo> #info apuimedo proposes "weekly new bug triaging" rotating duty
14:42:16 <vikasc> +1
14:42:19 <apuimedo> #action apuimedo to review latest comments on https://launchpad.net/bugs/1626011
14:42:30 <openstack> apuimedo: Error: Could not gather data from Launchpad for bug #1626011 (https://launchpad.net/bugs/1626011). The error has been logged
14:42:41 <apuimedo> yedongcan: sorry I didn't get to your comments yet :(
14:43:23 <apuimedo> #topic kuryr-kubernetes
14:44:16 <apuimedo> #info We now have a kuryr-kubernetes devstack plugin. It should give you a nice setup for the integration work with the OSt pieces in the host and the unmodified Kubernetes pieces in containers
14:44:30 <apuimedo> let me know if you encounter any issues with it
14:44:56 <apuimedo> We should definitely use this to add fullstack tests as we contribute new code
14:45:41 <apuimedo> any questions about the devstack plugin?
14:46:10 <apuimedo> alright, moving on
14:46:35 <apuimedo> #topic kuryr-kubernetes: python3 asyncio
14:47:42 <apuimedo> #info pablochacin proposed on IRC a pluggable CNI approach so that if the CNI driver sees some specific annotations, it can execute scripts
14:48:11 <apuimedo> this would allow vendors and other integrations to be properly timed with the kuryr binding
14:48:34 <apuimedo> #info we'd allow plugin scripts before and after binding
14:49:30 <ivc_> what would the script api look like? what info would be pushed to the scripts, and how?
14:49:45 <ivc_> and do we have an actual use case for it?
14:50:02 <apuimedo> pablochacin: can elaborate on the example use cases
14:50:10 <ivc_> (but i like the idea in general)
14:51:02 <pablochacin> The idea is to allow third-party components to be informed about bindings
14:51:03 <apuimedo> the api would be basically adding annotation to the pod resource
14:51:14 <apuimedo> if the pod resource has an annotation like
14:51:25 <apuimedo> before_nic_binding
14:52:03 <apuimedo> {'my_extension': {..}}
14:52:09 <apuimedo> you'd go to the /usr/libexec/kuryr/before_nic_binding/my_extension
14:52:10 <ivc_> why could not 3rd party components watch the resources the same way we do?
14:52:15 <pablochacin> apuimedo: if you are talking about my proposal, I wasn't thinking about using pod annotations
14:52:30 <apuimedo> and pass it the data in {...}
14:52:50 <apuimedo> pablochacin: oh, sorry, I was projecting a bit of what I thought about it
14:53:00 <pablochacin> no problem.
14:53:04 <ivc_> we should probably discuss it on #openstack-kuryr after meeting
14:53:13 <apuimedo> pablochacin: I think the best is we do a mailing thread about it
14:53:16 <apuimedo> ivc_: ^^
14:53:20 <pablochacin> Ok, better
14:53:33 <apuimedo> we can then catch each other on IRC to bounce ideas off in real time too
14:53:42 <apuimedo> pablochacin: anything else from your side?
14:53:58 <apuimedo> #action pablochacin to send the CNI plug scripts proposal to the mailing list
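The annotation-driven dispatch apuimedo floated (which, per pablochacin, is only one possible shape of the proposal) could look roughly like this. Everything here is hypothetical: the hook names, the `/usr/libexec/kuryr` layout, and the payload format:

```python
import os

# Hypothetical sketch: a pod annotation such as
#   before_nic_binding: {"my_extension": {...}}
# maps to an executable /usr/libexec/kuryr/<hook>/<extension>, which the
# CNI driver would run at the matching point around binding.
HOOK_DIR = "/usr/libexec/kuryr"


def plugin_scripts(hook, annotations):
    """Yield (script_path, payload) for each extension under a hook."""
    for name, payload in annotations.get(hook, {}).items():
        yield os.path.join(HOOK_DIR, hook, name), payload


ann = {"before_nic_binding": {"my_extension": {"vlan": 42}}}
for path, data in plugin_scripts("before_nic_binding", ann):
    print(path, data)
    # a real driver would do something like:
    #   subprocess.run([path], input=json.dumps(data).encode(), check=True)
```

This is the vendor-timing hook described above: extensions run before (or, with an `after_nic_binding` hook, after) the kuryr binding step.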
14:55:03 <apuimedo> #topic kuryr-kubernetes: python2/3 eventlet
14:55:30 <apuimedo> #info ivc_ sent quite a few patches for his PoC to the kuryr-kubernetes gerrit queue
14:55:35 <apuimedo> but most importantly
14:55:45 <apuimedo> for its repercussions on the rest of the project
14:56:15 <apuimedo> #info ivc_ proposes adopting Oslo versioned objects for communication to the binding side of Kuryr
14:56:48 <apuimedo> so kuryr-libnetwork would use them when talking to the rest/rpc
14:57:02 <apuimedo> kuryr-lib port-binding should probably know about them
14:57:22 <apuimedo> and kuryr-lib CNI (which I'm doing) should definitely know about it
14:57:29 <ivc_> we can start with ovo for annotations now and translate to neutron port dict when passing to port-binding
14:57:42 <apuimedo> ivc_: yes, that's the easiest way
14:57:57 <apuimedo> kuryr-lib CNI translates to current port form for the current port-binding code
14:58:35 <ivc_> but eventually i see kuryr-lib binding using ovo
14:59:06 <apuimedo> #action apuimedo work oslo versioned objects into the kuryr-lib CNI driver
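The translation flow ivc_ and apuimedo agreed on can be illustrated with a minimal hand-rolled stand-in; real kuryr code would use oslo.versionedobjects (`VersionedObject` subclasses with a `VERSION` attribute and `obj_to_primitive()`/`obj_from_primitive()`), and the field set below is invented for the example:

```python
# Stand-in sketch, NOT oslo.versionedobjects itself: shows why carrying a
# version in the serialized form lets the CNI side translate an annotation
# into the plain port dict the current port-binding code expects.
class PortInfo(object):
    VERSION = "1.0"
    fields = ("id", "mac_address", "fixed_ips")  # illustrative fields

    def __init__(self, **kwargs):
        for f in self.fields:
            setattr(self, f, kwargs.get(f))

    def obj_to_primitive(self):
        # The version tag is what allows the receiving side to upgrade or
        # downgrade the payload before use.
        return {"versioned_object.name": type(self).__name__,
                "versioned_object.version": self.VERSION,
                "versioned_object.data": {f: getattr(self, f)
                                          for f in self.fields}}

    def to_port_dict(self):
        # Translation step for today's kuryr-lib port-binding API.
        return {f: getattr(self, f) for f in self.fields}


p = PortInfo(id="uuid", mac_address="fa:16:3e:00:00:01", fixed_ips=[])
prim = p.obj_to_primitive()
print(prim["versioned_object.version"])  # → 1.0
```

Per the plan above, annotations would carry the versioned primitive now, and the binding interface itself could move to versioned objects later.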
14:59:23 <apuimedo> ivc_: anything else you'd want to share?
14:59:43 <apuimedo> (two minutes remaining, sorry)
14:59:50 <ivc_> not really :)
15:00:04 <ivc_> oh just that ha works with eventlet :)
15:00:16 <banix> is there a minimum linux kernel version we require?
15:00:42 * banix jumps the gun
15:00:51 <apuimedo> #topic general
15:00:57 <jimbaker> hi, we are waiting on our #craton meeting to start
15:01:03 <syed_> o/
15:01:08 <apuimedo> banix: Do not think so, but generally el7 is the oldest
15:01:15 <apuimedo> jimbaker: syed_ closing, sorry
15:01:22 <banix> thx
15:01:27 <jimbaker> np
15:01:30 <apuimedo> #info devvesa presented kuryr in openstack nordic
15:01:33 <apuimedo> #endmeeting