14:11:49 #startmeeting kuryr
14:11:50 Meeting started Mon Nov 14 14:11:49 2016 UTC and is due to finish in 60 minutes. The chair is vikasc. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:11:51 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:11:54 The meeting name has been set to 'kuryr'
14:12:09 Hello everybody and welcome to another Kuryr weekly IRC meeting
14:12:13 o/
14:12:15 hi
14:12:20 who's here to chat?
14:12:29 * pc_m lurking again
14:12:36 o/
14:12:43 o/
14:12:44 o/
14:13:30 #info ivc_ irenab vikasc alraddarla_ lmdaly joined the meeting
14:13:40 o/
14:13:52 #topic kuryr-libnetwork
14:14:47 #link https://review.openstack.org/#/c/394547/
14:15:10 there are jenkins failures
14:15:15 yes
14:15:23 I'm updating the unit test files
14:15:30 lmdaly has a WIP patch for moving binding from join to create_endpoint
14:15:32 I hope to finish today or tomorrow
14:15:43 thanks mchiappero
14:17:27 I observed that currently kuryr-libnetwork is missing the neutron call for updating the port with --allowed-address-pairs in case the ipvlan driver is used in the nested container case
14:17:53 vikasc, link to bug?
14:17:56 yes, that's because there is a patch missing
14:18:02 being worked on
14:18:09 We are working on a patch to create a driver based model for libnetwork to do this
14:18:15 irenab, no bug report yet
14:18:23 irenab, just a random observation
14:18:28 guys, please report bugs once you observe them
14:18:56 irenab, sure, i was planning to raise it after discussion
14:19:11 thanks lmdaly and mchiappero for the update!!
14:19:36 lmdaly, can you please elaborate on the driver based model?
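[Editor's note: the missing Neutron call discussed above can be sketched as follows. With the ipvlan driver in the nested case, the container's IP must be added to the VM port's allowed-address-pairs, or Neutron's anti-spoofing rules drop its traffic. This is a minimal sketch, not the actual kuryr-libnetwork patch; the helper name is hypothetical and only the request body is built here.]

```python
# Sketch: merge a nested container's IP into a VM port's
# allowed_address_pairs list. With ipvlan the container shares the VM
# interface's MAC, so the pair reuses the VM port's MAC address.

def build_allowed_address_pairs(existing_pairs, container_ip, vm_mac):
    """Return an updated allowed_address_pairs list including container_ip."""
    pair = {'ip_address': container_ip, 'mac_address': vm_mac}
    if pair in existing_pairs:
        return existing_pairs  # idempotent: already present
    return existing_pairs + [pair]


# With python-neutronclient the update would then look roughly like:
#   port = neutron.show_port(vm_port_id)['port']
#   pairs = build_allowed_address_pairs(
#       port['allowed_address_pairs'], container_ip, port['mac_address'])
#   neutron.update_port(vm_port_id,
#                       {'port': {'allowed_address_pairs': pairs}})
```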
14:20:28 not concrete on the idea yet, but thinking much like the model introduced in kuryr-lib
14:21:03 there is actually another open point regarding IPVLAN
14:21:26 with a config file option and different implementations based on the driver
14:21:56 the fact is that libnetwork seems to assign the MAC address, but the IPVLAN driver doesn't have such capability
14:22:00 so it fails
14:22:09 will push a WIP patch as soon as possible to get feedback on the structure
14:22:12 lmdaly, got it, thanks. sounds like a good approach to keep the code base better separated
14:22:20 do we have any contact we can leverage in docker?
14:22:53 also the libnetwork specs are confusing at times, we need clarifications there
14:22:58 maybe banix would have helped
14:23:03 I am not aware of any, probably the mailing list option
14:23:04 if you know someone personally let us know
14:23:28 ok, thanks
14:23:41 i messed up the order :P
14:23:49 #topic kuryr-lib
14:24:16 #link https://review.openstack.org/#/c/397057/
14:24:45 vikasc, unit testing is missing
14:24:54 i pushed a small fix in ipvlan/macvlan driver ip assignment
14:25:33 I'm not too sure I understand this patch
14:25:45 the previous behaviour seemed to be correct to me
14:26:30 mchiappero, would you mind adding your comments to the patch
14:26:57 irenab, will add if the fix is valid :)
14:27:06 I will, but I need to find some time to better review and maybe test it :)
14:27:22 in utils._configure_container_iface, it adds the ip onto the macvlan/ipvlan device, the ip should be not is vm-port
14:27:37 mchiappero, np, we can still discuss offline. i will ping you
14:27:51 the ip should be not is vm-port -> the ip should not be the vm-port fixed ip
14:28:15 for the demo I passed in port as the vm-port and nested_port as a dangling neutron port with the container_ip
14:28:20 maybe the name is confusing, I thought nested port is the container port
14:28:37 it is/should br
14:28:44 *be
14:28:59 irenab, nested_port seems to be vm_port to me
14:29:17 irenab, because in the veth driver, this is not even being used
14:29:20 I would have thought the same as irenab
14:29:23 then it should be nesting_port :-)
14:29:43 I thought it is vm-port...
14:29:46 i will take a look again
14:29:51 me too
14:30:01 seems we need a name change :)
14:30:05 request everybody to please review the same
14:30:10 since nothing actually uses this param now..
14:30:33 if it's not in use, better to remove it
14:30:39 #link https://review.openstack.org/#/c/361993/
14:31:00 it will be used for the driver patch we introduce for libnetwork
14:31:00 irenab, it will be used in the trunk port case
14:31:30 then let's just improve the name
14:31:43 irenab, +1
14:31:59 #link https://review.openstack.org/#/c/361993/
14:32:21 WIP patch for adding vlan driver support
14:32:45 vikasc, I posted a few comments
14:33:12 thanks irenab, i will go through them
14:33:33 the intention is to leverage neutron trunk ports
14:34:25 a quick question: in which use case do we need to use a local vlan? I see local vlan management in that patch
14:35:07 I mean: https://review.openstack.org/#/c/361993/11/kuryr/lib/segmentation_type_drivers/vlan.py
14:35:08 limao_, for the isolation of traffic from containers
14:35:28 limao_, on the vm only
14:35:33 limao_, vlan is to identify the container
14:35:58 you mean in the vm-nested case with a subport trunk port?
14:36:07 limao_, yes
14:36:08 yes
14:36:49 In my current understanding, the vlan in the vm should not be a local vlan, it should be the same vlan as the subport
14:37:33 limao_, vlan of subport means?
14:37:41 (I'm not sure if I am missing something here since I have not actually tried trunk ports yet)
14:38:03 limao_, sorry, can you please reword your question
14:38:17 The VM should be a trunk port, say it is using vlans 100, 200
14:38:37 then the container on this vm should just use eth0.100, eth0.200, right?
14:39:09 limao_, I think the idea is to use the trunk port neutron APIs to get something similar to what OVN did: http://docs.openstack.org/developer/networking-ovn/containers.html
14:39:49 oh... let me check offline, thanks for the info, irenab, vikasc
14:40:12 limao_, irenab, are we talking about vlan-aware-vms or ipvlan-based trunk ports?
14:40:31 I think it should be vlan-aware-vms
14:40:44 vikasc, may I raise one question that is more related to the open discussion part? I just need to drop in 5 mins
14:41:02 ivc_, vlan aware vms
14:41:11 irenab, sure, please go ahead
14:41:29 there is a design summit in February
14:41:57 I didn't see kuryr in the list of confirmed projects, I wonder if anyone plans to attend
14:42:07 PTG?
14:42:10 yes
14:42:29 I do not see it on the list either
14:42:42 #action need to confirm with Toni on PTG.
14:43:10 vikasc, thanks. Please keep going according to the usual route
14:43:21 irenab, thanks :)
14:43:31 thanks irenab
14:43:40 ivc_, yes vlan aware vms
14:44:31 limao_, as per my understanding the scope of vlans (vlan-per-container) will be local to the vm only
14:44:53 ivc_, vikasc: I thought we do not need to manage local vlans in kuryr, just use the vlan of the subport
14:44:56 limao_, i have not got a chance to have a look at the ovn way yet
14:45:34 limao_, vikasc, we need to manage vlans for subports
14:45:47 limao_, i think i got your point
14:46:18 because when you create a sub-port in neutron we either specify the vid, or let neutron set it
14:46:45 but we need to coordinate subports (on neutron) with the ethX.Y inside the vm
14:46:59 ivc_, +1
14:47:28 Can this info be fetched when the container is created? or do we need to cache it in kuryr?
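[Editor's note: the subport coordination described above can be sketched as a small per-trunk allocator. The VLAN id handed to Neutron as the subport's segmentation_id must match the eth0.&lt;vid&gt; sub-interface created inside the VM, so kuryr has to track which ids are in use. This is a hypothetical sketch, not the code from the WIP vlan segmentation driver; class name, API, and id range are assumptions.]

```python
# Sketch: per-VM VLAN id bookkeeping for vlan-aware-vms trunk subports.
# The allocated vid is what would be passed to Neutron as segmentation_id
# and used to name the eth0.<vid> device inside the VM.

class VlanAllocator:
    def __init__(self, vid_min=1, vid_max=4094):
        self._free = set(range(vid_min, vid_max + 1))
        self._used = {}  # container_id -> vid

    def allocate(self, container_id):
        """Return this container's vid, allocating one if needed."""
        if container_id in self._used:
            return self._used[container_id]
        if not self._free:
            raise RuntimeError('no free VLAN ids on this trunk')
        vid = min(self._free)  # deterministic pick, for the sketch
        self._free.remove(vid)
        self._used[container_id] = vid
        return vid

    def release(self, container_id):
        """Free the container's vid for reuse and return it."""
        vid = self._used.pop(container_id)
        self._free.add(vid)
        return vid
```

If Neutron is instead allowed to pick the segmentation id when the subport is created, the allocator is unnecessary, but the chosen id still has to be read back and cached so the in-VM ethX.Y device can be created to match, which is the coordination problem ivc_ raises above.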
14:47:48 ivc_, we are talking about this https://review.openstack.org/#/c/361993/
14:47:55 \o/
14:48:09 limao_, we will have to manage it in kuryr
14:48:14 apuimedo, hello
14:48:19 apuimedo :-) you got released?
14:48:24 handing over to you apuimedo
14:48:38 and many many congrats
14:48:40 have to leave, will catch up on the log
14:48:40 :)
14:49:07 yup
14:49:10 we're all home
14:49:16 :)
14:49:20 sorry I couldn't make it 50min earlier
14:49:21 doing well?
14:49:24 apuimedo, congrats on your daughter release!
14:49:28 thanks!
14:49:33 congrats! :)
14:49:34 it was a hard fork!
14:49:36 sweets...........
14:49:45 apuimedo, laddu
14:49:50 :)
14:49:52 but the new process is running strong and consuming a lot of resources
14:49:54 as it should
14:50:04 thank you all
14:50:15 :)
14:50:25 are we in the kuryr-kubernetes part or in open discussion already?
14:50:34 just starting
14:50:34 neither
14:50:42 #topic kuryr-kubernetes
14:51:00 #link https://review.openstack.org/#/c/376044/2/kuryr_kubernetes/controller/handlers/vif.py@71
14:51:04 only 9min remaining
14:51:14 * vikasc feeling sorry for poor time-management
14:51:17 sorry, it's because my email was not sent properly
14:51:37 polling for neutron port activation seems to be a hot topic
14:51:58 but i've checked and it is the same in kuryr-libnetwork
14:52:06 #link https://github.com/openstack/kuryr-libnetwork/blob/master/kuryr_libnetwork/controllers.py#L342-L360
14:52:07 #action apuimedo to address vikasc comments to https://review.openstack.org/#/c/396113/
14:52:58 ivc_: yes, that was a relatively recent addition from banix
14:52:59 so i was thinking if it is worth adding some notification-based mechanism for both projects to check for port activation
14:53:04 only for ovs, iirc
14:53:21 well, we have an open door to do it in neutron for both
14:53:26 just as it does for nova
14:53:35 so that's the long term solution I'd like
14:53:48 or the one that I think is more doable
14:54:06 ivc_: did you see https://review.openstack.org/#/c/396113/ ? I think it addresses the loopback requirement you noted to me
14:54:53 apuimedo, yes, but I need to test it (the cni binary is 'lo' instead of 'loopback')
14:55:32 yeah... I need to test it too
14:55:42 I had to run to the hospital just after writing it xD
14:56:00 :D
14:56:07 almost literally
14:56:10 :D
14:56:15 also
14:56:23 maybe the code was ready too
14:56:25 I want to add setting the kubectl config to devstack
14:56:26 :P
14:56:30 :D
14:56:39 apuimedo: you will get more sleepless nights now
14:56:40 to make development easier
14:56:47 limao_: yeah, more time for coding
14:56:58 :)
14:57:02 one eye on the baby and one on the screen, like a chameleon
14:57:12 apuimedo, the height of optimism
14:57:43 indeed
14:58:00 anyway, I will be working part time for a few weeks now
14:58:14 I still should be able to finish cni and do these devstack improvements :-)
14:58:35 home is always first :)
14:58:43 limao_, +1
14:58:48 very true limao_
14:58:57 anything else about kuryr-kubernetes before we close?
14:59:27 I'd appreciate it if people try the devstack, start to play with the environment and review ivc_'s code if they haven't started already
14:59:41 apuimedo, well we could discuss the cni-daemon idea next week
15:00:06 since it's a long-term idea anyway
15:00:21 ivc_: good point
15:00:40 ivc_: we can even do an extraordinary irc meeting some time in the week in #openstack-kuryr
15:00:50 vikasc: we must close the meeting now and move to #openstack-kuryr
15:00:57 apuimedo, thanks
15:01:00 I'm sure there are people waiting to take the room :P
15:01:02 sure
15:01:03 apuimedo, sure
15:01:16 #end-meeting
15:01:19 jimbaker: Error: Can't start another meeting, one is in progress. Use #endmeeting first.
15:01:33 vikasc, ^^^
15:01:45 vikasc: endmeeting
15:01:50 not end-meeting
15:01:52 #endmeeting
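[Editor's note: the port-activation polling discussed during the kuryr-kubernetes topic (see the controllers.py link for the kuryr-libnetwork variant) can be sketched as below. The status lookup is injected as a callable so the loop can be exercised without a Neutron endpoint; the function name, timeout, and interval are hypothetical, and a notification-based mechanism, as proposed in the meeting, would replace this loop entirely.]

```python
# Sketch: poll a port's status until it becomes ACTIVE, with timeout.
import time


def wait_for_port_active(get_status, timeout=60, interval=1,
                         clock=time.monotonic, sleep=time.sleep):
    """Poll get_status() until it returns 'ACTIVE' or the timeout expires."""
    deadline = clock() + timeout
    while True:
        status = get_status()
        if status == 'ACTIVE':
            return status
        if status == 'ERROR':
            raise RuntimeError('port went into ERROR state')
        if clock() >= deadline:
            raise TimeoutError(
                'port did not become ACTIVE within %ss' % timeout)
        sleep(interval)


# With python-neutronclient the lookup could be, roughly:
#   get_status = lambda: neutron.show_port(port_id)['port']['status']
```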