14:00:41 #startmeeting kuryr
14:00:43 Meeting started Mon Oct 30 14:00:41 2017 UTC and is due to finish in 60 minutes. The chair is dmellado. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:44 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:46 The meeting name has been set to 'kuryr'
14:01:09 o/
14:01:31 Hi!
14:02:16 So, thanks for noticing the time change! (I almost didn't, damn daylight saving time...)
14:02:32 #topic kuryr-kubernetes
14:02:57 Does anyone have anything on the topic? Any reviews, patches or blockers that you'd like to share with us? :P
14:03:04 just the 3 of us??
14:03:16 yep, I have something on kuryr-kubernetes
14:03:27 ltomasbo: speak now or stay silent forever
14:03:40 tbh, I'm wondering if people noticed the DST change, now seriously
14:03:46 the nested devstack is broken by this patch
14:03:53 hi, sorry for being late
14:03:53 https://github.com/openstack-dev/devstack/commit/146332e349416ac0b3c9653b0ae68d55dbb3f9de
14:04:05 dmellado: Protip - set the meeting reminder to Iceland - they have UTC the whole year.
14:04:16 dulek: I should! xD
14:04:19 that forces etcd3 to the service_host instead of the nested VM
14:04:51 ltomasbo: did you open a bug regarding this?
14:04:55 but I'm not sure how to fix it from our side
14:05:13 but not sure this is a 'bug'
14:05:22 it seems they changed it on purpose...
14:05:36 ltomasbo: any idea why this change was done?
14:05:59 nope, I just saw etcd3 was not starting and trying to bind on the service host
14:06:07 and I saw they changed that in devstack a month ago
14:06:17 (we really need a way to test nested behavior...)
14:06:38 the patch commit msg talks about IPv6
14:06:41 at least the deployment part to start with
14:06:52 but not sure why they needed to change it to the service_host
14:07:36 hmmm nor do I
14:07:38 ltomasbo: agree, seems some IPv6_HOST_IP param could be added without breaking anything
14:07:42 there's a bug linked there
14:08:21 https://bugs.launchpad.net/devstack/+bug/1656329
14:08:22 Launchpad bug 1656329 in devstack "SERVICE_IP_VERSION=6 doesn't work in devstack" [Undecided,In progress] - Assigned to Dr. Jens Harbott (j-harbott)
14:08:24 #link https://bugs.launchpad.net/devstack/+bug/1656329
14:08:55 ltomasbo: it seems that haleyb was an assignee for this at the beginning
14:09:01 maybe we could try to sync up with him
14:09:20 dmellado: who me? :)
14:09:26 oh, here he is
14:09:36 do you have any idea about the reason for that commit?
14:09:44 umm
14:09:45 it seems that it broke our nested deployment
14:09:56 seems hongbin already created a revert
14:10:17 https://review.openstack.org/#/c/508214/
14:10:20 oh, cool in that way
14:10:30 but still I'd like to check why that change was needed
14:11:19 let me ping the patch's assignee
14:11:41 dmellado: i don't know much about the etcd change, but know that using IPv6 to the API was the goal, think there's a follow-up as well
14:12:54 well, in any case I'll ping frickler and will try to understand what's going on
14:13:01 dmellado: perhaps https://review.openstack.org/#/c/505168/ would fix things? i really should look at that again
14:13:22 yes, he did most of the updates, best place to start
14:13:27 dmellado, haleyb: the problem is when etcd is not to be installed on the API/controller node
14:13:27 dmellado: you rang? ;)
14:14:00 the patch forces it to the service_host, regardless of where you include the enable_etcd3
14:14:13 so, multinode and nested are broken by this
14:14:14 hi frickler, yeah, we were discussing hongbin's patch at https://review.openstack.org/#/c/508214/
14:14:33 it broke our nested environment and we were wondering if you could, first of all, explain the current situation to us
14:14:46 while we don't mind fixing whatever is necessary, we just got hit by this
14:17:26 so https://review.openstack.org/505502 fixed etcd3 setup when ipv6 is used for service addresses
14:17:30 hongbin: are you around?
14:17:43 dmellado: yes
14:17:44 and somehow this seems to have triggered an issue for kuryr
14:17:58 yeah, as we deploy etcd3
14:18:11 hongbin: you were discussing a partial revert, could you elaborate on it?
14:18:26 but you seem to be trying to run etcd3 on multiple nodes, which isn't currently supported by devstack
14:19:14 for the nested case it is not on the controller node, but it is not on multiple nodes either
14:19:43 dmellado: the patch above is a partial revert
14:20:21 irenab: is there a particular reason why it cannot run on the controller node? that is what devstack would support as it stands
14:21:43 frickler, we have a nested deployment, where we use the needed components to deploy a set of VMs
14:21:47 i don't think it is a hard requirement that etcd must run on the controller node
14:21:56 frickler: this is for the case where we run kuryr inside a VM (this VM is deployed on the devstack node)
14:22:01 and in that set of VMs we install kuryr and the required components to create a kubernetes cluster
14:22:33 so, both kuryr and etcd3 (and kubernetes) run inside the VMs, not on the controller
14:22:45 like if it was undercloud and overcloud (kind of)
14:23:34 the current problem is that devstack assumes etcd must run on the controller. however, if etcd is not run on the controller, the whole stack will fail
14:24:02 i would say the assumption that etcd must run on the controller node is invalid
14:24:06 hmm, so I guess that usecase will need added logic for devstack to support it
14:25:41 it was working before...
14:25:54 when it was bound to host_IP instead of service_ip
14:26:05 currently consumers are pointed towards the SERVICE_HOST like here http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n330
14:26:14 #link https://review.openstack.org/#/c/511092/
14:26:33 the above patch switches the etcd url back to HOST_IP
14:26:43 ltomasbo: it was working in the sense that etcd could be started on other hosts only when ipv4 was used, but not in the sense that services would connect to that etcd
14:27:28 it should, if you enable it on the controller, right?
14:27:45 it will depend on whether your service_host == host_ip, right?
14:29:08 hongbin, this is pretty much what I did in my environment to make it work
14:29:18 HOST_IP is only locally defined, it will not work from a different host. so cinder in the link above will not connect to etcd on HOST_IP unless it is the same host
14:30:13 so then the solution should not be to change it to service_host, but to etcd_host or something like that, right?
14:30:38 ltomasbo: +1
14:30:39 so that you can still install etcd wherever you like, right? and point to the service
14:31:05 ltomasbo: that would be what I meant by "added logic" above, yes
14:31:10 heh
14:32:09 or you could do different etcd setup routines in your devstack plugin, because I don't think that this is a usecase that has much general interest
14:32:47 I'd say we should add an action about "configure etcd within kuryr-k8s devstack plugin"
14:32:59 frickler, can you point to some example of that?
14:33:17 that could be a fair solution
14:33:33 ltomasbo: I guess just put that logic within our devstack plugin
14:33:37 rather than in devstack overall
14:33:47 ok
14:34:22 sorry for hijacking the meeting... xD
14:34:37 I guess we can move on to another point...
14:34:46 heh, np!
14:35:09 #action move etcd logic to kuryr-k8s devstack plugin
14:35:37 dmellado: please open a bug for that
14:35:48 irenab: I will after the meeting
14:36:30 thanks
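
[Editor's note: below is a minimal Python sketch, not devstack code, of the idea the discussion above converges on: consumers resolve the etcd endpoint from a dedicated etcd_host-style setting and only fall back to service_host, so etcd can live on a nested VM or another node. The ETCD_HOST and SERVICE_HOST variable names mirror the settings discussed and are assumptions here.]

    # Sketch: resolve the etcd endpoint from ETCD_HOST, falling back to
    # SERVICE_HOST and finally to localhost. Illustrative only.
    import os


    def etcd_endpoint(port=2379):
        host = (os.environ.get('ETCD_HOST')
                or os.environ.get('SERVICE_HOST')
                or '127.0.0.1')
        return 'http://%s:%s' % (host, port)


    if __name__ == '__main__':
        print(etcd_endpoint())
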
14:36:33 I also have a topic to address: basically, as a follow-up to the kuryr-tempest-plugin scenario manager, I added a devstack plugin function to create an SG rule for ICMP so pods could ping VMs
14:36:45 but I've seen issues on the octavia and lbaasv2 gates which I'm unable to reproduce locally
14:36:56 before I ask infra to freeze one node, could anyone try to reproduce them?
14:37:24 https://review.openstack.org/#/c/515357/
14:37:31 thanks in advance! ;)
14:37:39 anyone else on kuryr-k8s?
14:37:56 Anyone want an update on the CNI daemon?
14:37:59 just a short update on k8s net policies
14:38:11 dulek: sure
14:38:16 I want updates on both! xD
14:38:33 heh, go for it folks
14:38:37 let me go first, since my update is short
14:38:41 Okay!
14:39:42 leyal and me are in the process of mapping k8s net policies to neutron SGs and figuring out the optimal design, still not finalized
14:40:10 hopefully something to share in a few days
14:40:17 irenab, nice, any big issues so far?
14:40:49 one issue that may complicate things is if we need to support selectors of a ! form
14:41:05 like role != DB
14:41:19 umm, right
14:41:56 leyal has some ideas how to solve it, but this will require watchers per query
14:42:11 irenab: do you have any bp/spec for this?
14:42:23 I think we may drop this case for the first iteration
14:42:31 what do you think?
14:42:35 sounds reasonable to me
14:42:38 dmellado: a sec
14:42:43 fair enough for me as well
14:42:58 https://blueprints.launchpad.net/kuryr-kubernetes/+spec/k8s-network-policies
14:43:08 #link https://blueprints.launchpad.net/kuryr-kubernetes/+spec/k8s-network-policies
14:43:10 thanks irenab ;)
14:43:48 that’s all for now
14:43:54 dulek: you can go now ;P
14:43:56 Okay, so CNI daemon.
14:44:37 I've listened to apuimedo's advice and started using cotyledon. I've placed the updated patch in a separate commit: https://review.openstack.org/#/c/515186/
14:44:54 #link https://review.openstack.org/#/c/515186/
14:45:10 This works much better and works around issues I've had with signal handling and stuff.
14:45:23 Besides, I've moved to Flask to implement the server.
14:45:36 dulek: +1 on flask ;)
14:45:59 There's an issue there, like ltomasbo noticed - processes are somehow leaking even though I've set threaded=True when creating the server.
14:46:12 huh
14:46:24 so, it also happened in your env?
14:46:32 ltomasbo: Yup.
14:46:38 ok!
14:46:40 I'm working on that - probably I'll place that whole server under uwsgi - another thing apuimedo advised.
14:46:47 Maybe that'll help.
14:47:09 dulek: shout out if you need help with that ;)
14:47:36 dmellado: I will if the kuryr-libnetwork example doesn't help me. :)
14:47:43 Plus I'm writing a few more unit tests to increase the coverage. But besides that, the patch is ready for reviews and testing!
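
[Editor's note: below is a minimal Python sketch of the approach dulek describes, a threaded Flask HTTP server run as a cotyledon-managed service. It is not the actual patch linked above (https://review.openstack.org/#/c/515186/); the port number and route are made up for illustration.]

    # Sketch: a Flask server hosted in a cotyledon worker process.
    import cotyledon
    from flask import Flask, jsonify


    class CNIDaemonServerService(cotyledon.Service):
        """Runs a threaded Flask server in a cotyledon-managed worker."""

        def __init__(self, worker_id):
            super(CNIDaemonServerService, self).__init__(worker_id)
            self.app = Flask(__name__)
            self.app.add_url_rule('/addNetwork', 'add_network',
                                  self.add_network, methods=['POST'])

        def add_network(self):
            # Placeholder handler; a real daemon would wire up the pod here.
            return jsonify({'status': 'ok'})

        def run(self):
            # threaded=True makes Flask serve requests in threads; the
            # process-leak issue mentioned above was seen even with this set.
            self.app.run(host='127.0.0.1', port=5036, threaded=True)


    if __name__ == '__main__':
        manager = cotyledon.ServiceManager()
        manager.add(CNIDaemonServerService, workers=1)
        manager.run()
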
14:47:59 all right, so let's do a quick round of the other topics as we're running out of time
14:48:10 #topic kuryr-libnetwork
14:48:17 great work dulek!
14:48:20 anything on this?
14:48:32 hongbin: ?
14:48:55 hongbin: i don't have anything
14:49:11 hongbin: and on fuxi or fuxi-kubernetes?
14:49:25 #topic fuxi / fuxi-kubernetes
14:49:28 dmellado: nothing much from me either
14:49:29 #chair hongbin
14:49:29 Current chairs: dmellado hongbin
14:49:45 hongbin: ack, thanks!
14:49:52 #topic open discussion
14:50:17 I forgot to mention this patch: https://review.openstack.org/#/c/510157/
14:50:24 I discussed it with apuimedo
14:50:27 all right, so besides this I just wanted to remind you folks that we now have the scenario manager ready to be used for any kind of tests you might want to add!
14:50:30 please do so!
14:50:35 ltomasbo: what was that about?
14:50:45 and we agreed on a couple of modifications to skip more calls to neutron
14:51:08 (1 to get the ports instead of 2) plus only 2 for the network/subnet information retrieval
14:51:10 * dmellado adds that to the review queue, thanks luis ;)
14:51:27 that will speed up that part remarkably
14:51:27 ltomasbo: it is still in WIP
14:51:30 (hopefully)
14:51:39 ltomasbo: but you still owe me a test
14:51:40 xD
14:51:42 it was not, but I added it after talking with apuimedo
14:51:52 irenab: btw, regarding the dragonflow kuryr gate
14:51:56 sorry about the delay and so on
14:52:07 I'll be quite busy during this week
14:52:16 but I do plan to fetch oanson next week
14:52:23 dmellado: not that urgent, but my idea was to add a gate in dragonflow for kuryr integration
14:52:26 oanson: I've seen that you'll be at the OSS, won't you?
14:52:44 dmellado: he is
14:52:58 irenab: ack, then I'll have a talk with him there and will follow up with you too
14:53:10 dmellado: it will be great if you can cover it with him next week
14:53:23 that said, if I get out of the zombie state when I get off the aircraft
14:53:25 xD
14:53:52 dmellado: I am sure you will
14:54:14 irenab: heh, my trip is about 24.5 hours so let's see ;)
14:54:39 all right folks, thanks for attending! thanks frickler and haleyb for the pointers too!
14:55:04 so no weekly next week?
14:55:09 oh, yeah
14:55:21 I'd say nope, at least from apuimedo's and my side
14:55:49 #action send email to ML about no meeting next week (apuimedo|dmellado)
14:55:52 ok, I guess we can sync on the channel
14:55:59 ok, thanks!
14:56:22 dmellado: enjoy the summit
14:56:32 so if nobody has anything more to discuss I'll close the meeting ;)
14:56:34 thanks irenab!
14:56:38 #endmeeting
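
[Editor's note, appended after the log: below is a minimal Python sketch of the kind of helper dmellado mentions in the kuryr-tempest-plugin topic above, adding an ingress ICMP rule to a security group so pods can ping VMs. It is not the actual devstack plugin function from https://review.openstack.org/#/c/515357/; the auth values and the 'default' security group are assumptions for illustration, using python-neutronclient.]

    # Sketch: allow ICMP on a security group via python-neutronclient.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from neutronclient.v2_0 import client as neutron_client


    def allow_icmp(neutron, sg_id):
        # Add an ingress ICMP rule to the given security group.
        return neutron.create_security_group_rule(
            {'security_group_rule': {
                'security_group_id': sg_id,
                'direction': 'ingress',
                'protocol': 'icmp',
                'ethertype': 'IPv4',
            }})


    if __name__ == '__main__':
        # Illustrative credentials; replace with real ones.
        auth = v3.Password(auth_url='http://127.0.0.1/identity/v3',
                           username='admin', password='secret',
                           project_name='admin',
                           user_domain_id='default',
                           project_domain_id='default')
        neutron = neutron_client.Client(session=session.Session(auth=auth))
        # The project's 'default' security group, as an example target.
        sg = neutron.list_security_groups(name='default')['security_groups'][0]
        allow_icmp(neutron, sg['id'])
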