14:00:41 <dmellado> #startmeeting kuryr
14:00:43 <openstack> Meeting started Mon Oct 30 14:00:41 2017 UTC and is due to finish in 60 minutes.  The chair is dmellado. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:44 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:46 <openstack> The meeting name has been set to 'kuryr'
14:01:09 <ltomasbo> o/
14:01:31 <dulek> Hi!
14:02:16 <dmellado> So, thanks for noticing the time change! (I almost didn't, damn daylight saving time...)
14:02:32 <dmellado> #topic kuryr-kubernetes
14:02:57 <dmellado> Does anyone have anything on the topic? Any reviews, patches or blockers that you'd like to share with us? :P
14:03:04 <ltomasbo> just the 3 of us??
14:03:16 <ltomasbo> yep, I have something on kuryr-kubernetes
14:03:27 <dmellado> ltomasbo: speak now or stay silent forever
14:03:40 <dmellado> tbh, I'm wondering if people noticed the DST change, now seriously
14:03:46 <ltomasbo> the nested devstack is broken by this patch
14:03:53 <irenab> hi, sorry for being late
14:03:53 <ltomasbo> https://github.com/openstack-dev/devstack/commit/146332e349416ac0b3c9653b0ae68d55dbb3f9de
14:04:05 <dulek> dmellado: Protip - set the meeting reminder to Iceland - they have UTC whole year.
14:04:16 <dmellado> dulek: I should! xD
14:04:19 <ltomasbo> that forces etcd3 onto the service_host instead of the nested VM
14:04:51 <dmellado> ltomasbo: did you open a bug regarding this?
14:04:55 <ltomasbo> but I'm not sure how to fix it from our side?
14:05:13 <ltomasbo> but not sure this is a 'bug'
14:05:22 <ltomasbo> it seems they changed it on purpose...
14:05:36 <irenab> ltomasbo: any idea why this change is done?
14:05:59 <ltomasbo> nope, I just saw etcd3 was not starting and trying to bind on the service host
14:06:07 <ltomasbo> and I saw they changed that in devstack a month ago
14:06:17 <ltomasbo> (we really need a way to test nested behavior...)
14:06:38 <ltomasbo> the patch commit msg talks about IPv6
14:06:41 <irenab> at least deployment part to start with
14:06:52 <ltomasbo> but not sure why they need to change to the service_host
14:07:36 <dmellado> hmmm nor do I
14:07:38 <irenab> ltomasbo: agree, seems some IPv6_HOST_IP param could be added without breaking anything
14:07:42 <dmellado> there's a bug linked there
14:08:21 <ltomasbo> https://bugs.launchpad.net/devstack/+bug/1656329
14:08:22 <openstack> Launchpad bug 1656329 in devstack "SERVICE_IP_VERSION=6 doesn't work in devstack" [Undecided,In progress] - Assigned to Dr. Jens Harbott (j-harbott)
14:08:24 <dmellado> #link https://bugs.launchpad.net/devstack/+bug/1656329
14:08:55 <dmellado> ltomasbo: it seems that haleyb was an assignee for this at the beginning
14:09:01 <dmellado> maybe we could try to sync up with him
14:09:20 <haleyb> dmellado: who me? :)
14:09:26 <dmellado> oh, here he is
14:09:36 <dmellado> do you have any idea of the reason behind that commit?
14:09:44 <ltomasbo> umm
14:09:45 <dmellado> it seems that it broke our nested deployment
14:09:56 <ltomasbo> seems hongbin already created a revert
14:10:17 <ltomasbo> https://review.openstack.org/#/c/508214/
14:10:20 <dmellado> oh, cool in that way
14:10:30 <dmellado> but still I'd like to check why that change was needed
14:11:19 <dmellado> let me ping the patch's assignee
14:11:41 <haleyb> dmellado: i don't know much about the etcd change, but know that using IPv6 to the API was the goal, think there's a follow-up as well
14:12:54 <dmellado> well, in any case I'll ping frickler and will try to understand what's going on
14:13:01 <haleyb> dmellado: perhaps https://review.openstack.org/#/c/505168/ would fix things?  i really should look at that again
14:13:22 <haleyb> yes, he did most of the updates, best place to start
14:13:27 <ltomasbo> dmellado, haleyb the problem is when etcd is not installed at the API/controller node
14:13:27 <frickler> dmellado: you rang? ;)
14:14:00 <ltomasbo> the patch forces it to the service_host, regardless of where you include the enable_etcd3
14:14:13 <ltomasbo> so, multinode and nested is broken by this
14:14:14 <dmellado> hi frickler, yeah, we were discussing hongbin patch on https://review.openstack.org/#/c/508214/
14:14:33 <dmellado> it broke our nested environment and we were wondering if you could just first of all, explain the current situation to us
14:14:46 <dmellado> while we don't mind fixing whatever is necessary we just got hit by this
14:17:26 <frickler> so https://review.openstack.org/505502 fixed etcd3 setup when ipv6 is used for service addresses
14:17:30 <dmellado> hongbin: are you around?
14:17:43 <hongbin> dmellado: yes
14:17:44 <frickler> and somehow this seems to have triggered an issue for kuryr
14:17:58 <dmellado> yeah, as we deploy etcd3
14:18:11 <dmellado> hongbin: you were discussing a partial revert, could you elaborate on it?
14:18:26 <frickler> but you seem to be trying to run etcd3 on multiple nodes, which isn't currently supported by devstack
14:19:14 <irenab> for the nested case, it is just not on the controller node, it is not spread across multiple nodes
14:19:43 <hongbin> dmellado: the patch above is  a partial revert
14:20:21 <frickler> irenab: is there a particular reason why it cannot run on the controller node? that is what devstack would support as it stands
14:21:43 <ltomasbo> frickler, we have a nested deployment, where we use the undercloud deployment to create a set of VMs
14:21:47 <hongbin> i don't think it is a hard requirement that etcd must run on the controller node
14:21:56 <irenab> frickler: this is for the case we run kuryr inside VM (this VM is deployed on devstack node)
14:22:01 <ltomasbo> and in that set of VMs we install kuryr and the required components to create a kubernetes cluster
14:22:33 <ltomasbo> so, both kuryr and etcd3 (and kubernetes) run inside the VMs, not on the controller
14:22:45 <ltomasbo> like if it was undercloud and overcloud (kind of)
14:23:34 <hongbin> the current problem is that devstack assumes etcd must run on controller. however, if etcd is not run on controller, the whole stack will fail
14:24:02 <hongbin> i would say the assumption that etcd must run on controller node is invalid
14:24:06 <frickler> hmm, so I guess that usecase will need added logic for devstack to support it
14:25:41 <ltomasbo> it was working before...
14:25:54 <ltomasbo> when it was bound to host_IP instead of service_ip
14:26:05 <frickler> currently consumers are pointed towards the SERVICE_HOST like here http://git.openstack.org/cgit/openstack-dev/devstack/tree/lib/cinder#n330
14:26:14 <hongbin> #link https://review.openstack.org/#/c/511092/
14:26:33 <hongbin> the patch above switches the etcd url back to HOST_IP
14:26:43 <frickler> ltomasbo: it was working in the sense that etcd could be started on other hosts only when ipv4 was used, but not in that services would connect to that etcd
14:27:28 <ltomasbo> it should, if you enable it at the controller, right?
14:27:45 <ltomasbo> it will depend on whether your service_host == host_ip, right?
14:29:08 <ltomasbo> hongbin, this is pretty much what I did on my environment to make it work
14:29:18 <frickler> HOST_IP is only locally defined, it will not work from a different host. so cinder in the link above will not connect to etcd on HOST_IP unless it is the same host
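The distinction frickler describes can be sketched as follows. This is a minimal illustration with made-up addresses, not devstack's actual lib/etcd3 code; `HOST_IP` and `SERVICE_HOST` are the devstack variables being discussed, and the helper function is hypothetical:

```python
# Minimal sketch of the binding problem discussed above (illustrative only).
# HOST_IP is the address of the node actually running the service;
# SERVICE_HOST is the API/controller node. On a single-node devstack the two
# coincide; on a nested/multinode deployment they differ, so forcing etcd to
# bind to SERVICE_HOST fails on any node that is not the controller.
HOST_IP = "192.168.0.12"      # nested VM where etcd3 is enabled (made up)
SERVICE_HOST = "10.0.0.5"     # undercloud controller node (made up)

def etcd_listen_url(bind_host: str) -> str:
    """Build the client URL etcd would try to bind to (hypothetical helper)."""
    return f"http://{bind_host}:2379"

# Before the devstack change: bind locally -> works on the nested VM.
before = etcd_listen_url(HOST_IP)
# After the change: bind to the controller address -> fails to bind on any
# node whose local address differs from SERVICE_HOST.
after = etcd_listen_url(SERVICE_HOST)
```

As frickler notes, the flip side is that other services resolve etcd via SERVICE_HOST, so binding only to the local HOST_IP breaks cross-node consumers; hence the suggestion of a dedicated etcd host variable.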
14:30:13 <ltomasbo> so, then the solution should not be to change it to service_host, but to etcd_host or something like that, right?
14:30:38 <hongbin> ltomasbo: +1
14:30:39 <ltomasbo> so that you can still install etcd wherever you like, right? and point to the service
14:31:05 <frickler> ltomasbo: that would be what I meant by "added logic" above, yes
14:31:10 <dmellado> heh
14:32:09 <frickler> or you could do different etcd setup routines in your devstack plugin, because I don't think that this is a usecase that has much general interest
14:32:47 <dmellado> I'd say we should add an action about "configure etcd within kuryr-k8s devstack plugin"
14:32:59 <ltomasbo> frickler, can you point to some example about that?
14:33:17 <ltomasbo> that could be a fair solution
14:33:33 <dmellado> ltomasbo: I guess just put that logic within our devstack plugin
14:33:37 <dmellado> rather than in devstack overall
14:33:47 <ltomasbo> ok
14:34:22 <ltomasbo> sorry for hijacking the meeting... xD
14:34:37 <ltomasbo> I guess we can move to another point...
14:34:46 <dmellado> heh, np!
14:35:09 <dmellado> #action move etcd logic to kuryr-k8s devstack plugin
14:35:37 <irenab> dmellado: please open a bug for that
14:35:48 <dmellado> irenab: I will after the meeting
14:36:30 <irenab> thanks
14:36:33 <dmellado> I also have a topic to address. Basically, as a follow-up to the kuryr-tempest-plugin scenario manager, I added a devstack plugin function to create a SG rule for icmp so pods could ping vms
14:36:45 <dmellado> but I've seen issues on the octavia and lbaasv2 gates which I'm unable to reproduce locally
14:36:56 <dmellado> before I ask infra to freeze one node, could anyone try to reproduce them?
14:37:24 <dmellado> https://review.openstack.org/#/c/515357/
14:37:31 <dmellado> thanks in advance! ;)
14:37:39 <dmellado> anyone else on kuryr-k8s?
14:37:56 <dulek> Anyone wants update on CNI daemon?
14:37:59 <irenab> just a short update on k8s net policies
14:38:11 <irenab> dulek: sure
14:38:16 <ltomasbo> I want updates on both! xD
14:38:33 <dmellado> heh, go for it folks
14:38:37 <irenab> let me go first, since update is short
14:38:41 <dulek> Okay!
14:39:42 <irenab> leyal and I are in the process of mapping k8s net policies to neutron SGs and figuring out the optimal design, still not finalized
14:40:10 <irenab> hopefully something to share in a few days
14:40:17 <ltomasbo> irenab, nice, any big issues so far?
14:40:49 <irenab> one issue that may complicate things is if we need to support selectors of a ! form
14:41:05 <irenab> like role != DB
14:41:19 <ltomasbo> umm, right
14:41:56 <irenab> leyal has some ideas on how to solve it, but this will require watchers per query
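The difficulty irenab describes can be sketched in a few lines. This is a hypothetical illustration, not kuryr-kubernetes code: a negative selector such as `role != DB` matches an open-ended set of pods (including pods with no `role` label at all), and any label change on any pod can flip its membership, which is why each such query effectively needs its own watcher:

```python
# Hypothetical sketch (not kuryr-kubernetes code) of why a negative selector
# like "role != DB" is hard to map to a static security-group membership.
def matches_not_equal(pod_labels: dict, key: str, value: str) -> bool:
    """True when the pod does NOT carry key=value (an absent key also matches)."""
    return pod_labels.get(key) != value

pods = {
    "frontend-1": {"role": "web"},
    "db-1": {"role": "DB"},
    "worker-1": {},  # no role label at all -> still matches "role != DB"
}

selected = sorted(name for name, labels in pods.items()
                  if matches_not_equal(labels, "role", "DB"))
# "frontend-1" and "worker-1" match; "db-1" does not.

# A label update on ANY pod can change the result set, so the SG membership
# has to be re-evaluated on every pod event -- hence a watcher per query.
pods["db-1"]["role"] = "web"
reselected = sorted(name for name, labels in pods.items()
                    if matches_not_equal(labels, "role", "DB"))
# "db-1" now matches too.
```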
14:42:11 <dmellado> irenab: do you have any bp/spec for this?
14:42:23 <irenab> I think we may drop this case for the first iteration
14:42:31 <irenab> what do you think?
14:42:35 <ltomasbo> sounds reasonable to me
14:42:38 <irenab> dmellado: a sec
14:42:43 <dmellado> fair enough for me as well
14:42:58 <irenab> https://blueprints.launchpad.net/kuryr-kubernetes/+spec/k8s-network-policies
14:43:08 <dmellado> #link https://blueprints.launchpad.net/kuryr-kubernetes/+spec/k8s-network-policies
14:43:10 <dmellado> thanks irenab ;)
14:43:48 <irenab> that’s all for now
14:43:54 <dmellado> dulek: you can go now ;P
14:43:56 <dulek> Okay, so CNI daemon.
14:44:37 <dulek> I've listened to apuimedo's advice and started using cotyledon. I've placed updated patch in separate commit: https://review.openstack.org/#/c/515186/
14:44:54 <dmellado> #link https://review.openstack.org/#/c/515186/
14:45:10 <dulek> This works much better and works around issues I've had with signal handling and stuff.
14:45:23 <dulek> Besides I've moved to Flask to implement the server.
14:45:36 <dmellado> dulek: +1 on flask ;)
14:45:59 <dulek> There's an issue there, like ltomasbo noticed - processes are somehow leaking even though I've set threaded=True when creating server.
14:46:12 <dmellado> huh
14:46:24 <ltomasbo> so, it also happened in your env?
14:46:32 <dulek> ltomasbo: Yup.
14:46:38 <ltomasbo> ok!
14:46:40 <dulek> I'm working on that - probably I'll place that whole server under uwsgi - another thing apuimedo advised.
14:46:47 <dulek> Maybe that'll help.
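The threaded request handling that `threaded=True` aims for can be illustrated with a stdlib-only sketch. This is not the actual CNI daemon code (which uses Flask and cotyledon, per the review above); it just shows the idea of serving each request on its own thread rather than forking processes:

```python
# Stdlib-only illustration of one-thread-per-request serving, the behavior
# Flask's threaded=True is meant to give. NOT the kuryr CNI daemon code.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reply "ok" to any GET; each request runs on its own thread.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # keep the sketch quiet
        pass

# Port 0 lets the OS pick a free port; ThreadingHTTPServer spawns a thread
# per connection instead of forking, so no processes should leak.
server = ThreadingHTTPServer(("127.0.0.1", 0), PingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
body = urllib.request.urlopen(f"http://127.0.0.1:{port}/").read()
server.shutdown()
```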
14:47:09 <dmellado> dulek: shoot out if you need help with that ;)
14:47:36 <dulek> dmellado: I will if the kuryr-libnetwork example doesn't help me. :)
14:47:43 <dulek> Plus I'm writing a few more unit tests to increase the coverage. But besides that the patch is ready for reviews and testing!
14:47:59 <dmellado> all right, so let's do a quick round to the other topics as we're getting out of time
14:48:10 <dmellado> #topic kuryr-libnetwork
14:48:17 <ltomasbo> great work dulek!
14:48:20 <dmellado> anything on this?
14:48:32 <dmellado> hongbin: ?
14:48:55 <hongbin> dmellado: i don't have anything
14:49:11 <dmellado> hongbin: and on fuxi or fuxi-kubernetes?
14:49:25 <dmellado> #topic fuxi / fuxi-kubernetes
14:49:28 <hongbin> dmellado: nothing much from me either
14:49:29 <dmellado> #chair hongbin
14:49:29 <openstack> Current chairs: dmellado hongbin
14:49:45 <dmellado> hongbin: ack, thanks!
14:49:52 <dmellado> #topic open discussion
14:50:17 <ltomasbo> I forgot to mention about this patch https://review.openstack.org/#/c/510157/
14:50:24 <ltomasbo> I discussed it with apuimedo
14:50:27 <dmellado> all right, so besides this I just wanted to remind you folks that we now have the scenario manager ready to be used for any kind of tests you might want to add!
14:50:30 <dmellado> please do so!
14:50:35 <dmellado> ltomasbo: what was that about?
14:50:45 <ltomasbo> and we agreed on a couple of modifications to skip more calls to neutron
14:51:08 <ltomasbo> (1 call to get the ports instead of 2, plus only 2 for the network/subnet information retrieval)
14:51:10 * dmellado adds that to the review queue, thanks luis ;)
14:51:27 <ltomasbo> that will speed up that part remarkably
14:51:27 <irenab> ltomasbo: it is still in WIP
14:51:30 <ltomasbo> (hopefully)
14:51:39 <dmellado> ltomasbo: but you still owe me a test
14:51:40 <dmellado> xD
14:51:42 <ltomasbo> it was not, but I added it after talking with apuimedo
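The call-reduction idea ltomasbo describes can be sketched with a stub client. This is a hypothetical illustration, not the code in the review: `FakeNeutron` stands in for a neutron client, and the point is that one filtered list call replaces a call per port:

```python
# Hypothetical sketch (not the actual patch) of trimming neutron round trips:
# fetch all ports matching a filter in ONE list call instead of one call per
# port. FakeNeutron is a stand-in; call counting shows the saving.
class FakeNeutron:
    def __init__(self, ports):
        self._ports = ports
        self.calls = 0  # number of API round trips made

    def list_ports(self, **filters):
        self.calls += 1
        return {"ports": [p for p in self._ports
                          if all(p.get(k) == v for k, v in filters.items())]}

ports = [
    {"id": "p1", "device_id": "vm-1"},
    {"id": "p2", "device_id": "vm-1"},
    {"id": "p3", "device_id": "vm-2"},
]

neutron = FakeNeutron(ports)

# Batched: a single filtered call returns both ports of vm-1, where a naive
# per-port lookup would have cost one round trip per port.
vm1_ports = neutron.list_ports(device_id="vm-1")["ports"]
vm1_ids = [p["id"] for p in vm1_ports]
```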
14:51:52 <dmellado> irenab: btw, regarding the dragonflow kuryr gate
14:51:56 <dmellado> sorry on the delay and so
14:52:07 <dmellado> I'll be somehow quite busy during this week
14:52:16 <dmellado> but I do plan to catch oanson next week
14:52:23 <irenab> dmellado: not that urgent, but my idea was to add gate in dragonflow for kuryr integration
14:52:26 <dmellado> oanson: I've seen that you'll be at the OSS, won't you?
14:52:44 <irenab> dmellado: he is
14:52:58 <dmellado> irenab: ack, then I'll have a talk with him there and will follow up with you too
14:53:10 <irenab> dmellado: will be great if you can cover it with him next week
14:53:23 <dmellado> that said, if I get out of the zombie state when I get out of the aircraft
14:53:25 <dmellado> xD
14:53:52 <irenab> dmellado: I am sure you will
14:54:14 <dmellado> irenab: heh, my trip is about 24.5 hours so let's see ;)
14:54:39 <dmellado> all right folks, thanks for attending! thanks frickler and haleyb for the pointers too!
14:55:04 <irenab> so no weekly next week?
14:55:09 <dmellado> oh, yeah
14:55:21 <dmellado> I'd say nope, at least from apuimedo's and my side
14:55:49 <dmellado> #action send email to ML about no meeting next week (apuimedo|dmellado)
14:55:52 <irenab> ok, I guess we can sync on the channel
14:55:59 <ltomasbo> ok, thanks!
14:56:22 <irenab> dmellado: enjoy the summit
14:56:32 <dmellado> so if nobody has anything more to discuss I'll close the meeting ;)
14:56:34 <dmellado> thanks irenab!
14:56:38 <dmellado> #endmeeting