14:00:41 <apuimedo> #startmeeting kuryr
14:00:42 <openstack> Meeting started Mon Aug 29 14:00:41 2016 UTC and is due to finish in 60 minutes.  The chair is apuimedo. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:45 <openstack> The meeting name has been set to 'kuryr'
14:00:57 <apuimedo> Hello everybody and welcome to yet another Kuryr meeting
14:01:03 <pablochacin> Hello!
14:01:03 <apuimedo> who's here for the show?
14:01:05 <vikasc> o/
14:01:06 <devvesa> o/
14:01:12 <tonanhngo> o/
14:01:12 <pablochacin> o/
14:01:12 <limao_> o/
14:01:13 <banix> o/
14:01:15 <garyloug> o/
14:02:03 <apuimedo> I like this attendance level :-)
14:02:34 <apuimedo> #info pablochacin, vikas, devvesa, tonanhngo, limao, banix, garyloug and apuimedo present
14:02:41 <apuimedo> Let's get started
14:02:46 <apuimedo> #topic kuryr-lib
14:03:05 <apuimedo> we're almost done with the split (sorry it took so long)
14:03:30 <apuimedo> limao and I sent some patches to remove wrong requirements today
14:03:59 <apuimedo> and after that, I think we can make a 1.0.0 release, then make a 2.0.0 when we have the rest/rpc
14:04:26 <apuimedo> #link https://review.openstack.org/362070
14:04:37 <apuimedo> #link https://review.openstack.org/361972
14:04:46 <apuimedo> #link https://review.openstack.org/361937
14:05:24 <apuimedo> #info vikasc has patches ongoing for adding the pieces for container-in-vm
14:05:38 <apuimedo> #link https://review.openstack.org/361937
14:05:50 <apuimedo> #link https://review.openstack.org/362023
14:06:23 <apuimedo> are we all in agreement to make a very short lived (a couple of weeks) 1.0.0 release before we add the rest/rpc?
14:06:42 <banix> yes
14:06:46 <limao_> +1
14:06:50 <vikasc> +1
14:07:06 <apuimedo> very well
14:07:34 <apuimedo> so once the three patches I put above are merged, I'll try a deployment and if it works, I'll send the tag
14:08:11 <apuimedo> #info There's agreement on having a very short lived (two-three weeks) 1.0.0 openstack/kuryr release before we add rpc/rest
14:08:34 <apuimedo> #info apuimedo is working on packaging for centos/fedora
14:08:53 <apuimedo> the patch will be a submission to openstack-packages/kuryr
14:09:44 <apuimedo> #info note that openstack-packages/* do not seem to be built with py3 as of now, so that means we'll need to put in some work before kuryr-kubernetes can use those packages
14:10:02 <apuimedo> anything else about openstack/kuryr?
14:12:52 <apuimedo> alright, that's a "move along!"
14:12:58 <apuimedo> #topic kuryr-libnetwork
14:13:53 <apuimedo> #info vikasc has greatly improved the configuration and option listing
14:14:19 <apuimedo> #action apuimedo to rebase the keystone v3 support on top of this to unblock kolla
14:14:57 <apuimedo> limao_: what's the status of https://review.openstack.org/#/c/355714/
14:14:59 <apuimedo> ?
14:15:50 <limao_> Hi apuimedo, I'm modifying the UTs; when I tested, I found that the gate-install-dsvm-kuryr-libnetwork-nv job doesn't work
14:16:08 <apuimedo> limao_: did you identify the issue?
14:16:15 <limao_> yes, https://review.openstack.org/#/c/361841/
14:16:22 <limao_> and https://review.openstack.org/#/c/361795/
14:16:43 <limao_> the second one is for the missing kuryr.log
14:17:01 <limao_> after the first one is merged, I will rebase the patch
14:18:04 <apuimedo> #link https://review.openstack.org/#/c/361841/
14:18:12 <apuimedo> #link https://review.openstack.org/#/c/361795/
14:18:14 <apuimedo> thanks limao_
14:18:36 <apuimedo> #action apuimedo to review limao's kuryr-libnetwork fixes
14:18:53 <apuimedo> banix: any update or issues about kuryr-libnetwork?
14:19:29 <vikasc> apuimedo, should we keep kuryr_libnetwork/common?
14:19:34 <banix> no the only thing is adding support for keystonev3; which is a lib issue I suppose.
14:20:29 <apuimedo> banix: it's a lib issue. Now that we fixed the config generation, I'll rebase and push
14:20:36 <apuimedo> vikasc: I'm in favor of dropping it
14:20:56 <vikasc> limao_,
14:21:04 <apuimedo> My understanding is that it would make sense to have a kuryr_libnetwork/common if we had a plugin architecture for kuryr-libnetwork
14:21:04 <vikasc> banix,
14:21:06 <apuimedo> let's say
14:21:22 <apuimedo> for example kuryr_rd and kuryr_ipam
14:21:33 <vikasc> apuimedo, makes sense
14:21:36 <banix> makes sense
14:21:51 <vikasc> there is no further branching
14:22:00 <apuimedo> but with how interdependent they are, I think it's fine to put what we have in 'common' in kuryr_libnetwork
14:24:17 <apuimedo> #action apuimedo to take the 'common' question to the ML and merge its removal by next week if there is no strong counter-argument
14:24:34 <apuimedo> anything else about kuryr-libnetwork?
14:24:45 <vikasc> yes
14:25:39 <vikasc> for getting the trunk port ID in the VM, I am planning to add a variable with the trunk port's IP in the config file
14:25:49 <vikasc> and then filter using the IP
14:26:07 <vikasc> the trunk port ID will be used to create subports
14:26:20 <vikasc> is there any other better way?
14:26:27 <apuimedo> vikasc: not for libnetwork
14:26:38 <apuimedo> for kuryr you could retrieve it from the scheduler status info
14:26:51 <apuimedo> but for libnetwork I think that is it
14:27:11 <apuimedo> so for now, let's put the configuration option in openstack/kuryr-libnetwork
14:27:19 <vikasc> i was also thinking similar.
14:27:43 <vikasc> that's why I raised it in the kuryr-libnetwork discussion
14:27:48 <vikasc> thanks
14:29:47 <apuimedo> :-)
14:29:49 <apuimedo> thanks vikasc
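[editor's note: a minimal sketch of the approach vikasc describes above, assuming a config option holds the trunk port's IP and the Neutron port list is filtered by that IP to recover the trunk port ID for subport creation. All names (`TRUNK_PORT_IP`, `find_trunk_port_id`) are illustrative, not actual kuryr-libnetwork code, and the port list stands in for a real `neutron.list_ports()` response:]

```python
# Illustrative sketch only -- names and data are assumptions, not kuryr code.

TRUNK_PORT_IP = '10.0.0.5'  # would come from the proposed config option


def find_trunk_port_id(ports, trunk_ip):
    """Return the ID of the port that owns trunk_ip, or None."""
    for port in ports:
        if any(ip['ip_address'] == trunk_ip for ip in port['fixed_ips']):
            return port['id']
    return None


# Stubbed stand-in for a Neutron port listing
ports = [
    {'id': 'aaaa', 'fixed_ips': [{'ip_address': '10.0.0.4'}]},
    {'id': 'bbbb', 'fixed_ips': [{'ip_address': '10.0.0.5'}]},
]
print(find_trunk_port_id(ports, TRUNK_PORT_IP))  # -> bbbb
```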
14:29:56 <apuimedo> #topic kuryr-kubernetes
14:30:46 <apuimedo> #info devvesa's first upstreaming patch was merged today: https://review.openstack.org/359811
14:30:50 <apuimedo> thanks a lot devvesa
14:31:02 <devvesa> thanks to you for the reviews and good feedback!
14:31:10 <apuimedo> #info devvesa has sent the aio and asyncio loop patches as well
14:31:18 <apuimedo> #link https://review.openstack.org/360376
14:31:33 <apuimedo> #link https://review.openstack.org/360629
14:32:04 <apuimedo> the game plan is to get these merged, then proceed with the watchers and then the translators, is that right, devvesa?
14:32:14 <devvesa> Yes, and unit tests
14:32:31 <devvesa> I'm still thinking which relationship should be between watchers and translators
14:32:48 <vikasc> relationship?
14:32:56 <apuimedo> which reminds me that
14:33:00 <devvesa> Should the watcher call the translator?
14:33:06 <apuimedo> #action apuimedo to update the k8s devref
14:33:09 <devvesa> Should the translator have the watcher as attribute?
14:33:21 <apuimedo> and we should have the discussion there
14:33:36 <devvesa> Ok. Let's have it :)
14:33:46 <apuimedo> but as an intro, my idea is that when you are scheduling the watchers
14:33:57 <apuimedo> before scheduling them
14:34:11 <apuimedo> the service start registers the filters and translators for each watcher
14:34:20 <apuimedo> then they get scheduled
14:34:29 <apuimedo> very explicit and tidy IMHO
14:34:45 <devvesa> so you initialize the watcher with the translators
14:34:52 <devvesa> that should call on each event
14:34:57 <apuimedo> right
14:35:02 <apuimedo> that was my idea
14:35:08 <devvesa> sounds good
14:35:18 <devvesa> I will skip filters now, we can introduce them later
14:35:29 <apuimedo> sure, it's just a matter of composing methods
14:35:55 <devvesa> Also I think that it's the watcher who has to update the annotations on K8s
14:36:13 <apuimedo> I think the ideal would be that both watchers, filters and translators can all be simple methods
14:36:29 <apuimedo> but if one wants to have a class instance play the part
14:36:35 <apuimedo> making it callable should suffice
14:36:49 <pablochacin> apuimedo: what you mean by scheduled?
14:36:57 <apuimedo> devvesa: that's fine for me
14:37:12 <apuimedo> pablochacin: telling asyncio to start them
14:37:36 <pablochacin> ok. Have you considered more than one filter/translator?
14:37:39 <apuimedo> devvesa: not only fine in fact, I think it's the best way
14:37:41 <apuimedo> :P
14:37:49 <apuimedo> sure
14:37:55 <pablochacin> good
14:38:17 <apuimedo> you should be able to have multiple filters and translators for each event type
14:38:23 <apuimedo> and resource type
14:38:30 <apuimedo> it will simplify the logic a lot
14:38:34 <pablochacin> devvesa the watcher updating the annotation?
14:38:57 <devvesa> pablochacin: yes. or create another entity (which I think now is not necessary)
14:38:58 <apuimedo> pablochacin: this way, the neutron translators do not need to know anything about k8s
14:39:06 <pablochacin> uhm
14:39:07 <devvesa> yes
14:39:09 <apuimedo> so it is much simpler to unit test and reason about them
14:39:16 <devvesa> watcher are on K8s side, translators on Neutron side
14:39:23 <apuimedo> right
14:39:32 <devvesa> (we can change the names if you feel more comfortable, I am bad at names)
14:39:36 <apuimedo> the idea is that you can register the filter/translator under a name
14:39:39 <apuimedo> for example
14:39:44 <pablochacin> then translators have to communicate back to watchers
14:39:51 <apuimedo> 'kuryr.org/port'
14:40:01 <apuimedo> then, when the watcher calls the translator
14:40:07 <apuimedo> whatever dictionary it gets back
14:40:46 <apuimedo> it updates the k8s resource with key 'kuryr.org/port' and the value is the serialization of what it gets back from the filter/translator
14:41:17 <apuimedo> pablochacin: the watcher does yield from (await) on the filter/translator
14:41:27 <apuimedo> and it will get the dict back
14:41:42 <apuimedo> I think it shouldn't bring issues
14:41:50 <apuimedo> (unmanageable issues)
14:42:12 <devvesa> what i don't like is that the watcher will need the translator instances and the translators will need the watcher instances
14:42:46 <devvesa> kind of weird initializations
14:43:00 <pablochacin> devvesa, why the translator will need the watcher?
14:43:02 <apuimedo> devvesa: I don't think it needs such thing
14:43:05 <vikasc> sorry, I could not get why the translators will need the watcher instances
14:43:13 <apuimedo> if you take the translator to be a method
14:43:18 <apuimedo> for example
14:43:22 <apuimedo> there's no instance involved
14:43:35 <devvesa> pablochacin, vikasc: from the translator you will need to eventually tell the watcher to update the annotations
14:43:55 <apuimedo> the contract for a translator is simply, receive an event as a parameter, do all my stuff async, and return a dict of the result
14:44:05 <vikasc> devvesa, that would be the same watcher waiting on await
14:44:21 <apuimedo> the translator does not know anything about watchers
14:44:25 <devvesa> vikasc: true...
14:44:49 <vikasc> can i say ..
14:45:15 <apuimedo> vikasc: go ahead
14:45:39 <vikasc> events from k8s result in parallel instances of watchers, but translators don't know or care which watcher has called them
14:45:47 <devvesa> problem with this is that the watcher becomes some kind of controller of the logic. What if we want to watch the same events for two different logics?
14:46:11 <pablochacin> devvesa each logic should be independent
14:46:31 <pablochacin> and therefore the changes they make to annotations also
14:46:32 <devvesa> my point is that maybe watcher should not know about translators either
14:46:50 <apuimedo> devvesa: that's fine
14:46:51 <devvesa> and have some kind of controller that joins both
14:46:52 <pablochacin> let's separate the topics, then
14:47:03 <apuimedo> the watcher can call different filters/translators for each event
14:47:17 <apuimedo> and for each await that returns data, it puts one annotation
14:47:29 <pablochacin> I agree with apuimedo
14:47:34 <pablochacin> it's simpler
14:47:45 <vikasc> apuimedo, +1
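[editor's note: the contract agreed above -- a translator receives an event, does its work async, and returns a dict, knowing nothing about watchers; the watcher awaits it and writes the serialized result back as an annotation under a registered name like 'kuryr.org/port' -- can be sketched as below. All function and variable names are hypothetical, not actual kuryr-kubernetes code:]

```python
# Illustrative asyncio sketch -- all names are assumptions, not kuryr code.
import asyncio
import json

ANNOTATION_KEY = 'kuryr.org/port'  # registration name from the discussion


async def translate_pod(event):
    # Translator contract: receive an event, do the (here stubbed)
    # Neutron work asynchronously, return a dict with the result.
    # It knows nothing about the watcher that called it.
    return {'port_id': 'stub-port-for-' + event['metadata']['name']}


async def watch(events, translators, annotations):
    # Watcher: for each event, await every registered translator and
    # write each serialized result back as a k8s-style annotation
    # under the name the translator was registered with.
    for event in events:
        name = event['metadata']['name']
        annotations.setdefault(name, {})
        for key, translator in translators.items():
            result = await translator(event)
            annotations[name][key] = json.dumps(result)


annotations = {}
events = [{'metadata': {'name': 'pod-a'}}]
asyncio.run(watch(events, {ANNOTATION_KEY: translate_pod}, annotations))
print(annotations['pod-a'][ANNOTATION_KEY])
```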
14:48:28 <apuimedo> devvesa: if we see that the watcher logic ends up being simpler with another element taking over the annotation updates, we can do that as well
14:48:33 <pablochacin> another way is to use event like communication
14:48:40 <pablochacin> watchers raise update events
14:48:42 <apuimedo> it would be
14:48:55 <apuimedo> you start a watcher and an updater
14:49:00 <pablochacin> translators process them, raise update events (to the annotations)
14:49:20 <pablochacin> like pubsub instead of calls
14:49:23 <devvesa> no, no. I am OK with annotation updates on the watcher; I mean that maybe a controller should manage the inputs/outputs of watchers/translators
14:49:26 <devvesa> and synchronize them
14:49:43 <apuimedo> I'd rather leave synchronization for later
14:49:44 <apuimedo> :P
14:49:50 <apuimedo> after the pods
14:49:54 <devvesa> haha. No, synchronize is not the word
14:50:17 <devvesa> If we can put the business logic in a method that does not need I/O, we may simplify the tests a lot
14:50:32 <devvesa> method/class
14:51:10 <devvesa> watcher <-> controller <-> translator
14:51:26 <pablochacin> dispatcher?
14:51:38 <apuimedo> devvesa: vikasc: pablochacin: I propose we have a separate discussion this week about this
14:51:49 <devvesa> apuimedo: ok
14:51:50 <apuimedo> with video and diagrams
14:51:54 <pablochacin> ok
14:51:57 <apuimedo> cool
14:52:02 <vikasc> awesome
14:52:02 <apuimedo> I'll send an invite
14:52:37 <apuimedo> #info apuimedo to send an invite to discuss the k8s watcher/translator architecture for Wednesday
14:53:10 <apuimedo> #info if anybody wants to join, please ping me
14:53:16 <vikasc> apuimedo, please avoid UTC 14:00 to UTC 15:00
14:53:22 <apuimedo> vikasc: understood
14:53:29 <pablochacin> apuimedo: could you please propose two hours for the meeting?
14:53:29 <apuimedo> thanks
14:53:46 <apuimedo> pablochacin: you mean two different times or a duration of two hours?
14:54:38 <pablochacin> two times and see who can attend, devvesa and I have some meetings
14:55:08 <apuimedo> very well
14:55:11 <apuimedo> thanks
14:55:12 <pablochacin> (16-17 and 18-19, if I recall well)
14:55:28 <apuimedo> oh, I have to bring up another topic
14:55:34 <apuimedo> but we almost ran out of time
14:55:39 <devvesa> bring it fast! :)
14:55:49 <devvesa> thanks all for your ideas and suggestions!
14:55:56 <apuimedo> we should have a discussion about the plan for container-in-vm in k8s
14:56:34 <apuimedo> icoghla from Intel was proposing a short term step while trunk and subports are not ready based on address pairs
14:56:45 <apuimedo> but we need more time to discuss it than remains
14:57:03 <vikasc> apuimedo, any relevant link to this?
14:57:04 <apuimedo> #info let's discuss in #openstack-kuryr about the container-in-vm plan
14:57:08 <apuimedo> vikasc: not yet
14:57:20 <apuimedo> #topic general
14:57:29 <apuimedo> any other topic before we close the meeting?
14:57:40 <tonanhngo> Is the alternate week meeting still on?
14:58:13 <vikasc> it's not happening these days
14:58:32 <tonanhngo> ok
14:58:38 <apuimedo> tonanhngo: we are lacking some volunteer to chair it and attendance
14:58:44 <apuimedo> it was usually just one person
14:58:46 <apuimedo> or two
14:58:48 <apuimedo> :(
14:59:10 <tonanhngo> would it make sense to move to this time slot?
14:59:26 <apuimedo> tonanhngo: I think it's probably the best outcome
14:59:31 <devvesa> +1
14:59:35 <pablochacin> +1
14:59:35 <vikasc> tonanhngo, +1
14:59:39 <hongbin> +1
14:59:42 <limao_> +1
14:59:42 <apuimedo> banix: does that work for you?
14:59:57 <banix> yes
15:00:02 <apuimedo> alright then
15:00:05 <tonanhngo> Thanks!
15:00:05 <apuimedo> I'll update it
15:00:17 <apuimedo> #info all the meetings will happen at 14:00 UTC
15:00:32 <apuimedo> #action apuimedo to send the meeting reservation and ics updates
15:00:41 <apuimedo> thank you all for joining the meeting!
15:00:46 <apuimedo> it was a great one!
15:00:49 <apuimedo> #endmeeting