20:00:02 <johnsom> #startmeeting Octavia
20:00:03 <openstack> Meeting started Wed Mar 14 20:00:02 2018 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:07 <openstack> The meeting name has been set to 'octavia'
20:00:18 <johnsom> Hi folks!
20:00:25 <xgerman_> o/
20:00:26 <longstaff> hi
20:00:30 <johnsom> Let's see how many people show up after the DST change
20:00:32 <cgoncalves> hi
20:00:37 <rm_work> o/
20:00:51 <jniesz> hi
20:00:52 * rm_work uses the strategy of just paying attention to the channel and having no idea when the meeting will actually start
20:00:58 <johnsom> Not bad
20:01:16 <johnsom> That is a bonus of having the meeting in channel
20:01:19 <johnsom> #topic Announcements
20:01:22 <rm_work> yeah i love it
20:01:32 <johnsom> Queens Released!
20:01:43 <johnsom> In case you missed it, queens is out the door.
20:01:55 <johnsom> Thank you all for your contributions to our Queens release
20:02:44 <johnsom> It seems we had a hiccup with some global-requirements, but we will talk about that later in the agenda.
20:02:54 <johnsom> Otherwise it's a pretty good release.
20:03:24 <johnsom> In case you were not able to make the Rocky PTG, I attempted to take notes in the etherpad
20:03:31 <johnsom> #link https://etherpad.openstack.org/p/octavia-ptg-rocky
20:03:49 <johnsom> Once I dig out a bit more I will try to send out a summary e-mail.
20:04:21 <johnsom> Also, the naming for the "S" series is out for a vote. Check your OpenStack e-mail for your voting link.
20:04:37 <johnsom> The theme is around Berlin as that is where the summit will be.
20:05:12 <johnsom> I still have a stack of e-mail windows I need to read, so that is about all of the announcements I have today. Anything I missed?
20:05:52 <johnsom> #topic Brief progress reports / bugs needing review
20:06:47 <johnsom> I am still getting caught up after travelling for two weeks.  Mostly I have been working on gate fixes, catching up on e-mails, expense reports, etc.
20:07:11 <rm_work> yeah the gate currently is ... :/
20:07:21 <johnsom> Also catching up on reviews.  There was a lot of work done while I was in Ireland.
20:07:30 <rm_work> i have a list of patches i'd like eyes on, i guess i can start posting
20:07:53 <johnsom> I have also started to clean out the neutron-lbaas patches.  Some had not been touched in two years, so they very clearly needed to be abandoned.
20:08:41 <rm_work> https://review.openstack.org/549263 and https://review.openstack.org/548989 and https://review.openstack.org/550303
20:09:05 <johnsom> cgoncalves I see you had some stuff in the agenda, feel free to share here.
20:09:05 <rm_work> and https://review.openstack.org/552641 just needs a +2/+A once the gate fixes merge (don't do it yet)
20:10:36 <cgoncalves> I wanted to share that we have recently faced some gate issues that led us to migrate from testr/ostestr to stestr, which is the new tool for running tests in OpenStack. testr is not maintained and ostestr was a wrapper around it; stestr is a fork of testr
20:11:14 <rm_work> i feel like we've done this about 3 different times for 4 different projects <_<
20:11:16 <johnsom> Yep, cool. Funny that we migrated to ostestr like a year ago....
20:11:16 <cgoncalves> octavia and neutron-lbaas are migrated already. we now need to do the same for the client, dashboard and tempest projects
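For reference, the migration is mostly mechanical; a minimal sketch of the tox.ini and .stestr.conf changes involved (the test path here is illustrative, not quoted from the actual patches):

    # tox.ini: replace the (o)stestr invocation
    [testenv]
    commands = stestr run {posargs}

    # .stestr.conf replaces .testr.conf
    [DEFAULT]
    test_path=./octavia/tests/unit
    top_dir=./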
20:12:08 <johnsom> My plan is to address some comments on the tempest plugin patch and update it for PTG discussions.
20:12:08 <cgoncalves> we also faced some other issues, but those turned out to be caused by running jobs in parallel
20:12:18 <johnsom> Then I want to focus on the driver code
20:13:01 <cgoncalves> the other item I put on the agenda was grenade: https://review.openstack.org/#/c/549654/
20:13:30 <cgoncalves> the patch has a depends-on btw
20:13:36 <johnsom> Yes, cool stuff!  I haven't had a chance to look at it, but I'm excited that we are working towards declaring upgradability
20:14:00 <cgoncalves> the grenade job now verifies successful upgrading from queens to master with no dataplane downtime
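For context, an external grenade plugin hangs off the project's devstack tree; roughly the layout such a patch adds (file roles paraphrased here, not quoted from the patch):

    devstack/upgrade/settings      # grenade knobs for the upgrade run
    devstack/upgrade/shutdown.sh   # stop the old-release services
    devstack/upgrade/upgrade.sh    # install new code, migrate the DB, restart
    devstack/upgrade/resources.sh  # create/verify resources across the upgrade,
                                   # which is what checks dataplane continuity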
20:14:35 <cgoncalves> oh, we also started looking at performance/scale
20:14:44 <cgoncalves> Nir has started this with a patch submitted to Rally: https://review.openstack.org/#/c/551024/
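Rally drives such runs from a task file; a minimal sketch with a hypothetical scenario name (the real name is whatever the patch above defines):

    ---
    # "Octavia.create_and_list_loadbalancers" is hypothetical
    Octavia.create_and_list_loadbalancers:
      - runner:
          type: constant
          times: 20
          concurrency: 2
        context:
          users:
            tenants: 2
            users_per_tenant: 2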
20:14:48 <johnsom> Nice!  I am interested to read it and learn about the grenade jobs.
20:15:29 <cgoncalves> we have got baremetal machines internally and we plan to have data by the last week of March
20:15:49 <johnsom> Nice
20:16:36 <johnsom> #topic Other OpenStack activities of note
20:16:55 <johnsom> A few things are falling out of PTG discussions. This is the first one I have had time to read.
20:17:02 <johnsom> #link http://lists.openstack.org/pipermail/openstack-dev/2018-March/128175.html
20:17:02 <cgoncalves> also... (sorry!) soon Octavia will work when the firewall driver is set to openvswitch; until now load balancer creation would fail
20:17:21 <johnsom> Ah, so there is a fix for OVSFW?
20:17:32 <cgoncalves> https://review.openstack.org/#/c/550421/
20:17:57 <cgoncalves> https://review.openstack.org/#/c/550431/ validates the patch
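For anyone hitting this, the setting in question is the neutron OVS agent's firewall driver; a sketch of the relevant config (the exact file depends on the deployment):

    # openvswitch_agent.ini (or the ml2 plugin config file)
    [securitygroup]
    firewall_driver = openvswitch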
20:18:22 <johnsom> Along those lines, there is another issue in neutron DVR that can cause us problems:
20:18:24 <johnsom> #link https://bugs.launchpad.net/neutron/+bug/1753434
20:18:24 <openstack> Launchpad bug 1753434 in neutron "Unbound ports floating ip not working with address scopes in DVR HA " [Undecided,Confirmed]
20:18:29 <johnsom> In case anyone is using DVR
20:19:21 <johnsom> My item of note above was that there is work on a new oslo library for quotas.  I looked for this when I was doing the quota work for Octavia. So, this is good stuff.
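For context, Octavia's existing quota support is configured per project in octavia.conf; a sketch of the defaults section (option names as I recall them; -1 means unlimited):

    [quotas]
    default_load_balancer_quota = -1
    default_listener_quota = -1
    default_member_quota = -1
    default_pool_quota = -1
    default_health_monitor_quota = -1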
20:19:41 <johnsom> I will try to highlight the others next week once I get through my stack of stuff to read.
20:20:37 <johnsom> #topic Open Discussion
20:21:18 <cgoncalves> I added "Gate with lower-requirements.txt and workflow to ensure dependencies bumped in requirements.txt (i.e. a prereq before merging patches?)"
20:21:22 <johnsom> cgoncalves put an item on the agenda about the new lower-constraints process the requirements team is starting
20:21:56 <cgoncalves> we have recently discovered that our requirements.txt is a bit behind what our actual requirements are
20:22:01 <johnsom> This is what I came up with after looking into the issues we had with the two requirements in queens
20:22:18 <cgoncalves> right
20:22:34 <cgoncalves> I would like to discuss ways we could improve this situation
20:22:38 <johnsom> This lower-constraints mechanism seems to be super new and, talking with the requirements team, we might be the first to use it.
20:23:07 <rm_work> well, i still hold that this is a symptom of the way g-r works in openstack, and that we got stuck in kind of a hard spot
20:23:23 <johnsom> My thought is to set up a gate using this lower-constraints file and run through the py27 and functional tests with no-op.
20:23:27 <cgoncalves> in our dev envs and at gate we may not face such issues because we have recent lib versions installed, so we don't detect that requirements.txt gets outdated when we modify code
20:23:31 <xgerman_> yeah, and then there is my unique problem with privsep and the broken msgpack
20:23:42 <rm_work> and that in general, things need to be packaged/deployed using the same stuff we use in testing, which is to say "upper-constraints.txt"
20:23:45 <johnsom> Agreed, it is an oddity in the requirements setup
20:24:17 <cgoncalves> johnsom: functional and no-op are enough?
20:25:26 <johnsom> cgoncalves, I think so.  Do you think we need a full dsvm?
20:25:31 <cgoncalves> I don't know, therefore my question :)
20:25:53 <cgoncalves> one other idea would be using our requirements.txt
20:25:56 <johnsom> It would have caught these two issues
20:26:18 <johnsom> Umm, we are using our requirements.txt....
20:26:23 <xgerman_> yes
20:26:28 <rm_work> requirements.txt is all >=
20:26:43 <rm_work> the issue is that if you don't use u-c values for packages when packaging
20:26:45 <xgerman_> and we use the global requirements indirectly in the gates
20:26:49 <johnsom> Right, it assumes to pull whatever the latest is
20:26:51 <rm_work> and just guess at "something that matches" ...
20:26:53 <cgoncalves> well, but that didn't prevent us from shipping a kind-of broken octavia
20:27:04 <rm_work> cgoncalves: IMO it isn't broken
20:27:17 <rm_work> when I deployed / build images for deploy, i followed guidelines which are to use u-c
20:27:21 <rm_work> and everything was fine
20:27:36 <rm_work> u-c details explicitly which packages are used in testing
20:27:52 <cgoncalves> rm_work: if you clone octavia repo only and follow the docs probably you end up installing from requirements.txt
20:28:04 <rm_work> and are therefore the packages that should be used in deployments
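Concretely, deploying from upper-constraints means passing the branch's constraints file to pip; a sketch (the URL pattern shown is the openstack/requirements repo's stable/queens file as served at the time):

    # from an octavia checkout, with stable/queens constraints applied
    pip install -c "https://git.openstack.org/cgit/openstack/requirements/plain/upper-constraints.txt?h=stable/queens" .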
20:28:05 <rm_work> you also end up installing neutron-lbaas
20:28:16 <rm_work> i wouldn't use our docs as a good example of what to do
20:28:16 <johnsom> Right, but it is valid that we should be making sure G-R gets updated to our bare-minimum requirements.  That is why I am proposing a gate that runs with lower-constraints
20:28:23 <rm_work> see the global openstack docs
20:28:35 <cgoncalves> ok
20:28:38 <cgoncalves> johnsom: +1
20:28:56 <johnsom> cgoncalves Yes, installing with requirements.txt will work correctly and not have an issue
20:29:17 <cgoncalves> johnsom: if we proceed with that, would we block patches from being merged before bumping in g-r?
20:29:55 <cgoncalves> johnsom: in queens you will have issues. their names are: jinja2 and pyOpenSSL :)
20:29:56 <johnsom> Following the docs will lead to a successful install.
20:30:34 <johnsom> cgoncalves no. it will go pull the latest for those two packages, which will succeed
20:31:03 <cgoncalves> meh, assuming that you pull the latest yes
20:31:14 <rm_work> cgoncalves: it WILL pull latest, if you pass requirements.txt to pip
20:31:19 <rm_work> that's what requirements.txt *has*
20:31:20 <johnsom> As for the lower-constraint gate, yes, it would block patches from merging until G-R is updated.
20:31:21 <rm_work> >=
20:31:28 <cgoncalves> if the system already has the minimum required version and that turns out not to be good enough, then no
20:32:00 <rm_work> ah, if you don't use -U and you already have random system python packages installed, then yes, ugh
20:32:01 <cgoncalves> johnsom: +1 for blocking patches
20:32:08 <rm_work> system python needs to DIAF
20:32:20 <rm_work> *system python environment
* cgoncalves looks up DIAF
20:32:39 <rm_work> die in a fire
20:32:49 <cgoncalves> heh :)
20:32:56 <johnsom> I kind of agree that packaged python modules tend to lead to nothing but trouble
20:33:04 <xgerman_> +1
20:33:21 <johnsom> But anyway, we have rat holed a bit here.
20:33:25 <rm_work> absolutely everything should be deployed in a virtualenv, no exceptions
20:33:39 <johnsom> Does anyone have any comments about the lower-constraint gate?  Are we in favor?
20:33:48 <rm_work> i'm not sure i understand how it works
20:33:49 <rm_work> but sure
20:33:56 <cgoncalves> we have to play well with distros, after all the majority of users install from distro packages ;)
20:34:07 <cgoncalves> in favor
20:34:18 <johnsom> rm_work it will install using the minimum versions of the packages in requirements.txt
20:34:24 <rm_work> yes, you can play well with distros by ignoring their system packages and using a virtualenv :P
20:34:38 <rm_work> this doesn't impact the distro in any way
20:34:53 <rm_work> and is in fact very friendly to it by ignoring it altogether and being extremely low impact
20:34:58 <johnsom> I know of distros that ship venvs too...
20:35:23 <cgoncalves> mobing on..... :P
20:35:29 <cgoncalves> *moving
20:35:32 <johnsom> grin
20:36:07 <johnsom> Ok, I will put together a gate, non-voting for now so we can try this out. I'm a bit nervous as it's "new" for requirements team.
20:37:01 <johnsom> Probably will need a py27 and py35, but maybe just start with py27
20:37:32 <johnsom> We don't really have any version specific requirements if I remember right.
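For the record, the shape being proposed for this is a new tox environment that installs against exact minimum pins (a sketch; the version pins are illustrative, using the two packages that bit us in queens):

    # tox.ini
    [testenv:lower-constraints]
    basepython = python3
    deps =
      -c{toxinidir}/lower-constraints.txt
      -r{toxinidir}/test-requirements.txt
      -r{toxinidir}/requirements.txt

    # lower-constraints.txt pins every entry from requirements.txt
    # at its declared minimum, e.g. (illustrative versions):
    jinja2==2.10
    pyOpenSSL==17.1.0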
20:38:20 <johnsom> Other topics for open discussion today?
20:38:34 <cgoncalves> yes
20:38:45 <johnsom> lol
20:38:57 <cgoncalves> I would like to bring up the topic of backports
20:39:41 <cgoncalves> could we have devs also proposing backporting stuff to stable/* ?
20:40:10 <cgoncalves> so far I have got the impression that it is best-effort and occasional
20:40:14 <johnsom> Typically that is handled by a stable maintenance sub-team.
20:40:47 <rm_work> cgoncalves: congrats on being the first member of the stable maintenance subteam for Octavia!
20:40:49 <cgoncalves> hmm I don't recall seeing anyone from that sub-team proposing
20:40:49 <johnsom> We however are a small team, so that doesn't really exist except for cgoncalves volunteering
20:40:51 * rm_work claps for cgoncalves
20:40:59 <cgoncalves> rm_work: lol
* johnsom Congratulates cgoncalves
20:41:24 <cgoncalves> I'm okay with that
20:41:37 <xgerman_> yep, I would love the stable subteam to backport some of the recent hm fixes to Pike
20:41:43 <cgoncalves> because down the road it will save me lots of time with customer tickets
20:41:48 <johnsom> Anyway, the trick here is that backports need to be proposed after the patch has merged on master. So a bit async
20:42:32 <johnsom> Yeah, anyone can propose a backport.  I have kind of been going on the "If someone needs it" approach (feel free to fire the PTL).
20:42:57 <xgerman_> if we do we will do it by tweet
20:42:58 <cgoncalves> I'd suggest, whenever possible, to leave a comment or even add a tag to the commit message that the patch is backport material
20:43:37 <johnsom> The key part is making sure it meets the policy:
20:43:39 <johnsom> #link https://docs.openstack.org/project-team-guide/stable-branches.html
20:43:46 <cgoncalves> sure
20:45:42 <johnsom> So, yes, it would be nice to tag patches with backport potential.  Please feel free to propose things.  Please propose them after the master patch has merged.
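The usual gerrit mechanics for that, once the master patch has merged (branch name illustrative):

    git checkout -t origin/stable/queens
    git cherry-pick -x <master-commit-sha>
    # resolve any conflicts, keep the original Change-Id in the commit message
    git review stable/queens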
20:46:15 <johnsom> cgoncalves Do you have a cadence you would like to see for dot releases of the stable branches?
20:46:18 <cgoncalves> will do
20:46:45 <cgoncalves> not really
20:46:56 <johnsom> Again, my approach has been roughly at release cycles and if folks request them.
20:47:15 <johnsom> ok
20:47:15 <xgerman_> yeah, I usually run off a SHA
20:47:28 <cgoncalves> on our side we like to be proactive and backport whenever applicable
20:47:37 <xgerman_> ok, sounds good
20:47:49 <johnsom> Yeah, not a problem.
20:48:24 <johnsom> Ok, other topics today?
20:48:44 * cgoncalves feels observed....
20:49:08 <johnsom> Hahaha
20:49:32 <cgoncalves> no
20:51:20 <johnsom> I need to go dig into the periodic stable jobs too. Not sure if those ever got put back after the zuulv3 stuff
20:51:40 <johnsom> We had a nice health dashboard I used to track those, but that is now gone.
20:51:57 <cgoncalves> http://status.openstack.org/openstack-health/
20:52:24 <AJaeger> johnsom: periodic-stable-jobs-neutron run for neutron-lbaas
20:53:05 <cgoncalves> AJaeger: hi. do you have an ETA for grenade zuul v3 native job?
20:53:14 <AJaeger> check also http://zuul.openstack.org/builds.html?project=openstack%2Fneutron-lbaas&pipeline=periodic-stable
20:53:24 <AJaeger> cgoncalves: no - best ask QA team
20:53:25 <johnsom> Ah, it's working again. cool. Yeah, looks like the stable periodics are gone
20:53:35 <cgoncalves> AJaeger: ack
20:53:53 <AJaeger> johnsom: those get renamed, we might have forgotten to update the dashboard ;(
20:54:10 <johnsom> Ah, that is why I couldn't find them. periodic-stable.
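For reference, attaching those renamed jobs is a matter of listing the periodic-stable template in the project's zuul configuration; a sketch using the template AJaeger mentioned:

    # .zuul.yaml (or the project-config equivalent)
    - project:
        templates:
          - periodic-stable-jobs-neutron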
20:54:45 <johnsom> Hmm, wish it had the branch in there somewhere
20:55:05 <AJaeger> johnsom: not in that display yet ;(
20:55:29 <rm_work> looks like https://review.openstack.org/#/c/552978/ is about to pass and needs +A
20:55:33 <johnsom> At least they are all passing.... grin
20:55:50 <rm_work> there it goes
20:55:52 <AJaeger> so, something wrong with them? ;)
20:55:56 <rm_work> +A plz
20:56:04 <johnsom> Yeah, good question.  hahaha
20:56:26 <johnsom> Ok, a few minutes left, anything else today?
20:57:06 <rm_work> yes, someone, right now, should +A https://review.openstack.org/#/c/552978/ :P
20:57:11 <johnsom> Ok, thanks folks
20:57:12 <johnsom> #endmeeting