15:00:56 <bnemec> #startmeeting oslo
15:00:56 <openstack> Meeting started Mon May 13 15:00:56 2019 UTC and is due to finish in 60 minutes.  The chair is bnemec. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:01:00 <openstack> The meeting name has been set to 'oslo'
15:01:01 <bnemec> moguimar: Yes :-)
15:01:20 <bnemec> courtesy ping for amotoki, amrith, ansmith, bnemec, dims, dougwig, e0ne
15:01:20 <bnemec> courtesy ping for electrocucaracha, garyk, gcb, haypo, hberaud, jd__, johnsom
15:01:20 <bnemec> courtesy ping for jungleboyj, kgiusti, kragniz, lhx_, moguimar, njohnston, raildo
15:01:20 <bnemec> courtesy ping for redrobot, sileht, sreshetnyak, stephenfin, stevemar, therve, thinrichs
15:01:20 <bnemec> courtesy ping for toabctl, zhiyan, zxy, zzzeek
15:01:31 <hberaud> o/
15:01:35 <kgiusti> o/
15:01:53 <johnsom> o/
15:01:54 <moguimar> o/
15:03:15 <bnemec> #link https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_for_Next_Meeting
15:03:28 <bnemec> Forgot the agenda link. It's been too long since we had this meeting. :-)
15:03:35 <bnemec> #topic Red flags for/from liaisons
15:03:36 <ansmith> o/
15:03:58 <moguimar> I don't think we have anything from the Barbican side
15:03:58 <hberaud> lol
15:04:13 <jungleboyj> o/
15:04:22 <bnemec> I think the main thing is Bandit, but that's not really Oslo.
15:04:28 <bnemec> I have that as a main topic later.
15:04:33 <jungleboyj> No flags from Cinder that I know of.  :-)
15:04:38 <johnsom> Nothing from Octavia
15:05:03 * johnsom notes the bandit issue was an easy fix for the Octavia team. It's just a path parsing change
15:05:57 <bnemec> Okay, sounds like we can move on then.
15:06:01 <bnemec> #topic Releases
15:06:26 <bnemec> Releases happened last week, so everything that merged up until then should be out there.
15:06:34 <bnemec> Including stable branches.
15:06:44 <bnemec> Business as usual for this week too.
15:07:13 <bnemec> I will note that I added hberaud as a new Oslo release liaison since Doug is no longer active in Oslo.
15:07:45 <bnemec> I need to let the release team know that as well.
15:07:58 <bnemec> #action bnemec to contact release team about new Oslo liaison
15:08:29 <bnemec> That's it for releases.
15:08:31 <bnemec> #topic Action items from last meeting
15:08:40 <bnemec> "ansmith_ to review https://review.opendev.org/#/c/638248"
15:08:56 <bnemec> Done, thanks ansmith.
15:09:02 <bnemec> "bnemec to send email about meeting cancellation."
15:09:06 <bnemec> Done
15:09:15 <bnemec> And if not, it's way too late. :-)
15:09:38 <bnemec> That was it for action items.
15:09:40 <openstackgerrit> Hervé Beraud proposed openstack/oslo.messaging master: Fix switch connection destination when a rabbitmq cluster node disappear  https://review.opendev.org/656902
15:09:49 <bnemec> #topic Courtesy pings
15:10:12 <bnemec> If you're in the courtesy ping list, please note that we'll be moving away from courtesy pings in the near future.
15:10:23 <bnemec> This came out of the discussion in the PTL tips and tricks session in Denver.
15:10:28 <bnemec> #link https://etherpad.openstack.org/p/DEN-ptl-tips-and-tricks
15:10:44 <bnemec> Basically courtesy pings are considered bad IRC netiquette.
15:11:05 <bnemec> The preferred method is for anyone interested in a meeting to add a custom notification on "#startmeeting oslo".
15:11:34 <moguimar> isn't that irc client dependent?
15:11:44 <bnemec> I'll keep doing courtesy pings for another week or two to give people a chance to see the change.
15:11:46 * jungleboyj missed this discussion
15:11:55 <bnemec> moguimar: Yes, but any decent IRC client should be able to do it.
15:12:10 <jungleboyj> Is there a pointer to instructions on how to do that?
15:12:24 <bnemec> I guess if that's a blocker for anyone, please raise it on the list.
15:12:36 <moguimar> but then if you change IRC clients you're back to step 0
15:12:38 <moguimar> =(
15:12:40 <bnemec> jungleboyj: Not that I know of. I actually need to figure out how to do it myself.
15:12:55 <jungleboyj> Ok.
15:13:20 <bnemec> I also need to figure out how to test it since it presumably won't highlight on my own messages.
15:14:16 <jungleboyj> :-)
15:14:17 <bnemec> I'll send an email to the list about this. We can maybe have a deeper discussion there where the folks who objected to courtesy pings can be involved.
15:14:39 <moguimar> good
15:14:40 <bnemec> We might need some dev docs about adding custom highlights in various popular IRC clients.
15:14:41 <jungleboyj> bnemec:  Sounds like a good plan.  I will do some investigation as well.
15:14:57 <jungleboyj> bnemec:  Yeah, was just looking for info on IRC Cloud
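For reference, the custom notification boils down to one client setting. A minimal sketch, assuming WeeChat (the option name is WeeChat's; irssi, IRCCloud and other clients have equivalent highlight settings):

    /set weechat.look.highlight_regex "#startmeeting oslo"

With that set, any line containing "#startmeeting oslo" in a joined channel triggers a highlight, so no courtesy ping is needed.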
15:15:07 <bnemec> #action bnemec to send email to list about courtesy pings
15:15:57 <bnemec> Okay, look for that later today.
15:16:03 <bnemec> #topic Bandit breakage
15:16:25 <bnemec> So we have a bunch of different patches proposed to deal with this.
15:16:35 <bnemec> Some capping it, some tweaking the exclusion list.
15:16:52 <bnemec> I'd prefer to find a single solution so we don't have slightly different configs all over Oslo.
15:17:09 <bnemec> #link https://github.com/PyCQA/bandit/pull/489
15:17:19 <johnsom> Octavia just needed this:
15:17:21 <johnsom> #link https://review.opendev.org/#/c/658476/1/tox.ini
15:18:58 <bnemec> Apparently that works well if you have a fairly simple test tree.
15:19:07 <kgiusti> I followed Doug's patch https://review.opendev.org/#/c/658674/
15:19:12 <bnemec> I know they had issues in Keystone because the wildcard didn't handle nested test directories or something.
15:19:30 <bnemec> I don't know how much of that we have in Oslo, but it could be an issue.
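For context, the 1.6.0 breakage is in how -x exclusions are matched against paths, so a bare directory name stops excluding the test tree. A minimal sketch of the kind of tox.ini tweak johnsom describes, with illustrative package and glob values rather than Octavia's actual diff:

    [testenv:bandit]
    # bandit 1.6.0 needs a glob that matches the full path of each test
    # directory; a bare "-x tests" no longer excludes anything.
    commands = bandit -r octavia -ll -ii -x '*tests*'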
15:19:52 <hberaud> https://github.com/PyCQA/bandit/issues/488
15:20:02 <bnemec> It's also worth noting that Bandit intends to fix this behavior, so we need to make sure whatever we do will work with 1.6.1.
15:20:06 <kgiusti> I'd prefer just to blacklist 1.6.0 tho
15:20:28 <bnemec> kgiusti: I'm inclined to agree.
15:20:36 <moguimar> then we can pick Doug's fix for now
15:21:04 <bnemec> Although I wonder if we should do a != exclusion instead so we don't have to go through and uncap when the fix is released.
15:21:07 <moguimar> blacklisting just 1.6.0 for now might make the error pop back up again in the future
15:21:59 <hberaud> moguimar: +1 but we can wait for 1.6.1
15:22:09 <moguimar> I'd put < 1.6.0 and keep an eye on bandit
15:22:41 <bnemec> On that note, it would be nice if we could do some testing with the proposed fix in bandit.
15:23:37 <hberaud> FYI we also have some issues related to sphinx requirements
15:23:38 <bnemec> Not only would it ensure that our stuff works with the eventual fix, it might help get the fix merged in the first place.
15:23:53 <hberaud> https://review.opendev.org/#/c/658812/
15:23:59 <hberaud> https://review.opendev.org/#/c/650505/
15:24:13 <hberaud> related to this bump ^^^^
15:25:13 <bnemec> hberaud: Okay, one dependency problem at a time. :-)
15:25:20 <bnemec> I've added sphinx as another topic after this one.
15:25:30 <hberaud> bnemec: ack
15:25:54 <bnemec> We need an action on the bandit stuff.
15:26:19 <bnemec> Maybe two patches per repo: one to cap, one to uncap with an exclusion of 1.6.0?
15:26:47 <bnemec> We merge the cap and then recheck the uncap once bandit releases the fix.
15:26:59 <bnemec> That way if the fix doesn't work we aren't blocked on anything but the uncap patch.
15:27:16 <bnemec> The downside is if we forget to merge an uncap patch, but that's not the end of the world.
15:27:24 <bnemec> Thoughts?
15:27:26 <hberaud> +1
15:27:35 <kgiusti> +1
15:28:18 <moguimar> +1
15:28:33 <bnemec> Okay, we'll go with that.
15:28:35 <stephenfin> bnemec: Any reason not to just change paths?
15:28:48 <stephenfin> Assuming they don't break that with 1.6.1, of course
15:28:50 <bnemec> stephenfin: It's not clear to me what the behavior in 1.6.1 is going to be.
15:29:07 <bnemec> The current behavior was not intended and I don't want to rely on it.
15:29:18 <stephenfin> Ack, +1 from me too so
15:29:31 <bnemec> Also, apparently it doesn't work in more complex repos.
15:29:37 <bnemec> Okay, thanks.
15:30:13 <bnemec> #action push cap, uncap patches to projects blocked by bandit
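A sketch of what the two patches per repo could look like in test-requirements.txt, assuming the existing entry is an uncapped bandit line (the >=1.1.0 floor is illustrative):

    # Patch 1 (merge now): cap to unblock the linters job.
    bandit>=1.1.0,<1.6.0 # Apache-2.0
    # Patch 2 (recheck and merge once 1.6.1 is released): uncap, exclude only 1.6.0.
    bandit!=1.6.0,>=1.1.0 # Apache-2.0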
15:30:45 <bnemec> I guess this might bite us if they can't restore the old behavior entirely.
15:30:56 <bnemec> But I'm not going to borrow trouble. :-)
15:31:12 <bnemec> I'll update the list about our plans since I know basically everyone is dealing with this.
15:31:25 <bnemec> Although most teams don't have quite so many projects to manage.
15:31:37 <bnemec> #action bnemec to send email about bandit plans
15:31:53 <bnemec> #topic Sphinx requirements
15:32:02 <bnemec> hberaud: You're up.
15:33:03 <hberaud> so I guess my patch is not the right solution to fix the issue https://review.opendev.org/658812
15:33:36 <hberaud> I know that the error has been occurring on many projects for a few days now
15:34:13 <bnemec> stephenfin: This may also be relevant to your interests.
15:34:25 <hberaud> I guess the CI requirements check fails due to this one => https://review.opendev.org/#/c/650505/
15:34:48 <bnemec> hberaud: I think maybe you want python_version>='3.4' instead of listing them.
15:35:08 <bnemec> "...does not match "python_version>='3.4'"
15:35:10 <hberaud> I suppose
15:35:17 <stephenfin> hberaud: Which repo is failing?
15:35:34 <stephenfin> I've seen some failures on projects that aren't using constraints
15:35:34 <hberaud> openstack/murano
15:35:53 <hberaud> and on other projects too I guess, but I'm not sure yet
15:35:56 <bnemec> Oh, wait. It's not complaining about lower-constraints, it's complaining about doc/requirements.txt.
15:37:26 <hberaud> oh, are you sure?
15:37:39 <bnemec> hberaud: http://logs.openstack.org/12/658812/1/check/requirements-check/04a5cd8/job-output.txt.gz#_2019-05-13_13_38_31_400156
15:37:49 <bnemec> That's where the errors are coming from.
15:38:22 <hberaud> http://logs.openstack.org/18/658818/1/check/requirements-check/c8b8958/job-output.txt.gz#_2019-05-13_13_45_49_007565
15:39:40 <bnemec> hberaud: Which patch is that? It's not https://review.opendev.org/658812
15:40:29 <hberaud> related to => http://logs.openstack.org/18/658818/1/check/requirements-check/c8b8958/ https://review.opendev.org/#/c/658818/
15:40:29 <bnemec> Oh, it's in the URL.
15:40:38 <hberaud> np
15:41:03 <bnemec> That one is failing on test-requirements.txt.
15:41:32 <hberaud> yep
15:41:35 <bnemec> We most likely need to fix https://github.com/openstack/oslo.service/blob/master/test-requirements.txt#L14
15:42:02 <hberaud> yeah
15:43:57 <bnemec> Okay, "Similarly sphinx 2.0.0 drop py27 support do express that in global-requirements."
15:44:01 <bnemec> From https://opendev.org/openstack/requirements/commit/00b2bcf7d664b1526b4eefe157c33113206d6251
15:44:20 <bnemec> So we need to tweak the requirements to cap sphinx on py27.
15:44:45 <hberaud> ok
15:45:08 <bnemec> Probably we need to split it into two lines, one for python_version=='2.7' and one for >='3.4'.
15:45:16 <bnemec> Let's take a look at that after the meeting.
15:45:25 <hberaud> ok
15:45:32 <bnemec> #action bnemec and hberaud to sort out sphinx requirements
15:45:41 <hberaud> ack
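A sketch of the split being discussed for test-requirements.txt, assuming the entry is currently a single uncapped sphinx line; the exact version exclusions are illustrative:

    sphinx!=1.6.6,!=1.6.7,>=1.6.2,<2.0.0;python_version=='2.7' # BSD
    sphinx!=1.6.6,!=1.6.7,>=1.6.2;python_version>='3.4' # BSD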
15:46:02 <bnemec> #topic PBR unit test flakiness
15:46:15 <bnemec> In other good news, it's almost impossible to merge anything in PBR because the unit tests are super flaky in the gate.
15:46:27 <hberaud> oh
15:46:30 <bnemec> It seems to be related to the WSGI wrapper unit tests.
15:47:03 <bnemec> I wanted to bring it up in the hopes that someone would be able to investigate.
15:47:14 <bnemec> Unfortunately, IIRC these failures don't reproduce for me locally.
15:47:45 <bnemec> And I never got any further than running the pbr unit tests in a loop last time I looked at this.
15:48:19 <hberaud> I'll take a look on my side too
15:48:36 <bnemec> hberaud: Thanks
15:48:46 <bnemec> #action hberaud to investigate pbr unit test flakiness
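For anyone picking this up, a minimal sketch of the "run the unit tests in a loop" approach mentioned above; the tox environment name is an assumption:

    # Rerun the pbr unit tests repeatedly and stop at the first failure.
    for i in $(seq 1 50); do tox -e py37 || break; done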
15:48:59 <bnemec> #topic Weekly Wayward Review
15:50:07 <bnemec> #link https://review.opendev.org/#/c/618569/
15:50:38 <bnemec> hberaud: Looks like you had been involved in this one too.
15:51:03 <hberaud> yep
15:51:41 <bnemec> I think your changes were just to the docstrings, so I'd be okay with you +2'ing it if it looks good now.
15:51:59 <hberaud> ok I'll double check
15:52:12 <bnemec> It's pbr so it may not actually merge, but at least we can get it approved.
15:52:26 <hberaud> ack
15:52:30 <bnemec> #action hberaud to review https://review.opendev.org/#/c/618569/
15:52:38 <bnemec> #topic Open discussion
15:52:46 <bnemec> Okay, that was it for topics.
15:52:53 <bnemec> We have a few minutes left if there's anything else.
15:53:11 <hberaud> if someone can take a look at => https://review.opendev.org/645208 and https://review.opendev.org/647492
15:53:45 <hberaud> (the second is pbr too)
15:54:01 <bnemec> Yeah, that would be next week's wayward review if it doesn't merge before then. :-)
15:54:17 <hberaud> ack
15:54:49 <bnemec> hberaud: Approved https://review.opendev.org/#/c/645208
15:55:09 <hberaud> bnemec: thx
15:55:21 <bnemec> Note that pike is EM now so it won't get released.
15:55:32 <hberaud> bnemec: ack
15:57:01 <bnemec> I also +2'd the ocata backport.
15:57:11 <hberaud> nice
15:57:15 <bnemec> Anything else? We have 3 minutes.
15:57:38 <moguimar> not on my end
15:58:11 <hberaud> I'll ping kmalloc about https://review.opendev.org/634457 (switch from python-memcached to pymemcache), since I guess everything is ok
15:58:24 <bnemec> Sounds good.
15:58:54 <hberaud> I just have an issue with the openstack/opendev => https://review.opendev.org/658347
15:59:01 <bnemec> Okay, let's call the meeting and get started on all the action items I assigned.
15:59:09 <bnemec> #endmeeting