16:00:01 <smcginnis> #startmeeting releaseteam
16:00:02 <openstack> Meeting started Thu Jul 23 16:00:01 2020 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 <openstack> The meeting name has been set to 'releaseteam'
16:00:09 <smcginnis> Ping list: ttx armstrong
16:00:13 <ttx> o/
16:00:16 <smcginnis> #link https://etherpad.opendev.org/p/victoria-relmgt-tracking Agenda
16:00:19 <armstrong> o/
16:00:30 <smcginnis> Line 178-ish
16:00:41 <smcginnis> 203 for the meeting agenda, if you want to be technical.
16:01:08 <smcginnis> #topic Review tasks completion
16:01:22 <smcginnis> ttx: Want to cover the governance checks?
16:01:55 <ttx> sure
16:02:49 <ttx> So that check basically verifies that the deliverable files match the current state of governance
16:03:10 <ttx> There were a few extras, which I removed at https://review.opendev.org/742200
16:03:22 <ttx> There are also a few missing, which we need to review
16:03:44 <ttx> Deliverable files are missing when new deliverables get added and no deliverable file is created
16:04:01 <ttx> Like for example oslo.metrics, added May 6
16:04:21 <ttx> For those, the question is, are they going to be ready for a victoria release
16:04:26 <smcginnis> Is that process documented somewhere that we can call out the step of adding the new deliverable file? I can't recall at the moment.
16:04:47 <ttx> It does not make sense until the deliverable is ready to release
16:04:54 <ttx> which can take a short or a long time
16:05:03 <ttx> better to review it as consistency check
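(For context, the consistency check ttx describes boils down to comparing the deliverables declared in governance against the deliverable files present in openstack/releases for a series. The sketch below is a rough approximation, not the actual tool the team runs; the projects.yaml layout and local checkout paths are assumptions based on the public repos.)

```python
# Minimal sketch of the governance consistency check described above:
# list deliverables that exist in governance but have no deliverable file
# in openstack/releases for a given series, and vice versa.
# Assumes local checkouts of openstack/governance and openstack/releases;
# the projects.yaml layout (team -> deliverables mapping) is an assumption.
import os
import yaml


def governance_deliverables(projects_yaml):
    """Collect deliverable names declared in governance."""
    with open(projects_yaml) as f:
        teams = yaml.safe_load(f)
    names = set()
    for team in teams.values():
        names.update((team.get("deliverables") or {}).keys())
    return names


def release_deliverables(releases_repo, series):
    """Collect deliverable names that already have a file for the series."""
    path = os.path.join(releases_repo, "deliverables", series)
    return {f[:-len(".yaml")] for f in os.listdir(path) if f.endswith(".yaml")}


gov = governance_deliverables("governance/reference/projects.yaml")
rel = release_deliverables("releases", "victoria")
print("Missing deliverable files:", sorted(gov - rel))
print("Extra deliverable files:", sorted(rel - gov))
```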
16:05:06 <smcginnis> Two were added since this May. Those I could see not being ready, but the others listed have had some time.
16:05:11 <smcginnis> Yeah, true.
16:05:47 <ttx> The ones added in May or later I would not even look into. If they're ready they will let us know, otherwise we'll pick them up next time
16:06:01 <ttx> that leaves us with:
16:06:21 <ttx> monasca-ceilometer and monasca-log-api ... were released in train, but not in ussuri
16:06:31 <ttx> We need to ask monasca folks what their plans are
16:06:33 <smcginnis> Repos are not retired.
16:06:43 <ttx> are they abandoned? If not, why did we skip them in ussuri?
16:07:03 <ttx> and should we track them for victoria ?
16:07:04 <smcginnis> Ah, they were marked as deprecated: https://opendev.org/openstack/monasca-ceilometer/commit/875bc660ee68664d0ab4a21442c69ffd164d2ddf
16:07:23 <ttx> hmm, not in governance :)
16:07:28 <smcginnis> And https://opendev.org/openstack/monasca-log-api/commit/4eccad156f282f2eb300be7a306703c90dcba996
16:07:46 <smcginnis> So at least those two, I think we should remove the files. They should follow up on governance updates.
16:07:51 <ttx> so maybe the fix here is to mark them as deprecated in governance
16:08:04 <smcginnis> I think so.
16:08:41 <ttx> barbican-ui (Added Oct 2019) -- never released yet
16:09:03 <ttx> would be good to ask for their plans
16:09:11 <ttx> js-openstack-lib (Added January 9) -- never released yet
16:09:48 <smcginnis> Maybe mordred would know?
16:09:51 <ttx> the bunch of xstatic things... I don't see what their point is if they don't get released
16:10:01 <smcginnis> e0ne: ^
16:10:13 <ttx> yes all of those are tasks that we need to follow up with people
16:10:21 <fungi> i thought they did get released, but they needed "special" version numbering?
16:10:30 <ttx> fungi: yes. My point is...
16:10:36 <smcginnis> Definitely some of the xstatic ones have been released.
16:10:49 <ttx> Their only point is to be released
16:10:55 <fungi> true
16:11:04 <ttx> there is no "work" in them, just a packaging shell for a PyPI release
16:11:08 <fungi> i see, some were added and never released
16:11:17 <ttx> so I'm surprised why they would be created but not released yet
16:11:34 <ttx> finally openstack-tempest-skiplist (Added Mar 20)
16:11:41 <ttx> no idea if the plan was to release that
16:12:07 <ttx> Last two I would ignore for now as too young
16:12:42 <ttx> Who can do the followup? I'm off next week so would rather not take it
16:13:16 <smcginnis> I can call them out in the countdown email at least. I didn't include all of them in this week's email (which I just finally sent out yesterday).
16:13:38 <smcginnis> Or maybe better if I do it as its own message.
16:13:51 <smcginnis> That way it might get more visibility and I can tag affected projects.
16:14:01 <ttx> I would try to ping people in IRC, but your call :)
16:14:22 <smcginnis> If someone can do that, it would be best. I'm not sure if I will have time to, but I can try.
16:14:37 <ttx> not super urgent
16:14:46 <ttx> We can pick it up at next meeting if we prefer
16:15:02 <smcginnis> Let's see how far we can get.
16:15:50 <smcginnis> The other task is the countdown, and I have written down a big reminder to make sure I don't get too busy and forget to send it tomorrow.
16:16:00 <smcginnis> #topic Octavia EOL releases
16:16:08 <smcginnis> #link https://review.opendev.org/#/c/741272/
16:16:17 <smcginnis> #link https://review.opendev.org/#/c/719099/
16:16:22 <smcginnis> Yeah, I think those are ready.
16:16:27 <smcginnis> There are a couple Cinder ones now too.
16:16:45 <smcginnis> We would just need to follow up removing the branches.
16:16:51 <ttx> I was unclear if they were ok to +2a
16:16:58 <ttx> will do now
16:17:04 <smcginnis> I forget, did we figure out whether the release managers have the necessary permissions to delete those branches?
16:17:27 <smcginnis> I know we talked about it, I just can't remember what we determined.
16:17:34 <ttx> we can't delete branches
16:17:39 <ttx> only create them.
16:17:54 <smcginnis> OK. At least we can bug fungi :)
16:18:20 <fungi> yep, also i can temporarily grant that permission
16:18:40 <fungi> it's just that under the version of gerrit we're still on, that permission comes lumped in with a bunch of much more dangerous ones
16:18:59 <fungi> so even my admin account doesn't have that granted to it normally
16:19:08 <smcginnis> I'd rather not have the rights to be more dangerous. ;)
16:19:35 <smcginnis> We can follow up on those afterwards.
16:19:36 <fungi> smcginnis 007, license to delete
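(For reference, once an account has been temporarily granted the delete permission fungi mentions, removing an EOL'd branch can go through the Gerrit REST API. The sketch below is illustrative only: the project, branch, user, and password are placeholders, and it assumes HTTP password authentication against review.opendev.org.)

```python
# Illustrative sketch: delete an EOL'd stable branch via the Gerrit REST API.
# Only works if the authenticated account currently holds delete rights
# (e.g. after being temporarily added to a privileged group, as discussed).
import requests
from requests.auth import HTTPBasicAuth
from urllib.parse import quote

GERRIT = "https://review.opendev.org"


def delete_branch(project, branch, user, http_password):
    """DELETE /a/projects/{project}/branches/{branch}; 204 means it is gone."""
    url = "{}/a/projects/{}/branches/{}".format(
        GERRIT, quote(project, safe=""), quote(branch, safe=""))
    resp = requests.delete(url, auth=HTTPBasicAuth(user, http_password))
    resp.raise_for_status()


delete_branch("openstack/octavia", "stable/rocky", "someadmin", "http-password")
```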
16:19:37 <elod> hi, sorry, just one question: is there an easy way / common place to check which repositories are EOL'd on a certain branch?
16:20:01 <smcginnis> They should have a $series-eol tag.
16:20:09 <ttx> fungi: re your earlier question about job fails, about 15% of the jobs had an AFS failure this morning
16:20:15 <smcginnis> But there's no great visible overview like the table we had on releases.o.o.
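(To answer elod's question programmatically: a quick way to see whether a repository has been EOL'd on a branch is to look for the corresponding <series>-eol tag on the remote, as smcginnis notes. A rough sketch, assuming git is available locally; the repository names are just examples.)

```python
# Rough sketch: check which repositories carry a <series>-eol tag on the
# remote, which is the marker mentioned above. Assumes git is installed;
# the repository list below is illustrative.
import subprocess


def has_eol_tag(repo, series, base="https://opendev.org"):
    """True if refs/tags/<series>-eol exists in the remote repository."""
    result = subprocess.run(
        ["git", "ls-remote", "--tags", "{}/{}".format(base, repo),
         "refs/tags/{}-eol".format(series)],
        capture_output=True, text=True, check=True)
    return bool(result.stdout.strip())


for repo in ("openstack/octavia", "openstack/monasca-log-api"):
    status = "EOL" if has_eol_tag(repo, "rocky") else "not EOL"
    print(repo, "rocky:", status)
```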
16:20:15 <ttx> the others worked ok
16:20:41 <smcginnis> #topic Review email content
16:20:48 <smcginnis> #link https://etherpad.opendev.org/p/relmgmt-weekly-emails
16:21:08 <elod> smcginnis: ok, thanks
16:21:08 <smcginnis> Nothing too exciting. Just reminders of the upcoming deadlines.
16:21:09 <ttx> hhhm
16:21:10 <fungi> ttx: thanks, that does lead me to believe it could either be a temporary connectivity problem or an issue with afs writes from a subset of our executors
16:21:23 <ttx> that sounds off
16:21:23 <fungi> i'm hoping to get back to running those errors down the rest of the way shortly
16:21:30 <smcginnis> ttx: Yeah, too soon.
16:21:31 <ttx> smcginnis: Victoria-2 is next week.
16:21:36 <smcginnis> Is this a skip week?
16:21:40 * smcginnis looks again
16:21:40 <ttx> so the email you sent this week is the right one
16:21:53 <ttx> just a bit early, rather than a bit late
16:22:10 <armstrong> fungi: do the permissions on repos come from the Infra team?
16:22:13 <smcginnis> Honestly, really not liking having the emails in the process docs. It's a bit confusing.
16:22:30 <ttx> It should really not be confusing. It just tells you what to send every week :)
16:22:46 <smcginnis> It shouldn't be, but it has been.
16:23:06 <ttx> Should probably say "at the end of the week, send ..."
16:23:38 <smcginnis> I was thinking about some sort of script to use schedule.yaml and jinja templates, but that's probably overkill. :)
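(The "overkill" idea smcginnis floats would look roughly like this: load schedule.yaml and render a countdown email through a Jinja2 template. The schedule field names ('start', 'x-project') and the template wording are illustrative guesses, not the real release management schedule layout or email text.)

```python
# Sketch of the "schedule.yaml + Jinja" idea: render a countdown email for a
# given week from the cycle schedule. Field names and wording are assumptions.
import yaml
from jinja2 import Template

TEMPLATE = Template("""\
Subject: [release] Release countdown for week of {{ week.get('start') }}

Upcoming deadlines and events:
{% for event in week.get('x-project', []) %} * {{ event }}
{% endfor %}""")

with open("schedule.yaml") as f:
    schedule = yaml.safe_load(f)

# Selection of the right week is left out; here we just take the first entry.
week = schedule["cycle"][0]
print(TEMPLATE.render(week=week))
```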
16:23:55 <ttx> OK so the email for this week was sent already
16:24:12 <smcginnis> ☑️
16:24:13 <fungi> armstrong: there are access controls which we manage in a git repository, but those also inherit from a shared access configuration where we centrally grant some permissions to specific gerrit groups. one group called "project bootstrappers" is used by our project creation automation and has basically full access to delete things from a repository, so one of our admins generally adds themselves or
16:24:15 <fungi> some delegate to that group temporarily to do things like branch deletion
16:24:19 <ttx> (there was none to send last week)
16:24:47 <smcginnis> Let's move on then.
16:24:49 <smcginnis> #topic AFS-related job failures
16:24:57 <smcginnis> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-July/016064.html
16:25:00 <fungi> yeah, i'm looking into these
16:25:12 <smcginnis> Potentially something to do with nodes restarting?
16:25:14 <ttx> Those will likely require manual work to fix
16:25:27 <fungi> so far it appears that the two tarball write issues happened from ze11, and the rather opaque docs upload error came from a build on ze10
16:26:00 <smcginnis> So the important ones are probably oslo.messaging and designate.
16:26:04 <fungi> and i noticed that both of those executors spontaneously rebooted (perhaps provider was doing a reboot migration to another host) in the past day, though still hours before the failed builds
16:26:14 <smcginnis> They were tagged but no tarballs uploaded?
16:26:25 <ttx> the pypi uploads worked, so we need to find a way to upload the corresponding tarballs
16:26:37 <fungi> yeah, i can do that part manually
16:26:51 <ttx> + Missing constraint updates
16:26:53 <fungi> and also the signatures, now that we actually spit out copies of them in the job logs
16:26:59 <ttx> missing release announce we can probably survive
16:27:26 <fungi> i need to test afs writes from ze11 and also check the executor debug log from ze10 to see what specifically the docs error was
16:27:38 <smcginnis> These were stable releases, so the nightly constraints update won't pick up oslo.messaging.
16:27:43 <smcginnis> I can propose that one.
16:27:47 <fungi> also all three failures occurred within an hour of each other, so it's possible this was a short-lived network connectivity issue
16:27:51 <ttx> AFS appears to be exceptionally brittle, or at least not liking our setup :)
16:28:45 <fungi> well, if it was a connectivity issue, scp and rsync would have broken similarly
16:29:12 <smcginnis> Ah, oslo.messaging one was victoria, so that actually will be picked up by the nightly updates.
16:29:17 <smcginnis> So we just need the tarballs.
16:29:44 <smcginnis> fungi: Seems like scp and rsync have more retrying built in though.
16:30:56 <smcginnis> fungi: I've put a note in the etherpad as a reminder that we will need you to upload the tarballs. That good?
16:30:58 <fungi> afs actually does too
16:31:04 <fungi> yep, that's good
16:31:11 <smcginnis> Thanks!
16:31:21 <smcginnis> #topic Assign tasks for R-11 week
16:31:40 <smcginnis> ttx is out all week, so unfortunately we can't assign them all to him.
16:31:50 <fungi> (it's the command line tools which aren't retrying because they treat afs like they do a local filesystem, afs itself continually rechecks for connectivity to be reestablished)
16:31:57 <ttx> I mean, you /can/
16:32:10 <smcginnis> :)
16:32:42 <smcginnis> Maybe hberaud would be willing to pick those up.
16:32:58 <smcginnis> I will leave it unassigned for now and do them if no one else can.
16:33:15 <smcginnis> Mostly just running some scripts and then seeing if there is anything to do based on that.
16:33:42 <ttx> ++
16:33:57 <smcginnis> #topic AOB
16:34:02 <smcginnis> Anything else?
16:34:31 <openstackgerrit> Merged openstack/releases master: Octavia: EOL Rocky  https://review.opendev.org/741272
16:34:32 <openstackgerrit> Merged openstack/releases master: Octavia: EOL Queens branch  https://review.opendev.org/719099
16:34:48 <smcginnis> OK, we can end early then. \o/
16:34:56 <smcginnis> Thanks everyone.
16:34:57 <ttx> o/
16:34:59 <ttx> Thanks
16:35:07 <smcginnis> #endmeeting