16:01:16 #startmeeting releaseteam
16:01:17 Meeting started Thu Feb 27 16:01:16 2020 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:18 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:20 The meeting name has been set to 'releaseteam'
16:01:22 Courtesy ping: ttx armstrong diablo_rojo, diablo_rojo_phon
16:01:23 o/
16:01:26 o/
16:01:27 o/
16:01:27 #link https://etherpad.openstack.org/p/ussuri-relmgt-tracking Agenda
16:01:29 o/
16:01:45 i wondered why i was getting a highlight in the oslo channel ;)
16:02:00 can't attend today, sorry :/
16:02:09 evrardjp: No worries, thanks.
16:02:15 We'll make sure to assign all tasks to you.
16:02:27 It's christmas all over again
16:02:33 To be fair, not the worst IRC screw up I've done. :)
16:02:55 ~ line 380
16:02:55 #topic Release-post job issues from last night
16:03:05 o/
16:03:14 #link http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2020-02-26.log.html#t2020-02-26T22:06:58
16:03:17 Some context
16:03:32 Not sure if fungi can or wants to add anything.
16:03:32 cool.. what about the ones in last three hours
16:03:38 +1
16:03:41 But root cause was a config change that has been fixed.
16:03:51 We did have a couple in the last few hours.
16:04:04 Looked like one of the intermittent ssh connection errors?
16:04:15 i can, but we're about to restart the scheduler
16:04:21 fungi: No problem.
16:04:43 the ceilometer tags seem to be something else?
16:04:46 So the good part at least was these were all on docs jobs, so they will correct themselves with the next merge.
16:04:55 intermittent fail ok
16:04:57 memory pressure is resulting in zookeeper connection flapping, and jobs are getting retried until they occasionally hit the retry limit
16:04:57 * smcginnis looks
16:05:13 seems like it's been severe for maybe the past 12 hours
16:05:25 or increasingly severe starting 12 hours ago
16:05:58 #link http://lists.openstack.org/pipermail/release-job-failures/2020-February/001278.html python-octaviaclient failure on announce.
16:05:59 smcginnis: oh, the git redirect problem got fixed
16:06:07 fungi: Excellent, thanks!
16:06:21 announce fail we can probably survive
16:06:24 one rewrite rule was missing a leading /
16:06:50 due to context change migrating the redirects from a .htaccess file to an apache vhost config
16:06:51 more concerned about the node fail at 15:58 UTC
16:06:55 And actually, it does look like the announce failure actually did get the announcement out.
16:07:03 and the tag release fail at 16:04 UTC
16:07:17 as those might need to be retried
16:07:34 ssh: connect to host 192.237.172.45 port 22: Connection timed out\r\nrsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\nrsync error: unexplained error (code 255) at io.c(226) [Receiver=3.1.1]
16:07:39 node fail has no indication of what it was though
16:08:00 #link http://lists.openstack.org/pipermail/release-job-failures/2020-February/001275.html Ceilometer failure
16:08:08 ttx: Which one are you looking at?
16:08:15 ALL OF THEM
16:08:23 I should focus on one
16:08:32 ceilometer appears to be the same, ssh timed out.
16:08:39 Currently on the last one 16:04 UTC
16:08:45 #link http://lists.openstack.org/pipermail/release-job-failures/2020-February/001276.html Ceilometer 11.1.0
16:09:07 Looks like all three of those are ssh timeouts after the fact.
16:09:08 That last one is successful.
Failed at collecting logs
16:09:18 and then skipped docs release
16:09:19 On log collection.
16:09:47 that leaves the NODE_FAILURE at 15:58
16:10:04 hard to know what it was attached to
16:10:17 yeah, i suspect those are all boiling down to the zookeeper connection going up and down because the scheduler's out of memory
16:10:40 ttx: Where is that node failure? I've only seen the ssh timeouts.
16:11:23 Oh, this one? http://lists.openstack.org/pipermail/release-job-failures/2020-February/001279.html
16:11:43 Now I see two recent failures came in.
16:11:53 Does seem likely it's down to the zookeeper thing.
16:11:58 Let the animals out of the cages.
16:12:40 Looks like the tagging actually happened, just failed again on log collection.
16:12:46 So just missing docs again.
16:12:51 got ya
16:13:02 monasca-ui 1.14.1
16:13:20 Merged at https://opendev.org/openstack/releases/commit/2897a098897231f86b4410c66d10d6b8f8945046
16:13:26 Did not result in a tag
16:13:36 That's probably our ghost
16:13:54 OK, finally looking at the NODE_FAILURE one.
16:13:57 #link http://lists.openstack.org/pipermail/release-job-failures/2020-February/001277.html
16:14:07 Doesn't link anywhere.
16:14:10 that's the one I just mentioned
16:14:24 So yeah, if the tagging never happened, then at least we can reenqueue that one.
16:14:25 monasca-ui 1.14.1
16:14:53 everything else is accounted for
16:14:54 fungi: Is that something you can help us with once the restart is done and things look calmer?
16:15:24 smcginnis: absolutely
16:15:45 Thanks!
16:16:04 doublechecking
16:16:09 I made a note in the tasks for the week.
16:17:37 That seems to be it. Nothing new has come through in the ML that I've seen.
16:17:43 ok confirmed monasca-ui is the only one missing in the last hours
16:17:53 we can move on
16:17:57 #topic Review task status
16:18:13 Switching single release c-w-i to c-w-rc.
16:18:43 So the idea here is if someone is using cycle-with-intermediary, the expectation is that they need to do multiple releases over the course of the cycle.
16:18:45 in the proposed weekly email I said if no answer by end of next week
16:18:55 That makes sense.
16:19:11 Here are the outstanding patches:
16:19:14 #link https://review.opendev.org/#/q/status:open+project:openstack/releases+branch:master+topic:ussuri-cwi
16:19:19 it's an easily reverted change anyway
16:19:45 Some good responses so far. A few have said to go ahead. A few others have said they will get releases out and want to stay with intermediary.
16:19:55 So I think we're good on that one.
16:19:58 but yeah, if you have trouble making more than one per cycle, with-rc is probably a good bet for you
16:20:06 Next, update on rocky-em status.
16:20:15 #link https://review.opendev.org/#/q/status:open+project:openstack/releases+branch:master+topic:rocky-em
16:20:23 Quite a few patches out there yet.
16:20:34 But it's everything, so it's actually not that bad.
16:20:39 We've had responses on those too.
16:20:47 Some have said to go ahead and I've been approving them.
16:20:53 Others have said they need a little more time.
16:21:27 Only real issue has been some questionable monasca backports in some of their repos.
16:21:35 Thanks hberaud for calling those out!
16:21:42 the majority that I've already checked look fine, I'll continue my journey on these ones
16:21:50 you are welcome
16:21:59 Just waiting on PTL acks on many of them.
16:22:19 I think probably approve next week if no response from the team?
16:23:44 We could check if there are any outstanding unreleased commits, but I don't think this team should be driving that. Nor does it have the bandwidth to do so.
16:24:18 smcginnis: maybe that could be added to the email
16:24:33 good idea
16:24:36 i've commented on 4-5 patches where I saw unreleased changes that would be good to release
16:24:39 Yeah, we should add that to make sure there's a chance they are all aware.
16:24:44 just for the record :]
16:24:51 Thanks for checking on those elod
16:25:10 Hopefully the teams notice that and respond.
16:26:16 OK, only other task was the countdown email, but we'll cover that shortly.
16:26:23 #topic Questions on xstatic
16:26:31 #link http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012878.html
16:26:48 I'll be honest - I haven't had a chance to follow this.
16:26:58 ttx: Do you have a summary of the situation?
16:26:59 I did
16:27:12 Yeah, so...
16:27:46 IIUC xstatic things are Javascript thingies that are packaged as PyPI modules
16:28:00 Not updated very often
16:28:15 So a bunch of them used to be published before we drove releases
16:28:39 At one point there was a cleanup, as some xstatic repos were never released/used
16:28:57 That included xstatic-angular-*, which were part of a transition that never happened
16:29:21 But it seems we caught some in the cleanup that should not have been caught
16:29:59 So we have a bunch of xstatic-?? releases on PyPI for things Horizon depends on... that do not have deliverable files to match
16:30:33 The way we fixed that precise situation in the past (for other xstatic things) was to do a new release and start fresh
16:30:49 so that the PyPI situation matches openstack/releases latest
16:30:52 Were these cycle based but they should have been independent?
16:31:07 no they always were independent I think
16:31:19 The issue is that those were manually uploaded
16:31:35 you have to understand this is just a thin layer around a Javascript module
16:31:41 Ah, so they just were released so infrequently that they never made it into our managed process?
16:32:08 so the temptation to take xstatic-foobar 1.2.3 and push it to PyPI as 1.2.3.0 is high
16:32:12 yes
16:32:25 but they also never used tags
16:32:53 which is why I missed them last time I looked and assumed they never were released
16:33:00 Oh?! So it's not an issue of them having too many rights with the current ACLs. They just manually threw it out there?
16:33:05 yes
16:33:29 Witold Bedyk proposed openstack/releases master: Switch monasca-* to cycle-with-rc https://review.opendev.org/709848
16:33:30 It's more an issue of too many rights on PyPI really :)
16:33:46 but then it was 6-8 years ago
16:33:50 Sounds like the next steps then would be to 1) get deliverable files added, 2) get releases done of current repos.
16:34:03 And 3) slap some wrists and tell them not to do that.
16:34:04 :)
16:34:12 1-2 can be done at the same time, since you can't import history
16:34:19 Yeah
16:34:33 3 would be to remove the "deprecated" tags from governance
16:34:46 Can/should we get the pypi permissions updated so only openstackci can publish new releases there?
16:34:53 Oh right, that too.
16:35:09 amotoki wanted to do it cleanly and recreate the missing tags, but that's likely to be complicated
16:35:40 but that would result in having something in tarballs.o.o that does not match what's already in PyPI
16:35:46 so more confusing than helping
16:35:56 Witold Bedyk proposed openstack/releases master: Switch monasca-* to cycle-with-rc https://review.opendev.org/709848
16:36:37 so yes, push a new x.y.z.a+1 release by creating a matching deliverable file
16:36:57 IIRC that also involves updating a metadata file in the repo to be released.
16:37:12 Michael Johnson proposed openstack/releases master: [octavia] Transition Rocky to EM https://review.opendev.org/709903
16:37:15 I suppose we could lockstep: delete the release from PyPI (gasp), merge an equivalent release in the releases repo, let automation get things back to the right place.
16:37:45 yeah https://opendev.org/openstack/xstatic-hogan/src/branch/master/xstatic/pkg/hogan/__init__.py#L16
16:37:53 Basically rebuild history.
16:37:58 I don't really like that though.
16:38:03 I'd rather move forward.
16:38:04 smcginnis: you cannot do that
16:38:18 getting two different artifacts with the same release number is a no-no
16:38:33 them not being available at the same time is not enough
16:38:43 Yeah, bad idea.
16:38:53 a+1 is the only way to resync
16:39:04 Was that suggested on the ML?
16:39:34 I'll clarify
16:40:04 Witold Bedyk proposed openstack/releases master: Do not release monasca-ceilometer for Ussuri https://review.opendev.org/710312
16:40:32 OK, thanks.
16:40:36 Sounds like we have a plan then.
16:40:38 Anything else?
16:43:12 nope
16:43:15 #topic Validate countdown email
16:43:19 #link https://etherpad.openstack.org/p/relmgmt-weekly-emails
16:43:31 Look for "Milestone 2 week +2"
16:44:03 Witold Bedyk proposed openstack/releases master: Do not release monasca-log-api for Ussuri https://review.opendev.org/710313
16:44:20 Should we add cycle highlight mentions now that I sent that kickoff email?
16:45:20 diablo_rojo: I was thinking of mentioning it in the next one
16:45:27 as a reminder
16:45:33 ttx, that works too
16:45:37 this week sounds a bit early for a reminder
16:45:49 fair :)
16:46:13 #link http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012892.html <- solution for xstatic, detailed
16:46:46 Thanks!
16:47:07 I added a section on the rocky-em patches to the countdown. Please take a look and let me know if it looks ok.
16:47:20 Or feel free to tweak. I will send this out tomorrow morning my time.
16:47:39 +1
16:48:07 ttx: That description for xstatic looks good.
16:48:26 #topic AOB
16:48:38 Any other topics to cover this week?
16:48:39 I'll be off starting tomorrow, back on March 9
16:48:54 No meetings the next two weeks.
16:49:24 (i have something for open discussion)
16:49:38 I think I should be able to be here for the R-8 meeting, but if not I may need to ask someone to cover for me or we can skip.
16:49:50 rosmaita: The floor is yours!
16:49:55 i should be back
16:50:07 i'm seeing a weird validation error on https://review.opendev.org/#/c/709294/
16:50:17 Thanks, I was going to raise that.
16:50:19 for cinder.yaml i think
16:50:24 oh, ok
16:50:26 all yours!
16:50:36 Haha, no, too late. :)
16:50:49 I took a quick look yesterday, but I couldn't tell what was happening.
16:51:12 Witold Bedyk proposed openstack/releases master: Switch monasca-kibana-plugin to independent https://review.opendev.org/710316
16:51:14 Since we merged the patch to tell reno to ignore the older branches.
But it still looks like it is choking on trying to parse the older releases.
16:51:48 So it's something with reno.
16:52:12 We probably need to check out stable/rocky cinder and try building release notes to see if we can repro it locally.
16:53:04 ok, i can do that now
16:54:13 Kilo was the last 2015.x versioned release, so ignoring kilo and older *should* work.
16:54:34 Merged openstack/releases master: Release monasca-persister 1.12.1 https://review.opendev.org/710224
16:54:37 I'll try to take a look too later today.
16:54:45 Anything else for the meeting?
16:54:54 ok, me too, we can talk offline
16:55:14 none from me
16:55:42 OK, thanks everyone!
16:55:49 #endmeeting
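
For reference on the xstatic discussion above: the x.y.z.a version scheme comes from metadata that each xstatic package keeps in its xstatic/pkg/<name>/__init__.py (the file linked for xstatic-hogan at 16:37:45). Below is a minimal sketch of that pattern; the field names follow the usual xstatic layout, but the concrete values are purely illustrative and not taken from any real package. It shows why bumping only the build number (a to a+1) yields a fresh PyPI version for a new deliverable file to record.

    # Illustrative sketch of xstatic version metadata, loosely modeled on
    # xstatic/pkg/hogan/__init__.py; the values below are made up.

    NAME = 'hogan'      # the wrapped Javascript library

    VERSION = '2.0.0'   # x.y.z: the upstream Javascript version, which only
                        # changes when the bundled Javascript itself changes

    BUILD = '3'         # a: the xstatic packaging build number; bumping this
                        # to a+1 is the "resync" release discussed above, so
                        # the automation-published version sorts higher than
                        # anything previously uploaded to PyPI by hand

    PACKAGE_VERSION = VERSION + '.' + BUILD   # x.y.z.a: the version published
                                              # to PyPI and recorded in the new
                                              # deliverable file in
                                              # openstack/releases

    print(PACKAGE_VERSION)   # 2.0.0.3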