Wednesday, 2020-10-07

01:03 *** dtantsur has joined #openstack-release
01:05 *** dtantsur|afk has quit IRC
01:08 *** dtantsur has quit IRC
01:12 *** dtantsur has joined #openstack-release
01:24 *** dtantsur has quit IRC
01:25 *** armax has quit IRC
01:29 *** dtantsur has joined #openstack-release
02:13 *** armax has joined #openstack-release
02:38 *** diablo_rojo has quit IRC
03:07 *** pmannidi has quit IRC
03:43 *** vishalmanchanda has joined #openstack-release
04:01 *** armax has quit IRC
04:33 *** evrardjp has quit IRC
04:33 *** evrardjp has joined #openstack-release
04:44 *** ykarel has joined #openstack-release
05:15 *** ricolin_ has joined #openstack-release
05:26 *** ricolin_ has quit IRC
05:31 <openstackgerrit> Rico Lin proposed openstack/releases master: Release stable branches for heat services  https://review.opendev.org/756007
05:32 <ykarel> Tarball missing for 2.1.1 release of heat-agents: https://tarballs.opendev.org/openstack/heat-agents/?C=M;O=D
05:32 <ykarel> noticed it during the package build: https://review.rdoproject.org/r/#/c/29923/
05:32 <ykarel> the release job passed though: https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_432/df7a05a48ca0a0a99e442c19849577d0e4fbe2de/release/release-openstack-python/43254e4/job-output.txt
05:33 <ykarel> fungi, can you check if the issue from a few days back has reappeared ^
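[Editor's note: a quick way to confirm whether a release artifact actually landed on the tarballs site is to probe the expected URL directly. A minimal sketch, assuming the usual <project>-<version>.tar.gz naming; the project/version values and filename pattern below are illustrative assumptions, not taken from the log.]

    #!/bin/sh
    # Probe tarballs.opendev.org for a released artifact (values are assumptions).
    PROJECT=heat-agents
    VERSION=2.1.1
    URL="https://tarballs.opendev.org/openstack/${PROJECT}/${PROJECT}-${VERSION}.tar.gz"

    # -s silent, -I HEAD-only, -f treat HTTP errors as failure.
    if curl -sIf "$URL" >/dev/null; then
        echo "OK: $URL is published"
    else
        echo "MISSING: $URL is not (yet) visible on the tarballs site"
    fi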
05:37 *** jtomasek has joined #openstack-release
05:59 *** ykarel has quit IRC
06:00 *** ykarel has joined #openstack-release
06:39 *** icey has quit IRC
06:41 *** icey has joined #openstack-release
06:45 *** icey has quit IRC
06:48 *** icey has joined #openstack-release
07:02 *** vishalmanchanda has quit IRC
07:04 *** icey has quit IRC
07:05 *** slaweq has joined #openstack-release
07:07 *** e0ne has joined #openstack-release
07:09 *** slaweq has quit IRC
07:10 *** haleyb has quit IRC
07:11 *** haleyb has joined #openstack-release
07:11 *** sboyron has joined #openstack-release
07:12 *** vishalmanchanda has joined #openstack-release
07:14 *** slaweq has joined #openstack-release
07:18 *** icey has joined #openstack-release
07:41 *** tosky has joined #openstack-release
07:41 *** rpittau|afk is now known as rpittau
07:43 *** ykarel_ has joined #openstack-release
07:44 *** ykarel has quit IRC
07:44 *** ykarel__ has joined #openstack-release
07:46 <ttx> ykarel_: nice catch
07:47 <ttx> we definitely need that up and working before we launch the final release steps
07:47 *** ykarel_ has quit IRC
07:52 *** ykarel__ is now known as ykarel
07:57 *** icey has quit IRC
08:00 *** icey has joined #openstack-release
08:12 *** tosky_ has joined #openstack-release
08:13 *** tosky is now known as Guest98160
08:13 *** tosky_ is now known as tosky
08:15 *** Guest98160 has quit IRC
08:32 *** jbadiapa has quit IRC
09:03 *** ykarel_ has joined #openstack-release
09:03 *** ykarel has quit IRC
09:25 *** ykarel_ is now known as ykarel
10:50 *** ykarel has quit IRC
10:50 *** ykarel has joined #openstack-release
11:20 *** jbadiapa has joined #openstack-release
11:28 <fungi> ykarel: ttx: there was another server hung in rackspace that we had to forcibly reboot a couple of days ago (we suspect it's a problem with live migration on their xen deployment, but it's hard to know). looks like it's left things in an inconsistent state again, such that one of the read/write volumes isn't getting successfully released to the read-only replicas, but the script which periodically releases them all seems to be short-circuiting before it reaches the tarballs volume. i've manually released the tarballs replicas for now while i pinpoint the problem volume
11:33 <fungi> aha, that one was simpler; turns out we just had a hung vos release script still running from two days ago, never terminated after things failed over to the redundant fileserver
11:33 <fungi> so it was still holding our safety lockfile
11:36 <fungi> yep, looks like that's gotten everything back on track again
11:36 <fungi> ykarel: thanks for bringing it to my attention!
11:43 <ykarel> fungi, Thanks
12:19 <ttx> thanks fungi!
12:23 <fungi> any time!
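[Editor's note: the failure fungi describes is a periodic vos-release wrapper that skips its work while a previous invocation's lockfile is still held, so one hung run blocks every later run. A minimal sketch of that pattern follows, assuming flock(1) and the OpenAFS vos client; the lock path and volume names are illustrative assumptions, not the actual opendev tooling.]

    #!/bin/sh
    # Sketch: push AFS read/write volumes out to their read-only replicas,
    # guarded by a lockfile so overlapping runs cannot race each other.
    LOCK=/var/run/release-volumes.lock    # assumed path

    (
        # flock -n gives up immediately if another run still holds the lock;
        # a run hung on a dead fileserver therefore short-circuits all later
        # runs, which is the behaviour described above.
        flock -n 9 || { echo "previous release run still holds the lock" >&2; exit 1; }

        for vol in docs tarballs; do    # assumed volume names
            # vos release publishes the read/write volume to its RO replicas
            vos release "$vol" -localauth || echo "release of $vol failed" >&2
        done
    ) 9>"$LOCK"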
12:37 *** jtomasek has quit IRC
12:41 *** jtomasek has joined #openstack-release
13:03 *** sboyron_ has joined #openstack-release
13:03 *** sboyron has quit IRC
13:03 *** vishalmanchanda has quit IRC
13:04 *** evrardjp has quit IRC
13:04 *** ttx has quit IRC
13:04 *** evrardjp has joined #openstack-release
13:04 *** vishalmanchanda has joined #openstack-release
13:09 *** ttx has joined #openstack-release
13:10 <openstackgerrit> Merged openstack/releases master: Release stable branches for heat services  https://review.opendev.org/756007
13:16 *** ttx has quit IRC
13:17 *** ttx has joined #openstack-release
13:22 <openstackgerrit> Bernard Cafarelli proposed openstack/releases master: Neutron stable releases (Stein, Train, Ussuri)  https://review.opendev.org/755203
14:23 <openstackgerrit> Merged openstack/releases master: Neutron stable releases (Stein, Train, Ussuri)  https://review.opendev.org/755203
14:36 *** icey has quit IRC
14:36 *** icey has joined #openstack-release
14:37 <hberaud> fungi: not sure, but I think heat-dashboard faced a similar issue: https://624a3940f1199870a766-186f218c38a8ba58edb43244f86e77cb.ssl.cf1.rackcdn.com/42ce3d15f0cbe40d530419678d40701903bd5445/tag/publish-openstack-releasenotes-python3/667bfd0/job-output.txt
14:38 <fungi> hberaud: similar to what?
14:38 <hberaud> fungi: your previous discussion on this channel
14:39 <smcginnis> rsync: failed to set permissions on "/afs/.openstack.org/docs/releasenotes...."
14:39 <fungi> the tarballs.opendev.org site was serving stale content for ~2.5 days, so anything released during that time didn't appear on the site until a couple of hours ago
14:39 <smcginnis> Looks like some sort of AFS issue.
14:39 <smcginnis> Starting at "2020-10-07 13:25:37.407447"
14:39 <fungi> yeah, doesn't sound at all similar
14:39 <hberaud> ack
14:40 <fungi> but i'll look into it
14:40 <hberaud> thanks
14:40 *** armax has joined #openstack-release
14:42 <fungi> for the record, the zuul build result pages are far easier to use to get to the bottom of job failures; if someone gives me a link to the raw logs i always wind up having to hunt for the zuul build id in them so i can get to the build result view
14:44 <hberaud> ack, noted
14:47 <fungi> looking at that error, the most likely cause would be two different concurrent builds updating the heat-dashboard release notes at the same time; since they rely on in-tree tempfiles and use rsync's delete option, one could easily remove the other's tempfile
14:47 <fungi> i'll see if i can find two runs, maybe triggered on different branches, which were running rsync at the same time
14:51 <fungi> yeah, looks like there were publish-openstack-releasenotes-python3 jobs running simultaneously for the heat-dashboard 1.5.1, 2.0.2 and 3.0.1 tags
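[Editor's note: a minimal sketch of the race fungi describes, assuming each publish-openstack-releasenotes-python3 build renders the notes locally and then rsyncs them into a shared AFS target with rsync's delete option; the paths and options are illustrative assumptions, not the actual job definition.]

    #!/bin/sh
    # Publish step as run by each build (paths are assumptions).
    SRC=releasenotes/build/html/
    DEST=/afs/.openstack.org/docs/releasenotes/heat-dashboard/

    # --delete removes anything under DEST not present in SRC. If two builds
    # (e.g. for tags on different branches) run this concurrently, build A can
    # delete a temporary file that build B just created, and build B then fails
    # with errors like "rsync: failed to set permissions on ...".
    rsync -av --delete "$SRC" "$DEST"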
14:56 <smcginnis> Something we've run into with publishing normal docs.
14:57 <smcginnis> We added a semaphore for that to prevent it, or at least reduce the chances.
14:57 <smcginnis> Not sure if we would need to do the same here.
14:58 *** priteau has joined #openstack-release
14:58 *** slaweq_ has joined #openstack-release
14:58 *** slaweq has quit IRC
15:00 *** ykarel is now known as ykarel|away
15:09 <fungi> https://zuul.opendev.org/t/openstack/build/667bfd0d8f4843ee8710f0d54285538c/log/job-output.txt#3335-3360
15:09 <fungi> https://zuul.opendev.org/t/openstack/build/1be889bccad047aeacff3b5d56bce393/log/job-output.txt#3333-3334
15:09 <fungi> confirmed, those were trying to rsync the same tree at the same time; one succeeded and one did not because the other one removed a tempfile
15:10 <clarkb> I can't recall, is there a reason we aren't using supercedent pipelines for those jobs?
15:11 <clarkb> I'm guessing because each job needs to run for every ref?
15:11 <fungi> the release notes job may not need to
15:12 <fungi> though we'd likely want a special tag-triggered supercedent pipeline if we were going to take that route
15:12 <fungi> and that might be the only job it runs
15:32 *** ykarel|away has quit IRC
15:48 <smcginnis> fungi, clarkb: Would be easy enough to do this for the publish-openstack-releasenotes-python3 job too: https://review.opendev.org/#/c/724727/2/zuul.d/jobs.yaml
15:49 <smcginnis> I think the only concern at the time was adding delays when the jobs were not actually related, but it runs infrequently enough that it doesn't really make much of a difference.
16:05 <fungi> and they don't take long to run, generally
16:06 *** rpittau is now known as rpittau|afk
16:06 *** slaweq has joined #openstack-release
16:07 *** slaweq_ has quit IRC
16:07 <fungi> smcginnis: that should work, with the caveat that a bunch of builds of that job for different projects could queue up behind one another during mass releases
16:08 <fungi> clarkb: rethinking, i don't expect the supercedent idea to work because it only collapses items for the same project+branch, and even with branch guessing these tags were for different branches
16:09 <clarkb> oh, good point
16:09 <fungi> i'm not actually certain what supercedent would do with tag events, but generally i think it would end up being equivalent to independent
16:09 <clarkb> for some reason I thought it was supercedent per project (branch didn't matter)
16:09 <smcginnis> This failure happens rarely enough that it probably isn't too critical. But I'll propose a patch similar to the above so we don't have to figure it out again every time we actually do run into this.
16:11 <clarkb> I think the biggest issue is that every time it happens we (opendev/infra) get asked to debug it :)
16:11 <hberaud> smcginnis: aren't too many queued things an issue for us? (cf. fungi's comment)
16:11 <smcginnis> clarkb: Yep. ;) If we can avoid that, then I think it's worth living with a potentially small delay.
16:11 <hberaud> smcginnis: from my POV I don't think this is an issue
16:12 <smcginnis> hberaud: Usually not. The main issue is when multiple jobs are running for the same repo, like the case today where one patch was releasing three stable branches.
16:12 <fungi> smcginnis: an optimization which was proposed was to add a special kind of semaphore (or a special flag for a semaphore) which limited its conflict scope so it would only block subsequent builds triggered by events for the same project
16:12 <smcginnis> So it's really not a big deal. As long as one of those passes, it will at least get published.
16:12 <hberaud> got it
16:13 <smcginnis> fungi: Is that a mechanism that is present in zuul today? Or would that require a larger effort to enable?
16:13 <fungi> i don't think anyone has written an implementation yet
16:15 <smcginnis> If a standard semaphore looks like it causes unreasonable delays, then it might be worth doing that.
16:15 <smcginnis> But for now, I don't think it would be an issue to just limit concurrency.
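[Editor's note: the fix under discussion is limiting concurrency of the release-notes publish job with a Zuul semaphore, mirroring what the linked zuul.d/jobs.yaml change did for the docs job. A minimal sketch follows; the semaphore name is an illustrative assumption, and the real patch may attach it to the existing job definition differently.]

    # zuul.d/jobs.yaml (sketch; the semaphore name is an assumption)
    - semaphore:
        name: releasenotes-publish
        max: 1

    - job:
        name: publish-openstack-releasenotes-python3
        # Only one build of this job runs at a time, so two tag-triggered
        # builds can no longer rsync the same release-notes tree concurrently.
        semaphore: releasenotes-publish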
16:15 <hberaud> +1
16:17 <clarkb> smcginnis: do you think release day next week will create enough tags that a semaphore like that will be problematic in that specific situation?
16:17 <clarkb> I agree that during the rest of a release cycle it should be a non-issue, but I'm not sure about when we make a ton of releases all in one short window
16:19 <hberaud> clarkb: They will be on different repos
16:20 <hberaud> clarkb: so I don't think it will be an issue during the final release
16:20 <smcginnis> Yeah, if it really is only when the same repo is being tagged, then the coordinated release should be fine.
16:20 <smcginnis> And I don't think we ever hit this in the past; it didn't seem to start happening until fairly recently.
16:21 <smcginnis> But it would have been before the ussuri release, and I don't recall hitting any issues there.
16:21 <hberaud> maybe because release note syncs now happen on several branches
16:21 *** tosky has quit IRC
16:23 <hberaud> so the likelihood of hitting this is higher
16:24 <clarkb> got it
16:24 *** dtantsur is now known as dtantsur|afk
16:34 <hberaud> In other words, the likelihood of triggering a similar scenario increases each time a cycle finishes and a new stable branch is created. PTLs and liaisons often release their stable branches through a single patch, so each new stable branch increases the probability of triggering this scenario.
17:10 *** ykarel has joined #openstack-release
17:18 *** ykarel has quit IRC
17:23 *** e0ne has quit IRC
18:15 *** armstrong has joined #openstack-release
18:22 *** vishalmanchanda has quit IRC
18:46 <armstrong> Hello
19:09 <armstrong> I ran the command that's due for this week (Wednesday) (at line 469: ./tools/list_rc_updates.sh) and got some output that I need help interpreting. Can anyone help?
19:12 *** priteau has quit IRC
19:13 <smcginnis> armstrong: Hey! If you can put the output into a paste, I can take a look and walk through it with you.
19:16 <smcginnis> Or an etherpad if we want to make any notes on them.
19:16 <smcginnis> Bullet item 4 has the full instructions for the task: https://releases.openstack.org/reference/process.html#r-1-week-final-rc-deadline
19:17 <smcginnis> The key thing is the second sub-bullet: "Propose patches creating a new RC for those that have unreleased bugfixes or updated translations"
19:17 <smcginnis> So what we are looking for is whether any repos have merged changes that either address bugs or include translations.
19:17 <smcginnis> We can ignore all of the "update .gitreview for stable/victoria" and related patches.
19:18 <smcginnis> Once we look through the list and see which repos have merged anything of interest, we can use that list of projects to generate release requests.
19:18 <smcginnis> Examples from the last cycle can be found here: https://review.opendev.org/#/q/topic:r1-final-rc-deadline+(status:open+OR+status:merged)
19:20 <smcginnis> Tomorrow this can be run to get the final list of interesting projects. Then the new-release command can be run, or you could put the list of repos in a text file and use that to run tools/propose_auto_releases.sh to iterate through the list and get new RC releases proposed.
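[Editor's note: a condensed sketch of the workflow smcginnis describes, run from a checkout of openstack/releases. The exact helper-script arguments are not given in the log, so the invocations below (series and deliverable names, the new-release call, the batch-script arguments) are assumptions meant to illustrate the flow rather than copy-paste commands.]

    #!/bin/sh
    # Sketch of the R-1 week "final RC" task (arguments are assumptions).

    # 1. List unreleased changes on the stable branches and scan the output for
    #    real bugfixes or translation imports (ignore .gitreview updates,
    #    CI-only changes, cycle-trailing deliverables, etc.).
    ./tools/list_rc_updates.sh | tee rc_updates.txt

    # 2. For each deliverable that needs another release candidate, propose a
    #    new RC one at a time (hypothetical example invocation) ...
    tox -e venv -- new-release victoria cinder rc

    # 3. ... or keep the repos needing an RC2 in a text file and feed it to the
    #    batch helper (named process_auto_releases.sh, per the correction later
    #    in the log); see the releases.openstack.org usage docs for arguments.
    # ./tools/process_auto_releases.sh ...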
19:26 *** tosky has joined #openstack-release
19:27 <armstrong> ok, doing that right now
19:37 <armstrong> https://usercontent.irccloud-cdn.com/file/wlwWRPpB/command_output.md
19:37 <armstrong> smcginnis: can you open the file?
19:39 <smcginnis> armstrong: Can you just use paste.openstack.org or etherpad.openstack.org?
19:40 <armstrong> ok
19:43 <armstrong> http://paste.openstack.org/show/798806/
19:43 <smcginnis> Scanning through the output.
19:44 <smcginnis> Looks like cinder has some significant changes merged.
19:44 <smcginnis> Maybe cloudkitty.
19:44 <smcginnis> heat-dashboard has translations.
19:44 <smcginnis> Ah, here's a tricky part. Looks like it includes cycle-trailing deliverables too.
19:45 <smcginnis> So we can ignore kayobe.
19:45 <smcginnis> And kayobe-config-dev.
19:45 <smcginnis> And kolla-ansible.
19:46 <smcginnis> kuryr-kubernetes has something.
19:46 <smcginnis> A few with CI-related changes. Those can be ignored.
19:47 <smcginnis> neutron has something.
19:47 <armstrong> ok, how do we remove them?
19:47 <smcginnis> octavia-dashboard has translations.
19:47 <smcginnis> Remove which?
19:47 <armstrong> those to ignore
19:47 <johnsom> Yeah, I will post an RC2 for the translations here in an hour or so
19:47 <smcginnis> johnsom: Awesome, thanks!
19:48 <smcginnis> armstrong: We just won't propose releases for them.
19:48 <armstrong> ok
19:48 <smcginnis> senlin-dashboard has translations.
19:48 <smcginnis> Trove has some things.
19:49 <smcginnis> Not bad, looks like that's it.
19:49 <smcginnis> armstrong: So does what we are looking for make sense?
19:49 <armstrong> ok
19:50 <armstrong> Yes it does, but I'm just wondering if we have to do something with the result.
19:51 <smcginnis> armstrong: Did you read through bullet 4 in the process doc that I mentioned? https://releases.openstack.org/reference/process.html#r-1-week-final-rc-deadline
19:51 <slaweq> smcginnis: hi, may I ask you about one thing regarding ovsdbapp?
19:52 <smcginnis> slaweq: Sure!
19:52 <slaweq> smcginnis: it seems that we need to bump the minimum version of ovs there; the patch is here: https://review.opendev.org/#/c/756329/
19:52 <armstrong> Oh, sorry smcginnis, I missed that
19:52 <armstrong> smcginnis: just saw it, thanks
19:53 <slaweq> can we do that now, or should we maybe wait until victoria is released and do it later?
19:54 <openstackgerrit> Michael Johnson proposed openstack/releases master: Release octavia-dashboard 6.0.0rc2  https://review.opendev.org/756581
19:54 <smcginnis> slaweq: Requirements for stable/victoria are frozen until the coordinated release goes out.
19:54 <johnsom> Well, I had a break between meetings, so there we are. That should be it for Octavia. No other RC2 plans.
19:55 <smcginnis> So I think it's fine for you to raise the lower-constraints in the repo. But we should probably wait until after next Wednesday to release that so we can update global requirements right away.
19:55 <smcginnis> johnsom: Perfect, thanks!
19:56 <slaweq> smcginnis: ok, so the cherry-pick can be merged, but don't do a new release of ovsdbapp until at least next Wednesday
19:56 <slaweq> understood
19:56 <slaweq> thx a lot
19:57 <smcginnis> slaweq: 👍
19:57 <smcginnis> armstrong: So back to the process doc: tomorrow it would be good to run the script again and check for any final updates.
19:57 <smcginnis> armstrong: Then, right after getting that list, identify which ones look like they should have an RC2 and get new releases proposed.
19:58 <armstrong> smcginnis: Sure, I will do that tomorrow
19:58 <smcginnis> armstrong: That can be done either one by one using the new-release script, or you can add a list of them to a local text file as you go through (makes it easy to edit the list) and use propose_auto_releases.sh to get patches up.
19:59 <armstrong> ok
19:59 <smcginnis> Oops, wrong name of the second script: process_auto_releases.sh
19:59 <smcginnis> https://releases.openstack.org/reference/using.html#toos-process-auto-releases-sh
20:00 <armstrong> smcginnis: OK, noted
20:03 <openstackgerrit> Slawek Kaplonski proposed openstack/releases master: Release Neutron RC2 for Victoria  https://review.opendev.org/756582
20:14 *** slaweq has quit IRC
20:22 *** gmann is now known as gmann_lunch
20:23 *** slaweq has joined #openstack-release
20:31 *** mgoddard has quit IRC
20:32 *** mgoddard has joined #openstack-release
20:33 *** jbadiapa has quit IRC
20:58 *** slaweq has quit IRC
21:04 *** gmann_lunch is now known as gmann
22:07 *** openstackgerrit has quit IRC
22:16 *** sboyron_ has quit IRC
22:22 *** tosky has quit IRC
23:22 *** openstackgerrit has joined #openstack-release
23:22 <openstackgerrit> Brian Rosmaita proposed openstack/releases master: Proposing victoria RC2 for cinder  https://review.opendev.org/756599
