Friday, 2019-03-22

jamesmcarthurefried: I'm here! Obviously too late :)00:03
*** wolverineav has joined #openstack-infra00:07
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: Add API endpoint to get frozen jobs  https://review.openstack.org/60707700:07
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: Get executor job params  https://review.openstack.org/60707800:07
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: Separate out executor server from runner  https://review.openstack.org/60707900:10
*** armax has joined #openstack-infra00:10
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: runner: implement prep-workspace  https://review.openstack.org/60708200:11
*** gyee has quit IRC00:11
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: runner: add configuration schema  https://review.openstack.org/64067200:11
openstackgerritTristan Cacqueray proposed openstack-infra/zuul master: runner: add execute sub-command  https://review.openstack.org/63094400:11
*** mriedem_afk is now known as mriedem00:12
*** irclogbot_1 has joined #openstack-infra00:17
ianware the deb-* and fuel-* repos actually worth updating?00:21
clarkbianw I think you can skip any that are retired in projects.yaml. I know all the deb-* repos meet that criteria00:22
*** wolverineav has quit IRC00:23
ianwgreat point ...00:26
*** rascasoft has joined #openstack-infra00:28
*** jamesmcarthur has quit IRC00:30
*** rascasoft has quit IRC00:36
*** armax has quit IRC00:44
*** ricolin has joined #openstack-infra01:05
*** rascasoft has joined #openstack-infra01:45
*** jamesmcarthur has joined #openstack-infra01:46
*** rascasoft has quit IRC01:54
*** Sundar has quit IRC02:02
*** jamesmcarthur has quit IRC02:03
openstackgerritIan Wienand proposed openstack/diskimage-builder master: Replace openstack.org git:// URLs with https://  https://review.openstack.org/64544002:21
*** hongbin has joined #openstack-infra02:22
*** wolverineav has joined #openstack-infra02:24
*** diablo_rojo has quit IRC02:25
*** wolverineav has quit IRC02:28
openstackgerritIan Wienand proposed openstack-infra/devstack-gate master: Replace openstack.org git:// URLs with https://  https://review.openstack.org/64545102:39
*** psachin has joined #openstack-infra02:48
*** yamamoto has joined #openstack-infra02:49
openstackgerritMerged openstack/diskimage-builder master: Replace openstack.org git:// URLs with https://  https://review.openstack.org/64544002:50
*** rascasoft has joined #openstack-infra03:00
mriedemianw: gmann: can we land https://review.openstack.org/#/c/644638/ ? i just hit it again in another place (keystone failed to import memcache)03:04
ianwmriedem: i'm ok if you want to merge it ... i just wasn't like 100% confident I'd followed every nuance ...03:07
mriedemyeah it's rare b/c only one node provider has preloaded pips it looks like - rax-dfw03:08
mriedemand some of these packages don't have 3.6 classifiers03:08
mriedemapparently memcached is another one03:08
ianwyeah ... that ... just shouldn't be the case :/03:09
*** rascasoft has quit IRC03:10
*** apetrich has quit IRC03:15
*** udesale has joined #openstack-infra03:21
gmannmriedem: ianw +A03:26
mriedemthanks03:26
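
A quick way to check whether a package on PyPI actually declares python 3 classifiers is the PyPI JSON API; a minimal sketch, where the package name is only an illustrative guess at the memcache dependency and jq is assumed to be installed:

    # list the declared trove classifiers and look for python 3.x support
    curl -s https://pypi.org/pypi/python-memcached/json \
        | jq -r '.info.classifiers[]' \
        | grep 'Programming Language :: Python :: 3'
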
*** mriedem has quit IRC03:29
*** ramishra has joined #openstack-infra03:38
*** whoami-rajat has joined #openstack-infra03:41
*** yamamoto has quit IRC03:50
*** jamesmcarthur has joined #openstack-infra03:51
*** lseki has quit IRC03:52
*** nicolasbock has quit IRC04:03
*** jamesmcarthur has quit IRC04:12
*** hongbin has quit IRC04:12
*** ykarel has joined #openstack-infra04:16
*** yamamoto has joined #openstack-infra04:25
*** hamzy_ has joined #openstack-infra04:31
*** TheJulia has quit IRC04:31
*** coreycb has quit IRC04:31
*** portdirect has quit IRC04:31
*** TheJulia has joined #openstack-infra04:31
*** dustinc has quit IRC04:31
*** spsurya has quit IRC04:31
*** PrinzElvis has quit IRC04:31
*** evgenyl has quit IRC04:32
*** adrianreza has quit IRC04:32
*** sparkycollier has quit IRC04:32
*** zaro has quit IRC04:32
*** kmalloc has quit IRC04:32
*** hogepodge has quit IRC04:32
*** srwilkers has quit IRC04:32
*** johnsom has quit IRC04:32
*** Ng has quit IRC04:33
*** jbryce has quit IRC04:33
*** hamzy has quit IRC04:33
*** chrisyang_0660 has quit IRC04:33
*** evgenyl has joined #openstack-infra04:34
*** srwilkers has joined #openstack-infra04:34
*** sparkycollier has joined #openstack-infra04:34
*** PrinzElvis has joined #openstack-infra04:34
*** portdirect has joined #openstack-infra04:34
*** adrianreza has joined #openstack-infra04:34
*** coreycb has joined #openstack-infra04:34
*** spsurya has joined #openstack-infra04:34
*** johnsom has joined #openstack-infra04:34
*** hogepodge has joined #openstack-infra04:35
*** chrisyang_0660 has joined #openstack-infra04:36
*** dayou has quit IRC04:36
*** jbryce has joined #openstack-infra04:38
*** dustinc has joined #openstack-infra04:39
*** Ng has joined #openstack-infra04:39
*** zaro has joined #openstack-infra04:39
*** kmalloc has joined #openstack-infra04:39
*** udesale has quit IRC04:47
*** udesale has joined #openstack-infra04:47
*** jamesmcarthur has joined #openstack-infra04:52
*** lpetrut has joined #openstack-infra04:52
*** jamesmcarthur has quit IRC04:57
*** janki has joined #openstack-infra05:04
*** raukadah is now known as chandankumar05:06
*** rascasoft has joined #openstack-infra05:17
*** lpetrut has quit IRC05:25
*** rascasoft has quit IRC05:28
*** dustinc has quit IRC05:50
*** tkajinam has quit IRC05:59
*** tkajinam has joined #openstack-infra05:59
*** lpetrut has joined #openstack-infra06:17
*** jtomasek has joined #openstack-infra06:17
*** psachin has quit IRC06:20
*** wolverineav has joined #openstack-infra06:24
*** psachin has joined #openstack-infra06:25
*** david-lyle has joined #openstack-infra06:27
*** dklyle has quit IRC06:27
*** rascasoft has joined #openstack-infra06:29
*** david-lyle has quit IRC06:29
*** dklyle has joined #openstack-infra06:29
*** wolverineav has quit IRC06:29
*** lpetrut has quit IRC06:30
*** rascasoft has quit IRC06:33
*** kjackal has joined #openstack-infra06:35
*** dims has quit IRC06:39
*** dims has joined #openstack-infra06:41
openstackgerritIan Wienand proposed openstack-infra/system-config master: [dmn] Stashing some scripts to make git:// -> https:// changes  https://review.openstack.org/64231407:00
ianwinfra-root: ^ given those a pretty good workout, including posting the change, adding a message about auto-merging, and auto-merging some devstack ones to test (https://review.openstack.org/#/q/status:merged+topic:opendev-gerrit-git)07:01
jheskethianw: I'm very late to the party, but did you see my email on the discuss list?07:03
*** rascasoft has joined #openstack-infra07:03
ianwjhesketh: when was it sent ;)  looking now07:03
jheskeththis afternoon I think07:03
jheskethre [infra][dev] Options for upcoming git:// to https:// transition07:04
ianwjhesketh: oh, yeah adding in renames was pretty much option 3 of the original mail?07:04
*** pgaxatte has joined #openstack-infra07:05
jheskethianw: ah, I took option #3 as doing it during part of the migration/cut over. My suggestion was to do the rename as a mass set of proposed changes now (ie earlier than things like gerrit moving etc)07:06
*** harlowja has quit IRC07:06
*** rcernin has quit IRC07:07
ianwjhesketh: but we haven't finalised if it will be git.opendev.org/openstack/nova or git.opendev.org/nova/nova right?  so that fixup needs to happen *after* all that is sorted?07:07
ianwwhereas, git.openstack.org/openstack/nova will be redirected to the right place (via a lot of modrewrite magic ultimately) no matter where it ends up in opendev.org07:08
ianwI should say "https://git.openstack.org/openstack/nova", to be clear07:08
jheskethhmm okay, that's fair07:09
jheskethI'm happy with option 1 FWIW, was mostly thinking out loud07:10
jhesketh(but I've also obviously been a little disconnected from the migration too which is entirely my fault)07:10
ianwnp ... plenty of ideas to go around with this transition! :)07:10
*** gfidente has joined #openstack-infra07:10
*** whoami-rajat has quit IRC07:11
*** lpetrut has joined #openstack-infra07:11
ianwi'll respond on list just to close the loop07:13
jheskethsure :-)07:14
*** dpawlik_ is now known as dpawlik07:21
*** kjackal has quit IRC07:22
*** kjackal has joined #openstack-infra07:22
*** pcaruana has joined #openstack-infra07:27
*** whoami-rajat has joined #openstack-infra07:30
*** ramishra has quit IRC07:32
*** rpittau|afk is now known as rpittau07:33
*** kopecmartin|off is now known as kopecmartin07:37
*** ykarel_ has joined #openstack-infra07:37
*** ykarel_ has quit IRC07:37
*** ykarel_ has joined #openstack-infra07:39
*** ykarel_ has quit IRC07:39
*** ykarel has quit IRC07:40
*** apetrich has joined #openstack-infra07:43
*** ykarel has joined #openstack-infra07:47
*** tosky has joined #openstack-infra07:53
*** ramishra has joined #openstack-infra07:56
*** yamamoto has quit IRC08:02
*** yamamoto has joined #openstack-infra08:03
*** lpetrut has quit IRC08:04
*** xek_ has joined #openstack-infra08:08
*** ginopc has joined #openstack-infra08:10
*** helenaAM has joined #openstack-infra08:27
*** iurygregory has joined #openstack-infra08:32
*** dtantsur|afk is now known as dtantsur08:33
*** tkajinam has quit IRC08:34
*** jpich has joined #openstack-infra08:43
*** ykarel is now known as ykarel|lunch08:48
*** jpena|off is now known as jpena08:51
*** jbadiapa has joined #openstack-infra09:02
*** tobias-urdin has joined #openstack-infra09:15
*** ricolin has quit IRC09:22
*** kjackal has quit IRC09:22
openstackgerritMerged openstack-infra/nodepool master: Update docs for provider removal.  https://review.openstack.org/64522009:27
*** derekh has joined #openstack-infra09:42
*** kjackal has joined #openstack-infra09:43
*** dayou has joined #openstack-infra09:43
*** Lucas_Gray has joined #openstack-infra09:50
*** roman_g has joined #openstack-infra09:58
*** jbadiapa has quit IRC10:00
*** lpetrut has joined #openstack-infra10:04
*** ykarel|lunch is now known as ykarel10:08
*** ramishra_ has joined #openstack-infra10:10
*** ramishra has quit IRC10:12
*** jpich has quit IRC10:13
*** jpich has joined #openstack-infra10:14
*** jbadiapa has joined #openstack-infra10:19
*** lpetrut has quit IRC10:23
*** dtantsur is now known as dtantsur|brb10:28
*** nicolasbock has joined #openstack-infra10:40
*** rascasoft has quit IRC10:42
openstackgerritLuigi Toscano proposed openstack-infra/zuul-jobs master: DNM Debug stage-output, change archival mechanism  https://review.openstack.org/64523910:42
*** rascasoft has joined #openstack-infra10:43
*** kopecmartin is now known as kopecmartin|lunc10:47
*** chrisyang_0660 has quit IRC10:59
*** johnsom has quit IRC10:59
*** chrisyang_0660 has joined #openstack-infra10:59
*** johnsom has joined #openstack-infra10:59
*** adrianreza has quit IRC10:59
*** sparkycollier has quit IRC10:59
*** whoami-rajat has quit IRC10:59
*** spsurya has quit IRC10:59
*** portdirect has quit IRC11:00
*** sparkycollier has joined #openstack-infra11:00
*** dougwig has quit IRC11:00
*** adrianreza has joined #openstack-infra11:00
*** wolverineav has joined #openstack-infra11:00
*** portdirect has joined #openstack-infra11:01
*** spsurya has joined #openstack-infra11:01
*** whoami-rajat has joined #openstack-infra11:01
*** dougwig has joined #openstack-infra11:01
*** wolverineav has quit IRC11:05
openstackgerritFabien Boucher proposed openstack-infra/zuul master: Elasticsearch Zuul reporter  https://review.openstack.org/64492711:12
*** kaisers has quit IRC11:12
*** kaisers has joined #openstack-infra11:13
*** notmyname has quit IRC11:28
*** notmyname has joined #openstack-infra11:30
*** arxcruz|pto is now known as arxcruz11:31
*** Lucas_Gray has quit IRC11:34
*** rpioso|afk is now known as rpioso11:46
*** pcaruana has quit IRC11:53
openstackgerritMerged openstack-infra/devstack-gate master: Replace openstack.org git:// URLs with https://  https://review.openstack.org/64545112:00
*** dtantsur|brb is now known as dtantsur12:05
*** jento_ has joined #openstack-infra12:06
*** JpMaxMan_ has joined #openstack-infra12:07
*** davidlenwell_ has joined #openstack-infra12:07
*** csatari_ has joined #openstack-infra12:07
*** Guest12731 has joined #openstack-infra12:08
*** roman_g has quit IRC12:09
*** kgiusti has joined #openstack-infra12:11
*** rh-jelabarre has joined #openstack-infra12:12
*** kopecmartin|lunc is now known as kopecmartin12:14
*** rlandy has joined #openstack-infra12:14
*** eharney has quit IRC12:14
*** logan- has quit IRC12:14
*** ab-a has quit IRC12:14
*** davidlenwell has quit IRC12:14
*** JpMaxMan has quit IRC12:14
*** csatari has quit IRC12:14
*** jento has quit IRC12:14
*** Guest12731 is now known as logan-12:14
*** csatari_ is now known as csatari12:14
*** jento_ is now known as jento12:14
*** davidlenwell_ is now known as davidlenwell12:14
*** JpMaxMan_ is now known as JpMaxMan12:14
*** weshay is now known as weshay|rover12:17
openstackgerritLuigi Toscano proposed openstack-infra/zuul-jobs master: DNM Debug stage-output, change archival mechanism  https://review.openstack.org/64523912:19
*** eharney has joined #openstack-infra12:23
*** trown|outtypewww is now known as trown|brb12:24
*** trown|brb is now known as trown12:24
*** jbadiapa has quit IRC12:33
*** jpena is now known as jpena|lunch12:36
*** udesale has quit IRC12:47
*** udesale has joined #openstack-infra12:48
fungiianw: regarding http://paste.openstack.org/show/748220/ i think it's fair to say that no bug tracker (that i'm aware of at least) has a good way to track hundreds of distinct tasks in one report. what we discovered is that storyboard handles it better than, say, launchpad. it does render after a bit of a wait rather than just throwing up an api timeout error and asking you to try again later12:49
Shrewsfungi: the nodepool.yaml change we merged yesterday for the builders still has not propagated to the nb01/nb02. any reason you may know of?12:51
funginothing comes to mind but i'll take a look and see if i can figure out why12:54
*** pcaruana has joined #openstack-infra12:54
fungilast puppet apply was 16:58:38z12:56
fungi(on nb01)12:56
fungiansible is still connecting to it regularly though12:58
fungiso i have a feeling we broke something with host matching around that time12:58
Shrewshrm12:58
fungidigging into recent config changes now to see what jumps out at me12:58
fungi643713 was the last change that messed with host globs but it merged at 15:41z so the delay for it to break matching doesn't quite fit13:00
*** altlogbot_0 has quit IRC13:01
*** irclogbot_1 has quit IRC13:01
*** irclogbot_2 has joined #openstack-infra13:02
*** altlogbot_3 has joined #openstack-infra13:02
*** lseki has joined #openstack-infra13:02
*** Lucas_Gray has joined #openstack-infra13:03
fungiaccording to run_all_cron.log on bridge.o.o, the "puppet : copy puppet modules" task is failing13:04
fungion nb01 and nb0213:04
fungicould it be that we copy modules into /opt which is full?13:06
*** dklyle has quit IRC13:07
fungistrangely it's not breaking on nb03 even though /opt is full there too13:07
Shrewshrm, that could be. nb03 has a separate config13:08
openstackgerritLuigi Toscano proposed openstack-infra/zuul-jobs master: DNM Debug stage-output, change archival mechanism  https://review.openstack.org/64523913:08
fungioh, i see13:08
fungithe /opt fs is only "full" on nb03 but still has some space writeable by root13:08
fungithe /opt fs on nb01 and nb02 really have no remaining available blocks even for root13:09
Shrewschicken+egg=uhoh13:09
openstackgerritFabien Boucher proposed openstack-infra/zuul master: Elasticsearch Zuul reporter  https://review.openstack.org/64492713:09
fungiso if we temporarily free up some space in /opt on those two i think things will start working again13:09
zigofungi: Hi there! Is there a Buster image in infra already?13:10
Shrewsfungi: maybe we can delete one of the older .raw files for a current image??13:12
*** dklyle has joined #openstack-infra13:12
fungiShrews: that ought to be plenty13:12
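
A minimal triage sketch for a builder in this state, assuming the usual layout with dib images under /opt/nodepool_dib (paths may differ per host):

    df -h /opt                                   # confirm the filesystem really is full
    sudo du -sh /opt/dib_tmp /opt/nodepool_dib   # see where the space went
    ls -lhS /opt/nodepool_dib | head -20         # biggest artifacts; an older .raw for an
                                                 # image that already has a newer build is
                                                 # a safe candidate to remove by hand
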
fungizigo: not to my knowledge13:12
zigo:/13:13
zigofungi: I'll need it for unit testing Stein on puppet-openstack ...13:13
* zigo needs to start a patch then.13:13
fungiwe can try adding it shortly, but need to clean up some space on our nodepool image builders first (which is what Shrews is working on right now)13:13
zigofungi: Ok, thanks.13:14
zigoCan I attempt a patch anyway?13:14
fungiof course!13:14
zigofungi: It's probably going to be wrong, but I'll try! :P13:14
zigo(ie: search for stretch and try to replicate ...)13:14
fungijust warning that if it takes us a little longer to fix the current image retention issues we're dealing with we may hold off approving the patch even if it's correct13:15
fungithough we hope it'll self-correct here within a few hours13:15
zigofungi: I've just started the Stein packaging, so it's not urgent, I wont start using before like 1 week or 2...13:15
fungisounds exciting for sure13:15
zigo:)13:16
fungithanks for working on that!13:16
zigofungi: Have you heard of this? https://salsa.debian.org/openstack-team/debian/openstack-cluster-installer13:16
zigoThat's my own tool, based on Debian & puppet-openstack. It's part of Buster ... :P13:16
zigoWe're currently using it in production @my-work.13:17
fungithe name sounds familiar but i didn't realize that was what you were working on13:17
zigoI've done it all.13:17
zigo:)13:17
zigo(nearly completely alone)13:17
zigoThat's the reason why I care about puppet-openstack gating for Debian Buster.13:18
fungimakes sense13:18
fungiso basically a push-button openstack installer which uses the puppet-openstack modules?13:18
Shrewsfungi: perhaps we should kick off puppet now before the running dib process takes the free space13:19
zigofungi: Yeah. Though these days, I only use the cli, there's also a web interface for it.13:19
Shrews(which seems to be happening)13:19
fungiShrews: i'll use kick.sh now on them13:19
zigofungi: I try to make it simple and stupid, no OOO, containers, or such.13:19
*** eharney has quit IRC13:20
zigoHaving it working is good enough, IMO ! :)13:20
zigo(and actually, not that easy to do...)13:20
*** jbadiapa has joined #openstack-infra13:20
fungizigo: yeah, i meant "push-button" figuratively, not necessarily implying actual buttons13:20
zigo:)13:20
zigofungi: Do I need to edit nodepool/n*.openstack.org.yaml in my patch?13:21
zigoAnd Grafana?13:21
fungiShrews: okay, on bridge.o.o i did `sudo /opt/system-config/tools/kick.sh nb01*:nb02*` and it's updating them now13:23
fungizigo: yes, the nodepool files are what will configure building and uploading the images and the grafana files are how we'll graph usage and availability for those new node labels13:24
fungiShrews: the "puppet : run puppet" task just completed on nb0113:25
funginb02 seems to be taking longer13:26
fungiand finally completed on nb02 now as well13:26
fungiShrews: should hopefully be starting to delete the old images now?13:26
openstackgerritLuigi Toscano proposed openstack-infra/zuul-jobs master: DNM Debug stage-output, change archival mechanism  https://review.openstack.org/64523913:26
Shrewsfungi: yep. disk is freeing up too13:27
fungizigo: the grafana config addition is what will make it show up on graphs like the ones you see at http://grafana.openstack.org/dashboard/db/nodepool13:28
Shrewsfungi: /opt on nb01 now around 62% in use, and 77% on nb0213:29
fungigreat!13:31
Shrewscleanup seems to be done, stabilized on those #s13:32
Shrewsyay auto-fixing of things13:32
fungiexcept when there's a catch-22 where the breakage prevents applying the fix13:33
*** jpena|lunch is now known as jpena13:36
openstackgerritThomas Goirand proposed openstack-infra/project-config master: Add a Debian Buster image.  https://review.openstack.org/64557413:36
zigofungi: Is there some other place I should commit as well, or is this repo enough?13:36
fungilooking13:36
zigoLike, system-config also maybe?13:37
fungizigo: yeah, modules/openstack_project/manifests/mirror_update.pp is where we'd add the reprepro configuration for maintaining a debian mirror cache in each nodepool provider13:42
fungishould be able to add a gnupg_key resource for the appropriate signing key and change the releases lists in the appropriate reprepro resources to something like ['stretch', 'buster']13:45
fungithe other existing occurrences of "stretch" in openstack-infra/system-config can be safely ignored for now i think, as those are more related to container images we're building for some of our newer opendev services13:47
fungi(opendev.org for example is actually running in debian/stretch-based docker containers)13:48
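
A rough way to enumerate the spots a buster addition would need to touch, assuming local checkouts of the two repos named above:

    # image/label definitions and dashboards in project-config
    grep -rn -i 'stretch' project-config/nodepool project-config/grafana
    # reprepro mirror configuration in system-config
    grep -n 'stretch' system-config/modules/openstack_project/manifests/mirror_update.pp
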
Shrewsfungi: seeing how each new image (e.g. ^) adds at least 3 concurrent on-disk images to /opt, we might want to begin considering a path to increasing disk space in /opt in the not-too-far-away future13:49
fungii concur13:49
*** mriedem has joined #openstack-infra13:51
fungiShrews: looks like /opt is already in lvm2 logvols on cinder volumes, so should just be a matter of attaching more cinder volumes and growing the vg/lv/fs13:51
*** Lucas_Gray has quit IRC13:51
Shrewscool13:52
*** Lucas_Gray has joined #openstack-infra13:52
fungiShrews: oh, except on nb0313:53
fungiwhere we may need to consider other options (not sure that arm64 linaro cloud has cinder available)13:53
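
For reference, growing a cinder-backed /opt usually looks roughly like the following; the volume name, attached device path, vg/lv names and filesystem type are illustrative and would need to match what the host actually reports:

    openstack volume create --size 200 nb01-opt-extra            # new cinder volume
    openstack server add volume nb01.openstack.org nb01-opt-extra
    # on the builder, once the new device (say /dev/xvdc) shows up:
    sudo pvcreate /dev/xvdc
    sudo vgextend main /dev/xvdc                 # vg name from `vgs`
    sudo lvextend -l +100%FREE /dev/main/nodepool   # lv name from `lvs`
    sudo resize2fs /dev/main/nodepool            # assuming ext4; or use lvextend -r instead
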
*** Lucas_Gray has quit IRC13:56
*** efried is now known as fried_rice13:58
*** Lucas_Gray has joined #openstack-infra13:59
*** bnemec is now known as beekneemech13:59
*** jaosorior has quit IRC14:01
*** jbadiapa has quit IRC14:04
*** eharney has joined #openstack-infra14:06
*** iurygregory has quit IRC14:11
*** iurygregory has joined #openstack-infra14:13
*** iurygregory has quit IRC14:24
*** iurygregory has joined #openstack-infra14:24
*** armax has joined #openstack-infra14:29
clarkbShrews: fungi when I've looked at this in the past the issue is we've leaked images to disk14:35
clarkbhave we checked all images on disk are valid according to nodepool?14:35
fungii have not. but also i suspect we're capped at ~200gb /opt on nb03 without rebuilding on a different flavor14:36
clarkbthe arm builder shouldnt need much disk14:36
clarkbit builds ~3 qcow2 only images14:37
fungiahh, then mayhaps it has a lot of leaked images14:37
fungibecause it's also basically full14:37
clarkbone theory that came up before was when we restart the builder the current image build may leak14:39
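
One hedged way to spot leaked images is to diff what nodepool still tracks against what is on disk; the images directory and the exact shape of the dib-image-list output are assumptions to adjust per builder:

    nodepool dib-image-list | grep -oE '[a-z0-9-]+-[0-9]{10}' | sort -u > /tmp/known
    ls /opt/nodepool_dib | sed 's/\.[a-z0-9.]*$//' | sort -u > /tmp/ondisk
    comm -13 /tmp/known /tmp/ondisk    # on disk but unknown to nodepool -> likely leaked
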
*** cmoura has quit IRC14:42
*** cmoura has joined #openstack-infra14:43
*** lmiccini has joined #openstack-infra15:01
*** armax has quit IRC15:03
lmiccinio/ I am facing something similar to https://bugs.launchpad.net/openstack-ci/+bug/1394191 with my gerrit account, anyone able to help me out?15:07
openstackLaunchpad bug 1394191 in OpenStack Core Infrastructure "can't be added as a gerrit reviewer " [Medium,Fix released] - Assigned to Jeremy Stanley (fungi)15:07
*** harlowja has joined #openstack-infra15:07
fungilmiccini: sure, i can take a look, it's almost always the result of having multiple gerrit accounts with the same e-mail address15:08
lmiccinifungi: thanks! I think I've tried too hard to work around it myself and ended up with a bunch of duplicates15:09
*** jamesmcarthur has joined #openstack-infra15:12
fungilmiccini: indeed, i find 4 different accounts in gerrit for someone with the same username as your irc nick15:12
fungiaccount numbers 19705, 25259, 25412 and 3014915:13
*** dpawlik has quit IRC15:14
lmiccinifungi: I have a "lmiccini2" with ID 30126 that I've tried to merge duplicate accounts into (and apparently succeeded). any chance you can wipe those out?15:14
fungiyikes, there are also 2 accounts with the same e-mail address for username:lmiccini215:16
fungi23817 and 3012615:16
lmiccinifungi: ouch15:16
*** cgoncalves has quit IRC15:17
lmiccinifungi: wipe everything out maybe? I don't care about history or anything, just want to clean up things15:17
fungiare you a member of any core review groups in gerrit, or do you have any open changes in review? membership/ownership of those will be lost if i deactivate the accounts used for them15:17
lmiccinifungi: nope all closed15:17
fungiokay, i'll deactivate the following accounts: 19705, 23817, 25259, 25412 and 3014915:18
fungithat will leave 30126 as your only active account15:18
*** cgoncalves has joined #openstack-infra15:18
lmiccinifungi: awesome thanks15:18
fungicool, doing that now15:19
fungilmiccini: all done, let us know if you run into any issues with this15:19
lmiccinifungi: will do. thank you very much15:19
fungiyou may want to log out of gerrit and ubuntuone and log back into them again just to make sure, and double-check that you can push a change15:20
fungi#status log deactivated duplicate gerrit accounts 19705, 23817, 25259, 25412 and 30149 at the request of lmiccini15:21
openstackstatusfungi: finished logging15:21
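
For the record, deactivating duplicate accounts over the Gerrit ssh admin interface looks something like this (account ids from the discussion above; it requires Gerrit admin rights, and the exact mechanism fungi used may differ):

    for acct in 19705 23817 25259 25412 30149; do
        ssh -p 29418 review.openstack.org gerrit set-account --inactive "$acct"
    done
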
*** altlogbot_3 has quit IRC15:21
*** aaronsheffield has joined #openstack-infra15:21
clarkbShrews: fungi http://paste.openstack.org/show/748248/ these images are all leaked on nb0315:22
clarkbI don't think we should worry about adding disk or replacing builders where we can't add disk until we understand the image leak problem15:22
fungiclarkb: makes sense, thanks for checking15:22
fungialso noting that /opt/dib_tmp on nb03 has 61G of data in it right now... mostly stale?15:23
clarkbfungi: likely15:23
fungitons of profiledir.* directories15:24
clarkbI haven't cleaned up those leaked miages in case shrews wants to look closer but I can help clean them up in a bit15:24
clarkbfungi: if you have a sec care to review https://review.openstack.org/#/c/645372/1 and children for more puppet 4 good ness?15:24
* clarkb finds breakfast15:25
*** altlogbot_2 has joined #openstack-infra15:25
aaronsheffieldHas zuul changed recently (this week) that would have removed one or more of the following variables?  zuul.branch, zuul.change, zuul.newrev, zuul.patchset?  Airship gates are failing on code like https://github.com/openstack/airship-shipyard/blob/c7c25e8cdafa34a04419a2740e7636631f37404b/tools/gate/roles/build-images/tasks/airship-shipyard.yaml#L28, an example is https://review.openstack.org/#/c/644958/15:26
lmiccinifungi: tested and working fine. thanks again15:27
fungiaaronsheffield: the default ansible version used in zuul jobs has increased from 2.5 to 2.715:29
fungiaaronsheffield: and ansible 2.7 is a little more picky about not ignoring missing variables15:29
*** irclogbot_2 has quit IRC15:30
aaronsheffieldGotcha, so we probably had a missing variable for a long time, but it's just now a problem.15:30
fungiin particular this is not the first incident we've seen with jobs failing on "The field 'environment' has an invalid value, which includes an undefined variable. The error was: 'dict object' has no attribute 'newrev'15:30
fungiin the check/gate pipelines15:30
clarkbzuul only sets those values when appropriate for the event that triggered the job15:30
clarkbso post pipeline jobs don't get a branch (neither do release jobs)15:31
openstackgerritLuigi Toscano proposed openstack-infra/zuul-jobs master: stage-output: fix the archiving of all files  https://review.openstack.org/64523915:31
fungiand check/gate pipeline jobs don't have a zuul.newrev15:31
toskyfungi: ^^ that review should fix the log compression phase of stage-output15:32
fungiaaronsheffield: but yes, that missing variable was simply being ignored by ansible 2.5 (or only induced it to emit a non-failure warning)15:32
*** irclogbot_3 has joined #openstack-infra15:32
*** michaelbeaver has joined #openstack-infra15:32
aaronsheffieldthanks for the quick response.15:32
fungiaaronsheffield: you *can* temporarily downgrade the version of ansible in use for that job if needed, but be warned that when ansible 2.5 reaches end of life in a few weeks we expect to drop it from the available versions on our executors15:33
fungiaaronsheffield: also see http://lists.openstack.org/pipermail/openstack-discuss/2019-March/004034.html15:36
*** irclogbot_3 has quit IRC15:36
fungii thought we'd also posted that to the openstack-infra ml but i guess we didn't15:36
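
A throwaway local reproduction of the ansible 2.7 behaviour, plus the usual jinja guard for variables zuul only sets in some pipelines (the variable values here are purely illustrative):

    # a check/gate-style run has no zuul.newrev; default() keeps the template resolvable
    ansible localhost -m debug \
        -a "msg={{ zuul.newrev | default('(not set)') }}" \
        -e '{"zuul": {"branch": "master", "change": "644958"}}'
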
fungianyway, running late for a lunch appointment. should be back shortly15:37
*** irclogbot_2 has joined #openstack-infra15:37
fungiinfra-root: citycloud has sent us another notice about the impending la1 region shutdown, stating that we've booted instances there since the previous notice ~ a month ago. that seems... unlikely to me since it's had max-servers 0 since the beginning of september. maybe they're counting image uploads? it's probably time to remove them from our nodepool configuration cleanly15:40
fungiand with that, i disappear for lunch15:40
*** pgaxatte has quit IRC15:52
openstackgerritLogan V proposed openstack-infra/project-config master: Revert "Disable provider limestone"  https://review.openstack.org/64563915:53
openstackgerritMerged openstack-infra/system-config master: Fix groups.openstack.org glob  https://review.openstack.org/64532615:58
Shrewsfungi: clarkb: i can examine the nb03 logs to see if i can spot the leak issue. will do that in a bit16:07
*** mriedem is now known as mriedem_afk16:07
*** dustinc has joined #openstack-infra16:08
clarkbShrews: ok should we hold off on deleting leaked images then? or proceed with that (but record them first) so that builds will work again?16:11
Shrewsclarkb: go ahead, don't need those, just the logs16:11
clarkbok I'm starting with the list I posted for nb03 and will generate them for nb01 and nb02 as well16:12
*** Lucas_Gray has quit IRC16:13
clarkbnb03 now at /dev/sdb        197G  101G   87G  54% /opt16:15
*** helenaAM has quit IRC16:15
*** ykarel_ has joined #openstack-infra16:18
clarkbhttp://paste.openstack.org/show/748254/ is nb02's list16:19
*** imacdonn has quit IRC16:19
*** imacdonn has joined #openstack-infra16:20
*** ykarel has quit IRC16:20
*** janki has quit IRC16:22
clarkbnb02 now at /dev/mapper/main-nodepool 1008G  295G  714G  30% /opt16:23
*** yamamoto has quit IRC16:24
*** ykarel_ is now known as ykarel16:28
clarkbhttp://paste.openstack.org/show/748255/ nb01's leaked file list16:28
clarkbnb01 now at /dev/mapper/main-nodepool 1008G  635G  374G  63% /opt16:29
*** yamamoto has joined #openstack-infra16:30
*** ykarel is now known as ykarel|away16:30
*** fried_rice is now known as fried_rolls16:30
clarkbinfra-root can I get reviews on https://review.openstack.org/#/c/645372/1 and children? I'd like to get that stack in today before I'm afk most of next week16:31
clarkbgets a large chunk of our servers onto puppet416:32
*** roman_g has joined #openstack-infra16:33
*** yamamoto has quit IRC16:35
*** sthussey has joined #openstack-infra16:38
*** rpittau is now known as rpittau|afk16:39
clarkbcorvus: your mailman queue cleanup command failed, but I believe that is because there was nothing to clean up. Can you check http://paste.openstack.org/show/748259/ and see if that is what you see as well?16:39
*** dpawlik has joined #openstack-infra16:42
*** iurygregory has quit IRC16:45
*** dpawlik has quit IRC16:47
*** e0ne has joined #openstack-infra16:48
openstackgerritClark Boylan proposed openstack-infra/system-config master: Update even more servers to puppet4  https://review.openstack.org/64537516:51
*** jpich has quit IRC16:54
Shrewsclarkb: oh weird. so it seems those leaked files are being created (or at least last written to) *after* the file cleanup phase runs16:54
Shrewsbased on timestamps16:55
clarkbShrews: huh I wonder if dib is syncing those files to disk after we think it is done somehow16:55
Shrewsso i guess the creating process still has them open when we try to read them16:55
clarkbya16:55
Shrewsya16:55
clarkbShrews: maybe we can check that dib has completely exited before cleaning up?16:56
Shrewsmaybe16:56
clarkb(could have sigchld queue up the cleanup?)16:56
openstackgerritPaul Belanger proposed openstack-infra/zuul master: Update component diagram to show statsd  https://review.openstack.org/64579816:58
*** e0ne has quit IRC16:59
Shrewsthis seems to be related to losing the zookeeper connection16:59
*** udesale has quit IRC17:00
Shrewswhich maybe causes us to lose our image build lock in zk... which is how we test to see if the build is inprogress17:01
corvusclarkb: yes, it's normal for those directories to be empty17:01
*** chandankumar is now known as raukadah17:02
clarkbcorvus: is it safe for mailman to start in that state? I expect the systemd switch post upgrade to cause that to happen17:03
corvusclarkb: yes17:04
corvusclarkb: just means there's nothing for it to do17:04
clarkbperfect, thanks17:04
*** jamesmcarthur has quit IRC17:05
corvusfungi: does my comment on https://review.openstack.org/645346 answer your question?17:06
fungii shall find out now!17:07
fungi(as my lunch is presently digesting)17:09
corvusfungi: also replied on https://review.openstack.org/64539117:09
fungithanks!17:09
*** dtantsur is now known as dtantsur|afk17:11
*** trown is now known as trown|lunch17:11
*** gfidente has quit IRC17:14
*** diablo_rojo has joined #openstack-infra17:15
*** ramishra_ has quit IRC17:17
*** jamesmcarthur has joined #openstack-infra17:22
*** michaelbeaver has quit IRC17:23
*** michael-beaver has joined #openstack-infra17:23
*** mattw4 has joined #openstack-infra17:25
*** eharney has quit IRC17:26
openstackgerritMerged openstack-infra/zuul-jobs master: Add fetch-sphinx-tarball role  https://review.openstack.org/64534617:26
openstackgerritMerged openstack-infra/zuul-jobs master: Add download artifact role  https://review.openstack.org/64538417:26
*** derekh has quit IRC17:26
*** jamesmcarthur has quit IRC17:27
*** tjgresha has joined #openstack-infra17:29
fungic17:30
fungiwait, this is not my mutt window17:30
*** roman_g has quit IRC17:30
*** tjgresha has quit IRC17:30
pabelangerI happen to notice a zuul config error: https://zuul.openstack.org/config-errors FYI17:31
fungihttp://git.openstack.org/cgit/openstack/masakari/tree/.zuul.yaml17:33
*** tjgresha_ has joined #openstack-infra17:34
*** jamesmcarthur has joined #openstack-infra17:34
pabelangerlooks like the nodeset is defined more than once17:35
pabelangerand when the bionic changes landed, zuul raised an error because they are now different17:35
*** tjgresha has joined #openstack-infra17:37
*** tjgresha_ has left #openstack-infra17:37
*** tjgresha_ has quit IRC17:38
*** tjgresha has quit IRC17:38
*** tjgresha has joined #openstack-infra17:39
*** diablo_rojo has quit IRC17:39
openstackgerritSorin Sbarnea proposed openstack-infra/bindep master: Expose base python version as an atom  https://review.openstack.org/63995117:44
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Fix ignored default ansible version  https://review.openstack.org/64581917:47
*** psachin has quit IRC17:48
*** ykarel_ has joined #openstack-infra17:50
*** gmann is now known as gmann_afk17:52
*** ykarel|away has quit IRC17:52
*** mriedem_afk is now known as mriedem17:55
*** tjgresha has quit IRC17:56
*** tjgresha has joined #openstack-infra17:56
*** trown|lunch is now known as trown18:02
clarkbinfra-root I've upgraded my snapshotted lists.o.o server per https://etherpad.openstack.org/p/lists.o.o-trusty-to-xenial 166.78.27.52 is its ip address if you would like to log in and look around18:08
clarkbI've recorded my notes on the etherpad. I think the first thing I am going to look into is why systemctl reports errors connecting to the upstart socket18:09
*** gmann_afk is now known as gmann18:11
*** jpena is now known as jpena|off18:13
slaweqclarkb: hi18:14
clarkb/sbin/init is a link to systemd so we have properly converted over to systemd18:14
clarkbslaweq: hello18:14
slaweqclarkb: I saw today at least a couple of times errors like http://logs.openstack.org/86/643486/12/check/neutron-tempest-plugin-scenario-linuxbridge/48055f7/controller/logs/devstacklog.txt.gz#_2019-03-22_17_00_19_424 during devstack18:14
slaweqis it something You are aware of?18:14
clarkbslaweq: that is news to me, but the jobs should use our mirrors which are curated (and should never have those errors, it's why we run our own mirrors and only "publish" them when the mirror is viable)18:15
*** tjgresha has quit IRC18:15
clarkbslaweq: http://logs.openstack.org/86/643486/12/check/neutron-tempest-plugin-scenario-linuxbridge/48055f7/controller/logs/devstacklog.txt.gz#_2019-03-22_16_40_19_946 shows the job starts out using our mirrors18:16
clarkbslaweq: is something in the job overriding the apt config?18:16
*** tjgresha has joined #openstack-infra18:16
*** xek_ has quit IRC18:17
*** ykarel_ has quit IRC18:18
clarkbslaweq: that is failing in the customization of the nested ubuntu test image18:19
*** jamesmcarthur has quit IRC18:19
clarkbslaweq: so it is outside of infras control unless you configure it to use our mirrors as well18:19
*** armax has joined #openstack-infra18:20
*** jamesmcarthur has joined #openstack-infra18:20
*** ginopc has quit IRC18:20
clarkbinfra-root my naive read of the mailman upgrade is that our vhost configs have survived the upgrade process18:21
slaweqclarkb: ahh, ok18:22
slaweqnow I remember that we made such a tool to customize the image before uploading it to devstack's glance18:23
slaweqso it's a bug on our side then18:23
fungiclarkb: it seems to be working for me after an /etc/hosts override18:23
slaweqthanks a lot for help18:23
*** jamesmcarthur has quit IRC18:24
clarkbslaweq: unfortunately deb repos suffer from an updating flaw where you can have package and index mismatches leading to unhappy apt-get installs18:24
fungiclarkb: last message at http://lists.openstack.org/pipermail/openstack-discuss/2019-March/date.html#start was sent Thu Mar 21 17:48:38 UTC 201918:24
fungi(in the snapshot)18:24
clarkbslaweq: this is why we build our own mirrors on top of afs and only publish updates once we have done verification that the mirror is valid18:24
clarkbfungi: cool I don't have exim or mailman running on the server yet, but good to know the http side of things is happy18:25
corvusfungi, clarkb: the error in https://review.openstack.org/645391 is "neat" (post review pipeline gets dynamic config for trusted projects, but trusted project git repos aren't updated until changes land)18:25
slaweqclarkb: thx for explanation18:25
fungiclarkb: also http://lists.zuul-ci.org/ seems to be working and serving archives on it too18:25
slaweqclarkb: I will have to check why we are using this customized image in this job and (maybe) switch it to use OS mirrors inside it then18:25
slaweqclarkb: thx a lot for help18:25
clarkbslaweq: no problem18:25
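
If the job does want the nested image to reuse the region-local mirror, one rough approach is to copy whatever the job node itself is configured with into the guest's sources.list; the mount point and mirror hostname below are placeholders:

    # what the job node is already configured to use
    grep -h '^deb ' /etc/apt/sources.list /etc/apt/sources.list.d/*.list 2>/dev/null
    # point the guest image at the same mirror while customizing it
    sudo sed -i 's|http://archive.ubuntu.com/ubuntu|http://mirror.<region>.<provider>.openstack.org/ubuntu|g' \
        /mnt/guest-image/etc/apt/sources.list
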
fungicorvus: yeah, when i saw the gerrit notification of your second workflow +1 i got curious and went looking at the error. made sense to me18:26
openstackgerritJames E. Blair proposed opendev/base-jobs master: Add opendev docs build/promote jobs  https://review.openstack.org/64539118:27
openstackgerritJames E. Blair proposed opendev/base-jobs master: Use new opendev docs jobs  https://review.openstack.org/64583218:27
corvusfungi, clarkb: ^ those 2 changes should be very easy to review :)18:28
corvusi just moved the project stanza update into its own change18:28
clarkbsystemd-sysv depends on upstart on xenial. I'm guessing that is to provide a compat layer for old upstart jobs? So the fix for that isn't to just remove the upstart package :/18:29
fungiclarkb: yeah, debian derivatives can have as many init systems installed as you want18:30
fungiwhat matters is which one gets invoked by the kernel at boot18:30
*** armax has quit IRC18:31
fungiit's not that uncommon (and was quite common for a while during the great systemd scourge) to have different boot stanzas which booted the same kernel and rootfs with various different inits18:31
clarkbfungi: ya https://unix.stackexchange.com/questions/429032/initctl-unable-to-connect-to-upstart says the issue is /etc/init.d/screen-cleanup being a symlink to upstart-job18:31
clarkbsure enough we have such a link18:32
clarkbwe don't have it on the afs servers where things were happy18:32
fungisounds like the case, agreed18:32
*** armax has joined #openstack-infra18:33
clarkbthe screen package apparently installs that file18:34
clarkbI can remove the file then reinstall screen to see if it doesn't set it up that way anymore?18:34
clarkb*remove the link18:34
clarkboh ya there is a screen-cleanup.dpkg-new18:35
clarkbok etherpad ammended on how to fix that item18:38
*** eharney has joined #openstack-infra18:40
clarkbI've double checked that mailman and exim aren't provided via systemd units now so our existing sysv scripts should continue to work as is for the vhosting in mailman18:42
clarkband the exim config updates seem to be changes to macro listings in /etc/exim4/conf.d which I'm assuming we already work with on xenial in general (because most of our machines are xenial with exim4)18:43
clarkball this to say I've yet to see any system level changes that should break our existing setup18:43
clarkbShould I reenable exim4 and mailman services then reboot to have them start and see if they are actually happy?18:44
corvusclarkb: "exim -bt openstack-discuss@lists.openstack.org" will do a very simple exim config check18:44
clarkb(as far as double checking an upgrade goes for exim and mailman I am sort of out of my element so input/ideas much appreciated)18:44
clarkbwoot thanks18:44
*** pcaruana has quit IRC18:45
clarkbhttp://paste.openstack.org/show/748271/ that also appears to be happy18:46
*** tjgresha has quit IRC18:50
*** tjgresha has joined #openstack-infra18:50
*** tjgresha_nope has joined #openstack-infra18:51
clarkbthere is apparently a way to set a site wide dmarc_moderation_action and if you do that then lists cannot set a less strict value18:52
*** tjgresha_nope has quit IRC18:52
*** eharney has quit IRC18:52
*** tjgresha has quit IRC18:52
*** tjgresha has joined #openstack-infra18:53
*** eharney has joined #openstack-infra18:54
*** Adri2000 has quit IRC18:55
clarkbok what confuses me about ^ is: is that action always taken? if so our strategy of accepting the email and not modifying it means that lists could still choose to munge or reject themselves :/18:56
clarkbya my reading of it is if we set it to Accept as default then a list will be able to set it to munge, wrap, reject, or discard18:58
*** tjgresha_nope has joined #openstack-infra18:58
*** tjgresha has quit IRC18:58
openstackgerritMerged opendev/base-jobs master: Add opendev docs build/promote jobs  https://review.openstack.org/64539118:58
*** tjgresha_nope has quit IRC18:58
*** tjgresha has joined #openstack-infra18:58
*** tjgresha has quit IRC18:59
*** tjgresha has joined #openstack-infra19:00
clarkbI wonder if we can patch the html to prevent the option from being presented to people?19:00
clarkbThey'd still be able to POST around it but maybe that is good enough?19:00
fungior take it out of the message pipeline like we did with recipient deduplication19:01
clarkbhttps://wiki.list.org/DEV/DMARC is the docs fwiw and https://fossies.org/linux/mailman/Mailman/Defaults.py.in seems to be the listing of the defaults we can set19:03
clarkbThis seems like a thing that isn't going to get solved in a 10 minute brainstorm /me lets it stew in the back of the mind for a bit19:05
clarkbfungi: corvus https://review.openstack.org/#/c/645372/1 can I get a review on that please?19:05
*** fried_rolls is now known as fried_rice19:07
clarkbBack of brain thinking out loud: If we set the default to accept (which I think it already is) and we continue to pass email through for the most part as is, do we expect people to ever notice that this is something that they can change and that they might want to change it? Like maybe it's enough to trust our users? though I guess as the user pool grows we won't necessarily keep up19:07
fungiit was enough for the listadmin of kata-dev to decide to set19:08
clarkbfungi: that was before we had a less bad answer to the problem though19:08
*** tjgresha has quit IRC19:08
*** tjgresha has joined #openstack-infra19:09
*** tjgresha has quit IRC19:10
*** tjgresha_nope has joined #openstack-infra19:10
*** tjgresha_nope has quit IRC19:10
*** tjgresha has joined #openstack-infra19:10
clarkbthough I guess we still do have a few problem domains19:10
clarkbso people are likely to want to go investigating how to fix it19:10
fungii'm still not entirely convinced our current solution is especially "less bad" though. it sacrifices recipient deduplication as well as a host of lesser mailman features, and still results in needing to disable subscription deactivation from bounces because 1. mailman mangles messages in other unconfigurable ways (whitespace normalization, references rewriting) which break dmarc, but worse 2. it19:11
fungidoesn't actually solve the case of people sending messages with broken dmarc signatures to the list either19:11
fungis/dmarc/dkim/ really19:12
openstackgerritMatt Riedemann proposed openstack-infra/elastic-recheck master: Update query for bug 1820892  https://review.openstack.org/64585119:13
clarkbhave we confirmed any broken signatures as source of trouble?19:13
openstackbug 1820892 in devstack "Intermittent "Error starting thread.: ModuleNotFoundError: No module named 'etcd3gw'" in grenade-py3 jobs since March 14" [High,Fix released] https://launchpad.net/bugs/1820892 - Assigned to Matt Riedemann (mriedem)19:13
corvusi agree.  i think the ideal would be to reject messages which mailman cannot safely process.19:13
clarkb(I wasn't sure if we ever pinned a failure down to that or not)19:13
fungicorvus had an idea of maybe also adding a receipt-time filter in exim which checks to see if the result of the transformations mailman performs will cause broken dkim sigs and then reject them before mailman gets them19:13
corvus(unfortunately mailman itself only has an option to reject all messages from dmarc domains)19:14
fungii haven't spotted a dkim signature validation yet which i was able to attribute to being broken before it was handed off to mailman19:14
fungier, dkim signature validation failure i mean19:14
fungibut that doesn't mean it can't happen19:14
clarkbcorvus: ya that is my read of the dmarc_moderation_action config19:15
clarkbseems like mailman expects people to do munge or wrap19:15
fungiso even if we're able to coerce mailman into not altering anything which could possibly invalidate a dkim signature, we still have to take in to account that forwarding a message with an (accidentally or intentionally) invalid dkim signature could still disable subscriptions for a vast swath of subscribers19:16
*** weshay|rover is now known as weshay19:17
corvusit's very frustrating that the mailman folks have taken this approach -- since mailman knows, at the point that it sends out the message, whether it's going to work or not.19:17
corvusbut that's not where the filtering happens19:17
clarkbthat was my next question: will mailman or exim not reject due to invalid signature?19:18
corvusclarkb: exim could, but at that point the result is the same as the remote site rejecting -- a bounce to mailman19:18
corvusi mean, we could tell exim not to bounce19:19
corvusbut then it's silently dropping19:19
clarkbcan we have exim bounce to the originator?19:19
clarkbI guess we have to have mailman update it first19:19
fungionly feasibly if we can predict whether mailman will alter the message in ways that make the signature invalid19:19
corvusprobably, though i'm not sure we could prevent multiple copies of that bounce, depending on how many copies of that message mailman emitted...19:20
fungioh, after passing through mailman, yes perhaps, but again bounces aren't as useful as rejecting at rcpt command19:20
corvusyeah, that would be the real win19:21
fungibounces are spoofable, whereas refusing receipt of the message at our server is less so19:21
openstackgerritPaul Belanger proposed openstack-infra/zuul master: Add web / fingergw connections for components graph  https://review.openstack.org/64585219:21
fungii wonder if the mailman pipeline scripts could be used in an exim filter19:22
openstackgerritTobias Henkel proposed openstack-infra/zuul master: Increase default wait_timeout  https://review.openstack.org/64585319:22
corvusfungi: probably.  we could probably come up with a python program which actually uses mailman internals to process a message.19:23
fungilike pass a copy of the message through the pipeline parts which are likely to mangle the message and then through a dkim validator, and then accept or reject the message based on the results19:23
corvusif we can write that python program, we can *certainly* have exim run it and reject at rcpt time19:23
*** jtomasek has quit IRC19:23
fungii wonder how pipeline-y the mm pipeline pieces are19:24
clarkbthat doesn't affect a user's ability to set munge or wrap but there won't be a reason to as all the emails that would get munged would be rejected upstream?19:25
corvusclarkb: right (assuming the lists are configured not to munge more than we're testing for)19:26
fungithe pipeline modules seem to be in /usr/lib/mailman/Mailman/Handlers/19:26
fungilooks like they contain a process() function which takes the list, message and message data as parameters19:27
fungiand then they directly manipulate the message dict19:28
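
For the "was the signature already broken before mailman touched it" question, dkimpy's command-line verifier is handy; a small sketch (the message id and file are placeholders, and the package providing dkimverify is assumed to be installed):

    # verify a message still sitting in the exim queue, before mailman processing
    sudo exim -Mvc <message-id> | dkimverify
    # or verify a saved copy after list processing
    dkimverify < /tmp/message-after-mailman.eml
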
corvusi have to grab lunch; biab.19:28
openstackgerritMerged opendev/base-jobs master: Use new opendev docs jobs  https://review.openstack.org/64583219:31
openstackgerritMerged openstack-infra/elastic-recheck master: Update query for bug 1820892  https://review.openstack.org/64585119:33
openstackbug 1820892 in devstack "Intermittent "Error starting thread.: ModuleNotFoundError: No module named 'etcd3gw'" in grenade-py3 jobs since March 14" [High,Fix released] https://launchpad.net/bugs/1820892 - Assigned to Matt Riedemann (mriedem)19:33
*** mriedem has quit IRC19:46
*** mriedem has joined #openstack-infra19:47
*** yamamoto has joined #openstack-infra19:50
*** armax has quit IRC19:54
*** yamamoto has quit IRC19:56
*** armax has joined #openstack-infra19:59
*** armax has quit IRC20:02
*** armax has joined #openstack-infra20:06
*** kjackal has quit IRC20:15
openstackgerritMerged openstack-infra/system-config master: Run static and status under futureparser  https://review.openstack.org/64537220:15
*** Lucas_Gray has joined #openstack-infra20:17
*** diablo_rojo has joined #openstack-infra20:22
openstackgerritJames E. Blair proposed opendev/base-jobs master: Docs promotion: create destination directory  https://review.openstack.org/64587320:27
corvusfungi, clarkb: ^ that should fix an observed failure20:28
*** armax has quit IRC20:31
openstackgerritMerged openstack/os-testr master: add python 3.7 unit test job  https://review.openstack.org/63774920:32
*** Lucas_Gray has quit IRC20:33
openstackgerritMerged opendev/base-jobs master: Docs promotion: create destination directory  https://review.openstack.org/64587320:46
*** armax has joined #openstack-infra20:50
*** trown is now known as trown|outtypewww20:58
clarkbcorvus: fungi: thinking about testing the lists server upgrade more, short of turning on exim and mailman is there anything else worth doing to test it? And if I turn on those processes is doing smtp over telnet going to be the easiest way to interact with it?21:01
clarkb(also I probably won't get too much further into testing it today as I want to finish up the puppet4 thread before taking monday-thursday off)21:01
corvusclarkb: i think that's it21:01
clarkbok so all doable just may involve some rtfm'ing about smtp :)21:02
clarkbelo and rcpt is about all I currently remember and I probably got those wrong21:02
corvusclarkb: http://paste.openstack.org/show/748274/21:04
clarkbthanks21:05
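
For the SMTP poke itself, something like swaks saves remembering the EHLO/MAIL/RCPT dance by hand (addresses and the snapshot IP as discussed above; swaks availability on the test host is an assumption):

    swaks --server 166.78.27.52 --port 25 \
          --from someone@example.org \
          --to openstack-discuss@lists.openstack.org \
          --header 'Subject: xenial upgrade smoke test'
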
corvusi forgot to include the bits about getting an afs token in the promote job; i'll work on adding that now21:08
corvusi'll make a new principal for it21:10
openstackgerritJames E. Blair proposed openstack-infra/zuul master: Don't assume secrets are text in encrypt_secret  https://review.openstack.org/64588821:22
openstackgerritJames E. Blair proposed opendev/base-jobs master: Obtain AFS creds in docs promote  https://review.openstack.org/64589321:32
corvusclarkb, fungi: ^ there we go21:32
clarkbcorvus: zuul says there is an invalid byte21:32
clarkbI think zuul must expect the secrets to be utf8 internally?21:33
clarkbcan we ask kerberos to use utf8 for its tokens?21:34
corvusclarkb: erm... hrm.  i think this is how the others are done.21:34
corvusderp21:35
corvuswe base64 encode the keytabs21:35
corvusmaybe that "fix" to encrypt_secrets should be rethought as well21:35
openstackgerritJames E. Blair proposed opendev/base-jobs master: Obtain AFS creds in docs promote  https://review.openstack.org/64589321:37
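
The keytab-as-secret dance is roughly the following; file names are illustrative and the encrypt_secret.py options may differ slightly between zuul versions:

    # keytabs are binary, so base64 them first and store the text as the secret
    base64 -w0 docs.keytab > docs.keytab.b64
    python3 tools/encrypt_secret.py --tenant openstack --infile docs.keytab.b64 \
        https://zuul.openstack.org opendev/base-jobs
    # the job later writes the secret out and decodes it again with base64 -d
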
*** rkukura_ has joined #openstack-infra21:41
*** rkukura has quit IRC21:43
*** rkukura_ is now known as rkukura21:43
*** mgoddard has quit IRC21:47
*** mgoddard has joined #openstack-infra21:47
clarkbinfra-root static and status looked good under futureparser, so the switch to puppet 4 on them is on its way21:49
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Minor improvements to docker-image doc structure  https://review.openstack.org/64589721:49
clarkbhttps://review.openstack.org/#/c/645375/3 is next up and I can be around long enough today to ensure that that is happy once in21:49
clarkbthat gives us puppet 4 on all the logstash hosts and mirror nodes21:50
clarkbwe'll be down to a very small list of servers to update to puppet 4 once that is in :)21:50
clarkbfungi: corvus thank you!21:52
*** e0ne has joined #openstack-infra21:54
*** auristor has quit IRC21:58
*** auristor has joined #openstack-infra21:58
openstackgerritMerged opendev/base-jobs master: Obtain AFS creds in docs promote  https://review.openstack.org/64589321:58
openstackgerritKendall Nelson proposed openstack-infra/storyboard-webclient master: Show Email Addresses when Searching  https://review.openstack.org/58971322:00
clarkbcorvus: I'm trying to remember, last time we did a lists upgrade did we create a tests list or just use openstack-infra for that?22:02
*** armax has quit IRC22:02
clarkbI don't see a test list in the list of lists22:02
clarkbso I'm guessing we used the infra list for that22:02
*** auristor has quit IRC22:03
corvusclarkb: yeah i think we just used infra22:03
corvuswe should really create a test list one of these days :)22:03
clarkb++22:04
*** armax has joined #openstack-infra22:14
*** kgiusti has left #openstack-infra22:14
*** armax has quit IRC22:15
openstackgerritMerged openstack-infra/system-config master: Run static and status under puppet4  https://review.openstack.org/64537322:17
openstackgerritMerged openstack-infra/system-config master: Update even more servers to puppet4  https://review.openstack.org/64537522:17
*** michael-beaver has quit IRC22:20
*** auristor has joined #openstack-infra22:22
*** yamamoto has joined #openstack-infra22:27
clarkbnext run in half an hour should apply ^ those changes22:27
corvusclarkb, fungi: w00t: http://files.openstack.org/project/opendev.org/docs/opendev/base-jobs/latest/22:31
corvusthat's the opendev docs promotion working :)22:31
*** yamamoto has quit IRC22:31
corvusnow we just need a vhost and dns and we're done22:31
* fungi cheers22:31
clarkbcorvus: can probably update/replace the existing vhost on files?22:32
corvusyeah22:32
corvusi guess we're going to want an ssl cert22:32
corvusbut we can probably skate by without one until ianw finished le22:32
corvusfinishes le22:32
*** e0ne has quit IRC22:36
corvusthat vhost was on files, right?22:36
clarkbcorvus: yes it should be22:37
clarkbit was set up like zuul and starlingx afs hosted sites22:38
corvusah! i found it :)22:39
*** whoami-rajat has quit IRC22:41
openstackgerritJames E. Blair proposed openstack-infra/system-config master: Serve docs.opendev.org from files.openstack.org  https://review.openstack.org/64595322:48
corvusclarkb: so that we don't have to immediately deal with the challenges of integrating letsencrypt with the pile of puppet that's files.o.o, we may want to go ahead and buy a comodo cert for that :/22:48
corvus(i revised my opinon on that after making that change)22:49
clarkbcorvus: ah, does that assume https?22:50
openstackgerritJames E. Blair proposed openstack-infra/system-config master: Serve docs.opendev.org from files.openstack.org  https://review.openstack.org/64595322:50
clarkbmaybe that is something fungi can help with? (/me hoping to avoid needing to be around soon)22:50
corvusclarkb: yes; i could do a bunch of puppet to unassume that... i think it's okay to land that change and go ahead and serve it with the wrong cert for a little while22:50
clarkbk22:51
corvusmostly, i'm thinking that soon we really do want it to have the right cert, and because of the complexity of that host, we may not want to hang that on the letsencrypt work22:51
clarkbbut ya if fungi can do the verification dance and expense report that would be good. I don't expect to have my laptop with me next week22:51
openstackgerritJames E. Blair proposed openstack-infra/zuul-jobs master: Organize documentation by subject area  https://review.openstack.org/64595522:52
*** tosky has quit IRC23:01
*** mriedem has quit IRC23:04
*** mattw4 has quit IRC23:05
*** rascasoft has quit IRC23:07
*** rlandy has quit IRC23:11
clarkbcool there are some things that are a little unhappy with puppet4. logstash.o.o has the pip problem. Starting with that one23:11
clarkbhrm it is the pip problem but not due to the cryptography is slow warning. I'll need to trace this one23:13
openstackgerritClark Boylan proposed openstack-infra/system-config master: Explicitly set up mirror update crons under root user  https://review.openstack.org/64595923:27
clarkbthat is the first fix23:27
openstackgerritKendall Nelson proposed openstack-infra/storyboard master: Link development.rst to contributing.rst  https://review.openstack.org/64596023:29
clarkbon status.o.o I had to run `npm rebuild node-sass` in /opt/openstack-health because the node/npm versions changed to the version we actually wanted23:30
clarkbI expect that will be happy on the next puppet run23:30
clarkbthat leaves me with the weird pip behavior on logstash01.o.o trying to update gear via pip23:30
clarkblooking into that now23:31
clarkbon the crontab fix this is mostly cleaning up the puppet output as the default is to set it up for the user running puppet23:34
clarkbok the logstash pip issue is our openstack_pip provider23:39
clarkbdoes anyone remember why we have that?23:39
clarkbconfirmed status is happy after the npm/node sass fix23:42
*** pcrews has quit IRC23:42
clarkbon logstash01.o.o I've manually upgraded gear to 0.13 I expect this will make puppet happy until we have to update gear. cmurphy mordred I think we should sort out whether or not openstack_pip is still useful before we update more puppet-4 hosts because hosts for things like nodepool and zuul depend on this quite a bit more than logstash23:44
clarkbI won't be around most of next week to sort that out, so sorry to not be a huge help with that23:44
clarkbcmurphy: mordred it is specifically erroring around https://git.openstack.org/cgit/openstack-infra/puppet-pip/tree/lib/puppet/provider/package/openstack_pip.rb#n1523:49
clarkbhttp://paste.openstack.org/show/748277/ is what I was able to get out of debug tracing and I expect that if you downgrade gear to 0.12 you'll be able to reproduce23:50
*** pcrews has joined #openstack-infra23:50
clarkbcmurphy: mordred the regex at https://git.openstack.org/cgit/openstack-infra/puppet-pip/tree/lib/puppet/provider/package/openstack_pip.rb#n15 doesn't match new pip output. See http://paste.openstack.org/show/748278/23:56
clarkbwe can do `pip list --outdated --format columns` to get consistent output from both current and old (9.0.1 at least) pip23:57
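
For comparison, the columns output is stable across old and new pip, so parsing it is straightforward; a small sketch of the kind of "latest version" lookup a provider could key off:

    # header is two lines (column names plus a dashed rule); rows are "name current latest type"
    pip list --outdated --format columns | awk 'NR>2 {printf "%s %s -> %s\n", $1, $2, $3}'
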
fungiclarkb: corvus: sure, i'm happy to buy and expense a 1 year dv cert for docs.opendev.org next week or maybe over the weekend even23:59
