Monday, 2021-02-08

00:15 *** tosky has quit IRC
01:33 *** DSpider has quit IRC
02:36 *** dviroel has quit IRC
04:35 *** raukadah is now known as chandankumar
05:08 *** ykarel|away has joined #opendev
05:12 *** ysandeep|away is now known as ysandeep|rover
05:19 *** ykarel|away is now known as ykarel
05:30 *** redrobot0 has joined #opendev
05:34 *** redrobot has quit IRC
05:34 *** redrobot0 is now known as redrobot
05:53 *** d34dh0r53 has joined #opendev
05:54 *** marios has joined #opendev
06:00 *** ykarel_ has joined #opendev
06:01 *** ykarel has quit IRC
06:12 *** ykarel_ is now known as ykarel
06:13 *** whoami-rajat__ has joined #opendev
06:27 *** ykarel_ has joined #opendev
06:28 *** ykarel has quit IRC
06:41 *** sboyron has joined #opendev
06:45 *** ykarel_ is now known as ykarel
07:08 *** ykarel has quit IRC
07:37 *** eolivare has joined #opendev
07:38 *** ralonsoh has joined #opendev
07:45 *** ykarel has joined #opendev
07:53 *** hashar has joined #opendev
07:57 *** fressi has joined #opendev
07:59 *** slaweq_ has joined #opendev
08:00 *** rpittau|afk is now known as rpittau
08:06 <ykarel> ysandeep|rover, looks like limestone hit back
08:06 <ykarel> i see jobs failing there
08:06 <ykarel> example https://578d70a47ab08636787d-9f757b11a1d2b00e739d31e1ecad199a.ssl.cf2.rackcdn.com/774394/3/check/tripleo-ci-centos-8-containers-multinode/b324d95/job-output.txt
08:06 <ykarel> https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_1aa/774394/3/check/tripleo-ci-centos-8-standalone/1aaa456/job-output.txt
08:06 <ysandeep|rover> ykarel, ack
08:07 <ykarel> i see more like this
08:07 <ykarel> https://zuul.openstack.org/status
08:07 <ysandeep|rover> #opendev infra anyone available to check issue with limestone mirror
08:09 <ianw> http://mirror.regionone.limestone.opendev.org/ seems ok to me
08:13 <ykarel> seems issue is with ipv6, ipv4 seems fine
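The "ipv6 broken, ipv4 fine" diagnosis above can be checked per address family. The helper below is an illustrative sketch, not an existing opendev tool; the mirror hostname and port 80 are taken from the discussion as assumptions.

```python
# Sketch: probe a host separately over IPv4 and IPv6 so a broken family
# shows up even when the other one works (as diagnosed above).
import socket

def reachable(host, family, port=80, timeout=5.0):
    """Return True if any address of `host` in `family` accepts a TCP connect."""
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False  # no A/AAAA record for this family
    for *_, sockaddr in infos:
        try:
            with socket.create_connection(sockaddr[:2], timeout=timeout):
                return True
        except OSError:
            continue  # try the next resolved address
    return False

# usage (needs network):
#   reachable("mirror.regionone.limestone.opendev.org", socket.AF_INET)
#   reachable("mirror.regionone.limestone.opendev.org", socket.AF_INET6)
```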
08:16 <ianw> [Mon Feb  8 07:52:53 2021] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
08:16 <ianw> there were some warnings starting around then
08:16 <ianw> [Mon Feb  8 08:00:51 2021] afs: file server 104.130.138.161 in cell openstack.org is back up (code 0) (multi-homed address; other same-host interfaces may still be down)
08:16 <ianw> and then nothing since then.
08:16 <ianw> so possibly between those times it was a bit unhappy
08:19 <ykarel> but as per the logs i see errors before 07:52 too
08:19 <ykarel> around 07:45
08:20 <ianw> well 07:52 was when it decided tasks were stuck and started putting things in dmesg ... so it might still be related
08:21 <ianw> weirdly there are no symbols in the backtrace
08:23 *** slaweq_ is now known as slaweq
08:24 <ykarel> i see errors at 08:02 too, but maybe that's also related; let's see if it hits again now
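Correlating the bracketed dmesg wall-clock stamps against the job error times, as the two are doing above, is just timestamp parsing. A tiny illustrative helper (not an existing tool), using the two kernel lines quoted earlier:

```python
# Illustrative only: parse the bracketed wall-clock stamps from the dmesg
# lines quoted above so they can be ordered against job error times.
import re
from datetime import datetime

def dmesg_time(line):
    """Extract the leading [Mon Feb  8 07:52:53 2021]-style stamp as a datetime."""
    stamp = re.match(r"\[([^\]]+)\]", line).group(1)
    return datetime.strptime(stamp, "%a %b %d %H:%M:%S %Y")

stall = dmesg_time("[Mon Feb  8 07:52:53 2021] rcu: INFO: rcu_sched detected stalls on CPUs/tasks:")
recover = dmesg_time("[Mon Feb  8 08:00:51 2021] afs: file server 104.130.138.161 is back up (code 0)")
print(recover - stall)  # window during which the mirror was likely unhappy
```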
08:24 *** hemanth_n has joined #opendev
08:24 <ykarel> https://e29f078388bc55cedcb5-0aa6565d1e0c06a8a7c8c1f2b9ada9fc.ssl.cf1.rackcdn.com/773537/4/check/openstack-tox-pep8/3fea8e3/job-output.txt
08:26 *** jpena|off is now known as jpena
08:45 *** tosky has joined #opendev
08:45 *** zbr is now known as zbr|pto
09:27 <akahat> chandankumar, arxcruz|ruck ysandeep|rover marios sshnaidm|afk bhagyashris please take a look when you are free: https://review.opendev.org/c/openstack/tripleo-ci/+/773413 https://review.rdoproject.org/r/#/c/31659/
10:21 *** sshnaidm|afk is now known as sshnaidm
10:21 *** DSpider has joined #opendev
10:48 *** dtantsur|afk is now known as dtantsur
11:17 *** dviroel has joined #opendev
12:36 *** jpena is now known as jpena|lunch
12:38 *** hemanth_n has quit IRC
12:58 <ysandeep|rover> ianw, hey o/ any luck with the limestone mirror?
13:01 *** dtantsur is now known as dtantsur|brb
13:26 *** clayg has quit IRC
13:26 *** mattmceuen has quit IRC
13:26 *** gouthamr has quit IRC
13:26 *** portdirect has quit IRC
13:27 *** CeeMac has quit IRC
13:27 *** walshh_ has quit IRC
13:27 *** whoami-rajat__ has quit IRC
13:27 *** hillpd has quit IRC
13:27 *** snbuback has quit IRC
13:27 *** guilhermesp has quit IRC
13:27 *** dviroel has quit IRC
13:27 *** ildikov has quit IRC
13:27 *** aprice has quit IRC
13:27 *** jmorgan has quit IRC
13:27 *** diablo_rojo_phon has quit IRC
13:27 *** mwhahaha has quit IRC
13:27 *** johnsom has quit IRC
13:27 *** jrosser has quit IRC
13:27 *** gmann has quit IRC
13:27 *** knikolla has quit IRC
13:27 *** mnaser has quit IRC
13:27 *** zaro has quit IRC
13:27 *** parallax has quit IRC
13:28 *** clayg has joined #opendev
13:28 *** CeeMac has joined #opendev
13:28 *** hillpd has joined #opendev
13:28 *** guilhermesp has joined #opendev
13:28 *** walshh_ has joined #opendev
13:28 *** portdirect has joined #opendev
13:28 *** johnsom has joined #opendev
13:29 *** mnaser has joined #opendev
13:29 *** jpena|lunch is now known as jpena
13:30 *** gouthamr has joined #opendev
13:31 *** mattmceuen has joined #opendev
13:31 *** snbuback has joined #opendev
13:31 *** whoami-rajat__ has joined #opendev
13:31 *** jmorgan has joined #opendev
13:31 *** parallax has joined #opendev
13:31 *** gmann has joined #opendev
13:32 *** TheJulia has quit IRC
13:32 *** dviroel has joined #opendev
13:32 *** aprice has joined #opendev
13:32 *** knikolla has joined #opendev
13:32 *** ildikov has joined #opendev
13:33 *** zaro has joined #opendev
13:34 *** mwhahaha has joined #opendev
13:34 *** jrosser has joined #opendev
13:38 *** artom has joined #opendev
13:44 *** TheJulia has joined #opendev
14:19 <fungi> happened a few days ago too... i wonder if ipv6 connectivity between the nodes and the mirror server occasionally breaks
14:19 <fungi> or maybe we're seeing a recurrence of the rogue route announcement leaks between test nodes and the mirror?
14:20 <fungi> we did catch it doing that in limestone a year or so ago, before we saw it happen in vexxhost
14:22 *** ysandeep|rover is now known as ysandeep|mtg
14:28 *** dtantsur|brb is now known as dtantsur
14:43 *** ysandeep|mtg is now known as ysandeep|rover
14:53 <dtantsur> hey folks. a lot of our jobs are RED with ModuleNotFoundError: No module named 'setuptools_rust' when installing cryptography
14:53 <dtantsur> does it ring any bells? I assume it's an upstream problem
14:53 <dtantsur> example https://zuul.opendev.org/t/openstack/build/2b7a39718d9641eca58732dbc2e709ee/log/job-output.txt#28928
14:53 <dtantsur> also breaks release test jobs
14:54 <dtantsur> honestly, "update to the latest pip" is something I'd prefer not to do, since it usually brings its own problems
14:55 <dtantsur> I see some people are working around it: https://review.opendev.org/c/openstack/tenks/+/774474
14:56 <fungi> first i've heard of it, but i'll start researching, thanks
14:56 <fungi> mtreinish: ^ you may also have some ideas, seems like i remember you knowing arcane things about python rust integration
14:57 * dtantsur loves to see rust integration, but not when it breaks him
14:58 <fungi> initial speculation on just seeing the error message is that it actually wants a newer version of setuptools, not pip
14:59 *** fressi has quit IRC
15:01 <fungi> oh, i see, setuptools_rust is its own package on pypi
15:01 <fungi> so maybe it's an install_requires of cryptography
15:02 <fungi> cryptography 3.3.2, 3.4 and 3.4.1 were released yesterday
15:04 <fungi> aha, they decided to implement it in a pyproject.toml file
15:04 <fungi> so all but fairly recent pip have no idea it needs to be installed
15:05 <dtantsur> le sigh
15:05 <fungi> probably time to start just pinning cryptography, or rip back out all the effort we put into using distro-packaged pip
15:05 <dtantsur> worth raising to the distros?
15:06 <fungi> feel free, but the previous times i brought it up the answer was e.g. "you should be using distro packaged python-cryptography, not trying to pip install"
15:09 <fungi> at least debian/ubuntu are unlikely to backport newer pip to their existing releases
15:09 <fungi> interestingly though, your sample error happened on an ubuntu-focal (20.04 lts) node, which should have pip 20.0.2
15:10 <fungi> https://github.com/pyca/cryptography/issues/5753
15:10 <fungi> dtantsur: ^ that's where i expect discussion to continue
15:10 <fungi> as far as whether upstream unwinds it, i mean
15:11 <dtantsur> hmm, 20.0.2 is weird
15:12 <dtantsur> I wonder if they bundle the same pip with virtualenv
15:13 <dtantsur> we'll also need a pretty recent rust compiler
15:14 <fungi> yeah, the distro packaged python3-virtualenv on focal installs python-pip-whl 20.0.2 for virtualenv to use
15:15 <fungi> https://packages.ubuntu.com/python-pip-whl
15:16 <fungi> https://packages.ubuntu.com/focal/python3-virtualenv
15:16 * dtantsur tries in podman
15:16 <jrosser> in osa world we use the distro pip to build a virtualenv, then upgrade the venv pip to some specific version we pin in our repo
15:17 <jrosser> that works quite solidly and means we're not at the mercy of the distro
15:17 <fungi> that's a viable workaround, though if this is virtualenv invoked by tox you need to jump through extra hoops to make it work
15:18 <dtantsur> I've successfully installed cryptography 3.4.1 in a venv on focal without updating pip
15:18 <dtantsur> from a wheel, if it matters (so maybe it has the rust bits already compiled?)
15:18 <fungi> we also have a parameter for the ensure-virtualenv role (i think that's the one) to override and use versions from pip
15:19 <fungi> dtantsur: i wonder, did the example failures maybe race the wheel upload to pypi?
15:20 <fungi> also i think you can pip install --no-binary to force it to try with the sdist instead
15:20 <dtantsur> well, not impossible. at least in the example I have it's downloaded from a tarball
15:20 <dtantsur> (but it's with pip 9.0.3)
15:20 <fungi> 9.0.3 would be weird, that's what ubuntu-bionic ships i think?
15:20 <dtantsur> CentOS 8 in our case
15:21 <fungi> oh, that ironic-lib-wholedisk-bios-ipmi-direct-src job is building in a chroot?
15:21 <dtantsur> yup (in DIB)
15:22 <fungi> aha, that explains it then
15:22 *** tbarron is now known as tbarron|out
15:22 <fungi> yeah, so you need a centos oriented solution, and you're going through some extra layers
15:22 <dtantsur> works if I update pip to 19.1.1
15:24 *** hashar has quit IRC
15:34 *** sshnaidm has quit IRC
15:37 *** hashar has joined #opendev
15:46 *** jpena is now known as jpena|brb
15:57 *** ysandeep|rover is now known as ysandeep|away
15:58 *** sshnaidm has joined #opendev
16:01 *** jpena|brb is now known as jpena
16:03 <clarkb> the wheel race is normal with cryptography
16:04 <clarkb> they always update sdists first, then we break for like 6 hours, and then everything is fine again when the wheels show up
16:04 <clarkb> this was typical even before they started integrating rust, because many jobs don't install libssl-dev or whatever the header package is for openssl
16:04 <fungi> yeah, so to recap, distro packaged pip on centos and ubuntu bionic or earlier can't install latest cryptography from sdist
16:05 <fungi> doesn't help that they released three new versions in a day, so probably went quite a while with no wheels for whatever was latest at each of those
16:06 <clarkb> I've suggested that they upload wheels then sdists to avoid this problem, but there was some issue related to that. I think it has to do with wheel builds being a bit more "as we are able"
16:06 <clarkb> but I don't recall the specific reason
16:07 <fungi> yeah, they build lots of different wheels on different platforms, they may not want to block uploading the sdist if one of those isn't working
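The race clarkb and fungi describe reduces to one question about a release's file list on PyPI: is there a wheel yet, or only the sdist? A toy illustration with hypothetical filenames (not a live index query):

```python
# Toy model of the sdist/wheel race: while only the sdist is on PyPI,
# an installer that can't (or won't) use wheels has to compile the
# rust extension itself. Filenames below are examples, not real data.
def needs_source_build(release_files):
    """True when a release offers no wheel at all, only an sdist."""
    return not any(name.endswith(".whl") for name in release_files)

just_released = ["cryptography-3.4.1.tar.gz"]
hours_later = just_released + [
    "cryptography-3.4.1-cp36-abi3-manylinux2010_x86_64.whl",
]
print(needs_source_build(just_released))  # True: everyone builds from source
print(needs_source_build(hours_later))    # False: matching platforms get the wheel
```

In reality pip also has to match the wheel's platform/ABI tags, so a wheel existing is necessary but not sufficient; this sketch only captures the upload-ordering window.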
16:08 <fungi> clarkb: two other items... more signs of intermittent ipv6 unreachability for the limestone mirror (starting to wonder if it's router announcements leaking between tenants again), and also we've got jobs complaining they're out of disk space on citycloud (failing to resize the rootfs?)
16:09 <clarkb> the rootfs resizing is done by all our nodes with growfs or similar iirc. I would expect similar failures across the board if it was that. Did the image disk size change maybe?
16:10 <fungi> not sure, i added df calls to validate-host so we can try to get a better idea of what they look like at the start of those builds
16:11 <fungi> and right, it would be weird for that to only happen in one provider
16:11 <fungi> another possibility is that the rootfs size associated with the flavor we're using in citycloud shrunk
16:12 <clarkb> ya, or particular jobs running there (airship?) have suddenly increased disk needs
16:13 <fungi> in the cases which were brought up, it was devstack in the context of grenade claiming there was not enough free space remaining in /var to download more packages
16:13 <fungi> well, more specifically, an apt-get call (one of many, well into the setup) saying there was insufficient space in /var/cache/apt/archives to download more packages
16:15 *** _mlavalle_1 has quit IRC
16:16 <clarkb> that it doesn't fail the first time it tries to do writes implies growfs is able to find some room. I believe the free space on the image without growing the filesystem is tiny
16:16 <clarkb> but might need to just boot an instance there and check it out too. These are jobs running on bionic or focal?
16:17 <fungi> probably focal, but i'll see if i can find a recent failure from it
16:20 <fungi> clarkb: https://zuul.opendev.org/t/openstack/build/1e5ee6221cc74100b90b1714895cd5b3
16:20 <fungi> looks like the rootfs starts out at 15gb with around 10gb used
16:21 <clarkb> huh, that's more free space than I expected
16:22 <fungi> well, that's also before we add the swapfile i think
16:22 <clarkb> ya that is super early
16:23 <clarkb> however that should happen after growfs, which runs during boot
16:23 <clarkb> so that's showing us that either growfs is failing or the disk is small. Maybe syslog will shed some light if we capture the early boot portion of it?
16:24 <fungi> yeah, the job fails too early to capture the usual files we would in a normal run
16:24 <fungi> so probably need to hold/build a node there
16:24 <clarkb> ah
16:26 <fungi> devstack/grenade jobs normally collect df output on their own, but due to the early failure conditions i ended up adding it to validate-host because then it gets sucked up with the zuul info
16:27 *** mlavalle has joined #opendev
16:29 <fungi> oh, and that example was on ubuntu-bionic after all. i guess master branch grenade jobs still use bionic
16:29 *** ykarel has quit IRC
16:29 <clarkb> fungi: would lsblk show us raw disk sizes?
16:30 <fungi> yeah, it should anyway
16:35 *** marios has quit IRC
16:45 <fungi> seeing if i can catch a normal ubuntu-bionic node in use there and snag its syslog
16:46 <fungi> and i'll lsblk on it too
16:46 <fungi> worst case i'll boot something there manually
16:51 <fungi> not a bionic node, but i jumped onto a regular ubuntu-focal node which was in use and /dev/vda1 is 94G, not 15
16:52 <fungi> so either we're getting different sized images sometimes, or we're failing to grow the rootfs sometimes
16:52 <fungi> er, different sized flavors i mean
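A node-side check for the "did growfs actually run" half of this mystery could be as simple as comparing the kernel's reported rootfs size against the flavor size. The sketch below is hypothetical (not part of validate-host), and the ~94G figure is an assumption taken from the /dev/vda1 output discussed here.

```python
# Hypothetical sketch: flag nodes whose rootfs was not grown at boot by
# comparing its total size with the flavor's expected disk (assumed ~94G
# from the discussion above). Not an existing opendev/zuul role.
import shutil

FLAVOR_ROOT_GB = 94  # assumption: citycloud flavor root disk size

def rootfs_grew(path="/", expected_gb=FLAVOR_ROOT_GB, slack=0.9):
    """True when the filesystem at `path` is within `slack` of the flavor size."""
    total_gb = shutil.disk_usage(path).total / 10**9
    return total_gb >= expected_gb * slack

# usage on a held node: print("rootfs grown?", rootfs_grew())
```

On a healthy node this returns True; on a node stuck at the 15G base image it returns False, which is exactly the split fungi observed between nodes.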
16:58 <fungi> | 0022913763 | airship-kna1        | ubuntu-bionic                | 0a40080f-03df-414c-96ef-9b4b784cabb3 | 89.46.86.200    |                                         | in-use   | 00:00:00:06  | locked   |
16:58 <fungi> /dev/vda1        94G  9.4G   80G  11% /
16:58 <fungi> so it's not even all ubuntu-bionic nodes
16:59 <fungi> this is going to prove hard to pin down
17:03 <clarkb> that certainly makes the mystery more mysterious
17:05 *** rpittau is now known as rpittau|afk
17:14 *** artom has quit IRC
17:38 *** ralonsoh has quit IRC
17:43 *** eolivare has quit IRC
18:00 *** jpena is now known as jpena|off
18:01 *** hashar has quit IRC
18:11 *** artom has joined #opendev
18:25 *** ildikov has quit IRC
18:25 *** ildikov has joined #opendev
18:56 *** artom has quit IRC
19:33 *** diablo_rojo has joined #opendev
19:34 <diablo_rojo> fungi, let me know when you wanna chat about channel renames :)
19:47 <fungi> diablo_rojo: yep, i should be done cooking in roughly an hour if you'll be around
19:53 *** whoami-rajat__ has quit IRC
19:55 <diablo_rojo> fungi, works for me!
20:11 *** artom has joined #opendev
20:20 *** dtantsur is now known as dtantsur|afk
20:35 <clarkb> I've updated the meeting agenda for tomorrow. Plan to send it out shortly. Please add (or remove) any agenda items if you've got them
20:45 <clarkb> fungi: ianw: if you get a sec for reviews, https://review.opendev.org/c/opendev/system-config/+/765021 and child are some gerrit housekeeping things I've had up a while (the 3.3 image builds)
20:45 <clarkb> and now to look at gerrit accounts again I guess
20:46 * clarkb has to put on the brain melter prevention helmet
20:47 <diablo_rojo> Good luck!
20:49 *** d34dh0r53 has quit IRC
20:59 *** d34dh0r53 has joined #opendev
21:07 <clarkb> thanks!
21:10 <fungi> diablo_rojo: sorry, taking longer than anticipated but will be done eating shortly
21:11 <diablo_rojo> fungi, no worries at all!
21:11 <diablo_rojo> Started working on some of the steps.
21:27 <clarkb> there are definitely a few patterns starting to emerge in the external id issues as well
21:28 <clarkb> one appears to be a variant of the issues with preferred emails, where people seem to change employer, update their email, and then things get sad
21:28 <clarkb> but in this case the preferred email continues to have an external id in two different accounts, so the error isn't a missing external id but a conflict
21:29 <clarkb> anyway I'm doing human detectiving to keep trying to classify things (I think I have ~4 groups now) and am writing down some ideas for fixing the various situations
21:30 <clarkb> another interesting thing is that almost universally these accounts seem to have not been used recently. I think if people have trouble with accounts and are active users they tend to talk to us about it
21:31 <clarkb> another group (though with only one set of conflicts in it currently) is the third party CIs with the same email addr
21:39 <fungi> diablo_rojo: so we're on https://etherpad.opendev.org/p/openinfra-channel-renames
21:40 <fungi> i'll add a link in there to https://docs.opendev.org/opendev/system-config/latest/irc.html#renaming-an-irc-channel for reference
21:42 <fungi> also https://docs.opendev.org/opendev/system-config/latest/irc.html#access since most of those still need that done as well
21:43 <fungi> diablo_rojo: what are the email and kick entries for?
21:45 <fungi> for the bots line, that's going to be opendev/system-config (meetbot) and openstack/project-config (accessbot, statusbot) changes
21:46 <fungi> we can batch them up though, in my opinion
21:47 <fungi> one project-config change to add the new channels there, then a system-config change that depends on that, then we can stage removals for the old channels in both repos to be merged after the transitional period is completed
21:47 <diablo_rojo> fungi, I figured we would email the lists when the changes are done
21:47 <diablo_rojo> I started a project-config change
21:47 <fungi> aha, so that's signing off on an announcement
21:47 <diablo_rojo> Yeah
21:47 <fungi> and kick is...?
21:47 <fungi> oh, when we kick people out of the old channel, got it
21:48 <diablo_rojo> If after a certain period of time we wanted to kick people out of the old channel
21:48 <diablo_rojo> yup
21:48 <fungi> yeah, sorry, i'm slow ;)
21:49 <diablo_rojo> So far I have registered, set guards, and added openinfra access to ptg, forum, diversity, and board
21:49 <diablo_rojo> Beautiful.
21:50 <diablo_rojo> For the project-config change, I was adding them to the accessbot. Should I remove the openstack versions from that file as well?
21:50 <fungi> what about ttx's suggestion that ptg and forum should share a common events channel (summit too)?
21:51 <diablo_rojo> Oh I am cool with that.
21:51 <diablo_rojo> I missed that suggestion on the ML I guess.
21:51 <fungi> is that something we want to entertain, or do you feel that the discussion played out contrary to that?
21:51 * fungi double-checks his memory
21:51 <diablo_rojo> No I think one channel makes sense.
21:51 <diablo_rojo> openinfra-events then?
21:52 <fungi> http://lists.openstack.org/pipermail/foundation/2021-January/002951.html
21:53 <diablo_rojo> Oh perfect.
21:53 <diablo_rojo> Yeah that makes sense.
21:53 <fungi> also -summit is missing from the pad?
21:53 <diablo_rojo> Reduced siloing
21:54 <diablo_rojo> I legit forgot we had one
21:54 <diablo_rojo> Added now though.
21:54 <fungi> looks like we also used #openinfra-summit last time, but it was never registered, so i'd err on the side of just letting that fade back into the aether
21:54 <diablo_rojo> Yes +2
21:54 <diablo_rojo> Totally agree
21:55 <fungi> granted there are 60+ folks in it at the moment
21:55 <diablo_rojo> We could set up a redirect for it while we are doing the rest of this
21:55 <fungi> probably joined during the summit and just never left
21:55 <fungi> well, it's not registered
21:55 <fungi> oh, though i'm the chanop
21:55 <diablo_rojo> fungi, finished registering, setting the guard, and adding the openstack access to #openinfra-events
21:55 <fungi> so i can register it and do the stuff
21:56 <fungi> yeah, lemme do the reg and access dance on #openinfra-summit so we can prep to reroute it
21:56 <diablo_rojo> perfect
21:57 <diablo_rojo> And I guess we want to do a redirect from openstack-foundation to openinfra as well?
21:57 <fungi> yes
21:58 <fungi> that's already on your pad though
21:58 <diablo_rojo> Right.
21:58 <diablo_rojo> So many things to keep track of lol
22:00 <fungi> our duelling red and black text backgrounds remind me of my favorite double set of dice
22:00 <diablo_rojo> It's very pretty :)
22:02 <diablo_rojo> Do any of the 4 channels need gerritbot? (events, board, diversity, or the regular openinfra)
22:02 <fungi> i'll check whether their source channels have it, but probably not
22:02 <fungi> what was going to happen with the refstack channel, btw?
22:03 <diablo_rojo> Ahh good call. I can work on that if you wanna do the system-config change.
22:03 <diablo_rojo> Since I've already started the project-config one.
22:04 <fungi> oh, yeah, the ptg channel gets gerritbot announcements of ptgbot changes
22:04 <fungi> the rest are just accessbot and meetbot
22:05 <fungi> and statusbot
22:07 <diablo_rojo> Perfect.
22:08 <diablo_rojo> Got the accessbot and gerritbot stuff done.
22:08 <diablo_rojo> You said statusbot is also in project-config?
22:09 <fungi> i think so, checking...
22:09 <diablo_rojo> Not seeing a top level dir, but it might be hiding
22:09 <fungi> ahh, nope
22:10 <fungi> system-config, in hiera/common.yaml, same file as meetbot just another section
22:10 <diablo_rojo> ohhhh yeahhhh that sounds right.
22:11 <diablo_rojo> Cool, then I will go ahead and push this project-config patch
22:11 <openstackgerrit> Kendall Nelson proposed openstack/project-config master: Setup OpenInfra Channels  https://review.opendev.org/c/openstack/project-config/+/774550
22:13 <fungi> reviewing
22:13 <diablo_rojo> Merci!
22:18 <diablo_rojo> I don't seem to be able to update topics in openstack-diversity or openstack-board.
22:22 <fungi> diablo_rojo: propose a change to add your nick to the operators list in accessbot/channels.yaml
22:22 <fungi> i'll fast approve it
22:22 <fungi> that'll give you operator access to all the channels we manage
22:23 <fungi> assuming you're okay with the idea of basically volunteering to help shepherd that collective set of irc channels
22:26 <fungi> diablo_rojo: the access job failure on the first patchset claims the three new channels aren't registered with chanserv... looking into why it would think that
22:27 <diablo_rojo> I will get that change up now
22:28 <fungi> diablo_rojo: `/msg chanserv access #openinfra-board list` and so on definitely claims "#openinfra-board is not registered."
22:28 <fungi> you did the commands listed at https://docs.opendev.org/opendev/system-config/latest/irc.html#access ?
22:29 <openstackgerrit> Kendall Nelson proposed openstack/project-config master: Add diablo_rojo to AccessBot Operators  https://review.opendev.org/c/openstack/project-config/+/774555
22:29 <diablo_rojo> fungi, yeah I did
22:30 <diablo_rojo> My history shows me running them at 12:59:52 (my time) and onwards
22:32 <fungi> weird
22:32 <fungi> if you run the command i quoted above, what does it say?
22:33 <fungi> i'm starting to regret using a qwerty keyboard, with how often i type s where i meant d and vice versa
22:37 <diablo_rojo> Uhh when I run the list command nothing comes back..
22:37 <diablo_rojo> No error message though
22:38 *** slaweq has quit IRC
22:43 <diablo_rojo> I guess you could try registering them fungi?
22:45 <fungi> were you joined to the channel and an operator when you registered it?
22:46 *** sboyron has quit IRC
22:47 <diablo_rojo> I was definitely joined to the channel.
22:47 <diablo_rojo> I might have missed the op step
22:49 <diablo_rojo> I ran the op command and then the register command again. Then I ran the list command and still nothing.
22:49 <fungi> if you were the only person in the channel you'd have automatically been the op
22:49 <fungi> if you're not, then whoever joined the channel before you will need to do it
22:50 <diablo_rojo> Oh well there were people in the openinfra-board channel already
22:50 <diablo_rojo> looks like none of them have op though?
22:50 <diablo_rojo> Well, no one currently there.
22:51 <diablo_rojo> Maybe it was jbryce? During the last board meeting?
22:51 <fungi> looks like jbryce was a chanop when i originally joined
22:52 <fungi> but is no longer identified as a chanop
22:56 <jbryce> i appear to no longer have any special privileges there and can't register it either
22:57 <diablo_rojo> Huh.
22:58 <fungi> jbryce: thanks for confirming, that is certainly odd
22:59 <fungi> we can request that the people in that channel /part and hope that they all do, at which point the first person to rejoin will be a chanop
22:59 <fungi> alternatively, as we now have control of the #openinfra channel, we can go ahead with registering an organization namespace with freenode, though their process for that is convoluted and requires legal documentation, last i looked through it
23:01 <fungi> diablo_rojo: what about the others? we can break #openinfra-board out from the rest
23:01 <fungi> or revisit my earlier suggestion to merge the board and foundation channels
23:03 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] add script for pruning borg backups  https://review.opendev.org/c/opendev/system-config/+/774561
23:03 *** CeeMac has quit IRC
23:04 *** parallax has quit IRC
23:04 *** walshh_ has quit IRC
23:04 *** walshh_ has joined #opendev
23:04 *** CeeMac has joined #opendev
23:04 *** mwhahaha has quit IRC
23:04 *** snbuback has quit IRC
23:04 <diablo_rojo> fungi, diversity and events should be good to go
23:05 *** snbuback has joined #opendev
23:05 *** mwhahaha has joined #opendev
23:05 *** parallax has joined #opendev
23:05 <fungi> and #openinfra as well (it's already added to accessbot too)
23:07 <diablo_rojo> There were already people in openinfra and I couldn't get op there either
23:07 <fungi> yeah, i took care of it already
23:07 <fungi> that one took negotiating with the previous registrants
23:08 <fungi> so our options for #openinfra-board are: A. ask everyone to /part and hope they do so we can claim chanop; B. register #openinfra as an organizational namespace and ask freenode staff to grant us chanop; C. decide to forward #openstack-board to a different channel e.g. #openinfra
23:08 <fungi> it's getting to be that time of night where i shouldn't be typing
23:09 <diablo_rojo> I would lean towards A first?
23:09 <fungi> my adjacent qwerty key typos are growing
23:09 <diablo_rojo> I can ping each person in the channel.
23:09 <fungi> yeah, i'll do my part by /part'ing now ;)
23:09 <diablo_rojo> We can pick up tomorrow if you're looking at wrapping up for today :)
23:10 <fungi> i'm still around
23:10 <fungi> just may be of decreasing utility as the evening proceeds
23:11 <diablo_rojo> Totally fair :)
23:11 <diablo_rojo> I pinged everyone in the channel so hopefully we see that list shrink.
23:12 <diablo_rojo> In the meantime then, should I get the system-config change together? Or did you have that going already?
23:12 <fungi> it's not a ton, otherwise yeah i'd have written option A off as entirely intractable
23:13 <diablo_rojo> Yeah and they are all pretty responsive people and we know them reasonably well :)
23:13 <fungi> i haven't started the system-config change; you'll need to set it depends-on your project-config change because our checks will prevent adding bots to any channel for which we lack access
23:13 <fungi> but if you want to split #openinfra-board out, we can probably get the others merged sooner
23:13 <diablo_rojo> Okay. I can get that together and set the depends-on.
23:13 <diablo_rojo> Yeah that might be best.
23:15 <openstackgerrit> Kendall Nelson proposed openstack/project-config master: Setup OpenInfra Channels  https://review.opendev.org/c/openstack/project-config/+/774550
23:19 <openstackgerrit> Kendall Nelson proposed opendev/system-config master: Setup OpenInfra Channels  https://review.opendev.org/c/opendev/system-config/+/774563
23:19 <diablo_rojo> Done!
23:19 <diablo_rojo> fungi, ^
23:20 <diablo_rojo> (also linked to it in the etherpad)
23:20 <fungi> awesome, taking a look
23:20 <diablo_rojo> Thank you!
23:20 <fungi> but really, we have fairly thorough check jobs for these
23:20 <diablo_rojo> Yeah I figured :)
23:20 <fungi> so i'll mostly wait for them to make sure stuff matches
23:20 <diablo_rojo> Oh shit I forgot the depends-on
23:20 <diablo_rojo> Ugh
23:20 <fungi> it's only a commit --amend away
23:20 <diablo_rojo> Doing it now.
23:21 <fungi> the depends-on won't affect testing anyway i don't think, it's mostly to record that we can't merge it until the first change merges and is deployed
23:21 <fungi> so just a helpful reference to reviewers
23:21 <diablo_rojo> The system config needs to depend on the status config right?
23:22 <fungi> system-config depends on project-config
23:22 <fungi> for this
23:22 <fungi> it's really the accessbot addition which needs to deploy first, and that's in project-config
23:22 <openstackgerrit> Kendall Nelson proposed opendev/system-config master: Setup OpenInfra Channels  https://review.opendev.org/c/opendev/system-config/+/774563
23:22 <diablo_rojo> Hokay. Hopefully that's all good now.
23:23 <fungi> the check for system-config may actually follow the depends-on, we'll have to see
23:24 <clarkb> fungi: I wonder if we need to clarify the statement made at https://github.com/pyca/cryptography/issues/5771#issuecomment-775477095
23:24 <clarkb> we provide constraints, but we also expect you to use an up-to-date cryptography if we haven't pinned it
23:24 <clarkb> but I don't want to add to the noise on that issue with a side thing
23:25 <fungi> yeah, maybe better to just let it slide as an inaccurate example
23:28 <fungi> and also there are already calls to lock the issue
23:44 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] add script for pruning borg backups  https://review.opendev.org/c/opendev/system-config/+/774561
23:44 <openstackgerrit> Ian Wienand proposed opendev/system-config master: [wip] backup monitor  https://review.opendev.org/c/opendev/system-config/+/774564
23:53 <fungi> diablo_rojo: subtle typo on 774550 is causing the failure (yay testing!), see inline comment

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!