19:01:15 <clarkb> #startmeeting infra
19:01:16 <openstack> Meeting started Tue Feb 11 19:01:15 2020 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:17 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:19 <ianw> o/
19:01:19 <openstack> The meeting name has been set to 'infra'
19:01:27 <clarkb> #link http://lists.openstack.org/pipermail/openstack-infra/2020-February/006599.html Our Agenda
19:01:27 <zbr> o/
19:01:33 <clarkb> #topic Announcements
19:01:44 <clarkb> I'm going to use this topic to give a quick tldr on our current fires
19:02:29 <clarkb> virtualenv>=20.0.0,<20.0.2 by default installs "seed" packages (pip, setuptools, wheel, and probably something I'm forgetting) via symlinks to a common user dir
19:02:32 <diablo_rojo> o/
19:02:49 <fungi> easy_install
19:02:54 <clarkb> This causes problems if you then copy the virtualenv to someplace else, or if you run software out of that venv as a user other than the one that created it and can't follow the symlinks due to permission errors
19:03:18 <clarkb> The first issue hit zuul due to its use of bwrap and the second is hitting opendev's test node images because we have root create venvs for bindep and os-testr
19:03:36 <clarkb> then the zuul user can't access the files in /root/.local/some/path to resolve the symlinks in the venv
19:03:57 <fungi> seemed like a great idea at the time
19:04:08 <clarkb> The fix has been to use virtualenv --seeder=pip, which installs the seed packages normally using pip. But virtualenv==20.0.2 just released like 10 minutes ago to change the default to a file copy instead of a symlink
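A minimal sketch of the seeder workaround described above, assuming a root-created venv for bindep (the path is illustrative, not the actual image-build scripts):

    # the default "app-data" seeder in virtualenv 20.0.0/20.0.1 symlinks pip/setuptools/wheel
    # out of a per-user cache under ~/.local; --seeder=pip installs real copies instead
    sudo virtualenv --seeder=pip /usr/bindep-env
    sudo /usr/bindep-env/bin/pip install bindep

    # with the symlinking seeder, a venv created by root points at files under
    # /root/.local/share/virtualenv/..., which an unprivileged user (e.g. zuul)
    # cannot read, so the seed packages break for that user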
19:04:18 <zbr> the seed issue is less painful than the incompatibility with six that was introduced.
19:05:06 <clarkb> right, there is a separate issue that affects you if you use new virtualenv with old six (possibly from distro packages)
19:05:12 <clarkb> as old six isn't compatible with new virtualenv
19:06:03 <zbr> mainly, neither of the versions of six shipping with centos-8 or centos-7 is compatible with virtualenv 20.x, but somehow that virtualenv gets installed without upgrading six.
19:06:21 <clarkb> zbr: virtualenv probably doesn't specify a lower bound for six
19:06:25 <zbr> obviously, overriding the system-installed six is another sensitive issue.
19:06:26 <clarkb> so the installed version is seen as sufficient
19:06:47 <mordred> o/
19:06:50 <clarkb> overall it seems we have a handle on things and if we get new images (which I've triggered builds for) we'll settle down and be happy
19:07:02 <clarkb> there is also a base jobs workaround that mordred wrote which we should revert once the dust settles
19:07:24 <mordred> yeah. although I think we can go through the normal base-test process to verify it
19:07:28 <fungi> https://github.com/pypa/virtualenv/blob/master/setup.cfg#L46
19:07:36 <fungi> six>=1.12.0,<2
19:07:52 <fungi> it's in the install_requires
19:08:38 <fungi> i suspect some jobs are installing distro packages overtop it
19:08:39 <zbr> i know it does, but I also know what I've seen in the wild: newer virtualenv complaining about that import from six.
19:08:57 <clarkb> at this point I think we want to double check any new reports to ensure they aren't different bugs and if not encourage patience, however jobs that start from now on (after mordred's workaround and its fix have landed) should be good to go
19:09:16 <frickler> queens and rocky u-c also install six==1.11.0
19:09:31 <clarkb> oh right, devstack still has some changes to pin virtualenv that need to land, thank you frickler
19:10:11 <fungi> pip also probably still doesn't catch when you incompatibly downgrade a dependency in a separate command
19:10:22 <clarkb> fungi: it does not
19:10:40 <mordred> the main thing that triggers that conflict seems to be pip installing tox on a system where python libs are otherwise installed via distro packages
19:10:42 <fungi> afaik its dependency resolver still doesn't track dependencies for already-installed packages
19:10:43 <clarkb> (that is something the dependency resolver work in pip should address)
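A quick shell illustration of the gap being described, using the versions from this incident (behaviour of pip's legacy resolver at the time, not a claim about current pip):

    pip install 'virtualenv>=20'   # pulls in six>=1.12.0 via install_requires
    pip install 'six==1.11.0'      # a later, separate command downgrades six; pip does not stop it
    python -c 'import virtualenv'  # now fails on an import from the too-old six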
19:10:51 <mordred> so - I think the constraint bump on six fixes the symptom
19:11:14 <mordred> but people should take this as an opportunity to go look at whether they can stop pip installing tox on systems where they install six via packages
19:11:34 <mordred> because that _will_ break again if not addressed
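One hedged way to do what mordred is suggesting is to keep the pip-installed tox (and the six/virtualenv it drags in) isolated from the distro python libs entirely, e.g. by giving tox its own venv; the paths here are only an example:

    python3 -m venv /opt/tox-env            # or virtualenv on platforms without the venv module
    /opt/tox-env/bin/pip install tox        # tox's six lives here, not in the system site-packages
    ln -s /opt/tox-env/bin/tox /usr/local/bin/tox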
19:11:48 <fungi> or switch to the tox-venv plugin if they only test python3
19:11:49 <zbr> and how about systems where tox is not packaged?
19:11:50 <mordred> might mean someone making a good usable rpm for tox even
19:12:11 <clarkb> zbr: then six and tox should probably both come from pypi and not be mixed
19:12:14 <mordred> zbr: yeah - I'm not saying I know what the solution is - just that the people who are in that circumstance should look at the options
19:12:21 <mordred> fungi's is a good one
19:12:22 <zbr> fungi: i wanted to propose we install tox-venv by default with tox, it could help us avoid that .... virtualenv issue.
19:12:23 <clarkb> mordred: ++
19:12:36 <mordred> although tox still has to get installed
19:12:45 <fungi> zbr: i use it heavily for python3-only projects outside openstack
19:12:55 <mordred> so there's still the issue that tox+six exposes
19:13:25 <mordred> I'm not going to dig in to it any further - just more a warning that whoever is in the distro-six+pip-tox situation is _going_ to be broken again in the future
19:13:47 <clarkb> ++ anything else on this? I don't think we should solve all the problems here, but I wanted to do a summary to ensure we all had a rough picture of what was going on
19:13:51 <clarkb> as we are getting many questions
19:13:51 <mordred> ++
19:13:54 <zbr> anyone against adding tox-venv?
19:14:25 <clarkb> zbr: I don't think so but we may have to sort out what that means for python2 ? something to figure out in review likely
19:14:33 <mordred> yeah. I think it's worth exploring
19:14:40 <fungi> it's a good point, `pip install tox` still installs virtualenv whether or not you use it (but if you use tox-venv then virtualenv is not invoked at least)
19:14:42 <mordred> it still might not solve our issue though
19:14:45 <mordred> because of that ^^
19:14:46 <zbr> clarkb: it means nothing, tox-venv on python2 falls back to virtualenv
19:15:12 <clarkb> ya we can debug further after the meeting
19:15:15 <mordred> ++
19:15:19 <zbr> it is clearly documented that on platforms where venv is not available, tox-venv will let tox use virtualenv.
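For reference, enabling the plugin being discussed is just an extra install next to tox (a generic sketch, not a proposed opendev change); once installed it makes tox create python3 environments with the stdlib venv module instead of virtualenv:

    pip install tox tox-venv
    # or declare it per project in tox.ini:
    #   [tox]
    #   requires = tox-venv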
19:15:21 <fungi> but yeah, we can likely move on with the meeting
19:15:44 <clarkb> My other last minute announcement is that I have to pop out of the meeting early in order to get kids from school. If we aren't done at about 19:45 fungi has volunteered to take over the chair duties
19:15:56 <clarkb> Just a heads up that I'll be popping out in about half an hour
19:16:00 <fungi> so let's wrap up in 30 minutes ;)
19:16:09 <fungi> otherwise you have to deal with more of me
19:16:14 <clarkb> #topic Actions from last meeting
19:16:22 <clarkb> #link http://eavesdrop.openstack.org/meetings/infra/2020/infra.2020-02-04-19.01.txt minutes from last meeting
19:16:30 <clarkb> There were no actions.
19:16:36 <clarkb> #topic Priority Efforts
19:16:40 <clarkb> #topic OpenDev
19:17:00 <clarkb> mordred: where did we end up with the gitea upgrade stack (if we ignore that landing changes right now might be difficult)
19:18:01 <clarkb> iirc there was a 1.10.3 upgrade that should be safe, then on top of that a 1.11.0rcX change, and then on top of that a change to roll out master for the git cache
19:18:15 <clarkb> I expect we can land 1.10.3 real soon now, then evaluate test results for the other changes?
19:18:48 <clarkb> (I think mordred is distracted)
19:19:09 <mordred> yes
19:19:13 <mordred> 1.10.3 is safe to land
19:19:23 <mordred> 1.11 I should mark WIP - there are template changes that need to be updated
19:19:42 <mordred> however - we're in good shape broadly because 1.11 and master both at least _build_
19:19:48 <clarkb> #link https://review.opendev.org/#/c/705804/ upgrade gitea to 1.10.3
19:19:57 <mordred> it's worth noting that 1.11 is using npm directly - so we have to install that into the builder image
19:20:26 <clarkb> mordred: nodenv might make that easy then we can rm -rf the virtualenv?
19:20:42 <mordred> clarkb: doesn't matter - it's in the builder image
19:20:48 <mordred> we don't use that image in the final output
19:20:49 <clarkb> oh right we build in a throwaway
19:20:52 <mordred> yup
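The throwaway-builder pattern being described looks roughly like the multi-stage sketch below; the image names, packages and paths are hypothetical, not the actual opendev gitea Dockerfile:

    cat > Dockerfile.sketch <<'EOF'
    # builder stage: node/npm and other build tooling live only here
    FROM docker.io/library/golang:1.13-buster AS builder
    RUN apt-get update && apt-get install -y nodejs npm
    COPY . /go/src/gitea
    RUN cd /go/src/gitea && make build   # runs npm for the web assets

    # final stage: only the built binary is copied over, so npm never ships
    FROM docker.io/library/debian:buster-slim
    COPY --from=builder /go/src/gitea/gitea /usr/local/bin/gitea
    CMD ["/usr/local/bin/gitea", "web"]
    EOF
    docker build -f Dockerfile.sketch -t gitea-sketch .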
19:21:41 <clarkb> even if we don't land those right away we'll at least be ready when those releases are cut
19:21:49 <clarkb> seems worthwhile, thank you for that
19:22:09 <clarkb> (1.12.0 adds the git commit cache to gitea which should speed up rendering time for large repos like nova)
19:22:18 <clarkb> Anything else on the opendev topic?
19:23:27 <clarkb> #topic Update Configuration Management
19:23:27 <diablo_rojo> Have you given thoughts to the docs question I posed in the TC patch?
19:23:32 <diablo_rojo> Too slow lol
19:23:37 <clarkb> #undo
19:23:38 <openstack> Removing item from minutes: #topic Update Configuration Management
19:23:54 <clarkb> diablo_rojo: were there new ones or the ones about user guide?
19:24:24 <diablo_rojo> the one about the infra manual also being moved out and having openstack specific things extracted into the contrib guide
19:24:28 <fungi> i suspect we're all in agreement the infra manual is going to need a bit of an overhaul and splitting/deopenstacking
19:25:39 <mordred> ++
19:25:42 <mordred> dopenstack
19:25:44 <clarkb> diablo_rojo: I guess from openstack's perspective the contributor guide and the openstack-specific bits of infra-manual might be redundant?
19:25:52 <clarkb> diablo_rojo: and so it doesn't make sense to keep infra-manual the repo under governance?
19:25:59 <fungi> a bit of dope'n'stacking yes
19:26:18 <clarkb> if that is the case we can remove it from the yaml file then next time we do a repo renaming move it into opendev/ and work to dopenstack it
19:26:48 <diablo_rojo> they definitely are redundant, which is why I think it would make sense to move the infra manual out, keep the openstack specifics in the contributor guide, and keep the opendev stuff in the infra manual
19:27:01 <diablo_rojo> Agreed :)
19:27:06 <clarkb> diablo_rojo: ok, in that case I'll push up a new ps
19:27:12 <diablo_rojo> Woot!
19:27:14 <clarkb> any objections to ^
19:27:21 <corvus> sgtm
19:27:37 <fungi> i'm on board
19:27:50 <fungi> thanks for the great suggestion, diablo_rojo!
19:28:05 <diablo_rojo> Double woot :)
19:28:07 <clarkb> alright next up is config mgmt
19:28:10 <diablo_rojo> glad I spoke up :)
19:28:15 <clarkb> ++
19:28:17 <diablo_rojo> thanks clarkb!
19:28:22 <clarkb> #topic Update Config Management
19:28:43 <clarkb> mordred: I think despite fires there has been good progress on gerrit ansible+docker?
19:28:48 <clarkb> including addition of LE?
19:29:38 <mordred> yes! fungi made a great comment which I didn't see
19:29:56 <fungi> i'll try to make my comments more visible in the future
19:29:59 <mordred> until earlier today - which is that we should also grab the LE stuff for the .openstack.org versions so we can do the redirects
19:30:19 <mordred> so I've got the LE dns records in place for review and review-dev in rackspace dns
19:30:23 <fungi> cutting over without the review.openstack.org redirect would have been not great
19:30:27 <mordred> and have updated the stack to include the redirects
19:30:38 <fungi> and now it's failing a puppet4 job
19:30:45 <mordred> that stack needs a good recheck after this morning's fun
19:30:51 <fungi> (in case you haven't seen)
19:30:52 <mordred> (eyah - that's a virtualenv based issue)
19:30:58 <fungi> got it
19:31:13 <mordred> https://review.opendev.org/#/c/707214/ <--
19:31:18 <clarkb> I filed a bug upstream in voxpupuli but ^ works around it for us
19:31:19 <mordred> that should be the fix for the virtualenv issue
19:31:34 <clarkb> https://github.com/voxpupuli/puppet-python/issues/534 if curious
19:31:35 <mordred> puppet askbot will also be borked
19:31:47 <clarkb> mordred: can probably use a similar workaround there?
19:31:59 <mordred> probably
19:33:06 <mordred> that said ...
19:33:25 <mordred> assuming the puppet patch works and the gerrit stack goes green - a) it's ready for review :) ...
19:33:32 <clarkb> ++
19:33:34 <mordred> b) we should talk about how we want to do the actual review.o.o
19:33:53 <ianw> i should move that afsmon call to mirror-update.opendev.org anyway
19:33:58 <mordred> (can't remember if I mentioned, but review-dev.opendev.org is running from the tip of the previous rev of that stack - minus the redirects)
19:34:19 <clarkb> mordred: is this on xenial or bionic (the new stuff)
19:34:24 <mordred> xenial
19:34:32 <mordred> because review.o.o is xenial
19:34:33 <clarkb> ok so we don't have to create a new server and cut over if we don't want to
19:34:36 <mordred> nope
19:34:42 <clarkb> but that may still be a good idea just to start clean?
19:35:15 <corvus> do we feel like we need to do the new ip notification dance?
19:35:18 <mordred> nah - I think let's go with what we've got for now - new server and cutover is a bunch more work
19:35:39 <clarkb> mordred: got it and good point corvus (we probably would want to do that at least for a week or two)
19:36:01 <mordred> we've got decent runway on xenial right?
19:36:03 <corvus> in which case, yeah, let's keep the existing :)
19:36:24 <mordred> the container shift should protect us from userland-impacting upgrade needs for a while longer
19:36:27 <clarkb> mordred: ~14 months
19:36:48 <mordred> yeah. that's awesome. we should be able to get firmly onto 3.x and be happy with our container workflow in that time :)
19:37:13 <clarkb> and bionic has 10 years of support so we can just park there and retire :)
19:37:21 <clarkb> alright anything else or should we move on?
19:38:13 <fungi> i got nothin'
19:38:17 <clarkb> #topic General Topics
19:38:30 <clarkb> First up, we've been asked if we will want space at the Vancouver PTG
19:38:42 <clarkb> I plan to be there and expect that fungi does as well
19:38:48 <fungi> yup
19:38:55 <fungi> it's a swell place
19:39:09 <clarkb> if we have more than 2 I'll fill out the form and request a spot
19:39:16 <clarkb> so let me know if you think you'll be there
19:39:25 <fungi> could be opendev or openstack infra i guess
19:39:42 <clarkb> ya I think the overlap will still be large, so I won't try to be specific to one or the other
19:40:12 <mordred> I imagine I'll be there - but I'm still in "I don't want to travel anywhere ever again" mindset ...
19:40:13 <clarkb> I have about 3 weeks to answer so no rush but sooner is probably better for diablo_rojo
19:40:57 <clarkb> Next up is server upgrades. I'll start with a quick refstack update then ianw and fungi can add wiki and static updates
19:41:20 <fungi> mordred: at least it's within the same conference?
19:41:31 <fungi> er, same continent
19:41:40 * fungi struggles with words
19:41:40 <clarkb> With refstack, apparently some on the board are still interested in running that service. The foundation is going to ask around and see if any of that translates to willingness to maintain the software
19:41:45 <mordred> fungi: I did my due diligence on amtrak
19:41:51 <fungi> fair
19:42:02 <clarkb> I pointed out that I don't think it is significant amounts of work, but someone does need to do it.
19:42:11 <clarkb> Until then I'll sit on my dockerified refstack stack
19:42:19 <clarkb> fungi: any wiki updates?
19:42:22 <fungi> yeah, if the board is interested in running the service, then the board should run the service
19:42:47 <fungi> no new updates since just prior to fosdem. i've gotten little done in the post-fosdem hangover fog
19:43:02 <fungi> still need to test plugins now that we have content loading
19:43:57 <clarkb> ianw: any thing new on static?
19:44:09 <fungi> it's basically done except for tarballs, yeah?
19:44:18 <ianw> umm, i wouldn't say so unfortunately
19:44:18 <fungi> also i suspect we didn't actually need to migrate the logs site
19:44:22 <ianw> https://review.opendev.org/#/q/topic:static-services
19:44:27 <ianw> is ready for review
19:44:45 <clarkb> ah ok need the modified afs vos release toolchain
19:44:46 <ianw> there are changes to publish tarballs to AFS in parallel
19:45:08 <ianw> i have also proposed as clarkb mentions some changes so we can make a dashboard for vos release
19:45:34 <ianw> because i'm a little concerned this will be the biggest volume by far in that path
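For context, the number ianw wants to watch boils down to timing something like the following on a host with AFS admin credentials (the volume name is only an example):

    time vos release project.tarballs -localauth -verbose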
19:45:36 <fungi> clarkb: have you #chair'ed me? you need to scoot now, right?
19:45:42 <clarkb> ya I need to scoot
19:45:45 <clarkb> #chair fungi
19:45:46 <openstack> Current chairs: clarkb fungi
19:46:00 * fungi cackles maniacally with power
19:46:07 <clarkb> thanks! and I'm out
19:46:43 <fungi> i agree, having good stats around vos release timing should be helpful
19:46:49 <ianw> there's still releases after this, and then a few other straggler sites
19:47:21 <fungi> though hopefully the updates to that volume are not high-impact, as they're generally incremental and append-only
19:47:34 <fungi> (tarballs i mean)
19:47:56 <ianw> yep -- i certainly hope that's the case and we see no issues at all :)
19:48:01 <fungi> it's not like the mirror volumes where we add and delete (far more) data
19:48:57 <ianw> there's also the option to serve directly from the r/w volume, AIUI at the cost of much more network traffic ... but given the loads on the server it may be acceptable
19:49:10 <fungi> also i'm complicating matters by adding docs.airshipit.io into the mix
19:49:47 <ianw> anyway, i'll chase up on things as the virtualenvopolypse settles
19:49:53 <fungi> but it's at least helping me better understand the tooling around all of this
19:50:07 <fungi> i should be able to review the rest of the vos release stack after the meeting
19:51:30 <ianw> thanks; happy to babysit it as appropriate
19:52:12 <fungi> okay, on to...
19:52:14 <fungi> New Arm64 cloud (ianw 20200211)
19:52:59 <ianw> i think that is, as they say, fully operational now
19:53:28 <ianw> we had some teething problems with routing and storage, but we migrated nb03 to the new region and it appears to be working
19:53:30 <fungi> i appreciate the death star reference
19:54:02 <ianw> kevinz fixed ceph on the london side so our orphaned images disappeared yesterday afternoon my time
19:54:12 <mordred> woot
19:54:40 * corvus tries to remember tarkin discussing teething problems
19:54:40 <fungi> though an upshot of that is we discovered an unresponsive provider will block the image cleanup thread from processing old images in other providers on the same builder
19:54:54 <fungi> corvus: the "fully operational" line
19:55:01 <ianw> so, i'm not aware of any outstanding issues
19:55:16 <corvus> fungi: i'm going to pursue the teething line of inquiry
19:55:38 <fungi> moff corvus
19:55:39 <ianw> note we have "check-arm64" for anyone who would like to add jobs but is a bit shy to put them in their main gate
19:56:07 <ianw> i think we can promote that more now that we have a bigger cloud, and as it proves itself people can promote jobs to main gates
19:56:22 <fungi> what's the rough quota now?
19:56:25 <corvus> this is our second region, right?
19:56:35 <fungi> second but, as i understand, much larger?
19:56:43 <ianw> yes, i think it's 8 nodes in london, and about 48 in US
19:56:57 <fungi> that is substantial, yes (6x)
19:57:18 <fungi> so lon may as well not exist, by comparison
19:58:07 <fungi> if we see significant uptake and then usa falls offline, lon isn't going to make much of a dent in the backlog
19:58:49 <fungi> we have like a minute to talk about the airship cloud
19:59:00 <fungi> and clarkb isn't here to fill us in
19:59:05 <fungi> but sounds like it's up and in use
19:59:17 <fungi> there's been some confusion over the naming
19:59:42 <fungi> to be clear, it's resources provided by ericsson, at the request of airship, by purchasing capacity in citycloud
19:59:57 <fungi> so ultimately it's in citycloud, hence the name
20:00:18 <fungi> though it's supposed to also provide some general resource quota for our overall pool too
20:00:43 <fungi> in addition it has some very large (including 32gb) nested virt flavors
20:01:04 <fungi> which airship requested for their integration testing
20:01:21 <fungi> and we're over time
20:01:37 <fungi> i can entertain questions (as far as i'm able) in #openstack-infra though
20:01:40 <fungi> thanks all!
20:01:46 <fungi> #endmeeting