19:01:05 #startmeeting infra
19:01:06 Meeting started Tue Apr 9 19:01:05 2019 UTC and is due to finish in 60 minutes. The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:10 The meeting name has been set to 'infra'
19:01:12 #link http://lists.openstack.org/pipermail/openstack-infra/2019-April/006308.html
19:01:30 #topic Announcements
19:01:47 I don't have anything to announce
19:01:59 #topic Actions from last meeting
19:02:07 #link http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-04-02-19.01.txt minutes from last meeting
19:02:29 ianw was going to talk to fedora about their i18n plans. This happened and there is a separate meeting agenda item for it
19:02:47 ianw: do you want to talk about that now or should we continue on and get to it when it comes up in our agenda runthrough?
19:03:10 clarkb: let's just talk at the end and do more important stuff first
19:03:14 ok
19:03:20 #topic Specs approval
19:03:33 Unsurprisingly I think we've been largely heads down on the opendev and LE specs so nothing new here
19:03:41 #topic Priority Efforts
19:03:59 I guess we get to dig straight into the fun topics today :)
19:04:04 #topic Update Config Management
19:04:32 The puppet-4 upgrades have slowed down due to the openstack release this week. I plan to pick that up again on thursday/friday time permitting
19:04:47 we have been able to clean up some of our ansible around puppet installation and configuration which is nice
19:05:09 On the docker side of things our docker image build jobs don't work with ipv6 because docker doesn't work with ipv6
19:05:32 https://review.openstack.org/651353 is a possible fix but I'm not super confident in it as it assumes skopeo doesn't have the same issue that docker has
19:05:44 * mordred has faith in skopeo
19:05:55 #link https://github.com/moby/moby/issues/39033 Upstream docker bug
19:06:23 mostly a heads up that we may see these jobs fail until they are fixed, and if you have spare cycles to review that potential fix it will likely be helpful
19:06:34 the parent change explains some of the problems in more detail
19:07:57 pabelanger and dmsimard have been poking at the zuulcd jobs again and I approved pabelanger's fix for the current/previous issue
19:08:20 if that fixed it in general I think we should pivot the existing job away from doing zuul reloads to configuring gitea and/or the nameservers, since gitea and the nameservers are not managed by puppet
19:08:33 I am hoping to dig into this particular thing this afternoon
19:08:58 anything else to add on this topic?
19:08:58 ++
19:10:29 Sounds like no. Onward!
19:10:33 #topic OpenDev
19:11:02 https://review.openstack.org/#/c/651268/ is on its way in to test our cgit to gitea redirects
19:11:29 ianw notes it is ok to merge git -> https protocol changes. I approved a couple yesterday
19:12:12 One thing I realized yesterday was that we should confirm that new project creation in gitea works without needing fixups. I think things look ok but I haven't been able to check a new project with multiple branches to ensure the default branch = master fix is working
19:12:13 worth noting, i use a /etc/hosts addition like the following for testing the legacy site redirects:
19:12:20 2001:4800:7817:103:be76:4eff:fe04:e3e3 git.airshipit.org git.openstack.org git.starlingx.io git.zuul-ci.org
19:12:36 fungi: I expect you'll be passing that along to the various projects as well?
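A rough sketch of the local testing described above, assuming the /etc/hosts entry is in place; the repository path is illustrative rather than taken from the discussion:
    # a legacy cgit URL should answer with a redirect to the new host
    curl -sI https://git.openstack.org/cgit/openstack/nova | grep -i '^location:'
    # cloning through the legacy name should keep working, since git follows HTTP redirects
    git clone https://git.openstack.org/openstack/nova /tmp/redirect-test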
19:12:41 clarkb: I cannot speak for projects with multiple branches - but the three projects I just created in gitea all worked, so that's good
19:12:55 mordred: yup projects with just master branches look great
19:12:56 clarkb: oh, i just wanted to confirm if it was ok if i merged those as we discussed
19:12:56 i was hoping to give dtroyer and jroll and ... (who's our airship contact?) a heads up now
19:13:13 * jroll pops head up
19:13:15 * dtroyer raises head… :)
19:13:15 ianw: oh got it. Ya I think so
19:13:20 * mordred hands jroll and dtroyer pies
19:13:26 ianw: the git:// protocol is going away :)
19:13:30 butterscotch!
19:13:57 jroll: dtroyer about half an hour after https://review.openstack.org/#/c/651268/ merges you'll be able to test our redirects from cgit to gitea
19:14:02 yep, ok, so i'll probably do that thu or fri .au time, get them out of the way (for any that don't have comments ... and maybe one or two have merge failures)
19:14:40 cool
19:14:44 jroll: dtroyer 23.253.125.17 is the ipv4 address of the host if you can't use the ipv6 rule fungi pasted above
19:14:49 jroll: dtroyer: see the /etc/hosts entry i mentioned above for how i recommend performing local testing
19:15:04 so I should just be able to drop that hosts file entry and clone some things?
19:15:06 also noted in a review comment on that change
19:15:13 jroll: that and browse urls yup
19:15:20 yes, cloning and browsing should both be supported
19:15:23 gotcha, thanks!
19:15:34 excellent, thank you
19:15:38 I'll do that tomorrow
19:15:40 technically the git.openstack.org one has been in place for testing for a couple weeks already
19:16:08 that change is adding the necessary fixups to also support git.airshipit.org, git.starlingx.io and git.zuul-ci.org
19:16:26 on the testing front, the other gitea thing I wanted to make sure we have tested is that the gitea side redirects work (at least generally)
19:16:33 mordred: fungi ^ do you know if we've done that yet?
19:16:58 yes - I have tested broadly speaking that gitea-side redirects work
19:17:00 when performing repository renames and namespace moves? i believe that was tested already
19:17:09 great
19:17:25 I know corvus did some work on them and found the inter-org redirects didn't work as is, and then he got a patch merged to fix that
19:17:29 and I think corvus sent a patch upstream to improve cases where a redirect redirected to a redirect or something crazy like that
19:17:37 yeah
19:18:04 seems like things are coming together and it's mostly down to writing a script/playbook/something to do the actual changes in gerrit and gitea?
19:18:07 it was if you renamed a project to the old name of another project
19:18:18 (which is now fixed, i gather)
19:18:45 yes, i'm on the hook to get the in-repo edits scripted and tested, but happy to take help
19:18:59 * mordred offers fungi help by cheering him on
19:19:16 fungi: feel free to reach out when you've got some tasks that you can point me at.
19:19:18 fungi, fungi, he's our man, if he can't do it ... we have no backup options!
19:19:37 (yeah, also happy to actually help)
19:19:53 as discussed last week, this will be both the git.openstack.org to opendev.org edits as well as renaming references to the repositories and their namespaces in those repositories and other repositories which may mention them
19:20:02 fungi: while we're on the topic, do you want/need anything from me for scripting the no-longer-openstack-namespace renames?
19:20:03 clarkb: thanks, will do
19:20:04 I'm picturing that with pom-poms
19:20:26 I always picture mordred with pom-poms :D
19:20:52 jroll: fungi on that topic I think we should pick a day to freeze project changes. Say starting Thursday after the openstack release?
19:20:58 it explains the typos
19:21:28 maybe we don't need to be so conservative, but that gives us a chance to generate accurate reviewable lists that can be reviewed without wondering "where does this random new project go"
19:21:36 clarkb: as in code changes? or changes to the project list?
19:21:36 I assume the latter
19:21:59 jroll: ya changes to the project list. So that on the 19th we don't have to figure out where openstack/new-thing-from-yesterday goes
19:22:19 clarkb: I don't see any problem freezing that for a week
19:22:19 jroll: maybe at least a clear plan... i'm imagining something like: leave anything mentioned in openstack governance projects.yaml in the openstack namespace (except for infra and qa projects which are in openstack-infra and openstack-dev?) and then move everything else to a not-openstack namespace (i had some new ideas to float, btw)
19:22:40 oh, also probably repos for sigs and the governance repos themselves would remain in the openstack namespace?
19:22:46 would be good to have all that spelled out though
19:23:06 fungi: ok, maybe we can chat tomorrow sometime?
19:23:18 happy to, but maybe not in the middle of release activity
19:23:28 after 15:00z should work for me
19:23:42 ah right. friday would also work, since the release will be done by then :)
19:24:06 catch up with me when you have time
19:24:11 will do, thanks
19:24:33 how's the plan with ipv6 for opendev.org?
19:24:44 not that we have many changes but I'll announce that freeze to stx also…
19:25:14 yeah, airship just proposed a new repo earlier today, so there is some activity from the pilot projects as well
19:25:21 jroll: fungi: when you've got a plan sorted out we should start sending periodic communication reminders
19:25:29 yes, definitely
19:25:32 ++
19:25:38 frickler: I think to start we aren't going to do ipv6?
19:25:57 frickler: then when mnaser is able to deploy ipv6 in that cloud we'll update things for ipv6?
19:26:35 I think we're a week or two away, sorry for the delay, carriers can kinda suck :)
19:26:35 that shouldn't break anything (we already have jobs using ipv4-only github repos from our ipv6-only clouds)
19:26:37 clarkb: I was hoping that that was going to happen earlier
19:26:49 and jobs should use the zuul provided repo
19:26:58 rather than talk to opendev.org directly
19:27:13 they've blocked us because of router issues apparently but they've patched it and ipv6 will arrive natively without any changes
19:27:30 mnaser: our servers will start getting RAs one day?
19:27:41 clarkb: yup pretty much
19:27:46 neat
19:27:50 so that should be transparent as long as you have autoconf enabled
19:28:08 frickler: ^ so we should be able to add AAAA records as soon as our servers notice the RAs and configure themselves
19:28:22 that will be pleasing
19:28:29 might also need to restart haproxy and gitea to start listening on the new addr depending on how they are configured
19:28:45 o.k., two weeks doesn't sound too far in the future
19:29:11 hopefully we don't have any contributors/users hitting those from v6-only parts of the internet with no 6to4 nat available
19:29:50 Alright anything else before we move on to storyboard?
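A minimal sketch of how the ipv6 arrival described above might be verified once the provider starts sending RAs; the interface name eth0 is an assumption:
    # a global SLAAC address should appear automatically alongside the ipv4 one
    ip -6 addr show dev eth0 scope global
    # confirm router advertisements and autoconfiguration are being accepted
    sysctl net.ipv6.conf.eth0.accept_ra net.ipv6.conf.eth0.autoconf
    # once the address is present, add the AAAA record and restart haproxy/gitea
    # if they bind to explicit addresses rather than the wildcard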
19:31:04 #topic Storyboard
19:31:35 fungi diablo_rojo sorry I've been very swamped with the other priority efforts and server upgrades and haven't had time for storyboard lately
19:31:46 you and me both, i'm afraid
19:32:02 Looks like mkarray is working to improve mysql queries which is good
19:32:12 * mordred also apologizes
19:32:45 yes, database optimization is sorely needed for the storyboard.openstack.org deployment in particular
19:33:11 observe the time it takes to load, say, the story for our opendev git/gerrit transition
19:33:17 which is not an overly massive one
19:33:44 any idea where we are with outreachy?
19:33:56 I think diablo_rojo was particularly involved in ^
19:34:03 #link https://storyboard.openstack.org/#!/story/2004627 OpenDev Gerrit Hosting
19:34:53 Alright any other storyboard topics? Our general topics list is reasonably large so we should keep moving if not
19:35:06 yeah, i think we can move along
19:35:13 #topic General Topics
19:35:29 #link https://www.openstack.org/ptg#tab_schedule PTG Schedule has us Thursday and Friday
19:35:37 as a reminder we've got Thursday and Friday at the PTG
19:35:57 One thing I've struggled with, given all the rather large changes we've been making recently, is figuring out a ptg schedule
19:36:22 One idea I had was to do it more unconference style. The other was don't worry about it until the week before the PTG
19:36:44 If you plan to attend, any sense for whether or not waiting until the week before to write down a schedule will be a hindrance?
19:36:57 it's just hard to predict what will be important after the opendev switch
19:37:15 i tried the unconference method for our first couple of ptgs. it was a good way for me to deflect attention from my abject lack of planning ;)
19:37:41 we also don't have the dedicated helproom time
19:37:53 but I figure I'll be around saturday and can help with those things then
19:38:22 I'm not hearing screaming about that so for now I'll continue to not worry about it until after the opendev switch on the 19th :)
19:38:38 Next up is letsencrypt progress
19:38:48 ianw any luck with the staging service on graphite?
19:39:43 clarkb: need to look soon, got stuck yesterday with ansible stopping due to the puppet deb thing
19:39:46 #link https://review.openstack.org/#/c/651055/ Really letsencrypt graphite.o.o
19:40:08 ok I'll review ^ after the meeting and once the puppet deb fix has merged
19:40:13 oh yeah, that going in will help too, thanks
19:40:18 Next is Trusty server upgrades
19:40:28 #link https://etherpad.openstack.org/p/201808-infra-server-upgrades-and-cleanup
19:40:36 I'd like to do the lists.o.o upgrade Friday
19:40:42 #link https://etherpad.openstack.org/p/lists.o.o-upgrade-notice Upgrade lists.o.o Friday
19:41:09 if y'all want to read over that notice ^ I'll send it out in a bit
19:41:28 Basically it says we need to upgrade due to ubuntu lts support cycles and that the outage should be minimally visible
19:41:42 I've tested the upgrade with notes linked to from the first etherpad there
19:41:57 Anyone know of a reason to not do it this friday other than our mail guru is hiking in the desert?
19:42:24 that would be the primary reason - but we're _probably_ fine
19:42:43 ya I tested that our vhosting of mailman works after the upgrade which was my biggest concern
19:42:43 I'm pretty sure at this point you're a mailman expert :)
19:43:00 mordred: that was fun debugging for sure
19:43:17 I'm a little surprised you didn't rage-rewrite it in haskell
19:43:24 Once lists.o.o is upgraded we are left with static, status, ask, refstack, and wiki
19:43:38 fungi is working on wiki I think. which means we need volunteers for the other 4
19:43:48 if you are able to please grab one :)
19:43:55 I can look at status
19:44:26 Also I spoke with people on the foundation side about groups going away and that is still the plan and they are aware of it
19:44:37 so no change there (which is good, it means we can delete it)
19:44:59 And that takes us to Fedora Zanata plans (and mulling our own i18n plans)
19:45:06 ianw: ^ thank you for following up with fedora on that
19:46:10 It sounds like they are moving to another tool: weblate
19:46:38 a python django app that hopefully doesn't present much trouble for us to run if we want to move to it too
19:46:41 which, according to AJaeger, is used by (some of) suse?
19:46:43 but I haven't looked at the details yet
19:47:00 yeah, i got no concrete answers, but weblate seems ok
19:47:19 One of the big features our translators want is translation memory. I expect if we don't lose that they will be happy
19:47:30 i had a bit of a play, it seems like it could integrate in a similar way to zanata
19:48:00 one thing i wanted to float was that, if this is communicating by being notified of changes via webhooks in a post job, and proposing updates back via gerrit
19:48:10 ... the hosted options look pretty compelling
19:49:11 sure, our beef with transifex was that their hosted service wasn't the same as the software they published, and they eventually ended up dropping their gratis offering entirely
19:49:19 looks like they negotiate hosted service for open source software
19:49:20 yah
19:49:27 and there was no hosted zanata
19:49:33 so we hosted that ourselves
19:49:40 so we shouldn't assume we'll be able to negotiate a favorable setup but it certainly seems worth reaching out to them
19:50:01 yeah, this isn't run by $MEGACORP, seems committed to open source and it seems like it would be mutually beneficial
19:50:06 anyway, just wanted to mention that
19:50:23 and they list translation memory as a feature on the hosted option so it checks that requirement
19:50:34 ianw: my biggest concern would be its ability to propose changes to gerrit (since lots of things that integrate with git assume that they're going to just push to git) ... but assuming engineering can be sorted, I wouldn't have an issue with a hosted thing
19:50:35 we may also lack for configurability/flexibility if we don't host it ourselves, but that might be okay. one thing we lost when dropping transifex was the existing community of translators who would just pick up random projects
19:51:02 mordred: yeah, it has gerrit plugins, and uses git-review underneath. it's not well documented, but it's an option
19:51:08 oh wow
19:51:09 fungi: I think transifex shared memory across all their hosted projects too
19:51:16 fungi: so you'd get free translations for common strings
19:51:24 ianw: nice
19:51:28 so cool - yeah - that sounds like a potential win
19:51:55 I wonder what the proper way to reach out there is.
19:51:55 I think reed did the transifex "bargaining" in the past
19:52:05 i'm entirely fine with recommending gratis services we don't operate when they're running free software themselves
19:52:15 clarkb: I'm sure jbryce has tons of free time
19:52:18 maybe I send them a nice email and cc frank/ianychoi/ianw and jbryce?
19:52:31 clarkb: ++ seems like a good start
19:52:40 mordred: exactly :) I'll see if there is any concern on the foundation side as they may have to handle any official contracting
19:53:16 clarkb: also - if we can figure it out generally, it would be a nice integration/partnership for opendev overall potentially
19:53:18 sure; there will definitely be engineering but the nice thing is it could all happen in parallel; no need to shut zanata down to experiment with it
19:53:28 ++
19:54:00 sounds good. Thanks again for digging into that ianw
19:54:07 #topic Open Discussion
19:54:52 Yesterday I put together https://etherpad.openstack.org/p/infra-backlog-cleanout to collect a list of infra related changes that we could clear out of our backlog easily
19:54:59 I've added and removed things as changes merge
19:55:16 feel free to add things if you notice they need attention
19:55:29 my next step was to try and figure out dashboarding of that so you could just get it at a gerrit url
19:56:04 Also remember we are trying to be slushy to avoid causing problems for the openstack release tomorr
19:56:07 *tomorrow
19:57:07 I can be slushy
19:57:16 * anteaya looks out the window
19:57:34 speaking of slushy, the weather in denver today is like 20C; tomorrow they have a blizzard warning
19:57:38 it's a bit too warm here to be slushy
19:57:56 If the weather is good when we are there I think it would be fun to go back to the beer garden :)
19:58:01 it's 31C here today
19:58:12 wow, shorts weather
19:58:19 but clearly that can't be sorted until we are closer to the summit/ptg
19:58:25 it's maple syrup weather here
19:58:27 though as we come out the other end of the openstack release, it might be good to merge dmsimard's gerrit replication config change and then restart gerrit with that so we can test with it before the opendev maintenance
19:58:34 fungi: ++
19:58:44 I think thursday might be a lot less slushy :)
19:58:54 I'd like to get some puppet-4 upgrades in on thursday if the release team is happy with where they are
19:59:11 And we are at time. Thanks everyone
19:59:14 #endmeeting
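A hypothetical illustration of the "backlog list at a gerrit url" idea mentioned in open discussion; the dashboard title and section queries below are invented, not taken from the etherpad:
    # Gerrit custom dashboards can be expressed entirely in a URL:
    # title, a foreach base query, then one query per named section
    https://review.openstack.org/#/dashboard/?title=Infra+Backlog&foreach=status:open&Puppet+4=topic:puppet-4&OpenDev=topic:opendev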