19:01:27 #startmeeting infra
19:01:28 Meeting started Tue Feb 20 19:01:27 2018 UTC and is due to finish in 60 minutes. The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:32 The meeting name has been set to 'infra'
19:02:19 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:26 o/
19:02:37 will wait for a few more to wander in before starting with announcements
19:02:45 o/
19:02:46 o/
19:03:07 o/
19:03:17 #topic Announcements
19:03:27 Mostly PTG related this week.
19:03:36 next week many of us will be at the PTG
19:03:45 #link https://etherpad.openstack.org/p/infra-rocky-ptg PTG Brainstorming
19:03:53 it's not too late to go over the topics or propose more ^
19:04:00 #link https://ethercalc.openstack.org/cvro305izog2 Rough PTG Schedule
19:04:13 * fungi is actually around, just got done with internet service tech, all wired up again
19:04:16 I've got a rough schedule going there to help people that will be changing rooms in particular
19:04:32 please look those over if you will be attending to make sure it looks ok to you
19:05:17 It will be 7pm in Dublin during the infra meeting next week, which is dinner time, so I am going to cancel the meeting next week. If you won't be in Dublin and have stuff to talk about, feel free to use the time, but don't feel compelled to either :)
19:05:32 wfm
19:06:08 and finally, if you will be joining us in Dublin, pabelanger is putting together a team dinner plan which you can sign up for at https://ethercalc.openstack.org/pqhemnrgnz7t
19:06:16 #link https://ethercalc.openstack.org/pqhemnrgnz7t PTG team dinner sign up
19:07:09 #topic Actions from last meeting
19:07:18 #link http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-02-13-19.01.txt minutes from last meeting
19:07:42 I'm really bad and still haven't cleaned out old specs. It seems like such a low priority item compared to fixing timeouts in zuul and helping make zuul status work again :)
19:07:47 #action clarkb actually clean up specs
19:08:01 ianw on the other hand totally got a ppa going for the afs packages we are using now on ubuntu hwe and arm kernels
19:08:24 even running in production!
19:08:28 ianw was able to build a mirror host in linaro's cloud using our puppetry using that
19:08:34 very exciting
19:08:57 yep, seems to be ok
19:09:25 also I don't think we've seen an oom on the zuul executors since upgrading their kernels (which required the afs ppa on x86 too)
19:09:28 wow!
19:09:35 so thank you for getting that done
19:09:45 (on the arm64 mirror puppeting, but also yay on lack of oom)
19:09:56 which is a good transition into talking about zuul
19:09:57 yah, OOM is much better
19:10:07 #topic Priority Efforts
19:10:08 yeah, looks like they're all running (just checked)
19:10:15 #topic Zuul v3
19:10:32 I don't think there are any current fires, but it might be worth a general status update as a few things have changed in the last week
19:10:42 quick update from meeting yesterday: https://storyboard.openstack.org/#!/board/53
19:10:49 we have 4 things to do before release
19:11:01 probably not going to happen this week. maybe at ptg? or maybe 2 weeks after...
19:11:20 so close
19:11:26 (i think several of us are scattered during week-after-ptg, so unlikely to release that week)
19:11:49 the post-timeout landed, as did host-vars and group-vars
19:12:00 i think maybe we should send an openstack-dev email about those things?
19:12:06 ++
19:12:29 memory is also down this week! Thanks for that corvus
19:12:37 yay!
19:12:46 cpu down too also, no ?
19:12:59 yeah, we've landed one small memory improvement, raised cpu in the process, but then landed a fix to that.
19:13:08 down with resource utilization, up with servers!
19:13:15 so the upshot is that we're using significantly less memory, and a tiny bit more cpu at the end of the day
19:13:25 (a dynamic config generation now takes 12s, up from 10s last week)
19:13:30 +1
19:13:52 i haven't actually started on the things i planned to do to really reduce memory usage
19:14:07 the thing so far was just something i noticed along the way during refactoring
19:14:49 i'm planning on getting the marginal memory cost for a dynamic config to be in the range of several kb
19:15:17 but that's still back-seat to release blockers
19:15:35 corvus: still a great improvement ;)
19:15:59 corvus: do you want to send email about timeout and vars changes?
19:16:08 (I'm not sure I grok the vars changes well enough to do them justice)
19:16:34 andreaf is sending out emails about the devstack-tempest job... probably good for folks to keep an eye on those, it's kind of a top-tier job so it's good if we can all help make it as good as possible and set a good pattern
19:16:39 clarkb: okay, i can do
19:17:35 that's all i can think of off the top of my head
19:17:39 yeah, i wasn't paying attention on the vars changes either, so recap would be swell
19:18:02 oh, probably everyone knows about the json change
19:18:04 i got it was something that came up in the course of trying to generalize the devstack/tempest job configs
19:18:15 but we've dropped ".json" suffixes from the zuul api
19:18:48 so if anyone reports issues related to that, let them know. the dashboard is updated, but folks may need to reload js if they have trouble with that
19:18:52 i can include that in email too...?
19:19:05 ahh, yes i expect everyone who was going to be surprised by the status.json->status change is now done being surprised
19:19:31 but can't hurt to mention
19:19:47 ya we've had a couple people ask about it this morning and a refresh did sort them out
19:20:00 (part of the reasoning for that is to restore the feature to just fetch status for a single change)
19:20:22 oh, also, see this tc resolution: https://review.openstack.org/545065
19:20:29 which is needed for any upcoming attempts to retry status embedded in gerrit change views
19:20:42 (the per-change status queries, that is)
19:20:42 fungi: yep. we'll also use it in the github status link.
19:21:21 #link tc resolution about reporting on github https://review.openstack.org/545065
19:21:38 #link infra manual update about reporting on github https://review.openstack.org/545077
19:22:11 [ok really out of things now :)]
19:22:30 after some initial explaining, tc reception of the resolution seems to have been positive
19:22:41 yeah, it's been a good conversation
19:24:24 alright, anything else zuul related?
19:26:07 sounds like no
19:26:16 #topic General Topics
19:26:55 ianw: any other aarch64 updates worth sharing?
19:27:13 yeah, if i could get some eyes on
19:27:17 #link https://review.openstack.org/#/c/546025/
19:27:36 that just adds the ability to have a specific config for a nodepool builder, as discussed in the email thread
19:27:58 with a config
19:28:00 #link https://review.openstack.org/546027
19:28:18 that should allow me to hand-deploy our arm dib changes on a builder and see if we get a .qcow out the other end
19:28:36 once we have that, we can try launching something
19:28:49 :)
19:29:47 * clarkb has added them to the review list
19:30:33 Just a quick reminder that March 16th is the current planned day for project renaming
19:30:59 #topic Open Discussion
19:31:02 anything else?
19:33:05 ubuntu-bionic mirrors are online now
19:34:21 pabelanger: did we figure out why that seemed to cause problems for the xenial mirror?
19:34:33 like maybe we synced in the middle of an update to xenial?
19:34:59 clarkb: yah, it seems the xenial mirror had been stale since 02/05, but I didn't notice it at first
19:35:26 then, my reprepro update process deleted the older references, so when we did vos release, everything in the gate got deleted (old packages)
19:35:52 so, for next time, the manual mirror process shouldn't delete old stale packages until the first vos release is done
19:36:01 then we can mirror as normal to remove the stale packages
19:36:30 ahh, so it was the usual stale caches taking too long to update during a large vos release
19:37:08 yah, the bionic import took a day or so, and about 2 hours to release
19:37:19 or jobs had done apt-get update and then the vos release replaced packages out from under them before (or during) an apt-get install run
19:37:26 we've also got to add in aarch64 too right? ianw ^ is that done?
19:37:28 yes, that
19:37:43 clarkb: no, that's in ports so will need some different handling i think
19:37:54 oh right, unlike debian it's a separate repo source
19:38:44 In other contexts, I found it useful to have a config file that determined hostname-by-architecture for mirrors, as a) different distros do it differently, and b) different releases sometimes do it differently for certain distros (Ubuntu having done that once in the past).
19:39:11 My scripts read that file to determine the right mirror host to use.
19:40:31 persia: i guess we more or less have that in https://git.openstack.org/cgit/openstack-infra/openstack-zuul-jobs/tree/roles/mirror-info ?
19:40:45 in this case we'd probably mimic ubuntu and have a $host/ubuntu-ports/ path?
19:41:11 wfm
19:41:38 ianw: That would be the place, although at first glance, I don't see how hostname or pathname differs by architecture.
19:42:42 no it doesn't, yet
19:43:25 one of the "nice" things about being single architecture for so long is you can ignore that until you can't. I remember when we first added x86 solaris at a previous job; then all our nfs-mounted gnu tools broke :)
19:44:14 also, https://review.openstack.org/497948/ if people want to help make our mirrors more reload friendly
19:44:23 right now, each time we make vhost changes, we break jobs
19:44:31 since we stop / start apache
19:44:42 #link https://review.openstack.org/497948/ make apache vhost change application more graceful
19:44:52 (keep in mind that means we might have to out of band restart apache for certain changes)
19:45:29 anything else before I stop the meeting 15 minutes early?
19:45:34 yeah, adding/removing modules still needs to notify service['httpd'] instead
19:45:58 yah, this should just affect vhost changes, for reload. everything else should still restart
19:48:02 alright, calling it then. Find us in #openstack-infra or on the infra mailing list if there are additional items to discuss. See some of you at the PTG!
19:48:05 thanks everyone
19:48:09 #endmeeting
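
For anyone following up on the status.json -> status discussion above, below is a minimal sketch of a per-change status query against the Zuul API. The base URL and the status/change/<number>,<patchset> path are assumptions for illustration based on the discussion, not a documented contract; adjust both to match the actual deployment.

#!/usr/bin/env python3
# Minimal sketch: fetch Zuul status for a single change.
# BASE_URL and the endpoint path are assumptions for illustration only.
import json
import sys
import urllib.request

BASE_URL = "https://zuul.openstack.org"  # assumed deployment URL


def fetch_change_status(change, patchset):
    # Assumed per-change endpoint shape: /status/change/<number>,<patchset>
    url = "{}/status/change/{},{}".format(BASE_URL, change, patchset)
    with urllib.request.urlopen(url) as resp:
        return json.loads(resp.read().decode("utf-8"))


if __name__ == "__main__":
    change, patchset = sys.argv[1], sys.argv[2]
    print(json.dumps(fetch_change_status(change, patchset), indent=2))

Usage would look something like: python3 fetch_status.py 545065 1 (change number and patchset here are placeholders).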