19:00:25 <fungi> #startmeeting infra
19:00:25 <openstack> Meeting started Tue Dec  8 19:00:25 2015 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:29 <openstack> The meeting name has been set to 'infra'
19:00:30 <AJaeger> \o/
19:00:34 <fungi> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:00:38 <fungi> we have a _very_ full agenda so i'm going to timebox the topics and rearrange the order a little to make sure we hit scheduling-critical discussions
19:00:41 <SotK> o/
19:00:42 <fungi> if i cut you off in the middle of discussing something, please don't take offense and make a note to continue with it on the mailing list or in the infra channel after we're done
19:00:47 <fungi> now, on with the show!
19:00:58 <fungi> #topic Announcements [timebox 1 minute, until 19:01]
19:01:07 <fungi> #info Reminder: Gerrit 2.11 upgrade is now scheduled for Wednesday of next week, December 16, 17:00 UTC.
19:01:15 <fungi> #link http://lists.openstack.org/pipermail/openstack-dev/2015-December/081037.html
19:01:23 <jhesketh> o/
19:01:23 <fungi> #topic Actions from last meeting [timebox 1 minute, until 19:02]
19:01:27 <cody-somerville> \o_
19:01:28 <asselin_> o/
19:01:29 <clarkb> hello
19:01:32 <fungi> #link http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-01-19.01.html
19:01:37 <dhellmann> o/
19:01:40 <fungi> nibalizer: send announcement about rescheduled gerrit upgrade maintenance
19:01:48 <fungi> completed, see above
19:02:02 <fungi> #topic Specs approval [timebox 1 minute, until 19:03]
19:02:11 <fungi> PROPOSED: Complete the reviewable release automation work (dhellmann)
19:02:16 <fungi> #link https://review.openstack.org/245907
19:02:20 <pabelanger> o/
19:02:23 <fungi> #info Voting is open on the "Complete the reviewable release automation work" spec until 19:00 UTC Thursday, December 10.
19:02:28 <ruagair> I/
19:02:51 <dhellmann> fungi : I'm not sure what the process is, but if you want to discuss it in the meeting I'm here for that
19:03:00 <dhellmann> otherwise comments on the spec work, too
19:03:06 <fungi> dhellmann: if we weren't so short on time, yes
19:03:12 <fungi> but in this case, comments on the review please
19:03:15 <dhellmann> fungi : understood
19:03:18 <fungi> #topic Mid-cycle sprint for infra-cloud in Ft. Collins proposal (pleia2, jhesketh) [timebox 10 minutes, until 19:13]
19:03:24 <fungi> during the mitaka priorities discussion in tokyo, it was pointed out that infra-cloud is one of the biggest efforts we've undertaken as a team, and that organizing a mid-cycle sprint around it would be useful
19:03:31 <fungi> the timing is convenient since it's just after hpcloud sunset and we've got the possibility of inheriting a fair amount of replacement hardware/upgrades from that
19:03:37 <fungi> pleia2 has done an awesome job (along with wendar and purp) of negotiating access for contributors to tour the facility where our west region of infra-cloud is housed, so this makes hp's fort collins site an excellent location for a sprint
19:03:44 <fungi> late february, say the last week in february, would probably work out best? it doesn't conflict with lca and so far i don't see any february sprints scheduled in the wiki
19:03:52 <fungi> #link https://wiki.openstack.org/wiki/Sprints#Mitaka_sprints
19:03:57 <fungi> jhesketh has offered to take point coordinating logistics for this
19:03:57 <jeblair> ++
19:04:43 <anteaya> last week of feb also misses ansible fest dates according to the dates mordred posted in a prior meeting
19:04:48 <jhesketh> Yep, so probably we just need to get an indication if end of Feb in ft Collins is a good idea or if there are any strong objections
19:04:50 <pleia2> anteaya: thanks for checking
19:04:52 <anteaya> last week of feb works for me
19:05:03 <anteaya> good idea
19:05:07 <fungi> i'm strongly in favor
19:05:16 <jhesketh> +1
19:05:20 <jeblair> also skiing
19:05:30 <fungi> heh
19:05:36 <fungi> ttx will want to come!
19:05:38 <clarkb> works for me, does anyone know if flying to FNL is doable or is DEN the best bet? (probably getting ahead of myself with that question)
19:05:48 <pleia2> greghaynes, crinkle, does this work for you?
19:06:14 <greghaynes> I wouldnt be able to make feb, but I also am pretty unlikely to make any of the possible dates
19:06:20 <greghaynes> (baby incoming)
19:06:21 <anteaya> clarkb: Sam-I-Am would know
19:06:29 <pleia2> greghaynes: ah, right
19:06:52 <fungi> so just to confirm, the proposal is for fort collins, colorado, usa, the week of february 22, 2016
19:06:54 <crinkle> if we made it march-ish maybe greghaynes would be more able to pbx in?
19:06:59 <anteaya> he flies small aircraft out of denver
19:07:13 <crinkle> feb works for me
19:07:31 <clarkb> as someone that just babied I highly recommend against trying to do that in march
19:07:35 <jeblair> do they have those telepresence robots at ftc?
19:07:43 <greghaynes> crinkle: possible. I wouldnt plan around me - I will do my best to call in but I cant really promise anything about any dates around then
19:07:49 <anteaya> jeblair: not that I have seen
19:07:51 <nibalizer> o/
19:07:52 <crinkle> greghaynes: mmk
19:07:55 <anteaya> jeblair: just speaker phones
19:07:58 <rcarrillocruz> wfm end of Feb
19:08:07 <fungi> march is also starting to close in on cycle end and summit prep, which is why we were thinking earlier
19:08:13 <jeblair> we'll get a little wagon to pull a speakerphone around on.
19:08:35 <fungi> also, how many and which days should we be considering for this?
19:08:52 <pleia2> thinking 3-5, but I don't know which in that range is best
19:08:56 <anteaya> I think we have said in the past optimum time for a mid-cycle is 3 days
19:08:57 <clarkb> early week is best for me
19:09:03 <pleia2> by day 5 I get very useless and tired
19:09:08 <anteaya> folks get tired by end of day 3
19:09:12 <jhesketh> I'd say min of 3 to make it worth the trip
19:09:12 <pleia2> anteaya: ++
19:09:15 <pleia2> jhesketh: yeah
19:09:17 <fungi> monday through wednesday? with thursday as an option?
19:09:28 <rcarrillocruz> erm
19:09:28 <rcarrillocruz> yeah
19:09:31 <jeblair> we should be thinking of this as dedicated single-topic workdays
19:09:31 <rcarrillocruz> less than 3
19:09:34 <jeblair> so hopefully not as exhausting
19:09:35 <zaro> o/
19:09:37 <rcarrillocruz> would be killing for non-US people
19:09:42 <clarkb> (it is easier for us to get help with babies early week due to family schedules)
19:09:54 <jhesketh> Attendance for the whole thing is also clearly optional. So we could aim for 4 and have a light schedule
19:09:58 <pabelanger> 4 days works here
19:10:03 <greghaynes> actually, SpamapS ^
19:10:04 <fungi> jeblair: very good point, we'll have to make sure we plan the topic->day mapping to make it a little less exhausting
19:10:20 <greghaynes> SpamapS: This is relevant to your interests
19:10:21 <jeblair> 4 with understanding that some may have to leave after 3 sounds like it might work
19:10:25 <rcarrillocruz> yeah, 4 seems like a good sweet spot to me
19:10:34 <pleia2> jeblair: agreed
19:10:43 <anteaya> monday-wednesday with optional thursday is my vote
19:10:50 <rcarrillocruz> ^ ++
19:10:59 <jhesketh> Or even people start rolling in late/finishing early etc throughout the week
19:11:07 <jeblair> (and skiing on friday)
19:11:09 <jhesketh> As they tire
19:11:21 <anteaya> jeblair: has to be
19:11:39 <krotscheck> o/
19:12:06 <jhesketh> Okay so I can take an action to work with pleia2 on the logistics and announcing etc
19:12:08 <fungi> we've got one more minute budgeted for this topic. are we at consensus or do we need to flesh out details on the infra ml?
19:12:21 <fungi> thanks jhesketh!
19:12:31 <anteaya> I'm feeling heard on this topic
19:12:39 <jhesketh> Assuming pleia2 is happy to help with the office side
19:12:49 <jeblair> skiing!
19:12:51 <jeblair> i'm done
19:12:53 <fungi> #action jhesketh finalize omfra-cloud sprint planning details on the infra ml
19:13:00 <jhesketh> Heh :-)
19:13:05 <anteaya> omfra-cloud
19:13:06 <greghaynes> I like omfra-cloud, makes it sound epic
19:13:08 <fungi> #undo
19:13:08 <openstack> Removing item from minutes: <ircmeeting.items.Action object at 0x9736910>
19:13:10 <rcarrillocruz> heh
19:13:11 <crinkle> i am in favor of calling it omfra-cloud
19:13:11 <anteaya> loving the new words
19:13:13 <fungi> #action jhesketh finalize infra-cloud sprint planning details on the infra ml
19:13:14 <jeblair> oomfra loompahs?
19:13:18 <krotscheck> ommmmmmmmmfra cloud?
19:13:20 * fungi is a terrible typist
19:13:24 <jhesketh> Aww
19:13:26 <anteaya> good typos
19:13:30 <fungi> #topic Priority Efforts: Gerrit 2.11 Upgrade [timebox 10 minutes, until 19:23]
19:13:36 <fungi> zaro: are we still on track for next week? looks like there's still quite a few open changes needing reviews...
19:13:44 <fungi> #link https://review.openstack.org/#/q/status:open+topic:gerrit-upgrade,n,z
19:13:52 <zaro> yes, just need reviews.
19:14:21 <zaro> i cherry-picked what i thought were the most important fixes for 2.11 onto our 2.11.4 branch.
19:15:33 <mordred> o/
19:15:40 <clarkb> did we end up deciding on what fix we would use for the openid redirects? notmorgan's proxypass vhost change?
19:15:46 <anteaya> yes
19:15:59 <zaro> i believe notmorgan is the best solution
19:16:02 <anteaya> I believe that is what is hand configured on review-dev now
19:16:04 <notmorgan> \o/
19:16:22 <zaro> anteaya is correct
19:16:24 <jeblair> oh cool, notmorgan you were able to do the thing you hoped you would be able to do that fixes it in apache without breaking the initial redirect?
19:16:32 <notmorgan> Yep!
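As a purely illustrative sketch of the kind of Apache vhost ProxyPass approach being discussed here -- this is not the actual change under review; the hostname, port, and choice of directives are assumptions:

```apache
# hypothetical sketch, not the reviewed change -- all values are placeholders
<VirtualHost *:443>
    ServerName review.example.org
    # pass encoded slashes in OpenID return URLs through to Gerrit
    # rather than letting Apache decode or re-canonicalize them first
    AllowEncodedSlashes NoDecode
    ProxyPass / http://127.0.0.1:8081/ nocanon retry=0
    ProxyPassReverse / http://127.0.0.1:8081/
</VirtualHost>
```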
19:16:34 <fungi> zaro: i also noted that we've still got some pending cleanup for the "akanada" typo from the last rename maintenance. i'm worried our cruft projects database cleanup step to fix indexing will end up misinterpreting that as a missing project, so i assume we should fix it at the start of the maintenance?
19:17:05 <fungi> mainly want to make sure that ends up as part of the maintenance plan if so
19:17:12 * jeblair hands notmorgan a case of booze he filched from mordred
19:17:14 <zaro> fungi: i can look into that if you can provide another dump of the db.
19:17:31 <fungi> zaro: get up with me after the meeting and i absolutely will--thanks!
19:17:34 * mordred warns notmorgan it's not booze in there, but apple juice he used to make it look like he drinks less
19:18:19 <notmorgan> mordred: ahh. I see.
19:18:26 <krotscheck> "apple juice"
19:18:27 * krotscheck is skeptical
19:18:41 <mordred> krotscheck: ++
19:18:54 <clarkb> so its mostly just reviews then?
19:18:55 <jeblair> fungi: i agree we should attempt to not screw that up.  :)
19:18:58 * clarkb makes note to do reviews
19:19:02 <anteaya> clarkb: yes
19:19:12 <zaro> clarkb: correct
19:19:44 <fungi> #link https://etherpad.openstack.org/p/mitaka-infra-gerritdevelopment
19:19:54 <fungi> #link https://etherpad.openstack.org/p/test-gerrit-2.11
19:20:02 <fungi> #link https://etherpad.openstack.org/p/gerrit-2.11-upgrade
19:20:16 <fungi> (the planning bits)
19:21:11 <fungi> we have 2 more minutes for this. anything else we need to cover or remind in preparation for next week?
19:21:21 <anteaya> I'm happy
19:21:25 <fungi> do we need a reminder notice to the ml closer to the window?
19:21:35 <anteaya> can't hurt
19:21:36 <fungi> like at the beginning of the week?
19:21:39 <notmorgan> fungi: reminder never hurts
19:21:53 <zaro> i have nothing else.
19:22:00 <fungi> since it's in the middle of a wednesday and all, there are likely those who will be taken by surprise
19:22:13 <jeblair> yeah, also, things seem _busy_ now.
19:22:13 <AJaeger> can we get it into the weekly newsletter?
19:22:32 <fungi> AJaeger: we should get thingee to add it to his dev digest, yes
19:22:46 <fungi> nibalizer: do you mind following up on your maintenance notice with a reminder on mondayish too?
19:23:04 <nibalizer> can do
19:23:25 <fungi> #action fungi get gerrit maintenance included in thingee's dev digest
19:23:35 <fungi> #action nibalizer send follow-up gerrit maintenance reminder
19:23:46 <fungi> #topic Priority Efforts: Infra-cloud [timebox 10 minutes, until 19:33]
19:23:50 <crinkle> hi
19:23:52 <fungi> oof, a lot of stuff here on the agenda. crinkle, can you run through this real quick?
19:23:57 <crinkle> yes
19:23:59 <mordred> omfra-cloud!
19:24:04 <fungi> ;)
19:24:06 <crinkle> there is a small cloud up that rooters can log into and poke at
19:24:15 <mordred> \o/
19:24:23 <crinkle> I would like help reviewing topic:infra-cloud and I need rooters to help get DNS and hiera stuff set up
19:24:24 <anteaya> yay
19:24:34 <crinkle> I had some discussion items but I can bring those up after the meeting
19:24:38 <crinkle> fungi: done
19:24:42 <jeblair> can we point a nodepool at it?
19:24:47 <fungi> crinkle: wow, fast!
19:24:49 <jeblair> just to exercise it?
19:24:59 <fungi> i'm happy to volunteer to do the dns bits, unless someone else wants that
19:25:06 <crinkle> jeblair: we'd need a user and possibly sec groups and stuff but yes
19:25:06 <nibalizer> oo cloud
19:25:25 <crinkle> also nibalizer had a policy issue that i haven't looked at yet
19:25:27 <clarkb> ++ to pointing nodepool at it
19:25:27 <mordred> would anyone mind me creating two non-admin users on it for Shrews and I to do functional shade testing against it?
19:25:39 <crinkle> mordred: go for it
19:25:44 <jeblair> can anyone else point nodepool at it?
19:25:55 <jeblair> i'm trying to keep my plate clear for zuulv3 for a little bit
19:26:06 <nibalizer> im interested in learning how to do that
19:26:07 <jeblair> (doesn't have to be prod nodepool, can be a private nodepool)
19:26:15 * mordred can help nibalizer
19:26:24 <nibalizer> sweet
19:26:30 <jeblair> cool, i'll be backup help
19:26:43 <pabelanger> I can help if needed
19:26:53 <asselin_> I have a private nodepool as well we can use. I can also help.
19:26:58 <mordred> crinkle: so does that mean that the stack of infra-cloud patches should land now?
19:27:13 <fungi> nodepool people are coming out of the woodwork--glad i didn't volunteer! ;)
19:27:19 <crinkle> mordred: there are a couple of blockers there
19:27:21 <mordred> ooh. neat. nibalizer you might get higher bandwidth from asselin_
19:27:27 <crinkle> but at least getting feedback on them would be good
19:27:31 <mordred> ++
19:27:41 <nibalizer> cool
19:27:56 <clarkb> I am also happy to help
19:28:06 <clarkb> it's dead simple to make a nodepool and point it at a cloud
19:28:17 <nibalizer> crinkle: you'll create users for me or I'll be doing that myself?
19:28:19 <clarkb> (devstack plugin in tree should be a good example of how that looks step by step)
19:28:33 <crinkle> nibalizer: you do it and if you have the same issue i'll help
19:28:42 <nibalizer> kk
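As a reference point for the nodepool discussion above, a minimal provider config sketch -- every value here (endpoint, credentials, image, and the field names themselves, which only approximate the nodepool configuration format of that era) is an illustrative assumption, not real infra-cloud data:

```yaml
# illustrative sketch only -- no real credentials or endpoints
gearman-servers:
  - host: localhost
providers:
  - name: infra-cloud-west
    auth-url: https://keystone.example.org:5000/v2.0
    username: nodepool
    password: secret
    project-name: nodepool
    max-servers: 2          # keep small while exercising the new cloud
    images:
      - name: ubuntu-trusty
        min-ram: 8192
```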
19:29:08 <Clint> o/
19:29:21 <mordred> crinkle: where are the creds to log in to the cloud found?
19:29:43 <nibalizer> mordred: we must make our users
19:29:45 <fungi> our ssh keys are installed on the bastion
19:29:45 <crinkle> mordred: the IP is 15.184.52.4 and your rooter key is on it, and the cloud credentials are in /root/adminrc
19:29:51 <mordred> thanks
19:29:56 <mordred> /root/adminrc was what I was looking for
19:30:00 <fungi> beyond that, it's a self-service pump
19:30:05 <mordred> yup
19:30:07 * mordred can pump
19:31:23 <fungi> what name did we end up giving this first region?
19:31:35 <mordred> RegionOne
19:31:46 <fungi> swell--just like bluebox!
19:31:54 * notmorgan is reminded that he needs to look over those configs.
19:32:01 <greghaynes> we put our most creative minds to work on coming up with that name
19:32:06 <rcarrillocruz> heh
19:32:12 <crinkle> heh that's just default, we can change it
19:32:23 <fungi> so the region rcarrillocruz and yolanda are hacking on is RegionTwo?
19:32:24 <jeblair> we'll fix that with the mirror rename... but we should also rename it i think
19:32:40 <mordred> how about hpuswest - to match what's in the hostname
19:32:40 <jeblair> didn't we say we could start with vanilla?
19:32:42 <rcarrillocruz> yeah, east in HP naming
19:32:49 <crinkle> ya it should be hpuswest
19:32:51 <clarkb> or regionb
19:32:53 <clarkb> :)
19:32:56 <rcarrillocruz> i'd rather call it hpuswest / hpuseast yeah
19:33:00 <mordred> clarkb: _that_ would be confusing :)
19:33:07 <crinkle> lol
19:33:14 <rcarrillocruz> indeed
19:33:15 <rcarrillocruz> :-)
19:33:23 <mordred> oh!
19:33:27 <mordred> there is a service running we don't need
19:33:33 <fungi> #info Our initial infra-cloud region is accessible to Infra root admins, and the next phase of acceptance testing will be exercising via nodepool and glean.
19:33:33 * mordred learns how to remove things with the puppet ...
19:33:36 <AJaeger> hpeuswest ?
19:33:45 <mordred> 2. there are 2 services we do not need
19:33:58 <clarkb> mordred: which services?
19:34:08 <mordred> crinkle: after the meeting, I would like to learn enough about the puppet to learn how to remove "ec2" and "computev3"
19:34:22 <fungi> there's some time budgeted for open discussion at the end of the meeting too if we need
19:34:24 * mordred knows he can delete them from the catalog in keystone
19:34:32 <fungi> #topic Priority Efforts: Store Build Logs in Swift [timebox 1 minute, until 19:34]
19:34:34 <mordred> but wants to use this to learn more about our setup
19:34:39 <fungi> looks like this is primarily a request from jhesketh to review a blocking change, so for the sake of time i'll link it here and skip ahead
19:34:44 <fungi> #link https://review.openstack.org/254718
19:34:53 <fungi> was there anything else urgent with that, jhesketh?
19:35:10 <jhesketh> Yep no need to waste more time, just getting some eyes would be good :-)
19:35:14 <fungi> #topic Priority Efforts: maniphest migration [timebox 1 minute, until 19:35]
19:35:19 <fungi> similarly, a few blocking reviews for this which ruagair wants to highlight
19:35:25 <fungi> #link https://review.openstack.org/#/q/status:open+topic:maniphest,n,z
19:35:27 <ruagair> Yes.
19:35:30 <ruagair> Morning.
19:35:55 <ruagair> airporting so I'll be interrupted.
19:36:16 <fungi> ruagair: i didn't budget additional time for this topic, mostly just reminding people to review those
19:36:21 <notmorgan> mordred: ++
19:36:24 <fungi> hope that's okay
19:36:34 <ruagair> Yes.
19:36:36 <fungi> #topic Priority Efforts: Zuul v3 [timebox 5 minutes, until 19:40]
19:36:40 <fungi> jeblair: you had a couple of (hopefully quick) items here
19:36:43 <jeblair> i have created the feature/zuulv3 branch on zuul and i will do the same for nodepool after the builder changes land
19:36:43 <jeblair> since we're on a branch, i'd like to take the approach of rapidly sketching out the basic work on zuulv3 by focusing on the simple case and breaking everything else -- skipping tests, etc.
19:36:43 <jeblair> but not *removing* tests -- after the basics are there, we can work on getting it back into shape.  this way we can see the whole thing take shape and find any big design flaws early
19:36:43 <jeblair> (basically, facebook it at first and then knuth it later)
19:36:43 <jeblair> i plan to focus heavily on this and will be less available for interrupt-driven work for a while
19:36:44 <jeblair> if others can pick up any new-provider work that pops up, that would be great (though there isn't much of that right now)
19:36:44 <jeblair> [end of pastebomb]
19:37:15 <mordred> ++
19:37:17 <fungi> hah on facebook-to-knuth
19:37:27 <clarkb> my only concern is that it's really hard to review nodepool and zuul changes without tests
19:37:47 <clarkb> I worry that we might not understand the general shape if we don't have something there to point out where it isn't working yet and where it is working
19:38:09 <pabelanger> I actually plan on running zuulv3 locally as soon as possible. So, I don't mind providing feedback from a test POV
19:38:10 <fungi> maybe a zuul-dev continuous deployment from that feature branch?
19:38:11 <jeblair> clarkb: in my first change, i have one test working.
19:38:30 <clarkb> or run the tests non voting so reviewres can at least look at the results
19:38:33 <jeblair> that's enough for me to see the general approach
19:38:35 <clarkb> (instead of skipping them entirely)
19:38:37 <jeblair> clarkb: sure, they'll break and timeout
19:38:59 <jeblair> (i'm talking tests, not jobs)
19:39:29 * SpamapS arrives late after double-booked call
19:39:42 <jeblair> my hope is by using this approach, we can avoid giant patches
19:40:08 <greghaynes> The skipping test - that is just to land things in the zuulv3 branch?
19:40:14 <fungi> and lots more tests toward the end?
19:40:19 <jeblair> lots of small patches that are easier to work with, even though they may not work in all cases, but then followup patches that will flesh things out more and fix more cases
19:40:31 <jeblair> greghaynes: yes, only in the v3 branch
19:40:34 <clarkb> I just know that experience has said the hurry up and test later process doesn't work that well
19:40:49 <clarkb> that doesn't necessarily mean we can't do better this time around, but I am concerned about it
19:40:52 <jeblair> fungi: yes, certainly we would not merge the feature branch until it is robust
19:41:21 <jeblair> clarkb: the other approach does not work for us for large changes
19:41:36 <clarkb> I think we can do small changes and have tests
19:41:46 <jeblair> we have spent 6-8 months on nodepool and zuul changes that are *much* smaller in scope than this
19:41:48 <greghaynes> one other thing that I have noticed while doing the nodepool builders is that bugfixes which conflict with the patch series are *extremely* easy to accidentally regress on (you fix the merge conflict but dont copy the fix in to your copied out code)
19:41:53 <mordred> yeah - I agree with jeblair
19:41:57 <clarkb> we may not have every test working but if we add in tests specific to zuulv3 we can see they work and watch the existing tests converge to working
19:42:01 <mordred> this isn't intended to be incrementally working
19:42:06 <greghaynes> so one suggestion I would have is to also make sure any zuul bugfixes have tests for that fix while we work on zuulv3
19:42:14 <clarkb> jeblair: yes and the problem with nodepool has been we have had to backhaul tests in that did not exist
19:42:24 <clarkb> the large changes have trouble because the safety net is missing
19:42:31 <mordred> the whole point of the v3 work was so that we can clean slate it
19:42:38 <jeblair> clarkb: yes, i plan on focusing only on tests that immediately exercise the code being written
19:42:42 <clarkb> yup I am fine with tests failing and clean slating
19:42:51 <mordred> cool
19:42:55 <clarkb> I am just saying we should continue to run the tests
19:42:59 <clarkb> not skip them
19:43:14 <greghaynes> ah, if it is that big of a code removal then the bugfix thing is not as relevant
19:43:24 <jeblair> clarkb: that does make it harder to merge
19:43:48 <clarkb> jeblair: but only when the change in question breaks tests that shouldn't break?
19:44:22 <jeblair> clarkb: i want to break all the tests
19:44:26 <jeblair> except one
19:44:31 <fungi> i guess it's a question of how closely tied the current tests are to zuul v2 internals and design
19:44:43 <fungi> v3 isn't aiming to be backward-compatible
19:44:44 <jeblair> i want "test_jobs_launched" to work
19:44:46 <fungi> afaik
19:44:49 <clarkb> in particular with nodepool we have done a lot of work recently to add tests in and fix bugs
19:44:59 <clarkb> but because the code is already merged there is less interest in reviewing and getting that code in
19:45:11 <jeblair> i don't care about anything else right now... later on, i want to make sure each test either works or is altered or removed as appropriate for the new design
19:45:12 <mordred> right. I think this is different than that
19:45:23 <clarkb> mordred: we are saying upfront don't write tests till the end
19:45:28 <jeblair> clarkb: oh no
19:45:32 <clarkb> but at that point if everything is merged why will anyone care?
19:45:38 <krotscheck> jeblair: Do you need help making the web side of zuulv3 pretty?
19:45:38 <jeblair> clarkb: we should write tests as we go
19:45:51 <clarkb> jeblair: ok I misunderstood then I thought you were saying no tests
19:45:57 <clarkb> jeblair: until some undetermined point in the future
19:46:12 <krotscheck> s/need/want/
19:46:12 <clarkb> (which we know doesn't work well)
19:46:17 <jeblair> clarkb: i'm saying zuul has 200 tests i don't care about right now
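The approach jeblair describes -- keep a single smoke test green on the feature branch and explicitly skip the rest until the new design settles -- can be sketched in plain Python unittest terms (illustrative only, not actual Zuul code; all names here are made up):

```python
import unittest

class SchedulerTests(unittest.TestCase):
    """Illustrative sketch of the feature-branch testing strategy."""

    def test_jobs_launched(self):
        # the one test kept working while the basics take shape
        launched = ["check", "gate"]
        self.assertEqual(len(launched), 2)

    @unittest.skip("zuulv3: re-enable once the new scheduler settles")
    def test_legacy_reporting(self):
        # deliberately skipped, not removed, so it can be revived later
        self.fail("unreachable while the skip decorator is in place")
```

Skipped tests still show up in the runner output, which keeps the eventual "get it back into shape" work visible without blocking small patches from merging.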
19:46:56 <fungi> cool, contention clarified. i let the discussion run longer than budgeted to make sure that was settled
19:47:07 <jeblair> fungi: ack
19:47:08 <fungi> #topic Mirror efforts (krotscheck, greghaynes) [timebox 5 minutes, until 19:52]
19:47:14 <krotscheck> Spec: https://review.openstack.org/#/c/252678/
19:47:15 <krotscheck> First patch in chain: https://review.openstack.org/#/c/253236/
19:47:15 <krotscheck> All the patches: https://review.openstack.org/#/q/status:open+branch:master+topic:unified_mirror,n,z
19:47:16 <krotscheck> [eopb]
19:47:36 <fungi> mordred: you were working with greghaynes on the last steps of the pypi mirror builder deployment?
19:47:50 <fungi> rather, pypi wheel builder
19:47:57 <greghaynes> fungi: We had to rework our plan after some issues were discovered in the patches
19:48:07 <mordred> yah
19:48:16 * mordred is awaiting further instructions from krotscheck and greghaynes
19:48:27 <krotscheck> Right, so the whole thing is now a unified mirror effort.
19:48:38 <greghaynes> Yep, step1 is now getting buy in on the spec
19:48:39 <krotscheck> All patches are dependent on the spec merging.
19:49:03 <krotscheck> The big infra-root effort starts when https://review.openstack.org/#/c/238754/ lands.
19:49:21 <krotscheck> So, go forth and review :)
19:49:39 <fungi> #link https://review.openstack.org/238754
19:49:58 <jeblair> i am not sure we can accommodate that amount of space everywhere
19:50:27 <fungi> so we're delaying the wheel mirror deployment based on needing further design work and a new spec?
19:51:20 <jeblair> the spec looks good and i think we can still proceed
19:51:35 <krotscheck> fungi: Yeah, there were a couple of things pointed out in the wheel work that made it not work so well.
19:51:38 <fungi> #link https://review.openstack.org/252678
19:51:41 <greghaynes> fungi: Yes - what came up is we shouldnt be spinning up new nodes for wheel mirrors given that we have a goal of mirrors using a single host
19:52:09 <fungi> well, we were spinning up new job workers to do the on-demand wheel building
19:52:22 <fungi> (per platform)
19:52:36 <clarkb> jeblair: if we end up not having that space maybe we should look into some version of global load balancing
19:52:36 <fungi> but i'll look over the spec
19:52:43 <clarkb> and use the nodes where we do have the resources as the backends
19:53:05 <jeblair> clarkb: or distributed filesystems or caching http proxies
19:53:24 <fungi> #topic Infra "holiday party" knowledge transfer virtual sprint (anteaya) [timebox 5 minutes, until 19:58]
19:53:27 <jeblair> clarkb: but we should probably proceed with this plan first
19:53:30 <krotscheck> jeblair: Maybe ^^ makes the most sense. Basically an infra mirror CDN
19:53:33 <clarkb> jeblair: agreed
19:53:45 <anteaya> #link https://etherpad.openstack.org/p/infra-holiday-party-2015
19:54:03 <anteaya> so we don't get to see each other in person to celebrate the holiday
19:54:10 <anteaya> so I thought we could try something online
19:54:16 <fungi> <insert preferred holiday>
19:54:29 <anteaya> you aren't obliged if this isn't your thing
19:54:31 <pleia2> that's sweet :)
19:54:40 <anteaya> but I thought we could try something
19:54:43 <pabelanger> fungi: festivus for the rest of us
19:54:46 <fungi> i thought it was a neat idea
19:54:48 <clarkb> I will be out all day on the 21st, have babysitter and star wars tickets
19:54:50 <anteaya> the etherpad offers my thoughts
19:54:55 <clarkb> no spoilers!
19:54:55 <anteaya> you are welcome to add yours
19:55:03 <anteaya> I suggested 3 days
19:55:17 <fungi> currently all of the listed days work for my schedule
19:55:18 <jeblair> clarkb: darth vader is luke's father
19:55:19 <pleia2> all work for me (my tickets are for the 17th)
19:55:23 <anteaya> please vote on your preference and include reasons you will be out a certain day if you will
19:55:38 <anteaya> clarkb: okay star wars counts
19:55:40 <fungi> i'm not expecting to disappear until the 23rd, since i have a lot of ground to cover in a car over the subsequent 5 days
19:56:07 <anteaya> that was all for meeting time, I think the rest can take place on the etherpad
19:56:10 <anteaya> thank you
19:56:12 <fungi> so i'll be not at all around 23-27
19:56:17 * krotscheck will be sipping mai tais during all those days, which is sortof like a holiday.
19:56:26 * krotscheck does not plan on being online :)
19:56:31 <fungi> krotscheck: that's my idea of a holiday anyway
19:56:38 <jeblair> krotscheck: that's not a normal day? :)
19:56:47 <fungi> jeblair: normal day is just shots
19:56:54 <krotscheck> jeblair: Normal day involves internet :D
19:57:10 <fungi> paper drink umbrellas are for special occasions
19:57:15 * jeblair tops up
19:58:02 <pleia2> I'll be around through the holidays
19:58:10 <pleia2> for some reason no one schedules conferences then
19:58:24 <fungi> so the one objection was for the 21st. we can go with either the 18th or the 22nd?
19:58:25 <anteaya> I think it is the perfect time
19:58:29 <anteaya> few tourists
19:58:33 <clarkb> fungi: +1 18th
19:58:39 <pleia2> I like the 18th
19:58:45 <fungi> i'd lean toward the 18th. we can celebrate the (successful!) gerrit upgrade
19:58:46 <anteaya> either is fine
19:58:53 <pleia2> and celebrate star wars
19:59:02 <anteaya> happy to lean toward the 18th
19:59:19 <zaro> i'm ok with any
19:59:24 <fungi> #action anteaya plan an infra virtual sprint for knowledge transfer and holiday festivity, friday, december 18th
19:59:30 <pleia2> thanks anteaya!
19:59:31 <anteaya> 00:00 utc on the 18th until 23:59 utc?
19:59:36 <anteaya> can do
19:59:39 <anteaya> welcome
19:59:41 <fungi> #topic Open discussion [timebox 30 seconds, until 20:00]
19:59:45 <anteaya> other activity suggestions welcome
19:59:53 <fungi> heh--i promised open discussion and i delivered!
20:00:02 <Zara> :)
20:00:11 <fungi> and now we're out of time--thanks everyone!
20:00:13 <fungi> #endmeeting