19:00:25 #startmeeting infra
19:00:25 Meeting started Tue Dec 8 19:00:25 2015 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:29 The meeting name has been set to 'infra'
19:00:30 \o/
19:00:34 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:00:38 we have a _very_ full agenda so i'm going to timebox the topics and rearrange the order a little to make sure we hit scheduling-critical discussions
19:00:41 o/
19:00:42 if i cut you off in the middle of discussing something, please don't take offense and make a note to continue with it on the mailing list or in the infra channel after we're done
19:00:47 now, on with the show!
19:00:58 #topic Announcements [timebox 1 minute, until 19:01]
19:01:07 #info Reminder: Gerrit 2.11 upgrade is now scheduled for Wednesday of next week, December 16, 17:00 UTC.
19:01:15 #link http://lists.openstack.org/pipermail/openstack-dev/2015-December/081037.html
19:01:23 o/
19:01:23 #topic Actions from last meeting [timebox 1 minute, until 19:02]
19:01:27 \o_
19:01:28 o/
19:01:29 hello
19:01:32 #link http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-12-01-19.01.html
19:01:37 o/
19:01:40 nibalizer: send announcement about rescheduled gerrit upgrade maintenance
19:01:48 completed, see above
19:02:02 #topic Specs approval [timebox 1 minute, until 19:03]
19:02:11 PROPOSED: Complete the reviewable release automation work (dhellmann)
19:02:16 #link https://review.openstack.org/245907
19:02:20 o/
19:02:23 #info Voting is open on the "Complete the reviewable release automation work" spec until 19:00 UTC Thursday, December 10.
19:02:28 I/
19:02:51 fungi : I'm not sure what the process is, but if you want to discuss it in the meeting I'm here for that
19:03:00 otherwise comments on the spec work, too
19:03:06 dhellmann: if we weren't so short on time, yes
19:03:12 but in this case, comments on the review please
19:03:15 fungi : understood
19:03:18 #topic Mid-cycle sprint for infra-cloud in Ft. Collins proposal (pleia2, jhesketh) [timebox 10 minutes, until 19:13]
19:03:24 during the mitaka priorities discussion in tokyo, it was pointed out that infra-cloud is one of the biggest efforts we've undertaken as a team, and that organizing a mid-cycle sprint around it would be useful
19:03:31 the timing is convenient since it's just after hpcloud sunset and we've got the possibility of inheriting a fair amount of replacement hardware/upgrades from that
19:03:37 pleia2 has done an awesome job (along with wendar and purp) of negotiating access for contributors to tour the facility where our west region of infra-cloud is housed, so this makes hp's fort collins site an excellent location for a sprint
19:03:44 late february, say the last week in february, would probably work out best?
it doesn't conflict with lca and so far i don't see any february sprints scheduled in the wiki
19:03:52 #link https://wiki.openstack.org/wiki/Sprints#Mitaka_sprints
19:03:57 jhesketh has offered to take point coordinating logistics for this
19:03:57 ++
19:04:43 last week of feb also misses ansible fest dates according to the dates mordred posted in a prior meeting
19:04:48 Yep, so probably we just need to get an indication if end of Feb in ft Collins is a good idea or if there are any strong objections
19:04:50 anteaya: thanks for checking
19:04:52 last week of feb works for me
19:05:03 good idea
19:05:07 i'm strongly in favor
19:05:16 +1
19:05:20 also skiing
19:05:30 heh
19:05:36 ttx will want to come!
19:05:38 works for me, does anyone know if flying to FNL is doable or is DEN the best bet? (probably getting ahead of myself with that question)
19:05:48 greghaynes, crinkle, does this work for you?
19:06:14 I wouldnt be able to make feb, but I also am pretty unlikely to make any of the possible dates
19:06:20 (baby incoming)
19:06:21 clarkb: Sam-I-Am would know
19:06:29 greghaynes: ah, right
19:06:52 so just to confirm, the proposal is for fort collins, colorado, usa, the week of february 22, 2016
19:06:54 if we made it march-ish maybe greghaynes would be more able to pbx in?
19:06:59 he flies small aircraft out of denver
19:07:13 feb works for me
19:07:31 as someone that just babied I highly recommend against trying to do that in march
19:07:35 do they have those telepresence robots at ftc?
19:07:43 crinkle: possible. I wouldnt plan around me - I will do my best to call in but I cant really promise anything about any dates around then
19:07:49 jeblair: not that I have seen
19:07:51 o/
19:07:52 greghaynes: mmk
19:07:55 jeblair: just speaker phones
19:07:58 wfm end of Feb
19:08:07 march is also starting to close in on cycle end and summit prep, which is why we were thinking earlier
19:08:13 we'll get a little wagon to pull a speakerphone around on.
19:08:35 also, how many and which days should we be considering for this?
19:08:52 thinking 3-5, but I don't know which in that range is best
19:08:56 I think we have said in the past optimum time for a mid-cycle is 3 days
19:08:57 early week is best for me
19:09:03 by day 5 I get very useless and tired
19:09:08 folks get tired by end of day 3
19:09:12 I'd say min of 3 to make it worth the trip
19:09:12 anteaya: ++
19:09:15 jhesketh: yeah
19:09:17 monday through wednesday? with thursday as an option?
19:09:28 erm
19:09:28 yeah
19:09:31 we should be thinking of this as dedicated single-topic workdays
19:09:31 less than 3
19:09:34 so hopefully not as exhausting
19:09:35 o/
19:09:37 would be killing for non-US people
19:09:42 (it is easier for us to get help with babies early week due to family schedules)
19:09:54 Attendance for the whole thing is also clearly optional.
So we could aim for 4 and have a light schedule
19:09:58 4 days works here
19:10:03 actually, SpamapS ^
19:10:04 jeblair: very good point, we'll have to make sure we plan the topic->day mapping to make it a little less exhausting
19:10:20 SpamapS: This is relevant to your interests
19:10:21 4 with understanding that some may have to leave after 3 sounds like it might work
19:10:25 yeah, 4 seems like a good sweet spot to me
19:10:34 jeblair: agreed
19:10:43 monday-wednesday with optional thursday is my vote
19:10:50 ^ ++
19:10:59 Or even people start rolling in late/finishing early etc throughout the week
19:11:07 (and skiing on friday)
19:11:09 As they tire
19:11:21 jeblair: has to be
19:11:39 o/
19:12:06 Okay so I can take an action to work with pleia2 on the logistics and announcing etc
19:12:08 we've got one more minute budgeted for this topic. are we at consensus or do we need to flesh out details on the infra ml?
19:12:21 thanks jhesketh!
19:12:31 I'm feeling heard on this topic
19:12:39 Assuming pleia2 is happy to help with the office side
19:12:49 skiing!
19:12:51 i'm done
19:12:53 #action jhesketh finalize omfra-cloud sprint planning details on the infra ml
19:13:00 Heh :-)
19:13:05 omfra-cloud
19:13:06 I like omfra-cloud, makes it sound epic
19:13:08 #undo
19:13:08 Removing item from minutes:
19:13:10 heh
19:13:11 i am in favor of calling it omfra-cloud
19:13:11 loving the new words
19:13:13 #action jhesketh finalize infra-cloud sprint planning details on the infra ml
19:13:14 oomfra loompahs?
19:13:18 ommmmmmmmmfra cloud?
19:13:20 * fungi is a terrible typist
19:13:24 Aww
19:13:26 good typos
19:13:30 #topic Priority Efforts: Gerrit 2.11 Upgrade [timebox 10 minutes, until 19:23]
19:13:36 zaro: are we still on track for next week? looks like there's still quite a few open changes needing reviews...
19:13:44 #link https://review.openstack.org/#/q/status:open+topic:gerrit-upgrade,n,z
19:13:52 yes, just need reviews.
19:14:21 i cherry-picked what i thought were the most important fixes for 2.11 onto our 2.11.4 branch.
19:15:33 o/
19:15:40 did we end up deciding on what fix we would use for the openid redirects? notmorgan's proxypass vhost change?
19:15:46 yes
19:15:59 i believe notmorgan's is the best solution
19:16:02 I believe that is what is hand configured on review-dev now
19:16:04 \o/
19:16:22 anteaya is correct
19:16:24 oh cool, notmorgan you were able to do the thing you hoped you would be able to do that fixes it in apache without breaking the initial redirect?
19:16:32 Yep!
19:16:34 zaro: i also noted that we've still got some pending cleanup for the "akanada" typo from the last rename maintenance. i'm worried our cruft projects database cleanup step to fix indexing will end up misinterpreting that as a missing project, so i assume we should fix it at the start of the maintenance?
19:17:05 mainly want to make sure that ends up as part of the maintenance plan if so
19:17:12 * jeblair hands notmorgan a case of booze he filched from mordred
19:17:14 fungi: i can look into that if you can provide another dump of the db.
19:17:31 zaro: get up with me after the meeting and i absolutely will--thanks!
19:17:34 * mordred warns notmorgan it's not booze in there, but apple juice he used to make it look like he drinks less
19:18:19 mordred: ahh. I see.
19:18:26 "apple juice"
19:18:27 * krotscheck is skeptical
19:18:41 krotscheck: ++
19:18:54 so its mostly just reviews then?
19:18:55 fungi: i agree we should attempt to not screw that up.
:)
19:18:58 * clarkb makes note to do reviews
19:19:02 clarkb: yes
19:19:12 clarkb: correct
19:19:44 #link https://etherpad.openstack.org/p/mitaka-infra-gerritdevelopment
19:19:54 #link https://etherpad.openstack.org/p/test-gerrit-2.11
19:20:02 #link https://etherpad.openstack.org/p/gerrit-2.11-upgrade
19:20:16 (the planning bits)
19:21:11 we have 2 more minutes for this. anything else we need to cover or remind in preparation for next week?
19:21:21 I'm happy
19:21:25 do we need a reminder notice to the ml closer to the window?
19:21:35 can't hurt
19:21:36 like at the beginning of the week?
19:21:39 fungi: reminder never hurts
19:21:53 i have nothing else.
19:22:00 since it's in the middle of a wednesday and all, there are likely those who will be taken by surprise
19:22:13 yeah, also, things seem _busy_ now.
19:22:13 can we get it into the weekly newsletter?
19:22:32 AJaeger: we should get thingee to add it to his dev digest, yes
19:22:46 nibalizer: do you mind following up on your maintenance notice with a reminder on mondayish too?
19:23:04 can do
19:23:25 #action fungi get gerrit maintenance included in thingee's dev digest
19:23:35 #action nibalizer send follow-up gerrit maintenance reminder
19:23:46 #topic Priority Efforts: Infra-cloud [timebox 10 minutes, until 19:33]
19:23:50 hi
19:23:52 oof, a lot of stuff here on the agenda. crinkle, can you run through this real quick?
19:23:57 yes
19:23:59 omfra-cloud!
19:24:04 ;)
19:24:06 there is a small cloud up that rooters can log into and poke at
19:24:15 \o/
19:24:23 I would like help reviewing topic:infra-cloud and I need rooters to help get DNS and hiera stuff set up
19:24:24 yay
19:24:34 I had some discussion items but I can bring those up after the meeting
19:24:38 fungi: done
19:24:42 can we point a nodepool at it?
19:24:47 crinkle: wow, fast!
19:24:49 just to exercise it?
19:24:59 i'm happy to volunteer to do the dns bits, unless someone else wants that
19:25:06 jeblair: we'd need a user and possibly sec groups and stuff but yes
19:25:06 oo cloud
19:25:25 also nibalizer had a policy issue that i haven't looked at yet
19:25:27 ++ to pointing nodepool at it
19:25:27 would anyone mind me creating two non-admin users on it for Shrews and I to do functional shade testing against it?
19:25:39 mordred: go for it
19:25:44 can anyone else point nodepool at it?
19:25:55 i'm trying to keep my plate clear for zuulv3 for a little bit
19:26:06 im interested in learning how to do that
19:26:07 (doesn't have to be prod nodepool, can be a private nodepool)
19:26:15 * mordred can help nibalizer
19:26:24 sweet
19:26:30 cool, i'll be backup help
19:26:43 I can help if needed
19:26:53 I have a private nodepool as well we can use. I can also help.
19:26:58 crinkle: so does that mean that the stack of infra-cloud patches should land now?
19:27:13 nodepool people are coming out of the woodwork--glad i didn't volunteer! ;)
19:27:19 mordred: there are a couple of blockers there
19:27:21 ooh. neat. nibalizer you might get higher bandwidth from asselin_
19:27:27 but at least getting feedback on them would be good
19:27:31 ++
19:27:41 cool
19:27:56 I am also happy to help
19:28:06 its dead simple to make nodepool and point at cloud
19:28:17 crinkle: you'll create users for me or I'll be doing that myself?
19:28:19 (devstack plugin in tree should be a good example of how that looks step by step)
19:28:33 nibalizer: you do it and if you have the same issue i'll help
19:28:42 kk
19:29:08 o/
19:29:21 crinkle: where are the creds to log in to the cloud found?
19:29:43 mordred: we must make our users
19:29:45 our ssh keys are installed on the bastion
19:29:45 mordred: the IP is 15.184.52.4 and your rooter key is on it, and the cloud credentials are in /root/adminrc
19:29:51 thanks
19:29:56 /root/adminrc was what I was looking for
19:30:00 beyond that, it's a self-service pump
19:30:05 yup
19:30:07 * mordred can pump
19:31:23 what name did we end up giving this first region?
19:31:35 RegionOne
19:31:46 swell--just like bluebox!
19:31:54 * notmorgan is reminded that he needs to look over those configs.
19:32:01 we put our most creative minds to work on coming up with that name
19:32:06 heh
19:32:12 heh that's just default, we can change it
19:32:23 so the region rcarrillocruz and yolanda are hacking on is RegionTwo?
19:32:24 we'll fix that with the mirror rename... but we should also rename it i think
19:32:40 how about hpuswest - to match what's in the hostname
19:32:40 didn't we say we could start with vanilla?
19:32:42 yeah, east in HP naming
19:32:49 ya it should be hpuswest
19:32:51 or regionb
19:32:53 :)
19:32:56 i'd rather call it hpuswest / hpuseast yeah
19:33:00 clarkb: _that_ would be confusing :)
19:33:07 lol
19:33:14 indeed
19:33:15 :-)
19:33:23 oh!
19:33:27 there is a service running we don't need
19:33:33 #info Our initial infra-cloud region is accessible to Infra root admins, and the next phase of acceptance testing will be exercising via nodepool and glean.
19:33:33 * mordred learns how to remove things with the puppet ...
19:33:36 hpeuswest ?
19:33:45 2. there are 2 services we do not need
19:33:58 mordred: which services?
19:34:08 crinkle: after the meeting, I would like to learn enough about the puppet to learn how to remove "ec2" and "computev3"
19:34:22 there's some time budgeted for open discussion at the end of the meeting too if we need
19:34:24 * mordred knows he can delete them from the catalog in keystone
19:34:32 #topic Priority Efforts: Store Build Logs in Swift [timebox 1 minute, until 19:34]
19:34:34 but wants to use this to learn more about our setup
19:34:39 looks like this is primarily a request from jhesketh to review a blocking change, so for the sake of time i'll link it here and skip ahead
19:34:44 #link https://review.openstack.org/254718
19:34:53 was there anything else urgent with that, jhesketh?
19:35:10 Yep no need to waste more time, just getting some eyes would be good :-)
19:35:14 #topic Priority Efforts: maniphest migration [timebox 1 minute, until 19:35]
19:35:19 similarly, a few blocking reviews for this which ruagair wants to highlight
19:35:25 #link https://review.openstack.org/#/q/status:open+topic:maniphest,n,z
19:35:27 Yes.
19:35:30 Morning.
19:35:55 airporting so I'll be interrupted.
19:36:16 ruagair: i didn't budget additional time for this topic, mostly just reminding people to review those
19:36:21 mordred: ++
19:36:24 hope that's okay
19:36:34 Yes.
19:36:36 #topic Priority Efforts: Zuul v3 [timebox 5 minutes, until 19:40]
19:36:40 jeblair: you had a couple of (hopefully quick) items here
19:36:43 i have created the feature/zuulv3 branch on zuul and i will do the same for nodepool after the builder changes land
19:36:43 since we're on a branch, i'd like to take the approach of rapidly sketching out the basic work on zuulv3 by focusing on the simple case and breaking everything else -- skipping tests, etc.
19:36:43 but not *removing* tests -- after the basics are there, we can work on getting it back into shape.
this way we can see the whole thing take shape and find any big design flaws early
19:36:43 (basically, facebook it at first and then knuth it later)
19:36:43 i plan to focus heavily on this and will be less available for interrupt-driven work for a while
19:36:44 if others can pick up any new-provider work that pops up, that would be great (though there isn't much of that right now)
19:36:44 [end of pastebomb]
19:37:15 ++
19:37:17 hah on facebook-to-knuth
19:37:27 my only concern is that its really hard to review nodepool and zuul changes without tests
19:37:47 I worry that we might not understand the general shape if we don't have something there to point out where it isn't working yet and where it is working
19:38:09 I actually plan on running zuulv3 locally as soon as possible. So, I don't mind providing feedback from a test POV
19:38:10 maybe a zuul-dev continuous deployment from that feature branch?
19:38:11 clarkb: in my first change, i have one test working.
19:38:30 or run the tests non voting so reviewers can at least look at the results
19:38:33 that's enough for me to see the general approach
19:38:35 (instead of skipping them entirely)
19:38:37 clarkb: sure, they'll break and timeout
19:38:59 (i'm talking tests, not jobs)
19:39:29 * SpamapS arrives late after double-booked call
19:39:42 my hope is by using this approach, we can avoid giant patches
19:40:08 The skipping test - that is just to land things in the zuulv3 branch?
19:40:14 and lots more tests toward the end?
19:40:19 lots of small patches that are easier to work with, even though they may not work in all cases, but then followup patches that will flesh things out more and fix more cases
19:40:31 greghaynes: yes, only in the v3 branch
19:40:34 I just know that experience has said the hurry up and test later process doesn't work that well
19:40:49 that doesn't necessarily mean we can't do better this time around, but I am concerned about it
19:40:52 fungi: yes, certainly we would not merge the feature branch until it is robust
19:41:21 clarkb: the other approach does not work for us for large changes
19:41:36 I think we can do small changes and have tests
19:41:46 we have spent 6-8 months on nodepool and zuul changes that are *much* smaller in scope than this
19:41:48 one other thing that I have noticed while doing the nodepool builders is that bugfixes which conflict with the patch series are *extremely* easy to accidentally regress on (you fix the merge conflict but dont copy the fix in to your copied out code)
19:41:53 yeah - I agree with jeblair
19:41:57 we may not have every test working but if we add in tests specific to zuulv3 we can see they work and watch the existing tests converge to working
19:42:01 this isn't intended to be incrementally working
19:42:06 so one suggestion I would have is to also make sure any zuul bugfixes have tests for that fix while we work on zuulv3
19:42:14 jeblair: yes and the problem with nodepool has been we have had to backhaul tests in that did not exist
19:42:24 the large changes have trouble because the safety net is missing
19:42:31 the whole point of the v3 work was so that we can clean slate it
19:42:38 clarkb: yes, i plan on focusing only on tests that immediately exercise the code being written
19:42:42 yup I am fine with tests failing and clean slating
19:42:51 cool
19:42:55 I am just saying we should continue to run the tests
19:42:59 not skip them
19:43:14 ah, if it is that big of a code removal then the bugfix thing is not as relevant
19:43:24 clarkb: that does make it harder to merge
19:43:48 jeblair: but only when the change in question breaks tests that shouldn't break?
19:44:22 clarkb: i want to break all the tests
19:44:26 except one
19:44:31 i guess it's a question of how closely tied the current tests are to zuul v2 internals and design
19:44:43 v3 isn't aiming to be backward-compatible
19:44:44 i want "test_jobs_launched" to work
19:44:46 afaik
19:44:49 in particular with nodepool we have done a lot of work recently to add tests in and fix bugs
19:44:59 but because the code is already merged there is less interest in reviewing and getting that code in
19:45:11 i don't care about anything else right now... later on, i want to make sure each test either works or is altered or removed as appropriate for the new design
19:45:12 right. I think this is different than that
19:45:23 mordred: we are saying upfront don't write tests till the end
19:45:28 clarkb: oh no
19:45:32 but at that point if everything is merged why will anyone care?
19:45:38 jeblair: Do you need help making the web side of zuulv3 pretty?
19:45:38 clarkb: we should write tests as we go
19:45:51 jeblair: ok I misunderstood then I thought you were saying no tests
19:45:57 jeblair: until some undetermined point in the future
19:46:12 s/need/want/
19:46:12 (which we know doesn't work well)
19:46:17 clarkb: i'm saying zuul has 200 tests i don't care about right now
19:46:56 cool, contention clarified. i let the discussion run longer than budgeted to make sure that was settled
19:47:07 fungi: ack
19:47:08 #topic Mirror efforts (krotscheck, greghaynes) [timebox 5 minutes, until 19:52]
19:47:14 Spec: https://review.openstack.org/#/c/252678/
19:47:15 First patch in chain: https://review.openstack.org/#/c/253236/
19:47:15 All the patches: https://review.openstack.org/#/q/status:open+branch:master+topic:unified_mirror,n,z
19:47:16 [eopb]
19:47:36 mordred: you were working with greghaynes on the last steps of the pypi mirror builder deployment?
19:47:50 rather, pypi wheel builder
19:47:57 fungi: We had to rework our plan after some issues were discovered in the patches
19:48:07 yah
19:48:16 * mordred is awaiting further instructions from krotscheck and greghaynes
19:48:27 Right, so the whole thing is now a unified mirror effort.
19:48:38 Yep, step1 is now getting buy in on the spec
19:48:39 All patches are dependent on the spec merging.
19:49:03 The big infra-root effort starts when https://review.openstack.org/#/c/238754/ lands.
19:49:21 So, go forth and review :)
19:49:39 #link https://review.openstack.org/238754
19:49:58 i am not sure we can accommodate that amount of space everywhere
19:50:27 so we're delaying the wheel mirror deployment based on needing further design work and a new spec?
19:51:20 the spec looks good and i think we can still proceed
19:51:35 fungi: Yeah, there were a couple of things pointed out in the wheel work that made it not work so well.
19:51:38 #link https://review.openstack.org/252678
19:51:41 fungi: Yes - what came up is we shouldnt be spinning up new nodes for wheel mirrors given that we have a goal of mirrors using a single host
19:52:09 well, we were spinning up new job workers to do the on-demand wheel building
19:52:22 (per platform)
19:52:36 jeblair: if we end up not having that space maybe we should look into some version of global load balancing
19:52:36 but i'll look over the spec
19:52:43 and use the nodes where we do have the resources as the backends
19:53:05 clarkb: or distributed filesystems or caching http proxies
19:53:24 #topic Infra "holiday party" knowledge transfer virtual sprint (anteaya) [timebox 5 minutes, until 19:58]
19:53:27 clarkb: but we should probably proceed with this plan first
19:53:30 jeblair: Maybe ^^ makes the most sense. Basically an infra mirror CDN
19:53:33 jeblair: agreed
19:53:45 #link https://etherpad.openstack.org/p/infra-holiday-party-2015
19:54:03 so we don't get to see each other in person to celebrate the holiday
19:54:10 so I thought we could try something online
19:54:29 you aren't obliged if this isn't your thing
19:54:31 that's sweet :)
19:54:40 but I thought we could try something
19:54:43 fungi: festivus for the rest of us
19:54:46 i thought it was a neat idea
19:54:48 I will be out all day on the 21st, have babysitter and star wars tickets
19:54:50 the etherpad offers my thoughts
19:54:55 no spoilers!
19:54:55 you are welcome to add yours
19:55:03 I suggested 3 days
19:55:17 currently all of the listed days work for my schedule
19:55:18 clarkb: darth vader is luke's father
19:55:19 all work for me (my tickets are for the 17th)
19:55:23 please vote on your preference and include reasons you will be out a certain day if you will
19:55:38 clarkb: okay star wars counts
19:55:40 i'm not expecting to disappear until the 23rd, since i have a lot of ground to cover in a car over the subsequent 5 days
19:56:07 that was all for meeting time, I think the rest can take place on the etherpad
19:56:10 thank you
19:56:12 so i'll be not at all around 23-27
19:56:17 * krotscheck will be sipping mai thai's during all those days, which is sortof like a holiday.
19:56:26 * krotscheck does not plan on being online :)
19:56:31 krotscheck: that's my idea of a holiday anyway
19:56:38 krotscheck: that's not a normal day? :)
19:56:47 jeblair: normal day is just shots
19:56:54 jeblair: Normal day involves internet :D
19:57:10 paper drink umbrellas are for special occasions
19:57:15 * jeblair tops up
19:58:02 I'll be around through the holidays
19:58:10 for some reason no one schedules conferences then
19:58:24 so the one objection was for the 21st. we can go with either the 18th or the 22nd?
19:58:25 I think it is the perfect time
19:58:29 few tourists
19:58:33 fungi: +1 18th
19:58:39 I like the 18th
19:58:45 i'd lean toward the 18th. we can celebrate the (successful!) gerrit upgrade
19:58:46 either is fine
19:58:53 and celebrate star wars
19:59:02 happy to lean toward the 18th
19:59:19 i'm ok with any
19:59:24 #action anteaya plan an infra virtual sprint for knowledge transfer and holiday festivity, friday, december 18th
19:59:30 thanks anteaya!
19:59:31 00:00 utc on the 18th until 23:59 utc?
19:59:36 can do
19:59:39 welcome
19:59:41 #topic Open discussion [timebox 30 seconds, until 20:00]
19:59:45 other activity suggestions welcome
19:59:53 heh--i promised open discussion and i delivered!
20:00:02 :)
20:00:11 and now we're out of time--thanks everyone!
20:00:13 #endmeeting
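
A minimal sketch of the infra-cloud bootstrap steps discussed around 19:29-19:34 above: source the admin credentials, create a non-admin project/user for shade and nodepool testing, and drop the unneeded "ec2" and "computev3" entries from the keystone catalog. The controller address and /root/adminrc come from the log; the user, project, password, and role names, and the assumption that python-openstackclient is available on the controller, are illustrative only. In practice the service removal was to be handled through the puppet modules rather than by hand; the CLI just shows the underlying operation.

```bash
# Run as an infra root on the controller (address and adminrc path from the log).
# All names and the password below are placeholders, not the real values.
ssh root@15.184.52.4

source /root/adminrc                      # admin credentials for the new region

# Non-admin project and user for functional shade / nodepool testing
openstack project create --description 'CI test tenant' ci-test
openstack user create --project ci-test --password 'CHANGEME' ci-test
openstack role add --project ci-test --user ci-test Member   # role name assumed

# Catalog entries the deployment does not need (names taken from the discussion)
openstack service list
openstack service delete ec2
openstack service delete computev3
```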
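For pointing a private nodepool or running shade functional tests at the region (the 19:24-19:28 discussion), an os-client-config clouds.yaml entry along these lines would be a plausible starting point. The region name RegionOne is the default noted in the log (possibly renamed to hpuswest later); the cloud name, keystone URL, and credentials are assumptions and would come from the users created above.

```bash
# Sketch only: "infracloud-west", the auth URL, and the credentials are placeholders.
mkdir -p ~/.config/openstack
cat > ~/.config/openstack/clouds.yaml <<'EOF'
clouds:
  infracloud-west:
    auth:
      auth_url: http://15.184.52.4:5000/v2.0
      username: ci-test
      password: CHANGEME
      project_name: ci-test
    region_name: RegionOne
EOF

# Quick smoke test against the same cloud entry
openstack --os-cloud infracloud-west image list
openstack --os-cloud infracloud-west server list
```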
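And for the Zuul v3 discussion (19:38-19:46), the "one test working" approach could be exercised by running only test_jobs_launched while the rest of the suite is expected to break; the module and class path here are assumed from the v2 test layout and may differ on the feature branch.

```bash
# From a zuul checkout; test path assumed from the v2 tree (tests/test_scheduler.py).
git checkout feature/zuulv3
tox -e py27 -- tests.test_scheduler.TestScheduler.test_jobs_launched
```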