19:02:50 #startmeeting infra
19:02:51 Meeting started Tue Apr 19 19:02:50 2016 UTC and is due to finish in 60 minutes. The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:55 The meeting name has been set to 'infra'
19:02:59 #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:04 #topic Announcements
19:03:11 #info Infra team meeting for April 26 (next week's) is hereby cancelled on account of there's a summit some of us might be attending. Join us again on May 3 at 19:00 UTC for our usual IRC meeting shenanigans.
19:03:28 #topic Actions from last meeting
19:03:34 #link http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-04-12-19.02.html
19:03:39 "1. (none)"
19:03:45 #topic Specs approval
19:03:50 none new this week
19:04:20 #topic Priority Efforts: Infra-cloud (crinkle, yolanda, rcarrillocruz)
19:04:25 hi
19:04:29 hi
19:04:31 i hear there's some water
19:04:32 hi
19:04:40 hello infra cloud?
19:04:41 o/
19:04:42 did they relocate our cloud to a lake?
19:04:48 so we learned this morning that the houston data center was just shut off due to severe flooding in the area
19:04:50 houston ecopod is shut down
19:04:57 we don't know the state of the machines
19:05:00 o/
19:05:04 wow
19:05:09 eep
19:05:10 yikes
19:05:10 but we can guess that this will likely delay our ability to start working on them
19:05:11 kind of scary
19:05:12 hopefully they won't be swimming in a lake now
19:05:13 for the summit
19:05:13 o.O
19:05:40 also the hipchat server was in houston so we can't exactly ask
19:05:40 so the last news i had from the servers was from yesterday
19:05:58 unfortunate, though i guess we won't know more until lake houston recedes
19:05:59 they had nearly completed the setup, pending on dhcp reconfiguration for ILOs
19:06:09 but yes, cannot get more info until this passes
19:06:28 * mordred suggests that in the future, putting datacenters in flood prone areas is less than ideal ..
19:06:31 looks like continued rain for the rest of the week
19:06:43 I didn't know Houston was flood prone
19:06:50 mordred: in the future, all areas will be flood prone :(
19:06:51 so just wanted to let everyone know what was going on and that we might plan to work on non-infra-cloud things during the friday workday
19:06:56 thanks for the heads up! there's still plenty of infra-cloud-ish things we can kick around on summit friday even if we don't have machines to do it on
19:06:57 any area would be flood prone if it got 45 cm of rain in 24 hours
19:07:06 anteaya: it's a frequent target of hurricanes
19:07:08 New requirement: datacenters need to be constructed on large pontoons for additional 9s
19:07:15 mordred: ah, thank you
19:07:28 anteaya: well, yeah. any place would. except for places with an elevation compared to their surroundings of greater than 45cm
19:07:36 anteaya: unfortunately much of our current landmass in north america was once occupied by an inland sea. perhaps it's returning
19:07:40 mordred: fair enough
19:07:52 fungi: seems to be a possibility
19:07:56 fungi: ++
19:08:16 anything else on this topic for now?
19:08:23 not from me
19:08:43 I lost my weechat server so I won't be online while my computer is off
19:08:47 #topic Storyboard needs a dev server (Zara, pleia2)
19:08:56 #link https://storyboard.openstack.org/#!/story/2000028 Storyboard needs a dev server
19:09:08 I don't really have much more to add to this, we should provide them one
19:09:15 pleia2: ++
19:09:16 hi! :)
19:09:18 thoughts? concerns, etc?
19:09:18 concise!
19:09:25 agreement
19:09:26 i thought we had one?
19:09:30 yeah, kind of same from me, details are in the story (thanks for adding context, fungi)
19:09:32 clearly i am mistaken
19:09:48 I didn't understand the significance of the draft builds at all, so that's interesting
19:09:54 we were going to have one, krotscheck had started some puppeting for it, those changes may still be kicking around in gerrit somewhere
19:10:11 is the idea to build it with puppet or ansible?
19:10:19 "we want to update the user docs, so want to deploy a version with Ansible as non-Ansible-experts"
19:10:40 nibalizer: a lot of the notes in that story are old
19:10:48 nibalizer: actually, that one's new
19:11:00 we do want puppet, like everything else
19:11:06 we have developer docs for storyboard, and there was a section for operator docs
19:11:17 but the operator docs turned out to be completely wrong
19:11:20 pleia2: especially since we already puppet ansible
19:11:20 haha
19:11:39 https://galaxy.ansible.com/palvarez89/storyboard/ may or may not be useful if some ansible is desired
19:11:50 yeah, i think the solution here is to proceed in the direction we started originally... dev/prod puppet classes like we have for other stuff, spin up a second machine with the dev class, add a separate trove db with dummy/sample data or something
19:12:08 sounds good, I'm happy to work with the storyboard folks on this post summit
19:12:18 me too
19:12:19 will pull in help as needed
19:12:22 thanks nibalizer!
19:12:25 o7
19:12:29 heya
19:12:41 so yeah, at the time, we were thinking 'we should get round to deploying something with ansible, as people who don't know ansible, so we can write docs for setting up with that role', and a dev server seemed a way to kill two birds with one stone
19:12:45 sorry, it's raining a bit, causing a traffic jam on the way back home
19:12:58 Zara: i think a dev server should look very close to our prod server
19:12:59 rcarrillocruz: are you in houston?
19:13:07 :-)
19:13:22 but it's not necessarily part of a dev server setup; it seemed the simplest option at the time (where we thought we might be on our own for it)
19:13:29 i was driving a car, not rowing a boat
19:13:31 :D
19:13:32 fungi: that approach sounds good to me
19:13:49 so no, not in houston yet
19:13:58 so that's the context on that
19:14:12 there's nothing wrong with having multiple dev servers, i just continue to believe that one dev server for it should be hosted where we can point the storyboard-webclient draft jobs
19:14:18 * mordred is supportive of a dev server for Zara
19:14:31 agreed
19:14:34 if someone wants to get FANCY ...
19:14:37 and SotK
19:14:44 but also the goal of having the puppet module be reconsumable is to make it easy for others to set up dev or prod servers of their own
19:14:46 syncing data from prod sb would be neat
19:14:51 on a periodic basis
19:14:58 o/
19:15:03 an ansible role to deploy storyboard is something i think we'd be happy to host and test
19:15:05 but I do not think anyone should block doing the dev server on that
19:15:08 odyssey4me: you made it!
19:15:36 i'm happy to help with that
19:15:39 anteaya: thank goodness for appointment reminders, which are useful if you've got the dates set right :)
19:15:48 i'm not opposed to having an ansible role for deploying storyboard, but i think that whatever server we're maintaining still needs to be deployed with puppet similar to the rest of our servers
19:15:49 i'm idle-ish on my puppet/ansible tasks lately
19:15:51 odyssey4me: yay correct dates
19:16:06 fungi: agreement
19:16:17 don't want to get into a one-of-these-things-is-not-like-the-other situation with our server management
19:16:19 possibly a puppet module for dev sb, plus an ansible playbook to sync up data and all
19:16:21 ?
19:16:22 fungi: yeah
19:16:27 :) thanks, everyone, and yeah, if it's on the same infra, I agree to do it the same way
19:16:53 s/to/we should
19:17:04 okay, so seems like general agreement, and the task was already begun at one point, so it should be reasonable to just pick it back up and continue running with it
19:17:10 \o/
19:17:27 no
19:17:41 the puppet module for storyboard should deploy both servers
19:17:45 with slightly different inputs
19:18:11 right, that's what i meant about having two classes (implication was in system-config)
19:18:40 both using a generalized storyboard puppet module
19:18:51 and generally i feel that our pattern of two classes isn't correct
19:19:00 we should really only need one with different inputs
19:19:00 * fungi handwaves
19:19:07 but i'm not exactly volunteering to do that refactor
19:19:23 you prefer we abstract our differing inputs out into the global site manifest? (or hiera something something?)
19:19:24 anyways i think we have consensus
19:19:32 okay, cool
19:19:44 no need to beat this to death in-meeting. moving on
19:20:06 ++
19:20:18 yep
19:20:20 the prod/dev classes pattern is horrible
19:20:21 #topic Infra-cloud changes needed with new setup: east/west, network ranges (crinkle, yolanda, rcarrillocruz)
19:20:26 hi
19:20:38 so with the move, we don't have east/west
19:20:45 everything ended up on the same network
19:20:57 so I wanted to raise the topic of the two clouds' separation now
19:21:14 do we still want to keep them separate? if so, what about the naming, network ranges, etc?
19:21:28 I think having two clouds is a good idea, then we can do staggered upgrades and have no shared data or services
19:21:40 ya++
19:21:42 which fits into our original model of completely tearing it down and rebuilding it as the upgrade path
19:21:43 yeah i think we still wanted that logical separation for that ^
19:21:44 as in should we continue with the plan we discussed in ft. collins to assign hosts to one "cloud" or the other and then assign them separate public network allocations?
19:21:56 separation++
19:21:58 sounds right
19:22:04 ++
19:22:12 or really we can even stick with the same netblocks/upstream gateways/routes i think and just adjust the assignment ranges in neutron?
19:22:13 so in terms of network, we have a /19 public range now, so we split that in two, and have the same number of servers in both?
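
A minimal sketch of the subnet arithmetic raised at 19:22:13, assuming a hypothetical 10.10.0.0/19 public range (the actual prefix isn't given in the log); splitting the /19 in half gives each cloud a /20 with just over 4000 usable addresses:

    # Sketch only: the 10.10.0.0/19 prefix and the vanilla/chocolate names
    # are placeholders taken from the naming discussion, not real config.
    import ipaddress

    public_range = ipaddress.ip_network("10.10.0.0/19")
    vanilla, chocolate = public_range.subnets(prefixlen_diff=1)

    for name, net in (("vanilla", vanilla), ("chocolate", chocolate)):
        usable = net.num_addresses - 2  # minus network and broadcast addresses
        print(f"{name}: {net} ({usable} usable addresses)")
    # vanilla: 10.10.0.0/20 (4094 usable addresses)
    # chocolate: 10.10.16.0/20 (4094 usable addresses)

The alternative raised at 19:22:12 (keep the existing netblock, gateways, and routes, and only adjust the assignment ranges in neutron) would amount to carving those same two halves out as separate allocation pools rather than separate subnets.
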
19:22:16 as far as names we could continue to call them east and west just to be confusing :)
19:22:25 and east/west sounds confusing
19:22:43 vanilla cloud, chocolate cloud
19:22:51 or cloud1 cloud2
19:22:57 * fungi engages in the bikeshed
19:22:59 cloud1/cloud2++
19:23:05 also, the number of servers in east/west was so unequal, this can be a good opportunity to rebalance the servers, and have the same number on both
19:23:11 i like vanilla/chocolate :)
19:23:13 yolanda: ++
19:23:15 RegionOne and RegionNone!
19:23:16 i like vanilla chocolate
19:23:24 mordred: ++
19:23:26 ++ for vanilla chocolate
19:23:33 splitting the servers based on hardware class makes sense to me
19:23:38 yeah, although vanilla could tell people there's a special custom flavor on chocolate :D
19:23:39 yolanda: for the networking I think we get the most flexibility if we do split it between the clouds
19:23:40 as in
19:23:41 we had 3 different models
19:23:42 vanilla kernel
19:23:43 vanilla cloud
19:23:47 yeah, i was wondering if we should be grouping by model, roughly
19:23:47 anyway, bikeshedding...
19:23:51 there is different hardware in west and east so there might end up being differences, like in the mellanox cards
19:23:55 yolanda: in theory because we use provider networks we don't need to do that, but it will let us move away from that in the future
19:24:10 crinkle: ya, I have a feeling that it will be best effort due to that
19:24:17 would it make sense to do 3 clouds then?
19:24:21 ya so split clouds, vanilla/chocolate naming scheme, and split on hardware type
19:24:31 strawberry, obviously
19:24:34 ya
19:24:40 shiny new hardware on one cloud, crap hardware on the other?
19:24:44 so we can be neapolitan
19:24:50 is the 3rd model type different enough to warrant that?
19:24:52 cookiedoughcloud
19:24:55 i highly doubt anything is 'shiny new'
19:25:11 old vs oldest?
19:25:16 haha
19:25:25 sure
19:25:37 really old, and really really old
19:25:52 point is if we have to firmware upgrade the mellanox cards or whatever, that would affect only one cloud
19:26:16 which i think is more our style than trying to run 2 clouds at half capacity
19:26:31 yeah, i see the most benefit from roughly splitting by server models, where we can
19:26:38 what are the 3 classes?
19:26:41 i think grouping servers in a rack together makes sense, in case they decide they want to move one or a switch goes down on one
19:26:43 i only see two in http://docs.openstack.org/infra/system-config/infra-cloud.html
19:27:07 crinkle: any chance the racks are mostly one model or another?
19:27:09 unfortunately that lacks model numbers
19:27:10 crinkle, do you have the rack diagrams handy? not on vpn now
19:27:16 crinkle: that's a good suggestion
19:27:19 like that
19:27:35 fungi: i think the west rack is more or less one model, east may be a couple different models
19:27:38 crinkle: i think grouping servers by geolocation is good too
19:27:38 yolanda: nope i do not
19:27:44 cos it's pretty common to lose availability in racks
19:27:54 probably better than grouping by hardware classification
19:27:58 i can get those tomorrow, they are listed on a jira ticket
19:28:01 can we update the documentation to reflect reality?
19:28:17 oh wait, i have a copy in email
19:28:29 sure. it looked like they were doing 802.3ad lag cross-device for uplinks, but still if we're not aggregating at the host level then a switch outage is still going to impact half a rack
19:29:02 i feel a little in the dark here. i agree that in principle having a cloud for each of our major configurations sounds reasonable, but i don't know how many that is, or whether the differences are substantial.
19:29:09 so we have racks 5, 8, 9, 12, and 13
19:29:10 nice
19:29:46 also, i'm with jeblair here, our docs should ideally grow at least a loose rack diagram with host names, and then we should have some table somewhere mapping up models to host names/ranges
19:29:58 i can take care of that
19:30:09 thanks yolanda
19:30:14 makes it easier to reason about stuff like this
19:30:16 thanks!
19:30:23 ++
19:30:28 yep, we'd better have all documentation in place, then decide
19:31:18 okay, anything else on this front?
19:31:59 not from my side
19:32:23 #topic Contact address for donated test resources
19:32:41 this is something i'd like to get a little consensus on, mostly from the root sysadmin team
19:32:59 we had a service provider donating resources to us try to reach out to let us know our service was expiring
19:33:28 the address we gave them to contact us is a dumping ground for backscatter from gerrit and other sources of e-mail
19:33:38 which nobody looks at afaik
19:33:45 it's on a webmail service
19:33:47 we look at it when we know we need to look there
19:33:59 yeah
19:34:02 i hopped on there today and it had almost 25k unread messages
19:34:24 wondering whether anyone objects to splitting that up and not pointing service provider accounts at that
19:34:29 we should probably have a root-spam@ and a root-important@
19:34:34 ya
19:34:53 i would read mail forwarded to me from a root-important@
19:34:59 I think having a place that does not get cron or gerrit emails that we can point account signup things to would be great
19:35:05 this is more or less what i was considering
19:35:32 yes
19:35:34 mordred: +1
19:35:42 mordred ++
19:35:43 agreed
19:35:45 and it seems likely I'd have less of a bounce problem if I was just getting -important
19:35:50 of course, we probably need the main root-important@ alias, and then probably more than one alias pointing to it - because at some places we need more than one email to sign up for more than one account
19:36:02 (my gmail usage means we turned it off entirely for me)
19:36:03 we could i suppose have a "hidden" ml at lists.o.o and subscribe interested infra-root members (since they're the only ones who have access to the logins for those services)
19:36:12 pleia2: this is not the same thing as that
19:36:18 jeblair: oh
19:36:21 fungi: I would be okay with that too
19:36:21 mordred: +parts?
19:36:27 pleia2: that is mail to root from systems
19:36:28 nibalizer: maybe so
19:36:33 pleia2: this is mail from external services
19:36:34 yeah, this particular address currently does not go directly to any of us
19:36:43 i love getting email from humans
19:36:44 jeblair: ah yes, we do have two email accounts
19:36:46 it goes into a webmail account in rackspace and rots until someone goes spelunking
19:36:48 i hate getting email from robots
19:36:55 er, config things
19:36:55 I don't REALLY like human email
19:36:59 but I'll deal with it
19:37:05 this would still pretty much all be from robots
19:37:11 just fewer of them
19:37:14 mordred, ++
19:37:25 some of my best friends are robots, after all
19:37:27 well let's try it
19:37:28 hopefully saying important things
19:37:48 move a couple accounts to the new addr and see if it seems like it's working, then move the rest
19:37:56 so the main question was hidden list in mailman vs aliases forwarder somewhere?
19:38:07 fungi: mlist sounds good; though as mordred mentions, we'll need at least 2.
19:38:21 mailman+
19:38:28 sure, that's really no harder than one ;)
19:38:30 (but one can obviously just be tied to the other)
19:39:20 and the only reason i say "hidden" is so that the general population doesn't think it's a way to reach us for support
19:39:41 well also I think the archives should be hidden too
19:39:44 fungi: also, this will be a vector for compromising our cloud accounts
19:39:53 as the posts may contain credentials or other sensitive information
19:39:54 jeblair: ah hrm
19:40:01 anteaya: so, not archived at all :)
19:40:07 even better
19:40:15 jeblair: anteaya: correct
19:40:32 and a very good point
19:40:51 so now wondering if an ml alias somewhere more secure wouldn't be a better idea
19:41:01 er, e-mail alias, non-ml
19:41:03 if we're worried about a mailman bug opening that up, better to stick with a redirect, though we don't have a great host for that.
19:41:44 a very small vm?
19:41:50 might be able to just set up forwarding addresses (without being real accounts) on the foundation mail server, but i'm not positive, and that has limited access for us
19:42:07 why not procmail?
19:42:16 jeblair: i think that might be the way to do it
19:42:25 just configure it with a rule to forward
19:42:45 the current keys to the kingdom are on a server which we probably don't want processing inbound e-mail from the internet
19:42:54 no. probably not
19:43:11 so i'm feeling more and more like a separate very tiny vm would be more suitable, as much as i hate to suggest that
19:43:28 the best bad idea going, it sounds like
19:43:50 also, stopping using @openstack.org for it means that we're reducing at least some of the risk exposure for compromising our accounts
19:44:00 mark my words -- we'll be running a cyrus server in no time. :)
19:44:07 somealias@somehost.openstack.org instead
19:44:10 would a simple forward not work?
19:44:24 clarkb: simple forward where and how, from what to what?
19:44:26 jeblair: *shudder*
19:44:26 clarkb: i think that is the idea under discussion.
19:44:27 or require all infra root to imap the existing account
19:44:39 fungi: from the current account to each of our personal accounts
19:44:43 or invert it and pull via imap
19:44:55 that's an option i hadn't considered, though the existing account is a wasteland of terrible. best to cut our losses there
19:45:02 clarkb: oh, well the current account gets all of the backscatter from everything.
19:45:09 the option being to imap the current account
19:45:20 ok, filter on "to infra-root" or whatever
19:45:26 that isn't unsolvable aiui
19:45:30 to be clear, the current account is exactly what we already want -- it's just too much of what we want :)
19:45:30 i don't know about setting up a forward from the current account. we *might* have that ability in the mailbox configuration
19:45:38 fungi: we do
19:45:47 i'd also suggest using plus addressing on the recipient, to make your personal filters trivial.
19:45:59 the problem with that is not everything takes it
19:46:01 fungi: that's how my account was set up before it was deleted.
19:46:03 okay, so that doesn't need someone at the foundation to fix for us then? just adding another alias would need assistance?
19:46:04 which can be super frustrating
19:46:20 fungi: i'm not clear what's actually being proposed though
19:46:38 jeblair: however, i still very much like the idea of an account on a server which someone who isn't us is less likely to delete
19:46:39 tell me what i'm missing from "forward all 1k messages/day to all of us"
19:46:51 jeblair: filter out the stuff that isn't to infra-root
19:47:00 aiui infra-root is just an alias for another thing that has been in use for forever?
19:47:13 assuming we're not already using infra-root@o.o for other stuff
19:47:13 infra-root is the account; there are several aliases that point *to* it
19:47:20 jeblair: ah ok
19:47:34 is there not one alias that is significantly less noise than the others?
19:47:37 we generally have not signed up with infra-root
19:47:43 most of the mail goes to the other aliases
19:47:53 so create root-important@o.o and fwd it to infra-root@o.o and configure the infra-root account to spray things that came in to root-important out to each of us
19:47:58 like 'gerrit@' and 'jenkins@' and 'openstackinfrastructurebot@' or whatever
19:48:04 jeblair: ya
19:48:08 though that mail service does have some manipulation rules which could possibly be used to forward based on recipient address
19:48:17 so as long as we have one that isn't the spam addr, it should mostly work?
19:48:22 i haven't looked closely at that feature
19:48:30 honestly, I think even if the mail service does not have selective forwarding
19:48:32 it's not a problem
19:48:38 we all have the ability to filter email locally
19:48:44 (the reason we have multiple aliases pointing to the account is that the foundation pays per-account, but not per-address, and they wanted to keep costs down, so we collapsed them that way)
19:48:45 and most of us pretty trivially
19:48:52 yah
19:48:54 that's still a lot of messages for all of us to bitbucket continually
19:49:01 totally
19:49:13 fungi: mordred: if we did it server side into a folder, then imap only that, it wouldn't be
19:49:23 just saying - if we can't forward per recip - we can filter locally
19:49:45 put all mail addressed to important user foo in folder important, then everyone imap that
19:49:53 or forward just that folder, either way we should be able to make something work
19:49:58 sure. it's email, one of the most robust systems on the planet. there are at least 1-billion ways to solve this. I support all of them
19:50:03 i'm open to adding another imap box in my mutt config as long as someone is volunteering to fiddle with the filtering mechanism there
19:50:24 i can do so, but not this week
19:50:26 ("there" meaning in the rackspace mail app stuff)
19:50:50 yeah, i don't think this is crazy-urgent, but it is something i need to make sure we don't forget we're not doing
19:51:08 if you want to action me to investigate that, i can
19:51:12 if it's imap, then you should definitely plus address, as that lets you pick the subfolder on the host, and not use filters at all.
19:51:22 and since we're not meeting next week, that gives me some time :)
19:51:24 dougwig: as nibalizer said you cannot rely on that
19:51:30 dougwig: it's great when it works, terrible when it doesn't
19:51:37 on a related note, we ought to get contact addresses for things like the providers' incident trackers coifigured to go into whatever solution we're coming up with
19:51:40 we should not sign up for anything that does not accept a plus address :)
19:51:41 er, configured
19:51:46 eh, you control the destination MTA, so you absolutely can.
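
A minimal sketch of the local-filtering fallback described above (pull the shared inbox over IMAP and keep only mail addressed to the low-noise alias), using Python's imaplib; the host, credentials, and the root-important@ address are placeholders drawn from the discussion, not the real configuration:

    # Sketch only: HOST, USER, PASSWORD and the alias are hypothetical.
    import imaplib

    HOST = "imap.example.com"                   # provider's IMAP endpoint (placeholder)
    USER = "infra-root@openstack.org"           # the shared account under discussion
    PASSWORD = "..."                            # wherever roots keep the credential
    IMPORTANT = "root-important@openstack.org"  # proposed low-noise alias

    with imaplib.IMAP4_SSL(HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX", readonly=True)
        # Match only messages addressed to the low-noise alias; gerrit/cron
        # backscatter sent to the other aliases never matches this search.
        status, data = imap.search(None, "TO", f'"{IMPORTANT}"')
        uids = data[0].split()
        print(f"{len(uids)} messages addressed to {IMPORTANT}")

If plus addressing were reliable (dougwig's suggestion at 19:51:12), the same split could be made by giving each service its own plus-addressed recipient and sorting into subfolders server-side, but as noted in the log many sign-up forms reject addresses containing '+'.
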
19:52:06 jeblair: I suppose that is also a valid stance
19:52:15 dougwig: it's the side sending you email that often rejects valid email addresses
19:52:20 because right now i think one of them sends updates to monty, one sends them to someone in foundation executive management, et cetera. it's all over the place
19:52:53 fungi: is this the last topic you wanted to get to today?
19:53:02 where we went wrong was ever signing up for something that *didn't* go to monty
19:53:10 clarkb: *blinks* -- the sending mta has nothing to do with it; it's fully compliant.
19:53:13 #action jeblair investigate mail filtering by recipient options for infra-root@o.o inbox
19:53:14 mordred is an excellent email filter/responder thing
19:53:25 dougwig: many services will reject email addresses with + in them
19:53:25 anteaya: had one more quick one
19:53:31 fungi: k
19:53:39 #topic Summit session planning
19:53:48 just some quick updates/links here for convenience
19:53:56 as the summit draws ever closer
19:54:09 dougwig: we also don't control the receiving mta if we continue to use the foundation's mta (but if we spin up a server, we will, and could use + addressing with any character)
19:54:59 pleia2 has already started a list of "infra-relevant sessions not in our track" at the bottom of the planning etherpad, so that might be a good place for people to add others and try to coordinate and make sure we get good coverage when there are conflicts
19:55:15 #link https://etherpad.openstack.org/p/infra-newton-summit-planning add infra-relevant sessions at the bottom for coordination purposes
19:55:46 in my e-mail announcement to the list with our finalized schedule, i also included a shorthand for some of the conflicts i spotted
19:55:55 thank you!
19:56:00 #link http://lists.openstack.org/pipermail/openstack-infra/2016-April/004162.html finalized session schedule with conflicts noted
19:56:08 :) thank you
19:56:17 thank you
19:56:37 #link also pleia2 started our etherpads for the various sessions and i've linked them from the usual wiki page
19:56:46 #link https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Infrastructure summit sessions wiki links
19:56:52 they are also linked from the session schedule
19:56:54 thanks pleia2
19:57:07 thank you pleia2 and fungi
19:57:23 thanks pleia2, fungi
19:57:27 looking forward to the collaboration
19:57:30 i included utc time translations there for the benefit of people trying to follow along remotely in etherpads during/after sessions (or who, like me, keep personal time in utc when travelling)
19:57:54 thank you
19:58:00 thanks pleia2
19:58:02 clarkb: ahh, yep, i did also put the etherpads in the official schedule as you noted
19:58:02 * anteaya also uses utc time
19:58:11 fungi: you should include a key that tells us what time drinking starts in UTC in the area
19:58:14 #link https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Infrastructure%3A official summit schedule
19:58:21 if/when we split the developer summit from the marketing summit, we should have the developer summit schedule be in utc
19:58:23 mordred: starts?
19:58:30 fungi: good point
19:58:37 jeblair: agreed
19:58:46 ah, good idea re: utc
19:58:49 fungi: for this summit, maybe we should make it a goal to do one-drink-per-session
19:58:50 as another shibboleth to let people know they're at the wrong event :)
19:58:57 i'll try to backlink other appropriate metadata in the etherpad headers before the end of the week too, in preparation
19:59:03 jeblair: ha ha ha
19:59:21 any summit questions in these last few seconds?
19:59:30 we're at about 30 seconds remaining
19:59:31 'hi, you showed up at 3am, you probably meant to attend this *other* event'
19:59:39 can we postpone for a week or two, you think? hold out for better weather?
19:59:45 better weather
19:59:52 yes, prepare for a lot of wet
20:00:03 i'm told inner tubes are a good travel accessory
20:00:12 woooo water
20:00:20 and we're at time. thanks all, hope to see lots of you in austin!!!
20:00:24 \o/
20:00:26 #endmeeting