19:02:50 <fungi> #startmeeting infra
19:02:51 <openstack> Meeting started Tue Apr 19 19:02:50 2016 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:52 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:55 <openstack> The meeting name has been set to 'infra'
19:02:59 <fungi> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:04 <fungi> #topic Announcements
19:03:11 <fungi> #info Infra team meeting for April 26 (next week's) is hereby cancelled on account of there's a summit some of us might be attending. Join us again on May 3 at 19:00 UTC for our usual IRC meeting shenanigans.
19:03:28 <fungi> #topic Actions from last meeting
19:03:34 <fungi> #link http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-04-12-19.02.html
19:03:39 <fungi> "1. (none)"
19:03:45 <fungi> #topic Specs approval
19:03:50 <fungi> none new this week
19:04:20 <fungi> #topic Priority Efforts: Infra-cloud (crinkle, yolanda, rcarrillocruz)
19:04:25 <crinkle> hi
19:04:29 <yolanda> hi
19:04:31 <fungi> i hear there's some water
19:04:32 <eil397> hi
19:04:40 <jeblair> hello infra cloud?
19:04:41 <pabelanger> o/
19:04:42 <fungi> did they relocate our cloud to a lake?
19:04:48 <crinkle> so we learned this morning that the houston data center was just shut off due to severe flooding in the area
19:04:50 <yolanda> houston ecopod is shut down
19:04:57 <crinkle> we don't know the state of the machines
19:05:00 <olaph> o/
19:05:04 <mordred> wow
19:05:09 <pabelanger> eep
19:05:10 <docaedo> yikes
19:05:10 <crinkle> but we can guess that this will likely delay our ability to start working on them
19:05:11 <morgan> kind of scary
19:05:12 <yolanda> hopefully they won't be swimming in a lake now
19:05:13 <crinkle> for the summit
19:05:13 <Zara> o.O
19:05:40 <crinkle> also the hipchat server was in houston so we can't exactly ask
19:05:40 <yolanda> so the last news i had from the servers was from yesterday
19:05:58 <fungi> unfortunate, though i guess we won't know more until lake houston recedes
19:05:59 <yolanda> they had nearly completed the setup, pending dhcp reconfiguration for the ILOs
19:06:09 <yolanda> but yes, cannot get more info until this passes
19:06:28 * mordred suggests that in the future, putting datacenters in flood prone areas is less than ideal ..
19:06:31 <anteaya> looks like continued rain for the rest of the week
19:06:43 <anteaya> I didn't know Houston was flood prone
19:06:50 <jeblair> mordred: in the future, all areas will be flood prone :(
19:06:51 <crinkle> so just wanted to let everyone know what was going on and that we might plan to work on non-infra-cloud things during the friday workday
19:06:56 <fungi> thanks for the heads up! there's still plenty of infracloudish things we can kick around on summit friday even if we don't have machines to do it on
19:06:57 <anteaya> any area would be flood prone if they got 45 cm of rain in 24 hours
19:07:06 <mordred> anteaya: it's a frequent target of hurricanes
19:07:08 <bkero> New requirement: datacenters need to be constructed on large pontoons for additional 9s
19:07:15 <anteaya> mordred: ah, thank you
19:07:28 <mordred> anteaya: well, yeah. any place would. except for places with an elevation compared to their surroundings of greater than 45cm
19:07:36 <fungi> anteaya: unfortunately much of our current landmass in north america was once occupied by an inland sea. perhaps it's returning
19:07:40 <anteaya> mordred: fair enough
19:07:52 <anteaya> fungi: seems to be a possibility
19:07:56 <mordred> fungi: ++
19:08:16 <fungi> anything else on this topic for now?
19:08:23 <crinkle> not from me
19:08:43 <anteaya> I lost my weechat server so won't be online while my computer is off
19:08:47 <fungi> #topic Storyboard needs a dev server (Zara, pleia2)
19:08:56 <fungi> #link https://storyboard.openstack.org/#!/story/2000028 Storyboard needs a dev server
19:09:08 <pleia2> I don't really have much more to add to this, we should provide them one
19:09:15 <anteaya> pleia2: ++
19:09:16 <Zara> hi! :)
19:09:18 <pleia2> thoughts? concerns, etc?
19:09:18 <fungi> concise!
19:09:25 <anteaya> agreement
19:09:26 <nibalizer> i thought we had one?
19:09:30 <Zara> yeah, kind of same from me, detail's in the story (thanks for adding context, fungi)
19:09:32 <nibalizer> clearly i am mistaken
19:09:48 <Zara> I didn't understand the significance of the draft builds at all, so that's interesting
19:09:54 <fungi> we were going to have one, krotscheck had started some puppeting for it, those changes may still be kicking around in gerrit somewhere
19:10:11 <nibalizer> is the idea to build it with puppet or ansible?
19:10:19 <nibalizer> "we want to update the user docs, so want to deploy a version with Ansible as non-Ansible-experts"
19:10:40 <pleia2> nibalizer: a lot of the notes in that story are old
19:10:48 <Zara> nibalizer: actually, that one's new
19:11:00 <pleia2> we do want puppet, like everything else
19:11:06 <Zara> we have developer docs for storyboard, and there was a section for operator docs
19:11:17 <Zara> but the operator docs turned out to be completely wrong
19:11:20 <clarkb> pleia2: especially since we already puppet ansible
19:11:20 <nibalizer> haha
19:11:39 <persia> https://galaxy.ansible.com/palvarez89/storyboard/ may or may not be useful if some ansible is desired
19:11:50 <fungi> yeah, i think the solution here is to proceed in the direction we started originally... dev/prod puppet classes like we have for other stuff, spin up a second machine with the dev class, add a separate trove db with dummy/sample data or something
19:12:08 <pleia2> sounds good, I'm happy to work with the storyboard folks on this post summit
19:12:18 <nibalizer> me too
19:12:19 <pleia2> will pull in help as needed
19:12:22 <pleia2> thanks nibalizer!
19:12:25 <nibalizer> o7
19:12:29 <rcarrillocruz> heya
19:12:41 <Zara> so yeah, at the time, we were thinking 'we should get round to deploying something with ansible, as people who don't know ansible, so we can write docs for setting up with that role', and a dev server seemed a way to kill two birds with one stone
19:12:45 <rcarrillocruz> sorry, raining a bit, causing a traffic jam on the way back home
19:12:58 <nibalizer> Zara: i think a dev server should look very close to our prod server
19:12:59 <clarkb> rcarrillocruz: are you in houston?
19:13:07 <rcarrillocruz> :-)
19:13:22 <Zara> but it's not necessarily part of a dev server setup; it seemed the simplest option at the time (where we thought we might be on our own for it)
19:13:29 <rcarrillocruz> i was driving a car, not rowing a boat
19:13:31 <rcarrillocruz> :D
19:13:32 <SotK> fungi: that approach sounds good to me
19:13:49 <rcarrillocruz> so no, not in houston yet
19:13:58 <Zara> so that's the context on that
19:14:12 <fungi> there's nothing wrong with having multiple dev servers, i just continue to believe that one dev server for it should be hosted where we can point the storyboard-webclient draft jobs
19:14:18 * mordred is supportive of a dev server for Zara
19:14:31 <anteaya> agreed
19:14:34 <mordred> if someone wants to get FANCY ...
19:14:37 <anteaya> and SotK
19:14:44 <fungi> but also the goal of having the puppet module be reconsumable is to make it easy for others to set up dev or prod servers of their own
19:14:46 <mordred> syncing data from prod sb would be neat
19:14:51 <mordred> on a periodic basis
19:14:58 <odyssey4me> o/
19:15:03 <nibalizer> an ansible role to deploy storyboard is something i think we'd be happy to host and test
19:15:05 <mordred> but I do not think anyone should block doing the dev server on that
19:15:08 <anteaya> odyssey4me: you made it!
19:15:36 <rcarrillocruz> i'm happy to help with that
19:15:39 <odyssey4me> anteaya thank goodness for appointment reminders, which are useful if you've got the dates set right :)
19:15:48 <fungi> i'm not opposed to having an ansible role for deploying storyboard, but i think that whatever server we're maintaining still needs to be deployed with puppet similar to the rest of our servers
19:15:49 <rcarrillocruz> i'm idle-ish on my puppet/ansible tasks lately
19:15:51 <anteaya> odyssey4me: yay correct dates
19:16:06 <anteaya> fungi: agreement
19:16:17 <fungi> don't want to get into a one-of-these-things-is-not-like-the-other situation with our server management
19:16:19 <rcarrillocruz> possibly a puppet module for dev sb, plus an ansible playbook to sync up data and all
19:16:21 <rcarrillocruz> ?
19:16:22 <pleia2> fungi: yeah
19:16:27 <Zara> :) thanks, everyone, and yeah, if it's on the same infra, I agree to do it the same way
19:16:53 <Zara> s/to/we should
19:17:04 <fungi> okay, so seems like general agreement, and the task was already begun at one point, so should be reasonable to just pick it back up and continue running with it
19:17:10 <Zara> \o/
19:17:27 <nibalizer> no
19:17:41 <nibalizer> the puppet module for storyboard should deploy both servers
19:17:45 <nibalizer> with slightly different inputs
19:18:11 <fungi> right, that's what i meant about having two classes (implication was in system-config)
19:18:40 <fungi> both using a generalized storyboard puppet module
19:18:51 <nibalizer> and generally i feel that our pattern of two classes isn't correct
19:19:00 <nibalizer> we should really only need one with different inputs
19:19:00 * fungi handwaves
19:19:07 <nibalizer> but i'm not exactly volunteering to do that refactor
19:19:23 <fungi> you prefer we abstract our differing inputs out into the global site manifest? (or hiera something something?)
19:19:24 <nibalizer> anyways i think we have consensus
19:19:32 <fungi> okay, cool
19:19:44 <fungi> no need to beat this to death in-meeting. moving on
19:20:06 <rcarrillocruz> ++
19:20:18 <nibalizer> yep
19:20:20 <rcarrillocruz> the prod/dev classes pattern is horrible
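As an aside on mordred's earlier suggestion of periodically syncing data from production storyboard into the dev server, a rough sketch of what such a job might look like, assuming Python driving the stock mysqldump/mysql clients; every hostname, database name and credential below is a placeholder, not real infra configuration:

```python
#!/usr/bin/env python
# Hypothetical sketch of a periodic prod -> dev StoryBoard data sync.
# All hostnames, database names and credentials are placeholders.
import subprocess

PROD_DB = {"host": "prod-trove.example.org", "user": "storyboard_ro",
           "password": "REDACTED", "db": "storyboard"}
DEV_DB = {"host": "dev-trove.example.org", "user": "storyboard",
          "password": "REDACTED", "db": "storyboard"}

def sync():
    # Dump the production database in a single consistent transaction ...
    dump = subprocess.run(
        ["mysqldump", "--single-transaction",
         "--host", PROD_DB["host"], "--user", PROD_DB["user"],
         "--password=" + PROD_DB["password"], PROD_DB["db"]],
        check=True, capture_output=True)
    # ... and load it into the dev trove instance. Any scrubbing of
    # sensitive fields would happen between these two steps.
    subprocess.run(
        ["mysql", "--host", DEV_DB["host"], "--user", DEV_DB["user"],
         "--password=" + DEV_DB["password"], DEV_DB["db"]],
        input=dump.stdout, check=True)

if __name__ == "__main__":
    sync()
```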
19:20:21 <fungi> #topic Infra-cloud changes needed with new setup: east/west, network ranges (crinkle, yolanda, rcarrillocruz)
19:20:26 <yolanda> hi
19:20:38 <yolanda> so with the move, we don't have east/west
19:20:45 <yolanda> everything ended up in the same network
19:20:57 <yolanda> so I wanted to raise the topic about the two clouds separation now
19:21:14 <yolanda> do we still want to keep them separate? if so, what about the naming, network ranges, etc?
19:21:28 <clarkb> I think having two clouds is a good idea, then we can do staggered upgrades and have no shared data or services
19:21:40 <mordred> ya++
19:21:42 <clarkb> which fits into our original model of completely tearing it down and rebuilding it as the upgrade path
19:21:43 <crinkle> yeah i think we still wanted that logical separation for that ^
19:21:44 <fungi> as in should we continue with the plan we discussed in ft. collins to assign hosts to one "cloud" or the other and then assign them separate public network allocations?
19:21:56 <nibalizer> separation++
19:21:58 <mordred> sounds right
19:22:04 <jeblair> ++
19:22:12 <fungi> or really we can even stick with the same netblocks/upstream gateways/routes i think and just adjust the assignment ranges in neutron?
19:22:13 <yolanda> so in terms of network, we have a /19 public range now, so we split that in two, and have the same number of servers in both?
19:22:16 <clarkb> as far as names we could continue to call them east and west just to be confusing :)
19:22:25 <yolanda> and east/west, sounds confusing
19:22:43 <fungi> vanilla cloud, chocolate cloud
19:22:51 <clarkb> or cloud1 cloud2
19:22:57 * fungi engages in the bikeshed
19:22:59 <mordred> cloud1/cloud2++
19:23:05 <yolanda> also, the number of servers in east/west was so unequal, this can be a good opportunity to rebalance the servers, and have the same number on both
19:23:11 <jeblair> i like vanilla/chocolate :)
19:23:13 <clarkb> yolanda: ++
19:23:15 <fungi> RegionOne and RegionNone!
19:23:16 <nibalizer> i like vanilla chocolate
19:23:24 <morgan> mordred: ++
19:23:26 <yolanda> ++ for vanilla chocolate
19:23:33 <nibalizer> splitting the servers based on hardware class makes sense to me
19:23:38 <rcarrillocruz> yeah, although vanilla could tell people there's a special custom flavor on chocolate :D
19:23:39 <clarkb> yolanda: for the networking I think we get the most flexibility if we do split it between the clouds
19:23:40 <rcarrillocruz> as in
19:23:41 <nibalizer> we had 3 different models
19:23:42 <rcarrillocruz> vanilla kernel
19:23:43 <rcarrillocruz> vanilla cloud
19:23:47 <fungi> yeah, i was wondering if we should be grouping by model, roughly
19:23:47 <rcarrillocruz> anyway, bikeshedding...
19:23:51 <crinkle> there is different hardware in west and east so there might end up being differences, like in the mellanox cards
19:23:55 <clarkb> yolanda: in theory because we use provider networks we don't need to do that, but it will let us move away from that in the future
19:24:10 <clarkb> crinkle: ya, I have a feeling that it will be best effort due to that
19:24:17 <jeblair> would it make sense to do 3 clouds then?
19:24:21 <nibalizer> ya so split clouds, vanilla/chocolate naming scheme, and split on hardware type
19:24:31 <fungi> strawberry, obviously
19:24:34 <jeblair> ya
19:24:40 <yolanda> shiny new hardware on one cloud, crap hardware on the other?
19:24:44 <fungi> so we can be neopolitan
19:24:50 <jeblair> is the 3rd model type different enough to warrant that?
19:24:52 <crinkle> cookiedoughcloud
19:24:55 <nibalizer> i highly doubt anything is 'shiny new'
19:25:11 <yolanda> old vs oldest?
19:25:16 <crinkle> haha
19:25:25 <nibalizer> sure
19:25:37 <fungi> really old, and really really old
19:25:52 <nibalizer> point is if we have to firmware upgrade the mellanox cards or the whatever that would affect only one cloud
19:26:16 <nibalizer> which i think is more our style than trying to run 2 clouds at half capacity
19:26:31 <fungi> yeah, i see the most benefit from roughly splitting by server models, where we can
19:26:38 <jeblair> what are the 3 classes?
19:26:41 <crinkle> i think grouping servers in a rack together makes sense, in case they decide they want to move one or a switch goes down on one
19:26:43 <jeblair> i only see two in http://docs.openstack.org/infra/system-config/infra-cloud.html
19:27:07 <fungi> crinkle: any chance the racks are mostly one model or another?
19:27:09 <nibalizer> unfortunately that lacks model numbers
19:27:10 <yolanda> crinkle, do you have the rack diagrams handy? not on vpn now
19:27:16 <rcarrillocruz> crinkle: that's a  good suggestion
19:27:19 <rcarrillocruz> like that
19:27:35 <crinkle> fungi: i think the west rack is more or less one model, east may be a couple different models
19:27:38 <nibalizer> crinkle: i think grouping servers by geolocation is good too
19:27:38 <crinkle> yolanda: nope i do not
19:27:44 <rcarrillocruz> cos it's pretty common to lose availability in racks
19:27:54 <nibalizer> probably better than grouping by hardware classification
19:27:58 <yolanda> i can get those tomorrow, they are listed on a jira ticket
19:28:01 <jeblair> can we update the documentation to reflect reality?
19:28:17 <yolanda> oh wait, i have a copy on email
19:28:29 <fungi> sure. it looked like they were doing 802.3ad lag cross-device for uplinks, but if we're not aggregating at the host level then a switch outage is still going to impact half a rack
19:29:02 <jeblair> i feel a little in the dark here.  i agree that in principle having a cloud for each of our major configurations sounds reasonable, but i don't know how many that is, or whether the differences are substantial.
19:29:09 <yolanda> so we have rack 5, 8, 9, 12, and 13
19:29:10 <yolanda> nice
19:29:46 <fungi> also, i'm with jeblair here, our docs should ideally grow at least a loose rack diagram with host names, and then we should have some table somewhere mapping up models to host names/ranges
19:29:58 <yolanda> i can take care of that
19:30:09 <crinkle> thanks yolanda
19:30:14 <fungi> makes it easier to reason about stuff like this
19:30:16 <fungi> thanks!
19:30:23 <pabelanger> ++
19:30:28 <yolanda> yep, we better have all documentation in place, then decide
19:31:18 <fungi> okay, anything else on this front?
19:31:59 <yolanda> not from my side
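To make the /19 split yolanda raises concrete, a quick sketch using Python's ipaddress module; the 10.0.0.0/19 range (and the vanilla/chocolate names from the discussion above) are placeholders, since the real public allocation isn't quoted in the log:

```python
#!/usr/bin/env python
# Sketch: split one /19 public range into two equal halves, one per cloud.
# 10.0.0.0/19 is a placeholder, not the real infra-cloud allocation.
import ipaddress

public_range = ipaddress.ip_network("10.0.0.0/19")
vanilla, chocolate = public_range.subnets(prefixlen_diff=1)  # two /20s

for name, net in (("vanilla", vanilla), ("chocolate", chocolate)):
    hosts = list(net.hosts())
    # Leave some headroom at the start of each range for gateways,
    # baremetal hosts, etc. before the neutron allocation pool begins.
    print("{}: {} allocation pool {} - {}".format(
        name, net, hosts[10], hosts[-1]))
```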
19:32:23 <fungi> #topic Contact address for donated test resources
19:32:41 <fungi> this is something i'd like to get a little consensus on, mostly from the root sysadmin team
19:32:59 <fungi> we had a service provider donating resources to us try to reach out to let us know our service was expiring
19:33:28 <fungi> the address we gave to contact us is a dumping ground for backscatter from gerrit and other sources of e-mail
19:33:38 <fungi> which nobody looks at afaik
19:33:45 <fungi> it's on a webmail service
19:33:47 <jeblair> we look at it when we know we need to look there
19:33:59 <fungi> yeah
19:34:02 <fungi> i hopped on there today and it had almost 25k unread messages
19:34:24 <fungi> wondering whether anyone objects to splitting that up and not pointing service provider accounts at that
19:34:29 <mordred> we should probably have a root-spam@ and a root-important@
19:34:34 <nibalizer> ya
19:34:53 <nibalizer> i would read mail forwarded to me from a root-important@
19:34:59 <mordred> I think having a place that does not get cron or gerrit emails that we can point account signup things to would be great
19:35:05 <fungi> this is more or less what i was considering
19:35:32 <nibalizer> yes
19:35:34 <clarkb> mordred: +1
19:35:42 <yolanda> mordred ++
19:35:43 <pabelanger> agreed
19:35:45 <pleia2> and seems likely I'd have less of a bounce problem if I was just getting -important
19:35:50 <mordred> of course, we probably need the main root-important@ alias, and then probably more than one alias pointing to it - because at some places we need more than one email to sign up for more than one account
19:36:02 <pleia2> (my gmail usage means we turned it off entirely for me)
19:36:03 <fungi> we could i suppose have a "hidden" ml at lists.o.o and subscribe interested infra-root members (since they're the only ones who have access to the logins for those services)
19:36:12 <jeblair> pleia2: this is not the same thing as that
19:36:18 <pleia2> jeblair: oh
19:36:21 <pabelanger> fungi: I would be okay with that too
19:36:21 <nibalizer> mordred: +parts?
19:36:27 <jeblair> pleia2: that is mail to root from systems
19:36:28 <mordred> nibalizer: maybe so
19:36:33 <jeblair> pleia2: this is mail from external services
19:36:34 <fungi> yeah, this particular address currently does not go directly to any of us
19:36:43 <nibalizer> i love getting email from humans
19:36:44 <pleia2> jeblair: ah yes, we do have two email accounts
19:36:46 <fungi> it goes into a webmail account in rackspace and rots until someone goes spelunking
19:36:48 <nibalizer> i hate getting email from robots
19:36:55 <pleia2> er, config things
19:36:55 <mordred> I don't REALLY like human email
19:36:59 <mordred> but I'll deal with it
19:37:05 <jeblair> this would still pretty much all be from robots
19:37:11 <jeblair> just fewer of them
19:37:14 <rocky_g> mordred, ++
19:37:25 <fungi> some of my best friends are robots, after all
19:37:27 <nibalizer> well lets try it
19:37:28 <anteaya> hopefully saying important things
19:37:48 <nibalizer> move a couple accounts to the new addr and see if it seems like its working, then move the rest
19:37:56 <fungi> so the main question was hidden list in mailman vs aliases forwarder somewhere?
19:38:07 <jeblair> fungi: mlist sounds good; though as mordred mentions, we'll need at least 2.
19:38:21 <pabelanger> mailman+
19:38:28 <fungi> sure, that's really no harder than one ;)
19:38:30 <jeblair> (but one can obviously just be tied to the other)
19:39:20 <fungi> and the only reason i say "hidden" is so that the general population doesn't think it's a way to reach us for support
19:39:41 <anteaya> well also I think the archives should be hidden too
19:39:44 <jeblair> fungi: also, this will be a vector for compromising our cloud accounts
19:39:53 <anteaya> as the posts may contain credentials or other sensitive information
19:39:54 <nibalizer> jeblair: ah hrm
19:40:01 <jeblair> anteaya: so, not archived at all :)
19:40:07 <anteaya> even better
19:40:15 <fungi> jeblair: anteaya: correct
19:40:32 <fungi> and a very good point
19:40:51 <fungi> so now wondering if an ml alias somewhere more secure wouldn't be a better idea
19:41:01 <fungi> er, e-mail alias, non-ml
19:41:03 <jeblair> if we're worried about a mailman bug opening that up, better to stick with a redirect, though we don't have a great host for that.
19:41:44 <fungi> a very small vm?
19:41:50 <jeblair> might be able to just set up forwarding addresses (without being real accounts) on the foundation mail server, but i'm not positive, and that has limited access for us
19:42:07 <bkero> why not procmail?
19:42:16 <nibalizer> jeblair: i think that might be the way to do it
19:42:25 <nibalizer> just configure it with a rule to forward
19:42:45 <fungi> the current keys to the kingdom are on a server which we probably don't want processing inbound e-mail from the internet
19:42:54 <mordred> no. probably not
19:43:11 <fungi> so i'm feeling more and more like a separate very tiny vm would be more suitable, as much as i hate to suggest that
19:43:28 <anteaya> the best bad idea going it sounds like
19:43:50 <fungi> also stopping using @openstack.org for it means that we're reducing at least some of the risk exposure for compromising our accounts
19:44:00 <jeblair> mark my words -- we'll be running a cyrus server in no time.  :)
19:44:07 <fungi> somealias@somehost.openstack.org instead
19:44:10 <clarkb> would a simple forward not work?
19:44:24 <fungi> clarkb: simple forward where and how from what to what?
19:44:26 <dougwig> jeblair: *shudder*
19:44:26 <jeblair> clarkb: i think that is the idea under discussion.
19:44:27 <clarkb> or require all infra root to imap the existing account
19:44:39 <clarkb> fungi: from current account to each of our personal accounts
19:44:43 <clarkb> or invert it and pull via map
19:44:55 <fungi> that's an option i hadn't considered, though the existing account is a wasteland of terrible. best to cut our losses there
19:45:02 <jeblair> clarkb: oh, well the current account gets all of the backscatter from everything.
19:45:09 <fungi> option being imap the current account
19:45:20 <clarkb> ok filter on to infra-root or whatever
19:45:26 <clarkb> that isn't unsolvable aiui
19:45:30 <jeblair> to be clear, the current account is exactly what we already want -- it's just too much of what we want :)
19:45:30 <fungi> i don't know about setting up a forward from the current account. we *might* have that ability in the mailbox configuration
19:45:38 <jeblair> fungi: we do
19:45:47 <dougwig> i'd also suggest using plus addressing on the recipient, to make your personal filters trivial.
19:45:59 <nibalizer> the problem with that is not everything takes it
19:46:01 <jeblair> fungi: that's how my account was set up before it was deleted.
19:46:03 <fungi> okay, so that doesn't need someone at the foundation to fix for us then? just adding another alias would need assistance?
19:46:04 <nibalizer> which can be super frustrating
19:46:20 <jeblair> fungi: i'm not clear what's actually being proposed though
19:46:38 <fungi> jeblair: however, i still very much like the idea of an account on a server which someone who isn't us is less likely to delete
19:46:39 <jeblair> tell me what i'm missing from "forward all 1k messages/day to all of us"
19:46:51 <clarkb> jeblair: filter out the stuff that isn't to infra-root
19:47:00 <clarkb> aiui infra-root is just an alias for another thing that has been in use for forever?
19:47:13 <fungi> assuming we're not already using infra-root@o.o for other stuff
19:47:13 <jeblair> infra-root is the account; there are several aliases that point *to* it
19:47:20 <clarkb> jeblair: ah ok
19:47:34 <clarkb> is there not one alias that is significantly less noise than the others?
19:47:37 <jeblair> we generally have not signed up with infra-root
19:47:43 <jeblair> most of the mail goes to the other aliases
19:47:53 <nibalizer> so create root-important@o.o and fwd it to infra-root@o.o and configure the infra-root account to spray things that came in to root-important out to each of us
19:47:58 <jeblair> like 'gerrit@' and 'jenkins@' and 'openstackinfrastructurebot@' or whatever
19:48:04 <clarkb> jeblair: ya
19:48:08 <fungi> though that mail service does have some manipulation rules which could possibly be used to forward based on recipient address
19:48:17 <clarkb> so as long as we have one that isn't the spam addr, it should mostly work?
19:48:22 <fungi> i haven't looked closely at that feature
19:48:30 <mordred> honestly, I think even if the mail service does not have selective forwarding
19:48:32 <mordred> it's not a problem
19:48:38 <mordred> we all have the ability to filter email locally
19:48:44 <jeblair> (the reason we have multiple aliases pointing to the account is that the foundation pays per-account, but not per-address, and they wanted to keep costs down, so we collapsed them that way)
19:48:45 <mordred> and most of us pretty trivially
19:48:52 <nibalizer> yah
19:48:54 <fungi> that's still a lot of messages for all of us to bitbucket continually
19:49:01 <mordred> totally
19:49:13 <clarkb> fungi: mordred: if we did it server side into a folder then imap only that, it wouldn't be
19:49:23 <mordred> just saying - if we can't forward per recip - we can filter locally
19:49:45 <clarkb> put all mail addressed to important user foo in folder important, then everyone imap that
19:49:53 <clarkb> or forward just that folder, either way we should be able to make something work
19:49:58 <mordred> sure. it's email, one of the most robust systems on the planet. there are at least 1-billion ways to solve this. I support all of them
19:50:03 <fungi> i'm open to adding another imap box in my mutt config as long as someone is volunteering to fiddle with the filtering mechanism there
19:50:24 <jeblair> i can do so, but not this week
19:50:26 <fungi> ("there" meaning in the rackspace mail app stuff)
19:50:50 <fungi> yeah, i don't think this is crazy-urgent, but it is something i need to make sure we don't forget to do
19:51:08 <jeblair> if you want to action me to investigate that, i can
19:51:12 <dougwig> if it's imap, then you should definitely plus address, as that lets you pick the subfolder on the host, and not use filters at all.
19:51:22 <jeblair> and since we're not meeting next week, that gives me some time :)
19:51:24 <clarkb> dougwig: as nibalizer said you cannot rely on that
19:51:30 <clarkb> dougwig: its great when it works, terrible when it doesn't
19:51:37 <fungi> on a related note, we ought to get contact addresses for things like the providers' incident trackers coifigured to go into whatever solution we're coming up with
19:51:40 <jeblair> we should not sign up for anything that does not accept a plus address :)
19:51:41 <fungi> er, configured
19:51:46 <dougwig> eh, you control the destination MTA, so you absolutely can.
19:52:06 <clarkb> jeblair: I suppose that is also a valid stance
19:52:15 <clarkb> dougwig: its the side sending you email that often rejects valid email addresses
19:52:20 <fungi> because right now i think one of them sends updates to monty, one sends them to someone in foundation executive management, et cetera. it's all over the place
19:52:53 <anteaya> fungi: is this the last topic you wanted to get to today?
19:53:02 <jeblair> where we went wrong was ever signing up something that *didn't* go to monty
19:53:10 <dougwig> clarkb: *blinks* -- the sending mta has nothing to do with it; it's fully compliant.
19:53:13 <fungi> #action jeblair investigate mail filtering by recipient options for infra-root@o.o inbox
19:53:14 <jeblair> mordred is an excellent email filter/responder thing
19:53:25 <clarkb> dougwig: many services will reject email addresses with + in them
19:53:25 <fungi> anteaya: had one more quick one
19:53:31 <anteaya> fungi: k
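As a purely illustrative sketch of the filter-on-recipient / "imap only that folder" idea clarkb and jeblair discuss above, assuming Python's imaplib; the host, account and alias names are placeholders, not the real infra-root settings:

```python
#!/usr/bin/env python
# Sketch: copy mail addressed to an "important" alias out of the noisy
# shared inbox into a folder each root can subscribe to via imap.
# Host, account and alias names are placeholders.
import imaplib

HOST = "mail.example.org"
USER, PASSWORD = "infra-root@example.org", "REDACTED"
IMPORTANT_ALIAS = "root-important@example.org"

imap = imaplib.IMAP4_SSL(HOST)
imap.login(USER, PASSWORD)
imap.create("Important")   # returns NO (harmlessly) if it already exists
imap.select("INBOX")

# Match on the recipient header, not the (very noisy) senders.
status, data = imap.search(None, "TO", '"%s"' % IMPORTANT_ALIAS)
for num in data[0].split():
    # Leave the gerrit/cron backscatter behind in INBOX.
    imap.copy(num.decode(), "Important")

imap.logout()
```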
19:53:39 <fungi> #topic Summit session planning
19:53:48 <fungi> just some quick updates/links here for convenience
19:53:56 <fungi> as the summit draws ever closer
19:54:09 <jeblair> dougwig: we also don't control the receiving mta if we continue to use the foundation's mta (but if we spin up a server, we will, and could use + addressing with any character)
19:54:59 <fungi> pleia2 has already started a list of "infra-relevant sessions not in our track" at the bottom of the planning etherpad, so that might be a good place for people to add others and try to coordinate and make sure we get good coverage when there are conflicts
19:55:15 <fungi> #link https://etherpad.openstack.org/p/infra-newton-summit-planning add infra-relevant sessions at the bottom for coordination purposes
19:55:46 <fungi> in my e-mail announcement to the list with our finalized schedule, i also included a shorthand for some of the conflicts i spotted
19:55:55 <mordred> thank you!
19:56:00 <fungi> #link http://lists.openstack.org/pipermail/openstack-infra/2016-April/004162.html finalized session schedule with conflicts noted
19:56:08 <Zara> :) thank you
19:56:17 <eil397> thank you
19:56:37 <fungi> #link also pleia2 started our etherpads for the various sessions and i've linked them from the usual wiki page
19:56:46 <fungi> #link https://wiki.openstack.org/wiki/Design_Summit/Newton/Etherpads#Infrastructure summit sessions wiki links
19:56:52 <clarkb> they are also linked from the session schedule
19:56:54 <nibalizer> thanks pleia2
19:57:07 <anteaya> thank you pleia2 and fungi
19:57:23 <SotK> thanks pleia2, fungi
19:57:27 <pabelanger> looking forward to the collaboration
19:57:30 <fungi> i included utc time translations there for the benefit of people trying to follow along remotely in etherpads during/after sessions (or who, like me, keep personal time in utc when travelling)
19:57:54 <anteaya> thank you
19:58:00 <eil397> thanks pleia2
19:58:02 <fungi> clarkb: ahh, yep, i did also put the etherpads in the official schedule as you noted
19:58:02 * anteaya also uses utc time
19:58:11 <mordred> fungi: you should include a key that tells us what time drinking starts in UTC in the area
19:58:14 <fungi> #link https://www.openstack.org/summit/austin-2016/summit-schedule/global-search?t=Infrastructure%3A official summit schedule
19:58:21 <jeblair> if/when we split the developer summit from the marketing summit, we should have the developer summit schedule be in utc
19:58:23 <fungi> mordred: starts?
19:58:30 <mordred> fungi: good point
19:58:37 <anteaya> jeblair: agreed
19:58:46 <pleia2> ah, good idea re: utc
19:58:49 <mordred> fungi: for this summit, maybe we should make it a goal to do one-drink-per-session
19:58:50 <jeblair> as another shibboleth to let people know they're at the wrong event :)
19:58:57 <fungi> i'll try to backlink other appropriate metadata in the etherpad headers before the end of the week too, in preparation
19:59:03 <anteaya> jeblair: ha ha ha
19:59:21 <fungi> any summit questions in these last few seconds?
19:59:30 <fungi> we're at about 30 seconds remaining
19:59:31 <jeblair> 'hi, you showed up at 3am, you probably meant to attend this *other* event'
19:59:39 <docaedo> can we postpone for a week or two you think? hold out for better weather?
19:59:45 <fungi> better weather
19:59:52 <fungi> yes, prepare for a lot of wet
20:00:03 <fungi> i'm told inner tubes are a good travel accessory
20:00:12 <anteaya> woooo water
20:00:20 <fungi> and we're at time. thanks all, hope to see lots of you in austin!!!
20:00:24 <Zara> \o/
20:00:26 <fungi> #endmeeting