19:02:17 <fungi> #startmeeting infra
19:02:18 <openstack> Meeting started Tue Feb  9 19:02:17 2016 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:19 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:21 <openstack> The meeting name has been set to 'infra'
19:02:25 <fungi> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:26 <olaph> o/
19:02:32 <fungi> #topic Announcements
19:02:34 <rcarrillocruz> o/
19:02:39 <fungi> i didn't have anything specific for today, and noting that we've got the upcoming infra-cloud sprint discussed later as a general meeting topic anyway, as well as afs/mirror stuff
19:02:50 <fungi> oh, actually, there is one thing
19:03:26 <fungi> #info Matthew Wagoner (olaph) has volunteered as the Infra Team's Cross-Project Specs Liaison
19:03:44 <AJaeger> thanks, olaph !
19:03:45 <jeblair> olaph: thanks!
19:03:46 <cody-somerville> \o
19:03:47 <mordred> woot
19:03:50 <Zara> thanks! :)
19:03:54 <fungi> thanks olaph! let us know if there are infra implications on any upcoming cross-project specs we need to keep a closer eye on
19:04:05 <olaph> will do!
19:04:28 <fungi> #topic Actions from last meeting
19:04:30 <olaph> so far, it's been quiet
19:04:34 <fungi> #link http://eavesdrop.openstack.org/meetings/infra/2016/infra.2016-02-02-19.03.html
19:04:38 <fungi> there were none, all successful
19:04:46 <fungi> #topic Specs approval
19:04:53 <fungi> #info Approved "Unified Mirrors" specification
19:05:02 <fungi> #link http://specs.openstack.org/openstack-infra/infra-specs/specs/unified_mirrors.html
19:05:08 <fungi> (that link _should_ be correct once the post pipeline backlog clears)
19:05:27 <fungi> there are also some related spec updates proposed
19:05:32 <fungi> PROPOSED: Update unified mirror spec to support AFS (jeblair)
19:05:37 <fungi> #link https://review.openstack.org/273673
19:05:45 <jeblair> this is also an in-production spec
19:05:49 <mordred> it's a good proposal, given that it's what we implemented :)
19:05:49 <fungi> jeblair: this one's ready for a council vote?
19:05:58 <jeblair> so once we approve this, we will be caught up with reality :)
19:06:01 <fungi> yeah, let's rubber-stamp it this week
19:06:08 <fungi> #info Voting is open on "Update unified mirror spec to support AFS" until 19:00 UTC Thursday, February 11.
19:06:15 <fungi> PROPOSED: Maintenance change to doc publishing spec (jeblair)
19:06:24 <fungi> #link https://review.openstack.org/276481
19:06:37 <jeblair> this is dusting off the docs publishing spec
19:06:43 <fungi> looks like AJaeger is already on board
19:06:55 <jeblair> no changes to the actual proposal, just updates to the requirements / current situation
19:07:19 <annegentle_> yay AJaeger
19:07:25 <annegentle_> jeblair: thanks for addressing my qs
19:07:26 <jeblair> it should be reviewed for correctness (which AJaeger has -- and i pulled in a comment from annegentle_ on another spec into there too)
19:07:38 <jeblair> annegentle_: thank you! :)
19:07:42 <fungi> i guess no harm in putting it up for pending approval thursday in that case
19:07:42 <AJaeger> thanks, jeblair for updating!
19:07:56 <fungi> #info Voting is open on "Maintenance change to doc publishing spec" until 19:00 UTC Thursday, February 11.
19:08:12 <fungi> #topic Priority Efforts: Infra-cloud
19:08:19 <crinkle> hello
19:08:25 <fungi> hi crinkle!
19:08:30 <eil397> hello
19:08:43 <crinkle> nibalizer and I started working on getting the us west region into production and in nodepool
19:08:58 <crinkle> i want to be clear that it is still in 'get a terrible cloud up' stage
19:09:13 <crinkle> it's still kilo and has no HA
19:09:33 <crinkle> but I haven't heard any objections so far to moving forward so that's what we're doing
19:09:34 <mordred> sounds great
19:09:40 <clarkb> looks like we got the compute host and controller host site.pp updates in and the ansible group change. From a "run ansible to deploy via puppet" standpoint we are good, right?
19:09:42 <rcarrillocruz> have the ssl certificates and puppet bootstrapping been sorted out?
19:09:44 <fungi> crinkle: that makes it not entirely dissimilar from the other clouds we use or have used in the past (no offense to our esteemed sponsors/donors)
19:09:49 <clarkb> now it is just a matter of making a deployment we are happy with ?
19:10:01 <zaro> o/
19:10:13 <crinkle> rcarrillocruz: for now we're just using the certs I generated in testing
19:10:24 <nibalizer> ya so the next step is https://review.openstack.org/#/c/234535/11
19:10:25 <fungi> openstack in general is likely always in 'get a terrible cloud up' stage
19:10:36 <nibalizer> I've manually tested the steps that playbook will take
19:10:42 <nibalizer> so it's a land-and-go patch
19:11:00 <nibalizer> at which point infracloud (-west) at least will be managed by the infra team, formally
19:11:00 <cody-somerville> Do we plan to (eventually) get to point where we can say we're a "best-in-class" reference deployment?
19:11:20 <crinkle> cody-somerville: yes at some point we would like it to be better :)
19:11:27 <fungi> as for concerns (or lack thereof) over putting it into production when we know there's a major multi-day/week outage coming up for it, it's not _that_ hard for us to disable it in nodepool and enable it again once it's done
19:11:41 <nibalizer> fungi: my thoughts exactly
19:11:47 <crinkle> okay great
19:11:57 <cody-somerville> Shouldn't nodepool just deal? or is that just an optimization?
19:12:02 <jeblair> also, i promised to make a way to easily perform the disable/enable in nodepool.  it sounds like it's time for me to do that.
19:12:02 <fungi> cody-somerville: i would love to see it eventually be a model/reference deployment we can point others at as an example of how this can work
19:12:07 <anteaya> yeah it is just a patch to nodepool.yaml
19:12:22 <rcarrillocruz> fungi: so it's an iterate thing, rather than getting shiny before prod?
19:12:41 <rcarrillocruz> just trying to get an understanding of expectations from users
19:12:50 <fungi> cody-somerville: nodepool's scheduling/demand algorithm is optimized for providers actually being in a working state, and so we end up being a bit inefficient on allocations when a provider is completely broken
19:12:56 <jeblair> cody-somerville: if all of the nodes disappear while running jobs, nodepool/zuul may deal (depending on exactly the mechanism), but it may be somewhat disruptive.
19:13:00 <clarkb> but nodepool will continue to function
19:13:11 <jeblair> fungi: if it's _completely_ broken, it'll be okay
19:13:17 <fungi> fair point
19:13:21 <rcarrillocruz> ok
19:13:23 <jeblair> fungi: if it's partially broken, it's suboptimal
19:13:26 <fungi> if nodepool doesn't _realize_ it's broken... badness
19:13:34 * cody-somerville nods.
19:13:37 <jeblair> and if the nodes just disappear, zuul may restart the jobs
19:13:47 <jeblair> depending on exactly what jenkins does
19:14:18 <jeblair> but what we really want to do is to be able to say "we're taking this region down for planned maintenance, quietly run it down to 0"
19:14:51 <jeblair> which is what i promised to add to nodepool as a command, rather than a config change, so that we could eventually automate it and tie it into actual infra-cloud deployment playbooks
19:14:56 <fungi> and to that point, it's a simple one-line patch to a config file to do that and undo it again later
19:15:11 <fungi> but yeah, cli would be awesome
19:15:21 <jeblair> and the one-line-patch will work in the interim
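For reference, the "one-line patch" discussed above is a change to the provider's max-servers value in nodepool.yaml; a minimal sketch, assuming a hypothetical provider entry named infracloud-west (the real entry name and quota may differ):

    # nodepool.yaml excerpt -- dropping max-servers to 0 quietly stops nodepool
    # from booting new nodes in this region; restore the old value afterwards
    providers:
      - name: infracloud-west   # hypothetical provider name
        max-servers: 0          # previously some positive quota

The CLI command jeblair offered to write would toggle the same behaviour without a config change.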
19:15:29 <cody-somerville> Do we have any other organizations willing to donate HW yet?
19:15:44 <jeblair> cody-somerville: we decided not to ask until we got one up and running
19:15:48 <cody-somerville> kk
19:16:00 <nibalizer> so next steps are to get the playbooks in, get the credentials and definitions in, build a mirror, ??
19:16:17 <jeblair> nonetheless, people do still try to offer us hardware from time to time
19:16:21 <pabelanger> so, are we sticking to ubuntu 12.04 for bare metal or are we thinking of other OS support (eg: centos7)
19:16:30 <fungi> yeah, it seems like stretching the effort onto additional hardware beyond what we have, before what we have is in a semblance of the state we're looking for, will slow down our progress
19:16:49 <cody-somerville> Are we keeping any notes that might be useful to help improve deploy and operations manuals?
19:17:03 <clarkb> my completely biased opinion after spending much time with the git farm is that we should not bother with centos7
19:17:04 <crinkle> pabelanger: we are using 14.04 for the baremetal servers
19:17:10 <rcarrillocruz> cody-somerville: i believe yolanda has been improving some operations docs lately
19:17:26 <rcarrillocruz> and yeah, i agree we should put playbooks for maintenance/fix things along with docs
19:17:27 <pabelanger> crinkle: thanks, typo.
19:17:31 <cody-somerville> Sweet! :)
19:17:50 <fungi> and also it's resulting in a lot of configuration management/orchestration that can be reused or pointed to as examples
19:17:53 <clarkb> we will spend all our time fighting selinux and half the packages don't come from centos anyways
19:18:01 <cody-somerville> Is there any room for more help on this? Or do you folks have the right number of cooks in the kitchen for now?
19:18:15 <eil397> +1
19:18:22 <eil397> to this question
19:18:28 <crinkle> cody-somerville: we could use help reviewing topic:infra-cloud
19:18:34 <pabelanger> clarkb: my only thought about centos7, was to dogfood OpenStack on it
19:18:42 <cody-somerville> crinkle: Sweet. Will do.
19:18:46 <pabelanger> if people are willing to step up and do work on it
19:18:48 <anteaya> speaking of which, I'm having difficulty finding a starting point for reviewing
19:18:50 <jeblair> crinkle: ++
19:19:11 <mordred> pabelanger: I think we should wait until we get this one up and happy
19:19:12 <anteaya> I've tried several times and seem to spend a lot of time tracking patches to find the beginning
19:19:23 <mordred> pabelanger: then address that question at the same time as additional hardware donations
19:19:23 <anteaya> is there a beginning patch that I can start with?
19:19:31 <fungi> cody-somerville: yes, i think spreading the load out a bit so that there are more people implementing and reviewing would have benefit. right now it's mostly still the same people implementing, which leaves them little time for reviewing
19:19:35 <jeblair> anteaya: i rather think we're in the middle of things right now
19:19:40 <crinkle> anteaya: there isn't really a beginning anymore
19:19:43 <jeblair> anteaya: does reading http://specs.openstack.org/openstack-infra/infra-specs/specs/infra-cloud.html help provide any context?
19:19:55 * anteaya exhales
19:20:00 <anteaya> well I am trying to review
19:20:06 <clarkb> crinkle: nibalizer: do you think maybe we are at a point to have a concrete list of todo items and goals for ft collins?
19:20:06 <anteaya> and I am finding it difficult
19:20:17 <clarkb> eg implement ha db, then ha control plane etc?
19:20:20 <anteaya> clarkb: that would help
19:20:20 <pabelanger> mordred: Fair enough. Mostly checking to see if any discussions around that have happened already
19:20:21 <nibalizer> my big push recently has been to get us to a point where the -west region is managed just like any other infra service
19:20:33 <nibalizer> even if it's a bit janky
19:20:34 <clarkb> anteaya: ya thinking it may help others find corners to hack on
19:20:35 <pleia2> post all-the-conferences, I'm struggling to get caught up on other reviews but if there are specific things I should look at, I can work to pitch in
19:20:41 <clarkb> nibalizer: ++
19:20:44 <mordred> pabelanger: a bit - it was mostly that we decided to focus on running things like we run the rest of infra
19:20:52 <rcarrillocruz> clarkb: that'd be good, but i guess we will see in FC the working state on the infra-cloud
19:21:00 <mordred> pabelanger: and since we run the rest of infra on trusty except for the git farm where we can, we went that route
19:21:02 <rcarrillocruz> i believe we should def. talk about HA
19:21:02 <anteaya> clarkb: yes, thank you
19:21:06 <crinkle> clarkb: I started https://etherpad.openstack.org/p/mitaka-infra-midcycle and would love help refining that
19:21:07 <nibalizer> right now (it was worse last week) there are patches that can land, then not do anything, because there isn't sufficient plumbing
19:21:10 <jeblair> also http://docs.openstack.org/infra/system-config/infra-cloud.html should be helpful
19:21:26 <rcarrillocruz> thanks for the link crinkle
19:21:28 <mordred> pabelanger: since "run service" for infra is the primary goal, with 'dogfood openstack' only a secondary goal
19:21:41 <clarkb> #link https://etherpad.openstack.org/p/mitaka-infra-midcycle
19:21:47 <nibalizer> clarkb: nice
19:21:49 <clarkb> #link http://docs.openstack.org/infra/system-config/infra-cloud.html
19:21:52 <nibalizer> er crinkle
19:22:02 <pabelanger> mordred: understood
19:22:10 <jeblair> i like that etherpad
19:22:32 <nibalizer> ooh i can cross things off on this list
19:22:32 <nibalizer> woot
19:22:57 <fungi> that looks like an excellent outline/worklist
19:23:18 <jeblair> i'd like to become an effective infra-cloud reviewer by, say, lunch on monday.  and then review/merge lots of changes the rest of the week.  :)
19:23:45 <fungi> sounds like a pleasant way to spend a week
19:23:54 <mordred> jeblair: that might require learning more about how openstack works - is that a thing you're ready for? :)
19:24:00 <cody-somerville> Do we think we'll be ready to start tackling things like monitoring of the cloud by mid-cycle?
19:24:06 <mordred> jeblair: as in, are you planning to pack a bucket of xanax?
19:24:08 <fungi> i'm looking forward to having a good excuse to ignore everything else and focus on something specific for a change
19:24:10 <jeblair> mordred: if you can do it, i can. ;)
19:24:24 <anteaya> fungi: mid-cycles are good for that
19:24:27 <mordred> jeblair: you've met me, right? with all the crazy and the rage? where do you think that came from ...
19:24:36 <pleia2> haha
19:24:52 <fungi> mordred: was a quiet, sheepish lad... until the day he found openstack
19:25:11 <nibalizer> cody-somerville: i think step 1 is to get infracloud hosts into standard infra monitoring systems like cacti
19:25:14 <clarkb> mordred: you warn us now. I already sort of learned how openstack works
19:25:38 <nibalizer> cody-somerville: so if you wanted to write the patch for that, that would be great
19:25:55 <jeblair> nibalizer, cody-somerville: ++ it should be monitored from the start
19:25:56 <fungi> yep, that at least gets us trending on system load, disk space, et cetera
19:26:01 <crinkle> nibalizer: cody-somerville ++
19:26:02 <cody-somerville> The monasca folks are in FTC if we plan to dogfood that.
19:26:26 <fungi> that's the thing that's like ceilometer but isn't ceilometer?
19:26:32 <jeblair> i don't think that's in our spec
19:26:33 <rcarrillocruz> yeah
19:26:34 <clarkb> with the same API
19:26:47 <rcarrillocruz> with grafana integration
19:26:50 <rcarrillocruz> nrpe rules support
19:26:51 <rcarrillocruz> etc
19:27:00 <mordred> yah - I don't think we need that for now
19:27:01 <cody-somerville> monasca is more like nagios
19:27:03 <fungi> probably best to start with simple system metrics trending and then find out which reinvented openstack wheels we want to dogfood later on
19:27:05 <rcarrillocruz> i believe we should eventually look at options like that
19:27:10 <rcarrillocruz> but not in the mid-term
19:27:24 <rcarrillocruz> we pretty much have deployed things in the infra-cloud by doing a lot of hand-fixing
19:27:27 <mordred> it might be worth a 15 minute chat about it at the mid-cycle just so we're all up to speed on what it is and isn't
19:27:27 <rcarrillocruz> when we stabilize
19:27:31 <yolanda> hi, sorry , i'm late
19:27:32 <rcarrillocruz> and put real automation on everything
19:27:42 <rcarrillocruz> then we can move on to more stuff
19:27:44 <rcarrillocruz> my 2 cents
19:28:00 <crinkle> rcarrillocruz: ++
19:28:13 <cody-somerville> +1 to mordred'd idea
19:28:19 <fungi> basically, we have puppet preexisting for basic system statistics collected via snmp and trended in cacti. it's a few lines in system-config to turn that on
19:28:30 <rcarrillocruz> mordred: yeah, which is why i asked earlier the expectation of the users
19:28:34 <fungi> so that's easy to get going at the start
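To make the cacti/snmp point concrete: cacti polls each host over snmp, so bringing an infra-cloud host into trending is essentially "make snmpd answer the cacti server and list the host in the cacti configuration". A rough, generic sketch (not the actual system-config puppet; host name and community string are placeholders):

    # on the host to be monitored (ubuntu trusty)
    sudo apt-get install snmpd
    # allow the cacti server's address in /etc/snmp/snmpd.conf, restart snmpd,
    # then verify from the cacti server that the host responds:
    snmpwalk -v2c -c public controller00.example.org system
    # finally add the host to the cacti host list so graphs start populating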
19:28:40 <nibalizer> ssssaarrr
19:28:42 <nibalizer> aaa
19:28:50 <nibalizer> i type good
19:28:54 <mordred> rcarrillocruz: ah - only user expectation is "can run nodepool" :)
19:28:56 <bkero> sar? O_o
19:29:06 <rcarrillocruz> lol
19:29:13 <jeblair> only one user
19:29:19 <rcarrillocruz> s/nodepool/doom
19:29:20 <fungi> bkero: everyone loves sar, right?
19:29:22 <mordred> two if you count me personally
19:29:36 <mordred> since I'll clearly use it to serve all my warez
19:29:37 <bkero> fungi: Sure. When I think of multi-user long-term monitoring I think of sar.
19:29:53 <bkero> fungi: while we're at it we should establish that monitoring = dstat + netcat -l
19:29:53 <fungi> heh
19:30:16 <crinkle> I think we can probably move on from this topic
19:30:29 <cody-somerville> What about centralizing logging?
19:30:31 <fungi> thanks crinkle! excellent overview/catch-up on present state
19:30:33 <cody-somerville> We'll want that from start?
19:30:52 <fungi> cody-somerville: probably worth adding to the sprint agenda as a discussion point
19:30:55 <cody-somerville> Can we pipe that into the existing openstack-infra elk?
19:30:57 <cody-somerville> kk
19:31:11 <clarkb> no we cannot pipe it into existing infra elk
19:31:12 <fungi> the existing elk is doing all it can to keep up with some test log load
19:31:21 <clarkb> because of load and because it's public
19:31:28 <clarkb> and clouds have too many secrets they divulge
19:31:41 <cody-somerville> I thought we had two instances? One for tests and one for infra services?
19:31:42 <jeblair> cody-somerville: http://docs.openstack.org/infra/system-config/infra-cloud.html
19:31:46 <fungi> so centralized logging would likely be something dedicated, and also non-public because openstack is still pretty bad at hiding secrets from its logs
19:32:03 <jeblair> cody-somerville: that covers some of the questions you are asking now
19:32:35 <fungi> okay, moving on to the rest of the agenda. thanks again crinkle, nibalizer, yolanda, et al!
19:32:41 <nibalizer> also all the logs are already centralized, on the one controller
19:32:47 <rcarrillocruz> nibalizer: haha
19:32:51 <rcarrillocruz> fair point :D
19:32:54 <fungi> #topic Scheduling a Gerrit project rename batch maintenance
19:33:02 <fungi> we can hopefully keep this brief
19:33:36 <fungi> we punted on it last week because the weekend was rumored to have a battle between the panthers of carolina and the broncos of denver. apparently there was much bloodshed
19:34:05 <pleia2> poor kitties
19:34:06 <jeblair> the battle raged mere miles from me, so i had to dig a hole in my backyard for safety.
19:34:13 <fungi> SergeyLukjanov has a meeting conflict right now but expressed an availability to drive this on a weekday during pacific american times if desired
19:34:18 <pleia2> jeblair: nods
19:34:28 <jeblair> i put a fruit tree in it when i was done though
19:34:38 <anteaya> jeblair: glad you had a backyard in which to dig a hole
19:34:40 <fungi> however, as pointed out last week it's also our first rename since gerrit 2.11
19:34:44 <anteaya> nice use
19:34:46 <bkero> jeblair: I told you that getting the place with the cold war bunker wasn't a crazy idea
19:35:03 <fungi> so we'll likely want a few extra people on hand if things go badly for unforeseen reasons
19:35:34 <nibalizer> I'm... ishavailable
19:35:37 <anteaya> I'm traveling on the 13th and gone the rest of the month, available this week though
19:35:41 <fungi> should we shoot for something like 22:00 utc this friday, or are people busy with travel plans for colorado at that point?
19:35:44 <pleia2> I'm around now through sprint time
19:35:51 <pleia2> fungi: that wfm
19:36:02 <anteaya> fungi: friday works for me too
19:36:13 <nibalizer> I could do this friday
19:36:27 <fungi> to repeat what's on the list, openstack/ceilometer-specs is becoming openstack/telemetry-specs, and openstack/sahara-scenario is becoming openstack/sahara-tests
19:36:34 <clarkb> I can do friday
19:36:45 <fungi> okay, that seems like we have a few people available
19:36:56 <jeblair> friday++
19:37:05 <anteaya> I can wrangle the patches
19:37:07 <fungi> zaro: are you around too in case we find fun and exciting issues in gerrit this friday trying to rename a couple of projects?
19:37:14 <fungi> thanks anteaya!
19:37:18 <anteaya> welcome
19:37:27 <zaro> unfortunately i will be traveling
19:37:31 <zaro> or out of town as well
19:37:31 <fungi> anyone want to volunteer to send a maintenance notification?
19:37:41 <pleia2> fungi: sure
19:37:51 <pleia2> I'll take care of that post meeting
19:38:12 <nibalizer> pleia2: thanks
19:38:19 <fungi> zaro: do you have time this week to look back over our rename process and see if you spot anything you're aware of which might be an issue now that we're on 2.11?
19:38:31 <pleia2> I'll also confirm that SergeyLukjanov is ok with 22UTC
19:38:41 <zaro> fungi: can do
19:38:46 <fungi> zaro: thanks!
19:38:47 <pleia2> it is "pacific american time" but it's pretty late :)
19:39:26 <fungi> i'm fine doing it earlier than that too, but there are more devs interacting with our systems the earlier on friday you go
19:39:31 * pleia2 nods
19:40:05 <anteaya> pleia2: he is in pacific time
19:40:20 <pleia2> oh :)
19:40:25 <fungi> yeah, he's at mirantis's office in california i think?
19:40:30 <anteaya> yeah I didn't know either
19:40:34 <anteaya> sounds like it
19:40:45 <anteaya> there for a few months then relocating there I think?
19:41:01 <fungi> #info Gerrit will be offline 22:00-23:00 UTC Friday, February 12 for scheduled maintenance to perform project renames
19:41:02 <pleia2> neat, we should meet up
19:41:19 <fungi> okay, thanks everyone who's helping with that
19:41:44 <fungi> if gerrit is no different, then this will be a quick one (just two repos) but a good way to find out i guess
19:41:57 <anteaya> yup, glad you are giving us the hour
19:42:05 <fungi> #topic Infra-cloud Sprint is under two weeks away, any final logistical details? (pleia2)
19:42:09 <pleia2> so, hopefully we can keep this short, but I have a few things for this topic
19:42:16 <rcarrillocruz> hmm, yeah
19:42:19 <rcarrillocruz> first time in FC
19:42:20 * rcarrillocruz reds
19:42:23 * rcarrillocruz reads even
19:42:31 <fungi> my first time as well
19:42:42 <pleia2> jhesketh won't be able to attend, so I think I am the de facto point on organizing now since I've been working with the on-site HPE people
19:42:46 <nibalizer> everyone should make sure they bought a ticket to denver and not dallas
19:42:50 <fungi> i didn't work at hp long enough to get the road tour of offices
19:42:50 * jhesketh sadly cannot make it :-(
19:42:53 <rcarrillocruz> lol
19:42:54 <nibalizer> not saying i screwed that up, just make sure
19:43:02 <anteaya> jhesketh: oh no
19:43:04 <pleia2> does anyone have any questions as far as logistics that I should ask the HPE folks about?
19:43:10 <nibalizer> jhesketh: TT
19:43:12 <anteaya> jhesketh: I was looking forward to you being there
19:43:14 <jeblair> nibalizer: always good advice
19:43:24 <fungi> hah
19:43:34 <jhesketh> anteaya: me too!
19:43:38 <pleia2> our wiki page was copied from another, and says "Hotels provide shuttle bus service to the HP office. See front desk to reserve." which no one knows about, so they're going to look into it for us
19:43:48 <anteaya> nibalizer: glad you will be in the intended city
19:43:50 <pleia2> I'm not sure that's actually true :)
19:43:57 <cody-somerville> I put that there.
19:44:01 <cody-somerville> And it is true.
19:44:05 <cody-somerville> I wrote all of that stuff.
19:44:12 <pleia2> cody-somerville: oh ok, the admin organizing said she'd call the hotels and check
19:44:31 <fungi> pleia2: cody-somerville: thanks--i have not booked a rental car and figured i'd just hitch a ride on the hotel shuttle because of that comment on the wiki ;)
19:44:38 <pleia2> fungi: yeah, me too
19:44:44 <cody-somerville> I think the Courtyard is a new hotel listed so unsure about that one.
19:44:59 <cody-somerville> but I have to imagine they do shuttle as well
19:45:07 <cody-somerville> Intel and a bunch of other companies are all there.
19:45:12 <pleia2> cody-somerville: like a "we'll take you within 3 miles" kind of shuttle?
19:45:23 <cody-somerville> pleia2: Yup.
19:45:25 <fungi> #link https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint
19:45:27 <pleia2> ok, good to know
19:45:37 <fungi> just for the benefit of those reading the logs later
19:45:45 <jeblair> oh, 3 miles from your origin, not 3 miles from your destination :)
19:45:45 <pleia2> lunches will be catered, we're organizing that now with the listed dietary restrictions in mind
19:45:52 <cody-somerville> Cambria hotel & Suites has nicer rooms and restaurant than the Hilton FYI.
19:46:03 <anteaya> I do recommending getting a rental car
19:46:10 <pleia2> if anyone else has logistic questions for HPE, let me know and I'll ask :)
19:46:21 <olaph> and the train tracks run right by the hilton, which can suck at 4am...
19:46:22 <anteaya> as we are way out on the edge by the campus and downtown is at least 20 minutes drive
19:46:28 <fungi> thanks pleia2! i'll make sure to direct any your way
19:46:36 <pleia2> the other half of this agenda item:
19:46:41 <jeblair> olaph: also gtk
19:46:41 <pleia2> should we make some plans as to schedule for the week?
19:46:41 <anteaya> olaph: that is the downtown hilton, not the hilton garden inn
19:46:56 <anteaya> the hilton garden inn by the hpe campus has no train
19:47:05 <cody-somerville> olaph: You might be thinking of the hilton downtown. There is no train by the Hilton near the office.
19:47:06 <olaph> anteaya: downtown, by campus
19:47:06 <pleia2> I'd really really appreciate an informal presentation to kick the week off about our hardware and topology+tooling of the deployment
19:47:16 <anteaya> there are two hiltons
19:47:22 <anteaya> one has a train, one does not
19:47:29 <pleia2> I know it's all in the etherpad and reviews, etc etc, but it's a lot to get my head around (especially considering the rest of my workload)
19:47:33 <rcarrillocruz> pleia2: about the infra cloud to the FC folks?
19:47:35 <jeblair> pleia2: looks like that's anticipated at the top of https://etherpad.openstack.org/p/mitaka-infra-midcycle ?
19:47:40 <pleia2> rcarrillocruz: to me :)
19:47:46 <rcarrillocruz> ah :D
19:47:48 <clarkb> a blurb on the networking would probably be helpful
19:47:52 <jeblair> pleia2: 'least, that's how i read that... crinkle ?
19:47:53 <crinkle> pleia2: I can work on putting that together
19:47:57 <pleia2> jeblair: yeah, I'm hoping that's a proposed topic to discuss, not just "read this and you'll be all ready!"
19:48:15 <pleia2> crinkle: you rock, thanks
19:48:17 <fungi> network diagram scribbled on a whiteboard would be appreciated, yes
19:48:23 <jeblair> <cloud>
19:48:24 <mordred> pleia2: I expect to be able to ask uninformed questions when I arrive late and be chastised for not knowing
19:48:26 <fungi> thanks crinkle!
19:48:38 <pleia2> that's all from me on this topic
19:48:42 <pabelanger> anteaya: a quick google shows uber in fort collins too
19:48:43 <clarkb> mordred: are you ready for elton?
19:48:50 <anteaya> pabelanger: I've never used it
19:48:58 <mordred> clarkb: I do not believe I will experience sir john
19:48:59 <anteaya> I have no review to offer
19:49:06 <anteaya> I drive in fort collins
19:49:12 <fungi> pabelanger: did you get squared away as to whether we'll be able to add you to the list of attendees?
19:49:23 <pleia2> pabelanger is all set :)
19:49:25 <fungi> would love to have you participate
19:49:28 <fungi> awesome
19:49:29 <clarkb> do we want semi planned dinner plans?
19:49:36 <clarkb> or just free for all day of?
19:49:38 <pabelanger> fungi: pleia2: indeed. will be making the trip
19:49:41 <fungi> we can probably talk dinners off-meeting
19:49:42 <jeblair> i plan on eating dinner
19:49:42 <anteaya> clarkb: we can do that once we arrive
19:50:01 <fungi> #topic AFS for logs/docs (jeblair)
19:50:04 <anteaya> most places can accommodate a group with a few hours' notice
19:50:06 <fungi> you have 10 minutes
19:50:22 <jeblair> We are using AFS for real now and have learned some things.
19:50:22 <jeblair> Are we ready to seriously consider it for logs/docs? Any other prereqs?
19:50:39 <jeblair> i have specs up for both of those topics...
19:50:43 <jhesketh> What have we learned?
19:50:57 <fungi> well, that afs is awesome, for one
19:51:17 <jeblair> jhesketh: that serving data from caches is quite fast, as fast as local apache
19:51:18 <mordred> yah. it's working quite pleasingly well
19:51:28 <pabelanger> \o/
19:51:47 <fungi> also the cache size seems to be reasonable for the things we've done so far
19:51:53 <clarkb> 50GB right?
19:52:02 <mordred> initial publication of a large read-write volume to the read-only replicas is not fast - but subsequent publications run at reasonable rates
19:52:02 <jeblair> jhesketh: that read-only replication cross-data centers is slower than we'd like, but as long as our read-only volumes aren't radically changing, it's okay
19:52:06 <mordred> heh
19:52:08 <jeblair> mordred: :)
19:52:09 <fungi> and simple to expand if we're using lvm for the cache location
19:52:30 <jhesketh> How does afs save us from having huge volumes?
19:52:53 <fungi> we can shard afs and present it as one common file tree
19:52:56 <fungi> as one option
19:53:01 <mordred> jhesketh: we can spread AFS volumes across multiple file servers ... ^^ that
19:53:43 <jhesketh> Ah cool, didn't know it sharded. Are we currently doing that?
19:54:10 <mordred> well - to be clear - it doesn't shard in the way that ceph shards
19:54:23 <fungi> the current space limitation on static.o.o is mostly a function of maximum cinder volume size in rackspace multiplied by maximum number of vdb devices we can attach to a xen domu for the version rackspace is running
19:54:24 <mordred> it shards in that you can create volumes across many fileservers
19:54:34 <jeblair> it's like, if we lose a server or partition, we would lose a part of the directory tree, but not the whole thing
19:54:36 <mordred> and then you can mount those volumes into contiguous locations in the AFS tree
19:54:36 <fungi> right, i used the term shard loosely
19:54:50 <mordred> so we currently have multiple volumes
19:55:25 <fungi> you could say that in a way we're already doing it, insofar as afs is a _global_ filesystem and we're just one part of the global afs file tree
19:55:34 <jeblair> (this applies to read-write volumes, which we would use for logs; for docs, we can put them on read-only volumes and then we would be fault-tolerant)
19:55:35 <mordred> but they are all on the same fileservers - we have mirror.ubuntu mirror.pypi mirror.wheel.trustyx64 mirror.npm and mirror.git
19:55:46 <jhesketh> So is there any duplication?
19:55:50 <mordred> jhesketh: there can be
19:56:25 <mordred> jhesketh: we have read-only replicas of our mirror volumes currently
19:56:30 <jhesketh> Perhaps we should turn that on first to see if there are any side effects or significant performance issues?
19:56:47 <mordred> jhesketh: each of our mirror volumes has a rw volume and 2 read only replicas
19:57:05 <fungi> i'll #link the specs jeblair mentioned on the agenda for reference
19:57:10 <fungi> #link https://review.openstack.org/269928
19:57:18 <fungi> #link https://review.openstack.org/276482
19:57:25 <mordred> the publication process is to write to the read/write volume and then run vos release which pushes new copies out to the read-only replicas
19:57:30 <mordred> this works amazingly well for mirrors
19:57:34 <jhesketh> Can we restore from read only replicas?
19:57:34 <mordred> and is also a nice thing for docs
19:57:36 <mordred> yes
19:57:49 <mordred> you can promote a read-only volume to take over as the read-write volume
19:58:22 <jeblair> so with only a few mins left -- how should we proceed?  continue this conversation in channel/ml/spec reviews/...?
19:58:22 <jhesketh> Okay sounds useful
19:58:29 <fungi> which gets to why we have more than one, and put them in different locations/networks
19:58:45 <mordred> of course, one of my favorite things is that I can do "ls /afs/openstack.org/mirror/ubuntu" on my local machine :)
19:58:54 <fungi> i am in favor of moving discussion to the specs and, if necessary, irc/ml
19:58:59 <mordred> jeblair: I agree with fungi
19:59:17 <jhesketh> works for me
19:59:21 <jeblair> cool
19:59:25 <jeblair> thanks!
19:59:26 <fungi> thanks jeblair!
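To summarize the AFS publication flow described in this topic in command form (a sketch using standard OpenAFS tooling; mirror.ubuntu is one of the volume names mordred listed, and the paths follow the usual read-write/read-only cell convention):

    # jobs write new content into the read-write volume via the read-write
    # mount of the cell, e.g. under /afs/.openstack.org/mirror/ubuntu
    # once the update is complete, push it out to the read-only replicas:
    vos release mirror.ubuntu
    # clients reading the read-only path, /afs/openstack.org/mirror/ubuntu,
    # only see the new contents after the release finishes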
19:59:32 <fungi> #topic Open discussion
19:59:37 <fungi> you have 30 seconds ;)
19:59:43 <eil397> : - )
19:59:56 <fungi> riveting
20:00:04 <jhesketh> Should we do a post mortem of swift logs as part of the afs spec or discussion?
20:00:16 <fungi> would be useful, yes
20:00:18 <fungi> we're out of time--thanks everyone!
20:00:21 <fungi> #endmeeting