19:02:22 <jeblair> #startmeeting infra
19:02:23 <openstack> Meeting started Tue Jun 23 19:02:22 2015 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:26 <openstack> The meeting name has been set to 'infra'
19:02:27 <mrmartin> o/
19:02:30 <jeblair> #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:31 <jeblair> #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-06-16-19.02.html
19:02:36 <jeblair> #topic Announcements
19:02:58 <hogepodge> o/
19:03:01 <jeblair> i have some announcements regarding core group changes
19:03:08 <jeblair> #info adding greghaynes to glean-core
19:03:11 <fungi> exciting!
19:03:17 <jeblair> greghaynes is basically the co-author of glean; i think we're all kind of hoping it doesn't actually get any more patches
19:03:23 <jeblair> but in case someone wants to add a new test, it will be nice for greg to be able to say "yes"
19:03:27 <mordred> o/
19:03:30 <jeblair> or "no" if someone wants to add a new feature
19:03:32 <jeblair> ;)
19:03:38 <mordred> NO
19:03:44 <mordred> just practicing
19:03:48 <fungi> or if the stars align and we get to delete some code from it
19:03:49 <jeblair> mordred: thank you for demonstrating
19:04:04 <jeblair> fungi: a world of possibilities i didn't even consider!
19:04:19 <jeblair> #info adding nibalizer, yolanda to infra-puppet core
19:04:25 <jeblair> just as soon as i update my script to set up that group correctly
19:04:31 <jeblair> they have been doing great work on these puppet modules and i've been heavily weighing their input for a while
19:04:56 <crinkle> yay!
19:05:03 <jeblair> try not to break everything, at least until we get the func testing in place... then i guess it's okay ;)
19:05:10 <nibalizer> awesome, thanks
19:05:32 <jeblair> #topic Specs approval
19:05:35 <fungi> yep, i can't wait to do fewer reviews of those repos ;)
19:05:38 <SpamapS> o/
19:05:40 <pleia2> hehe
19:05:44 <jeblair> first some specs that we should work on reviewing this week to get them ready for final approval
19:05:51 <jeblair> #link ci watch spec: https://review.openstack.org/192253
19:05:54 <jeblair> sdague has asked that we try to get agreement on this quickly; he is out next week, so if we can review/iterate on this during the week, and get it on the schedule for next week, that would be good
19:05:55 <greghaynes> oo yay im getting core
19:06:13 <fungi> greghaynes: don't spend it all in one place
19:06:20 <greghaynes> :)
19:06:22 <sdague> o/
19:07:01 <sdague> jeblair: also, a question in there about technology choices, because I think a couple of folks would start hacking on prototypes if we knew where preferences were
19:07:16 * tchaypo waves
19:07:30 <jeblair> cool; i have not had a chance to look at it yet, but hope to today
19:07:39 <sdague> ok
19:08:06 <jeblair> there are a number of other specs in the queue that are probably getting close
19:08:15 <jeblair> #link stackalytics spec: https://review.openstack.org/187715
19:08:42 <fungi> i think the refstack crowd were also at the point where they're looking for final feedback
19:08:47 <jeblair> that one seems to be getting into shape; i'm a little concerned that we're not hearing as much from mirantis as i hoped
19:09:11 <jeblair> maybe SergeyLukjanov is busy?
19:09:21 <fungi> ahh, yeah SergeyLukjanov was going to check back in with mirantis marketing
19:09:34 <hogepodge> o/
19:09:46 <hogepodge> yup, we'd like to move forward on the spec as soon as we can
19:10:04 <jeblair> hogepodge: stackalytics or refstack?
19:10:17 * fungi hopes the answer is "both!"
19:10:23 <hogepodge> ok, refstack, I jumped ahead.
19:10:29 <jeblair> ok, it's up next
19:11:37 <jeblair> anyway, we can proceed with _hosting_ stackalytics at stackalytics.o.o if we want; i'd prefer to have mirantis folks on-board though
19:12:14 <jeblair> should we try to ping SergeyLukjanov this week, and if he's too busy, find another contact?
19:12:36 <fungi> that seems reasonable
19:12:38 <greghaynes> Yea - they seemed like they had some requests for its usage there so they definitely need to be on board
19:12:47 <greghaynes> ++
19:13:03 <fungi> perhaps one or more of the current stackalytics core reviewers would be good contacts on this
19:13:16 <jeblair> ya
19:13:35 <jeblair> pabelanger: let's try to track them down
19:14:01 <jeblair> anything else before we move on to approvals?
19:14:16 <pabelanger> jeblair, sounds good
19:14:25 <jeblair> #topic Specs approval: RefStack Site Hosting (hogepodge, davidlenwell)
19:14:33 <hogepodge> o/
19:14:33 <jeblair> #link refstack site hosting spec https://review.openstack.org/188207
19:15:01 <jeblair> this seems to be ready for a vote yeah?
19:15:19 <gothicmindfood> is there/win 22
19:15:22 <pleia2> I had a browse yesterday afternoon, looking good
19:15:27 <jeblair> some late minor revisions yesterday, but i don't think it's changed greatly in a while
19:15:42 * gothicmindfood "whoopses"
19:15:47 <hogepodge> we had discussed whether to split api and ui across two domains
19:16:14 <fungi> i think that can still happen later if it turns out to be needed
19:16:18 <hogepodge> yesterday the team voted to use only one, in part to ease transition to refstack.openstack.org if that was in the cards for the future
19:16:57 <fungi> co-hosting them on one server is probably simplest to start, and renders the question of how we'll tackle two https vhosts on one server moot
19:17:24 <jeblair> yeah, i think we can accomplish two if needed, but it should be done with some thought
19:17:47 <jeblair> so if we don't need it, sounds good to me
19:17:58 <jeblair> any concerns or comments, or should we open it for voting?
19:18:03 <davidlenwell> we're flexible.. wanted to make it easier for you guys
19:18:14 <fungi> er, co-hosting them in one vhost/at one dns name
19:18:38 <fungi> i have no objections
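The single-domain approach the team settled on above might look roughly like the following in Apache terms. This is an illustrative sketch only: the paths, the backend port, and the `/api` prefix are assumptions for demonstration, not the deployed refstack configuration.

```apache
# Hypothetical single-vhost layout for refstack.openstack.org
# (docroot, backend port, and path prefix are illustrative assumptions)
<VirtualHost *:443>
    ServerName refstack.openstack.org
    SSLEngine on

    # Static UI served straight from the docroot
    DocumentRoot /var/www/refstack-ui

    # API co-hosted under a path prefix instead of a second https vhost,
    # sidestepping the two-certificates-on-one-server question
    ProxyPass        /api http://127.0.0.1:8000/api
    ProxyPassReverse /api http://127.0.0.1:8000/api
</VirtualHost>
```

Splitting the api and ui across two domains later would then be a matter of moving the proxy stanza into its own vhost.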
19:19:02 <jeblair> #info refstack site hosting spec voting open until 2015-06-25 19:00 UTC
19:19:12 <jeblair> #topic Schedule Project Renames
19:19:26 <jeblair> we have one; i think we should wait for more :)
19:19:27 <fungi> so soon? seems like we _just_ did this... ;)
19:19:56 <Shrews> fungi: you love it
19:20:06 <fungi> morganfainberg: didn't keystone have one coming up for rename too?
19:20:16 <fungi> maybe we can batch them once that's confirmed
19:20:32 * mordred hands fungi a wetter-than-normal +2 aardvark of renaming
19:20:58 <jeblair> #agreed wait for more renames and batch
19:21:00 <jeblair> #topic Priority Efforts (Migration to Zanata)
19:21:04 * fungi wonders what the critical hit roll is for that aardvark
19:21:22 <pleia2> so I'm stopped at how we handle infra-controlled service accounts in openstackid
19:21:46 <pleia2> I'll stash auth data in hiera, but we need an account associated with these kinds of things for openstackid itself
19:22:17 <fungi> pleia2: after i saw the details, that this is just for authenticating a bot to push stuff into zanata, i think the easy way out (an account for hostmaster@o.o or something) is likely fine
19:22:47 <mrmartin> fungi: we have a special role field for the account, we can set to a custom value too
19:23:01 <fungi> mrmartin: oh, that's even better
19:23:20 <pleia2> in order to get an id, we need to do https://www.openstack.org/join/register
19:23:34 <pleia2> but it's a bot
19:23:50 <jeblair> fungi, pleia2: the infra-root@openstack account has a number of aliases, we can add 'translate@' if we want
19:23:55 <mrmartin> we have the 'Group' table for that, and a 'Group_Members' table to assign groups to accounts
19:24:11 <pleia2> jeblair: wfm
19:24:21 <jeblair> mrmartin: so should we start by registering through the web like a normal user, or should we create it entirely behind-the-scenes?
19:24:29 <mrmartin> jeblair: exactly
19:24:31 <fungi> jeblair: that seems safer, agreed. reusing the same one for multiple services leads to trust boundary issues
19:24:39 <pleia2> mrmartin: er, which?
19:24:39 <jeblair> mrmartin: er, which one?
19:24:44 <mrmartin> and add the Group manually both for openstackid-dev and openstack.o.o
19:25:06 <mrmartin> and I guess on the admin interface - I saw it once - you can assign the custom role to the manually registered profile
19:25:09 <pleia2> ok, so register like a normal user, and then make some behind the scenes tweaks
19:25:13 <mrmartin> yeap
19:25:37 <jeblair> #action jeblair set up translate@o.o infra-root alias
19:25:40 <pleia2> ok, so let's set up a translate@ alias and then I'll sign up with that
19:25:41 <fungi> oh, admin interface. i don't think i've got access to the admin interface
19:25:46 <mrmartin> and if we set the group assignment properly, we can filter out who is a human and who is a bot
19:25:47 <mrmartin> :)
19:25:47 <fungi> didn't realize it had one ;)
19:26:04 <jeblair> #action pleia2 sign up translate@ openstackid account
19:26:47 <mrmartin> fungi: I guess the admin was integrated into openstack.org originally
19:26:55 <pleia2> aside from that, I fixed a restart bug in our puppet module and StevenK's zanata scripts have landed (just need account to hook into), so we're on track to deliver testing version to the translators in the beginning of july (probably after the 4th)
19:26:58 <fungi> mrmartin: makes sense
19:27:20 <jeblair> #action fungi mrmartin pleia2 investigate "admin interface" for openstack id, get service account group assigned somehow
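The group-based bot/human distinction mrmartin describes could be queried roughly as below. Note this is a sketch against a made-up schema: the `Member`, `Group`, and `Group_Members` table names come from the discussion, but the columns and the `service-accounts` group code are assumptions, not the actual openstackid database layout.

```python
import sqlite3

# In-memory stand-in for the openstackid database; column names and the
# 'service-accounts' group code are illustrative assumptions.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE Member (id INTEGER PRIMARY KEY, email TEXT);
CREATE TABLE "Group" (id INTEGER PRIMARY KEY, code TEXT);
CREATE TABLE Group_Members (group_id INTEGER, member_id INTEGER);
INSERT INTO Member VALUES (1, 'translate@openstack.org'), (2, 'human@example.org');
INSERT INTO "Group" VALUES (10, 'service-accounts');
INSERT INTO Group_Members VALUES (10, 1);
""")

# Accounts flagged as bots purely via group membership
cur.execute("""
SELECT m.email FROM Member m
JOIN Group_Members gm ON gm.member_id = m.id
JOIN "Group" g ON g.id = gm.group_id
WHERE g.code = 'service-accounts'
""")
bots = [row[0] for row in cur.fetchall()]
print(bots)
```

With a group like this in place, a bot registered through the normal web form (as agreed above) only needs a single behind-the-scenes group assignment to be distinguishable from human accounts.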
19:27:32 <asselin> o/
19:27:39 <jeblair> pleia2: let's celebrate with fireworks!
19:27:44 <pleia2> :D
19:28:06 <fungi> it's that time of year where tourists on vacation are setting off fireworks every night here. starting to get on my nerves
19:28:28 <fungi> let's celebrate with beer ;)
19:28:37 <mtreinish> fungi: you should retaliate with your own explosives
19:28:38 <mrmartin> we don't have fireworks here, so we can exchange for a week
19:28:51 <mrmartin> airbnb.openstack.org
19:28:56 <fungi> hah
19:28:57 <mrmartin> sorry
19:29:06 <jeblair> nice :)
19:29:07 <jeblair> #topic Hosting for Manila service image (u_glide, bswartz)
19:29:22 <jeblair> oh wait
19:29:26 <jeblair> this was left over from last time wasn't it
19:29:27 <fungi> this was from last week
19:29:34 <jeblair> #topic Open discussion
19:29:38 <fungi> i think they have a way forward now
19:29:42 <jeblair> yep
19:29:59 <mrmartin> fungi: do you have some info about this resource-server split-out from openstackid?
19:31:06 <fungi> mrmartin: there's an open review smarcet proposed
19:31:12 <fungi> #link https://review.openstack.org/#/c/178853/
19:31:13 <mrmartin> I guess we need to set up a separate instance to serve this new endpoint, maybe with all the things, including SSL, etc.
19:31:19 <mordred> jeblair: greghaynes and SpamapS and I have a phone call on thursday with HP humans about the networking for infra-cloud
19:31:24 <fungi> right, that would be next
19:31:27 <greghaynes> mordred: horray
19:31:34 <mrmartin> do we need a spec for that?
19:32:02 <fungi> mrmartin: probably not unless it needs a whole separate puppet module, but i'll defer to jeblair on that
19:32:26 <SpamapS> jeblair: we had talked about discussing hand-off of things like the infra-cloud servers from non-root to infra-root..
19:32:35 <fungi> mrmartin: the way i see it, there's code which is running on openstackid.org which we'd rather not run there, so it's moving to a second server using the same basic framework
19:33:01 <fungi> mrmartin: and it already has a git repo, and the existing puppet module can probably just grow a class for the resource server
19:33:12 <mrmartin> fungi: yeap, I did a test on the openstackid code with the removed resource server things, and it was working
19:33:29 <mrmartin> https://review.openstack.org/178854
19:33:40 <pabelanger> I think grafana.o.o is almost ready to be stood up. Could use some guidance on the current state of the puppet module. https://review.openstack.org/#/c/179208/
19:33:54 <pabelanger> currently the only manual task right now is to create an api_key in admin GUI
19:33:56 <fungi> mrmartin: awesome. i missed that one
19:33:58 <pabelanger> for grafyaml
19:34:15 <pabelanger> can't figure out hashing of key in database ATM :(
19:34:24 <pabelanger> so, manual injection won't work right now
19:34:27 <jeblair> fungi, mrmartin: agreed
19:34:49 <jeblair> SpamapS: yeah, let's talk about the infra-cloud thing for a minute
19:35:07 <jeblair> SpamapS: what needs to be handed off?
19:35:46 <greghaynes> jeblair: I think the question is are we going to have the infra rooters do a full redeploy before it goes 'live'
19:36:02 <SpamapS> jeblair: what he said
19:36:57 <jeblair> redeploy of the initial host?
19:37:01 <jeblair> (or hosts?)
19:37:02 <SpamapS> jeblair: if its fine to just hand it to infra-root with a local hiera with 'nopassword' in all the secret slots, I'm fine w/ that too. Just not sure how pedantic we want to be about privileges given that these aren't vms you can burn easily.
19:38:29 <fungi> how smooth (and fast) is a full redeploy at this point?
19:38:40 <SpamapS> not enough data
19:39:04 <jeblair> i still don't know what we're talking about here, sorry.
19:39:09 <pleia2> can we stash ongoing documentation for the work you're doing somewhere (even if it's an etherpad?)
19:39:20 <SpamapS> I can probably re-deploy the initial hardware-cloud node with a bare OS in 30 minutes. No idea how long it will take to morph it into a working cloud though.
19:39:24 <pleia2> it's not clear to me what specific moving parts are involved here
19:39:44 <jeblair> let's just see if we can manage to get on the same page talking to each other in irc first :)
19:39:45 <greghaynes> jeblair: the concern is if we stand up infra-cloud, then just swap keys to the infra-rooter key, theres no guarantee of whether or not we have access or what code we put on those boxes
19:40:05 <jeblair> greghaynes: right, i'm just not sure exactly what things we're talking about
19:40:14 <jeblair> we have no puppet manifests or anything written yet
19:40:30 <jeblair> are we talking about what's needed to stand up the initial server, or the cloud itself, or what?
19:40:39 <fungi> so as a starting point... there's a (maybe multiple?) bastion host accessible from the internet via ssh
19:40:48 <fungi> sounded like two?
19:40:53 <jeblair> i think we were talking about 2 yes
19:41:34 <SpamapS> jeblair: I'm talking about once we write those puppet manifests, and beat the cloud into submission, do you want to re-deploy the whole thing from scratch with only infra-root's credentials?
19:41:39 <jeblair> though actually https://review.openstack.org/180796 says one
19:41:55 <SpamapS> fungi: we have one, the plan in the WIP docs is to have two bastions.
19:42:03 <SpamapS> oh doh
19:42:09 <SpamapS> well we'll fix that. :)
19:42:14 <jeblair> SpamapS: ah yes, absolutely.  we want to be able to regularly redeploy the cloud
19:42:37 <jeblair> so i imagine that will involve any of the credentials for that being in our normal secret hiera file
19:42:49 <jeblair> and we run a script to do the deploy
19:42:56 <fungi> okay, so that bastion exists today and has an operating system on it now and is able to reach a management network for all the rest of the hardware?
19:43:02 <jeblair> so i guess for hand-off, we can just change all those credentials
19:43:03 <greghaynes> fungi: yes
19:43:14 <SpamapS> Ok, so that means that what we really want is to have the baremetal cloud deploy a copy of itself, and then those two would be the bastions (with only one having all nodes enrolled and used to deploy the whole new cloud).
19:44:05 <fungi> okay, so a pair of all-in-one control plane "clouds" which manipulate the remaining hardware as ironic instances
19:44:11 <mordred> yes
19:44:20 <jeblair> how does a baremetal cloud deploy a copy of itself?
19:44:46 <mordred> I think "copy of itself" is a misnomer
19:44:59 <greghaynes> bare-metal-mitosis
19:44:59 <mordred> I think there are 2 bare metal machines that are networked to the management network
19:45:12 <SpamapS> jeblair: it deploys a bare box, and then we deploy a copy of it using our tools. :)
19:45:16 <mordred> each of them can deploy operating systems to bare metal machines
19:45:25 <mordred> and on those operating systems, we can run puppet
19:45:38 <mordred> so, each of them can deploy an operating system to the other as well
19:46:01 <jeblair> mordred: ah, i see.
19:46:03 <mordred> which means if we need to blow away and re-do either of them, we can use the other to accomplish the task
19:46:18 <fungi> and one of the systems which we could deploy to a bare metal instance is... another all-in-one control plane
19:46:23 <mordred> yes
19:46:35 <fungi> okay, this is starting to make some sense to my poor noodle
19:46:53 <jeblair> SpamapS: can you update https://review.openstack.org/180796 to describe this?
19:47:19 <jeblair> we should really try to land that soon too
19:47:27 <mordred> right now, SpamapS and greghaynes have logins and root on those machines
19:47:42 <SpamapS> yeah I've been letting it languish as I get my hands dirty in a few of the early tasks. :-P
19:47:48 <SpamapS> and lo, there must be changes. :)
19:47:59 <jeblair> yeah, it doesn't need to land perfect
19:48:00 <mordred> once we're happy with the puppet for those machines, I believe we'll want to have infra root redeploy those machines using puppet _without_ SpamapS and greghaynes keys on them
19:48:11 <jeblair> let's try to land our plan, and then make changes to it; it's easier to patch that way :)
19:48:13 <SpamapS> mordred: agreed
19:48:26 <jeblair> mordred: yup
19:48:35 <mordred> and at that point, the next steps will largely be about using the cloud APIs of those clouds to operate the next level - which can have a similar process to go through
19:49:11 <mordred> but root should no longer be needed on the all-in-ones
19:49:19 <mordred> also - to be clear - we'll ultimately have 4 of these
19:49:23 <mordred> 2 per data center
19:49:32 <SpamapS> right
19:49:34 <fungi> though also, convincing greghaynes and SpamapS that they _want_ us to put their keys back on there so that they can help us manage those should stay on the table as an option ;)
19:49:40 <mordred> fungi: :)
19:49:44 <SpamapS> ok, all that will go into next patchset of the docs proposal
19:49:49 <greghaynes> no take backs
19:49:49 <jeblair> w00t
19:49:59 <pleia2> hehe
19:50:38 <mordred> SpamapS: crinkle had a great point in her review I really liked
19:50:49 <mordred> SpamapS: which is that each of those bullet points should have a why
19:51:02 <fungi> and yes, i agree a full redeploy of everything, if for no reason other than to validate the puppet and documentation, is a necessary step
19:51:08 <mordred> SpamapS: which is likely SUPER useful so that we can remember why we decided that :)
19:51:09 <pleia2> mordred: ++
19:51:54 <jeblair> end of infra-cloud topic?
19:51:58 <crinkle> :)
19:52:09 <greghaynes> Yep, I think that is good enough for near-term
19:52:15 <mordred> \o/
19:52:17 <jeblair> pabelanger: back to your question about grafana: can the api key just be a random string?  if so, we can just use openssl to generate it and put it in hiera.
19:52:32 <SpamapS> crinkle: excellent point.
19:52:46 <jeblair> pabelanger: we have similarly 'pre-generated password' items for gerrit, etc.
19:53:03 <pabelanger> jeblair, issue is grafana does a custom hash / salt of the key.  No way to do it externally ATM
19:53:31 <fungi> hopefully i'll have a first stab at using bindep to pre-cache distro packages uploaded for review
19:53:36 <fungi> sometime later today
19:53:41 <fungi> if stuff will stop breaking
19:54:10 <jeblair> pabelanger: hrm.  it would be great if we didn't need a manual two-step installation process for grafana
19:54:37 <jeblair> timrc: ^ maybe you could work with pabelanger on this?
19:54:39 <pabelanger> jeblair, agreed.  If people can help decode the hash / salt method, we could inject into database for now
19:54:49 <fungi> pabelanger: is there a tool we can use to generate that password?
19:54:51 <pabelanger> also created: https://github.com/grafana/grafana/issues/2218 just now
19:55:11 <pabelanger> fungi, not that I know of.  However, we could request upstream for it
19:55:24 <fungi> pabelanger: is there an example?
19:55:35 <timrc> jeblair, Hm yeah.  I think we just generated a random string and put it in heira, pabelanger
19:55:45 <jeblair> yeah, that could be a solution, if grafana could supply a tool to perform the hashing
19:55:57 <pabelanger> timrc, how do you get it into DB?
19:56:13 <timrc> pabelanger, I think we specified it as a config option.  Let me go see.
19:57:02 <pabelanger> timrc, possible I overlooked something
19:58:40 <timrc> pabelanger, jeblair http://paste.openstack.org/show/317511/
19:59:12 <jeblair> do we have a puppet-grafana repo yet?
19:59:14 <timrc> pabelanger, jeblair Yeah we just created a random hash, threw it in hiera, and passed it down as a param which eventually found its way into the security block of the configuration class.
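The pattern timrc describes — generate a random secret once, stash it in hiera, and pass it down into grafana's `[security]` configuration block — can be sketched like this. The rendered ini fragment is illustrative, not the actual puppet template; only `secret_key` is a real grafana.ini setting being referenced in the discussion.

```python
import secrets
import string

# Generate a random alphanumeric secret of the kind one would stash in
# hiera and feed to grafana's [security] secret_key setting.
alphabet = string.ascii_letters + string.digits
secret_key = "".join(secrets.choice(alphabet) for _ in range(32))

# Illustrative rendering of the resulting config fragment (the real
# deployment would template this via puppet, with values from hiera).
security_block = "[security]\nsecret_key = {}\n".format(secret_key)
print(security_block)
```

This sidesteps the api_key hashing problem pabelanger hit, since `secret_key` is consumed by grafana as-is rather than needing a pre-hashed value injected into the database.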
19:59:35 <ianw> is this open discussion?  anyway, like to work through people's thoughts on -> https://review.openstack.org/#/c/194477/ ; spec to get images closer to the dib ci test so we don't download them over internet
19:59:35 <pabelanger> timrc, okay, so you are using the secret_key
19:59:41 <pabelanger> let me test that out
20:00:08 <pabelanger> jeblair, no, we are consuming an upstream puppet-grafana module directly
20:00:25 <jeblair> ah great
20:00:40 <jeblair> ianw: ack
20:00:44 <jeblair> thanks everyone!
20:00:45 <jeblair> #endmeeting