19:02:14 <pleia2> #startmeeting infra
19:02:15 <openstack> Meeting started Tue Jan 26 19:02:14 2016 UTC and is due to finish in 60 minutes.  The chair is pleia2. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:18 <openstack> The meeting name has been set to 'infra'
19:02:22 <ianw> o/
19:02:31 <pleia2> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:34 <zaro> o/
19:02:46 <Clint> o/
19:02:51 <pleia2> #topic Announcements
19:02:53 <cody-somerville_> o/
19:03:07 <pleia2> Do we have any non-agenda-y announcements?
19:03:14 <craige> o/
19:04:05 <pleia2> this week is the last call for the infra-cloud sprint registration, so be sure to sign up if you're coming so I can send the final counts to HPE: https://wiki.openstack.org/wiki/Sprints/InfraMitakaSprint#Registration
19:04:59 <pleia2> also probably worth noting that HPE Cloud shuts down this week
19:05:11 * Clint sniffs.
19:05:12 <pleia2> jeblair, mordred, clarkb - do we have any announcey things for this?
19:05:18 <cody-somerville_> Have we asked if some of the HP DC Ops folks wanted to attend for part?
19:05:27 <cody-somerville_> re: sprint
19:05:33 <nibalizer> o/
19:05:35 <mordred> o/
19:05:36 <AJaeger> did we send out announcements for the shutdown?
19:06:00 <pleia2> cody-somerville_: I think we have a tour of a datacenter planned (I need to follow up) but I don't think we've directly invited any of them to the sprint itself
19:06:32 <pleia2> AJaeger: I don't remember seeing any
19:08:07 <AJaeger> then let's send one...
19:08:40 <pleia2> now that we're closer in, hopefully we have a better idea of the exact impact (how many instances we have active vs. how many we'll lose)
19:09:12 <pleia2> so I think it would be good to send a message about the impending impact
19:09:23 <pleia2> who wants to send this email
19:09:24 <pleia2> ?
19:09:32 <cody-somerville_> I'm happy to volunteer.
19:10:00 <pleia2> cody-somerville_: ok, want to draft something in etherpad and then share in channel later so folks can chime in before you send?
19:10:07 <cody-somerville_> Yup. Sounds good to me.
19:10:10 <pleia2> great
19:10:33 <cody-somerville_> #action Cody to send nodepool HP Cloud sunset announcement.
19:10:57 <pleia2> #action cody-somerville to draft and send HPE Cloud shutdown notice+impact to openstack-infra and openstack-dev
19:11:06 <pleia2> hm, can non-chairs do actions?
19:11:21 * cody-somerville_ shrugs. :)
19:11:29 <pleia2> welp, maybe we'll have 2 :)
19:11:51 <pleia2> #topic Actions from last meeting
19:12:05 <pleia2> the only one was: fungi release gerritlib 0.5.0
19:12:08 <pleia2> was this done?
19:12:33 * AJaeger does not see a 0.5.0 tag
19:12:55 <pleia2> zaro: I know you had interest in having this done, do you know what happened?
19:13:43 <zaro> i think fungi said he was too busy to do it.
19:14:11 <pleia2> ok, is there an infra-root who wants to take this action item this week?
19:14:18 <nibalizer> I can do it
19:14:31 <pleia2> #action nibalizer release gerritlib 0.5.0
19:14:32 <pleia2> nibalizer: thanks :)
19:14:36 <nibalizer> sure thing
19:14:40 <pleia2> #topic Specs approval
19:14:44 <pleia2> APPROVED: Move docs.openstack.org/releases to releases.openstack.org
19:14:47 <pleia2> #link http://specs.openstack.org/openstack-infra/infra-specs/specs/releases-openstack-org.html
19:14:51 <pleia2> hooray!
19:15:04 <pleia2> APPROVED: Improve Translation Setup
19:15:09 <pleia2> #link http://specs.openstack.org/openstack-infra/infra-specs/specs/translation_setup.html
19:15:36 <pleia2> long road with that one, nice to see it approved
19:15:39 <pleia2> PROPOSED: Unified Mirrors
19:15:43 <pleia2> #link https://review.openstack.org/#/c/252678/
19:16:01 <pleia2> krotscheck: this one is yours :)
19:16:15 <krotscheck> hi!
19:16:32 <krotscheck> Yeah, so, it's already under heavy development, and fungi wanted to put it up for a council vote.
19:16:55 * krotscheck doesn't actually know what that means, but is here to answer questions.
19:17:04 <jeblair> we're in the home stretch actually
19:17:13 <krotscheck> jeblair: Well, for the AFS portion.
19:17:24 * krotscheck notices that it's in merge conflict.
19:17:32 * krotscheck has had too many ux meetings.
19:17:58 <jeblair> well, more than afs -- the pypi part of this (as reinterpreted through afs) is basically complete
19:18:10 <krotscheck> True
19:18:17 <krotscheck> Anyway, let me rebase that really quick
19:18:19 <jeblair> which means we just about have the platform needed to start adding other mirrors to it
19:18:30 <mordred> I also pushed up a patch to do apt mirroring in AFS
19:18:46 <jeblair> at any rate, it'd be good to open this for voting; and then probably a catch-up patch later to mention the afs changes
19:19:05 <cody-somerville_> I know from experience I've specifically wanted to split mirrors out to separate hosts for performance reasons. Will it still be easy for one to do that after this spec is complete?
19:19:17 <jeblair> (i realize that's slightly the wrong order, but apparently we've re-prioritized this :)
19:19:30 <mordred> cody-somerville_: yes
19:19:37 <jeblair> mordred: it will?
19:19:38 <pleia2> jeblair: sounds good
19:19:51 <jeblair> i guess i mean there's nothing stopping it
19:20:00 <jeblair> you can always have more hosts
19:20:03 <mordred> jeblair: sure. we can have AFS volumes on different file servers and we can have different apache hosts doing different portions
19:20:05 <mordred> yah
19:20:15 <jeblair> mordred: it's worth noting this spec doesn't mention afs :)
19:20:16 <mordred> I don't think we need that ourselves just now
19:20:18 <krotscheck> Why would that improve performance?
19:20:20 <mordred> jeblair: heh
19:20:25 <jeblair> mordred: it's the 'unified mirrors' spec
19:20:31 <jeblair> so it's actually mostly about putting everything on one host
19:20:35 <mordred> nod.
19:20:42 <mordred> so - in _that_ spec then, splitting would be harder :)
19:20:53 <jeblair> (that we then use that host to serve afs is a nice follow-on, but is not fundamental to the idea)
19:21:29 <cody-somerville_> I think the afs is actually maybe a bit more important than that, as we've found the # of mirrors increases load if you're just using good ol' rsync.
19:21:37 <krotscheck> I'm still not certain how having mirrors on different hosts would improve performance, given that everything's on an AFS share already.
19:21:52 <jeblair> we kind of keep going back and forth talking about afs or not which is confusing
19:22:37 <krotscheck> Ok, so that spec needs to be updated to include AFS.
19:22:39 <mordred> I agree. also, we have not experienced performance issues with our mirrors, so while it's a good topic, maybe let's come back to solving it later
19:22:45 <krotscheck> Because, well, that's actually what we're building.
19:22:46 <cody-somerville_> +1
19:23:10 <jeblair> yes, i mentioned that earlier, but we're building that on top of that spec
19:23:30 <krotscheck> jeblair: I think we disagree about whether it should be part of this spec or a followup.
19:24:03 <jeblair> krotscheck: to me, that spec describes how we are restructuring the mirrors to serve multiple types of mirror content from a single host per region
19:24:17 <krotscheck> jeblair: So AFS is an implementation detail?
19:24:27 <jeblair> krotscheck: the fact that we are then moving the backing store for that data from local disk to remote is a follow-on change
19:24:37 <krotscheck> Ok, I'm convinced.
19:24:40 <mordred> I agree with that characterization
19:24:47 <mordred> woot. we all agree!
19:24:55 <krotscheck> Mark the calendar!
19:25:02 <pleia2> ok, so folks can chime in and we'll set a deadline of next thursday (feb 4) for voting for now?
19:25:04 <jeblair> i will volunteer to write the update to it to describe the afs work
19:25:10 <mordred> jeblair: \o/
19:25:47 <jeblair> but i think it's worth voting on that now because it is 90% of what's going on and i think we all actually do agree on it, and it's a nice reference to have (we have certainly referenced it while doing this work :)
19:25:59 <pleia2> sounds good to me
19:26:08 <jeblair> (to be clear, i will write a follow-up patch so that we can open voting on this first one)
19:26:08 <mordred> I have voted
19:26:58 <pleia2> ok, onward to priority efforts!
19:26:59 <pleia2> #topic Priority Efforts: Ansible Puppet Apply
19:27:04 <pleia2> The agenda says: "Ready to remove from the priority efforts list/query and move into the implemented section now?"
19:27:43 <jeblair> i would have said yes -- though i think we may want to keep tracking this until we have launch-node working with puppet apply
19:28:08 <jeblair> currently our launch-node process still involves the puppetmaster
19:28:16 <jeblair> so we can't down it until that is resolved
19:28:43 <jeblair> i believe mordred has some patches up though
19:28:59 <jeblair> we may need to change their topics
19:29:03 <mordred> yah
19:29:07 <mordred> I need to fix them too
19:29:09 <mordred> they are incomplete
19:29:26 <mordred> but I hope to have that ready/done by next weekish
19:29:39 <jeblair> so i say we leave this topic for another week or two to track that
19:29:46 <pleia2> sounds good, thanks jeblair
19:29:58 <cody-somerville_> Does this mean config changes are applied almost instantaneously after landing now or is that still to be done?
19:30:06 <mordred> still to be done
19:30:10 <pleia2> none of the other priority efforts have updates, so I'm going to move on to the other agenda items
19:30:19 <pleia2> #topic Adding a new node to nodepool to support libvirt-lxc testing in Nova (thomasem, dimtruck)
19:30:46 <nibalizer> jeblair: there are a lot of things that require the puppetmaster
19:31:37 <nibalizer> thomasem: dim<tab> what's this then?
19:32:55 <jeblair> nibalizer: (what else?)
19:33:13 <jeblair> nibalizer: (i mean the actual puppetmaster service; not the host)
19:33:13 <pleia2> alright, maybe we'll come back to this one
19:33:16 <pleia2> SergeyLukjanov: are you about?
19:33:31 <nibalizer> jeblair: ooooo yes then we are in agreement
19:33:35 <jeblair> cool
19:33:45 <SergeyLukjanov> pleia2 not sure
19:33:45 <jeblair> (sorry i could have been more clear about that)
19:33:51 <pleia2> #topic Scheduling a Gerrit project rename batch maintenance (SergeyLukjanov)
19:33:57 <pleia2> SergeyLukjanov: all yours :)
19:34:00 <SergeyLukjanov> :)
19:34:20 <SergeyLukjanov> the "not sure" was because I'm finally jet lagged :)
19:34:27 <pleia2> hehe
19:34:29 <SergeyLukjanov> so, we have two items on the renaming list
19:34:51 <SergeyLukjanov> I can do it Friday or early morning Sat PST
19:35:12 <nibalizer> I am at FOSDEM this weekend, no availability
19:35:30 <pleia2> I'll be visiting penguins prior to LCA, so I won't be around either
19:36:21 <SergeyLukjanov> seems like we don't have a second root to back up
19:36:24 <pleia2> are there any other infra-root folks who will be around to assist?
19:36:55 <mordred> I will be driving a very large truck
19:37:14 <olaph> yes, you will
19:37:38 <bkero> Don't drink and root
19:37:40 <pleia2> ok, sounds like it needs to be bumped at least another week
19:37:44 <SergeyLukjanov> seems like we need to move this to the next meeting agenda and re-evaluate
19:37:51 <pleia2> SergeyLukjanov: sounds good
19:37:51 <SergeyLukjanov> pleia2 yup, thx
19:38:00 <pleia2> #topic Puppetboard/PuppetDB Crashes
19:38:07 <pleia2> so, this is a thing :(
19:38:10 <nibalizer> this is me
19:38:10 <jeblair> waaah
19:38:16 <nibalizer> so puppetboard will just randomly 500 on you
19:38:29 <nibalizer> and if you read apache logs this is because puppetdb is not responding
19:38:29 <jeblair> i too have noticed this
19:38:48 <nibalizer> in the puppetdb logs there are some exceptions and it kinda looks like the whole java thing restarts
19:39:05 <nibalizer> so this isn't me showing up with the fix
19:39:20 <nibalizer> i just thought we should identify it as a known issue and ya someone should probably dig
19:39:39 <nibalizer> usually that would be me but unless inspiration or frustration hits I'll likely not get to it until mid feb
19:40:06 <jeblair> did something change recently?
19:40:15 <jeblair> heh, i mean, we changed how we submit reports
19:40:18 <nibalizer> yea
19:40:22 <jeblair> but what else?
19:40:32 <nibalizer> we used a privateish api and now it crashes sometimes
19:40:36 <nibalizer> we r smart developers
19:40:55 <jeblair> nibalizer: oh, so the only substantial change we know is the way we submit reports?
19:41:06 <mordred> yh - that changed with the move to puppet apply
19:41:09 <nibalizer> thats the only change I know of
19:41:22 <mordred> we are no longer letting the puppetmaster do it for us
19:41:33 <mordred> we're grabbing the reports from the hosts and injecting them ourselves
19:41:34 <nibalizer> if someone wanted to try to debug this themselves I'd be happy to assist
19:41:38 <mordred> SO
19:41:44 <mordred> I have an idea that we may want to think about
19:41:58 <mordred> which is that since we don't use puppetmaster - the advanced features of puppetdb are kind of lost on us
19:42:15 <mordred> and all we REALLY need is something that can turn json/yaml reports into html people can look at
19:42:23 <pabelanger> ++
19:42:40 <mordred> so maybe if fixing puppetdb turns out too hard - we can just hack up a visualization thing to visualize the log data
19:42:48 <pleia2> I fully support continuing our merry-go-round of dashboards
19:42:53 <mordred> pleia2: :)
19:42:55 <jeblair> i agree -- so if the inspiration to try to fix puppetdb doesn't strike anyone, maybe the inspiration to do that will :)
19:43:05 <mordred> also - we could have it support the ansible info _AND_ the puppet info
19:43:12 <nibalizer> ya
19:43:13 <pleia2> mordred: yeah, that would be useful
19:43:20 <mordred> k. I'm not going to do that this week
19:43:24 <nibalizer> i've thought for a while that we should just submit the ansible info as if it was a puppet report
19:43:26 <jeblair> mordred: that would be a cool tool.
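[A minimal sketch of the kind of report-to-HTML tool being discussed here, assuming run reports have been exported as JSON files with illustrative field names (host, status, time, logs); this is not the actual infra tooling or the real puppet report schema.]

#!/usr/bin/env python3
"""Render JSON run reports as a single HTML page (sketch only)."""

import glob
import html
import json


def render_report(report):
    # One table row per log entry in the report; field names are assumptions.
    rows = "".join(
        "<tr><td>{}</td><td>{}</td></tr>".format(
            html.escape(str(entry.get("level", ""))),
            html.escape(str(entry.get("message", ""))),
        )
        for entry in report.get("logs", [])
    )
    return "<h2>{} - {} ({})</h2><table>{}</table>".format(
        html.escape(str(report.get("host", "unknown"))),
        html.escape(str(report.get("status", "unknown"))),
        html.escape(str(report.get("time", ""))),
        rows,
    )


def main():
    sections = []
    # Assume each puppet/ansible run report was already dumped to reports/*.json.
    for path in sorted(glob.glob("reports/*.json")):
        with open(path) as f:
            sections.append(render_report(json.load(f)))
    with open("reports.html", "w") as out:
        out.write("<html><body>{}</body></html>".format("".join(sections)))


if __name__ == "__main__":
    main()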
19:43:53 <pabelanger> do we have raw yaml / json for ansible / puppet some place?
19:44:01 <mordred> pabelanger: I can get you some
19:44:30 <pabelanger> I've been dumping my ansible stuff into influxdb atm, just to render some failures
19:44:44 <mordred> pabelanger: I'm landing now - remind me and I'll connect with you a little later to get you the datas
19:44:51 <pabelanger> mordred: cook
19:44:52 <crinkle> puppetboard + puppetdb is also providing fact data, not just reports
19:44:53 <pabelanger> cool*
19:45:00 <mordred> crinkle: yes
19:46:03 <crinkle> i just don't want NIH to get us into having half of a dashboard
19:46:09 <cody-somerville_> pabelanger: Do you know if RH is going to Open Source Ansible Tower?
19:46:21 <pabelanger> cody-somerville_: eventually
19:46:23 <nibalizer> crinkle: ++
19:46:45 <nibalizer> and its not like we can't add features to puppetdb/puppetboard that we want
19:46:54 <cody-somerville_> pabelanger: it looks pretty.
19:46:58 <nibalizer> we haven't even bothered upgrading either service since their installation
19:47:23 <nibalizer> anyways tl;dr the problem is known and if a human wants to investigate it please do, feel free to ping me for logs or getting started
19:47:42 <jeblair> ++
19:48:03 <pleia2> ok, thanks nibalizer
19:48:09 <pleia2> #topic OpenStack summit presentations (pabelanger)
19:48:12 <pleia2> #link https://etherpad.openstack.org/p/austin-upstream-openstack-infa
19:48:37 <pleia2> pabelanger: thanks for jumping on this, deadline for Austin summit submissions is coming up very fast
19:48:45 <pabelanger> Ya, if anybody is interested in helping with an openstack lightning talk, please add yourself to the etherpad
19:48:55 <pabelanger> so far me and pleia2
19:49:03 <pabelanger> could use another 1 or 2 people
19:49:38 <nibalizer> http://paste.openstack.org/show/485070/   is an example puppetdb log
19:49:40 <pleia2> you can propose other topics than the ones on there too if there's something else you want to talk about (though people do enjoy the gertty ones)
19:49:53 <pabelanger> Yup, dealers choice
19:49:55 <pabelanger> that is all
19:50:03 <pleia2> jeblair: thanks!
19:50:08 <cody-somerville_> \o/
19:50:25 <pleia2> #topic Open discussion
19:50:34 <pleia2> 10 minutes left
19:50:45 <Zara> o/
19:51:32 <jeblair> i like the lightning talks -- it looks like a bunch of fun topics from people who enjoy working on them :)
19:51:39 <pleia2> I'm flying to AU tomorrow afternoon for LCA, so I'll be out again for ~10 days
19:51:42 <craige> + 1 jeblair
19:51:53 <pleia2> jeblair: yes, it was a very good idea :)
19:52:03 <cody-somerville_> What's the limit for # of lightning talks?
19:52:24 <pleia2> cody-somerville_: there's a chat in the etherpad now on that
19:52:41 <cody-somerville_> Cool.
19:52:54 <pleia2> if we have more speakers, the talks are just shorter ;)
19:52:55 <jeblair> pleia2: we'll miss you!
19:52:56 <cody-somerville_> So it'll depend on how many talks we get / 45 minutes ?
19:53:00 <jeblair> say hi to the penguins
19:53:05 <pleia2> penguins \o/
19:53:21 <pabelanger> Ya, right now we have 4
19:53:29 <cody-somerville_> From experience, 5 minute lightning talks are better than 10 if you can get enough speakers.
19:53:30 <pabelanger> and 45mins
19:53:56 <Zara> I have a coupla things to mention before the end of the meeting! :)
19:54:12 <pleia2> Zara: please do
19:54:34 <Zara> 1. as far as we know, storyboard emails are ready to go, pending infra setting up email config, which I believe is this patch https://review.openstack.org/#/c/270331/ . so if there's anything else we can help with, let us know
19:54:43 <pleia2> oh, awesome
19:55:36 <Zara> 2. we're doing a meetup on the 17th of February
19:55:38 <jeblair> if anyone wants to update that, feel free; otherwise i will get back to it in a little bit (maybe later in the week?)
19:56:20 <pleia2> Zara: in person, or virtual?
19:56:40 <Zara> pleia2: in person, sorry, just finding details to link
19:56:45 * pleia2 nods
19:57:19 <Zara> etherpad: https://etherpad.openstack.org/p/StoryBoard_Mitaka_Midcycle wiki page: https://wiki.openstack.org/wiki/StoryBoard/Midcycle_Meetup
19:57:28 <Zara> that was rushed, ha
19:57:39 <Zara> I meant to go through and fix the formatting, but didn't get around to it :/
19:57:57 <pleia2> oh good, anteaya will be there
19:58:12 <pleia2> so there will be some infra representation
19:58:41 <Zara> yes! we'd like more (originally it was planned around infra-cloud, but topics depend on who can come and so on) :D
19:58:57 <Zara> anyone going to the ops meetup, it's in the same city, the day after
19:59:31 <Zara> I'd planned to announce it more stylishly
19:59:34 <nibalizer> "For everyone: Cake. " this is going to be a good meetup
19:59:40 <pleia2> Zara: was this announced on openstack-dev? if not, may be worth announcing/forwarding to openstack-infra to invite us too
19:59:40 <Zara> yes! :D
20:00:02 <Zara> ah, okay, will do!
20:00:06 <Zara> (am very new to this...)
20:00:08 <Zara> thank you
20:00:08 <pleia2> Manchester is too far for me for a trip so soon, but maybe others can go (not sure if anyone on our team is going to the ops meetup this time)
20:00:13 <pleia2> ok, that's a wrap!
20:00:15 <pleia2> #endmeeting