19:10:48 <cinerama> #startmeeting tripleo
19:10:49 <openstack> Meeting started Tue Oct 21 19:10:48 2014 UTC and is due to finish in 60 minutes.  The chair is cinerama. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:10:50 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:10:53 <openstack> The meeting name has been set to 'tripleo'
19:11:02 <tchaypo> Good start
19:11:06 <cinerama> oksohi
19:11:17 <cinerama> #topic agenda
19:11:17 <cinerama> * bugs
19:11:17 <cinerama> * reviews
19:11:17 <cinerama> * Projects needing releases
19:11:17 <cinerama> * CD Cloud status
19:11:17 <cinerama> * CI
19:11:19 <cinerama> * Tuskar
19:11:24 <cinerama> * Specs
19:11:26 <cinerama> * open discussion
19:11:27 <cinerama> Remember that anyone can use the link and info commands, not just the moderator - if you have something worth noting in the meeting minutes feel free to tag it
19:11:30 <cinerama> #topic bugs
19:11:36 <cinerama> #link https://bugs.launchpad.net/tripleo/
19:11:36 <cinerama> #link https://bugs.launchpad.net/diskimage-builder/
19:11:36 <cinerama> #link https://bugs.launchpad.net/os-refresh-config
19:11:38 <cinerama> #link https://bugs.launchpad.net/os-apply-config
19:11:40 <cinerama> #link https://bugs.launchpad.net/os-collect-config
19:11:42 <cinerama> #link https://bugs.launchpad.net/os-cloud-config
19:11:44 <cinerama> #link https://bugs.launchpad.net/tuskar
19:11:46 <cinerama> #link https://bugs.launchpad.net/python-tuskarclient
19:12:03 <cinerama> (so do we usually talk about the bugs now?)
19:12:13 <bnemec> Usually criticals
19:12:24 <bnemec> I'm seeing two in tripleo
19:12:43 <bnemec> #link https://bugs.launchpad.net/tripleo/+bug/1188067
19:12:44 <uvirtbot> Launchpad bug 1188067 in tripleo "* listening services available on all addresses" [Critical,Triaged]
19:12:50 <bnemec> #link https://bugs.launchpad.net/tripleo/+bug/1374626
19:12:51 <uvirtbot> Launchpad bug 1374626 in tripleo "UIDs of data-owning users might change between deployed images" [Critical,Triaged]
19:13:03 <tchaypo> The first one is assigned to me, I think
19:13:10 <tchaypo> I flagged it as critical
19:13:11 <bnemec> Yep, looks like
19:13:34 <cinerama> we also have a critical in os-cloud-config
19:13:37 <tchaypo> It came up because jp security noticed our undercloud node listening on *:53
19:13:38 <cinerama> #link https://bugs.launchpad.net/os-cloud-config/+bug/1382275
19:13:39 <uvirtbot> Launchpad bug 1382275 in os-cloud-config "new register-nodes does not accept ints for numeric input" [Critical,In progress]
19:13:57 <tchaypo> and more importantly the Internet had noticed and it was being used to ddos people
19:14:22 <cinerama> yikes
19:14:30 <bnemec> 1374626 is assigned to SpamapS, so no update unless he arrives.
19:14:41 <bnemec> What is on port 53?
19:14:48 <tchaypo> We've fixed this for most services by putting them behind haproxy and controlling where it listens
19:14:50 * bnemec hopes it isn't something really obvious
19:15:01 <tchaypo> Dnsmasq, run by neutron
19:15:41 <greghaynes> bnemec: dns
19:16:09 <GheRivero> I took ownership of the critical one in os-cloud-config
19:16:11 <bnemec> Ah, interesting that they firewall everything else, but that's allowed to listen on public interfaces.
19:16:28 <tchaypo> And now that I think about it, perhaps this is something I should be raising with neutron
19:16:30 <bnemec> So do we have a plan for addressing that?
19:16:47 <cinerama> yup looks like GheRivero has a proposed fix for 1382275 (which could use some review love)
19:16:53 <tchaypo> The quick fix for hp2 has been some manual iptables rules to block it
19:16:57 <greghaynes> yea, dns is an annoying one because its udp, so you have to either try and stateful udp which is annoying or leave it open
19:17:02 <tchaypo> Oh. I love ghe
19:17:02 <greghaynes> anywho, sidetracked
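(The quick fix tchaypo mentions — manually blocking external DNS with iptables — could look roughly like the sketch below. The public interface name eth0 is an assumption, not taken from the hp2 setup:)

```shell
# Drop DNS queries arriving on the assumed public interface (eth0),
# leaving resolution on internal interfaces untouched.
iptables -A INPUT -i eth0 -p udp --dport 53 -j DROP
iptables -A INPUT -i eth0 -p tcp --dport 53 -j DROP
```

Dropping rather than rejecting avoids sending any reply traffic, which matters when the open resolver is being used for amplification.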
19:17:33 <tchaypo> I'll review GheRivero 's patch. Moving on
19:17:45 <cinerama> k shall we move on to reviews, on that note?
19:17:57 <bnemec> #action Review https://review.openstack.org/#/c/129950/
19:17:59 <GheRivero> it needs some unittest but it's ok
19:18:13 <greghaynes> With the uid mapping one, we had an email thread, im not sure it reached a consensus
19:18:19 <greghaynes> it might be worth resurrecting that thread
19:18:23 <bnemec> Summit topic?
19:18:28 <greghaynes> oh
19:18:45 <greghaynes> yes, ill resurrect and ask if we want to discuss more at summit or if we have a consensus
19:18:59 <greghaynes> #action greghaynes to resurrect uid mapping and ask if we want to discuss more at summit or if we have a consensus
19:19:16 <tchaypo> Is there a spec up for review?
19:19:19 <cinerama> summit is close enough that that sounds like an excellent opportunity to iron it out more quickly than email back-and-forth
19:19:20 <tchaypo> Or a change?
19:19:30 <greghaynes> tchaypo: pretty sure no
19:19:33 <cinerama> iirc there was a proposed change that attracted discussion
19:19:41 <greghaynes> tchaypo: I think the fix might live entirely internally ATM, actually :(
19:20:12 <greghaynes> anywho, resurrecting the thread will hopefully answer these questions :)
19:20:13 <cinerama> i *thought* i saw it come up for review recently
19:20:24 <bnemec> greghaynes: +1
19:20:36 <bnemec> Whoa, I just zoomed my whole screen somehow
19:21:00 <cinerama> haha
19:21:01 <tchaypo> Haha
19:21:06 <cinerama> so is that it for the bugs?
19:21:16 <bnemec> So, last question I have on this topic is do we have a path forward for 1188067?
19:21:57 <tchaypo> Oh, I misread earlier comment
19:22:01 <tchaypo> And thought ghe had provided a patch
19:22:21 <bnemec> Yeah, multiple bug discussions happening at once. :-)
19:22:24 <tchaypo> I think we can get a list of interfaces to listen on from heat
19:22:39 <tchaypo> We already use that list to control where haproxy listens for things
19:23:11 <tchaypo> As long as neutron has a hook that we can use to set the interfaces, that should be sufficient
19:23:15 <greghaynes> Im pretty sure we decided that services should not bind on all interfaces by default (for this reason) and then we should explicitly list services that should listen on public interfaces
19:23:19 <greghaynes> so this sounds like just a missed case of that?
19:23:25 <tchaypo> If neutron doesn't we can raise it with them
19:23:44 <tchaypo> And as a fallback, it'd be easy to manually block with iptables
19:23:58 <tchaypo> greghaynes: That's how I interpret it
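(The approach greghaynes and tchaypo agree on — binding dnsmasq to specific interfaces instead of all of them — corresponds to dnsmasq options like the following. The interface names here are placeholders, not values from the TripleO templates:)

```ini
# Sketch of dnsmasq options that restrict where it listens.
bind-interfaces          # bind only the listed interfaces, not 0.0.0.0
interface=br-ctlplane    # assumed internal/provisioning interface
except-interface=eth0    # assumed public interface to exclude
```

In the neutron case these would need to be applied via neutron's dnsmasq configuration hook rather than edited directly, which is why tchaypo suggests raising it with the neutron team.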
19:24:55 <tchaypo> Moving on?
19:24:59 <bnemec> +1
19:25:06 <cinerama> #topic reviews
19:25:17 <cinerama> #info There's a new dashboard linked from https://wiki.openstack.org/wiki/TripleO#Review_team - look for "TripleO Inbox Dashboard"
19:25:17 <cinerama> #link http://russellbryant.net/openstack-stats/tripleo-openreviews.html
19:25:17 <cinerama> #link http://russellbryant.net/openstack-stats/tripleo-reviewers-30.txt
19:25:17 <cinerama> #link http://russellbryant.net/openstack-stats/tripleo-reviewers-90.txt
19:26:00 <cinerama> ...
19:26:02 <tchaypo> Queue growth in last 30 is now at 0.9/day
19:26:31 <tchaypo> (Bottom of 30 day report)
19:27:18 <tchaypo> But
19:27:44 <tchaypo> 3rd quartile wait time: 6 days, 5 hours, 51 minutes
19:27:59 <tchaypo> Thats down from almost 2 weeks
19:28:09 <jdob> \o/
19:28:20 <cinerama> wtg reviewing people
19:28:55 <bnemec> Yeah, it seems like the flow of patches coming in must have slowed down.  Not sure whether that's good or not.
19:29:07 <tchaypo> We talked about trying to be proactive wrt first timers
19:29:19 <tchaypo> But I don't know if that has been actioned at all
19:29:27 <jdob> also the end of a release, it makes sense that things would quiet a bit before summit
19:29:36 <bnemec> True
19:29:59 <cinerama> we could look into doing a more formal "patch pilot" program
19:30:06 <cinerama> to encourage folks to contribute
19:30:59 <tchaypo> Maybe we can discuss on list/summit?
19:31:47 <cinerama> that sounds good, though i won't be at summit
19:31:59 <cinerama> anyway so is that enough on reviews for now?
19:32:14 <bnemec> wfm
19:32:29 <cinerama> #topic Projects needing releases
19:32:44 <cinerama> so...are there any? :)
19:33:04 <greghaynes> we should, its more a question of who ;)
19:33:09 <greghaynes> I really want to learn how, actually
19:33:20 <jdob> i just did them last thursday, do we really need them weekly?
19:33:30 <jdob> there weren't a ton of changes when i last did them
19:33:33 <cinerama> good qn
19:33:34 <greghaynes> jdob: can I do it and try and poke you for support?
19:33:36 <jdob> it's not a hard or really time consuming process
19:33:43 <jdob> greghaynes: ya, absolutely
19:33:56 <greghaynes> awesome
19:34:21 <jdob> you'll need perms before you can. before i got them, lifeless asked me to quickly talk with someone about the process
19:34:36 <greghaynes> #action greghaynes to learn how to release all the things
19:34:37 <jdob> so if you want to read up about it and ping me just to double check you're clear, i'll vouch for you
19:35:00 <greghaynes> jdob: ok. good time to do that when theres nothing pressing to go out then :)
19:35:07 <jdob> very true :D
19:35:17 <ccrouch> https://review.openstack.org/#/c/105275/ is a nice change that landed in DIB yesterday which i'd like to see released
19:35:27 <cinerama> maybe it would be a good week for greghaynes to do a release for practice if there are not many things going out
19:35:34 <jdob> cinerama: agreed
19:35:48 <jdob> i'd have argued more to skip it if it wasnt going to be an opportunity to train another
19:35:57 <jdob> "train"... its really not that hard :)
19:36:54 <cinerama> mmkay, anything else on reviews?
19:37:08 <greghaynes> s/reviews/releases, nerp
19:37:10 <ccrouch> i like the weekly cadence. then there is no doubt whether your change is going out or not, and each release delta is small in case of regressions
19:37:19 <cinerama> greghaynes: indeed
19:37:22 <ccrouch> so +1 greghaynes :-)
19:37:42 <jdob> ccrouch: that sounds like you volunteering to help out :D
19:37:53 <cinerama> #topic CD Cloud status
19:38:30 <tchaypo> I discovered this week that we had the wrong mac addresses for most of the machines
19:38:57 <tchaypo> We now have ~70 that can be (and have been) used to bring up an overcloud
19:39:04 <derekh> rh1 - OK
19:39:21 <derekh> hp1 - patch to start using it still waiting to be merged
19:39:58 <tchaypo> Do you have a link?
19:40:18 <ccrouch> jdob: ha
19:40:25 <cinerama> if there's a nice status dashboard somewhere i don't know about we should add it to the agenda wiki thing
19:40:25 <derekh> tchaypo: this should be the link but gerrit is throwing a 404 https://review.openstack.org/#/c/126513/
19:40:53 <cinerama> derekh: wfm, weird
19:41:03 <tchaypo> Thanks
19:41:22 <derekh> tchaypo: I mean thats the link from my gerrit dashboard
19:41:47 <derekh> weird
19:42:17 <lsmola2> the link works for me
19:42:23 <derekh> hmm,  probably something to do with the project being renamed to system-config
19:42:53 <bnemec> So basically we just need an infra person to approve that so we can be multi-region again?
19:43:23 <derekh> bnemec: yup, and hope its ok ;-) , its been a few weeks since I last looked at hp1
19:43:24 <tchaypo> Is jerryz around?
19:44:02 <bnemec> derekh: Only one way to find out :-)
19:44:04 <greghaynes> I can poke clarkb because hes sitting across from me
19:44:17 <bnemec> greghaynes: He already +2'd, so don't poke him. :-)
19:44:22 <greghaynes> oh
19:44:24 <greghaynes> damn
19:44:44 <bnemec> I mean unless you want to. ;-)
19:45:21 * derekh wont be here to help with problems if its merged now
19:45:51 <cinerama> #action infra person should approve adding hp1 back
19:45:52 <bnemec> Anything else on CI?  I almost hate to say it, but based on derekh's weekly updates it's been pretty quiet lately.
19:45:59 <derekh> ok, I get the 404 if I'm logged in, and can see the link if I'm logged out
19:46:01 <bnemec> derekh: So should we hold off?
19:46:32 <derekh> bnemec: not necessarily, ideally somebody should be available if there are problems
19:46:58 <bnemec> derekh: Do we have someone else who will be able to look into issues?
19:47:29 <bnemec> Pretty sure I don't know enough to be of use.
19:48:24 <derekh> bnemec: any of the cd admins have access to the cloud, but may be lacking in familiarity; if there are problems, logging into the hp1 bastion and joining the screen session is a good start
19:49:16 <tchaypo> re hp1 - it's already down, right?
19:49:22 <derekh> bnemec: anyway I wouldn't let that stop it merging, worst case scenario CI might be failing for a few hours
19:49:31 <tchaypo> i don't think we can make it much worse, unless it starts accepting jobs and then spuriously failing them
19:50:09 <bnemec> Hopefully it's not down, but it's not used by nodepool atm.
19:50:25 <derekh> bnemec: yup, thats the correct assessment
19:50:32 <cinerama> mmkay
19:51:11 <cinerama> shall we move on? we're getting close to the end of our slot for today i think
19:51:14 <tchaypo> we're running low on time
19:51:17 <bnemec> +1
19:51:22 <derekh> so, to sum up, we should just get it merged regardless of who is around, somebody will pick up the pieces
19:51:27 <cinerama> #topic Tuskar
19:51:37 <jdob> nothing spectacular to report
19:51:52 <jdob> mostly just prepping for summit
19:52:10 <cinerama> if no one's got anything else here, shall we move on to specs?
19:52:27 <greghaynes> gogogo
19:52:35 <cinerama> #topic Specs
19:53:20 <greghaynes> *crickets*
19:53:23 <greghaynes> I think thats a good sign
19:53:38 <bnemec> I think the big thing is summit topics.
19:53:41 <cinerama> anyone want to talk about any of the open specs, or shall i open the floor to general discussion?
19:54:02 <cinerama> #topic open discussion
19:54:06 <cinerama> right, go nuts
19:54:12 <ccrouch> bnemec: and the need to get *something* posted on a topic before summit
19:54:31 <ccrouch> rather than just coming in with all the details at summit
19:54:55 <bnemec> #link http://lists.openstack.org/pipermail/openstack-dev/2014-October/048652.html
19:55:12 <bnemec> ^ML discussion of the topics SpamapS proposed for our scheduled sessions
19:55:48 <bnemec> Since I think we're supposed to be using a more collaborative process for scheduling this cycle it would be nice to have more input. :-)
19:56:12 <lifeless> INPUT
19:56:16 <lifeless> sorry, had to
19:56:24 <bnemec> :-)
19:56:30 <cinerama> did we have an etherpad on that as well?
19:56:34 <cinerama> hi lifeless
19:56:45 <bnemec> cinerama: Yes, it's linked from the first ML post.
19:56:52 <cinerama> bnemec: OIC
19:56:53 <bnemec> #link https://etherpad.openstack.org/p/kilo-tripleo-summit-topics
19:57:31 <derekh> I didn't post anything up about CI this time round cause I thought it was much the same as the last time we discussed it
19:57:54 <bnemec> So, I don't expect us to resolve the question in the next three minutes, but I think schedules are due later this week so ASAP.
19:58:28 <bnemec> I guess that's all I had to say.
19:58:30 <lifeless> hi cinerama
19:59:29 <greghaynes> good meeting everyone!
19:59:41 <cinerama> we done here?
20:00:07 * bnemec has nothing else
20:00:33 <tchaypo> i have coffee now
20:00:39 <tchaypo> but that's not hugely relevant to the meeting
20:00:45 <cinerama> cool, i'll close up then. thanks for participating folks
20:00:52 <cinerama> #endmeeting tripleo