19:02:51 <jeblair> #startmeeting infra
19:02:51 <openstack> Meeting started Tue Jul 14 19:02:51 2015 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:52 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:53 <anteaya> that is going to be tab complete fun
19:02:55 <openstack> The meeting name has been set to 'infra'
19:03:01 <jeblair> #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:04 <jeblair> #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-07-07-19.02.html
19:03:10 <jhesketh> o/
19:03:21 <jeblair> #topic Announcements
19:03:30 <frickler> o/
19:03:40 <pabelanger> o/
19:03:42 <jeblair> we have some new core members!
19:03:45 <jeblair> #info Wayne Warren (waynr) added to jenkins-job-builder-core
19:04:05 <jeblair> waynr has been doing a lot of jjb work recently, and is working on some specs in that area
19:04:27 <jeblair> hopefully will be joining us in the near future to talk about them
19:04:27 <fungi> i would welcome waynr, but he isn't around i guess
19:04:38 <pabelanger> welcome!
19:04:46 <nibalizer> o/
19:05:04 <mordred> o/
19:05:05 <jeblair> waynr is not a 100% openstack person, so may not always be able to jump into our meetings
19:05:22 <mmedvede> o/
19:05:25 <fungi> jjb is less of a 100% openstack repo anyway these days, so great fit!
19:05:26 <jeblair> but that makes for a healthy project, so cool :)
19:05:32 <janonymous_> o/
19:05:42 <jeblair> #info Adam Coldrick (SotK) and Zara Zaimeche (Zara_) added to storyboard-core
19:06:03 <mordred> woot
19:06:08 <Zara_> \o/ thank you!
19:06:09 <jeblair> Adam and Zara are interested in helping to maintain and move storyboard forward
19:06:10 <pleia2> wonderful
19:06:29 <AJaeger> welcome SotK and Zara_ !
19:06:30 <jeblair> they are also running it in production so "don't break prod" is something we all agree on :)
19:06:54 * SotK says hello
19:06:56 <clarkb> neat
19:07:03 <fungi> SotK: Zara_: thanks for taking the plunge on that, and glad it's found renewed interest outside our use case!
19:07:26 <SotK> fungi: no problem, it's an interesting project!
19:07:32 <jeblair> #info Ian Wienand (ianw) added to nodepool-core and project-config-core
19:07:55 <pleia2> thanks for your work, ianw!
19:08:04 <fungi> ianw's been a huge help so far. i'm very excited by this
19:08:06 <jesusaurus> o/
19:08:12 <jeblair> ian groks nodepool, and image building
19:08:28 <AJaeger> glad to get help on project-config-core, welcome ianw!
19:09:06 <jeblair> i expect his main area of project-config interest will be related images, at least at first
19:09:15 <jeblair> but there's a whole lot going on there, and more to come
19:09:27 <fungi> especially helpful for him to be core on project-config since that's where our prep scripts and dib elements for nodepool images reside
19:09:34 <jeblair> yup
19:09:46 <ianw> thanks, hope to be useful :)
19:09:51 <pabelanger> ianw, awesome work
19:10:14 <jeblair> #topic Specs approval
19:10:21 <jeblair> #topic Specs approval: Centralize release tagging
19:10:26 <jeblair> #link centralize release tagging spec https://review.openstack.org/191193
19:10:29 <jeblair> #info centralize release tagging spec was approved
19:10:39 <jeblair> i finally remembered to merge this :)
19:11:32 <jeblair> so actually
19:11:39 <jeblair> #link centralize release tagging spec http://specs.openstack.org/openstack-infra/infra-specs/specs/centralize-release-tagging.html
19:11:43 <jeblair> that's better
19:12:06 <jeblair> #topic Specs approval: Host trystack.o.o site
19:12:11 <jeblair> #link host trystack.o.o site spec https://review.openstack.org/195098
19:12:22 <jeblair> this one seems ready
19:12:27 <jeblair> and fairly straightforward
19:12:37 <jeblair> anyone have concerns or should we open voting on it?
19:12:52 <fungi> no concerns here
19:12:54 <pleia2> I had a look yesterday, voting sounds good
19:13:30 <mordred> vote++
19:13:55 <jeblair> #info host trystack.o.o site spec voting open until 2015-07-16 19:00 UTC
19:13:59 <clarkb> I haven't been able to review specs so abstain
19:14:04 <clarkb> oh I have time, never mind then
19:14:19 <jeblair> #topic Specs approval: Add images mirror spec
19:14:44 <ianw> so currently the unit tests for diskimagebuilder are *really* unreliable
19:16:17 <ianw> this is my thought, but we may need to do something else because changes are not merging
19:16:41 <jeblair> i have not looked at this yet, and i think we have some other mirror related ideas floating around
19:17:04 <clarkb> I have not looked either but the simplest way to mirror these would be to have swift serve them
19:17:21 <clarkb> since we already upload all rax dib images to swift. We can look at doing that for hpcloud too
19:17:27 <mordred> maybe it's time to sketch out a long-term mirror strategy as well as some short-term strategic fixes? (also, so that we can know whether various things are short- or long-term solutions)
19:17:45 <jeblair> mordred: yeah
19:18:01 <ianw> (just for the logs, see various rechecks on https://review.openstack.org/185842)
19:18:21 <jeblair> i think for this we should collect a set of people now to iterate on this a bit more and then bring it back for voting later
19:18:49 <mordred> jeblair: ++
19:19:08 <jeblair> i'm happy to help with it (since some of the ideas floating around are mine)
19:19:52 <pleia2> I can pitch in too (haven't looked at this particular spec yet, but will do)
19:19:53 <mordred> wow. that test sure did spend 30 minutes downloading the fedora base cloud image
19:20:09 <mordred> #link http://logs.openstack.org/42/185842/7/check/gate-dib-dsvm-functests-devstack-f21/779d39a/console.html.gz
19:20:53 <ianw> yeah, not only does it not work, it's not very friendly to external mirrors
19:21:15 <ianw> although i bet they all see lots of even crazier stuff
19:21:23 <clarkb> oh that kind of image mirroring
19:21:38 <clarkb> so we have two sets of images we need to mirror, something to keep in mind when solving
19:21:55 <jeblair> i volunteer clarkb to also review that spec
19:21:58 <jeblair> and mordred :)
19:22:01 <clarkb> kk :)
19:22:04 <mordred> ++
19:22:19 <jeblair> ianw: does that work for you?
19:22:29 <fungi> we also have jobs which generate images and then other jobs which consume the same
19:22:52 <clarkb> fungi: good point, so 2.5 types of image
19:23:00 <clarkb> upstream, our images, images we build for others
19:23:07 <ianw> thanks, yeah that is fine.  i know the problem space is quite large, so hopefully we can find some baby-steps
19:23:16 <fungi> so this is sort of similar for the second half of that problem, but the first half is "notice this has updated and retrieve it" rather than "build and upload it"
19:23:35 <ianw> that get us moving in the right direction *and* get nodepool functional tests working
19:23:42 <mordred> yah. I think we can ... also, even a sketched out "in 3 years we'd love to have mirroring that looks like X" should help us with some babysteps for now as well
19:23:56 <clarkb> did someone write a job with the nodepool devstack work I did?
19:24:05 <clarkb> or do you mean dib func tests?
19:24:14 <ianw> bah, sorry, dib
19:24:15 <clarkb> (it's still on my list to make the nodepool + devstack stuff work)
19:25:02 <jeblair> #info continue work on add images mirror spec: https://review.openstack.org/194477
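(For context, the approach clarkb floated above — having swift serve the dib-built images we already upload — amounts to an upload step roughly like the sketch below. This uses python-swiftclient; the auth endpoint, credentials, and container/object names are illustrative assumptions, not anything from our real configuration.)

    # Hedged sketch: publish a dib-built image from a world-readable swift
    # container so jobs can download it like a mirror.  All names below are
    # placeholders.
    from swiftclient import client as swift

    conn = swift.Connection(
        authurl='https://identity.example.com/v2.0',  # hypothetical endpoint
        user='image-uploader',
        key='secret',
        tenant_name='images',
        auth_version='2',
    )

    # '.r:*' makes the container publicly readable.
    conn.put_container('dib-images', headers={'X-Container-Read': '.r:*'})

    with open('ubuntu-trusty.qcow2', 'rb') as image:
        conn.put_object('dib-images', 'ubuntu-trusty.qcow2', contents=image)

    # The public URL is the storage URL plus container and object name.
    print(conn.get_auth()[0] + '/dib-images/ubuntu-trusty.qcow2')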
19:25:26 <jeblair> #topic Priority Efforts (Swift Logs)
19:26:04 <jhesketh> so there's probably not a lot to say here except there are some changes ready for review https://review.openstack.org/#/q/status:open+project:openstack-infra/os-loganalyze+branch:master+topic:enable_swift,n,z
19:26:25 <jhesketh> basically I think the problems we had with updating os-loganalyze before were due to not reloading apache
19:26:32 <jhesketh> this should happen now, so we should check that theory
19:26:36 <jeblair> okay cool
19:26:47 <jhesketh> unfortunately I never had the logs from what went wrong, so it's hard to say
19:26:55 <jeblair> jhesketh: what are the next few steps?
19:26:59 <jhesketh> it's something we should watch when it goes in
19:27:14 <fungi> jhesketh: you had the stack traces though, correct? i at least provided those
19:27:21 <fungi> i'm pretty sure
19:27:36 <clarkb> I never saw the errors fwiw
19:27:39 <jhesketh> hmm, if you did, I've missed them :-(
19:28:04 <clarkb> jhesketh: the change(s) to do the apache reload did get in correct?
19:28:17 <jhesketh> yes
19:28:27 <jhesketh> https://review.openstack.org/#/c/199375/
19:28:39 <clarkb> ok (just checking as I remember reviewing it and it wasn't on the list you linked)
19:29:23 <jhesketh> so there are two options for next steps, I think.. 1) Wait until we have really solid upgrade and integration tests for os-loganalyze, or 2) carefully merge the changes during a quiet time
19:30:01 <jhesketh> pragmatically I'd like to choose #2
19:30:20 <anteaya> we have been using #2 prior to now, have we not?
19:30:50 <clarkb> anteaya: we have
19:30:52 <jhesketh> more or less
19:30:58 <clarkb> jhesketh: I agree and am happy to be around to help with that
19:31:01 <jeblair> 2 works for me :)
19:31:12 <anteaya> I'm happy with #2
19:31:29 <mordred> 2
19:31:33 <fungi> jhesketh: found the paste!
19:31:37 <fungi> #link http://paste.openstack.org/show/205683/
19:31:41 <clarkb> I am awake at all hours so we can even do it during your day :)
19:31:52 <pleia2> haha, poor clarkb
19:31:52 <anteaya> clarkb: yay babies
19:31:53 <jhesketh> :-)
19:32:00 <jhesketh> my day tends to be quieter for most people anyway
19:32:24 <jhesketh> fungi: ah cool, thanks for that... when I stumbled across the reloading apache problem that was what I saw
19:32:28 <jhesketh> so hopefully that is the fix
19:32:31 <fungi> perfect
19:32:34 <fungi> smoking gun
19:33:17 <jhesketh> clarkb: shall we merge it on my Saturday, so Friday night for most others?
19:33:28 <clarkb> jhesketh: that sounds good
19:33:45 <clarkb> it's an easy rollback so I'm not worried about it if we do have problems
19:33:50 <jhesketh> cool, I'll ping you then and we can figure out if it's a good time
19:33:53 <jhesketh> agreed
19:34:18 <jeblair> cool, anything else on this?
19:34:28 <jhesketh> I'm also happy to do it myself but appreciate the offer for help
19:34:38 <jhesketh> probably just getting reviews on that list would be most helpful
19:35:05 <jeblair> #info clarkb and jhesketh to babysit application of https://review.openstack.org/#/c/199375/
19:35:18 <jeblair> #info on friday/saturday
19:35:30 <fungi> jhesketh: now that you have ssh access you can also easily watch the access logs on static.o.o
19:35:41 <fungi> so can revert fairly quickly if there are still issues
19:35:44 <jhesketh> indeed :-)
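(As a rough illustration of the babysitting clarkb and jhesketh just agreed to, a hypothetical post-update smoke check against os-loganalyze might look like the sketch below. The sample URL is simply the console log linked earlier in the meeting; none of this is part of the actual plan.)

    # Hypothetical smoke check: fetch a known log through os-loganalyze and
    # confirm it is served rather than erroring out after the update.
    import sys

    import requests

    SAMPLE_LOG = ('http://logs.openstack.org/42/185842/7/check/'
                  'gate-dib-dsvm-functests-devstack-f21/779d39a/console.html.gz')

    resp = requests.get(SAMPLE_LOG, timeout=30)
    if resp.status_code != 200 or not resp.content:
        sys.exit('smoke check failed: HTTP %s' % resp.status_code)
    print('log served OK: %d bytes, content-type %s'
          % (len(resp.content), resp.headers.get('Content-Type')))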
19:36:24 <jeblair> i forgot to do agenda cleanup last week, so i think the next topic is grafana, does that seem wrong to anyone?
19:36:45 <anteaya> that was last week was it not?
19:37:07 <jeblair> anteaya: not according to the summary
19:37:26 <jeblair> let's go with it
19:37:27 <SotK> I have a couple of things to discuss re: StoryBoard
19:37:46 <jeblair> okay, storyboard then grafana
19:37:49 <jeblair> #topic Storyboard Development
19:38:37 <jeblair> SotK: what's on your mind?
19:38:48 <SotK> First up, would it be possible for Zara_ and I to be added to the storyboard-webclient-core group on Gerrit too please?
19:39:03 <jeblair> did i miss that?  sorry.  certainly!
19:39:09 <SotK> thanks!
19:39:33 <pabelanger> jeblair, anteaya no, we ran out of time last week for grafana
19:39:56 <SotK> I was also wondering if we should send specs to the infra-specs repo for any features we plan to implement
19:39:57 <anteaya> pabelanger: ah
19:40:29 <SotK> For example, I've recently been thinking about task lists and would find design feedback from other folk useful I think
19:41:30 <jeblair> SotK: that's a good question... honestly, i think it comes down to whether there will be useful reviewers....
19:42:00 <jeblair> SotK: maybe the best thing is to go ahead and do that for the task list idea you're thinking of and see who shows up and if it's useful
19:42:28 <jeblair> SotK: and if it turns out that it's not, we can decide not to do it next time :)
19:42:37 <SotK> sounds good
19:42:38 * mordred agrees with jeblair
19:43:16 <jhesketh> +1
19:43:35 <SotK> Finally, would it be possible for us to reinstate the old storyboard weekly meeting at a similar time to the 15:00 UTC it used to be at?
19:43:49 <pleia2> I'd say go for it
19:43:56 <anteaya> only if that space is still available
19:44:07 <pleia2> yep, just check on http://eavesdrop.openstack.org/
19:44:19 <pleia2> (also explains how to add your meeting)
19:44:35 <SotK> pleia2: great, thanks!
19:44:47 <anteaya> that time looks full on my ical feed but I'm sure you can find an open time somewhere for a weekly team meeting
19:45:07 <jeblair> SotK: thank you!
19:45:15 <AJaeger> wow, eavesdrop got a redesign - but the single ical feeds are still broken ;(
19:45:18 <jeblair> #topic Grafana.o.o (pabelanger)
19:45:57 <pabelanger> So, the puppet-grafana module is in good shape, I think.  So much so that I'm ready to ask for it to be merged.  However, one thing that is missing is the yaml file support for datasources
19:46:10 <pabelanger> right now we are using 2.0.0 but a patch was added into 2.1.0 for it
19:46:11 <pabelanger> https://github.com/grafana/grafana/issues/2218
19:46:20 <nibalizer> sweet i should reivew this
19:46:25 <pabelanger> however, 2.1.0 is not going to be released for another 2 months I think
19:46:34 <ttx> AJaeger: I shall look into that
19:46:46 <nibalizer> pabelanger: so did we end up using the community puppet-grafana or our own?
19:46:49 <pabelanger> so, the question is: is 2.0.0 good enough for now, or do we wait until 2.1.0 for full grafyaml support?
19:46:57 <pabelanger> nibalizer, upstream module
19:46:59 <nibalizer> or are you referring to the bits in system-config to use the community module
19:47:13 <jeblair> what are we missing in 2.0.0?
19:47:39 <pabelanger> the ability to create datasources via grafyaml
19:47:47 <pabelanger> right now it is a manual step
19:47:52 <jeblair> pabelanger: ah
19:48:07 <pabelanger> otherwise we need to hack the database to add our key
19:48:15 <pabelanger> which we can do, but it's pretty ugly
19:48:31 <jeblair> pabelanger: is that just a one time thing to add our key, then we can use grafyaml?
19:48:34 <pabelanger> once 2.1.0 lands, we'll update grafyaml to support basic auth
19:48:44 <pabelanger> jeblair, right
19:49:17 <jeblair> okay, so there's a manual step at server creation time (which will be automatable in 2.1.0), but other than that, we can start using grafyaml to manage the actual dashboards?
19:49:25 <pabelanger> correct!
19:49:29 <jeblair> woot!
19:49:40 <jeblair> i think in that case i'm okay with the manual stuff to get us started
19:49:51 <pabelanger> Ya, figured people would be cool with it
19:49:56 <pabelanger> reason I am asking
19:50:00 <jeblair> ++
19:50:07 <fungi> right now we're in a similar boat with gerrit and jenkins
19:50:21 <nibalizer> cool
19:50:25 <mordred> ++
19:50:33 <fungi> so it's a reasonable stopgap, especially since there's a light at the end of the tunnel (unlike with gerrit)
19:51:01 <jeblair> #agreed okay to spin up grafana and manually add key to database (grafana 2.1.0 will make this automatable)
19:51:19 <jeblair> pabelanger: anything else?
19:51:20 <pabelanger> ya, upstream grafana was more than happy to address the request
19:51:25 <pabelanger> jeblair, nope. thanks
19:51:28 <jeblair> oh that's good to hear!
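(For the record, once that one-time API key exists, pushing a dashboard is roughly the kind of HTTP call grafyaml automates. The sketch below is hedged: the hostname, key, and dashboard body are placeholders, and the payload shape is an assumption based on Grafana 2.x's documented API rather than grafyaml's actual code.)

    # Hedged sketch: create/update a dashboard via Grafana's HTTP API using
    # an API key.  Hostname, key, and dashboard contents are placeholders.
    import json

    import requests

    GRAFANA = 'http://grafana.example.org'   # assumed hostname
    API_KEY = 'REPLACE_WITH_API_KEY'

    dashboard = {
        'id': None,              # null id asks Grafana to create a new dashboard
        'title': 'Zuul Status',  # illustrative dashboard only
        'rows': [],
    }

    resp = requests.post(
        GRAFANA + '/api/dashboards/db',
        headers={'Authorization': 'Bearer ' + API_KEY,
                 'Content-Type': 'application/json'},
        data=json.dumps({'dashboard': dashboard, 'overwrite': True}),
    )
    resp.raise_for_status()
    print(resp.json())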
19:51:35 <jeblair> #topic The Ops Mid-Cycle details are now available, Palo Alto, Aug 18-19 (pleia2)
19:51:50 <pleia2> we hadn't really talked about this as a team, so I wanted to just mention it
19:52:03 <anteaya> I'm going
19:52:07 <pleia2> pretty sure crinkle and greghaynes are planning on attending along with me
19:52:18 <pleia2> so we've got hardware and puppet and infra
19:52:18 <anteaya> but my main motivation is the nova-net/neutron stuff
19:53:04 <greghaynes> pleia2: Yep!
19:53:04 <pleia2> of course our focus is re: infra-cloud
19:53:19 <anteaya> do we need our topic back?
19:53:24 <jeblair> maybe i ought to go too... since it's "close"  (even though travel-wise it's about as far as portland)
19:53:41 <anteaya> it would be nice to see you
19:53:45 <mordred> jeblair: you could fly
19:54:01 <greghaynes> I have always really enjoyed the ops summits FWIW, highly recommended if you can make it :)
19:54:09 <jeblair> pleia2: but yeah, in general that seems like a good thing for us to go
19:54:23 <jeblair> mordred: OAK-PAO on a cessna
19:54:28 <pleia2> ++
19:54:45 <pleia2> that's it on that topic, we can chat closer in about anything else
19:55:06 <fungi> i believe the openstack meetbot will fix the topic again on the next #topic change
19:55:12 <anteaya> ah
19:55:13 <jeblair> pleia2: cool, thanks!  i had missed the announcement, and it's a good idea.
19:55:20 <jeblair> #topic  Open discussion
19:55:32 <fungi> theory proven ;)
19:55:36 <anteaya> yay
19:55:46 * zaro will be mostly afk for next 2 weeks
19:55:58 <anteaya> zaro: happy vacationing
19:56:03 <zaro> i missed that announcement as well.
19:56:08 <pabelanger> pleia2, do you have a schedule for the ops meetup some place?
19:56:10 <pleia2> I'm doing CLS and some OSCON-near things, so I'll be scarce this Fri- next Wednesday
19:56:18 <greghaynes> I forgot to chime in during the mirror section (my time is all off ATM) - another big thing DIB tests are dealing with is lack of package mirrors in addition to image mirrors
19:56:21 <anteaya> it is on the operators mailing list
19:56:25 <fungi> i'm still sort of not around while i deal with new house stuff, so sorry for being in and out and generally minimally helpful
19:56:34 <greghaynes> specifically, debian and fedora have pretty unreliable package mirrors and dib tests use them
19:56:58 <pleia2> pabelanger: not sure how much they have yet, let's see...
19:56:59 <mordred> yah
19:57:02 <fungi> greghaynes: for that, can we rely on our package caching more?
19:57:07 <anteaya> #link http://lists.openstack.org/pipermail/openstack-operators/2015-July/007634.html
19:57:07 <pabelanger> I wanted to note I haven't heard much about stackalytics.o.o in a few weeks.  Unless anybody else has heard anything from mirantis
19:57:13 <greghaynes> fungi: in theory, yes, it's a lot of packages though
19:57:18 <nibalizer> pleia2: do you think there's any hope of getting a tag on the upstream grafana module
19:57:20 <mordred> greghaynes: so - I think we should make a more comprehensive TOTAL mirror solution spec
19:57:21 <pabelanger> any word back from their marketing team?
19:57:24 <greghaynes> I think we're quickly approaching the limit of where that is sustainable
19:57:25 <fungi> greghaynes: any guess at how big a primed dib job package cache would be?
19:57:25 <mordred> that includes all the things we want to mirror
19:57:26 <greghaynes> mordred: agreed
19:57:31 <nibalizer> using a random hash is okay i guess but tags are nice
19:57:38 <pleia2> pabelanger: there's a planning etherpad on https://wiki.openstack.org/wiki/Operations/Meetups
19:57:40 <greghaynes> fungi: nope, that is a good question though
19:57:59 <greghaynes> fungi: I bet I can hack up something with dib to figure it out
19:58:05 <anteaya> pabelanger: I have not heard anything no
19:58:09 <fungi> greghaynes: if it's just a few hundred megabytes, that's probably easy to accommodate
19:58:13 <pleia2> nibalizer: we can probably arrange that
19:58:33 <pleia2> nibalizer: though I think you meant pabelanger :)
19:58:35 <greghaynes> fungi: the problem is it's all the distros, so it's probably a few hundred meg * 5+
19:58:35 <nibalizer> pleia2: oh i was meaning to tab pabelanger ya
19:58:53 <pabelanger> jeblair, mordred re: stackalytics.o.o, any suggestion on nudging mirantis?
19:59:12 <fungi> greghaynes: oh, right, we can't rely on our nodepool-built cache for this because you're installing other operating systems in a chroot
19:59:31 <mordred> pabelanger: you know - I was just chatting with lsell earlier - maybe she or jbryce could be helpful in getting that sorted
19:59:33 <greghaynes> fungi: exactly
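(To greghaynes's point about sizing a primed package cache, a quick hypothetical helper like the one below could be pointed at whichever cache directory a job actually uses and total up what is on disk; the directory argument is deliberately left up to the caller since dib's cache layout isn't specified here.)

    # Hypothetical helper: sum the size of everything under a cache directory
    # to estimate how big a primed package/image cache would be.
    import os
    import sys

    cache_dir = sys.argv[1]  # e.g. the job's package cache directory

    total = 0
    for root, _dirs, files in os.walk(cache_dir):
        for name in files:
            path = os.path.join(root, name)
            if os.path.isfile(path):
                total += os.path.getsize(path)

    print('%s holds %.1f MiB' % (cache_dir, total / (1024.0 * 1024.0)))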
20:00:24 <jeblair> thanks all!
20:00:26 <jeblair> #endmeeting