19:02:00 <jeblair> #startmeeting infra
19:02:01 <openstack> Meeting started Tue Sep 15 19:02:00 2015 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:05 <openstack> The meeting name has been set to 'infra'
19:02:12 <clarkb> jeblair: you need a sacrificial keyboard
19:02:29 <jeblair> clarkb: oh, of course, i have a model-m; i can just throw it in the dishwasher
19:02:34 <pabelanger> o/
19:02:40 <jeblair> #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:43 <jeblair> #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-09-08-19.01.html
19:03:01 <jeblair> #topic Actions from last meeting
19:03:05 <jeblair> clarkb see if github api supports transfers yet
19:03:23 <clarkb> I did and no
19:03:37 <clarkb> you can do renames with the edit repo api call but transfers don't appear to be a thing via the api
19:03:40 <jeblair> #info github api does not support transfers :(
19:03:48 <clarkb> there is even an open bug against github's bug tracker for it
19:04:07 <jeblair> clarkb: so, post oct 17 we might be in a better place
19:04:11 <fungi> i suppose we have one month for people to pester that bug asking for the feature
19:04:25 <zaro> o/
19:04:44 <clarkb> well, the bug is ancient
19:04:49 <clarkb> so I wouldn't get my hopes up
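
As an aside, the rename clarkb describes is the repository "edit" call in the GitHub REST API; a minimal sketch of it, with placeholder owner, repo, and token values, looks roughly like this (the missing piece is that there is no comparable call for moving a repo to a different organization):

# A hedged sketch (not part of the meeting log): renaming a repository via
# the GitHub "edit repository" API call clarkb refers to. Owner, repo, new
# name, and token below are placeholders.
import json
import requests

def rename_repo(owner, repo, new_name, token):
    url = 'https://api.github.com/repos/%s/%s' % (owner, repo)
    headers = {'Authorization': 'token %s' % token}
    resp = requests.patch(url, headers=headers,
                          data=json.dumps({'name': new_name}))
    resp.raise_for_status()
    return resp.json()

# Example (hypothetical values):
# rename_repo('stackforge', 'some-project', 'some-project-renamed', 'XXXX')
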
19:04:57 <jeblair> mordred look into better version of https://review.openstack.org/105057 or improve it
19:05:05 <jeblair> mordred isn't here....
19:05:09 <jeblair> and it doesn't look like that got improved
19:05:29 <jeblair> i missed some of the action on friday; were we able to use ansible at all?
19:05:50 <clarkb> we were not, I think jasondotstar was looking at it and said we shouldn't
19:05:52 <fungi> just the playbook we already have to clean up slave workspaces
19:06:07 <clarkb> we decided the ~6 hours we had were better spent prepping rather than scrambling on ansible
19:06:16 <AJaeger> I'm sorry for being late...
19:06:21 <jasondotstar> o/
19:06:45 <hogepodge> o/
19:06:48 <fungi> fell back on my script-creating-script ouroboros
19:06:49 <jasondotstar> I am working on it
19:06:49 <fungi> # https://review.openstack.org/222726
19:06:54 <fungi> #link https://review.openstack.org/222726
19:07:00 <jeblair> should we say that ship has sailed and just do our usual for oct 17?
19:07:01 * fungi fails at irc today
19:07:13 <jasondotstar> wasn't ready by last friday
19:07:41 <clarkb> jeblair: ya, and we were talking about testing it against review-dev too
19:07:47 <clarkb> so that may happen prior to the 17th
19:08:17 <jasondotstar> clarkb: +1
19:08:25 <jeblair> k, so keep working on it because it's still useful even after the stackforge move, but we're not counting on it at this point for that
19:08:32 <jeblair> (and maybe we'll be surprised)
19:08:39 <jasondotstar> ack.
19:09:22 <jeblair> jasondotstar: cool, thanks!
19:09:40 <jeblair> #topic Specs approval: Artifact Signing Toolchain (fungi)
19:09:47 <jeblair> #link Artifact Signing Toolchain https://review.openstack.org/213295
19:09:51 <jeblair> #info Artifact Signing Toolchain spec was approved
19:10:03 <Clint> \o/
19:10:05 <jeblair> and it merged
19:10:06 <fungi> noted, thanks all
19:10:31 <jeblair> (also, it's going to be someone else's turn to try to remember to do that soon ;)
19:10:53 <jeblair> #topic Schedule Project Renames
19:11:11 <jeblair> anyone want to talk about oct 17?
19:11:34 <jeblair> i don't have anything, but figure we should start talking about next steps soon
19:11:44 <fungi> fwiw, the wiki list is a small fraction of what we need to address
19:11:56 <fungi> at least so far
19:12:18 <fungi> we have 331 stackforge repos at the moment
19:12:36 <fungi> hopefully projects will step it up
19:13:09 <jeblair> maybe we should start sending some targeted emails, to recent contributors or project originators
19:13:22 <fungi> is gertty moving to git.inaugust.com?
19:13:24 <fungi> ;)
19:14:11 <fungi> but yeah, we could automate some sort of mass contact attempt
19:14:12 <jeblair> probably not :)
19:14:39 <jeblair> i'll take that one
19:14:52 <jeblair> #action jeblair automate some sort of mass contact attempt for stackforge move
19:14:58 <fungi> cool!
19:15:36 <jeblair> we have a next step; anything else on this?
19:16:09 <jeblair> let's say next week we'll look for someone to volunteer for writing some scripts; so heads up.  :)
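
As a very rough sketch of what automating that contact attempt might look like, assuming a local mirror of each stackforge repo and a repo list pulled from somewhere like gerrit/projects.yaml (both assumptions, nothing decided in the meeting), contributor addresses could be harvested from git history:

# A hedged sketch: gather recent contributor email addresses for each
# stackforge repo so they can be notified about the rename. REPOS and the
# mirror path are placeholders, not actual infra paths.
import subprocess

REPOS = ['stackforge/example-project']  # e.g. parsed from gerrit/projects.yaml

def recent_contributors(repo_path, since='1 year ago'):
    """Return the set of author emails seen in a repo since a given date."""
    out = subprocess.check_output(
        ['git', 'log', '--since', since, '--format=%ae'], cwd=repo_path)
    return set(out.decode('utf-8').split())

if __name__ == '__main__':
    for repo in REPOS:
        path = '/opt/git/%s' % repo  # assumes a local mirror of each repo
        for addr in sorted(recent_contributors(path)):
            print('%s %s' % (repo, addr))
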
19:16:26 <jeblair> #topic Priority Efforts (Swift Logs)
19:16:45 <jeblair> #link https://review.openstack.org/#/c/214207/
19:16:55 <jeblair> "Dynamic folder indexes are ready for review"
19:17:23 <cody-somerville> \o/
19:17:44 <jeblair> which is yay, because i think that was our major blocker for turning it on on more jobs
19:17:52 <fungi> agreed
19:18:02 <fungi> and also annoying for some jobs already using that
19:18:11 <fungi> thanks jhesketh!
19:18:16 <jhesketh> yeah, some reviews would be good
19:18:23 <jhesketh> (it's been up a couple of weeks)
19:18:37 <fungi> i see that. bumping up my list
19:18:40 <jhesketh> works okay locally.. is a little ugly, but at least functional
19:19:04 <jeblair> i love functional :)
19:19:24 <jeblair> #topic Priority Efforts (Migration to Zanata)
19:19:28 <fungi> jhesketh: mind if i update the review topic on it to match what's in the logs-in-swift spec?
19:19:38 <jhesketh> fungi: not at all, thanks
19:19:43 <pleia2> so translators are using Zanata now, just wrapping up some things WRT jenkins periodic jobs
19:19:45 <jhesketh> sorry I forgot to use the topic!
19:20:05 * fungi would like to be able to pretend that's the reason he hadn't reviewed it yet, but alas
19:20:30 <jeblair> pleia2: cool
19:20:30 <pleia2> nothing else meeting worthy :)
19:20:32 <clarkb> pleia2 has a change up to give zanata a bit more memory
19:20:41 <pleia2> yeah https://review.openstack.org/#/c/223721
19:20:44 <jeblair> translate-dev ran out of heap space this morning and got stuck spinning cpu
19:20:47 <jeblair> i restarted wildfly
19:20:55 <pleia2> jeblair: this patch will fix that
19:21:00 <jeblair> neat
19:21:09 <jeblair> also we should probably add them to cacti?
19:21:19 <pleia2> https://review.openstack.org/#/c/223687/
19:21:21 <pleia2> yep ^^
19:21:51 <jeblair> neat...if i say a third thing, will there already be a change in review for it? ;)
19:22:00 <fungi> quick, make it something important!
19:22:04 <pleia2> haha
19:22:13 <pleia2> I do have a docs change WIP, that's important, I hope to finish it this afternoon
19:22:36 <jeblair> docs are cool
19:22:40 <fungi> and important
19:22:43 <pleia2> we never had a docs translate page, will soon
19:22:49 <AJaeger> ;)
19:23:22 <jeblair> pleia2: thanks!
19:23:26 <jeblair> #topic Ansible roles under -infra (pabelanger)
19:23:37 <pabelanger> ohai
19:23:53 <jeblair> #link http://lists.openstack.org/pipermail/openstack-dev/2015-September/073857.html
19:24:12 <pabelanger> so, this one is just starting the discussion around what it would look like to move ansible roles upstream into the OpenStack git workflow.
19:24:39 <pabelanger> So, for example, I have ansible-role-nodepool, that I would like to start using downstream.  I know of some other people that would also like to consume it.
19:24:47 <pabelanger> so, I'm looking for a new home for the module
19:25:01 <yolanda> from our Gozer team, we'd like to give those ansible roles a chance as well
19:25:14 <pabelanger> trying to understand if -infra would be a good location for it, or some other place (git.o.o/openstack)?
19:25:26 <jeblair> this is a role to deploy nodepool?  eg, replace puppet-nodepool?
19:25:34 <pabelanger> I don't want to convince -infra to use ansible to provision nodepool
19:25:45 <fungi> in the past we'd approved inclusion of distro packaging repos for infra's software as part of infra even though we weren't expecting to consume said packages immediately (if ever)
19:25:59 <pabelanger> jeblair: no replacement, just another method to deploy nodepool
19:26:01 <jeblair> fungi: i think we're still hoping to, someday :)
19:26:06 <fungi> heh, true
19:26:24 <jeblair> pabelanger: but exclusive with puppet-nodepool, right?  they occupy the same space?
19:26:30 <pabelanger> correct
19:26:38 <pabelanger> same space
19:26:43 <SpamapS> config management fight! YAY!
19:26:50 <pabelanger> basically :)
19:26:55 <SpamapS> so productive
19:27:17 <SpamapS> seems like alt pieces of the puzzle should go in their own repos
19:27:39 <pabelanger> Personally, I am not sure -infra is the place for it, since -infra would not consume it.
19:27:48 <pabelanger> however, people in infra will use it
19:27:55 <pabelanger> downstream
19:28:24 <asselin> o/
19:28:32 <yolanda> if the scope is just for infra components, i think that infra is the place, under a separate project with an independent group
19:28:48 <jeblair> pabelanger: i realize it's not what you're asking, but let me just mention this for those who haven't heard it before -- we can definitely discuss replacing our use of puppet in infra with something else, though it needs to be a plan we get consensus on, and it needs to (eventually) be a wholesale replacement.  we don't want two config management systems in use at once.
19:29:47 <fungi> in our infra puppet modules we have support implemented for platforms we don't (and don't expect to) run. this seems like a similar situation
19:30:06 <pabelanger> jeblair: Right. And I don't want to do 2 different systems. Honestly, I see some of this moving independent of -infra and don't want to force the issue to replace puppet upstream
19:30:12 <ruagair> O/
19:30:30 <pabelanger> just looking for the best place to get the openstack git workflow, without setting up something external
19:31:30 <fungi> so the question is a) openstack-infra, b) openstack-ansible, c) some new project-team, d) unofficial repos in the openstack namespace
19:31:47 <jeblair> fungi: yeah, i'm sympathetic, though i don't want to do something to damage the progress we've made on actually collaborating on puppet; i'm a little worried about having two infra-official-looking ways of deploying stuff.
19:32:03 <fungi> that's certainly a legitimate concern
19:32:09 <clarkb> jeblair: I am also worried that this will just add to our workload after we have said "well we probably won't use this"
19:32:19 <clarkb> or at least that assertion seems to be out there right now
19:32:25 <greghaynes> I also wonder who would review it? would current infra cores want to?
19:32:29 <clarkb> specs, testing, etc
19:32:39 <fungi> i am unlikely to review changes for things we're not running
19:32:44 <pabelanger> greghaynes: no, if infra-core is not using it, I don't see them reviewing it
19:32:51 <jeblair> i'm lucky to review changes for things we are :)
19:32:56 <fungi> well, which we're not running and not working toward running
19:33:03 <pabelanger> another reason I don't think -infra is the place
19:33:14 <greghaynes> yea, so then im not sure what the value is of it being in infra land
19:33:43 <pabelanger> that leaves the openstack namespace and possibly an ansible team
19:35:36 <yolanda> how is that going to live with general openstack ansible projects?
19:36:17 <pabelanger> they both use ansible I guess
19:37:22 <fungi> also curious whether there's a downstream that's seriously interested in swapping out all the infra puppet they're using with ansible, in which case that tips the scale a bit for "might use it upstream if we replace everything" since that might already get us to the halfway mark
19:37:51 <clarkb> well it would be better to justify that a little bit more beyond "because we are allergic to puppet" if so
19:38:06 <fungi> agreed
19:38:09 <clarkb> since we actually have to switch in production
19:38:31 <yolanda> well, in our case, we started some orchestration with infra-ansible, so using ansible for the components looks like a natural step. We don't have a plan to replace puppet, but at least we are curious to test that approach
19:38:31 <jeblair> my reading is that there is lukewarm reception to hosting them in infra.  we're pretty accommodating in hosting some amount of related infrastructure that we aren't directly consuming, but this seems to be a bit too far away from what we're doing, with too little potential for collaboration (unless we decide to switch), for some of us to incorporate it into infra
19:38:42 <jeblair> yolanda: that's the thing...
19:38:50 <fungi> i'm certainly allergic to puppet, doesn't stop me from using it. i also might discover i'm allergic to ansible if i used it as much as i use puppet now
19:39:12 <jeblair> yolanda: as a group, we've said "ansible makes sense for orchestration, but puppet is sufficient, if not better, at configuration management" so that's the direction we're going
19:39:39 <jeblair> fungi: yeah, few of us are puppet cheerleaders, but we get over it because it lets us work together
19:39:52 <pabelanger> Ya, I don't think this is at the stage to justify moving from puppet to ansible. But, I do think there are some downstream folks that are interested in ansible.
19:40:47 <fungi> sounds like another "when you have a hammer every problem looks like a nail" situation. once you have the tool implemented for something it's good at, there's a temptation to use it to solve other problems too just because you already have it
19:40:51 <jeblair> pabelanger: we don't have to cover it here, but i'd like to know if, especially after we get rid of the puppetmaster with the ansible-launch-node work, why diverging is better than collaborating
19:41:19 <jeblair> s/if//
19:41:42 <pabelanger> jeblair: right, I can go into some details about that, maybe an email, at least explaining why I'm working on ansible stuff ATM
19:42:19 <fungi> the downstream justification would be interesting, if nothing else
19:42:42 <pabelanger> Ya, I can get some notes together and continue on ML
19:42:50 <jeblair> pabelanger: that would be cool if you could send it to the -infra list; we'll get some more eyes on it
19:42:57 <pabelanger> ack
19:43:39 <jeblair> pabelanger: anyway, i'm not getting very strong support for inclusion into infra.  but we don't want to lose opportunities to collaborate.  so maybe we call this a tentative "doesn't sound like the right place" for right now, but continue exploring on the ml?
19:44:30 <pabelanger> sure, that works for me.
19:44:37 <jeblair> thanks
19:44:39 <jeblair> #topic Nodepool consuming requirements (ianw)
19:44:51 <ianw> hello
19:45:09 <ianw> this came up yesterday, i think there might be old discussions around this before my time
19:45:21 <ianw> "is nodepool a openstack project"
19:45:37 <anteaya> what is the context for the question?
19:45:48 <anteaya> I have never asked myself that question that I recall
19:45:50 <fungi> we could rescope the question to "do all openstack projects participate in global requirements sync?"
19:46:10 <ianw> anteaya: consuming global requirements
19:46:19 <anteaya> ianw: thank you, now I understand
19:46:21 <clarkb> my concern here is I don't want to have to justify Infra's use of every lib we use
19:46:36 <AJaeger> Oh, that discussion ;( I gave up frustrated with openstack-manuals at one point ;(
19:46:44 <jeblair> in general, no, we do not hold infra projects to the same standard.  our goals and needs are deliberately separate from openstack.
19:46:44 <clarkb> because we have made some very specific decisions as far as design and deps that don't line up with openstack very well
19:46:45 <ianw> as of right now, the next dib release will break nodepool, which kind of sucks
19:47:08 <jeblair> ianw: well, i mean, we're a git commit away from fixing it :)
19:47:09 <clarkb> though pymysql is now used by everyone else at least (that was the big one)
19:47:38 <ianw> jeblair: yeah, we only noticed because pabelanger was playing with it.  it would suck if puppet autodeployed it at a bad time though
19:47:44 <fungi> i'm less worried about having to justify choice of dependencies, more worried about maintaining some logical separation between the projects which you install to run an instance of "openstack" (for some definitions thereof) and what we develop and maintain to support our community infrastructure
19:47:54 <jeblair> i don't think nodepool, or other infra projects, should participate in global reqs sync.  we are not intending to be part of the combined deliverable we vaguely call openstack.
19:48:19 <jeblair> ianw: yeah, though, tbh, it's probably not going to kill us if pbr flaps on nodepool.o.o
19:48:28 <greghaynes> AIUI the problem that came up here is that dib does req sync and had an incompatible dep with nodepool
19:48:37 <greghaynes> which should be rare, I hope
19:49:09 <ianw> the other thing is people are packaging nodepool, and version skew is an issue
19:49:28 <greghaynes> version skew how?
19:49:30 <jeblair> greghaynes: yeah, and any lib can cause that.  i think the overlap is small enough that we don't need the full power of the requirements repo to address it when it does
19:49:35 <fungi> well, plenty of nodepool's dependency chain might release versions with incompatible transitive dependencies. dib is little different in that regard
19:49:52 * jeblair just lets fungi talk :)
19:50:06 <fungi> heh, after you!
19:50:13 <greghaynes> agreed
19:51:15 <jeblair> and we can definitely broaden requirements ranges as needed for packagers
19:51:18 <fungi> i think one place we've seen some overlap between vaguely openstack services and openstack infra projects is solum, which was considering using zuul as its scheduler
19:51:37 <ianw> so it feels like there's not much love for syncing with global requirements
19:52:21 <fungi> i think the situation we engineered requirements synchronization to solve is not a problem we run into often enough in infra projects
19:52:28 <ianw> maybe we should just ensure we're testing dib from git?  that would give us a heads up on issues there, at least?
19:53:04 <jeblair> not a bad idea, maybe a non-voting job?
19:53:13 <greghaynes> Does the nodepool dsvm test not do that?
19:53:18 <clarkb> greghaynes: it does not
19:53:36 <clarkb> greghaynes: it pip installs nodepool's source which installs dib from pypi
19:53:45 <greghaynes> ah
19:53:49 <fungi> can you elaborate on the degree of impact? is it going to crash nodepoold when we update dib? or keep it from starting when we restart? or keep us from being able to upgrade/reinstall it?
19:53:51 <jeblair> (i would also like to avoid pulling nodepool into a shared queue with dib, at least at the moment)
19:54:59 <ianw> fungi: it will be a pbr conflict; i'm not 100% sure but i guess it would be broken on restart?
19:55:05 <fungi> if the majority of the impact is that our nodepool unit or integration tests begin failing when dib releases, that's more or less working as intended
19:55:05 <ianw> at least
19:55:14 <clarkb> I think it would just fail to upgrade nodepool
19:55:21 <clarkb> but restarts would run the old version
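
For illustration, a consistency check along these lines (a sketch only, assuming nodepool is installed as a distribution named 'nodepool') shows how the conflict clarkb and fungi describe would surface: pkg_resources raises VersionConflict when an installed dependency, such as a dib release with stricter pins, no longer satisfies the requirement chain, which is roughly the failure pip would report on upgrade:

# A hedged sketch: check whether the installed nodepool and its dependency
# chain (including diskimage-builder and pbr) still form a consistent set.
# A dib release whose pins conflict with nodepool's would show up here as a
# VersionConflict rather than silently breaking a later upgrade.
import pkg_resources

def check_consistent(dist_name='nodepool'):
    try:
        pkg_resources.require(dist_name)
    except pkg_resources.VersionConflict as exc:
        print('conflict: %s' % exc)
    except pkg_resources.DistributionNotFound as exc:
        print('missing: %s' % exc)
    else:
        print('%s dependency set looks consistent' % dist_name)

if __name__ == '__main__':
    check_consistent()
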
19:56:04 <fungi> just want to make sure that we don't implement a solution to this which is even more inconvenient than the problem
19:56:46 <ianw> so in summary: global requirements = no ... testing dib from git = propose something and we can see
19:56:52 <ianw> ?
19:57:30 <fungi> that's my take
19:57:33 <jeblair> ianw: ++
19:57:43 <ianw> ok, thanks
19:57:46 <greghaynes> sgtm
19:57:49 <clarkb> sounds good, I think you just need a second dsvm job that does dib from source
19:57:57 <clarkb> just an extra devstack flag iirc
19:58:13 <pabelanger> Ya, that is how I caught the issue, git install and simple dib create
19:58:43 <jeblair> ianw: thanks
19:58:44 <jeblair> #topic Open discussion
19:58:48 <jeblair> 2 mins!
19:59:00 <yolanda> so i need reviews for https://review.openstack.org/#/c/206582/
19:59:09 <yolanda> anteaya needs that to start using apache log rotation
19:59:21 <yolanda> it has a +2, but needs more eyes on it
19:59:29 <anteaya> fewer apache logs on gerrit, yay!
20:00:04 <fungi> oh, who was looking into vcsrepo updating to upstream and getting off our fork? check in with me
20:00:13 <jeblair> yes, getting that in soon would be good
20:00:13 <clarkb> mrmartin?
20:00:19 <jeblair> thanks everyone!
20:00:21 <jeblair> #endmeeting