16:00:37 <odyssey4me> #startmeeting OpenStack-Ansible
16:00:37 <openstack> Meeting started Thu Feb 25 16:00:37 2016 UTC and is due to finish in 60 minutes.  The chair is odyssey4me. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:41 <openstack> The meeting name has been set to 'openstack_ansible'
16:00:44 <odyssey4me> #topic Agenda and roll-call
16:00:50 <automagically> o/
16:00:53 <spotz> \o/
16:00:59 <izaakk> o/
16:01:25 <cloudnull> o/
16:01:29 <palendae> o/
16:01:30 <jmccrory> o/
16:01:40 <raddaoui> o/
16:02:14 <odyssey4me> #link https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Agenda_for_next_meeting
16:02:20 <KLevenstein> o/
16:02:45 <mattt> \o
16:04:40 <prometheanfire> hi
16:04:52 <odyssey4me> Hi everyone, welcome back after the week of absence due to the mid-cycles.
16:05:06 <odyssey4me> There were no action items in our last meeting.
16:05:19 <odyssey4me> #topic Define core team expectations/Add non-Rackspace cores
16:05:27 * odyssey4me hands the mic to automagically :)
16:06:22 <automagically> odyssey4me: Thanks! I added an agenda item about core expectations and the addition of non-Rackspace cores as I and others outside of Rackspace have a growing dependency on the fine work that you Rackers and the rest of the community have done here
16:07:07 <automagically> So, wanted to raise the issue of documenting expectations for core contributors and assess the group's thinking on recruiting/adding Rackspace-external cores
16:07:46 <automagically> Action items would be: wiki doc explaining core responsibilities and process for gaining/losing core status
16:07:50 <automagically> Thoughts?
16:07:56 <odyssey4me> I like the idea of documenting expectations for cores and reviewers overall. I've also been wanting to document expectations of the PTL. We can perhaps also document what the launchpad drivers group should be doing.
16:08:10 <hughsaunders> o/
16:08:17 <odyssey4me> I'd prefer it not to be wiki doc - I'd rather see it join the contributor guidelines in the docs.
16:08:19 <palendae> automagically: I am all for people outside of Rackspace becoming core, but I do think you're right that the expectations need to be written down
16:08:35 <odyssey4me> I'd like the wiki to largely be a place that points at the docs we have.
16:08:50 <automagically> odyssey4me: Location matters much less to me than getting it agreed upon and written down
16:09:43 <spotz> There's definitely groups out there who have documented out their criteria already that could be adapted or used for a start
16:09:47 <odyssey4me> I've also been wanting to suggest two new cores to the team. I have discussed them with the existing core members and there is a general agreement of approval.
16:09:47 <automagically> Any ideas about how to best recruit/retain/attract cores from outside Rackspace?
16:10:11 <automagically> I’m thinking the Austin event could be a good time/place to recruit
16:10:18 <odyssey4me> Are we happy to discuss that now quickly, or would it be preferred that it's done through the ML (as is traditional)?
16:10:23 <automagically> spotz: Agreed
16:10:54 <automagically> Believe you and I discussed https://wiki.openstack.org/wiki/Heat/CoreTeam as a useful example
16:10:55 <palendae> odyssey4me: I'd caution that the cores probably want to discuss before nominating someone on the public ML who might not get approved
16:11:20 <mattt> my personal opinion is that this project needs to be a bit more flexible and fluid when it comes to cores
16:11:24 <odyssey4me> in terms of recruiting cores - all cores need to start by being involved in the community first - we recruit new contributors at any events we are involved at
16:11:25 <palendae> That's caused embarrassment and hard feelings in the past
16:11:28 <mattt> i don't think we're in a position to start setting down hard requirements
16:11:41 <odyssey4me> I'd be happy to have a session to discuss how better to grow our community at the summit
16:11:46 <odyssey4me> I'd like this to be a group effort
16:11:53 <palendae> mattt: I don't think there should be hard requirements, but definitely a target for people who want to be core
16:12:10 <automagically> palendae: +1
16:12:12 <odyssey4me> yeah, not hard requirements - just expectation setting
16:12:14 <palendae> mattt: to help answer, "What should I do to get that status?" Be active in reviews, contribute code, etc
16:12:22 <hughsaunders> +1 for more cores, regardless of employer, we should also be working towards the diverse-affiliation tag
16:12:35 <mattt> palendae: i think you summarised it very well right there :)
16:12:40 <automagically> hughsaunders: How would we do so?
16:12:40 <spotz> +1 palendae and hughsaunders
16:12:45 <odyssey4me> being core is not a status - it is a responsibility and a role of service to the community
16:12:47 <automagically> re: diverse-affiliation
16:12:58 <palendae> odyssey4me's point is accurate
16:12:59 <odyssey4me> hughsaunders +1
16:13:06 <palendae> There is a bit more demand for a core reviewer
16:13:30 <hughsaunders> automagically: https://github.com/openstack/governance/blob/master/reference/tags/team_diverse-affiliation.rst
16:13:36 <automagically> thx
16:13:57 <automagically> Ah, so that was very much my goal in raising this agenda topic
16:14:11 <automagically> Nice to see it formalized in that manner within the broader community
16:14:14 <odyssey4me> hughsaunders automagically or better: https://governance.openstack.org/reference/tags/index.html
16:15:21 <automagically> So #agreed that core contributor expectations be added to the project doc
16:15:53 <cloudnull> +1
16:15:58 <mattt> can we move forward with bringing people on as cores or do we need further discussions, documentation, etc. in place first?
16:16:04 <mattt> i'd really love to see us adding cores sooner rather than later
16:16:08 <odyssey4me> can someone do some research into prior art and propose something for review - then we can adjust to suit us?
16:16:12 <cloudnull> mattt:  +1
16:16:22 <automagically> #action - update contributor guidelines to describe core responsibilities/expectations and membership process
16:16:49 <cloudnull> odyssey4me: https://wiki.openstack.org/wiki/Nova/CoreTeam#Membership_Expectations
16:16:57 <odyssey4me> automagically you need to have the first word be the person assigned to do so :)
16:17:02 <automagically> mattt: My belief is that it may be hard to accept a nomination without a good understanding of the level of responsibility
16:17:10 <spotz> odyssey4me I think that link to Heat's that automagically posted is pretty good. I can look for the Doc team's
16:17:21 <odyssey4me> #link http://docs.openstack.org/developer/neutron/policies/core-reviewers.html
16:17:27 <mattt> automagically: but i'm also not sure all our cores at the moment would meet any sort of level ... which is why i say we have to be more fluid
16:17:45 <mattt> and as the project grows and our cores become more diverse we put down a proper framework for what this involves
16:17:48 <automagically> mattt: I’m good with the expectations being lax given the reality of current cores
16:18:13 <odyssey4me> yeah, no worries - let's deal with the details in review
16:18:16 <automagically> odyssey4me: Thx for the tip. So, who wants to own the action item on doc. spotz? you?
16:18:22 <odyssey4me> automagically can you put a suggested page together?
16:18:45 <odyssey4me> otherwise spotz :)
16:18:49 <automagically> automagically #action automagically will submit patchset for review documenting core expectations
16:19:05 <automagically> Whoops, that's a lot of automagicallys ;)
16:19:17 <spotz> automagically poke if you need help
16:19:18 <hughsaunders> such magic so auto
16:19:23 <odyssey4me> alright, let's move on from that
16:19:26 <automagically> Happy to pass the mic, think the conversation went in a very useful direction
16:19:38 <odyssey4me> #topic New core proposals
16:19:53 <odyssey4me> I'd like to propose both jmccrory and automagically as new cores for OSA.
16:20:15 <hughsaunders> I'd contest that this topic is the same as the last one.
16:20:17 <odyssey4me> They've both been regular committers, reviewers and been very helpful in identifying and fixing issues.
16:20:27 <cloudnull> jmccrory: +1
16:20:30 <cloudnull> automagically: +1
16:21:16 <andymccr> +1 on both
16:21:17 <cloudnull> I think they'd both make fine additions to the core team.
16:21:23 <spotz> hughsaunders last one was procedure/policy, this one is voting
16:22:13 <palendae> Not sure if only cores get a vote, but no objections to either
16:22:32 <hughsaunders> +1 +1
16:23:03 <mattt> i'm +1 on both also
16:23:45 <odyssey4me> d34dh0r53 and stevelle aren't active/present, but that's a majority
16:24:22 <odyssey4me> so, welcome to the core team, both of you - I'll do the formalities afterwards?
16:24:42 * automagically whoot
16:24:59 <jmccrory> thanks
16:25:14 <odyssey4me> #topic Pinning pip and related dependencies
16:25:33 <odyssey4me> #link https://etherpad.openstack.org/p/openstack-ansible-pip-conundrum
16:25:51 * cloudnull high fives jmccrory and automagically
16:26:00 <spotz> grats guys
16:26:04 <mattt> jmccrory automagically welcome !
16:26:20 <odyssey4me> OK, down to the business of improving repeatability.
16:26:20 <automagically> Looking forward to continuing to contribute to such a great project
16:26:44 <odyssey4me> in recent days we hit two issues which uncovered a failing in our build methods
16:27:06 <odyssey4me> the general idea we aim for is to ensure that whenever you build a tag, the result is the same
16:27:33 <odyssey4me> today we are failing in terms of the repo server build and everything related to python bits before that
16:27:56 <odyssey4me> in the solution options I've outlined some suggestions - but I'd like more discussion and ideas
16:28:01 <odyssey4me> questions, comments, etc
16:28:06 <odyssey4me> please add to the etherpad
16:28:23 <automagically> The first solution in pip_install role seems like the cleanest
16:29:08 <odyssey4me> it covers pip, wheel and setuptools - but it's a hard set. I'd like a neater way to do this.
16:29:35 <odyssey4me> the first is probably something we need as a stop-gap, the second seems like a better long term view
16:30:26 <odyssey4me> something that requires as little maintenance as possible is essential
16:30:37 <automagically> odyssey4me: +1 on the low maintenance aspect
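[For readers following along: a minimal sketch of the stop-gap "hard set" discussed above, expressed as role defaults. The variable name pip_install_core_packages and the version numbers are illustrative only, not the pip_install role's real defaults.]

    # Hypothetical pip_install role defaults: pin the bootstrap tooling so a
    # given tag installs the same pip/setuptools/wheel every time it is built.
    # Versions shown are examples, not recommendations.
    pip_install_core_packages:
      - "pip==7.1.2"
      - "setuptools==19.6.2"
      - "wheel==0.26.0"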
16:30:58 <cloudnull> can't we add this to the global-requirements.txt files in the main repo for the three branches?
16:31:03 <automagically> The second solution begs the question of how a user/deployer might override if needed
16:31:16 <cloudnull> then we don't have to mess with the package versions at the role level
16:31:29 <odyssey4me> cloudnull the repo server install doesn't use any of the requirements set out anywhere
16:31:46 <odyssey4me> it only uses the bits in its own pip_packages list
16:32:01 <odyssey4me> we could change that
16:32:19 <cloudnull> right. but if we set it in the global requirements pins it would get picked up everywhere as it pertains to an OSA install.
16:32:26 <odyssey4me> but maintaining requirements for the repo server which conflict with openstack requirements is also not ideal
16:32:53 <odyssey4me> cloudnull so the pinning is not actually an issue for any of the packages once the repo is built
16:33:01 <odyssey4me> the issue is in building the repo server itself
16:33:14 <cloudnull> seems like the simplest solution would be to add the lines here https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt
16:33:15 <odyssey4me> it does not use global requirements, upper constraints, or anything like that
16:33:31 <cloudnull> it does
16:33:43 <odyssey4me> cloudnull how so?
16:33:45 <cloudnull> the lookup plugin py_pkgs indexes everything which instructs the repo
16:34:05 <odyssey4me> that is in building the repo, not installing the repo server
16:34:08 <mattt> cloudnull: installing packages in the repo_server prior to building stuff
16:34:28 <mattt> ie. to be able to build wheels you need wheel installed
16:34:36 <jmccrory> these https://github.com/openstack/openstack-ansible-repo_server/blob/master/defaults/main.yml#L83-L92 ?
16:34:44 <mattt> all those packages at the minute are unconstrained
16:34:57 <hughsaunders> jmccrory: yes
16:35:23 <cloudnull> ah i see now.
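[For context, a sketch of the kind of unconstrained bootstrap list under discussion; the actual entries live in the repo_server defaults linked above, so treat the names below as illustrative.]

    # Illustrative repo_server-style package list. With no version specifiers,
    # each build picks up whatever pip, setuptools and wheel are current on
    # PyPI at build time - which is the repeatability gap being discussed.
    repo_pip_packages:
      - pip
      - setuptools
      - wheel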
16:35:58 <odyssey4me> so my suggestion is to do an upper constraints style thing
16:36:15 <odyssey4me> we take the output of a successful build, and publish it on openstack infrastructure
16:36:26 <odyssey4me> the repo install then needs to use that as an upper constraint
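[A hedged sketch of what consuming such a published upper-constraints file could look like when installing the repo server's own tooling; the task names, variable names, URL and path are hypothetical.]

    # Fetch the constraints published from a known-good build, then install the
    # bootstrap packages with pip's --constraint option so transitive
    # dependencies are capped too. Names and URL are placeholders.
    - name: Fetch published upper constraints
      get_url:
        url: "https://example.org/osa/upper-constraints.txt"
        dest: "/opt/osa-upper-constraints.txt"

    - name: Install repo server pip packages under the upper constraint
      pip:
        name: "{{ item }}"
        extra_args: "--constraint /opt/osa-upper-constraints.txt"
      with_items: "{{ repo_pip_packages }}"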
16:37:09 <logan-> i would much rather see it included in the osa repo than published elsewhere and pulled down. easier for operators to override and also makes life easier for operators doing offline deployments.
16:37:16 <automagically> logan-: +1
16:37:20 <cloudnull> logan-:  +1
16:37:27 * cloudnull was just writing that
16:37:34 <automagically> I think we definitely need a solution that allows override capabilities similar to what we have elsewhere
16:37:56 <automagically> That said, I like the general approach of the upper bound constraints
16:38:06 <odyssey4me> then we have to regularly patch it to keep it fresh
16:38:22 <cloudnull> I think thats just something we have to do .
16:38:29 <odyssey4me> my method can easily allow it to be optional - as was noted in the etherpad
16:38:39 <automagically> With the force var?
16:38:56 <mattt> so is this effort better than just pinning a couple of packages in the repo server role?
16:38:59 <odyssey4me> I'd rather stay away from implementing caps - especially ones that duplicate those in openstack.
16:39:07 <mattt> will it really result in playing whack-a-mole?
16:39:15 <automagically> Thinking through this a bit, as a deployer how would I test a new set of constraints
16:39:28 <automagically> Just override the requirements file location var?
16:39:38 <odyssey4me> simple - set the var that ignores the upstream constraints
16:39:53 <automagically> Right, but I’m talking about testing a new set of constraints
16:39:57 <odyssey4me> and the location would be a var too, so you could set differing constraints
16:40:42 <odyssey4me> mattt the trouble with pinning that package list is that it misses pinning the deps of those packages - that is what creates the whack-a-mole situation
16:40:44 <automagically> Could we use a lookup with the requirements url, so if I didn’t want to publish my new set of requirements, I can just override the var with a list?
16:40:59 <mattt> odyssey4me: in the last 2 days the issues have been with those packages specifically tho
16:41:04 <cloudnull> I think we could add the same constraints in the repo server role as whats in the pip install role and be good.
16:41:05 <automagically> ^ thinking out loud obviously there
16:41:08 <odyssey4me> I would rather we use our final complete pip repo version list as an upper constraint - it contains absolutely everything we use
16:41:25 <mattt> we need to be careful of over-engineering this
16:41:29 <mattt> because we've hit 2 problems in 2 days
16:41:33 <odyssey4me> cloudnull whack-a-mole again then for the next time we find a gap like this
16:41:34 <mattt> i don't recall this biting us until then
16:41:54 <cloudnull> I'd like to get the requirement / constraint files published too but idk that it needs to be an integral part of the build process
16:43:10 <cloudnull> and if we pin them in the pip install role then the items here should already be resolved https://github.com/openstack/openstack-ansible-repo_server/blob/master/defaults/main.yml#L83-L92 IE wheel, setuptools.
16:43:12 <odyssey4me> we've had a lot of feedback from the infra and pypa crew that pinning these packages is not a good idea.
16:43:29 <odyssey4me> but we also need to ensure that we build the same thing every time and can rely on things to work
16:43:39 <mattt> but you're pinning them using a constraints file right
16:43:43 <mattt> i don't see the difference
16:43:44 <odyssey4me> which is why I like the idea of updating requirements after successful builds
16:43:57 <odyssey4me> or perhaps doing a nightly that updates the thing
16:44:27 <odyssey4me> mattt the difference is that one gets updated manually by a review - the other is updated dynamically and automatically
16:44:37 <cloudnull> as long as we're locking pip to a specific version i think we have to pin the packages and doing it in the pip install role makes the most sense
16:44:55 <mattt> well
16:45:10 <odyssey4me> ok, if that's the way we think is best - that's fine
16:45:11 <mattt> is there value outside of this specific problem to have our packages captured somewhere outside the env ?
16:45:26 <mattt> if there is we should do this and we can use it when we install the repo server
16:45:32 <odyssey4me> bear in mind that we then end up doing things differently from upstream
16:45:49 <odyssey4me> openstack tests all use the latest pip and the upper constraints for the rest
16:46:14 <palendae> Are they using pip8 now?
16:46:18 <odyssey4me> palendae yes
16:46:22 <mattt> yeah we're already doing something different now then
16:46:24 <palendae> They must, because it breaks every time a new pip comes out
16:46:28 <odyssey4me> for stable/kilo and above
16:46:40 <palendae> Yeah, makes sense
16:46:55 <odyssey4me> yes - which is why sigmavirus24 has already advised us not to do what we're doing
16:46:59 <cloudnull> looking at the global requirements the only named constraint they have is wheel
16:47:00 <odyssey4me> as have the infra crew
16:47:00 <cloudnull> https://github.com/openstack/requirements/blob/master/upper-constraints.txt
16:47:05 <cloudnull> https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L376
16:47:23 <odyssey4me> cloudnull g-r has pip too
16:47:45 <cloudnull> which is unpinned
16:47:46 <cloudnull> https://github.com/openstack/requirements/blob/master/global-requirements.txt#L139
16:47:52 <sigmavirus24> right
16:48:16 <sigmavirus24> it looks as though we're leaning towards uncapping pip?
16:48:24 <cloudnull> so i don't see the harm in pinning the other bits so long as we're allowing pip to move forward
16:48:37 <cloudnull> *we're NOT allowing ...
16:48:44 <odyssey4me> sigmavirus24 no, the majority at this point want to cap pip, setuptools and wheel
16:48:45 <hughsaunders> cloudnull: but that's how we ended up with the wheel > pip problem
16:48:52 * sigmavirus24 shrugs
16:49:03 <sigmavirus24> Have fun figuring out why things break when those are capped with upstream having them uncapped
16:49:14 <odyssey4me> sigmavirus24 I would like to uncap these things and use a published upper constraint that's updated by the build process dynamically
16:49:16 <cloudnull> hughsaunders:  idk if this is a problem if we're using the latest pip 8 -- sigmavirus24?
16:49:30 <sigmavirus24> cloudnull: if what specifically is a problem?
16:49:40 <sigmavirus24> if anything, I'd advocate blacklisting known bad versions
16:49:46 <mattt> sigmavirus24: well they break when you leave them uncapped also
16:49:48 <mattt> so what to do
16:49:52 <sigmavirus24> (especially since the pip team is responsive to openstack breakage)
16:49:57 <odyssey4me> sigmavirus24 which is already done upstream
16:50:21 <sigmavirus24> mattt: every version of pip has broken us?
16:50:37 <mattt> sigmavirus24: no, but setuptools wasn't constrained and that broke a bunch of things yesterday
16:51:03 <mattt> i don't see the issue if you pin the three in tandem
16:51:38 <mattt> and by tandem i mean the latest working version of all three at a point in time
16:51:38 <sigmavirus24> mattt: we can constrain the world in conflict with upstream openstack. we could help a resource starved project (setuptools). we could use an upper-constraints-like system as odyssey4me suggested
16:51:50 <odyssey4me> So the issue I have with capping is simple - openstack's testing all uses the current versions. There are no caps. If we do not follow that model, then we assume full responsibility of testing everything that the rest of openstack-ci has tested.
16:52:08 <sigmavirus24> mattt: the thing is that bounding them all in tandem is kind of silly since they're all independent pieces
16:52:25 <sigmavirus24> pip needs a modernish version of setuptools which doesn't have to be the latest
16:52:40 <sigmavirus24> we need a version of wheel that will generate wheel names that our version of pip can install
16:53:00 <mattt> right because we pinned pip without doing the same to wheel and setuptools
16:53:03 <mattt> which is why we hit that issue
16:53:05 <sigmavirus24> they're related, absolutely, but not deeply tied together by any stretch of the imagination. Is the problem with setuptools tracked on their issue tracker? Have we tried fixing things?
16:53:09 <mattt> wheel was updated and wasn't compatible w/ pip
16:53:12 <cloudnull> sigmavirus24: what would be the best solution knowing that we have locked ourselves to pip 7.x?
16:53:23 <odyssey4me> sigmavirus24 a patch is in progress
16:53:41 <sigmavirus24> cloudnull: we can cap the world, but it's a smell given that openstack itself isn't doing this
16:53:48 <sigmavirus24> I would advocate for blacklists
16:53:54 <odyssey4me> cloudnull we should not be locking ourselves to pip 7.x is the point
16:53:55 <sigmavirus24> pip!=8.0.0,!=8.0.1
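[sigmavirus24's blacklist approach, sketched as a defaults entry: leave pip uncapped but exclude the releases called out as broken, so newer versions still flow in automatically. The variable name is the same hypothetical one used in the earlier sketch.]

    # Blacklist known-bad releases instead of capping: stays current with
    # upstream while skipping the 8.0.x releases identified as broken above.
    pip_install_core_packages:
      - "pip!=8.0.0,!=8.0.1"
      - setuptools
      - wheel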
16:53:58 <mattt> so why did we pin pip to begin with and what could we have done to avoid that?
16:54:04 <mattt> because that decision is the source of this problem here
16:54:13 <cloudnull> odyssey4me:  we currently are though .
16:54:14 <sigmavirus24> mattt: 8.0.0 broke argparse
16:54:29 <odyssey4me> yes, but it raised a broader problem which is that what we deploy today is not what we deploy tomorrow
16:54:30 <sigmavirus24> mattt: so the thing is that 8.0.0 wanted to remove support for uninstalling distutils installed packages
16:54:44 <odyssey4me> this is why I'd like to implement something to close the gap
16:55:04 <sigmavirus24> that broke installing things with pip that might have installations pre-existing based on standard library or system packaging
16:55:08 <odyssey4me> simply blacklisting does not solve that issue
16:55:22 <cloudnull> which would have to be an upper cap on setuptools and wheel so long as we have this https://github.com/openstack/openstack-ansible-pip_install/blob/master/defaults/main.yml#L27
16:55:25 <sigmavirus24> odyssey4me: the issue of what we deploy today is not what we deploy tomorrow for the same version?
16:55:47 <odyssey4me> publishing the full pip requirements file from our repo per build will result in each tag having that set in stone
16:55:58 <sigmavirus24> odyssey4me: in that case, why not have vars that are generated when we create a tag that represent the versions of everything when that tag was created
16:56:02 <odyssey4me> this is the best way to ensure that each tag deployed will always result in the same thing
16:56:12 <mattt> odyssey4me: if you want to take that stance it has to be applied right through openstack-ansible, not just when it comes to deploying a specific container
16:56:12 <sigmavirus24> odyssey4me: capping will?
16:56:22 <automagically> sigmavirus24: Hmm, interesting middle ground on the vars generation
16:56:30 <odyssey4me> sigmavirus24 I'm suggesting something similar to what upper-constraints does for devstack
16:56:42 <sigmavirus24> odyssey4me: I think we have similar ideas
16:56:52 <odyssey4me> if we can automate vars generation on every sha jump, that'd do fine too
16:57:02 <sigmavirus24> I'm saying that we should let the gate do whatever. Only when we tag the release should we update those constraints.
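[A sketch of the tag-time capture odyssey4me and sigmavirus24 are converging on: freeze what a successful build actually installed into a constraints file that ships with the tag. Task names and the destination path are hypothetical.]

    # Hypothetical release-time tasks: record the exact versions from a
    # known-good build so later rebuilds of the same tag reuse them.
    - name: Capture versions from a successful build
      command: pip freeze
      register: frozen_requirements
      changed_when: false

    - name: Publish them as the tag's upper constraints
      copy:
        content: "{{ frozen_requirements.stdout }}\n"
        dest: "/opt/osa-tag-constraints.txt"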
16:57:29 <sigmavirus24> See I also disagree with our sha bumps being a thing on stable branches that happens *after a tag* but that's just me
16:57:42 <sigmavirus24> I know I'm the only person who thinks we should trust the small team of dedicated upstream stable maintainers
16:57:58 <odyssey4me> we're almost out of time
16:58:02 <sigmavirus24> (This is also why I've kept out of these discussions)
16:58:09 <odyssey4me> so we need to close off and continue in #openstack-ansible
16:58:51 <cloudnull> cheers everyone !
16:58:52 <odyssey4me> Thank you all for your time and participation.
16:59:43 <odyssey4me> #endmeeting