16:02:43 <cloudnull> #startmeeting OpenStack Ansible Meeting
16:02:43 <openstack> Meeting started Thu Jul 16 16:02:43 2015 UTC and is due to finish in 60 minutes.  The chair is cloudnull. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:45 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:48 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:03:00 <cloudnull> hello everyone
16:03:03 <cloudnull> #topic Agenda & rollcall
16:03:11 <cloudnull> o/
16:03:20 <jwagner> \o
16:03:33 <odyssey4me> o/
16:03:37 <prometheanfire> \o
16:03:38 <sigmavirus24> o/
16:04:01 <serverascode> o/
16:04:11 <stevelle> o/
16:04:12 <andymccr> o/
16:04:34 <hughsaunders> lo
16:05:19 <palendae> Hi
16:06:36 <cloudnull> so i guess we're all here.
16:06:38 <sigmavirus24> Anddddd endmeeting
16:06:40 <sigmavirus24> =P
16:06:42 <cloudnull> #topic Review action items from last week
16:06:58 <cloudnull> we only had the one - test upgrading from kilo to liberty (master)
16:07:05 <cloudnull> palendae:  i think that you gave this a go
16:07:25 <cloudnull> and we found that not having an epoch in the version number for liberty was causing problems.
16:07:28 <cloudnull> cc sigmavirus24
16:07:34 <sigmavirus24> so
16:07:39 <sigmavirus24> I think I figured out a way to work around that
16:07:44 <sigmavirus24> But I have yet to put together a POC
16:07:49 <palendae> cloudnull: I did juno to master, but yes
16:08:07 <palendae> I'd assume any upgrade where master is the end target would hit this, though
16:08:16 <sigmavirus24> Basically the plan I have is that if we just change the wheel names, pip will work with the upgrade from kilo to liberty
16:08:18 <palendae> And *technically* we don't support Juno straight to Liberty
16:08:29 <sigmavirus24> Then we purge the kilo versions and change the liberty wheel names if we want to keep them around
16:08:44 <palendae> Cause our unspoken rule was upgrades should go ((current + 1) + 1)
16:08:51 <sigmavirus24> The versions of the servers will be X.0.0 and so upgrading without epochs from there will be fine
16:09:01 <sigmavirus24> But we can't keep around legacy (read Kilo) packages
16:09:18 <sigmavirus24> Basically upgrading kilo to liberty will be a nightmare because the release team cares more about aesthetics
16:09:48 <cloudnull> which sucks.
16:10:05 <sigmavirus24> ¯\_(ツ)_/¯
16:10:42 <sigmavirus24> They've made it clear at this point they will not deal with the problems they've caused so I'm not going to expend more energy on a mailing list discussion that isn't going to progress. Even if they misrepresent how installs actually work from people doing production source-based installs
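(Context for the epoch discussion above: this is a plain-Python sketch of the ordering problem, not pip's actual version logic. Kilo-era projects used date-based versions like 2015.1.0, while Liberty moved to semver-style numbers like 12.0.0, so without a PEP 440 epoch marker pip considers the old version "newer". The parser below is a deliberate simplification of PEP 440.)

```python
def parse_version(v):
    """Toy PEP 440-style parser: split a version string into
    (epoch, release-tuple). A missing epoch defaults to 0, which is
    exactly why Kilo's 2015.1.0 outranks Liberty's 12.0.0 unless an
    explicit epoch like '1!12.0.0' is present."""
    epoch, _, release = v.rpartition("!")
    return (int(epoch or 0), tuple(int(p) for p in release.split(".")))

# Kilo's date-based version sorts above Liberty's semver-style one...
assert parse_version("2015.1.0") > parse_version("12.0.0")
# ...but an epoch marker resets the ordering, since epoch compares first.
assert parse_version("1!12.0.0") > parse_version("2015.1.0")
```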
16:11:17 <cloudnull> if we have opinions about this, which i think we do, please contribute to the conversation here: http://lists.openstack.org/pipermail/openstack-dev/2015-July/069556.html
16:12:17 <Sam-I-Am> sigmavirus24: tell us how you really feel?
16:12:35 <cloudnull> so we'll need to carry an action item to figure out a fix for that moving forward.
16:12:44 <sigmavirus24> cloudnull: I just need to find time to put together a POC
16:12:54 <sigmavirus24> The one off upgrades will be a nightmare for in-place upgrades
16:13:01 <cloudnull> #action add more hours to the day so sigmavirus24 can put together a POC
16:13:03 <sigmavirus24> And just for kilo->liberty
16:13:09 <sigmavirus24> Appreciated
16:13:13 <cloudnull> lol
16:13:29 <cloudnull> #topic Blueprints
16:13:45 <cloudnull> there are a few more specs online https://review.openstack.org/#/q/status:open+project:stackforge/os-ansible-deployment-specs,n,z
16:13:50 <cloudnull> they all need reviews.
16:14:03 <cloudnull> specifically https://review.openstack.org/#/c/194255/
16:14:22 <cloudnull> which i think is not far off from the work that has been going into upstream for better Keystone support.
16:14:52 <cloudnull> #link https://blueprints.launchpad.net/openstack-ansible/+spec/keystone-sp-adfs-idp
16:15:47 <cloudnull> #link https://blueprints.launchpad.net/openstack-ansible/+spec/keystone-federation
16:16:42 <cloudnull> odyssey4me: miguelgrinberg hughsaunders how are things going on all that
16:17:06 <odyssey4me> not doing too badly - some work is still ongoing for the keystone IdP
16:17:20 <miguelgrinberg> the SP is looking pretty good, IdP needs a day or two more
16:17:33 <hughsaunders> cloudnull: nearly there, just testing the playbook modifications at the moment
16:17:38 <odyssey4me> the Keystone SP is largely done - we have a tested configuration working for TestShib (a public test IdP)
16:18:13 <odyssey4me> I'm busy validating the ADFS use-case and have discovered a lovely issue when doing SSL offloading, which is obviously a production related thing
16:18:26 <odyssey4me> anyway, I'm not giving up yet
16:18:31 <cloudnull> i changed the keystone-federation in LP to good progress.
16:19:37 <cloudnull> anything that we want to touch on in terms of specs / bps ?
16:19:53 <cloudnull> anything that needs a spec / bp that we're not already working that we think we should be ?
16:20:05 <cloudnull> cc BjoernT ^
16:20:44 <BjoernT> ?? are we still talking about federation?
16:21:09 <cloudnull> [11:19] <cloudnull> anything that we want to touch on in terms of specs / bps ?
16:21:10 <cloudnull> [11:19] <cloudnull> anything that needs a spec / bp that we're not already working that we think we should be ?
16:21:22 <BjoernT> nope
16:21:28 <cloudnull> okiedokie.
16:21:37 <cloudnull> #topic Open discussion
16:22:06 <cloudnull> #startvote Can/should we re-enable the successerator w/ 1 retry for Master / Kilo?
16:22:07 <openstack> Begin voting on: Can/should we re-enable the successerator w/ 1 retry for Master / Kilo? Valid vote options are Yes, No.
16:22:09 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
16:22:21 <cloudnull> this is around our gating issues .
16:22:35 <cloudnull> we presently set the successerator to 0/1 for master/kilo
16:22:46 <cloudnull> should we re-enable it ?
16:23:05 <stevelle> is this the original successerator?
16:23:06 <jwagner> sorry if i am out of the loop, but why was it lowered?
16:23:07 <cloudnull> most of the issues in flight have been seen as transient except in the case of HPB4.
16:23:15 <cloudnull> stevelle: yup
16:23:17 <cloudnull> its still there.
16:23:24 <cloudnull> just with retries set to 0
16:23:39 <cloudnull> git-harry hughsaunders ^
16:23:55 <andymccr> hm
16:24:02 <odyssey4me> hm
16:24:06 <cloudnull> #vote yes
16:24:11 <hughsaunders> #vote yes
16:24:30 <stevelle> #vote yes
16:24:37 <cloudnull> that said we can rip it back out once ansible v2 drops and we get it baked in
16:24:38 <andymccr> in theory we should address the issues if possible, but im guessing that isnt the case
16:24:39 <andymccr> so
16:24:40 <andymccr> #vote yes
16:24:46 <andymccr> ahh yeh true
16:24:52 <andymccr> lets set that as an aim
16:24:55 <palendae> #vote yes
16:25:06 <palendae> On the proviso that we do the v2 rip out and such
16:25:12 <odyssey4me> yeah, happy to do it to see if it helps and until ansible v2 drops
16:25:19 <odyssey4me> #vote yes
16:25:23 <prometheanfire> #vote yes
16:26:29 <cloudnull> #endvote
16:26:31 <openstack> Voted on "Can/should we re-enable the successerator w/ 1 retry for Master / Kilo?" Results are
16:27:13 <cloudnull> next: How should we handle the change in OpenStack package version numbers? odyssey4me
16:27:15 <cloudnull> cc sigmavirus24
16:27:40 <sigmavirus24> You mean in liberty?
16:27:43 <cloudnull> i guess we already covered this,
16:27:45 <cloudnull> yes
16:27:51 <cloudnull> i assume odyssey4me ?
16:28:03 <odyssey4me> lol, personally I think this is best handled as a once-off... if we perpetuate the model we're going to have to live with it for ever
16:28:06 <sigmavirus24> Yeah leave the actual python package metadata alone, muck with the filenames since that's what pip checks for upgrades/installs
16:28:15 <sigmavirus24> odyssey4me: read upwards, that's what I'm suggesting
16:28:30 <sigmavirus24> I think there's a way to do it, but that one-off will be painful no matter what
16:28:37 <cloudnull> #link http://lists.openstack.org/pipermail/openstack-operators/2015-June/007390.html
16:28:39 <sigmavirus24> Thanks ReleaseMGMTTeam
16:28:39 <cloudnull> #link http://legacy.python.org/dev/peps/pep-0440/#version-epochs
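(Sketch of the filename-only workaround described above: pip reads a wheel's version from its filename, which follows the {dist}-{version}-{pytag}-{abitag}-{plattag}.whl pattern, so renaming the file changes what pip sees without touching the package metadata. The regex below is a simplification that ignores build tags, and the versions shown are illustrative, not real releases.)

```python
import re

def rename_wheel(filename, new_version):
    """Swap the version component of a wheel filename, leaving the
    distribution name and compatibility tags untouched. Illustrative
    only: real wheel names may also carry an optional build tag."""
    m = re.match(r"^([^-]+)-([^-]+)-(.+\.whl)$", filename)
    if not m:
        raise ValueError("not a wheel filename: %s" % filename)
    dist, _old_version, tags = m.groups()
    return "%s-%s-%s" % (dist, new_version, tags)

# e.g. re-present a Liberty-era wheel under a version that sorts above
# Kilo's 2015.x scheme (hypothetical values):
renamed = rename_wheel("nova-12.0.0-py2-none-any.whl", "2016.0.0")
# renamed == "nova-2016.0.0-py2-none-any.whl"
```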
16:29:01 <odyssey4me> and what's the issue with doing a kilo->liberty reboot of packages?
16:29:22 <odyssey4me> ie remove then install again for hosts, and kill containers and rebuild
16:29:32 <odyssey4me> (or something to that effect)
16:29:37 <sigmavirus24> Are in-place upgrades no longer wanted?
16:29:46 <sigmavirus24> I thought that was a bit of a goal/feature we had going
16:29:48 <cloudnull> we want in-place upgrades for sure.
16:30:35 <odyssey4me> I'm just saying that containers can be replaced easily. If we stage it right, it'll even be without downtime.
16:31:04 <odyssey4me> And packages too - we could even try a force 'downgrade' as an approach.
16:31:36 <cloudnull> i think that we need to spike on how to best deal with these changes moving forward.
16:31:42 <andymccr> agree with that
16:31:45 <odyssey4me> I just don't see the point of introducing some arbitrary prefix to packages.
16:31:48 <cloudnull> can someone get a thread going on the ML to that effect  ?
16:35:22 <cloudnull> So maybe next time? Anyone want to help out with that effort?
16:37:04 <odyssey4me> I'm interested in doing a spike on it, but it'll only probably happen in two weeks or so for me.
16:37:17 <andymccr> cloudnull: im happy to get on board with that too
16:37:28 <andymccr> (im ooto after next week though for a bit so same deal as odyssey4me really)
16:38:06 <sigmavirus24> I'll see if I can get a spike done next week
16:38:32 <sigmavirus24> I don't see any way to prevent the operator from having to be very cognizant of this upgrade problem though
16:38:47 <palendae> tbh the operator should be cognizant of upgrade problems
16:39:01 <sigmavirus24> If there's any that we can make easier, though, we should
16:39:03 <sigmavirus24> This one we can't
16:39:07 <palendae> True
16:39:15 <sigmavirus24> We'll need really good documentation around this though so we'll need Sam-I-Am's help
16:39:39 <palendae> They are still ultimately using openstack, though, so it's very much worth pointing out that this decision happened
16:40:07 <sigmavirus24> certainly
16:41:12 <stevelle> document the atrocities
16:41:34 <cloudnull> ok so anything else we want to talk about ?
16:41:38 <cloudnull> its open mic time
16:43:19 <cloudnull> okiedokie.
16:43:50 <cloudnull> #action re-enable the successerator for a single retry within the gate. To be removed as soon as Ansible v2 drops upstream - someone
16:44:20 <cloudnull> #action get a thread going on the mailing list surrounding issues with upgrading Kilo > Liberty - someone
16:44:23 <cloudnull> #endmeeting