16:01:29 <b3rnard0> #startmeeting OpenStack Ansible Meeting
16:01:29 <openstack> Meeting started Thu Mar 26 16:01:29 2015 UTC and is due to finish in 60 minutes.  The chair is b3rnard0. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:33 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:01:45 <b3rnard0> #chair cloudnull
16:01:45 <openstack> Current chairs: b3rnard0 cloudnull
16:01:52 <cloudnull> presente
16:01:54 <b3rnard0> #topic Agenda & rollcall
16:02:05 <b3rnard0> #link https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Agenda_for_next_meeting
16:02:07 <rackertom> o/
16:02:10 <stevelle> present
16:02:17 <b3rnard0> ello!
16:02:22 <palendae> o/
16:02:38 <Sam-I-Am> hello
16:02:42 <hughsaunders> hey
16:03:00 <hughsaunders> palendae: thanks for the redirect :)
16:03:01 <dstanek> o/
16:03:03 <Bjoern__> Cisco Hey
16:03:09 <Bjoern__> lol
16:03:11 <palendae> hughsaunders: welcome :)
16:03:17 <mattt> \o
16:03:51 <b3rnard0> #topic Review action items from last meeting
16:04:37 <b3rnard0> only one from last meeting was
16:04:40 <b3rnard0> odyssey4me Solicit feedback from the mailing list as to whether os package management should be part of the project?
16:05:03 <b3rnard0> not sure odyssey4me is here to answer that so we can prob keep it open
16:05:12 <cloudnull> i dont believe that has been done at this point.
16:05:32 <b3rnard0> okay, i'll keep it open
16:05:35 <hughsaunders> any community members have an opinion?
16:05:45 <cloudnull> ^ +1
16:06:15 <dstanek> is there a link to the thread?
16:06:29 <hughsaunders> not sure its been started yet
16:06:32 <cloudnull> http://eavesdrop.openstack.org/meetings/openstack_ansible_meeting/2015/openstack_ansible_meeting.2015-03-12-16.00.log.html
16:06:41 <cloudnull> that was from two weeks ago
16:07:31 <b3rnard0> #action odyssey4me Solicit feedback from the mailing list as to whether os package management should be part of the project?
16:07:43 <hughsaunders> 16:06:15 <odyssey4me> as I recall we agreed to wait until the 'manifesto' was compiled and agreed to before we went back to that
16:07:51 <hughsaunders> which manifesto?
16:08:18 <cloudnull> dstanek the tl;dr is if we should create a community apt repo for use within OSAD.
16:08:33 <cloudnull> hughsaunders manifesto does not exist at this point .
16:08:56 <palendae> As I remember, that was all about defining what's in scope for os-a-d
16:09:07 <cloudnull> palendae yes.
16:10:59 <cloudnull> so if nobody has anything to add to that, lets move on.
16:11:10 <palendae> Nothing here
16:11:30 <cloudnull> #topic Blueprints
16:11:45 <cloudnull> #link https://blueprints.launchpad.net/openstack-ansible
16:12:44 <alextricity> whats up
16:12:55 <cloudnull> on https://blueprints.launchpad.net/openstack-ansible/+spec/dynamically-manage-policy.json
16:13:24 <cloudnull> do you know whats going on with that? or can you get the people who were interested on working on it in here ?
16:14:14 <cloudnull> reason being is that bp and https://blueprints.launchpad.net/openstack-ansible/+spec/tunable-openstack-configuration both are loosely related.
16:14:24 <alextricity> Yeah.. give me a min
16:15:00 <cloudnull> so starting at the top: https://blueprints.launchpad.net/openstack-ansible/+spec/rsyslog-update
16:15:09 <cloudnull> that bp has been implemented. in master.
16:15:45 <cloudnull> it seems I neglected to change the state so thats resolved.
16:15:56 <cloudnull> sigmavirus24: https://blueprints.launchpad.net/openstack-ansible/+spec/additional-tempest-checks
16:16:00 <cloudnull> what say you ?
16:16:27 <sigmavirus24> I concur
16:16:34 <cloudnull> drafted by d34dh0r53 and assigned to you.
16:16:52 <sigmavirus24> https://bugs.launchpad.net/openstack-ansible/+bug/1422936 was marked as "Fix Committed" as well
16:16:54 <openstack> Launchpad bug 1422936 in openstack-ansible trunk "Tempest failing on TestNetworkBasicOps" [Medium,Fix committed] - Assigned to Nolan Brubaker (nolan-brubaker)
16:16:57 <sigmavirus24> so it would seem it's done
16:18:36 <andymccr> i believe those are in the aio
16:18:40 <andymccr> so yeh i think that's done
16:18:53 <cloudnull> ok.
16:18:57 <alextricity> cloudnull: I'm getting suda on here. He is responsible for that bp
16:19:05 <cloudnull> so lets talk about the new kilo bps.
16:19:20 <cloudnull> miguelgrinberg: https://blueprints.launchpad.net/openstack-ansible/+spec/heat-kilofication
16:19:45 <cloudnull> i see one wip change.
16:19:58 <mattt> cloudnull: yeah i've been trying to get some of that work done
16:20:03 <mattt> not sure how much progress miguelgrinberg has made
16:20:37 <miguelgrinberg> mattt: have not done anything with heat yet, assumed you were starting
16:20:45 <cloudnull> so can we change the commit ID to match that of the new overarching spec?
16:20:56 <cloudnull> just to clean up what we have so far ?
16:21:21 <cloudnull> IE https://blueprints.launchpad.net/openstack-ansible/+spec/master-kilofication
16:21:45 <mattt> cloudnull: sure
16:21:46 <cloudnull> 's/commit ID/implements tag in the commit/'
16:21:57 <mattt> cloudnull: i can do that
16:23:00 <cloudnull> i'd like to see that done with all of the other ones as well.
16:23:11 <sigmavirus24> We can also use "Partially implements" since the master-kilofication is probably not going to be fixed by one change
16:23:33 <mattt> sigmavirus24: good idea, and i think i followed sigmavirus24's lead on the commit message :P
16:23:49 <sigmavirus24> Should probably all use the appropriate topic branch too in gerrit to track these more easily
16:23:53 <sigmavirus24> mattt: probably, I did mine wrongly
16:24:09 <sigmavirus24> (Then again, one change can affect multiple bps)
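[Editor's note: the blueprint-tag convention being discussed above follows the standard OpenStack Gerrit commit-message footer format. A sketch of what such a commit message might look like for the overarching spec mentioned earlier; the subject line and Change-Id here are illustrative placeholders, not an actual change:]

```
Update heat playbooks for kilo

Bumps the heat role to target the kilo release as part of the
overall kilofication effort.

Partially-Implements: blueprint master-kilofication
Change-Id: I0000000000000000000000000000000000000000
```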
16:24:14 <sacharya> cloudnull: l am around
16:25:40 <palendae> I'm pretty sure there's not much to do with swift for kilofication; they've had 2 very small releases since Juno
16:26:04 <cloudnull> ok.
16:26:18 <palendae> 2.2.0 to 2.2.2
16:26:40 <cloudnull> sacharya can you go over the bp for https://blueprints.launchpad.net/openstack-ansible/+spec/dynamically-manage-policy.json
16:27:24 <cloudnull> is there code that pertains to that bp that we can review ?
16:27:33 <cloudnull> and move the bp into a beta status ?
16:28:26 <sacharya> oh yeah, so basically it's a custom module similar to the copy or template module… it will take src, updates and dest as arguments… src is json only for now, and any updates provided will update the key-values in that src json
16:28:41 <sacharya> I am calling it the copy_updates plugin for now… open to suggestions
16:28:58 <sacharya> I am basically cleaning it up right now and will send it for review soon
16:29:11 <cloudnull> nice.
16:30:17 <cloudnull> im moving the bp to "in progress" but would love to see a wip with partial implementation for review
16:30:19 <sacharya> not familiar with launchpad.. how do i move the bp to in progress ?
16:30:21 <sacharya> cool
16:30:51 <cloudnull> ill be happy to go over all of the lp bits offline.
16:31:09 <cloudnull> are there any other bp's that we want to talk about ?
16:31:33 <cloudnull> IE hughsaunders https://blueprints.launchpad.net/openstack-ansible/+spec/manage-resolv-conf
16:31:59 <cloudnull> also can we get some of these converted into specs?
16:32:20 <cloudnull> we now have a specs repo, where all bp work should go into moving forward .
16:32:51 <b3rnard0> #info cloudnull:	we now have a specs repo, where all bp work should go into moving forward .
16:32:53 <hughsaunders> cloudnull: that came from a situation during an upgrade.
16:33:18 <hughsaunders> cloudnull: I wasn't sure if the solution should go into OSAD, so created a blueprint for discussion
16:33:26 <hughsaunders> much discussion has not happened
16:33:55 <cloudnull> whats your opinion on the matter ?
16:34:32 <hughsaunders> batteries included, lets add a play to manage resolv.conf, but make it optional so people can use it if they require it.
16:35:57 <cloudnull> other thoughts ?
16:36:14 <Sam-I-Am> can we generate something and not copy it into place?
16:36:23 <Sam-I-Am> kind of like the container host-ip file
16:37:40 <palendae> hughsaunders: was that related to the scenario where an infrastructure DNS server was hosted on a cloud that got rebooted?
16:38:54 <hughsaunders> palendae: I don't think so. It was related to a scenario where the host's resolv.conf was managed by an external set of playbooks, but the containers all had the default from our playbooks (8.8.8.8), so the containers couldn't resolve internal things.
16:39:02 <palendae> Ah
16:39:03 <palendae> Ok
16:40:31 <cloudnull> so hughsaunders can you convert that to a spec and resubmit it for approval ?
16:40:39 <hughsaunders> ok
16:40:51 <hughsaunders> cue b3rnard0
16:41:29 <b3rnard0> #action hughsaunders convert that to a spec and resubmit it for approval
16:41:32 <cloudnull> so moving on, we have https://review.openstack.org/#/c/166986/
16:41:41 <cloudnull> that is implementing the changes required to make kilo
16:41:51 <cloudnull> so i'd love to get some more eyes on that .
16:42:12 <palendae> I'll rebuild that today to check the behavior mattt found
16:42:17 <cloudnull> such that its not holding up the work on the various other projects.
16:43:05 <cloudnull> are there any other reviews that we want to talk about ?
16:43:08 <mattt> palendae: i'm still having networking issues, but Apsu helped me get past some of them
16:43:13 <mattt> so i may remove that -2 for the time being
16:43:23 <palendae> Ah, so they were unrelated?
16:43:36 <palendae> I'm still going to double check networking, since I didn't touch that in my last test
16:43:37 <mattt> yeah, due to bad user variables configuration
16:43:59 <palendae> Ok
16:45:56 <cloudnull> i think if it's passing our min gating tests and gets past our individual build reviews we should move forward with it and then address shortcomings localized to the individual projects.
16:46:54 <mattt> i would love to see it pass a few scenario tests to know relative functionality is there
16:46:58 <mattt> i've not been able to do that tho
16:47:01 <cloudnull> i'd also like to track closer to the head of the master at this point to make sure we're able to catch issues as they come up as the various RC's are released over the next month.
16:47:40 <cloudnull> mattt agreed. i think once the RC's start rolling out we should re-enable the scenario tests.
16:48:01 <cloudnull> also we'll need to make sure we're on an appropriate release of tempest.
16:48:17 <mattt> cloudnull: yeah hughsaunders will be validating that i believe
16:48:18 <cloudnull> right now we're on the head of master as of a week ago.
16:48:19 <mattt> anyway -2 removed
16:48:33 <andymccr> that PR is quite large and seems to fix multiple bugs.
16:48:38 <andymccr> its really hard to review that thoroughly
16:48:42 <andymccr> is there really no way to split it?
16:48:59 <cloudnull> converting juno to kilo piecemeal is all but impossible.
16:49:12 <Sam-I-Am> ^ this
16:49:33 <cloudnull> not without disabling all of gating.
16:49:48 <Sam-I-Am> gating is overrated
16:49:48 <mattt> cloudnull: could the galera changes etc. be moved out to separate commits ?
16:49:54 <Sam-I-Am> things merge easier without it
16:50:13 <b3rnard0> don't you have diagrams to diagram?
16:50:25 <andymccr> it just seems "not ideal" to me to have these massive patches affecting everything as frequently as we are seeing them.
16:51:31 <cloudnull> well, in truth, about every six months there will be a large-ish patch to update to the latest OS release.
16:51:31 <mattt> andymccr: most of the changes are required to jump between juno/kilo
16:52:00 <andymccr> no i understand that - but surely we can do pre work
16:52:04 <cloudnull> mattt the galera changes could be backed out. they were added because i couldn't re-bootstrap a cluster without them
16:52:09 <b3rnard0> did we not agree in our contributing guidelines about the size of commits? sounds like these types of patches would be one exception
16:52:11 <andymccr> e.g. "patch 1 - prep keystone for kilo"?
16:52:28 <mattt> andymccr: i don't think that'd work
16:52:30 <cloudnull> andymccr what would that have looked like ?
16:52:41 <mattt> you're either using juno or kilo
16:52:53 <mattt> our system can't handle bits and pieces running different things
16:52:56 <andymccr> e.g. the glance change
16:53:06 <andymccr> could go in juno with options set for juno that would then work in kilo
16:53:18 <andymccr> the kilo patch would simply change the api version
16:53:22 <andymccr> with functionality added in juno
16:53:27 <palendae> Can the services themselves span versions? e.g. juno keystone and kilo glance?
16:53:33 <andymccr> that too.
16:53:40 <mattt> palendae: i think so?
16:53:49 <palendae> I honestly don't know, which is why I'm asking
16:53:50 <andymccr> im very much against blanket changes in the name of "making the boat go faster"
16:53:53 <cloudnull> keystone yes, most of the other services no.
16:54:05 <andymccr> so if there is a better way of getting this split out then id rather we do that.
16:54:13 <palendae> cloudnull: So kilo glance and nova expect both to be on kilo
16:54:24 <palendae> ?
16:54:31 <cloudnull> yes. generally speaking.
16:54:41 <cloudnull> that said i didn't specifically test that.
16:54:49 <cloudnull> i found no reason to.
16:54:50 <palendae> Yeah, I don't think any of us have
16:54:57 <mattt> i think that is going to complicate things
16:55:03 <mattt> i don't advocate us doing changes like that
16:55:11 <palendae> mattt: Yeah, neither is great
16:55:36 <palendae> I would say if we have multi-patch changes like that (one service at a time), it wouldn't be releasable anyway
16:56:05 <cloudnull> palendae yes gating would have to be disabled.
16:56:21 <palendae> Lots of things would be wonky, gating included
16:56:44 <Sam-I-Am> you generally can't mix service versions
16:56:47 <dstanek> i've found that beyond Keystone some of the projects add features from other projects; so only upgrading some here and there may require lots of testing
16:56:48 <Sam-I-Am> well, major versions
16:56:50 <palendae> Sam-I-Am: That makes sense
16:57:02 <Sam-I-Am> it *should* work because API standards...
16:57:11 <andymccr> well our gate is based on branches
16:57:13 <palendae> dstanek: I'd only propose it as a way to make the version to version upgrade patches more digestable
16:57:18 <palendae> Not anything long term
16:57:19 <andymccr> so if we adjust the gate tests once we move
16:57:28 <andymccr> we can then do more incremental changes
16:57:42 <andymccr> e.g. we increase the tests we do on the new branch as we improve it.
16:57:59 <dstanek> palendae: that would probably be fine then, just update the projects without deps first and work your way backward
16:57:59 <andymccr> allowing us to actually stick to our contributing guidelines and have easier to review PRs
17:00:00 <cloudnull> andymccr that wouldn't work from a python packaging perspective.
17:00:12 <cloudnull> we'd have conflicting requirements all over the place.
17:00:29 <andymccr> cloudnull: why? im simply suggesting we move to kilo, but reduce the gate tests for that branch
17:00:37 <stevelle> time
17:00:49 <cloudnull> how would we move to kilo?
17:00:51 <andymccr> that way we can move towards a working gate once we have everything fixed which happens more incrementally
17:01:13 <andymccr> you make a change to setup the services on kilo, and adjust the gate for that branch at the same time
17:01:14 <cloudnull> so master would be 100% broken until we fix all the things ?
17:01:17 <palendae> Shared package versions would be messed up
17:01:27 <andymccr> cloudnull: yes, but we can fix the issues in bits rather than 1 massive thing
17:01:32 <andymccr> the end result is exactly the same
17:01:42 <andymccr> just with smaller more manageable PRs that fit our contributing guidelines
17:01:51 <b3rnard0> is that it for this meeting? we are slightly over
17:02:06 <cloudnull> ah we're over , we can continue in the channel .
17:02:21 <b3rnard0> #endmeeting