16:00:02 <b3rnard0> #startmeeting OpenStack Ansible Meeting
16:00:03 <openstack> Meeting started Thu Jan 22 16:00:02 2015 UTC and is due to finish in 60 minutes.  The chair is b3rnard0. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:05 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:07 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:00:52 <cloudnull> so hey everybody
16:01:00 <Sam-I-Am> hi
16:01:01 <b3rnard0> #topic RollCall
16:01:12 <mattt> \o
16:01:19 <Sam-I-Am> here
16:01:19 <palendae> Hello
16:01:19 <b3rnard0> #link https://wiki.openstack.org/wiki/Meetings/openstack-ansible
16:01:21 <odyssey4me> o/
16:01:25 <cloudnull> present
16:01:25 <hughsaunders> \o
16:01:25 <andymccr> \o
16:01:29 <sigmavirus24> present
16:01:32 <d34dh0r53> pre sent
16:01:38 <palendae> present
16:01:41 <klamath> here
16:01:54 <cfarquhar> present
16:02:01 <hughsaunders> git-harry: present
16:02:11 <b3rnard0> #topic The state of "10.1.2" and "9.0.6", loggin, bugs, ect. - (cloudnull)
16:02:40 <mancdaz> present
16:02:52 <cloudnull> looks like I need "apell".
16:03:01 <cloudnull> ^see
16:03:29 <cloudnull> so odyssey4me andymccr , how are we looking towards getting logging all happy ?
16:03:55 <andymccr> i think the main work that was required has gone in already - courtesy of odyssey4me
16:04:00 <andymccr> so we should be fine.
16:04:06 <b3rnard0> #link https://bugs.launchpad.net/openstack-ansible/+bug/1399371
16:04:26 <odyssey4me> cloudnull: one patch left before it's all hunky dory - https://review.openstack.org/#/c/148623/
16:05:13 <odyssey4me> essentially the only bit left is that for some reason the swift logs aren't coming through properly - the rsyslog container doesn't send them... I'm still trying to figure out the problem there
16:05:40 <cloudnull> odyssey4me: is that something that maybe d34dh0r53  can help out with ?
16:06:05 <andymccr> odyssey4me: i'll look into that because i know i removed the rsyslog containers from being built on the swift storage hosts
16:06:18 <odyssey4me> sure - in truth anyone who has a working build can help to figure out what the cause is - once I know that it'll be easy to roll in the patch
16:06:51 <odyssey4me> it may be an aio specific issue, in which case we can ignore it - but I could do with assistance to triage, thanks andymccr
16:07:07 <andymccr> ive got a build almost completed now
16:07:17 <andymccr> so will be no problem
16:07:28 <b3rnard0> #action andymccr to investigate why rsyslog container doesn't send swift logs issue
16:07:33 <odyssey4me> ok - we can sort that out before COB tomorrow then
16:07:42 <andymccr> e-z
16:07:45 <d34dh0r53> let me know if I can assist as well
16:07:55 <cloudnull> sweet.
16:08:07 <b3rnard0> #action d34dh0r53 To help andymccr on rsyslog issue
16:08:21 <andymccr> i could use a coffee - milk, 1 sugar :D thanks d34dh0r53
16:08:27 <hughsaunders> haha
16:08:39 <cloudnull> so for our open bugs within 9.0.6.
16:08:44 <d34dh0r53> :) on its way, may be a little cold when it gets there
16:09:08 <b3rnard0> #topic 9.0.6
16:09:09 <cloudnull> between mattt odyssey4me and mancdaz everything looks "in progress"
16:09:45 <cloudnull> odyssey4me: logging should be good in 9.0.6
16:09:54 <cloudnull> because no swift?
16:09:54 <mancdaz> cloudnull yeah I backported the stuff that was tagged as a potential, after assessing if it was needed
16:09:57 <b3rnard0> #link https://launchpad.net/openstack-ansible/+milestone/9.0.6
16:09:59 <mancdaz> some of those are outstanding
16:10:29 <odyssey4me> cloudnull: yep, once I get a working aio on 9.x I'll test all the logging patches and submit them
16:11:17 <odyssey4me> next week will be backport week, pretty much
16:11:45 <cloudnull> odyssey4me: https://bugs.launchpad.net/openstack-ansible/+bug/1399387/comments/10 is that being broken out into multiple commits?
16:12:13 <cloudnull> or will it stay as is?
16:12:38 <odyssey4me> cloudnull: yes - that'll be reverted and submitted one at a time once openstack infra approves https://review.openstack.org/#/c/149305/
16:12:46 <cloudnull> kk.
16:13:06 <odyssey4me> that patch will allow me to patch both the icehouse and juno branches to a point where we have a working aio check on both those branches
16:13:29 <cloudnull> sounds good.
16:13:45 <odyssey4me> then we'll re-enable voting on those branches for the aio check
16:14:30 <cloudnull> is there anything that is presently in 9.0.6 that needs to have the state changed?
16:15:17 <cloudnull> like i said, thats a lot of in progress for only one week till release?
16:15:41 <cloudnull> ok
16:15:45 <cloudnull> on to 10.1.2
16:16:03 <b3rnard0> #topic 10.1.2
16:16:09 <b3rnard0> #link https://launchpad.net/openstack-ansible/+milestone/10.1.2
16:16:24 <mancdaz> lots of things
16:16:29 <cloudnull> whats pressing there? does anyone need help on working these issues?
16:16:53 <mancdaz> I need help in getting the cherry-pick backports approved
16:17:19 <mattt> w/ 9.0.6, some of those in progress are actually fix committed no ?
16:17:43 <odyssey4me> mancdaz I've got your back there.
16:17:49 <cloudnull> mattt, i assume yes. but i guess we'll have to go through and figure that out
16:18:03 <mattt> cloudnull: i looked at one and it's been merged into icehouse, so that list may not be accurate
16:18:16 <mancdaz> it's the way launchpad tracks the status
16:18:30 <odyssey4me> mattt possibly - we can verify next week I guess
16:18:37 <mancdaz> it's why I wanted to talk about how we do that in launchpad
16:18:49 <cloudnull> mancdaz we'll get to that .
16:18:54 * mancdaz nods
16:19:29 <cloudnull> so , with 10.1.2 we need to do lots of backporting . but beyond that everything else looks to be good, right?
16:19:55 <mancdaz> I can go through that list and see if anything targeted for a series has actually merged, even if the bug shows in progress
16:20:22 <cloudnull> kk thanks mancdaz . we'll need to do that for 9.0.6 and 10.1.2
16:20:27 <cloudnull> i can help out with that too.
16:20:37 <b3rnard0> #action mancdaz To go through that list and see if anything targeted for a series has actually merged, even if the bug shows in progress (9.0.6 & 10.1.2)
16:20:38 <odyssey4me> cloudnull: should be fine - essentially we just need people to actually test the build over and over again
16:20:44 <mancdaz> there's actually not that many juno outstanding with the backport tag
16:20:52 <mancdaz> I did most of the backports last week
16:21:02 <mancdaz> just need them to be approved/reviewed
16:21:10 <odyssey4me> once the aio check is fixed then it'll be automatically tested in principle per commit, which will help
16:21:15 <b3rnard0> #action cloudnull To help mancdaz with merging help
16:21:28 <mancdaz> we need help with that help
16:21:35 <cloudnull> haha
16:22:02 <b3rnard0> #topic Prioritizing the "next" milestone for 10.x and 9.x. - (cloudnull)
16:22:52 <cloudnull> we've got quite a few items in the "next" milestone.
16:22:59 <cloudnull> link https://launchpad.net/openstack-ansible/+milestone/next
16:23:12 <b3rnard0> #link https://launchpad.net/openstack-ansible/+milestone/next
16:23:28 <cloudnull> i'd like to see if we can prioritize them for what will be 10.1.3 and 9.0.7
16:24:18 <cloudnull> starting with what will be a hot topic item , "F5 Pool Monitoring in Kilo"
16:24:40 <cloudnull> presently the f5 monitoring script uses xml for token parsing ,
16:24:48 <cloudnull> this is gone come kilo.
16:24:49 <b3rnard0> #link https://bugs.launchpad.net/bugs/1399382
16:25:11 <cloudnull> i'd like to see if we can do something better/different  .
16:25:34 <cloudnull> Apsu: you worked on that, any insight ?
16:25:51 <odyssey4me> surely this belongs in the set of things which need to be extracted from the generalised open build, as it's specific to the rax deployments
16:26:04 <Apsu> cloudnull: Life is hard trying to process data on an F5, given the CPU limitations and poor forking performance
16:26:18 <Apsu> So making use of JSON will be interesting. Also the python version is quite old.
16:26:18 <mancdaz> some service that runs on the host and checks status locally? It's how galera recommends it's done
16:26:27 <mancdaz> the f5 then only needs to do a standard http keyword check
16:26:28 <Apsu> Might look into jshon, if we're still going to do API level checks
16:26:56 <cloudnull> odyssey4me, it is true that rax does use f5 but that is not specific to rax, anyone that uses an f5 will not be able to use the xml token parsing in short order.
16:27:30 <Apsu> At least as far as pool monitoring based on API availability/modest authenticated requests
16:28:07 <cloudnull> Apsu can we spike within the next milestone to see how we make that better?
16:28:18 <Apsu> Sure.
16:28:41 <cloudnull> i know that its a wishlist item, but it will be something that we/the project is going to have to deal with sooner than later.
16:28:42 <Apsu> I'll see about statically-compiled jshon in case it's a quick win and we can parse JSON from the bash script directly
16:28:59 <palendae> A thing I'll toss out - does that check have to be done on the F5?
16:29:00 <Apsu> Should incur the least penalty and not require a custom python version or such
16:29:13 <mancdaz> palendae yeah that's what I was saying
16:29:21 <Sam-I-Am> palendae: i was typing the same question
16:29:25 <Apsu> We should definitely look into not
16:29:29 <palendae> Given the limitations, would it be easier to move that monitoring action?
16:29:32 <b3rnard0> #action Apsu Spike on F5 monitoring
16:29:38 <Sam-I-Am> voluntold
16:29:39 <Apsu> The F5 needs to know, but it doesn't have to be the place that does token procurement.
16:29:40 <mancdaz> the f5 can do  a dumb poll, the results of which are generated by a more complex monitor on the host
16:29:47 <palendae> Cool
16:29:49 <d34dh0r53> I don't really like the idea of the F5 doing anything other than port level monitoring either
16:29:52 <palendae> ^
16:29:55 <Sam-I-Am> d34dh0r53: ding
16:30:02 <palendae> Doesn't sound like anyone does
16:30:03 <Apsu> Well that depends on what level of LB intelligence we want.
16:30:22 <Apsu> Port monitoring is great unless the API is spitting out 500s
16:30:24 <palendae> Can probably dig deeper later, but something to think about
16:30:25 <d34dh0r53> Apsu: true that
16:30:28 <Apsu> Port's open! but broken
16:30:36 <Apsu> Anyhow, I'll look into it
16:30:46 <cloudnull> as for the rest of the wishlist items targeted for "next" , do we want to bring any of them in for the proposed 10.1.3 / 9.0.7 ?
16:30:48 <mancdaz> I actually don't care where the intelligence lives, except that clearly in this case we can't do what we want on the f5, so do it somewhere else
16:30:57 <d34dh0r53> Apsu: in its current state the script is closing a lot of ports that are functioning correctly
16:31:24 <Apsu> d34dh0r53: That's unrelated to the concept of availability monitoring, just the particular way the script skeleton was filled out so far :P
16:31:29 <Apsu> Will have that fixed up too, lol
16:31:41 <d34dh0r53> Apsu: cool
16:31:53 <mancdaz> cloudnull as for other wishlist items, I doubt they'd make it into icehouse
16:32:06 <mancdaz> since we are treating that as a maintenance branch/version now, right?
16:32:30 <mattt> cloudnull: i'll probably work https://bugs.launchpad.net/openstack-ansible/+bug/1412762 into 10.1.3
16:32:31 <cloudnull> mancdaz we are.
16:32:47 <mattt> we expose a whole bunch of non-function heat resource types which is a bit janky
16:32:55 <mattt> *non-functional
16:33:08 <cloudnull> kk . sounds good
16:33:19 <cloudnull> if nobody has anything specifically that they want in, ill go through them and file them away accordingly.
16:33:30 <b3rnard0> #action matt To backport https://bugs.launchpad.net/openstack-ansible/+bug/1412762 into 10.1.3
16:34:36 <cloudnull> ok moving on.
16:34:36 <b3rnard0> #action cloudnull To target greater than wishlist next bugs into future milestones
16:34:51 <b3rnard0> #topic Triaging bugs, targeting at correct series, targeting series fixes at milestones - getting consensus on how we do it. mancdaz
16:35:08 <cloudnull> mancdaz you have the floor .
16:35:27 <mancdaz> ok so the way we file/triage/target bugs etc
16:35:39 <mancdaz> when a bug is first filed, it is auto-targeted to trunk
16:35:58 <mancdaz> but we often need to backport it to icehouse and juno, or just juno
16:36:14 <mancdaz> so what I tend to do is add the juno/trunk/icehouse series
16:36:25 <mancdaz> and then for the milestones, target the relevant series at that
16:36:38 <mancdaz> trunk would never have a milestone, as we don't actually release anything from master
16:37:03 <mancdaz> I also see bugs being filed, and then a milestone being added to the default (trunk) series
16:37:15 <mancdaz> just wanted to get consensus on how we do this
16:37:25 <mancdaz> and also at what point we actually target a bug
16:37:49 <mancdaz> ie can I decide right now if something is going into 10.1.2, or do we do the backport now, and later target a handful at a milestone
16:38:03 <mancdaz> I don't know the 'correct' way
16:38:12 <mancdaz> but just as long as we're all doing it the same way
16:38:21 <mancdaz> am I making sense?
16:39:07 <mancdaz> example of what I do: https://bugs.launchpad.net/openstack-ansible/+bug/1408608
16:39:34 <odyssey4me> fyi - if you follow the methodology outlined by mancdaz, then the openstack-infra will also auto-update the bug status when your patch merges
16:39:45 <mancdaz> only for trunk odyssey4me
16:40:10 <cloudnull> imo once its fixed in trunk it should be backported to its appropriate series.
16:40:12 <mancdaz> we have to manually change the status of the series
16:40:28 <mancdaz> cloudnull yeah I'm not arguing about whether we should backport or not
16:40:36 <mancdaz> just how we track it in launchpad
16:41:24 <cloudnull> i would think that if the item was fixed outside of the already set milestone, as long as it's not a new feature, it should be added to the upcoming milestone.
16:42:41 <cloudnull> and that would be tracked as such within launchpad.
16:43:01 <odyssey4me> cloudnull: the advantage of adding the series is that you can target the series to the particular milestone
16:43:20 <odyssey4me> rather than the trunk fix to a milestone, which means you can't target two milestones
16:43:40 <odyssey4me> ie 9.0.x and 10.0.x
16:43:47 <mancdaz> ok so look at my example versus this one https://bugs.launchpad.net/openstack-ansible/+bug/1402028
16:43:58 <mancdaz> this was not targeted to a series
16:44:10 <mancdaz> but the fix was against trunk
16:44:15 <cloudnull> ah, i see now what you're talking about.
16:44:27 <cloudnull> i like that, mancdaz.
16:44:31 <mancdaz> it was targeted at a milestone for the trunk
16:45:10 <mancdaz> in the way I've been doing it, you'd never target the trunk fix at a milestone, only the series
16:45:17 <mancdaz> as that's where we release
16:45:18 <cloudnull> so, as we triage issues, add series target to milestone .
16:45:26 <cloudnull> i like it
16:45:31 <b3rnard0> do we have agreement on this issue?
16:45:56 <hughsaunders> got that #agreed ready b3rnard0?
16:46:01 <mancdaz> (also please add appropriate backport-potential tags)
16:46:05 * odyssey4me likes it - milestones targeted at series makes the backporting tracking easier
16:46:11 <mancdaz> then release manager can come along and work through the backports
16:46:35 <cloudnull> this is what we get for letting b3rnard0 chair a meeting...
16:46:49 <cloudnull> all opposed ?
16:46:54 <Apsu> cloudnull: It's because he has that standing desk. If he had an actual chair, we'd be golden
16:47:00 <b3rnard0> #startvote
16:47:01 <openstack> Unable to parse vote topic and options.
16:47:08 <cloudnull> the motion passes...
16:47:10 <cloudnull> moving on .
16:47:15 <d34dh0r53> The ayes have it
16:47:21 <hughsaunders> aye
16:47:24 <b3rnard0> #agreed triage issues, add series target to milestone
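The convention agreed above can be illustrated as a toy model. This is not launchpadlib or the Launchpad API - the dict layout, helper names, and tag format are made up purely to show the flow: a freshly filed bug auto-targets trunk, each backport adds a series task, and milestones attach only to series tasks, never to trunk, since nothing is released from master.

```python
# Toy model of the agreed Launchpad triage convention (illustrative only,
# not the real Launchpad API): milestone the series tasks, never trunk.

def file_bug(bug_id):
    # a freshly filed bug is auto-targeted at trunk, with no milestone
    return {
        "id": bug_id,
        "tasks": {"trunk": {"status": "Triaged", "milestone": None}},
        "tags": [],
    }


def target_series(bug, series, milestone):
    """Add a series task and target it at a release milestone."""
    if series == "trunk":
        # trunk never gets a milestone: we don't release from master
        raise ValueError("never target trunk at a milestone")
    bug["tasks"][series] = {"status": "In Progress", "milestone": milestone}
    # tag format is a guess at the backport-potential convention mentioned
    bug["tags"].append("%s-backport-potential" % series)
    return bug


bug = file_bug(1408608)
target_series(bug, "juno", "10.1.2")
target_series(bug, "icehouse", "9.0.6")
```

One upshot of doing it this way, per odyssey4me: openstack-infra auto-updates the trunk task when the patch merges, while the series tasks (and their milestones) are updated manually as the backports land.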
16:47:54 <b3rnard0> #topic Airing of grievances
16:47:57 <mancdaz> just make sure you also add trunk, as otherwise it looks weird
16:48:26 <cloudnull> so . do we have anything that anyone wants to talk about regarding the project?
16:48:39 <cloudnull> we didnt have too many agenda items
16:49:02 <cloudnull> so i thought it would be nice to open it up to folks if they wanted to discus something.
16:49:20 <cloudnull> and had neglected to add an agenda item.
16:49:27 <toxick> Can we skip to the Feats of Strength?
16:49:38 <Apsu> cloudnull: I've got a lot of problems with you people.
16:49:54 <d34dh0r53> lol
16:49:56 <palendae> The de-raxification is saved for 'next' correct?
16:50:13 <palendae> Which is the first step in galaxification
16:50:22 <Sam-I-Am> palendae: kilo, i think
16:50:34 <cloudnull> moving on to the feats of strength
16:50:47 <Apsu> Log Toss competitors, form a line on the left
16:50:48 <cloudnull> however irc wrestling may be tough.
16:51:01 <cloudnull> ok. if nothing then moving on.
16:51:32 <b3rnard0> #topic More on genericising deployment roles and code. - (cloudnull)
16:51:32 <cloudnull> lets move to generizing things for the last 10 minutes
16:51:46 <cloudnull> which opens up to palendae
16:52:53 <cloudnull> so far the plan has been that we will not target 10 or 9 for the removal of the rax bits, moving to kilo will be the first release that has rax bits stripped
16:52:55 <palendae> I was more asking about what we decided last time - we had deferred to next
16:53:44 <cloudnull> which will also include the roles being made galaxy compatible, though all within the same structure.
16:54:34 <cloudnull> as we progress through making our stack more generic, we will hopefully get to the point where the roles become separate repos.
16:54:35 <odyssey4me> cloudnull: and using external galaxy roles for infrastructure (non-openstack) where applicable and possible, I guess?
16:55:33 <cloudnull> odyssey4me: i'd like to say yes, though from what I've seen so far that may be difficult.
16:55:49 <cloudnull> granted i've not looked everywhere.
16:55:56 <Apsu> May be room to improve the existing ones, then.
16:56:08 <cloudnull> definitely .
16:56:09 <Apsu> Merge in our work to improve both user communities
16:57:12 <cloudnull> the rack_roles have been labeled as "abandonware" so we could try to pick up where they left off or move to other role maintainers.
16:57:39 <cloudnull> but i like the idea of being able to consume external roles.
16:57:44 <palendae> cloudnull: So the os_cloud repo's going to remain as a sketch?
16:58:10 <palendae> And I'd agree that long term upstream galaxy roles would be great, but I think we'd have to participate in contribs/maintenance then, too
16:58:36 <cloudnull> as soon as we extract juno from master into the branches ill put through a collapsed pr as WIP.
16:58:55 <palendae> cloudnull: Of all the roles getting split out?
16:59:03 <palendae> It sounded like there was some resistance to that
16:59:09 <palendae> Since it's one big chunk
17:00:06 <cloudnull> presently it would be a big chunk.
17:00:17 <cloudnull> lets move this convo into the channel because we're out of time.
17:00:21 <hughsaunders> it may be impractical to have a stackforge repo for every role, at least immediately - that was my resistance. But I like the idea of using galaxy roles
17:00:22 <palendae> ok
17:00:35 <Sam-I-Am> good meeting y'all
17:00:41 <odyssey4me> don't forget to end the meeting :)
17:00:43 <b3rnard0> #endmeeting