16:00:49 <mnaser> #startmeeting openstack_ansible_meeting
16:00:49 <openstack> Meeting started Tue Dec 18 16:00:49 2018 UTC and is due to finish in 60 minutes.  The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:51 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:52 <mnaser> #topic rollcall
16:00:53 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:00:53 <mnaser> o/
16:02:12 <mnaser> quiet quiet :)
16:02:12 <chandankumar> \o/
16:02:35 <chandankumar> year end is going on :-)
16:03:01 <mnaser> not much in last week's highlights, we can discuss open topics in open discussion
16:03:10 <guilhermesp> should we wait for a quorum, or can we discuss placement a bit?
16:03:20 <mnaser> guilhermesp: we can do that in open discussion
16:03:36 <mnaser> #topic bug triage
16:03:40 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1808830
16:03:41 <openstack> Launchpad bug 1808830 in openstack-ansible "container properties were not updated from env.d file" [Undecided,New]
16:03:48 <chandankumar> mnaser: do we need a proper blueprint/spec for the os_tempest work?
16:04:05 <mnaser> chandankumar: can we discuss this later in open discussion?  but probably not :>
16:04:13 <chandankumar> mnaser: sure
16:04:21 <mnaser> hmm
16:04:24 <mnaser> looks like someone is using lvm
16:04:32 <mnaser> for containers
16:04:48 <mnaser> i remember cloudnull mentioning that this swallows data
16:05:31 <mnaser> i don't know enough about this
16:05:53 <mnaser> i'll leave it, hopefully next time we triage we can see what things look like
16:06:08 <evrardjp> properties have been documented as part of o_u_config
16:06:14 <evrardjp> not as env.d
16:06:27 <evrardjp> iirc
16:06:33 <evrardjp> oh wait
16:06:33 <mnaser> ok so what they're doing is just wrong?
16:06:40 <evrardjp> I meant container_vars
16:06:43 <evrardjp> mmm
16:06:52 <openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/queens: Add automated migration of neutron agents to bare metal  https://review.openstack.org/625331
16:06:55 <mnaser> so their workaround is really the right way to do it?
16:07:02 <odyssey4me> o/
16:07:09 <evrardjp> yeah that's how I'd configure things
16:07:22 <evrardjp> container_vars: <variable>
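A minimal sketch of the container_vars placement evrardjp is describing, as it would sit in openstack_user_config.yml; the host name, IP, and size value are illustrative:

    shared-infra_hosts:
      infra1:
        ip: 172.29.236.11
        container_vars:
          container_fs_size: 12G   # applied to the containers built on this host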
16:07:36 <mnaser> in that case we'll mark it as invalid and mention that the workaround is the right way to go about it
16:08:11 <odyssey4me> mnaser evrardjp so I helped the guy out yesterday - effectively the inventory was generated before the env.d file change, and when that's the case the env.d change does not take effect
16:08:19 <mnaser> aaaah
16:08:26 <mnaser> so it works, but because the inventory already exists
16:08:28 <mnaser> it doesn't update
16:08:33 <mnaser> so we're not really overriding, it's just an 'initial' value
16:08:43 <evrardjp> odyssey4me: also I noticed there is a one-space difference
16:08:55 <evrardjp> as you can see container_fs_size is wrongly indented
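For contrast, a sketch of the env.d form the bug report attempted, with container_fs_size correctly indented one level under properties (the skeleton name is illustrative; per odyssey4me's point above, this only takes effect when the inventory entry is generated, not retroactively):

    # /etc/openstack_deploy/env.d/galera.yml -- illustrative sketch
    container_skel:
      galera_container:
        properties:
          container_fs_size: 12G   # must be nested under properties
    # assumption: if the inventory already exists, the stale entry would have
    # to be removed (e.g. with scripts/inventory-manage.py) before this applies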
16:08:56 <odyssey4me> I don't know if we consider that a bug - but it's not obvious, and we don't really do a good job of documenting how people can do things like this after the fact.
16:09:12 <odyssey4me> evrardjp I suspect that's just launchpad being stupid.
16:09:22 <odyssey4me> It would be good to verify it independently.
16:09:43 <evrardjp> I mean that would be a good reason why it doesn't show up on an empty inventory.
16:10:01 <evrardjp> that said, our inventory is quite a mess, so things not updating would not be a surprise to me
16:10:02 <odyssey4me> yah, when I worked through it with him - he had the spacing right
16:10:06 <odyssey4me> lemme find the eavesdrop
16:10:42 <mnaser> okay, so if we wanna quickly triage this, how do we feel about it?
16:11:50 <mnaser> i would defer to evrardjp or odyssey4me because this isn't my strength :)
16:12:33 <evrardjp> if someone really cares about the inventory, they should step up; to my knowledge, everybody is fine with the current way of working, even if it's buggy
16:12:47 <odyssey4me> the beginning of the very long conversation is here: http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/%23openstack-ansible.2018-12-17.log.html#t2018-12-17T14:23:23
16:12:50 <evrardjp> so I'd say low
16:12:53 <odyssey4me> perhaps good to add that to the bug
16:12:57 <mnaser> but confirmed?
16:13:14 <odyssey4me> I would not say it's confirmed unless someone else verifies it
16:13:15 <mnaser> i'm not asking anyone to fix it, but just if we recognize that this is an issue or not :)
16:13:28 <mnaser> so triaged/low?
16:13:57 <odyssey4me> assign to me - I'll try to confirm it, it shouldn't be hard
16:14:01 <odyssey4me> so leave it new
16:14:31 <mnaser> ok cool
16:14:39 <mnaser> thank you odyssey4me
16:14:58 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1808693
16:15:00 <openstack> Launchpad bug 1808693 in openstack-ansible "haproxy galera backend config contains httpchk statement" [Undecided,New]
16:15:09 <odyssey4me> I think it might just be a simple doc fix - a known issue somewhere.
16:15:31 <evrardjp> on that we have an http middleware
16:15:35 <evrardjp> I'd say invalid
16:15:43 <mnaser> i think this is probably invalid, this deployment probably failed to start the xinetd server
16:15:49 <evrardjp> indeed
16:15:58 <evrardjp> disabling checks is NOT a solution :)
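Context on why the httpchk statement is there by design: OSA health-checks galera over HTTP via a small clustercheck service that xinetd runs on each database node, rather than probing MySQL directly. A rough sketch of the shape of that haproxy service definition; the key names and values are recalled from memory and should be treated as illustrative:

    - service:
        haproxy_service_name: galera
        haproxy_port: 3306
        haproxy_balance_type: tcp
        haproxy_check_port: 9200        # xinetd-run clustercheck answers here
        haproxy_backend_options:
          - "httpchk HEAD /"            # the statement the bug report flags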
16:16:24 <mnaser> done
16:16:28 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1808543
16:16:29 <openstack> Launchpad bug 1808543 in openstack-ansible "Keystone Federation cannot complete SP node setup on stable/rocky" [Undecided,New]
16:16:46 <mnaser> oh that's redkrieg :)
16:17:11 <mnaser> i see a keystone traceback
16:17:49 <mnaser> could this be a keystone thing? i personally don't know much about federation... only a little bit
16:18:01 <odyssey4me> yeah, this was brought up in channel too and I asked for a bug report
16:18:28 <odyssey4me> It's been a while since I've done any federation things - hopefully I can confirm this... I'd say leave new and assign to me
16:18:34 <guilhermesp> could it be a dependency error, judging by the keystone traceback?
16:18:49 <odyssey4me> unless someone else wants to have a bash at it
16:19:26 <odyssey4me> it might just be a new dependency which isn't covered
16:19:55 <guilhermesp> specifically keystone.auth.saml2?
16:20:43 <mnaser> Maybe we can ask redkrieg to dig in a bit if he has time. He’s done a bunch of python :)
16:20:57 <odyssey4me> it seems like it might just be that whatever that task is doing, it's not using the venv
16:21:34 <mnaser> Let’s leave it as is and catch up with redkrieg this week
16:22:59 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1807400
16:22:59 <openstack> Launchpad bug 1807400 in openstack-ansible "networksegments table in neutron can not be cleared automatically" [Undecided,New]
16:23:19 <mnaser> Sounds like a neutron bug?
16:23:58 <odyssey4me> that does seem like it's not registered to the right project
16:24:21 <odyssey4me> mark invalid, but add neutron
16:25:04 <mnaser> yep done
16:25:09 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1807346
16:25:10 <openstack> Launchpad bug 1807346 in openstack-ansible "[heat] Installations fails during Update Q->R" [Undecided,New]
16:26:02 <mnaser> huh thats weird
16:26:08 <guilhermesp> sounds like variable changes between Q -> R?
16:26:13 <guilhermesp> that are breaking the task
16:27:29 <mnaser> it's almost like
16:27:45 <mnaser> it's trying to create a user called `stack_domain_admin` but there's already one?
16:27:56 <mnaser> oh wait, i didn't scroll down
16:28:26 <mnaser> seems like an ansible bug?
16:28:34 <odyssey4me> hmm, ok - I haven't seen this yet, but I guess I could take the time to verify it unless someone else has a little time for it?
16:28:55 <guilhermesp> can be assigned to me
16:28:59 <odyssey4me> It seems that I'm falling on the upgrade sword. :p
16:29:10 <mnaser> we have to do another pretty big upgrade soon
16:29:10 <odyssey4me> Yay! Thanks guilhermesp :)
16:29:36 <mnaser> so i might run into that i guess
16:29:49 <mnaser> assigned to guilherme for further checking
16:29:50 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1807074
16:29:51 <openstack> Launchpad bug 1807074 in openstack-ansible "Neutron OVS bridge_mappings not built on computes" [Undecided,New]
16:30:53 <mnaser> jamesdenton: did you want to grab this?
16:31:15 <mnaser> jamesdenton: you know i feel this relates to that fix i did a while back
16:32:03 <mnaser> https://review.openstack.org/#/c/614282/
16:32:53 <odyssey4me> seems legit, and that's in rocky - perhaps note that in the bug, but leave it new to give jamesdenton some time to look again
16:33:00 <odyssey4me> I think he's out today.
16:33:18 <mnaser> okay cool, i'll leave it as new so we can see it again next week
16:33:25 <odyssey4me> I don't think that patch is in a released OSA - or it's only in the most recent release.
16:33:36 <mnaser> yeah but that patch needs another follow up one
16:33:59 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1806938
16:34:01 <openstack> Launchpad bug 1806938 in openstack-ansible "RabbitMQ Upgrade does not work during upgrade from Q->R" [Undecided,New]
16:34:22 <guilhermesp> same guy as the heat one
16:34:22 <odyssey4me> ah, a patch for that merged recently
16:34:41 <noonedeadpunk[h]> yeah, I think mnaser has already fixed it
16:34:48 <mnaser> yep.
16:34:51 <odyssey4me> https://review.openstack.org/625200
16:34:55 <mnaser> that was a very expensive bug, i'll have you know.
16:34:56 <mnaser> ;[
16:35:14 <odyssey4me> sorry mnaser :/
16:35:21 <mnaser> it's okay
16:35:28 <mnaser> that's why i wanna try and help with some of the upgrade stuff soon
16:36:07 <mnaser> gah
16:36:10 <mnaser> launchpad won't let me comment
16:36:39 <odyssey4me> https://media.giphy.com/media/b3hWtl6x3H3Ko/giphy.gif
16:37:05 <mnaser> getting a wall of red text
16:37:32 <mnaser> Launchpad requires a REFERER header to perform this action. There is no REFERER header present. This can be caused by configuring your browser to block REFERER headers.
16:37:38 <guilhermesp> i think this happened at the summit when we were registering a bug or something during a session
16:37:52 <odyssey4me> I can't seem to find my favourite cat-presses-button-and-kaboom gif - evrardjp knows the one I mean.
16:37:53 <mnaser> can someone else comment that it was fixed in https://review.openstack.org/#/c/625200/
16:38:01 <guilhermesp> lemme try mnaser
16:38:15 <mnaser> and set status to fix released
16:38:18 <evrardjp> odyssey4me: I don't have it here
16:38:24 <evrardjp> but yeah.
16:39:37 <mnaser> thanks guilhermesp
16:39:39 <mnaser> #link https://bugs.launchpad.net/openstack-ansible/+bug/1806696
16:39:40 <openstack> Launchpad bug 1806696 in openstack-ansible "Galera lxc shutdown results in corrupted db" [Undecided,New]
16:40:03 <mnaser> oh no
16:40:28 <odyssey4me> FYI Christian Zunker and his team at Codecentric have done a wonderful job of reporting decent bugs and submitting fixes.
16:40:39 <mnaser> +++
16:40:42 <mnaser> are they on irc?
16:40:42 <odyssey4me> He was at Berlin IIRC.
16:40:47 <odyssey4me> I don't think so.
16:40:58 <mnaser> yeah i saw some cool patches
16:41:36 <odyssey4me> heh, 5 secs to kill a container is a bit harsh for galera - that sounds legit
16:41:54 <odyssey4me> It's probably better to make that a pretty long wait time - maybe 5 mins or something.
16:41:55 <noonedeadpunk> but is it the case for nspawn?
16:41:56 <mnaser> i've always lxc-stop'd things
16:43:11 <mnaser> set to confirmed
16:43:30 <odyssey4me> it'd be nice to get something sorted for that asap
16:43:42 <francois> hi guys, I'm working with Justin who reported this bug, if there's any additional info needed
16:43:44 <evrardjp> it sounds like a bad bug indeed
16:43:55 <mnaser> searching for SHUTDOWNDELAY in codesearch shows nothing
16:43:58 <mnaser> so maybe this isn't something we deploy?
16:44:18 <odyssey4me> mnaser no, it's likely a default - but we can probably implement something to change it
16:44:34 <mnaser> yep ok
16:44:40 <odyssey4me> francois great bug, and it'd be nice to figure out a lxc_hosts role patch to increase that timeout
16:44:43 <evrardjp> at least make it match the timeout value used for galera
16:44:57 <odyssey4me> evrardjp yeah, I think that's 180 secs
16:45:02 <odyssey4me> I'd go for 5 mins myself.
16:45:07 <mnaser> francois: do you think you have someone who can help submit a patch for this? i can help them land it
16:45:10 <mnaser> i just don't wanna land it myself
16:45:11 <evrardjp> I remember it was longer than 5 minutes in the old days
16:45:23 <evrardjp> remember when we searched with mancdaz for something about a 3600s timeout?
16:45:25 <evrardjp> :)
16:45:45 <evrardjp> that's too much indeed
16:45:52 <francois> yeah, definitely, if there's an idea on the best way to fix it
16:45:52 <odyssey4me> oh man, that's like back in the day when the dinosaurs were still roaming
16:45:53 <evrardjp> but 5 minutes wouldn't be that bad
16:46:07 <evrardjp> odyssey4me: hahaha
16:47:24 <mnaser> ok let's sync up after the meeting
16:47:30 <francois> agreed that 5 minutes sounds much saner than 5 seconds, it looks like a pretty harsh default
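One possible shape for the lxc_hosts patch being floated here, as a hedged sketch only: raise the host-level shutdown grace period. This assumes the distro's LXC init honours a SHUTDOWNDELAY setting in /etc/default/lxc, as the bug report suggests and the empty codesearch result implies (i.e. it is a packaging default, not something OSA templates today):

    # Sketch only: give containers up to 5 minutes to stop cleanly on host
    # shutdown so galera can flush to disk; path and variable are assumptions.
    - name: Raise the LXC shutdown grace period
      lineinfile:
        path: /etc/default/lxc
        regexp: '^SHUTDOWNDELAY='
        line: 'SHUTDOWNDELAY=300'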
16:47:37 <mnaser> i'd like to go to open discussion because we took quite a lot of time
16:47:40 <mnaser> #topic open discussion
16:47:44 <evrardjp> I can't really help with finding out where to start on this
16:47:47 <mnaser> anyone have anything?
16:47:55 <guilhermesp> I do about placement
16:48:04 <guilhermesp> can someone give feedback on https://review.openstack.org/#/c/618820/ ?
16:48:09 <guilhermesp> we'll have greenfield working soon, but
16:48:15 <guilhermesp> the uwsgi app is not running yet
16:48:18 <odyssey4me> I might find a little time to look into it evrardjp mnaser francois - upgrade tests take quite a bit of time. ;)
16:48:41 <mnaser> guilhermesp: it looks like something is not wired up properly
16:48:44 <guilhermesp> in that last patch I added two tasks to configure uwsgi, but it seems that the ini file is not being placed under /etc/uwsgi
16:48:58 <mnaser> do you have a vm with that role in there?  maybe we can dig in and see it there later?
16:49:14 <guilhermesp> yep, I've been doing that this morning
16:49:26 <guilhermesp> I intend to continue digging in
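For anyone picking this up, a hedged sketch of the kind of uwsgi wiring task in question, modeled on how other OSA service roles do it; the template name, destination path, and handler are assumptions, not the contents of the actual patch:

    # Illustrative only -- file names and the handler are guesses at intent.
    - name: Place the placement uwsgi config
      template:
        src: placement-uwsgi.ini.j2
        dest: /etc/uwsgi/placement-api.ini
        owner: root
        group: root
        mode: "0644"
      notify: Restart uwsgi services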
16:49:44 <guilhermesp> and about the tests: placement doesn't have its own tempest tests
16:49:53 <mnaser> that's really f'n annoying that they don't
16:49:55 <guilhermesp> the guys in the channel said to reuse the compute tests
16:50:12 <guilhermesp> yep that's bad
16:50:19 <mnaser> well that's not easy because it's going to conflict with the placement that's already deployed
16:50:26 <guilhermesp> I didn't ask their reasons, but I can try to figure it out, anyway...
16:50:27 <mnaser> unless we add a patch to remove placement and we have this circular dependency
16:50:28 <mnaser> ugh
16:50:37 <guilhermesp> indeed
16:50:46 <mnaser> let's get it to run and validate it locally and merge it that way... i dunno, i can't think of any other way.
16:50:53 <guilhermesp> I will need to test by hand as soon as I finish my PR so the code can be merged
16:51:17 <odyssey4me> for now, don't bother with any real tests - do something simple like validate the service is running and be done with it
16:51:18 <guilhermesp> we are up to 51 versions in my PR :P
16:51:26 <evrardjp> odyssey4me: +1
16:51:31 <odyssey4me> once we change os_nova to use it, then we can do something
16:51:42 <guilhermesp> odyssey4me: makes sense for now
16:51:57 <odyssey4me> guilhermesp yeah, for now I'm more inclined to get that patch in even if it's not fully functional - it's getting too big
16:52:00 <pabelanger> heads up, new ansible-lint released today, if you start seeing new linting failures
16:52:07 <pabelanger> https://docs.ansible.com/ansible-lint/rules/default_rules.html
16:52:08 <guilhermesp> I believe today or within the next few days you guys can review and hopefully merge
16:52:11 <pabelanger> should help with what is new
16:52:13 <odyssey4me> pabelanger we pin ansible-lint :)
16:52:13 <mnaser> chandankumar: anything on your side?
16:52:34 <odyssey4me> on that topic, we're long overdue for an ansible-lint pin update
16:52:59 <chandankumar> mnaser: I got the answer regarding the spec/blueprint question
16:53:21 <pabelanger> odyssey4me: nice, I might start doing that as the new linters are somewhat aggressive IMO
16:53:42 <chandankumar> mnaser: we are sending the os_tempest collaboration update to the openstack-discuss list; anything we can improve in that report?
16:54:02 <mnaser> chandankumar: i *love* it
16:54:19 <mnaser> i'm going to try and do something similar on the OSA side, i really really like those
16:54:44 <francois> is there any plan to upgrade ansible in stable/rocky? it seems that would fix issues like https://bugs.launchpad.net/openstack-ansible/+bug/1803382
16:54:45 <openstack> Launchpad bug 1803382 in openstack-ansible "unbound-client.yml failure depending on TERMCAP envvar content" [High,Confirmed] - Assigned to Francois Deppierraz (francois-ctrlaltdel)
16:54:49 <odyssey4me> pabelanger I dunno if there's a default job for ansible-lint, but it'd be nice if it supported pinning - we could then use it, possibly
16:55:18 <evrardjp> francois: nice launchpad id
16:55:27 <pabelanger> odyssey4me: not sure either, I still manage it via tox-linters job and tox.ini
16:55:37 <chandankumar> mnaser: glad to know that you guys liked it; the last update of the year is tomorrow, then we can resume in the new year
16:55:43 <francois> evrardjp: :)
16:55:57 <mnaser> chandankumar: wonderful, it's been a pleasure working together
16:56:10 <chandankumar> mnaser: :-)
16:56:41 <odyssey4me> francois hmm, I'm not sure we could safely upgrade to ansible 2.6 for rocky... that'd be a new minor revision for sure if we did
16:56:50 <odyssey4me> is there no way to get the right fix into ansible 2.5?
16:57:20 <odyssey4me> personally, I'd like rocky to be ansible 2.6 - but it is a big change that risks destabilising more things
16:57:35 <odyssey4me> it might be better to just document that as a known issue
16:57:42 <mnaser> i agree that we shouldn't make a big leap like that
16:57:59 <mnaser> i would want to work with upstream ansible to land those fixes instead
16:58:10 <francois> odyssey4me: I wasn't actually able to find the exact fix in ansible
16:58:27 <odyssey4me> yep, if possible - otherwise just document that people shouldn't use screen and use tmux instead :p
16:59:02 <mnaser> odyssey4me: maybe we can have openstack-ansible clear the env
16:59:07 <mnaser> and pass only the things we want down to ansible
16:59:12 <odyssey4me> that said, I think jrosser might have actually identified the fix for it - or at least a workaround
16:59:30 <odyssey4me> mnaser that sounds a bit like a minefield
16:59:59 <mnaser> add a little spice to your life
17:00:13 <odyssey4me> hahaha :)
17:00:14 <mnaser> to end it
17:00:20 <mnaser> happy new year in advance.  this is our last meeting of the year
17:00:26 <mnaser> next tuesday is the 25th
17:00:38 <mnaser> so unless y'all wanna celebrate xmas here together, we'll cancel that one :)
17:00:58 <mnaser> thanks for all the awesome work, we're making great progress on what we discussed at the PTG
17:01:06 <odyssey4me> https://media.giphy.com/media/3o7TKtsBMu4xzIV808/giphy.gif
17:01:11 <mnaser> and it's awesome to see people excited about recommending OSA for large deployments in the ML (if anyone saw that thread!)
17:01:12 <guilhermesp> happy holidays to you all! It's been a pleasure to work with the osa team. I remember my first steps were this year
17:01:20 <guilhermesp> mid-year I think
17:01:30 <guilhermesp> and the plan is to keep working hard and contribute as much as I can
17:01:50 <odyssey4me> guilhermesp thank you for joining the party :)
17:02:03 <mnaser> and on that fun note, i'll end it with the lame joke we all wanted to say
17:02:04 <guilhermesp> odyssey4me: o/ let's rock
17:02:06 * odyssey4me pencils a note to catch up on the ML
17:02:15 <mnaser> see y'all at next year's meeting
17:02:16 <mnaser> #endmeeting