16:15:52 <mnaser> #startmeeting openstack_ansible_meeting
16:15:53 <openstack> Meeting started Tue Jul  9 16:15:52 2019 UTC and is due to finish in 60 minutes.  The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:15:54 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:15:56 <openstack> The meeting name has been set to 'openstack_ansible_meeting'
16:15:58 <mnaser> sorry, i got caught up in something
16:16:45 <mnaser> so i think we need to get around the release
16:16:52 <mnaser> i feel like everything is probably okay enough right now
16:16:56 <guilhermesp> o/
16:17:41 <guilhermesp> what is the status of the release? I think noonedeadpunk was working on that a while ago?
16:18:14 <jrosser> o/
16:18:26 <mnaser> noonedeadpunk: is out on vacation :)
16:18:41 <guilhermesp> oh good for him :)
16:18:44 <mnaser> but i think evrardjp is around.  i think jrosser covered a bunch of stuff
16:19:09 <jrosser> i did a pile of digging on the master branch sha bump
16:19:12 <openstackgerrit> Kevin Carter (cloudnull) proposed openstack/ansible-config_template master: Add option to use a remote source  https://review.opendev.org/669758
16:19:58 <jrosser> and seems we pull in a pinned version of requirements, and that (is/was) what was breaking us on master
16:20:49 <mnaser> jrosser: we should probably use master in master but i was thinking for stable/stein
16:22:17 * jrosser rechecks https://review.opendev.org/#/c/668328/1
16:22:31 <jrosser> the infra mirrors were a bit fubar so lets see if thats better now
16:23:28 <jrosser> theres another thing i wanted to bring up - the lack of haproxy in our metal jobs leaves us exposed to error
16:24:13 <jrosser> having the ssl & self signed cert there with haproxy catches all sorts of endpoint misconfiguration stuff, and we merged a patch with that wrong recently
16:25:29 <mnaser> jrosser: yeah.. that is something we can fix though
16:25:43 <mnaser> by pretty much listening on the right interface
16:25:45 <mnaser> instead of 0.0.0.0
16:25:46 <jrosser> indeed - theres a port conflict iirc?
16:25:48 <mnaser> thats how we do it at least
16:26:03 <mnaser> yeah, pretty much haproxy will listen on vip:x and services listen on mgmt:x
16:26:09 <mnaser> that eliminates that issue
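The bind-address fix mnaser describes (haproxy on vip:x, services on mgmt:x, instead of services on 0.0.0.0) can be sketched with plain sockets. The addresses and port below are illustrative stand-ins, and the demo assumes Linux's whole-127.0.0.0/8 loopback so two distinct loopback addresses can be bound:

```python
import socket

PORT = 28443  # arbitrary illustrative port shared by haproxy and the service


def try_bind(addr, port):
    """Attempt a TCP bind; return the bound socket, or None on address conflict."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.bind((addr, port))
        return s
    except OSError:
        s.close()
        return None


# Broken layout: a service bound to the 0.0.0.0 wildcard blocks haproxy's VIP bind.
svc = try_bind("0.0.0.0", PORT)
vip = try_bind("127.0.0.2", PORT)        # stand-in for the haproxy VIP address
wildcard_conflicts = svc is not None and vip is None
svc.close()

# Fixed layout: service on the mgmt address, haproxy on the VIP, same port — no conflict.
svc = try_bind("127.0.0.1", PORT)        # stand-in for the mgmt address
vip = try_bind("127.0.0.2", PORT)        # haproxy's VIP bind now succeeds
both_ok = svc is not None and vip is not None

print(wildcard_conflicts, both_ok)
```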
16:27:01 <guilhermesp> yeah I think there was some work in the past on this. Or I did a workaround locally in a deployment but didn't have time to patch this in osa
16:27:25 <guilhermesp> that's also a blocker for deployers who want to not deploy on lxc
16:28:31 <jrosser> if there's a simple fix then that would be good, we need the tests to fail when the endpoints are set up badly
16:30:04 <jrosser> other things - still not much more progress on ansible 2.8 , it's still stuck with tempest failing object storage tests
16:33:22 <mnaser> right
16:33:30 <mnaser> so ill watch 668328 (and if others can too please)
16:33:39 <mnaser> and once that merges we push an rc and hopefully thats the end of it
16:33:42 <mnaser> :(
16:33:50 * guilhermesp watching too
16:34:15 <jrosser> also new elk stuff on the go too
16:34:37 <jrosser> we pushed a first cut of elk_metrics_7x today if anyone wants to get stuck in with that
16:36:27 <openstackgerrit> Jonathan Rosser proposed openstack/openstack-ansible-ops master: Replace git.openstack.org URLs with opendev.org URLs  https://review.opendev.org/657225
16:39:52 <evrardjp> hey
16:39:58 <evrardjp> sorry catching up
16:41:28 <evrardjp> well I think we don't need haproxy for aio
16:41:36 <evrardjp> metal
16:41:47 <evrardjp> but I agree there is a coverage gap
16:41:59 <evrardjp> but that was supposed to be covered by lxc deploys
16:42:17 <evrardjp> for the releases I am just back from holidays, I will check the status tomorrow
16:42:29 <guilhermesp> evrardjp: you mean we don't need to test haproxy on metal or we dont need to have haproxy on metal deployments?
16:43:05 <evrardjp> the latter, assuming a well architected cloud, which also means the former
16:43:36 <evrardjp> but I agree, it could be nice to have haproxy coverage too
16:44:03 <evrardjp> I just think this could be done using a different test that covers other things
16:44:11 <evrardjp> (multinode)
16:45:01 <evrardjp> my idea (long ago!) was to have the user stories of our documentation, gated.
16:45:15 <jrosser> right now we don't run any lxc jobs for role tests
16:45:18 <evrardjp> so aio/multinode would be tested
16:46:12 <evrardjp> jrosser: none at all?
16:46:26 <evrardjp> like we don't have our specific scenarios anymore for galera and others?
16:46:33 <jrosser> https://review.opendev.org/#/c/669615/
16:46:44 <evrardjp> I think we need to convert the old scenarios to the new world asap
16:48:05 <evrardjp> how did the first one work? I am curious
16:48:08 <jrosser> i'm still thinking we should have haproxy generally, because a mix of https & http is what everyone is doing in a real deploy, from AIO up
16:48:30 <jrosser> i expect it connected to the wrong endpoint, which was http, and therefore it worked
16:48:40 <jrosser> but in real life that would have been https and fails
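The failure mode jrosser describes (a client silently connecting to the wrong, plain-http endpoint and passing in CI) is the sort of thing a small catalog scheme check would catch. The endpoint names and URLs below are made up for illustration; in a real deploy they would come from the keystone catalog:

```python
from urllib.parse import urlsplit

# Hypothetical catalog entries; a real check would pull these from the
# keystone service catalog (e.g. via `openstack endpoint list`).
endpoints = {
    "glance-internal": "https://172.29.236.100:9292",
    "glance-public": "https://203.0.113.10:9292",
    "glance-admin": "http://172.29.236.100:9292",  # misconfigured: plain http
}


def insecure_endpoints(catalog):
    """Return the names of endpoints whose URL scheme is not https."""
    return sorted(name for name, url in catalog.items()
                  if urlsplit(url).scheme != "https")


print(insecure_endpoints(endpoints))  # → ['glance-admin']
```

A gate that fails when this list is non-empty would have flagged the recently merged patch with the wrong endpoint.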
16:48:46 <evrardjp> I see
16:49:06 <evrardjp> I don't have the cycles to add haproxy to the metal job
16:49:24 <evrardjp> but I guess it's a question of whether we should do that or not
16:49:42 <evrardjp> (in case not, what other job will we create to cover this use case?)
17:10:00 <mnaser> uh haproxy makes a lot of sense in aio metal tbh
17:10:10 <mnaser> you dont want ssl happening inside services
17:11:39 <evrardjp> haproxy makes sense in metal cases, I agree, but AIO... meh?
17:12:24 <evrardjp> but I won't say no to those patches :)
17:12:31 <evrardjp> I just don't have the bw
17:14:08 <jrosser> It makes sense in AIO metal because that’s what we base a ton of our tests on
17:14:43 <mnaser> yeah, i think they are necessary imho, but i've been pretty overwhelmed personally.
17:14:51 <mnaser> i am hoping for some welcome help soon
17:14:53 <mnaser> #endmeeting