15:03:34 #startmeeting openstack_ansible_meeting
15:03:34 Meeting started Tue Feb 14 15:03:34 2023 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:34 The meeting name has been set to 'openstack_ansible_meeting'
15:03:39 #topic office hours
15:03:56 #topic rollcall
15:04:10 o/
15:04:16 sorry for using the wrong topic at first
15:05:11 hi
15:07:24 o/ hello
15:08:29 #topic bug triage
15:08:45 We have a couple of new bug reports, and one I find very scary/confusing
15:09:02 #link https://bugs.launchpad.net/openstack-ansible/+bug/2007044
15:09:31 I've tried to inspect the code, at the very least for neutron, and haven't found any possible opportunity for such a thing happening
15:10:49 I was thinking of maybe adding extra conditions here https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/tasks/python_venv_install.yml#L43-L48 to check for the common path we use for distro install
15:11:24 As I recall patching some role to prevent running python_venv_build for the distro path
15:12:05 As then venv_install_destination_path will be passed as _bin, and _bin for sure comes from distro_install.yml for the distro path
15:12:22 But the bug overall looks a bit messy
15:12:42 we could have a default that says `/openstack` in that role
15:12:52 and if it doesn't match that at the start then `fail:`
15:13:20 Well. I do use this role outside of openstack as well...
15:13:56 right - so some more generic way of defining a "safe path"
15:15:15 I just can't think of a good way of doing that, to be frank
15:15:34 We use `/usr/bin` mainly for the distro path
15:16:43 But basically - people are free to set venv_install_destination_path to any crazy thing...
15:18:09 I was going to check some more roles to see if we might somehow run the role for the distro path...
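[A minimal sketch of the "safe path" guard discussed above. `venv_build_safe_paths` is a hypothetical variable, not an existing python_venv_build default, and the containment check is deliberately simple:]

```yaml
# defaults/main.yml (hypothetical new variable, overridable by
# non-OpenStack consumers of the role):
# venv_build_safe_paths: ['/openstack', '/usr/local']

# tasks/python_venv_install.yml - guard before touching the venv dir.
# Note: select('in', ...) is a plain substring check; a real patch
# would likely want a strict prefix match instead.
- name: Fail when the venv destination is outside the safe prefixes
  ansible.builtin.fail:
    msg: >-
      venv_install_destination_path '{{ venv_install_destination_path }}'
      does not match any of {{ venv_build_safe_paths | join(', ') }}
  when: >-
    venv_build_safe_paths
    | select('in', venv_install_destination_path)
    | list | length == 0
```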
15:19:18 i think we need to ask for a reproduction and logs in the bug report
15:19:25 as i've never seen anything like that before
15:20:56 Another thing from the same person you've never seen...
15:21:13 #link https://bugs.launchpad.net/openstack-ansible/+bug/2006986
15:22:53 I was going to create a sandbox, but haven't managed to
15:23:45 But since I know you're using dns-01 and have some envs on zed - I'm not really sure I will be able to reproduce that either
15:24:38 "Haproxy canno't using fqdn for binding and wait for an IP."
15:24:41 is that really true?
15:25:13 Well, as I wrote there - we have haproxy bound to an fqdn everywhere...
15:25:39 I can assume that it might not be true with newer haproxy versions, or when having DNS RR, or failing to resolve DNS....
15:29:36 But I don't see binding on an FQDN mentioned in the haproxy docs https://www.haproxy.com/documentation/hapee/latest/configuration/binds/syntax/
15:30:03 I kind of wonder if debian or something ships a newer haproxy where bind on an fqdn is no longer possible
15:30:51 `The bind directive accepts IPv4 and IPv6 IP addresses.`
15:31:39 Actually, I'm thinking it might be time to try to rename internal_lb_vip_address
15:31:45 It's hugely confusing
15:31:53 works fine at least on HA-Proxy version 2.0.29-0ubuntu1.1 2023/01/19
15:32:54 Well. That could be some undocumented behaviour we've taken for granted....
15:33:19 comment #9 suggests it is working now?
15:33:33 i'm pretty unclear what is going on in the earlier comments
15:33:50 yeah...
15:34:32 oh right, but `haproxy_keepalived_external_vip_cidr` will stop the fqdn being in the config file?
15:34:49 in the keepalived file
15:34:50 well, not sure actually
15:35:25 for haproxy you'd need haproxy_bind_internal_lb_vip_address
15:36:31 I think we should get rid of internal/external_lb_vip_address by using something with more obvious naming
15:36:54 As basically what we want this variable to be - a representation of the public/internal endpoints in keystone?
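[Roughly what the workaround mentioned above would look like in a deployment: keep the FQDNs in the *_lb_vip_address variables while pinning the actual haproxy/keepalived bindings to IPs. The variable names come from the discussion; all values are illustrative, not a recommendation:]

```yaml
# /etc/openstack_deploy/user_variables.yml - illustrative values only.
# The VIP variables carry FQDNs (used for endpoints), while the haproxy
# bind itself is pinned to an IP via the override mentioned above.
internal_lb_vip_address: internal.example.com
external_lb_vip_address: cloud.example.com
haproxy_bind_internal_lb_vip_address: 172.29.236.9
haproxy_keepalived_external_vip_cidr: 203.0.113.9/32
```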
15:37:19 And serve as a default for keepalived/haproxy whenever possible
15:39:34 so maybe we can introduce something like openstack_internal/external_endpoint, set its default to internal/external_lb_vip_address, and replace _lb_vip_address everywhere in docs/code with these new vars?
15:40:01 having it actually describe what it is would be good
15:40:26 though taking into account doing dashboard.example.com and compute.example.com rather than port numbers would be good too
15:41:01 there is perhaps a larger piece of work to understand how to make that tidy as well
15:41:06 what confuses me a lot - saying that an address can be an fqdn...
15:41:38 yeah, I assume that would need quite some ACLs, right?
15:41:51 yeah, but perhaps that makes it clearer what we need
15:42:23 as the thing that haproxy binds to is either some IP or an fqdn
15:42:51 I'm not sure now if it should bind to an fqdn.... or if it does in 2.6 for example...
15:43:10 and we completely don't handle dual stack nicely either
15:43:33 feels like we get into PTG topic area with this tbh
15:43:55 yeah, totally...
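[A sketch of the renaming proposal above. `openstack_internal_endpoint` / `openstack_external_endpoint` are only names suggested in the discussion, not existing openstack-ansible variables:]

```yaml
# Hypothetical defaults implementing the proposed rename: the new,
# self-describing variables default to the old ones for compatibility.
openstack_internal_endpoint: "{{ internal_lb_vip_address }}"
openstack_external_endpoint: "{{ external_lb_vip_address }}"

# Per-service hostnames (instead of port numbers) could then layer on
# top - again purely illustrative names:
# horizon_external_fqdn: "dashboard.{{ openstack_external_endpoint }}"
# nova_external_fqdn: "compute.{{ openstack_external_endpoint }}"
```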
Let me better write it down to the etherpad :D
15:44:23 dual stack is possible - we have it, but the overrides are really quite a lot
15:45:58 I'd say one of the problems as of today - .example.com is part of the role
15:46:09 service role I mean
15:47:18 As I guess we should join nova_service_type with internal_lb_vip_address by default for that
15:47:38 So this leads us to a more relevant topic
15:48:23 #topic office hours
15:48:44 Current work that happens on haproxy with regard to internal TLS
15:49:07 today I'm working on:
15:49:13 - removing haproxy_preconfigured_services and sticking only with haproxy_services
15:49:15 - adding support for haproxy_*_service_overrides variables
15:49:18 - evaluating the possibility of moving the LE temporary haproxy service feature from the haproxy_server role to the openstack-ansible repo
15:49:28 i'll push changes today/tomorrow
15:49:38 I also pushed PKI/TLS support for glance and neutron (however I need to push some patches to dependent roles to get them working):
15:49:40 https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/821011
15:49:42 https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/873654
15:50:21 damiandabrowski: I have a question - was there a reason why we don't want to include the haproxy role from inside the service roles?
15:51:15 As it feels right now that implementation of these named endpoints will be way easier, as we will have access to vars that are defined inside the roles
15:51:41 Or was it somehow more tricky with delegation?
15:51:43 what would we do for the galera role there?
15:51:48 And handlers?
15:52:12 do we want to couple the galera role with the haproxy one like that, when they are currently independent?
15:52:34 jrosser: I should return to my work on proxysql, to be frank, that I put on hold a year ago...
15:53:06 i am also using galera_server outside OSA
15:53:08 hmm, i'm not sure if i understand you correctly, can you provide some example?
15:53:31 do you think it would be better to patch each role?
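[For context, the "include the haproxy role from inside the service roles" idea might look something like the following. This is a hypothetical sketch only - the `haproxy_service_configs` structure and the delegation pattern shown are illustrative, and delegation/handlers are exactly the open questions raised above:]

```yaml
# Hypothetical task inside a service role such as os_glance, pushing
# that service's frontend/backend onto the haproxy hosts.
- name: Configure haproxy for the glance API
  ansible.builtin.include_role:
    name: haproxy_server
    apply:
      delegate_to: "{{ item }}"
  loop: "{{ groups['haproxy_all'] }}"
  vars:
    haproxy_service_configs:
      - service:
          haproxy_service_name: glance_api
          haproxy_backend_nodes: "{{ groups['glance_api'] }}"
          haproxy_port: 9292
          haproxy_balance_type: http
```

The trade-off being debated: this gives the haproxy config access to vars defined inside the service role, at the cost of coupling the roles together (which is why galera/rabbitmq would stay decoupled).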
15:53:38 Well. It doesn't make usage of haproxy a really good option....
15:53:46 for galera balancing
15:55:42 anyway, the fundamental question seems to be if we should call the haproxy role from inside things like os_glance
15:55:52 or if it should be done somehow in the playbook
15:55:56 yes ^
15:56:23 and then also i am not totally following <damiandabrowski> - evaluating possibility of moving LE temporary haproxy service feature from haproxy_server role to openstack-ansible repo
15:56:37 ^ is this about how the code is now, or modifying the new patches
15:56:39 jrosser: Galera doesn't make much sense to me personally to make dependent on haproxy
15:56:49 I'm not sure though if you wanted to do that or not
15:57:07 i think we should keep those decoupled, and also rabbitmq
15:57:14 But I'd rather not, and leave galera in default_services or whatever the var will be
15:57:21 Yes
15:57:45 but for os_ roles I think it does make sense to call the haproxy role from them
15:58:09 "^ is this about how the code is now, or modifying the new patches" - modifying patches, that was your suggestion, right?
15:58:30 yes, that's right
15:59:13 is it possible to make nearly no change to the haproxy role?
16:00:49 i don't think so...
16:01:32 but i can at least try to make as few changes as possible
16:02:27 i still have no idea how we can avoid having haproxy_service_config.yml for "preconfigured" services and another one for services configured by service playbooks
16:03:05 we can talk that though if you like
16:03:11 *through
16:03:55 We can make some call even if needed
16:05:03 yeah, sure
16:05:25 #endmeeting