Tuesday, 2023-02-14

hamburglerHey all - Question about OSA - Zed, is Zookeeper considered production ready via the OSA deployment as it is still fairly new to this project?06:53
jrosserhamburgler: you can look here at what the zookeeper deployment touches https://review.opendev.org/q/topic:osa%252Fzookeeper08:06
jrosseri have it deployed in 3 different OSA deployments where we have cinder active/active with ceph, designate H/A and also octavia H/A - those are the things that currently expect some kind of coordination service08:07
jrosserso zookeeper/coordination is needed for some specific situations like those, if you don't have them it doesnt matter08:08
jrosserif you do have them, i've had no trouble so far with the zookeeper deployment and it's actually made a significant improvement to designate in a H/A config now it is present08:09
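The coordination service jrosser describes is consumed by cinder, designate and octavia through the tooz library; a hedged sketch of what the resulting service configuration looks like once zookeeper is deployed (the address and port below are placeholders, not values from any real deployment):

```ini
# Illustrative tooz coordination section as cinder/designate/octavia would
# carry it with a zookeeper backend; the endpoint is a placeholder.
[coordination]
backend_url = zookeeper://172.29.236.10:2181
```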
hamburglerjrosser: thank you :) very helpful!08:29
noonedeadpunkadmin1: there is a ML thread regarding magnum and the latest fedora - there were patches pushed but not merged yet09:25
noonedeadpunkI'd actually expect significant changes to octavia behaviour as well with amphorav209:26
jrosserdamiandabrowski: if you want to discuss anything with the haproxy stuff, let me know09:33
damiandabrowskihi jrosser ! so yesterday you suggested to remove haproxy_preconfigured_services and use only haproxy_services for both service types (preconfigured and those configured during service playbook execution)09:58
noonedeadpunkyeah, looking at these 2 files I'm not really sure what the difference is there10:12
jrosseri was also wondering if we have covered like (horizon + LE) (no horizon + LE) (horizon + no LE) (no horizon + no LE)10:17
jrosserthere are tons of combinations that need supporting in the initial haproxy playbook to get that going10:17
jrosserand then also not to break it if you run haproxy again after the horizon playbook has run and made its own settings for haproxy10:17
jrosseralso the support of security.txt is an outlier here as well which is served from keystone but also an haproxy ACL10:18
noonedeadpunkI think the trickiest case can be no horizon + LE10:20
noonedeadpunkand yeah, security.txt is interesting usecase indeed10:21
damiandabrowskinoonedeadpunk: the difference is facts gathering and "manual" handler execution. But yes, they share the same service.j2 template10:23
damiandabrowskiall these combinations should be supported, the logic is quite simple:10:24
damiandabrowskiif nothing listens on port 80, create a temporary service on that port, issue initial LE certificate and remove that service afterwards.10:24
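That bootstrap logic could be sketched as a temporary entry in the service list; the key names below follow the general shape of the haproxy_server role's service definitions, but the entry itself is illustrative, not code from the patches under review:

```yaml
# Hypothetical sketch only: a frontend-only port-80 service that exists just
# long enough to answer the ACME HTTP-01 challenge, then gets removed.
haproxy_services:
  - haproxy_service_name: letsencrypt_bootstrap
    haproxy_port: 80
    haproxy_bind: ["{{ external_lb_vip_address }}"]
    haproxy_balance_type: http
    haproxy_backend_nodes: []    # nothing behind it; only serves the challenge
    haproxy_frontend_only: true  # assumed flag, mirroring how frontend-only services are modelled
```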
damiandabrowskican't comment on security.txt right now as I'm not familiar with it, but i will look into this10:24
damiandabrowskinoonedeadpunk: you added a comment "We need to have proper yaml file start as well as license for all newly created files."10:28
damiandabrowskii will add license, but '---' is something i still can't understand :D why do we define it? looks like it's completely unnecessary in our case:10:29
damiandabrowskihttps://stackoverflow.com/a/53691066/941376810:29
damiandabrowskior maybe there is some reason?10:30
noonedeadpunkdamiandabrowski: well, out of the vars gathered per OS we only need haproxy_system_ca. As it's used only in a template, I'd assume we can define it in jinja instead of vars. Regarding service restart... Hm. I'm not really sure right now10:30
noonedeadpunkdamiandabrowski: while the yaml document start is optional, it's something we follow throughout the code and I don't see any reason why we should stop doing that and remove the document start everywhere10:32
damiandabrowskiless code and simplicity?10:33
noonedeadpunkit's not code in the first place? And if you check the ansible-lint rules - it's going to fail without that if strict mode is used https://ansible-lint.readthedocs.io/rules/yaml/#correct-code10:38
damiandabrowskiah so it really is recommended to use it10:43
damiandabrowskiok, i will add these dashes but it's still super weird for me :D for ex. I learned that three dots indicate the end of a document, but we don't use them for some reason :D 10:45
noonedeadpunkwell, it's all about coding style I would say. I can compare that to python - you can use tabs or spaces in code 10:46
noonedeadpunkboth will work equally well. but not a lot of ppl will be happy if you use tabs :D10:46
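For reference, both markers mentioned above are valid YAML; ansible-lint's strict yaml rule wants the document start, while the document end is legal but optional:

```yaml
---
# "---" marks the document start (required by ansible-lint in strict mode)
key: value
# "..." optionally marks the document end; legal, but unused in this project
...
```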
*** dviroel|out is now known as dviroel10:58
jrosserdamiandabrowski: really i think that my preference would be that we don't make any changes at all in the haproxy_server role if possible11:00
damiandabrowskiso you want to drop letsencrypt support from haproxy_role?11:01
noonedeadpunkI need to check on the handlers topic, as for some reason I think it should be possible to run handlers where they're supposed to run... But I'd need to spin up an env for that11:01
noonedeadpunkI'm not sure how that's related?11:01
noonedeadpunkah, you mean about temporary service11:03
jrosserthings like the change to fix selinux on centos should be broken out into their own patch so we can just merge those anyway11:03
jrosseri don't think that minimal changes to the haproxy role mean dropping LE - as i put in my review comments earlier i think that the bring-up of LE should be handled in playbooks/haproxy-install.yml and vars passed into the haproxy_server role, not by adding more complexity into that role itself.11:21
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Prepare haproxy role for separated haproxy config  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/87118811:30
opendevreviewDamian Dąbrowski proposed openstack/openstack-ansible-haproxy_server master: Move selinux fix to haproxy_post_install.yml  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/87370311:30
damiandabrowskijrosser: ^11:32
damiandabrowskii will think about this temporary LE service feature in haproxy role. But IMO it may be useful also for non-openstack cases11:33
jrosseri use haproxy_server a ton outside OSA already with LE, it works pretty well11:33
jrosseradmin1: you might be interested in this https://review.opendev.org/c/openstack/magnum/+/84915612:33
admin1jrosser.. thanks 13:43
admin1i was looking into https://opendev.org/openstack/magnum/src/branch/master/magnum/drivers/k8s_fedora_coreos_v1/templates/kubecluster.yaml to make a note of all config options that can be passed 13:45
admin1just a thought .. br-lbaas and br-storage can share the same interface for example right ? 14:09
admin1or with br-mgmt 14:09
admin1and its already in the controllers as well 14:09
admin1br-mgmt especially 14:09
noonedeadpunkadmin1: I don't think you want to share br-lbaas with br-storage, at the very least due to security reasons14:12
noonedeadpunkAs br-lbaas is passed to the amphora VMs, which will have access to the storage14:13
admin1issue i have is i only got 5 dedicated vlans, and 2 are being used for the external network ..   .. so i only have 3 i can use 14:13
noonedeadpunkbut yes, sure you can share with br-mgmt 14:13
admin1noonedeadpunk, does this sound correct ? https://gist.githubusercontent.com/a1git/0bccd468b96ecffb0490cbb1183fce4e/raw/8a1c9eb32dd9f179577169840719446db51baed9/gistfile1.txt14:30
admin1the changes are: octavia_provider_segmentation_id is removed and the bridge is br-mgmt14:31
noonedeadpunkadmin1: you don't wanna do that either....14:32
noonedeadpunkBest thing to share - mgmt with storage networks14:32
admin1oh 14:32
admin1hmm.. that works 14:32
noonedeadpunkI mean - you're about to pass the internal network, with all its communications, to a VM that is basically managed through an API and has customer traffic passing through it14:33
jrosseradmin1: you can still make a vxlan if you need14:34
noonedeadpunkI was thinking about that, but didn't dare to suggest14:34
admin1i am ok for all ideas :) 14:34
admin1this is a semi-dev-poc cluster 14:34
noonedeadpunkThough my suggestion would then also include using OVS for the bridges, but I've never tried how this is going to stack/work14:35
noonedeadpunk(ovs for control bridges is doable, but I think we may still have open patches to finalize this)14:35
admin1what i have is  a single interface with 5 tagged vlans .. so i gave br-vlan an ip .. and then of the 5 vlans,  2 = external ..  ,    and then br-mgmt, br-vxlan and br-storage each 14:35
admin1the tagged interfaces are in linuxbridge while the master interface is on ovn 14:36
noonedeadpunkWhat's the point of having br-vlan if you're limited on vlans?14:36
admin1i have to make do on what is given to me :) 14:36
noonedeadpunkyeah, but I don't think you will be creating vlan tenant networks?14:37
admin1tenant networks are all vxlan 14:37
noonedeadpunkyeah, so why do you need br-vlan?14:37
admin1for external 14:37
admin1ext-net1 and ext-net2 are  on 2 dedicated vlans ..   so i only have 3 more to go ..         and so far i am using it for mgmt, storage and vxlan  each 14:38
admin1and want to add lbaas on it also 14:38
noonedeadpunkah, ok, yeah, I guess I mixed external for tenants and external for public interface of cloud14:38
noonedeadpunkSo yeah, I'd either try to use vxlan for lbaas or merge storage with mgmt14:39
admin1how do I merge storage with mgmt ? replace container_bridge:  br-storage with br-mgmt and add the mgmt ip in the same br-mgmt ? 14:40
admin1change line2 to 22  ? => https://gist.github.com/a1git/1ba0d24eaea553eee5fb0fc8a5f374cd14:41
noonedeadpunkI think you can just drop br-storage?14:42
noonedeadpunkas you don't need any extra interface or anything basically14:43
noonedeadpunkbr-mgmt is already present anyway on all hosts that should have access to storage14:43
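In openstack_user_config terms the suggestion amounts to deleting the storage stanza and letting the existing container network on br-mgmt carry that traffic too; a hedged sketch following the usual provider_networks layout (interface names and queue names are placeholders):

```yaml
global_overrides:
  provider_networks:
    # the container/mgmt network stays as-is on br-mgmt ...
    - network:
        container_bridge: "br-mgmt"
        container_type: "veth"
        container_interface: "eth1"
        ip_from_q: "container"
        type: "raw"
        group_binds:
          - all_containers
          - hosts
        is_container_address: true
    # ... and the former br-storage stanza is simply removed; the NFS
    # server just needs to be reachable on the mgmt vlan
```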
admin1got it 14:43
admin1will give it a try 14:43
noonedeadpunkbut if you're deploying ceph-ansible with osa - you will likely need to define a couple of overrides14:44
admin1not in this one 14:44
admin1this one is non ceph .. 14:44
admin1using nfs for cinder/glance 14:44
noonedeadpunkyeah. so then just ensure that nfs has mgmt vlan14:45
admin1wait  ..  jrosser said its also possible to route using vxlan 14:45
admin1how ? 14:45
admin1create a manual vtep and bridge ? 14:45
jrosseryes14:46
admin1i want to test/explore this idea14:46
jrosserif you were really on it you could make that correspond to the lbaas network in neutron14:46
admin1because right now i have 5 vlans. maybe one day i get just 2 vlans and i still need to do stuff like lbaas and storage and trove etc that may need their own bridge .. 14:46
jrosserlike same VNI on the controller14:47
noonedeadpunkyeah vxlan is really good idea to use with lbaas14:47
admin1since osa only needs a bridge and does not care what underlies it, it's a good idea 14:47
jrosserit can be just standard /etc/<whatever-you-use> to configure networks for a vxlan14:47
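A configuration sketch of that manual VXLAN approach (interface names, VNI, group and addresses are invented for illustration; in practice the equivalent would live in the host's normal network configuration files):

```shell
# Back br-lbaas with a VXLAN tunnel over the mgmt underlay instead of a
# dedicated VLAN; requires root, and all identifiers here are placeholders.
ip link add vxlan-lbaas type vxlan id 100 dev br-mgmt dstport 4789 \
    group 239.1.1.100 ttl 5
ip link add name br-lbaas type bridge
ip link set vxlan-lbaas master br-lbaas
ip link set vxlan-lbaas up
ip link set br-lbaas up
ip addr add 172.29.252.10/22 dev br-lbaas
```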
noonedeadpunkoh, wait, you said OVN? Then not vxlan but geneve :D14:47
jrosserboth i think these days?14:48
admin1well, ovn and geneve can co-exist in different bridges 14:48
noonedeadpunkOne of these14:48
noonedeadpunkNot both at the same time afaik14:48
noonedeadpunkAlso - vxlan for ovn is limited to 4096 networks14:48
noonedeadpunkat least I was told so by slaweq after a couple of beers (sounds legit, doesn't it? hehe)14:49
jrosseradmin1: https://vincent.bernat.ch/en/blog#tag-network-vxlan14:49
noonedeadpunkI haven't tested ovn with vxlan though14:50
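The 4096 figure quoted above is plausible arithmetic: Geneve carries OVN's logical metadata in option TLVs, while VXLAN only offers the 24-bit VNI, so OVN reportedly has to split that VNI between network and port identifiers. Assuming a 12-bit share for the network (an assumption inferred from the quoted limit, not checked against OVN source here):

```shell
# If 12 of the VNI's 24 bits identify the logical network, the ceiling
# on networks falls out directly from the bit split:
vni_bits=24
network_bits=12
echo "max networks: $((1 << network_bits))"
echo "vni bits left for port ids: $((vni_bits - network_bits))"
```

Running it prints a 4096-network ceiling, matching the limit slaweq quoted.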
noonedeadpunk#startmeeting openstack_ansible_meeting15:03
opendevmeetMeeting started Tue Feb 14 15:03:34 2023 UTC and is due to finish in 60 minutes.  The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot.15:03
opendevmeetUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:03
opendevmeetThe meeting name has been set to 'openstack_ansible_meeting'15:03
noonedeadpunk#topic office hours15:03
noonedeadpunk#topic rollcall15:03
noonedeadpunko/15:04
noonedeadpunksorry for using wrong topic at first15:04
damiandabrowskihi15:05
jrossero/ hello15:07
noonedeadpunk#topic bug triage15:08
noonedeadpunkWe have couple of new bug reports, and one I find very scary/confusing15:08
noonedeadpunk#link https://bugs.launchpad.net/openstack-ansible/+bug/200704415:09
noonedeadpunkI've tried to inspect the code, at the very least for neutron, and haven't found any possible way for such a thing to happen15:09
noonedeadpunkI was thinking to maybe add extra conditions here https://opendev.org/openstack/ansible-role-python_venv_build/src/branch/master/tasks/python_venv_install.yml#L43-L48 to check for the common path we use for distro installs15:10
noonedeadpunkAs I can recall patching some role to prevent running python_venv_build for distro path15:11
noonedeadpunkAs then it will be passed venv_install_destination_path as <service>_bin and <service>_bin for sure comes from distro_install.yml for distro path15:12
noonedeadpunkBut the bug looks a bit messy overall15:12
jrosserwe could have a default that says `/openstack` in that role15:12
jrosserand if it doesn't match that prefix then `fail:`15:12
noonedeadpunkWell. I do use this role outside of openstack as well...15:13
jrosserright - so some more generic way of defining a "safe path"15:13
noonedeadpunkI just can't think of good way of doing that to be frank15:15
noonedeadpunkWe use `/usr/bin` mainly for distro path15:15
noonedeadpunkBut basically - ppl are free to set venv_install_destination_path to any crazy thing...15:16
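A guard along the lines discussed could look like this; the variable name and regex are invented for illustration, and deliberately overridable so that users of the role outside OpenStack can widen the allow-list:

```yaml
# Hypothetical sketch for python_venv_build: refuse to install a venv
# outside an allow-listed prefix. "venv_build_safe_path_regex" is an
# invented default, not an existing role variable.
venv_build_safe_path_regex: "^/openstack/.*"

# tasks/python_venv_install.yml (sketch):
- name: Fail if the venv destination looks unsafe
  ansible.builtin.fail:
    msg: >-
      Refusing to install venv to '{{ venv_install_destination_path }}':
      path does not match venv_build_safe_path_regex.
  when: venv_install_destination_path is not regex(venv_build_safe_path_regex)
```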
noonedeadpunkI was going to check some more roles to see if we might run the role somehow for the distro path...15:18
jrosseri think we need to ask for a reproduction and log in the bug report15:19
jrosseras i've never seen anything like that before15:19
noonedeadpunkAnother thing from the same person you've never seen...15:20
noonedeadpunk#link https://bugs.launchpad.net/openstack-ansible/+bug/200698615:21
noonedeadpunkI was going to create a sandbox, but haven't managed to15:22
noonedeadpunkBut since I know you're using dns-01 and have some envs on zed - I'm not really sure I will be able to reproduce that either15:23
damiandabrowski"Haproxy canno't using fqdn for binding and wait for an IP."15:24
damiandabrowskiis that really true?15:24
noonedeadpunkWell, as I wrote there - we have haproxy bound to an fqdn everywhere...15:25
noonedeadpunkI can assume that it might not be true with newer haproxy versions or when having DNS RR or failing to resolve DNS....15:25
noonedeadpunkBut I don't see any reference to binding on an FQDN in the haproxy docs https://www.haproxy.com/documentation/hapee/latest/configuration/binds/syntax/15:29
noonedeadpunkI kind of wonder if debian or something ships a newer haproxy where binding on an fqdn is no longer possible15:30
noonedeadpunk`The bind directive accepts IPv4 and IPv6 IP addresses.`15:30
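For reference, a minimal frontend of the shape being discussed; hostname and certificate path are placeholders. The behaviour everyone has been relying on appears to be that a hostname in `bind` is resolved once, at configuration parse time, even though the documentation quoted above only talks about IP addresses:

```
frontend openstack_internal
    # hostname resolved when the config is parsed, not per-request
    bind internal.example.cloud:443 ssl crt /etc/ssl/private/haproxy.pem
    mode http
    default_backend keystone_service-back
```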
noonedeadpunkActually, I'm thinking if it's not time to try to rename internal_lb_vip_address15:31
noonedeadpunkIt's hugely confusing15:31
damiandabrowskiworks fine at least on HA-Proxy version 2.0.29-0ubuntu1.1 2023/01/1915:31
noonedeadpunkWell. That could be some undocumented behaviour we've taken for granted....15:32
jrossercomment #9 suggests it is working now?15:33
jrosseri'm pretty unclear what is going on in the earlier comments15:33
noonedeadpunkyeah...15:33
jrosseroh right but `haproxy_keepalived_external_vip_cidr ` will stop the fqdn being in the config file?15:34
noonedeadpunkin keepalived file15:34
jrosserwell not sure actually15:34
noonedeadpunkfor haproxy you'd need haproxy_bind_internal_lb_vip_address15:35
noonedeadpunkI think we should get rid of internal/external_lb_vip_address by using smth with more obvious naming15:36
noonedeadpunkAs basically what we want this variable to be - represent public/internal endpoints in keystone?15:36
noonedeadpunkAnd serve as a default for keepalived/haproxy whenever possible15:37
noonedeadpunkso maybe we can introduce something like openstack_internal/external_endpoint and set its default to internal/external_lb_vip_address and replace _lb_vip_address everywhere in docs/code with these new vars?15:39
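A sketch of that rename, taken directly from the proposal above - new names defaulting to the old variables so existing overrides keep working (nothing here is merged code):

```yaml
# group_vars sketch: clearer names for what the values actually represent,
# defaulting to the legacy variables for compatibility.
openstack_external_endpoint: "{{ external_lb_vip_address }}"
openstack_internal_endpoint: "{{ internal_lb_vip_address }}"
```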
jrosserhaving it actually describe what it is would be good15:40
jrosserthough taking into account doing dashboard.example.com and compute.example.com rather than port numbers would be good too15:40
jrosserthere is perhaps a larger piece of work to understand how to make that tidy as well15:41
noonedeadpunkwhat confuses me a lot - saying that address can be fqdn...15:41
noonedeadpunkyeah, I assume that would need quite a few ACLs, right?15:41
jrosseryeah but perhaps that makes it clearer what we need15:41
jrosseras the thing that haproxy binds to is either some IP or a fqdn15:42
noonedeadpunkI'm not sure now if it should bind to fqdn.... or if it does in 2.6 for example...15:42
jrosserand we completely don't handle dual stack nicely either15:43
jrosserfeels like we get to PTG topic area with this tbh15:43
noonedeadpunkyeah, totally... Let me better write it down to etherpad :D15:43
jrosserdual stack is possible - we have it but the overrides are really quite a lot15:44
noonedeadpunkI'd say one of the problems as of today - <service>.example.com is part of the role 15:45
noonedeadpunkservice role I mean15:46
noonedeadpunkAs I guess we should join nova_service_type with internal_lb_vip_address by default for that15:47
noonedeadpunkSo this leads us to more relevant topic15:47
noonedeadpunk#topic office hours15:48
noonedeadpunkCurrent work that happens on haproxy with regards to internal TLS15:48
damiandabrowskitoday I'm working on:15:49
damiandabrowski- removing haproxy_preconfigured_services and sticking only with haproxy_services15:49
damiandabrowski- adding support for haproxy_*_service_overrides variables15:49
damiandabrowski- evaluating possibility of moving LE temporary haproxy service feature from haproxy_server role to openstack-ansible repo15:49
damiandabrowskii'll push changes today/tomorrow15:49
damiandabrowskiI also pushed PKI/TLS support for glance and neutron (however i need to push some patches to dependent roles to get them working):15:49
damiandabrowskihttps://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/82101115:49
damiandabrowskihttps://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/87365415:49
noonedeadpunkdamiandabrowski: I have a question - was there a reason why we don't want to include haproxy role from inside of service roles?15:50
noonedeadpunkAs it feels right now, the implementation of these named endpoints will be way easier, as we will have access to vars that are defined inside the roles15:51
noonedeadpunkOr was it somehow more tricky with delegation?15:51
jrosserwhat would we do for galera role there?15:51
noonedeadpunkAnd handlers?15:51
jrosserdo we want to couple the galera role with the haproxy one like that when they are currently independent15:52
noonedeadpunkjrosser: to be frank, I should return to my work on proxysql that I put on hold a year ago...15:52
jrosseri am also using galera_server outside OSA15:53
damiandabrowskihmm, i'm not sure if i understand you correctly, can you provide some example?15:53
damiandabrowskior do you think it would be better to patch each role?15:53
noonedeadpunkWell. That doesn't make haproxy a really good option....15:53
noonedeadpunkfor galera balancing15:53
jrosseranyway fundamental question seems to be if we should call haproxy role from inside things like os_glance15:55
jrosseror if it should be done somehow in the playbook15:55
noonedeadpunkyes ^15:55
jrosserand then also i am not totally following damiandabrowski> - evaluating possibility of moving LE temporary haproxy service feature from haproxy_server role to openstack-ansible repo15:56
jrosser^ is this about how the code is now, or modifying the new patches15:56
noonedeadpunkjrosser: For me personally it doesn't make much sense to make galera dependent on haproxy15:56
noonedeadpunkI'm not sure though if you wanted to do that or not15:56
jrosseri think we should keep those decoupled, and also rabbitmq15:57
noonedeadpunkBut I'd rather not, and leave galera in default_services or whatever the var will be15:57
noonedeadpunkYes15:57
noonedeadpunkbut for os_<service> I think it does make sense to call haproxy role from them15:57
damiandabrowski"^ is this about how the code is now, or modifiying the new patches" - modifying patches, that was your suggestion, right?15:58
jrosseryes, thats right15:58
jrosseris it possible to make nearly no change to haproxy role?15:59
damiandabrowskii don't think so...16:00
damiandabrowskibut i can at least try to make as few changes as possible16:01
damiandabrowskii still have no idea how we can avoid having one haproxy_service_config.yml for "preconfigured" services and another for services configured by service playbooks16:02
jrosserwe can talk that though if you like16:03
jrosser*through16:03
noonedeadpunkWe can make some call even if needed16:03
damiandabrowskiyeah, sure16:05
noonedeadpunk#endmeeting16:05
opendevmeetMeeting ended Tue Feb 14 16:05:25 2023 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:05
opendevmeetMinutes:        https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-02-14-15.03.html16:05
opendevmeetMinutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-02-14-15.03.txt16:05
opendevmeetLog:            https://meetings.opendev.org/meetings/openstack_ansible_meeting/2023/openstack_ansible_meeting.2023-02-14-15.03.log.html16:05
noonedeadpunkSorry, we were out of time :(16:05
noonedeadpunkBut I think we at least need to decide how we want to call the haproxy role - from playbooks or roles. Calling from playbooks leaves the role cleaner, but indeed I'm thinking role vars might be useful 16:06
damiandabrowskii can evaluate if it's possible16:07
noonedeadpunkOr, I can recall one of the iterations where the definition was in role defaults and the include was on playbook level - I assume that it was somehow reading these vars?16:08
jrosserwell i already did not like having haproxy_* vars in role/defaults/main.yml16:08
jrosseranyway16:08
noonedeadpunkjrosser: well, if they will be glance_haproxy_service?16:08
noonedeadpunklike we do definition for pki for example16:08
jrosseryeah thats not so bad16:09
noonedeadpunkAlso the changes you're referring to are already abandoned :) But I was more talking about how, on the playbook level, you somehow got access to role vars16:10
noonedeadpunkThen basically it doesn't matter where to trigger role16:10
noonedeadpunkThough I still don't get how this https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/871193/1/defaults/main.yml was working...16:11
damiandabrowskisorry, i'm getting distracted by personal matters16:14
damiandabrowskibut I'm also not sure if triggering haproxy role from the inside of other roles(openstack services, repo, rabbitmq, galera etc.) is the right thing to do16:16
noonedeadpunkok, good then :)16:22
damiandabrowskiso maybe i can fix all your comments which are fairly easy to fix first, and during next iteration(later this week) we can discuss more complex issues together16:26
damiandabrowskiis that ok for you?16:26
jrossersure16:28
spateljamesdenton around?18:44
spatelI have a question on netplan: if i create bridges using netplan (br-mgmt, br-vlan etc...) is STP disabled by default on netplan-created bridges? 18:45
spatelThe official Ubuntu documentation says STP is enabled by default. But when i use the brctl show command it says STP enabled: no 18:46
spatelwho should i believe? 18:46
noonedeadpunkyou should believe what you see on your system as a result :)18:53
opendevreviewJonathan Rosser proposed openstack/openstack-ansible-haproxy_server master: Add a variable to allow extra raw config to be applied to all frontends  https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/87374518:57
jrossernoonedeadpunk: i applied the workaround to haproxy with that, and well, "nothing broke" :)18:58
noonedeadpunkjrosser: I wonder if we should apply |unique as well ?19:00
noonedeadpunkbut yeah, we never did that before, so 19:01
spatelnoonedeadpunk you are saying whatever brctl says is true? 19:03
jrosseri thought we could also add another role scope variable here (item.service.haproxy_frontend_raw|default([ `haproxy_frontend_raw` ])) + haproxy_frontend_extra_raw19:04
jrosserbecause the default to [] is a bit sad there19:04
jrosserthen you can do a conditional global setting if the service doesn't have anything already, as well as additive "extra" raw statements19:05
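The combination jrosser sketches could be written out like this; the variable names are taken from the discussion, while the template expression itself is illustrative, not merged code:

```yaml
# defaults (sketch): a role-wide raw block that a service-level value can
# override, plus an "extra" list that is always appended.
haproxy_frontend_raw: []
haproxy_frontend_extra_raw: []

# in the frontend template (sketch):
# {% for line in (item.service.haproxy_frontend_raw
#                 | default(haproxy_frontend_raw))
#                + haproxy_frontend_extra_raw %}
#     {{ line }}
# {% endfor %}
```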
noonedeadpunkspatel: netplan is just a configuration tool that creates config and passes it to systemd-networkd19:07
noonedeadpunkbrctl shows you the currently applied configuration, basically19:07
spatelHmm but why does netplan say STP is enabled by default on the bridges it creates? I was confused there. 19:08
spatelWe had some strange STP loop from one of the compute nodes and while debugging i found this contradictory statement 19:08
noonedeadpunkwell... it's indeed true in the code https://github.com/canonical/netplan/blob/0.105/tests/generator/base.py#L11519:10
noonedeadpunkdisregard that please19:11
noonedeadpunkbut yeah, I think in the code it's indeed set to true... even though it's C code...19:13
noonedeadpunkthat could be actually quite legit bug in netplan...19:13
spatelI have CentOS and Ubuntu computes and i have noticed the STP loop only on Ubuntu computes.. It has happened multiple times 19:19
spatelNow i am nervous that it's possibly a stupid bug hurting me.. 19:19
spatelHow do i prove that STP is really, really off on the bridges? 19:20
admin1brctl show will say STP is on yes or no 21:40
admin1that is not enough ? 21:40
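One way to take the documentation question out of the picture entirely is to set the parameter explicitly in netplan and then read the kernel's own view of the bridge back; the stanza below is illustrative (file and interface names are placeholders):

```yaml
# netplan sketch: state the STP choice explicitly rather than relying on
# whichever default turns out to be real.
network:
  version: 2
  bridges:
    br-mgmt:
      interfaces: [vlan10]
      parameters:
        stp: false
```

Besides brctl, the kernel exposes the applied state in /sys/class/net/br-mgmt/bridge/stp_state (0 means STP off), which reflects what the running bridge is actually doing regardless of what any tool's documentation claims.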
ElDuderinoAnother ‘not sure how to ask you’ question. Background: I had 3k (test) vms that were booted from volumes. I successfully deleted all 3k vms in one go/pass. I then issued a command to delete the orphaned vols, via openstack cli which succeeded.22:07
ElDuderinoWhile monitoring all the vhosts in the rabbit dashboard, I didn't note many non-acks. Everything seems to run as expected.22:07
ElDuderinoOh, while deleting the 3k orphaned vols, it's also worth noting that I had 0 vms at this point.22:07
ElDuderinoAt that time, I issued a concurrent vm create request, boot from volume (to test volume CRUD limits) for ~400 vms. 22:07
ElDuderinoAt which point nova filtering (we don't use the placement service, I know, I know) found valid hosts, neutron assigned IPs as expected, glance cached the images to the hosts but they all failed to build b/c they were waiting on cinder to process the concurrent (new) create requests while also processing the prior delete requests.22:07
ElDuderinoI looped a script to tell me how many vols were left to delete to monitor it and once the number left dropped below 1k or so, I was able to create the delta (up to the 1k point, or whatever the number was, I don’t recall at the moment) with no failures.22:07
ElDuderinoAll of that to say, where do I set/tune/test different values? Or is there a max quota set elsewhere that concerns itself with cinder calls? I’ll keep googling but wanted to ask you also. THX.22:08

Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!