Thursday, 2019-02-21

*** hwoarang has quit IRC00:04
openstackgerritMerged openstack/openstack-ansible stable/rocky: Mark (and use) version with the OSA wrapper script  https://review.openstack.org/63736600:04
*** hwoarang has joined #openstack-ansible00:05
*** macza has quit IRC00:06
*** markvoelker has joined #openstack-ansible00:19
*** tosky has quit IRC00:19
*** cmart has quit IRC00:55
*** logan- has quit IRC01:22
*** logan- has joined #openstack-ansible01:23
NobodyCamis it possible to set up OSA-aio on a remote host?01:28
*** logan- has quit IRC01:31
*** logan- has joined #openstack-ansible01:31
*** mmercer has quit IRC01:36
*** DanyC has quit IRC01:39
jamesdentonwhat do you mean?01:43
*** sdake has quit IRC01:44
jamesdentonYou can use the AIO plays to setup openstack on other hosts in addition to the AIO node (i.e. using the AIO node as a controller/compute/etc and have additional computes), but it won't handle any of the other networking/bootstrapping; that's only done on the AIO node itself01:45
*** hwoarang has quit IRC01:47
*** sdake has joined #openstack-ansible01:47
*** hwoarang has joined #openstack-ansible01:48
*** hwoarang has quit IRC01:55
*** hwoarang has joined #openstack-ansible01:56
cloudnulllbragstad I can't say I've ever seen that issue02:03
cloudnullhowever our key rotation policy may not be as aggressive?02:04
*** gyee has quit IRC02:06
lbragstadyeah - possibly02:06
cloudnullis this user using something like a 1 min rotation ?02:06
lbragstadrotations every hour02:06
lbragstadtoken expiration every 2 hours02:06
lbragstad3 max keys02:06
cloudnullhum02:08
lbragstadpas-ha apparently hit the issue using an internal openstack deployment they use for ci/cd, which just churns out vms02:08
cloudnullour default policy is daily rotation w/ 7 keys02:08
cloudnullI guess if you were hammering the API during a rotation event we'd see that same issue02:09
lbragstadright02:09
cloudnullno matter the event timing02:09
lbragstadit has to get routed to a host thats in the middle of an rsync operation, too02:10
lbragstadlike - exactly in the middle02:10
lbragstador anytime the new staged key is sync'd, but before the primary key is sync'd02:10
cloudnullseems like this would be best solved client side. like 401, retry02:11
lbragstadyeah02:11
lbragstada subsequent request would work02:11
cloudnullhowever, maybe they're seeing 10 million 401s all spike up then back to normal work02:11
cloudnulland then they're seeing this every hour02:11
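[editor's note] The client-side fix suggested above is just a bounded retry on 401. A toy sketch; the `request` function here is hypothetical, standing in for a keystone API call that 401s once because validation landed mid key-sync:

```shell
attempt=0
request() {
    # hypothetical API call: returns 401 on the first attempt only,
    # mimicking token validation that lands in the middle of a key rotation
    attempt=$((attempt + 1))
    if [ "$attempt" -eq 1 ]; then status=401; else status=200; fi
}
request
if [ "$status" = "401" ]; then
    request   # a single retry succeeds once the staged/primary keys have synced
fi
echo "$status"
```

Run as written this prints `200`: the first call 401s, the single retry succeeds.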
lbragstadotherwise - you'd have to reverse the order in which you copy keys based on the key index02:11
lbragstadif you copy the key with the highest index first, the problem should go away02:12
lbragstad(but we leave that implementation detail to rsync)02:12
cloudnullI wonder if we could do a list of all the files in the dir and then reverse the order in the command?02:14
lbragstadand do a manual copy of each file?02:15
cloudnullsomething rsync $flags 7 6 5 4 3 2 1 $user@$server:$path02:15
lbragstador still use rsync somehow?02:15
lbragstadoh - sure02:15
cloudnullIDK if that would preserve the ordering but that would work ?02:15
cloudnull**should work02:15
*** DanyC has joined #openstack-ansible02:16
lbragstadhttps://pasted.tech/pastes/ec9618b32aa49b7bbca9c4f32a014c83995d79da.raw02:18
cloudnullso if we specify the file ordering and set the flag `--delay-updates` it should go ?02:19
lbragstadsounds like two different solutions02:19
lbragstadone is to explicitly call out which files you want transferred first and use individual rsync commands02:20
cloudnullI guess we could use scp instead of rsync02:20
lbragstadthe second is to use --delay-updates02:20
cloudnullthat would keep the order02:20
lbragstad--delay-updates sounds like it just minimizes the window of susceptibility02:20
lbragstadyou could theoretically still hit the problem, just less likely02:21
cloudnullscp would be slower, and technically just copying the files one at a time, but could be done in a given order.02:21
lbragstadright02:21
lbragstadkey distribution isn't time sensitive anyway02:21
lbragstadwe don't require that you rush02:21
cloudnullok. so we could just change https://github.com/openstack/openstack-ansible-os_keystone/blob/master/templates/keystone-fernet-rotate.sh.j2#L34 to use scp and https://github.com/openstack/openstack-ansible-os_keystone/blob/master/templates/keystone-fernet-rotate.sh.j2#L37 to be a list of the files in reverse order02:23
cloudnulland the problem should be solved?02:23
lbragstadi think so?02:23
lbragstadfwiw - i haven't verified this locally02:23
cloudnullso this is the normal sort order02:30
cloudnullhttps://pasted.tech/pastes/f87e7c2eedc586ea5725e4ec05a5636f9d3c746a02:30
cloudnulland we want it based on the time it was created https://pasted.tech/pastes/fcd8c0f781bc03f864481335733bc1f5411fe4b502:30
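[editor's note] The ordering problem above can be shown in one line: fernet keys are files named by integer index, and a plain lexical sort mis-orders multi-digit indices ("10" before "2"). A reverse *numeric* sort puts the highest-index (primary) key first. Minimal illustration with made-up key names:

```shell
# Fernet key repositories contain files named 0, 1, 2, ... N.
# sort -rn forces numeric ordering, reversed, so the primary
# (highest-index) key comes out first.
keys="0 1 2 3 10"
printf '%s\n' $keys | sort -rn
```

This prints 10, 3, 2, 1, 0, one per line, matching the "copy the key with the highest index first" ordering discussed above.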
*** cmart has joined #openstack-ansible02:44
*** sdake has quit IRC02:48
*** sdake has joined #openstack-ansible02:49
*** sdake has quit IRC02:57
*** sdake has joined #openstack-ansible02:59
lbragstadso - if that's the node you're rotating from03:05
lbragstadthen i think it has to be 0 $highest-index03:05
lbragstadactually - $highest-index should be first (i think)03:06
lbragstadbecause the key with the highest index is the key encrypting tokens03:06
lbragstadall other keys, including the 0 key, are secondary keys that can only be used to decrypt tokens03:07
lbragstadthe only difference between a non-zero secondary key and the 0 key is that the 0 key hasn't had the opportunity to encrypt anything, yet03:07
lbragstadso - that should mean if you transfer the key with the highest index first, you should be able to validate tokens that were *just* encrypted with that key03:08
lbragstadand if you're using scp - you shouldn't be wiping the entire key repository, so other secondary keys should still be available if an older token comes in for validation halfway through the rotation03:09
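[editor's note] The copy-one-key-at-a-time approach can be sketched as below. This is a local simulation only: `cp` stands in for `scp`, and the paths and loop are illustrative, not the actual keystone-fernet-rotate.sh.j2 template. The point is that per-file copies in reverse index order never empty the receiving repository, so secondary keys stay available mid-sync:

```shell
# Simulate distributing a fernet key repository one file at a time,
# highest index first (cp standing in for scp).
src=$(mktemp -d); dst=$(mktemp -d)
for k in 0 1 2 3; do echo "key-$k" > "$src/$k"; done
for k in $(ls "$src" | sort -rn); do
    # in a real script this would be something like:
    #   scp "$src/$k" "$user@$server:$path/"
    cp "$src/$k" "$dst/"
done
ls "$dst" | sort -n
rm -rf "$src" "$dst"
```

The destination ends up with keys 0 through 3, and at every intermediate step it held the current primary key plus whichever secondaries had already arrived.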
*** alvinstarr has quit IRC03:16
*** markvoelker has quit IRC03:20
cloudnull sorry was making dinner, back03:26
*** spsurya has joined #openstack-ansible03:34
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-os_keystone master: Correct fernet token sync race condition  https://review.openstack.org/63832703:48
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-os_keystone master: Correct fernet token sync race condition  https://review.openstack.org/63832703:49
cloudnulllbragstad ^03:49
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-os_keystone master: Correct fernet token sync race condition  https://review.openstack.org/63832703:51
*** udesale has joined #openstack-ansible03:51
*** hwoarang has quit IRC04:12
*** hwoarang has joined #openstack-ansible04:14
*** markvoelker has joined #openstack-ansible04:21
lbragstadcloudnull nice - thanks!04:25
*** ianychoi has quit IRC04:31
*** jbadiapa has quit IRC04:38
*** jbadiapa has joined #openstack-ansible04:38
*** cmart has quit IRC04:44
*** vnogin has joined #openstack-ansible04:48
*** sdake has quit IRC04:50
*** vnogin has quit IRC04:52
*** sdake has joined #openstack-ansible04:54
*** markvoelker has quit IRC04:55
*** ArchiFleKs has quit IRC04:56
*** cmart has joined #openstack-ansible04:59
*** ArchiFleKs has joined #openstack-ansible05:13
*** sdake has quit IRC05:14
*** sdake has joined #openstack-ansible05:15
*** hwoarang has quit IRC05:16
*** hwoarang has joined #openstack-ansible05:18
*** shyamb has joined #openstack-ansible05:19
*** hwoarang has quit IRC05:39
*** hwoarang has joined #openstack-ansible05:41
*** sdake has quit IRC05:48
*** sdake has joined #openstack-ansible05:51
*** markvoelker has joined #openstack-ansible05:52
*** gokhani has joined #openstack-ansible05:53
*** lbragstad_ has joined #openstack-ansible05:53
*** lbragstad has quit IRC05:55
*** cmart has quit IRC05:56
*** kmrchdn is now known as chandankumar06:01
*** sdake has quit IRC06:03
*** lbragstad has joined #openstack-ansible06:04
*** sdake has joined #openstack-ansible06:04
*** lbragstad_ has quit IRC06:04
*** lbragstad_ has joined #openstack-ansible06:09
*** lbragstad has quit IRC06:10
*** lbragstad has joined #openstack-ansible06:16
*** lbragstad_ has quit IRC06:18
*** markvoelker has quit IRC06:25
*** spsurya has quit IRC06:25
*** sdake has quit IRC06:37
*** shyamb has quit IRC06:55
*** jorti_ has quit IRC07:03
*** jorti has joined #openstack-ansible07:05
*** shyamb has joined #openstack-ansible07:06
fnpanichi07:15
*** spsurya has joined #openstack-ansible07:17
*** kopecmartin|off is now known as kopecmartin07:19
*** pcaruana has joined #openstack-ansible07:19
*** markvoelker has joined #openstack-ansible07:22
*** cshen has joined #openstack-ansible07:27
*** DanyC has quit IRC07:35
*** DanyC has joined #openstack-ansible07:36
*** shyamb has quit IRC07:37
*** cshen has quit IRC07:41
*** markvoelker has quit IRC07:55
*** cshen has joined #openstack-ansible08:08
*** osackz has quit IRC08:08
*** phasespace has joined #openstack-ansible08:11
*** cshen has quit IRC08:22
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: use run_stackviz to generate stackviz report  https://review.openstack.org/63836008:22
*** lbragstad has quit IRC08:25
*** tosky has joined #openstack-ansible08:26
*** ivve has joined #openstack-ansible08:27
*** gillesMo has joined #openstack-ansible08:34
*** fghaas has joined #openstack-ansible08:36
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added tempest.conf for heat_plugin  https://review.openstack.org/63202108:39
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Use the correct heat tests  https://review.openstack.org/63069508:40
*** DanyC has quit IRC08:41
CeeMacmorning08:47
odyssey4meprometheanfire there is instrumentation in the lxc-container-create role to allow a different distro than the host, but the osa integrated build does not use it08:48
odyssey4meprometheanfire yes, OSA uses the infra afs mirrors08:49
odyssey4mecjloader cloudnull d34dh0r53 the ping package was added to the wrong list: https://review.openstack.org/#/c/638189/4/vars/redhat-7.yml08:49
*** cshen has joined #openstack-ansible08:50
odyssey4meprometheanfire to use the infra mirrors, though, we override the defaults in the roles via extra-vars in the integrated build, and via set_facts in the tests repo playbooks (for role tests)08:52
*** markvoelker has joined #openstack-ansible08:52
*** cshen has quit IRC08:55
*** shyamb has joined #openstack-ansible08:58
*** gillesMo has quit IRC09:03
odyssey4meevrardjp heh, no need for https://review.openstack.org/637361 / https://review.openstack.org/637359 / https://review.openstack.org/637363 any more, because the use of the env var has merged :)09:05
*** electrofelix has joined #openstack-ansible09:08
evrardjphaha :)09:09
evrardjpgood09:09
evrardjpI need to do something on master, because master must look very weird09:09
evrardjpI have to discuss it with mnaser09:09
evrardjpbut cool :)09:09
*** cshen has joined #openstack-ansible09:22
*** markvoelker has quit IRC09:25
chandankumarodyssey4me: evrardjp I need some help on this review https://review.openstack.org/#/c/632726/ I am facing this error http://logs.openstack.org/26/632726/12/check/tripleo-ci-centos-7-standalone-os-tempest/d22a454/job-output.txt.gz#_2019-02-19_09_46_00_50546509:32
chandankumarodyssey4me: evrardjp since it is an action plugin, if I set dependencies under meta/main.yaml it is always assumed to be a role09:33
chandankumarbut I need to set it as an action plugin09:33
chandankumaris it possible to do that?09:33
openstackgerritMerged openstack/ansible-config_template master: Add multistropt test cases  https://review.openstack.org/63660309:35
openstackgerritMerged openstack/ansible-config_template master: Remove whitespace before comments  https://review.openstack.org/63693509:35
*** shyamb has quit IRC09:41
*** shyamb has joined #openstack-ansible09:50
*** iurygregory has quit IRC10:01
*** iurygregory has joined #openstack-ansible10:01
*** phasespace has quit IRC10:09
*** markvoelker has joined #openstack-ansible10:17
*** spsurya has quit IRC10:22
*** DanyC has joined #openstack-ansible10:25
openstackgerritChandan Kumar proposed openstack/ansible-config_template master: Fixed config_template setup.cfg to treat as a role  https://review.openstack.org/63838310:26
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added dependency of os_tempest role  https://review.openstack.org/63272610:27
*** phasespace has joined #openstack-ansible10:33
chandankumarodyssey4me: jrosser https://review.openstack.org/#/c/638360/ stackviz enable disable10:34
*** ChosSimbaOne_Lap has quit IRC10:34
*** DanyC has quit IRC10:38
*** DanyC has joined #openstack-ansible10:39
*** ChosSimbaOne_Lap has joined #openstack-ansible10:41
*** Chosimba1 has joined #openstack-ansible10:42
*** Chosimba1 has quit IRC10:44
*** ChosSimbaOne_Lap has quit IRC10:45
*** ChosSimbaOne has joined #openstack-ansible10:45
*** shyamb has quit IRC10:46
ChosSimbaOneHi. I am currently diving into OpenStack-Ansible for setting up our openstack installation. I am installing on top of Ubuntu 18.04, which uses netplan for network setup. All the documentation relates to the old ifupdown way of doing networking in Ubuntu. What is recommended with openstack-ansible for Ubuntu 18.04? Replace netplan with ifupdown, or is there any documentation on setting up with netplan?10:48
chandankumarodyssey4me: I think if I uncomment this line https://git.openstack.org/cgit/openstack/ansible-role-python_venv_build/tree/defaults/main.yml#n22 then I can reuse it as a dependency in os_tempest10:49
*** DanyC has quit IRC10:50
chandankumaranyway this var is getting overwritten so I do not think it will be a problem10:50
chandankumarodyssey4me: what you say?10:50
*** ChosSimbaOne has quit IRC10:52
*** ChosSimbaOne has joined #openstack-ansible10:52
openstackgerritChandan Kumar proposed openstack/ansible-role-python_venv_build master: Uncomment venv_install_destination_path for using as a role  https://review.openstack.org/63839310:54
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: Added dependency of os_tempest role  https://review.openstack.org/63272610:55
*** stuartgr has joined #openstack-ansible10:59
*** shyamb has joined #openstack-ansible11:15
*** udesale has quit IRC11:26
*** ArchiFleKs has quit IRC11:26
*** ArchiFleKs has joined #openstack-ansible11:31
*** fghaas has quit IRC11:39
*** fghaas has joined #openstack-ansible11:54
*** sdake has joined #openstack-ansible12:02
*** cshen has quit IRC12:07
phasespaceI'm testing openstack-ansible upgrades. After the upgrade I see stuff like this in the ceilometer logs: Invalid input: extra keys not allowed @ data[u'flavor_name']. Googling this, I see people saying it will be fixed by running ceilometer-upgrade to feed gnocchi with the new resource types. Is that so? Will I break anything by running this?12:08
chandankumarodyssey4me: I think my venv changes broke it12:13
jamesdentonmornin' folks12:13
chandankumarthe tripleo jobs12:13
CeeMacmorning jamesdenton12:14
CeeMacjust the man i was looking for actually :)12:15
jamesdentonalrighty, let's do this12:15
CeeMacremember the other day I was saying about issues on my network, well the plot thickens12:16
gundalowAnyone able to take a look at https://github.com/ansible/ansible/pull/5269912:16
CeeMaclost network connectivity to one of the nodes early morning around 6/7am (different times on different nodes)12:16
CeeMacand I've been combing through the logs and found this12:16
CeeMachttp://paste.openstack.org/show/745596/12:16
CeeMacany idea whats going on there?12:17
jamesdentonone sec12:18
CeeMacsure12:18
CeeMacsorry to jump on you first thing12:18
jamesdentonthat's alright. You're on Ubuntu? With netplan, right?12:19
CeeMacyep12:19
CeeMacseems to occur frequently12:19
CeeMacbut only affects some of the nodes12:19
jamesdentonAnd i'm assuming you're referring to the stop/start?12:20
CeeMacthe only way to get network back is to bounce a switch port.  The strange thing is, disabling either switchport afterwards doesn't affect the network at all.  So maybe some arp aging needs factoring in12:20
CeeMacyeah, after the systemd running in system mode12:20
CeeMackernel log just shows the ports going down then back up (and the various bonds etc)12:21
CeeMacno idea what is causing systemd to do this?12:22
jamesdentonanything in dmesg around the same time?12:22
CeeMaclet me check12:23
CeeMacno dmesg12:23
CeeMachmm12:25
*** kmadac has joined #openstack-ansible12:25
CeeMacdpkg upgrade ran maybe?12:25
CeeMachttp://paste.openstack.org/show/745599/12:25
CeeMacpretty sure only kernel security patches should be getting installed automatically12:25
jamesdentondefinitely looks like a new version of systemd (237) might've been installed.12:26
*** cshen has joined #openstack-ansible12:27
jamesdentonIf you run something like netplan apply or systemctl restart systemd-networkd, do you lose connectivity?12:27
CeeMaclets find out :)12:28
CeeMachmmm12:28
CeeMacnot that time12:28
CeeMaclet me try netplan apply12:29
CeeMacnope12:29
CeeMaclong ping reply but no drop12:29
jamesdentonk12:29
jamesdentonhow about systemctl daemon-reexec12:29
CeeMacnope12:30
CeeMacthat created the systemd running in system mode messages in syslog12:30
jamesdentonIf you have prior timestamps, can you check those to see if they correlate to the same event? or a systemd restart or upgrade?12:30
CeeMacbut not the network stop/start12:30
jamesdentonok12:30
CeeMacwhen i was losing connectivity during playbook install it was always around systemd upgrade / change12:31
CeeMaclet me see how far the logs go back on this server12:32
jamesdentoncan you share your netplan config files?12:32
CeeMacsure12:32
CeeMachttp://paste.openstack.org/show/745601/12:33
jamesdentonthanks12:34
CeeMacnp12:34
jamesdentonThe playbook install you mentioned.. did you reliably experience loss of connectivity during OSA install or something?12:34
CeeMacno issue during the intial install12:35
CeeMacbut when I was running minor upgrade to 18.1.3 i lost connectivity to some of the nodes during both host-setup and inf-setup12:35
CeeMacalways during the lxc-hosts install apt task12:36
CeeMacwhich does target systemd12:36
jamesdentonok, where it might run an 'apt update' or something?12:36
jamesdentonor upgrade, rather12:37
CeeMacyep12:37
CeeMacwell, it installs a specific set of packages from a variable iirc12:37
CeeMacand presumably would upgrade them if required12:37
jamesdentonWell, i think any system package is targeted, too, with this: apt-get upgrade -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" --force-yes12:39
CeeMacI'll trust you on that :)12:39
jamesdentonwell, don't. haha12:39
CeeMaci didnt get quite that far down the rabbit hole :)12:39
jamesdentonas far as your switch config goes, you don't have any special bonding configuration on there, right? Just configured as normal trunk interfaces?12:40
CeeMacjust normal trunk interface yeah12:40
CeeMacthey're independent switches12:40
CeeMacno cross links, stacks, virtual-chassis or anything12:41
jamesdentonk12:41
CeeMacalthough they're both in the same blade chassis12:41
*** shyamb has quit IRC12:41
jamesdentonthe blade. forgot about the blade.12:41
*** ansmith has quit IRC12:43
*** kaiokmo has quit IRC12:44
CeeMacthe other weird thing is keepalived periodically loses the vip and I need to restart the service12:45
jamesdentonas in, the vip disappears from the interface?12:45
CeeMachaproxy blade is set with flat networks though, no bond, can't see anything in logs so far to account for this12:45
CeeMacyeah12:45
CeeMacbut the keepalived and haproxy services are both up12:46
jamesdentonyou have three infras? or only the one12:46
CeeMachaprxy is on tin12:46
CeeMacand there is only 1 right now12:46
CeeMacso if the network is failing maybe keepalived is "failing over" to nothing, need to look into that more12:46
CeeMacmy next job is to double up the tin for each role to make everything fully HA, wanted to get the minor upgrade out of the way first12:47
CeeMacthen ran into this weird network issue12:47
jamesdentonso, you might consider removing the 'primary' parameter on the bond12:48
jamesdentonAre you able to successfully test failover manually? Shutting down interfaces, yanking cables, etc12:48
CeeMaci tested shutting down the interface this morning and it seemed ok, i'll test again12:49
CeeMaccan't pull cables as theyre hooked in via midplane to blade switches12:49
CeeMacbrb12:49
jamesdentonwhen this happens, do you have an OOB interface you come in on, or use a virtual console or something?12:50
jamesdentonright ok12:50
chandankumarodyssey4me: please have a look at this issue http://logs.openstack.org/26/632726/14/check/tripleo-ci-centos-7-standalone-os-tempest/a4c5081/job-output.txt.gz#_2019-02-21_12_07_39_45113312:53
*** nicolasbock has joined #openstack-ansible12:57
CeeMacback12:58
CeeMacyeah, i have iLO / virtual console access12:58
jamesdentonCeeMac if/when this happens again, it would be worth doing a tcpdump on the two interfaces of the bond to see if there's any traffic making it12:59
jamesdentonarp, unicast, whatever12:59
CeeMaci only tend to catch it retrospectively unfortunately13:00
CeeMac6am seems to be a trigger on some of the servers13:00
jamesdentonso networking is already back up?13:00
CeeMacoh, i see what you mean13:00
CeeMacno, it doesn't come back until i bounce the ports13:00
*** kaiokmo has joined #openstack-ansible13:00
jamesdentonk, yeah, before you bounce13:00
CeeMacgood plan13:01
CeeMacok, so i've shut down each interface in turn, waited 10 secs then unshut it13:12
CeeMacno ping drop from server13:12
CeeMacso it must be a software issue?13:12
jamesdenton*shrug*13:12
jamesdentonthis is where it gets fun13:12
CeeMachaha13:13
CeeMacyeah13:13
CeeMacok, i'll ignore it until it goes off overnight again, then i'll take some pcaps13:13
*** Mr_Smurf has joined #openstack-ansible13:14
Mr_SmurfRocky + Xenial.. supported or not?13:14
CeeMacdo you think its worth removing the primary?13:14
CeeMacon the bond?13:14
jamesdentonCeeMac Up to you - you can leave it and do the caps next time it happens, or remove it and see if the issue persists13:15
CeeMaci'll wait then, in the interest of scientific research13:15
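[editor's note] For reference, the "remove primary" suggestion refers to netplan's bond parameters. A hedged sketch of an active-backup bond stanza without a `primary:` key; interface names and values here are illustrative, not CeeMac's actual config (which is only linked as a paste above):

```yaml
# Illustrative /etc/netplan/... bond fragment. With mode active-backup
# and no "primary:" set, the bond stays on whichever slave is currently
# active rather than failing back to a preferred interface when it returns.
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: active-backup
        mii-monitor-interval: 100
```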
CeeMacjust going to check the other nodes that have bonds, only 2 went down, but maybe only 2 got updated13:16
jamesdentonMr_Smurf I believe so?13:16
jamesdentonMr_Smurf This seems to imply it is: https://docs.openstack.org/project-deploy-guide/openstack-ansible/rocky/targethosts.html13:17
*** gillesMo has joined #openstack-ansible13:17
*** markvoelker has quit IRC13:17
CeeMacpretty sure i was running rocky on xenial on my last dev setup13:18
*** markvoelker has joined #openstack-ansible13:18
gillesMoHello ! Is it possible to mix Ubuntu releases for compute nodes with OSA 18.x ? I have a Queens deployment, managed with OSA 17.x, I will soon upgrade to OSA 18.x / Rocky but still on Ubuntu 16.04. As I will need more compute nodes, I wonder if I could deploy them on Ubuntu 18.04 ?13:20
*** gillesMo has quit IRC13:21
*** gillesMo has joined #openstack-ansible13:21
Mr_Smurfjamesdenton: ok, thanks..13:21
CeeMachmmm, looks like unattended upgrades *is* enabled. I thought it was just for kernel security13:22
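[editor's note] On Ubuntu the switch for this lives in `/etc/apt/apt.conf.d/20auto-upgrades`. A sketch that parses the typical default contents; the `conf` string below is a sample of that file's usual default, not read from any machine:

```shell
# Sample of Ubuntu's default 20auto-upgrades. A value of "1" for
# Unattended-Upgrade means packages (systemd included) can be upgraded
# with no operator action.
conf='APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";'
# extract the Unattended-Upgrade value
echo "$conf" | sed -n 's/.*Unattended-Upgrade "\([0-9]*\)".*/\1/p'
```

On a live host, `apt-config dump | grep Unattended-Upgrade` shows the effective value.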
*** markvoelker has quit IRC13:22
*** sdake has quit IRC13:23
*** sdake has joined #openstack-ansible13:24
jamesdentonchandankumar i think because you're doing a distro install, neither the venv_pip_packages nor venv_default_pip_packages lists are being populated. From what I'm finding, the related tasks only get triggered on a source install. So there could be a bug with distro builds?13:26
jamesdentonchandankumar https://github.com/openstack/tripleo-quickstart-extras/blob/master/playbooks/multinode-standalone.yml#L5713:26
chandankumarjamesdenton: I think it is related to stackviz installation in venv13:26
chandankumarjamesdenton: I am working on a patch to move stackviz on rpm13:27
chandankumarjamesdenton: may be this one https://github.com/openstack/openstack-ansible-os_tempest/blob/master/tasks/tempest_install.yml#L3213:27
chandankumarbut I am not sure odyssey4me will like python_venv_fix13:27
jamesdentonWell this is where it's failing:13:28
jamesdentonhttps://github.com/openstack/ansible-role-python_venv_build/blob/master/tasks/python_venv_install.yml#L76-L7813:28
jamesdentonand based on the error, i bet both of those lists are empty (default to empty) because they only get populated on a source install13:28
*** shyamb has joined #openstack-ansible13:35
*** sdake has quit IRC13:36
*** sdake has joined #openstack-ansible13:38
*** shyamb has quit IRC13:44
*** fghaas has quit IRC13:53
*** ansmith has joined #openstack-ansible13:55
*** lbragstad has joined #openstack-ansible13:57
*** hamzaachi has joined #openstack-ansible13:59
*** fghaas has joined #openstack-ansible14:08
*** sdake has quit IRC14:14
*** vnogin has joined #openstack-ansible14:16
*** markvoelker has joined #openstack-ansible14:18
*** phasespace has quit IRC14:21
*** udesale has joined #openstack-ansible14:22
*** vnogin has quit IRC14:30
*** sdake has joined #openstack-ansible14:30
CeeMacoh, jamesdenton i just remembered i had another question14:33
jamesdentonsure14:33
CeeMacsee when you're using a cinder volume driver to a backend SAN using iSCSI14:34
CeeMacdoes cinder still maintain a separate front-end iscsi connection to nova for passing volume claims to instances14:34
CeeMacor does it expect the iSCSI backend to be on the br-storage network?14:34
*** vnogin has joined #openstack-ansible14:37
jamesdentonthat's a good question, and one i do not know the answer to offhand14:37
CeeMachmm14:37
CeeMacguess I'll just try it and see then :)14:37
jamesdenton:)14:41
*** dave-mccowan has joined #openstack-ansible14:48
mnaserhi everyone14:50
jamesdentonhowdy14:51
*** sdake has quit IRC14:51
kaiokmoheyllo14:52
*** markvoelker has quit IRC14:53
fghaasCeeMac: so far as I know the contract for Cinder is that it'll always directly connect instances (VMs) to the storage backend. cinder-volume isn't normally expected to operate as a proxy of sorts; it's just that with the iSCSI/LVM backend there's no other way to expose the volumes other than cinder-volume doing it by itself. The general expectation is that your compute node will be able to connect to, depending on your choice of Cinder backend, the iSCSI storage network, the fibre channel network, or the Ceph cluster public network. For IP-based protocols like iSCSI and Ceph it's not required that the backend is _bridged_ into the storage network, but you definitely need at least a routed connection between the compute nodes and the iSCSI backend. Does that help?14:53
CeeMachi fghaas, thats great thanks14:54
CeeMaci'm going to be running nova and cinder on the same host so hopefully this will match the criteria14:55
CeeMacnova and cinder-volume that is14:55
fghaaswhat nova though? compute or the api container?14:58
CeeMaccompute14:58
fghaasyeah in your case that's not really necessary. If you have an iSCSI-based backend that you're managing with say the 3par or netapp drivers, you can run cinder-volume in a container on your control nodes15:00
fghaasbecause that's only ever used for provisioning15:00
mnaserfghaas: ^ i've learned the hardway that's not ideal15:00
mnasersome things like create volume from image with certain drivers can result in it trying to mount things via iscsi inside the container15:01
mnaserand that breaks in weird ways15:01
fghaasmnaser: elaborate? You mean because you have to wire up your control nodes to be able to connect to your storage nodes, and for your compute nodes you're presumably already doing that?15:01
fghaasIf so, that's a fair point15:01
fghaasOh! And the stupid netlink issue15:02
mnaserfghaas: cinder-volume process tries to create a volume from image, so it tries to bind/attach it into the container via iscsi to dd the image into it and yeah15:02
*** sdake has joined #openstack-ansible15:02
fghaasYou're right. Forget what I said CeeMac, for any iSCSI backend never run cinder-volume in a container15:02
*** cshen has quit IRC15:02
fghaasSo, what you'd planned is perfectly reasonable.15:03
fghaas(sorry for forgetting that one, I usually run with Ceph all the way and there that's a no-issue)15:03
CeeMaci think i just about followed that15:05
mnaseryeah ceph is fine15:05
CeeMacto clarify, I've got the host in compute_hosts, metering_compute_hosts and storage_hosts15:05
mnaserbecause ceph is great15:05
openstackgerritShannon Mitchell proposed openstack/openstack-ansible-os_tempest stable/queens: Update workspace tempest.conf on changes  https://review.openstack.org/63803215:06
CeeMacand i'm using the zadara iscsi backend15:06
openstackgerritShannon Mitchell proposed openstack/openstack-ansible-os_tempest stable/pike: Update workspace tempest.conf on changes  https://review.openstack.org/63803615:06
CeeMacso it sounds like I'll be ok?15:06
*** ArchiFleKs has quit IRC15:07
*** shananigans has joined #openstack-ansible15:10
*** ArchiFleKs has joined #openstack-ansible15:11
*** vnogin has quit IRC15:12
*** dave-mccowan has quit IRC15:21
*** cshen has joined #openstack-ansible15:30
*** dxiri has joined #openstack-ansible15:32
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-os_keystone master: Correct fernet token sync race condition  https://review.openstack.org/63832715:37
NobodyCamjamesdenton: Thank you for the reply. I am attempting to set up a single AIO node, just not from the local host15:37
cjloaderodyssey4me: do we need to revert?15:40
*** dave-mccowan has joined #openstack-ansible15:41
cloudnullcjloader I'd say just move the package to the noted list15:45
*** dave-mccowan has quit IRC15:45
openstackgerritCam J. Loader (cjloader) proposed openstack/openstack-ansible-os_tempest master: Fix redhat iputtils  https://review.openstack.org/63844415:46
cjloadercloudnull: d34dh0r53 odyssey4me ^15:46
*** sdake has quit IRC15:48
*** markvoelker has joined #openstack-ansible15:50
*** sdake has joined #openstack-ansible15:50
*** tosky has quit IRC15:53
*** tosky has joined #openstack-ansible15:54
openstackgerritChandan Kumar proposed openstack/openstack-ansible-os_tempest master: use tempest_run_stackviz to generate stackviz report  https://review.openstack.org/63836016:06
*** udesale has quit IRC16:11
*** hamzy has quit IRC16:15
spotzcjloader: you're good16:16
cjloaderspotz: ty16:17
spotzNP16:18
CeeMacquick question, host_vars don't seem to be taking under storage_hosts in openstack_user_config.yml16:19
CeeMacis there a way to debug that?16:19
*** cshen has quit IRC16:23
*** DanyC has joined #openstack-ansible16:23
*** markvoelker has quit IRC16:23
*** cmart has joined #openstack-ansible16:24
*** macza has joined #openstack-ansible16:24
chandankumarodyssey4me: jrosser when you are free feel to take at these series https://review.openstack.org/#/q/topic:os_tempest_deps+(status:open+OR+status:merged)16:25
*** defionscode has quit IRC16:30
*** kopecmartin is now known as kopecmartin|off16:33
*** vnogin has joined #openstack-ansible16:35
openstackgerritShannon Mitchell proposed openstack/openstack-ansible-os_tempest master: Update workspace tempest.conf on changes  https://review.openstack.org/63801416:36
*** DanyC has quit IRC16:38
*** DanyC has joined #openstack-ansible16:39
*** hamzy has joined #openstack-ansible16:40
*** dave-mccowan has joined #openstack-ansible16:47
*** defionscode has joined #openstack-ansible16:47
*** gyee has joined #openstack-ansible16:48
*** defionscode has quit IRC16:51
*** defionscode has joined #openstack-ansible16:53
cloudnullCeeMac are you not seeing the var at all, or just not seeing the value you're setting?16:56
cloudnulllike maybe there's a conflicting var?16:56
CeeMacnone of the settings are being pushed out in the config drop task16:56
CeeMacits like its not being parsed at all16:57
CeeMaci've had a quick look in cinder.conf and there aren't any existing settings that would block it that I can see16:57
CeeMactried running with -vvv but couldn't see any reference to it pulling in config overrides other than the default ones16:58
cloudnullmind sharing the stanza in the user config?16:58
CeeMacsure16:58
CeeMachttp://paste.openstack.org/show/745634/16:59
CeeMacits the second host in particular16:59
CeeMacpulling config settings from here: https://docs.openstack.org/cinder/rocky/configuration/block-storage/drivers/zadara-volume-driver.html17:00
cloudnullCeeMac and e3-211-rccn02v is a baremetal volume server?17:01
chandankumarcloudnull: Hello17:01
CeeMacyes17:01
CeeMacand compute node17:01
cloudnullok.17:01
cloudnullis E3213V01P01 supposed to be a section in config?17:01
chandankumarcloudnull: you mean https://review.openstack.org/#/c/638393/1/defaults/main.yml > /openstack/venvs/myvenv -> /openstack/venvs/undefined or am I missing something?17:01
*** vnogin has quit IRC17:02
cloudnullchandankumar yes :)17:02
cloudnullits just a cosmetic nit17:02
CeeMacas i understand it, if i was to have multiple backends I'd need to have unique names17:02
openstackgerritChandan Kumar proposed openstack/ansible-role-python_venv_build master: Uncomment venv_install_destination_path for using as a role  https://review.openstack.org/63839317:03
CeeMacthe sample used vpsa, I've tried swapping out E3213V01P01 for vpsa as per the example17:03
CeeMacdidnt make a difference17:03
chandankumarcloudnull: by the way I love your pets pics on twitter, keep posting :-)17:03
cloudnullCeeMac: I think this would work https://pasted.tech/pastes/879afb6a4b6653e0411e72dd0a2b7e5fd311acc417:04
CeeMaclemme give that a go17:04
cloudnullchandankumar ha! thanks! Rest assured, there will be more soon :)17:05
CeeMacdoes it treat cinder-volume as container even though its metal?17:05
cloudnullno. sadly the key "container_vars" is a throwback to a time long past. container_vars will apply given variables to all things under a given host, containers or not.17:06
CeeMacah17:07
*** pcaruana has quit IRC17:07
CeeMacthats not clear in the documentation :)17:07
cloudnullhost_vars was added later, which makes a lot more sense, but adds variables to only items that are noted as "physical_hosts"17:07
cloudnullso if the config needs to hit all of the above, container_vars are the way to go17:07
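The container_vars behaviour described above can be sketched as an openstack_user_config.yml stanza. This is a minimal illustration only: the host name, IP, and LVM backend values below are placeholders (the original paste links have expired), not CeeMac's actual Zadara config.

```yaml
storage_hosts:
  storage01:                  # hypothetical bare-metal volume/compute host
    ip: 172.29.236.11
    container_vars:           # applies to ALL services under the host, container or metal
      cinder_backends:
        lvm:
          volume_driver: cinder.volume.drivers.lvm.LVMVolumeDriver
          volume_backend_name: lvm
```

By contrast, host_vars in the same position would only reach items noted as "physical_hosts", per the explanation above.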
CeeMacwhich is why the example in the docs uses the nova conf ?17:07
chandankumarCeeMac: :-)17:08
chandankumarcloudnull: :-)17:08
cloudnullthe problem is that a lot of these inventory things were written back in Juno17:08
cloudnulland the term container was being used as the recipient of inventory things.17:09
CeeMacright17:09
cloudnullnot a fancy cgroup :)17:09
cloudnullso... i guess this is a long way to say, sorry.17:10
CeeMachaha17:10
cloudnull:)17:10
CeeMacno worries17:10
CeeMacthere are always foibles in a system, its just a matter of knowing what they are!17:10
CeeMacwhich is why I keep pestering you lovely people ;)17:10
cloudnullindeed!17:10
cloudnullpester away!17:10
* cloudnull enjoys a good pestering 17:11
CeeMac:D17:11
cloudnulldid that updated stanza work ?17:12
CeeMacit did!17:12
cloudnullsweet17:12
CeeMacexcept for the DEFAULT17:12
CeeMacwhich didnt take17:12
cloudnullhum .17:12
CeeMacwonder if that is conflicting with the glance iscsi settings17:12
* cloudnull looking17:12
CeeMacwonder if i could set that under the backend config?17:13
CeeMaciscsi_protocol isn't listed in cinder.conf though17:13
CeeMachmmm17:13
cloudnullit should inject the config into the cinder.conf17:14
cloudnullis cinder_cinder_conf_overrides defined elsewhere?17:14
CeeMacnope17:15
cloudnullmaybe in a user_*.yml file ?17:15
cloudnullok17:15
CeeMacthe backend has been set17:15
CeeMacand the enabled_backends17:15
CeeMacjust not the iscsi_protocol line17:15
CeeMacjust seems to have issues with cinder_cinder_conf_overrides17:15
CeeMacwonder if there is a default override variable for the iscsi protocol17:16
* CeeMac goes to look17:16
cloudnullis cinder_cinder_conf_overrides in your inventory anywhere `openstack_inventory.json`?17:17
CeeMaclet me check17:18
cloudnullI assume you're running the playbook with a limit of just that one host?17:18
cloudnullany tags?17:18
CeeMacjust the host limit, no tags17:18
cloudnullok17:18
* cloudnull goes to try it 17:18
CeeMacos-cinder-install.yml17:19
CeeMacit is in the inventory17:20
*** markvoelker has joined #openstack-ansible17:20
CeeMachttp://paste.openstack.org/show/745640/17:20
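For reference, the override dictionary being debugged here takes section names as top-level keys, which the role's config template merges into cinder.conf. A minimal sketch, assuming the iscsi_protocol option from the Zadara driver docs linked earlier (the exact option name and value should be checked against those docs):

```yaml
cinder_cinder_conf_overrides:
  DEFAULT:
    iscsi_protocol: iscsi    # illustrative option; section/value per the driver docs
```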
openstackgerritMerged openstack/openstack-ansible-os_keystone master: Add keystone_user_pip_packages variable  https://review.openstack.org/63823317:26
CeeMacmkay17:26
CeeMacmaking progress, but not quite there yet.  Gonna shoot off and revisit with fresh eyes in the morning17:26
CeeMacthanks for the help as usual chaps17:26
cloudnulltake care!17:27
CeeMacttfn!17:31
*** dxiri has quit IRC17:35
openstackgerritJacob Wagner proposed openstack/openstack-ansible-ops master: Add ability to deploy designate with BIND9 servers  https://review.openstack.org/63561117:35
*** DanyC has quit IRC17:42
*** shardy has quit IRC17:49
openstackgerritMerged openstack/openstack-ansible-os_horizon master: Add horizon_user_pip_packages variable  https://review.openstack.org/63823917:51
*** markvoelker has quit IRC17:53
*** gillesMo has quit IRC18:00
*** sdake has quit IRC18:05
*** sdake has joined #openstack-ansible18:06
openstackgerritMerged openstack/openstack-ansible-os_heat master: Add heat_user_pip_packages variable  https://review.openstack.org/63823018:08
*** vnogin has joined #openstack-ansible18:15
*** vnogin has quit IRC18:21
*** cmart has quit IRC18:23
*** cshen has joined #openstack-ansible18:30
*** cshen has quit IRC18:34
partlycloudyhi folks, i tried to make a deployment using ovs-dvr, but only got br-int created on the target hosts.18:36
*** electrofelix has quit IRC18:37
openstackgerritMerged openstack/openstack-ansible-os_cinder stable/queens: cinder.conf: add [nova] section, override interface defaults  https://review.openstack.org/63820618:38
openstackgerritMerged openstack/openstack-ansible-os_cinder stable/rocky: cinder.conf: add [nova] section, override interface defaults  https://review.openstack.org/63820518:38
partlycloudyOSA tag 18.1.4. here are excerpts from openstack_user_config.yml (https://pasted.tech/pastes/38b55873a7a6912a269) and user_variables.yml (https://pasted.tech/pastes/72f71f9f9db2acb7df499daf14393cff1730e570). is there anything i missed?18:39
partlycloudysorry for the first broken link. here it is: openstack_user_config.yml (https://pasted.tech/pastes/38b55873a7a6912a26972ff58ace655d97da3314)18:40
jamesdentonchandankumar were you able to get past that pip issue?18:46
*** markvoelker has joined #openstack-ansible18:50
jamesdentonpartlycloudy do you still have the deployment logs handy? Are you able to see the result of the 'Setup Network Provider Bridges' task?18:55
*** vollman has quit IRC18:56
jamesdentonactually, it may show up as 'Setup External Network Provider Bridge'18:56
jamesdentonoh n/m, i see it18:56
jamesdentonhttps://github.com/openstack/openstack-ansible-os_neutron/blob/stable/rocky/tasks/providers/ovs_config.yml#L2418:57
jamesdentonfor Rocky, it would only be setup on non-DVR deploy. So you can just set it up by hand with 'ovs-vsctl add-br br-provider' and restart the agent18:57
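The manual workaround jamesdenton describes amounts to two commands on the affected host (the agent service name may differ by distro and deployment style):

```
# create the missing provider bridge, then restart the OVS agent
ovs-vsctl add-br br-provider
systemctl restart neutron-openvswitch-agent
```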
partlycloudyjamesdenton: thanks james, as always. what about br-tun? do i need to create that as well?18:59
jamesdentonit's possible it will show up after you fix this. i bet the agents are not loading fully18:59
jamesdentonprobably erroring out because the bridge does not exist18:59
partlycloudyjamesdenton: yup. i saw the exact error msg, complaining about non-existing br-provider19:00
jamesdentonyeah, you'll need to add it then. It's "fixed" in master19:00
jamesdentonthe docs you followed, if 'latest' then those correspond to master moreso than Rocky19:01
partlycloudyi tried master branch last time, but got some errors with ceph (asking about --allow-downgrade etc…), so i switched back to v18.19:02
jamesdentoni noticed you defined provider networks in openstack_user_config.yml and overrides in user_variables.yml. Ideally, it would be one or the other. There were changes in master (Stein) that should allow you to forgo the overrides in user_variables.yml and set it all up in the other file (preferred?).19:03
jamesdentonfor Rocky, though, you are probably fine with what you have, even though there's some redundancy there19:03
partlycloudyi see. the redundant part in user_variables.yml was added after the first unsuccessful run. i thought that may be the problem. i will remove it later. :-)19:05
*** cmart has joined #openstack-ansible19:17
*** mmercer has joined #openstack-ansible19:20
*** markvoelker has quit IRC19:23
openstackgerritMerged openstack/openstack-ansible-os_tempest master: Fix redhat iputtils  https://review.openstack.org/63844419:28
*** alvinstarr has joined #openstack-ansible19:35
partlycloudyjamesdenton: i'm redeploying the whole thing now (the previous build was destroyed just before you came to the rescue) Will let you know how it goes after the manual fix. cheers!19:38
*** ArchiFleKs has quit IRC19:39
*** strattao has joined #openstack-ansible19:44
*** fghaas has quit IRC19:46
*** spatel has joined #openstack-ansible19:48
spatelGood afternoon folks!19:49
cloudnullo/ spatel19:49
strattaoHello all!19:49
strattaoI was just looking at the link for the upcoming PTG in Denver, and it still looks a little… sparse. I’m slated to go to the conference and am staying for the PTGs, so I am definitely looking forward to putting some faces to some names.19:49
spatelmy current cloud going to touch 250 compute nodes.. so should i continue adding compute nodes?19:50
spatelcloudnull: or jamesdenton ^^19:50
cloudnull300-350 is the largest I would go, as a rule.19:50
jamesdenton35119:50
cloudnullthat said, 500+ does work you just need to scale the infra to meet the demand19:50
cloudnulldedicated network nodes, rabbitmq, etc19:51
cloudnullstrattao awesome! I hope that I will be able to attend and see you there. IDK if there's been any PTG planning for denver quite yet. -cc odyssey4me mnaser evrardjp19:52
strattaoGood timing on this topic for me! Are there any metrics for how the infra needs to be scaled? Where things break down at scale?19:52
strattaoWe’ve got some big systems we’re just starting to roll out and would like to get ahead of some of the bottlenecks we’ll be facing19:52
spatelHow do i measure that everything looks good at infra19:52
strattao* seconds spatel’s question19:53
spatellike tipping point or rates i should watch etc... on infra nodes?19:53
jamesdentonsimultaneous restart of all agents and services and see how long it takes to fall apart </troll>19:53
*** ArchiFleKs has joined #openstack-ansible19:54
spateljamesdenton: This is what my rabbitMQ looking https://ibb.co/bbLQdRm19:55
spatelThis is only infra-02 stats19:55
spatelits pretty much same on all three nodes19:55
*** hamzaachi has quit IRC19:56
strattaocloudnull, I haven’t been to the conference before or the PTGs… do you guys typically meet all day long for all three days? What is the schedule typically like?19:56
jamesdentonspatel i don't really have a point of reference, unfortunately. this is an area i don't focus on much19:56
spatelMySQL stats - https://ibb.co/6DGBDjr19:57
spatelcloudnull: how to scale infra ? ( could you explain what you trying to say )19:58
spateloh!! dedicated service node :)19:58
spatelstrattao: you and me on same page :)19:58
spateljamesdenton: the other option i have is to build a new openstack cloud and maintain two clouds :(20:00
*** DanyC has joined #openstack-ansible20:03
jamesdentonat some point, though, you will need to do that20:03
cloudnullstrattao: for the PTG its an all day thing, however, some folks participate in multiple projects so they come and go as needed20:04
cloudnullspatel ++ dedicated service nodes.20:05
cloudnull you can begin exploring cells however, +1 to what jamesdenton said, eventually you will need to begin thinking about RegionTwo20:05
spatelhow do i isolate services in production ?20:06
strattaoSo, is the basic approach just: whenever a service gets bogged down, spin it out onto its own dedicated hardware and scale up from there?20:06
cloudnullThat's been my approach.20:06
cloudnullI'll spend a good amount of time tuning but if the issue is contention then the next thing to do is to spin it out onto other gear20:07
strattaobut you haven’t been involved with anything that’s approaching 500+ node clouds, right?20:07
strattaoor anything bigger like 1000+ ;)20:08
cloudnull500 yes. >500 in a single region, no, not really.20:08
*** maxbab has joined #openstack-ansible20:08
strattaocool, just curious. thx20:08
cloudnullwe generally opt for multiple regions at that scale20:09
cloudnullits more to manage, but limits the blast radius20:09
cloudnullwith cells v2 we might be able to do a lot more, but that largely hasn't been explored, at least not by me20:10
cloudnullI know cern is doing a lot with cellsv2 these days20:10
cloudnullTim Bell is amazing - so we might be able to learn quite a bit from them20:11
*** DanyC has quit IRC20:15
*** maxbab has quit IRC20:17
*** maxbab has joined #openstack-ansible20:19
*** maxbab has quit IRC20:19
*** fghaas has joined #openstack-ansible20:20
*** markvoelker has joined #openstack-ansible20:21
admin0what is the variable to set if horizon image upload does not work ?20:28
admin0i know there was some workaround/setting to set this to remote/something20:28
*** cshen has joined #openstack-ansible20:30
*** cshen has quit IRC20:34
*** cmart has quit IRC20:37
*** macza has quit IRC20:45
*** macza has joined #openstack-ansible20:45
*** hamzaachi has joined #openstack-ansible20:51
*** markvoelker has quit IRC20:55
*** hamzaachi has quit IRC20:55
*** DanyC has joined #openstack-ansible21:08
openstackgerritJonathan Rosser proposed openstack/openstack-ansible-os_heat master: Do not install heat service distro packages for source installs  https://review.openstack.org/63851121:10
jrossercloudnull: ^ lets see how that goes21:11
*** macza has quit IRC21:19
*** spatel has quit IRC21:20
cloudnull++21:21
* cloudnull wonders if that has been contributing to our heat issues? 21:21
*** kmadac has quit IRC21:26
*** ansmith has quit IRC21:36
cloudnullpulled that patch and after purge / redeploy - https://pasted.tech/pastes/1b23368f41efaf5be41a9dce6ddec322593d41a7 - things are looking a lot better.21:37
cloudnullhow i purged https://pasted.tech/pastes/3e00db6b17effdcb9487158cef274fd56f8235ee21:37
*** sm806 has quit IRC21:41
*** sm806 has joined #openstack-ansible21:41
cloudnullit doesn't look like we have any other service bleeding distro packages into source, https://pasted.tech/pastes/f9b2b26d8e0879baf402e42c5f11c1589f91f71e, but it'd be good to get another set of eyes looking at the same21:42
*** markvoelker has joined #openstack-ansible21:53
*** macza has joined #openstack-ansible21:53
*** macza has quit IRC21:55
*** macza has joined #openstack-ansible21:55
*** fghaas has quit IRC21:56
*** cmart has joined #openstack-ansible21:59
*** shananigans has quit IRC22:07
*** ansmith has joined #openstack-ansible22:11
*** fghaas has joined #openstack-ansible22:17
*** hamzy has quit IRC22:21
*** markvoelker has quit IRC22:25
*** sdake has quit IRC22:27
*** cshen has joined #openstack-ansible22:30
*** cshen has quit IRC22:35
*** dave-mccowan has quit IRC22:40
*** dave-mccowan has joined #openstack-ansible22:42
*** dave-mccowan has quit IRC22:46
*** fghaas has quit IRC22:49
openstackgerritJacob Wagner proposed openstack/openstack-ansible-ops master: Add ability to deploy designate with BIND9 servers  https://review.openstack.org/63561122:56
NobodyCamgood afternoon OSA folks22:59
NobodyCamcould someone point me to any doc about setting up flat networking, is that even possible?23:02
*** sdake has joined #openstack-ansible23:11
*** tosky has quit IRC23:11
cloudnullany cores around want to give https://review.openstack.org/638511 a push through?23:12
cloudnullwe need to backport that with a quickness.23:12
cloudnullNobodyCam: yes, flat networking is totally possible23:13
cloudnullwe do that in the gate23:13
NobodyCamoh sweet23:14
NobodyCamhappen to have anything you could point me to?23:14
cloudnullheres the doc that covers the overview - https://docs.openstack.org/project-deploy-guide/openstack-ansible/newton/app-networking.html#network-appendix23:15
NobodyCamI'm a little lost going through the official docs23:15
cloudnullin practice we just create the flat network stanza in the user config and ensure there's an ethernet device to attach to23:15
* cloudnull getting a couple snippets 23:15
NobodyCamawesome TY cloudnull :)23:16
cloudnullhttps://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.aio.j2#L86-L94 - that'd go in the openstack_user_config23:16
cloudnullnote eth12 in that snippet needs to exist on your machine.23:17
cloudnullyou can change that to whatever you want however it needs to be an ethernet device23:17
cloudnullif you dont have an ethernet device you want to use with a flat network you can hang a vethpair off one of the bridges and name it eth1123:18
cloudnull's/eth11/eth12/'23:18
cloudnullhere's an example on creating a veth pair and hanging it off a bridge23:19
cloudnullhttps://github.com/openstack/openstack-ansible/blob/master/etc/network/interfaces.d/aio_interfaces.cfg#L52-L5923:19
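Modelled on the linked aio_interfaces.cfg, the veth trick looks roughly like this in Debian-style /etc/network/interfaces.d/ syntax. Bridge and device names are illustrative; the `|| true` keeps the pre-up steps idempotent across re-runs:

```
iface br-vlan inet manual
    bridge_ports none
    # create a veth pair and hang one end off the bridge; the other
    # end ("eth12") then looks like a physical ethernet device that
    # neutron can bind a flat network to
    pre-up ip link add br-vlan-veth type veth peer name eth12 || true
    pre-up ip link set br-vlan-veth up
    pre-up ip link set eth12 up
    post-up brctl addif br-vlan br-vlan-veth || true
```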
NobodyCam:) awesome. I'll give it a shot23:19
cloudnullin that example https://github.com/openstack/openstack-ansible/blob/master/etc/network/interfaces.d/aio_interfaces.cfg#L53 creates the "ethernet" device named eth1223:19
cloudnullwhich is used here https://github.com/openstack/openstack-ansible/blob/master/etc/openstack_deploy/openstack_user_config.yml.aio.j2#L89-L9023:19
cloudnullwith the one stanza in config rerun the neutron playbook and you should be off to the races with a new flat network type23:20
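Putting the pieces together, the provider_networks entry cloudnull links would look roughly like this in openstack_user_config.yml — a sketch following the linked aio template, where eth12 must exist on the host (real or veth-backed, as above):

```yaml
provider_networks:
  - network:
      container_bridge: "br-vlan"        # bridge the veth hangs off
      container_type: "veth"
      container_interface: "eth12"
      host_bind_override: "eth12"        # the "ethernet" device neutron binds
      type: "flat"
      net_name: "flat"
      group_binds:
        - neutron_linuxbridge_agent
```

With that stanza in place, rerunning the neutron playbook enables the flat network type, as described above.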
*** markvoelker has joined #openstack-ansible23:22
*** ivve has quit IRC23:25
*** sdake has quit IRC23:33
*** sdake has joined #openstack-ansible23:37
*** gyee has quit IRC23:38
*** sdake has quit IRC23:46
*** sdake has joined #openstack-ansible23:47
*** sdake has quit IRC23:51
*** gyee has joined #openstack-ansible23:52
*** sdake_ has joined #openstack-ansible23:52
*** strattao has quit IRC23:54
*** markvoelker has quit IRC23:55

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!