Thursday, 2017-11-02

00:08 <rm_work> yeah we just followed the neutron guidelines for filtering and such
00:11 <rm_work> johnsom: i'm trying to trace the issue with the tempest stuff now, finally finished with meetings and errands
00:14 <johnsom> Ok, I have logged off to make dinner, but might be able to answer questions and such
00:14 <rm_work> kk np
00:24 <rm_work> kong: you around?
00:25 <rm_work> looks like there's some stuff in server_util.py that you copy/pasted from elsewhere -- i see some of it is from tempest's waiters, but ... what about `_execute`?
00:44 <johnsom> rm_work: btw, I updated the zuul v3 etherpad with ovh and stuff ready for review
00:44 <rm_work> kk
00:44 <rm_work> err which etherpad is that
00:48 <rm_work> ahh i see the issues
00:48 <rm_work> one was caused by my latest refactor, and one was ... ok i don't know but i fixed it
00:51 <rm_work> running that in our cloud now
00:52 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: Create scenario tests for loadbalancers  https://review.openstack.org/486775
00:52 <rm_work> err, that ^^
00:53 <johnsom> rm_work: https://etherpad.openstack.org/p/octavia-zuulv3-patches
00:54 <rm_work> johnsom: rebuild your devstack for tempest with the newest LB patch, if you can
00:54 <rm_work> later
00:54 <johnsom> I will fire that version up in the morning
00:54 <rm_work> kk
01:12 <openstackgerrit> Adam Harwell proposed openstack/octavia-tempest-plugin master: WIP: Failover test  https://review.openstack.org/501559
01:18 <openstackgerrit> Merged openstack/octavia-dashboard master: Zuul: add file extension to playbook path  https://review.openstack.org/516126
01:21 <openstackgerrit> Adam Harwell proposed openstack/octavia master: WIP: Floating IP Network Driver (spans L3s)  https://review.openstack.org/435612
01:21 <openstackgerrit> ZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id  https://review.openstack.org/458308
01:23 <openstackgerrit> ZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id  https://review.openstack.org/458308
02:10 <kong> rm_work: hi, the _execute is from the very early version of that patch
04:46 <rm_work> kong: well, i fixed it -- please actually try running the latest revision
04:47 <rm_work> i refactored some stuff a little bit (moved it around to different files, and renamed a client), so just to make sure it all works for you
04:47 <rm_work> it works for me, but we don't use the FLIP stuff
07:49 <openstackgerrit> Nir Magnezi proposed openstack/octavia master: Update devstack plugin and examples  https://review.openstack.org/503638
08:17 <openstackgerrit> ZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id  https://review.openstack.org/458308
10:21 <kong> rm_work: i rerun the test, still works for me
10:34 <fzimmermann> Hi, any hints how to "revive" an amphora if octavia terminated them caused by network-connection-issues?
11:07 <knsahm> is it possible to rebuild the amphora instances?
11:08 <knsahm> octavia has killed all amphora instances and we want to rebuild the instances...
11:08 <knsahm> all loadbalancer are in provisioning_status "ACTIVE"
11:27 <knsahm> Does somebody has any idea?
13:17 <fzimmermann> Any hints how to trigger a redeployment of a load balancer if the amphora got terminated and the database already cleaned up? So how to tell octavia: I know there are no running Amphoras for your Config, please create new ones?
14:00 <johnsom> fzimmermann: Still arround?
14:00 <fzimmermann> yes
14:01 <fzimmermann> johnsom: hi
14:01 <fzimmermann> some background: we had network issues and our housekeeping-timeout cleaned all amphora-table entries.
14:01 <johnsom> Ok, they are all in provisioning status active?
14:01 <fzimmermann> the amphora?
14:02 <johnsom> But housekeeping would only clean them up if someone deleted them, not if there was a networking error
14:02 <johnsom> The load balancers
14:04 <johnsom> Are the load balancers in status Active?
14:04 <knsahm> the load balancers are in provisioning status active
14:04 <knsahm> yes
14:04 <knsahm> but no amphoras are working
14:04 <johnsom> Ok, is the health manager process running?
14:05 <johnsom> What version of octavia are you running?
14:05 <fzimmermann> Newton
14:06 <johnsom> What is the status of the health manager processes?  O-hm
14:06 <fzimmermann> 0.9.2dev3
14:06 <johnsom> It should be attempting to failover those amphora and rebuild them
14:07 <fzimmermann> the health-manager is fine. If I create an amphora-table-entry (from backup) and create a suitable amphora_health-entry, the health-manager detects the problem and tries to create a new amphora.
14:07 <fzimmermann> yes, but its missing some ports
14:08 <knsahm> correction: neutron cli shows lbaas pending state active but the octavia table entries are "ERROR"
14:09 <johnsom> Ok, yeah, neutron has a habit of getting out of sync.  We should focus on the octavia side for the recovery
14:09 <knsahm> yes
14:10 <johnsom> we can sync neutron after
14:11 <fzimmermann> sure, what could we do to trigger a new amphora-deployment?
14:12 <johnsom> Pick one load balancer that is in ERROR state in octavia DB, check the listener and pool to see if they are still active.
14:13 <fzimmermann> ok, give me a second.
14:15 <fzimmermann> ok, listener is ERR and pools are ACTIVE
14:16 <dayou> https://review.openstack.org/#/c/505851/
14:17 <johnsom> So, when you created the amphora health entries, what port was missing?  It should rebuild all of the ports except for the VIP which has the VIP IP address assigned
14:18 <fzimmermann> johnsom: octavia-lb-vrrp-b716ac46-6c26-4b39-bc5c-60ea6e51282c
14:18 <dayou> Any idea about this patch? Since we are using pike now, and that's the only issue blocks us from creating a tls listener.
14:18 <dayou> octavia_v2 only seems works fines so far
14:19 <johnsom> dayou It just needs reviews and that comment addressed
14:19 <dayou> Alright, thanks
14:20 <fzimmermann> vrrp_port_id
14:20 <johnsom> Ok, so that is the VIP port, that is not good.
14:22 <johnsom> Oh, wait, no, that is the base port for vrrp.  That should be automatically re-built.  It may have another IP address, but the VIP IP should still be ok
14:23 <johnsom> The load balancer you created a fake amphora and amphora_health record for, it is not up and functional?
14:24 <fzimmermann> right. I did the SQL-Inserts (dump, all values as they where in backup) and the amphora got removed and recreated. Give me a minute I will redo it and provide the correct errors.
14:26 <johnsom> Ok, yeah, that is the procedure I would follow as well.
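[Editor's note] The manual resurrection fzimmermann describes — re-inserting the amphora rows so the health manager sees a stale heartbeat and triggers a failover — looks roughly like the sketch below. This is illustrative only: the column list is abridged, every value is a placeholder that must come from your own backup dump, and the exact schema varies by release.

```sql
-- Restore the amphora row from a backup dump (abridged; fill in real values).
INSERT INTO amphora (id, load_balancer_id, compute_id, status, role,
                     vrrp_ip, ha_ip, vrrp_port_id, ha_port_id)
VALUES ('<amp-id>', '<lb-id>', '<nova-server-id>', 'ALLOCATED', 'MASTER',
        '<vrrp-ip>', '<vip-ip>', '<vrrp-port-id>', '<ha-port-id>');

-- A deliberately stale heartbeat timestamp makes the health manager treat
-- the amphora as failed and kick off a rebuild:
INSERT INTO amphora_health (amphora_id, last_update, busy)
VALUES ('<amp-id>', '2000-01-01 00:00:00', 0);
```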
14:27 <nmagnezi> johnsom, https://review.openstack.org/#/c/503638/ <-- that patch cursed. I was stacking a node (a CentOS node) and now it fails because of an issue with the devstack plugin (specifically here https://github.com/openstack/octavia/blob/master/devstack/plugin.sh#L58-L63 ). will try to finalize it as soon as possible.
14:27 <johnsom> I can't say that I have tested a situation where I needed failover but the amphora database records were deleted.  We run with a 7 day purge cycle in housekeeping, so failover should be done by then
14:28 <johnsom> nmagnezi I was wondering what was up with that patch.  Want me to see if I can finish it up today?
14:28 <fzimmermann> johnsom: https://pastebin.com/qJnHLBHL
14:30 <nmagnezi> johnsom, i will try to figure this out very soon, if not I'll let you know. it *should* be working fine, but I wanted to see it with my own eyes before I declare it's done (since it has a chance of breaking stuff)
14:30 <johnsom> Crumb, I think I have a bug I am working on for this issue.  Can you paste the amphora database record for that amp?
14:30 <johnsom> nmagnezi Ok
14:32 <fzimmermann> johnsom: https://pastebin.com/PTkbwGA2
14:32 <fzimmermann> maybe we should try the old "backup" amphora?
14:35 <fzimmermann> johnsom: ok that looks promising.. the backup amphora got deployed successfully
14:37 <johnsom> Is there any way in the error log post you can get the log lines before this taskflow error tree?  I'm looking for the last few successful lines and then the error line.
14:38 <fzimmermann> johnsom: sure - one moment.
14:42 <fzimmermann> johnsom: https://pastebin.com/7WN7pGjH
14:47 <johnsom> Yeah, ok.  So when you create those amphora records, the two ports are going to need to exist in neutron, so you will need to rebuild them or find them in neutron if they still exist.  Would you like the neutron commands for that?
14:48 <fzimmermann> oh yeah - would be great!
15:06 <johnsom> This command will create the two ports:
15:06 <johnsom> neutron port-create --tenant-id <LB project/tenant ID> --name octavia-lb-vrrp-<amp ID> --security-group lb-<lb ID> --allowed-address-pair ip_address=<VIP IP address> <network ID for VIP>
15:07 <fzimmermann> johnsom: thanks a lot! I will try it
15:07 <fzimmermann> what should be do to get neutron back in sync?
15:08 <johnsom> The ID of this port is the ID that will go into the "vrrp_port_id" field.  If you do a 'neutron port-list | grep <VIP IP>'  you can see the ID of the "ha_port_id"
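[Editor's note] Putting johnsom's two commands together, the port-recovery step reads as follows; all IDs are placeholders to be replaced with real values from your deployment:

```shell
# Recreate the missing VRRP base port for the amphora (placeholder IDs).
neutron port-create \
  --tenant-id <lb-project-id> \
  --name octavia-lb-vrrp-<amp-id> \
  --security-group lb-<lb-id> \
  --allowed-address-pair ip_address=<vip-ip> \
  <vip-network-id>
# The new port's ID goes into the amphora table's vrrp_port_id column.

# The existing VIP port (whose ID goes into ha_port_id) can be found by IP:
neutron port-list | grep <vip-ip>
```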
15:10 <johnsom> You will want to have the event streamer setting enabled with "queue_event_streamer" https://docs.openstack.org/octavia/latest/configuration/configref.html#health_manager.event_streamer_driver
15:10 <johnsom> This setting assumes that your neutron-lbaas is using the same rabbit queue as octavia
15:11 <fzimmermann> we already have this setting enabled: /etc/octavia/octavia.conf:event_streamer_driver=queue_event_streamer
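[Editor's note] Per the configuration reference johnsom links above, that option lives in the [health_manager] section; a minimal octavia.conf fragment would look like:

```ini
[health_manager]
# Stream amphora health/status events onto the RabbitMQ queue so that
# neutron-lbaas can consume them and keep its status tables in sync.
event_streamer_driver = queue_event_streamer
```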
15:11 <fzimmermann> looks like we are not using the same queue, isnt' it
15:12 <johnsom> If it is setup you will see debug messages in neutron service log for the messages coming off the queue.
15:13 <johnsom> There is a patch up for queens that allows separate queues, but that won't help here.
15:14 <johnsom> You can also do a "neutron lbaas-loadbalancer-update" call to set something, like the description or do an enable/disable cycle on an object.  That should force neutron to get back in sync
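[Editor's note] A minimal sketch of that resync nudge — any harmless update works, e.g. touching the description; <lb-id> is a placeholder:

```shell
# Issue a trivial update so neutron-lbaas re-syncs its view of the LB;
# the resulting status events flow back through the shared queue.
neutron lbaas-loadbalancer-update --description "resync" <lb-id>
```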
15:31 <fzimmermann> johnsom: yes got notification in neutron after lb-update-call. Thanks a lot for your support!
15:34 <fzimmermann> johnsom: maybe one last question: currently we only have one amphora back. How would you tell octavia to create another (BACKUP) one?
15:35 <fzimmermann> same as above? Create ports and suitable amphora-lines?
15:36 <johnsom> yes, same as above
16:12 <johnsom> Getting better:
16:12 <johnsom> https://www.irccloud.com/pastebin/kVgeno8y/
16:13 <johnsom> Digging in to find out what went wrong
17:24 <Alex_Staf_> johnsom, this is the plugin test ?
17:25 <johnsom> yes
17:25 <Alex_Staf_> which one fails ?
17:25 <johnsom> octavia_tempest_plugin.tests.v2.scenario.test_basic_ops.BasicOpsTest.test_basic_ops
17:25 <johnsom> I have posted some comments on the patch
17:26 <Alex_Staf_> johnsom, I ran it and I posted comments regarding why it failed for me
17:26 <johnsom> Yeah, I am seeing slightly different results
17:27 <Alex_Staf_> johnsom, ok. I am running it on a cloud and not devstack FYI
17:27 <johnsom> Yeah, I am on devstack.  Probably the difference
17:27 <Alex_Staf_> johnsom, it is a tough one to debug with ipdb
19:49 <johnsom> rm_work Not sure what is going on here, looks like it is progressing before the LB goes active?
19:49 <johnsom> https://www.irccloud.com/pastebin/ppjqiFX1/
20:15 <rm_work> johnsom: what is this i'm looking at?
20:15 <rm_work> i mean, is this a single-create?
20:15 <johnsom> Yeah, the tempest test from the o-cw log
20:16 <rm_work> hmmm
20:16 <rm_work> ahhhh i see what you mean
20:16 <rm_work> on the create listener
20:16 <rm_work> err, create pool
20:16 <rm_work> actually all of it
20:17 <rm_work> so the LB goes active from the create... and then the pool create starts... and then ALSO somehow the listener and member creates come in? before it goes active?
20:17 <rm_work> i don't even see how this is possible on the worker side
20:17 <rm_work> the tempest stuff doesn't use single-create
20:18 <johnsom> Yeah, the log is confusing, trying to debug a bit now.  I think I just found my problem
20:19 <rm_work> O_o
20:20 <johnsom> I changed the path but didn't notice you have a different path in the screen command
20:20 <johnsom> https://www.irccloud.com/pastebin/Mx1a0iXP/
20:22 <rm_work> ?
20:22 <rm_work> ah ok so /dev/shm is actually a mountpoint
20:22 <rm_work> so yeah we could do that i guess if that's consistent across distros
20:23 <rm_work> or i guess it could be ... configurable :P
20:23 <rm_work> defaulting to that or something
20:23 <rm_work> johnsom: i remember kong had the same issue with space as you did but somehow he fixed it??
20:44 <johnsom> rm_work Yeah, after I fixed that the test passed
20:44 <rm_work> k
20:49 <openstackgerrit> German Eichberger proposed openstack/octavia master: ACTIVE-ACTIVE with LVS - DIB  https://review.openstack.org/499807
20:51 <johnsom> rm_work With those last comments I will be ok to +2
21:02 <Alex_Staf_> rm_work, I tested the ssh in the test_basic_ops, I compared it to other neutron test. Regarding your comment regarding the user it is possible. I compared it to other test and it worked for me but the keypair there without use id and stuff
21:04 <Alex_Staf_> rm_work, https://github.com/openstack/neutron/blob/master/neutron/tests/tempest/scenario/test_trunk.py#L74 compared to this. The VM here booted with keypair ( nova show shows key not "-")
23:52 <openstackgerrit> Michael Johnson proposed openstack/octavia master: Make the allowed_address_pairs driver better  https://review.openstack.org/517455

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!