20:00:13 <xgerman> #startmeeting octavia
20:00:14 <openstack> Meeting started Wed Sep  9 20:00:13 2015 UTC and is due to finish in 60 minutes.  The chair is xgerman. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:18 <openstack> The meeting name has been set to 'octavia'
20:00:25 <madhu_ak> hi
20:00:25 <johnsom> o/
20:00:26 <abdelwas> o/
20:00:27 <dougwig> o/
20:00:28 <xgerman> #chair blogan
20:00:29 <openstack> Current chairs: blogan xgerman
20:00:48 <ajmiller> o/
20:00:59 <TrevorV> o/
20:01:03 <xgerman> #topic Announcements
20:01:17 <blogan> hello!
20:01:24 <xgerman> Liberty Release schedule - #link https://wiki.openstack.org/wiki/Liberty_Release_Schedule
20:01:27 <bana_k> hi
20:01:29 <ptoohill> o/
20:01:49 <xgerman> RC1 is on 9/21 so we need to be on our toes + rumor has it the CLI is already cut
20:02:14 <sballe> o/
20:02:16 <sbalukoff> Howdy, howdy!
20:02:16 <rm_work> o/
20:02:19 <xgerman> Mitaka design summit topics
20:02:26 <xgerman> #link https://etherpad.openstack.org/p/neutron-mitaka-designsummit
20:02:44 <xgerman> we need to start adding our topics (e.g. Octavia Active-Active)
20:03:52 <xgerman> Lastly: Flavor stuff
20:03:54 <minwang2> o/
20:04:14 <xgerman> jwarendts has that working on his machine
20:04:31 <blogan> cool
20:04:44 <blogan> and itll start with provider and flavor together and eventually move to just flavor?
20:04:56 <blogan> well neutron-lbaas
20:05:00 <xgerman> yes, it will show both
20:05:07 <xgerman> and only neutron-lbaas
20:05:15 <xgerman> for now
20:05:22 <xgerman> and hopefully RC1
20:05:53 <xgerman> but, yeah, pretty fun: neutron lbaas-loadbalancer-create --flavor GOLD
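(For reference, a hedged sketch of that command as it would actually be typed: the v2 CLI verb is lbaas-loadbalancer-create, the --flavor flag is the in-progress feature being discussed, and the flavor name and VIP subnet here are placeholder assumptions.)

    # in-progress --flavor flag; "GOLD" and the subnet are placeholder values
    neutron lbaas-loadbalancer-create --name lb1 --flavor GOLD private-subnet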
20:06:03 <sbalukoff> Question on that for operators: Do you see yourself actually using a flavor that works on more than one provider?
20:06:06 <bharathm> o/
20:06:33 <xgerman> sbalukoff I don’t see myself doing that but it is a valid future use case
20:06:37 <sbalukoff> I would be surprised if we do that. Rather, I expect we'll probably have multiple flavors per provider.
20:06:53 <blogan> yeah i think so too
20:06:54 <xgerman> yeah, that’s what’s implemented
20:06:57 <sbalukoff> xgerman: Possibly-- if we know someone is going to use it, eh. I'm a fan of not building things nobody uses. :)
20:07:17 <xgerman> sbalukoff if I bought, let's say, A10 LBs and then get a good deal on NetScaler
20:07:18 <sbalukoff> But that's a bit of a digression.
20:07:23 <xgerman> and want both to be gold flavor
20:07:55 <rm_work> GOLD_PLUS, GOLD_PREFERRED
20:07:56 <dougwig> flavors supporting multiple providers was a diplomatic thing to convince certain folks that it wasn't vendor lock-in.
20:07:57 <xgerman> but that really complicates things since now you need weights and a scheduler
20:07:58 <rm_work> :P
20:08:02 <johnsom> Easy, gold-a and gold-b
20:08:18 <rm_work> johnsom: way more boring than my suggestions :P
20:08:49 <johnsom> I work for the "cold dead fish" company, what can I say....
20:08:52 <rm_work> or just confuse the heck out of people with "GOLD_PLATINUM" (pretty sure i have seen that on a credit card before)
20:08:55 <sbalukoff> Haha
20:09:03 <minwang2> hahaha
20:09:22 <sbalukoff> Anyway, we really don't need to discuss this now, IMO. We've got...  healthier fish to fry.
20:09:25 <bana_k> sounds like airline tickets
20:09:31 <rm_work> indeed
20:09:39 <blogan> i don't understand how that would cause vendor lock
20:09:40 * rm_work is on an airplane, so it may have come to mind
20:09:47 <sbalukoff> Haha
20:09:49 <rm_work> blogan: me either, but whatever
20:10:01 <sballe> :)
20:10:51 <xgerman> #topic Liberty deadline stuff
20:11:14 <xgerman> Failover, Heartbeat, oh my
20:11:19 <rm_work> we need to get Failover merged
20:11:20 <rm_work> jeebus
20:11:27 <xgerman> yep
20:11:29 <johnsom> +100 on failover merged!
20:11:29 <rm_work> that and fix the subnet/port delete issue
20:11:30 <sbalukoff> L7 is out. I wasn't able to get my Octavia work to the point where I'm ready to commit anything yet, and nobody has reviewed any of the L7 Neutron LBaaS stuff.
20:11:35 <blogan> i believe it is working for both ssh and rest now
20:11:40 <rm_work> blogan: the deletes?
20:11:45 <blogan> rm_work: no, failover
20:11:45 <rm_work> it wasn't as of yesterday afternoon
20:11:47 <rm_work> oh
20:11:47 <rm_work> k
20:11:54 <blogan> rm_work: i dont fully comprehend the deletes though, i thought this was fixed
20:12:00 <blogan> can you point me to some failures to look at?
20:12:00 <xgerman> +1
20:12:13 <xgerman> let’s be organized
20:12:18 <xgerman> if failover works we merge it
20:12:23 <xgerman> delete should be his own patch
20:12:26 <blogan> well ive tested it out, has anyone else?
20:12:36 <blogan> would like to get double verification, or triple
20:12:43 <minwang2> https://review.openstack.org/#/c/214058/
20:12:44 <johnsom> I haven't had a chance to try it with the latest patches
20:12:46 <sbalukoff> blogan: I'll try it out in the next day or two.
20:12:52 <johnsom> Can this afternoon though
20:12:53 <bana_k> delete lb ?
20:12:59 <xgerman> johnsom awesome
20:13:05 <rm_work> blogan: yeah i will get you logs
20:13:07 <minwang2> delete ports
20:13:18 <dougwig> where are we with the tempest job with an octavia backend?
20:13:28 <xgerman> bad news
20:13:33 <blogan> so it'll work with an existing amphora that stops sending heartbeats; in the case of there not being an existing amphora, it will not work, as that will require some major flow changes
20:13:33 <xgerman> johnsom explain
20:13:38 <rm_work> blogan: basically the tempest api tests in neutron-lbaas create a LB, and then when they try to clean up, octavia tries to delete the subnet and fails because there is still a port (because nlbaas owns the port)
20:13:52 <rm_work> dougwig: that's what i was working on with min and what we need the subnet delete fix for
20:13:57 <johnsom> dougwig https://jenkins07.openstack.org/job/gate-neutron-lbaasv2-octavia-dsvm-api/11/console
20:13:57 <blogan> rm_work: octavia tries to delete the subnet?
20:14:02 <blogan> rm_work: that doesnt make sense
20:14:03 <rm_work> blogan: i think so
20:14:07 <rm_work> it's on o-cw
20:14:07 <johnsom> It's a topic item later on the list.
20:14:14 <rm_work> i will double-check
20:14:17 <rm_work> it seems really odd
20:14:22 <johnsom> Semi-working.  Extremely slow test though
20:14:25 <rm_work> either way, the cleanup totally barfs, and leaves a subnet around
20:14:27 <johnsom> 2+ hours
20:14:31 <rm_work> yeah
20:14:39 <rm_work> i think on RAX images it should be more like 80min
20:14:49 <sbalukoff> Eew.
20:14:52 <dougwig> oh, is that an hp vs rax throwdown that i see?
20:14:53 <johnsom> rm_work explain?
20:14:57 <rm_work> and the infra peeps said they'd take a look at it too once we get the job actually passing, in any amount of time
20:14:58 <xgerman> fanatical support?
20:15:03 <rm_work> johnsom: it's just... faster
20:15:18 <sbalukoff> Haha
20:15:36 <rm_work> johnsom: min and I tested RAX vs HP images yesterday and found average boot time for VMs to be ~4m on RAX-8G and ~8m on HP-30G
20:15:36 <xgerman> rm_work do we have a bug for the subnet?
20:15:37 <blogan> 2 hours for one test, that seems acceptable
20:15:41 <johnsom> Hahaha.  The big issue is the gate systems not emulating vt-x, so qemu drops to TCG mode, which is painfully slow software emulation for our amps
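(A quick check for what johnsom describes: whether a host exposes hardware virtualization, so qemu can use KVM instead of falling back to TCG software emulation.)

    # non-zero output means the CPU advertises VT-x (vmx) or AMD-V (svm)
    egrep -c '(vmx|svm)' /proc/cpuinfo
    # on Ubuntu, kvm-ok from the cpu-checker package gives a direct answer
    kvm-ok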
20:15:42 * blogan ends sarcasm
20:15:43 <rm_work> (30G is the flavor the gate uses)
20:15:46 <rm_work> xgerman: not sure
20:15:59 <xgerman> ok, let’s create one so we can track
20:16:00 <rm_work> blogan: i mean it's the whole test run, not "one test"
20:16:08 <blogan> rm_work: oh, well still
20:16:13 <xgerman> 2 hours for all tests? Not bad...
20:16:15 <rm_work> some already take 50m
20:16:19 <rm_work> for the dsvm that is normal
20:16:29 <blogan> yeah but 2 hours is over double
20:16:40 <rm_work> yeah we figured out it is only creating one VM per file, not one VM per test, so it's really only like 15 VM creates, not 150
20:16:42 <xgerman> but the general Q is can we get infra to provision VT-x hosts for us?
20:17:00 <rm_work> xgerman: i don't know, like i said, once we can get the experimental gate PASSING, they will look at it with us
20:17:05 <xgerman> dougwig as our liaison...
20:17:10 <rm_work> but until then it is hard for me to give them useful data
20:17:15 <blogan> does trove have a similar issue?
20:17:16 <rm_work> I have been working a lot with them recently as well
20:17:21 <rm_work> not sure
20:17:27 <xgerman> blogan: they only spin up three VMs total
20:17:33 <minwang2> blogan: you may want to work a bit on your patch, i posted a comment there, i think it is related to the port delete https://review.openstack.org/#/c/214058/
20:18:00 <rm_work> blogan: i promise to get you logs by tomorrow
20:18:03 <blogan> minwang2: ah you're using that patch to run the tests?
20:18:30 <rm_work> or you can try spinning up the test yourself -- my script works now 100%: https://gist.github.com/rm-you/f7585ca4932b3ee1eed9
20:18:32 <rm_work> blogan: ^^
20:18:35 <blogan> minwang2: does it work without that patch?
20:18:37 <rm_work> from a fresh Ubuntu 14.04
20:18:46 <rm_work> blogan: i was testing without it
20:19:00 <blogan> 100% eh? bold claim sir
20:19:04 <rm_work> heh
20:19:13 <rm_work> 100% to the point where you can run tempest tests with "tox -e apiv2"
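(For the record, the invocation being referred to, once a devstack built by rm_work's gist is up; the /opt/stack path is the devstack default and an assumption here.)

    # run the neutron-lbaas v2 API tempest tests
    cd /opt/stack/neutron-lbaas
    tox -e apiv2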
20:19:15 <blogan> rm_work: you were testing without and it still got the same issue?
20:19:20 <rm_work> yes
20:19:23 <minwang2> the first time i got the error on teardown, ptoohill pointed out that this patch might help; i cherry-picked it but it didn't seem to help much
20:19:26 <blogan> okay so not that patch then
20:19:51 <blogan> okay ill run some local tests as well and see what happens
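(A minimal sketch of the cleanup ordering issue under discussion: Neutron will not delete a subnet while any port, such as the neutron-lbaas-owned VIP port, still holds an IP on it, so the ports have to go first. The subnet ID is an assumption to be filled in by hand.)

    SUBNET_ID=<subnet-uuid>   # placeholder
    # delete every port with a fixed IP on the subnet, then the subnet itself
    for PORT_ID in $(neutron port-list | grep "$SUBNET_ID" | awk '{print $2}'); do
        neutron port-delete "$PORT_ID"
    done
    neutron subnet-delete "$SUBNET_ID"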
20:19:54 <xgerman> can we move the troubleshooting to after the meeting?
20:20:28 <sbalukoff> xgerman: +1
20:20:39 <rm_work> yes
20:20:43 <xgerman> thanks
20:20:50 <johnsom> Just a side, huge thanks to rm_work for helping with this
20:21:03 <xgerman> yep, he should come to Seattle more often!!
20:21:04 <minwang2> +1
20:21:23 <rm_work> my job right now seems to be basically making sure everything else works, since I don't have the bandwidth to take a specific task <_<
20:21:34 <rm_work> so just hopping around helping with random stuff seems to be useful :P
20:21:41 <sbalukoff> rm_work: That's an extremely important job. :)
20:21:48 <xgerman> rm_work did you try red bull? That works for dougwig...
20:21:51 <rm_work> heh
20:22:06 <rm_work> i had some of the seattle cold brew while i was in the office, love that stuff
20:22:14 <rm_work> anyway, what is next if we're done troubleshooting?
20:22:17 <xgerman> yep, it’s awesome!!
20:22:24 <xgerman> we merge everything
20:22:27 <rm_work> heh
20:22:34 <johnsom> Yes, merge, merge, merge
20:22:40 <sbalukoff> And then go nuts fixing bugs
20:22:41 <rm_work> well there's an additional patch in front of failover, right?
20:22:46 <xgerman> except Bertrans’s thing
20:22:47 <rm_work> but do we think we can merge that and failover ... today?
20:22:51 <blogan> ok so assuming failover and the gate work, whats the next thing we need to get in for ref?
20:22:51 <johnsom> I will give +2's, etc this afternoon after I test the latest code
20:23:10 <rm_work> UDP listener at least
20:23:20 <blogan> okay
20:23:20 <rm_work> ideally the followup for eventstreamer, but that won't be a dealbreaker
20:23:23 <xgerman> well, the DB stuff works
20:23:28 <xgerman> I tried that a while back
20:23:36 <rm_work> not because of the eventstreamer itself, but because of the refactor it includes for the DB updates
20:24:06 <xgerman> mmh, so need to try again...
20:24:09 <rm_work> leaving those non-mixins as mixins is ... irksome
20:24:22 <xgerman> k
20:24:24 <rm_work> though it doesn't affect functionality really, it's just gross
20:24:25 <xgerman> we can rename ;-)
20:24:30 <rm_work> yeah that is the refactor
20:24:43 <johnsom> Can we do that after we do the chain merge?
20:24:49 <xgerman> +1
20:25:04 <rm_work> yeah i mean if we get through to UDP merged it's good
20:25:15 <sbalukoff> Sweet.
20:25:50 <xgerman> ok, so moving on johnsom did some benchmarks
20:26:00 <xgerman> so dougwig can be proud :-)
20:26:05 <johnsom> Yeah, I would like to see everything in that chain up to UDP merged today/tomorrow if we can get the reviews/tests done
20:26:11 <blogan> isnt dougwig on vacation or something
20:26:13 <johnsom> Yeah, ok
20:26:19 <rm_work> just commented on https://review.openstack.org/#/c/220747/5 but willing to merge it despite comments
20:26:20 <xgerman> I saw his ghost earlier
20:26:23 <dougwig> he came back early, after mechanical issues.
20:26:23 <rm_work> all minor code quality stuff
20:26:24 <johnsom> caveats - DevStack inside ESXi 6 on 14.04 guest - 4vCPU, 16GB RAM, 50GB disk.  Two amphora ubuntu nova instances with GWAN web server, serving 100 byte text file.
20:26:33 <johnsom> I used two benchmarks, apache bench(ab) and httperf (full disclaimer HP labs code).
20:26:36 <xgerman> mechanical issues? Pictures?
20:27:02 <johnsom> On httperf Octavia did 88.5% of the requests per second compared to the namespace driver.
20:27:22 <rm_work> johnsom: i had some erlang based swarm testing code somewhere that ptoohill and I worked on a while back, we might throw that at some LBs too and see if we get similar results
20:27:26 <johnsom> On AB Octavia did 103% of the requests per second
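(For anyone reproducing the numbers, hedged invocations of the two tools johnsom named; the VIP address, file name, and load levels are assumptions, not his actual parameters.)

    # apache bench: 100k requests, 100 concurrent, against the VIP
    ab -n 100000 -c 100 http://10.0.0.100/100b.txt
    # httperf: 10k connections at a fixed rate, one request each
    httperf --server 10.0.0.100 --port 80 --uri /100b.txt --num-conns 10000 --rate 500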
20:27:38 <ptoohill> Tsung
20:27:40 <rm_work> yeah
20:27:43 <rm_work> Tsung is GREAT
20:27:51 <ptoohill> It is.
20:27:56 <ptoohill> Locust is getting better
20:27:59 <blogan> so johnsom, they look similar
20:28:19 <sbalukoff> Close enough not to make people hate Octavia on the basis of performance, at least.
20:28:22 <rm_work> but that's workable, even at ~90% that's fine
20:28:25 <rm_work> yep
20:28:33 <xgerman> sbalukoff +1
20:28:36 <johnsom> The test system was really CPU bound to get peak performance numbers.  It really should be run on 4+ bare metal systems to get true requests per second #s
20:28:38 <blogan> sbalukoff: yeah and octavia has more going for it as well
20:28:51 <ptoohill> like the name Octavia
20:28:52 <sbalukoff> blogan: Indeed it does! Octavia has a bright future!
20:28:58 <rm_work> johnsom: ok, remind me later and we can look at that
20:29:16 <rm_work> the more benchmarks the better
20:29:34 <xgerman> absolutely — I like to be on stage and say now with 1000% more performance
20:29:41 <johnsom> Yep.  This highly qualifies as "quick and dirty"
20:29:46 <dougwig> those numbers will do fine, especially since we now have scalability and (soon) failover.
20:29:55 <johnsom> Just to make dougwig sleep slightly better at night
20:30:08 <sbalukoff> Hehe!
20:30:17 <xgerman> moving on
20:30:23 <xgerman> Horizon
20:30:23 <sbalukoff> We are all extremely concerned about dougwig's sleep.
20:30:51 <xgerman> so Aish on our team tried installing that without success so far
20:31:11 <dougwig> anyone from ebay/paypal here today?
20:31:25 <xgerman> we have been in touch with them
20:31:48 <xgerman> so chances are that it will work
20:32:06 <xgerman> but I am not sure how we would package/release it
20:33:07 <dougwig> let's get it working and merged, then we can sort out the release timeline.
20:33:15 <xgerman> ok
20:33:20 <xgerman> sounds good
20:33:37 <xgerman> next up: Active-Passive
20:34:06 <xgerman> johnsom reviewed and Sherif is fixing the bugs currently
20:34:35 <johnsom> Yeah, I think he is pretty close to addressing all of the comments I left.
20:34:37 <xgerman> I talked with mestery and we have an FFE for that
20:35:00 <blogan> hopefully the failover and udp reviews go faster and get merged so I can focus on that one and test it out well
20:35:13 <xgerman> yeah, that would be good
20:36:40 <xgerman> #topic Open Discussion
20:36:53 <blogan> have we mentioned merges?
20:37:01 <xgerman> yes, we did :-)
20:37:12 <blogan> oh
20:37:14 <xgerman> and we don’t want to merge things which are not critical right now
20:37:23 <johnsom> I have patches up on both Octavia and neutron-lbaas for adding a member state of "NO_MONITOR".
20:37:26 <blogan> design sessions, should we have one for neutron-lbaas and one for octavia, or have it combined now?
20:38:23 <sbalukoff> As I alluded to last week, I have an IBM group based in Israel that is very interested in contributing to Octavia.
20:38:23 <xgerman> cool
20:38:23 <sbalukoff> I am working on getting them up to speed as to how to engage with the community effectively.
20:38:23 <blogan> sbalukoff: they can sync up with sam and evgeny as well!
20:38:23 <xgerman> blogan — not sure how tight the space is but we can combine
20:38:24 <minwang2> i am working on octavia gate setting and cert rotation
20:38:24 <sbalukoff> I don't know if they'll make it to this meeting that often, as they are in a crappy timezone for it. But I'mma try to get them into our IRC channel.
20:38:24 <xgerman> yeah, cert/anchor stuff will be M
20:38:30 <blogan> sbalukoff: yeah same reason sam and evgenyf dont make it most of the time
20:38:43 <xgerman> so we need a new time?
20:38:47 <xgerman> alternate?
20:38:51 <sbalukoff> For what it's worth, they're most interested in working on the active-active code, a heat-compute driver, and starting work on making horizontal service delivery an actual reality.
20:38:59 <blogan> xgerman: lets discuss that at the summit
20:39:01 <sbalukoff> So, they're probably going to be pushing hard for / diving into that.
20:39:03 <blogan> xgerman: or make an ML
20:39:23 <xgerman> yeah, this clearly is an ML topic
20:40:04 <sbalukoff> So, expect to hear a lot more about that in the near-ish future.
20:40:12 <xgerman> cool
20:40:45 <xgerman> we are excited
20:40:52 <sbalukoff> On another note: I feel like the Neutron LBaaS pool sharing stuff should be ready to merge, though since nothing in L7 is going to get in in Liberty, that will probably remain on hold until the Liberty stuff is tagged at least.
20:41:01 <johnsom> What about doc updates for Liberty / Octavia?
20:41:11 <sbalukoff> johnsom: Yep, we need to do that.
20:41:14 <minwang2> +1
20:41:17 <xgerman> +1
20:41:32 <xgerman> first we make something that works… then :-)
20:41:40 <sbalukoff> xgerman: +1
20:41:53 <sbalukoff> I'm happy to work on that once the merge-fest is done and we're in bugfix mode for a while.
20:42:02 <xgerman> thanks
20:42:17 <xgerman> I am happy to help as well + remember we have a hands on lab in about a month
20:42:26 <sbalukoff> Right.
20:42:28 <rm_work> yes
20:42:33 <rm_work> which i *will* be there for
20:42:39 <rm_work> flights and hotel booked yesterday
20:42:44 <sbalukoff> Sweet!
20:42:52 <sbalukoff> I'm also 100% confirmed to be there.
20:42:57 <rm_work> cool
20:42:59 <TrevorV> It doesn't look promising for me to show up guys... Makes me sad, and jealous.
20:42:59 <xgerman> cool
20:43:08 <rm_work> TrevorV: there might be another round
20:43:15 <xgerman> TrevorV just hide in blogans suitcase
20:43:15 <rm_work> and i was definitely WAY under what they expected for budget
20:43:17 <TrevorV> Its unlikely they'll pick me anyway.
20:43:27 <TrevorV> You gonna hostel it rm_work ?
20:43:34 <rm_work> no, but cheap hotel
20:43:37 <TrevorV> I've signed up as a speaker, last week sometime.
20:43:38 <rm_work> staying at the same one as brandon
20:43:43 <rm_work> woo 3/5 star hotel
20:43:49 <TrevorV> In japan, that might be bad.
20:43:50 <TrevorV> lulz
20:43:52 <rm_work> heh yeah
20:44:01 <rm_work> Westin is 5-star <_<
20:44:07 <rm_work> worried a little about 3-star
20:44:07 <xgerman> it is bad
20:44:38 <TrevorV> I'll make the mistake of getting into a 5-star, blowing the budget ideals there, and then getting in trouble and never being able to travel again... Except I'll have gone to tokyo, so "worth it" is all I'll say :P
20:44:40 <blogan> it'll be a little room
20:44:52 <rm_work> heh
20:44:57 <sbalukoff> Haha
20:45:24 <johnsom> My objective is to not end up in a "pod" hotel...
20:45:26 <xgerman> https://www.youtube.com/watch?v=K0ViYfN420k
20:45:40 <rm_work> so once we are done with main topics, i will ask who all is staying afterwards and if anyone wants to go to Osaka with me :P
20:46:08 <rm_work> I'm there from 10/24 to 11/14
20:46:17 <blogan> i want to ride on a plane that is tall person friendly
20:46:17 <rm_work> and no plans for week 2 yet
20:46:19 <xgerman> well, 10/31 is a big kid holiday
20:46:29 <rm_work> 11/03 is Culture Day
20:46:38 <rm_work> festivals and stuff i think that are cool
20:46:41 <rm_work> if you stay longer
20:46:46 <xgerman> mmh
20:46:53 <rm_work> anywho, any other real topics
20:46:56 <rm_work> where were we?
20:47:03 <xgerman> I think no real topics
20:47:13 <xgerman> done early?
20:47:16 <sbalukoff> Don't have the vacation time for it, myself, unfortunately. Otherwise I'd love to, rm_work. :P
20:47:20 <sbalukoff> Done!
20:47:26 <xgerman> #endmeeting