17:03:12 <timsim> #startmeeting designate
17:03:13 <openstack> Meeting started Wed Apr  5 17:03:12 2017 UTC and is due to finish in 60 minutes.  The chair is timsim. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:03:15 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:03:17 <timsim> #topic roll call
17:03:18 <openstack> The meeting name has been set to 'designate'
17:03:25 <diana_clarke> o/
17:03:29 <trungnv> o/
17:03:33 <hieulq_> o/
17:03:33 <sonuk> o/
17:03:41 <timsim> ping mugsie kiall (if you're around)
17:04:03 <mugsie> o/
17:04:08 <mugsie> sorry - got distracted
17:04:40 <timsim> No worries, are  you around in full mugsie?
17:04:59 <mugsie> yup
17:05:11 <timsim> Alright
17:05:22 <timsim> #topic Bug Triage
17:06:00 <timsim> #chair mugsie
17:06:05 <mugsie> :)
17:06:11 <timsim> Just in case I have to drop off
17:06:16 <mugsie> k
17:06:27 <timsim> https://bugs.launchpad.net/designate/+bug/1674565
17:06:27 <openstack> Launchpad bug 1674565 in Designate "openstack recordset create does not throw any error when Zero TTL Values are specified" [Undecided,New]
17:06:55 <mugsie> I think this is a client bug
17:07:01 <timsim> yeah that's what I was going to say
17:07:06 <mugsie> somewhere 0 is being cast to null
17:07:15 <timsim> yeah. meh
17:07:23 <timsim> I'll move it
17:07:30 <mugsie> its an issue - lhf, in the client bugs, medium?
17:07:39 <mugsie> but not a huge issue
17:07:48 <timsim> yeah
17:07:51 <timsim> agreed
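The falsy-cast mugsie suspects can be sketched in a few lines (function and field names here are made up for illustration; the real python-designateclient code differs):

```python
# Hypothetical sketch of the suspected client bug: a truthiness check
# treats ttl=0 the same as "no TTL given", so 0 never reaches the API
# and no validation error is ever raised.

def build_request_buggy(name, ttl=None):
    body = {"name": name}
    if ttl:  # bug: 0 is falsy, so a TTL of 0 is silently dropped
        body["ttl"] = ttl
    return body

def build_request_fixed(name, ttl=None):
    body = {"name": name}
    if ttl is not None:  # fix: only omit the field when it was not given
        body["ttl"] = ttl
    return body

print(build_request_buggy("www.example.org.", ttl=0))  # no 'ttl' key at all
print(build_request_fixed("www.example.org.", ttl=0))  # 'ttl': 0, so the API can reject it
```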
17:07:52 <timsim> https://bugs.launchpad.net/designate/+bug/1675719
17:07:52 <openstack> Launchpad bug 1675719 in Designate "type not throwing any error message when repeated twice while creating record " [Undecided,New]
17:08:16 <mugsie> eh
17:08:33 <mugsie> multiple --records create multiple records afaik
17:08:45 <mugsie> or not
17:08:47 <mugsie> wow
17:09:14 <mugsie> yeah, client issue anyway - but I am not sure how we would even fix that
17:09:26 <mugsie> I think it is the cliff parsing code
17:09:52 <timsim> Probably an issue, but not one we're likely to fix
17:09:59 <timsim> Just don't specify a flag twice :P
17:10:12 <timsim> Move it, low?
17:10:20 <mugsie> yeah, its a pain, but if someone has a fix - ++
17:10:22 <mugsie> yeah
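The repeated-flag behaviour mugsie attributes to the cliff parsing code can be reproduced with plain argparse, which cliff builds on; a minimal sketch (the --type option name is just illustrative):

```python
import argparse

# With the default 'store' action, argparse silently keeps the last
# value when an option is repeated, rather than raising an error --
# which matches the "repeated twice, no error" symptom in the bug.
parser = argparse.ArgumentParser()
parser.add_argument("--type")

args = parser.parse_args(["--type", "A", "--type", "AAAA"])
print(args.type)  # the first "--type A" is silently discarded
```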
17:10:39 <timsim> https://bugs.launchpad.net/designate/+bug/1676827
17:10:39 <openstack> Launchpad bug 1676827 in Designate "enable worker is deprecated but not start without enabling" [Undecided,New]
17:10:51 <timsim> This is just "deprecated" being confusing
17:11:11 <mugsie> yeah - lhf, and suggest updating the string?
17:11:57 <timsim> Seems legit, hopefully we'll just get that done this cycle and it'll be gone
17:12:01 <mugsie> ++
17:12:30 <timsim> https://bugs.launchpad.net/designate/+bug/1676228
17:12:30 <openstack> Launchpad bug 1676228 in Designate "Install and configure for Red Hat Enterprise Linux and CentOS in Installation Guide for DNS Service" [Undecided,New] - Assigned to plieb (jliberma)
17:12:58 <mugsie> legit
17:13:10 <mugsie> C, and backport
17:13:42 <timsim> yeap
17:14:31 <timsim> #topic stable backport triage
17:14:53 <timsim> I don't have the link, but I don't think there's going to be anything here.
17:14:56 <timsim> Move on?
17:15:05 <mugsie> nope - broken gates will stop that
17:15:07 <mugsie> yeah
17:16:17 <timsim> #topic Open Discussion
17:16:26 <mugsie> Gate?
17:16:31 <mugsie> I see diana_clarke is here
17:16:49 <trungnv> hello?
17:16:57 <diana_clarke> mordred, dims, and I have been working on un-breaking the gate
17:17:04 <timsim> <3
17:17:22 * timsim hasn't had any time to look at it
17:17:25 <mugsie> I have not been avoiding you deliberately - I have been stuck in internal work for the last few days
17:17:42 <mordred> heya
17:17:43 <trungnv> regarding Open Discussion, I want to talk about rolling upgrade for Designate
17:17:45 <mugsie> it's down to the new eventlet?
17:17:50 <mordred> yah - it is
17:17:53 <diana_clarke> mugsie: yup
17:17:53 <mugsie> -_-
17:18:03 <mordred> with the upcoming even newer eventlet we're down to 2 unittest failures
17:18:06 <mugsie> OK. it could also be the way we initialise eventlet
17:18:12 <mugsie> we do it really early
17:18:18 <timsim> We changed that somewhat recently too didn't we?
17:18:30 <mugsie> yeah, so it is not quite as early, but still early
17:18:34 <dims> so folks, are you ok with dropping off from requirements process?
17:18:46 <mugsie> dims: I would rather not
17:18:58 <mugsie> but if we are causing issues, we should
17:19:04 <mordred> yah - that was the release-valve idea we came up with for you
17:19:10 <dims> mugsie : problem is the firehose of g-r/u-c updates is hard to keep up with
17:19:19 <mordred> basically, turn it off until new eventlet and until you get a chance to fix the remaining issues
17:19:19 <mugsie> 90% of the time we are OK
17:19:22 <diana_clarke> There's some additional discussion here too: https://github.com/eventlet/eventlet/issues/390#issuecomment-291497330
17:19:24 <mordred> and then turn it back on
17:19:31 <mugsie> yeah
17:19:41 * timsim has a feeling it'll never get turned back on
17:19:57 <mugsie> I am sorry to have been MIA for so much of this, but I will clear some time this week
17:20:03 <mordred> there's also sqlalchyemy issues: https://review.openstack.org/#/c/451569/
17:20:11 <mordred> with the new sqla
17:20:11 <dims> timsim : folks like me who do not know anything cannot help unfortunately
17:20:11 <mugsie> yeah - I saw that
17:20:16 <dims> it's not just eventlet...
17:21:05 <timsim> This isn't new; we've just normally had the time to address these kinds of issues. This last week has been tough on mugsie and me.
17:21:13 <mordred> yah, for sure
17:21:39 <mordred> fwiw, I'm happy to do whatever I can to be helpful - but the outstanding eventlet issue confuses me and I have no idea what's actually breaking
17:21:57 <dims> timsim : mugsie : i am ok to leave things as is until one of you has the time. i just don't want to be pressured to revert things back in g-r/u-c
17:22:00 <diana_clarke> timsim: No worries from my point of view, I was just playing in your sandbox for fun. I'm not actually running into designate issues.
17:22:01 <dims> because of designate
17:22:15 <mugsie> yeah - I need to get an env up and running
17:22:20 <mugsie> dims: understandable
17:22:27 <dims> thanks mugsie
17:22:30 <timsim> I wouldn't worry about that dims. It's on us to get it figured out if everyone else is fine.
17:22:31 <mugsie> diana_clarke: and thanks for that
17:22:50 <trungnv> Could I discuss my topic in Open Discussion now? -- Rolling upgrade (trungnv)
17:22:51 * mugsie is trying to figure out how to get py3.5 on OpenSuse LEAP right now
17:23:02 <dims> thanks for saying that timsim
17:23:23 <mugsie> trungnv: let us finish up on the gate, and then we will move on to rolling upgrade
17:23:50 <trungnv> mugsie: yes.
17:23:55 <mugsie> I think the gist is we (me / timsim) need to spend a day or so fixing the outstanding issues
17:24:27 <mugsie> diana_clarke: mordred are you OK if we take your patches and move them around / combine them / etc ?
17:24:33 <mordred> please do!
17:24:39 <diana_clarke> mugsie: yup :)
17:24:45 <mugsie> cool.
17:24:55 <timsim> Alright, so we'll get on that ASAP.
17:24:59 <mugsie> ++
17:25:02 <mordred> \o/
17:25:08 <timsim> #action timsim/mugsie to do the needful to fix the gates
17:25:17 <dims> great thanks mugsie diana_clarke and timsim
17:25:30 <timsim> <3 for stepping in dims diana_clarke mordred it's much appreciated
17:26:06 <timsim> Alright trungnv go ahead.
17:26:08 <mordred> timsim: sure thing - sorry we couldn't figure out the final bit
17:26:26 <trungnv> yes. thanks.
17:26:41 <timsim> No worries. That's a particularly nasty one.
17:27:08 <diana_clarke> If we knew why this test [1] is failing, I think we'd be able to fix the gate.
17:27:11 <diana_clarke> [1] http://logs.openstack.org/95/453195/2/check/gate-designate-python27-ubuntu-xenial/a02a2d1/console.html#_2017-04-05_13_38_38_610869
17:27:24 <mugsie> nsd !?!?!?!
17:27:25 <diana_clarke> ( I think that's the last hurdle)
17:27:25 <mugsie> gah
17:27:40 <mugsie> it's an experimental backend -_-
17:27:48 <mugsie> OK - that helps
17:28:32 <mugsie> OK, I think we have everything for the gate
17:28:38 <diana_clarke> That's from: https://review.openstack.org/#/c/453195/
17:28:49 <diana_clarke> (obs, lol)
17:30:18 <trungnv> Now I can start?
17:30:33 <mugsie> sure
17:30:54 <trungnv> this is my blueprint for rolling upgrade: https://blueprints.launchpad.net/designate/+spec/designate-rolling-upgrade
17:31:06 <trungnv> I work at Fujitsu, and we want to work on Designate as part of our schedule.
17:31:11 <trungnv> And we want to implement rolling upgrade for Designate in the next cycle.
17:31:45 <mugsie> OK
17:32:10 <mugsie> this is a lot of detailed work, especially for anyone not familiar with the codebase
17:32:30 <trungnv> yes.
17:32:42 <trungnv> I will list out the detailed items later.
17:33:06 <trungnv> For now, I really need to get approval to proceed with my schedule.
17:33:40 <mugsie> A cycle is quite short for this - most projects have done this over 2 or 3
17:34:20 <trungnv> perhaps, but I will try to implement this feature in this cycle.
17:34:31 <timsim> There also seems to be bits of it that just aren't possible. For example, OVO
17:34:41 <mugsie> yeah
17:34:48 <mordred> ?OVO
17:34:53 <timsim> oslo versioned objects
17:34:54 <mugsie> Oslo versioned objects
17:34:55 <mordred> oh - yah
17:35:08 <timsim> trungnv: What is the actual reason for wanting this specific thing?
17:35:11 <mugsie> we have our own, that are pretty much incompatible
17:35:26 <timsim> trungnv: Is it just coming out
17:35:32 <timsim> oops
17:35:40 <timsim> coming out of wanting to have a good upgrade experience?
17:35:57 <mugsie> also, API microversions - there would need to be a *very* strong case why we need them for rolling upgrade
17:36:03 <timsim> The upgrades for N-P and onward should be, at most, drop new config, new code, restart services.
17:36:36 <timsim> Yeah microversions isn't happening in all likelihood
17:36:54 <trungnv> rolling upgrade is a good feature to implement for any project.
17:36:54 <mugsie> but, we should apply for the accessible-upgrade tag
17:37:21 <mugsie> it can be. but the extra code complexity needed for it needs to be examined
17:37:55 <mugsie> seeing as we have our data plane completely separate, we are not in as bad a place as other projects
17:38:02 <timsim> trungnv: Right, but if the entire idea behind achieving the tag is to have a good upgrade experience, and ours is already 80% of the way there, without doing 1000 hours of work, it doesn't make sense.
17:38:42 <mugsie> if we had 200 engineers like nova, it might be worth it - but we have at most 2 or 3
17:39:06 <mugsie> I would want to understand what code we need to add before we say yes
17:39:13 <mugsie> and what impact it will have
17:39:38 <mugsie> and some of the bits will be useful
17:40:19 <trungnv> I know; rolling upgrade will need to be implemented in the future, with the rolling-upgrade tag assigned for it.
17:40:38 <trungnv> I will list out the detailed items at the next meeting for you.
17:40:56 <mugsie> OK, we can look at the level of complexity then
17:41:48 <timsim> That's fine, but I'd urge you to examine __why__ you want rolling upgrade, so that we can address the actual need that you have perhaps without doing the full rolling-upgrade tag experience.
17:42:08 <timsim> Alright, anyone else have anything?
17:42:43 <trungnv> with rolling-upgrade feature.
17:42:45 <diana_clarke> not me, cheers folks
17:42:56 <sonuk> timsim: i have
17:43:13 <timsim> What's up sonuk
17:43:28 <trungnv> thanks. see you next meeting.
17:43:58 <sonuk> I create an instance in a domain in one region, but I want that domain to be resolved in another region
17:44:27 <mugsie> humm - 2 separate installs?
17:44:49 <sonuk> both regions are on the same network
17:45:22 <mugsie> ok, I am not understanding the issue
17:45:24 <sonuk> mugsie: yes
17:45:53 <mugsie> we tell people to run a single designate install
17:45:59 <mugsie> (like keystone)
17:46:17 <mugsie> but, unless you block DNS traffic, it should work
17:46:50 <sonuk> mugsie: one openstack deployment with region one and another openstack deployment with region two, and two designates
17:47:07 <sonuk> but keystone shared
17:47:37 <mugsie> yeah - we recommend sharing a designate like keystone
17:48:03 <mugsie> why can't each region get to the other's DNS servers?
17:50:18 <mugsie> I have to run, but sonuk we can continue in #openstack-dns later if that suits?
17:50:27 <sonuk> mugsie: can I resolve the created instance's domain from one region in another region with some pool config?
17:50:41 <timsim> Same, I need to run.
17:51:13 <mugsie> sonuk: I think we need to talk about this in more detail - I am not sure what the problem is
17:51:21 <sonuk> timsim: ok we can continue on another channel np.
17:51:59 <timsim> sonuk: It's about DNS resolution, though, not anything Designate-specific. If you have a host that needs to resolve something sitting on a DNS server in another region, you should ensure that the DNS server in the other region is authoritative for that zone, and that your resolver will go there when it queries for that zone.
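The setup timsim describes can be sketched as plain resolver configuration; a hedged example in BIND named.conf syntax, with a made-up zone name and server address (nothing Designate-specific):

```
# Hypothetical resolver config for region one (BIND named.conf syntax):
# send queries for region two's zone to region two's DNS servers.
zone "region-two.example.com" {
    type forward;
    forward only;
    forwarders { 192.0.2.53; };  // region two's authoritative server (example address)
};
```

Any equivalent mechanism (a stub zone, or a resolver that simply recurses to the public delegation) works the same way; the point is only that queries for that zone must reach the other region's authoritative servers.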
17:52:03 <timsim> Alright, let's wrap up
17:52:09 <timsim> See you all in #openstack-dns
17:52:15 <timsim> #endmeeting