17:01:31 #startmeeting designate
17:01:32 Meeting started Wed Jun 18 17:01:31 2014 UTC and is due to finish in 60 minutes. The chair is Kiall. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:33 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:01:36 The meeting name has been set to 'designate'
17:01:40 Heya
17:01:43 Who's about?
17:01:45 Hi all.
17:01:48 heya
17:01:50 o/
17:01:55 o/
17:01:58 o/
17:02:12 literally just off a call, getting ducks in a row now ;) 2 sec
17:02:14 here
17:02:44 ello
17:02:46 #topic Review action items from last week
17:02:48 was in the wrong chan :)
17:02:54 ekarlso: hah :)
17:02:59 o/
17:03:00 Okay - First was kiall to file BP on exposing NS/SOA records in V2 API
17:03:00 Hey everyone
17:03:23 https://blueprints.launchpad.net/designate/+spec/expose-soa-ns and https://review.openstack.org/#/c/100901/ cover it off
17:03:30 Please read + review :)
17:03:43 Next was kiall to write out mdns initial load scenarios and add to eankutse's BP
17:03:56 I think eankutse's blueprint covered off pretty much all of them?
17:04:12 I did add that to the blueprint
17:04:14 (the Q+A's in https://blueprints.launchpad.net/designate/+spec/mdns-master)
17:04:28 Again - Please read, comment, etc :)
17:04:54 And - That was all the items I believe
17:05:07 #topic Review blueprint for Recordsets/Records DB tables redesign
17:05:10 betsy: about?
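The blueprint about to be discussed splits DNS data across a parent recordsets table and a child records table keyed by `recordset_id`. As a toy illustration of that parent/child shape only (the column names here are invented for the sketch and are not the spec's actual schema):

```python
import sqlite3

# Hypothetical, simplified version of the two tables under discussion;
# columns are illustrative, not the blueprint's real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE recordsets (
    id        TEXT PRIMARY KEY,
    domain_id TEXT NOT NULL,
    name      TEXT NOT NULL,
    type      TEXT NOT NULL
);
CREATE TABLE records (
    id           TEXT PRIMARY KEY,
    recordset_id TEXT NOT NULL REFERENCES recordsets (id),
    data         TEXT NOT NULL
);
""")
conn.execute(
    "INSERT INTO recordsets VALUES ('rs-1', 'd-1', 'www.example.com.', 'A')")
conn.execute(
    "INSERT INTO records VALUES ('r-1', 'rs-1', '192.0.2.10')")
# One recordset groups many records; the join is on recordset_id.
row = conn.execute(
    "SELECT r.data FROM records r "
    "JOIN recordsets rs ON r.recordset_id = rs.id "
    "WHERE rs.name = 'www.example.com.'").fetchone()
print(row[0])  # 192.0.2.10
```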
17:05:15 yep
17:05:17 #link https://blueprints.launchpad.net/designate/+spec/recordsets-records-tables-redesign
17:05:26 So, we’ve talked this to death, I think
17:05:30 :D
17:05:39 #link https://wiki.openstack.org/wiki/Designate/Blueprints/Records_Table_Redesign
17:05:43 ^ actual spec :)
17:05:43 I just wanted to let everybody know that the bp/spec is ready for approval
17:05:57 So, comments welcome
17:06:01 ready for review or ready for approval :-)
17:06:25 I think it’s been reviewed/discussed by everyone, but maybe I’m wrong
17:06:25 Okay - Does anyone have any outstanding concerns on it? (Or, have people read it yet? :))
17:06:38 i'm good.
17:06:44 goodie
17:06:46 One note on the scenarios
17:07:01 They should probably be
17:07:21 *They should probably be "recordset_id" instead of "recoredset_id" ;)
17:07:39 Oops
17:07:48 :)
17:08:09 Lol - Okay, other than minor typos, any concers?
17:08:11 concerns*
17:08:22 Not from me.
17:08:24 nope - pretty happy with it
17:08:37 still good.
17:08:47 Okay - No dissent - I'll mark it approved.
17:09:16 #topic New style rules
17:09:19 jaycaz: about?
17:09:28 Upgrading the hacking package
17:09:41 it's in the openstack global-requirements
17:10:01 plus, there are some new style rules that are important for making sure I18n is set up properly for log messages
17:10:25 however, there are a bunch more style rules that have been added as well
17:10:27 #link https://bugs.launchpad.net/designate/+bug/1330540
17:10:30 Launchpad bug 1330540 in designate/juno "Style checker ignoring logging I18n issue" [Medium,In progress]
17:11:08 here's a quick list of the new style rules:
17:11:09 # H104 file contains nothing more than comments
17:11:09 # H236 Python 3.x incompatible __metaclass__, use six.add_metaclass()
17:11:09 # H305 imports not grouped correctly
17:11:09 # H307 like imports should be grouped together
17:11:09 # H405 multi line docstring summary not separated with an empty line
17:11:09 # H904 Wrap long lines in parentheses instead of a backslash
17:11:10 # E111 Indentation is not a multiple of four
17:11:10 # E126 continuation line over-indented for hanging indent
17:11:11 # E128 continuation line under-indented for visual indent
17:11:11 # E251 unexpected spaces around keyword / parameter equals
17:11:12 # E265 Block comment should start with '# '
17:11:27 * Kiall is really surprised jaycaz wasn't booted by freenode for that ;)
17:11:35 haha, whoops
17:11:44 still new to irc, sorry
17:12:11 Specifically - there are 11 new rules which we don't stick to, some are important (e.g. "H236 Python 3.x incompatible..."), some are pretty trivial (e.g. "H405 multi line docstring summary not separated with an empty line")
17:12:29 Everyone happy for us to simply ignore the trivial new rules for now?
17:12:34 yup
17:12:39 Are any of these required for graduation from incubation phase?
17:12:39 either way, I made individual patches
17:12:47 Fo' sho.
17:12:51 vinod1, no
17:12:52 so whichever ones you decide to apply, I can add pretty easily
17:13:17 And - Of those 11 - what do we think are non-trivial?
I'd say H236 is the only one I think we should try fix soon
17:13:24 also, H405 isn't even an issue because we also ignore H404
17:13:54 I'd say H236 is the most important
17:14:02 I thought E126/128 were enforced already, huh.
17:14:03 H236 seems important.
17:14:14 is there a tool to check for these in code?
17:14:20 (the non-trivial ones)
17:14:21 jaycaz: Well, If you have patches made already - I wouldn't be opposed to applying + enforcing, I just don't want anyone spending too much time on trivial things :)
17:14:35 no, that's fine
17:14:53 rjrjr_: yes, "hacking" and flake8 enforce them.. If we don't ignore them in tox.ini, the gate would prevent them from entering the codebase :)
17:14:56 Hey, isn’t that what interns are for? :D
17:14:57 some of the trivial ones actually take more work than they might be worth
17:15:21 so, we might have this covered already except H236?
17:15:26 H904 was kind of a pain
17:15:33 No, these are new rules brought in by a new version of hacking
17:15:53 jaycaz: If you’ve already done the work, then we should start enforcing them
17:16:02 +1
17:16:02 I have patches for H236, H305, H307, H904, E111, E251, E265
17:16:18 betsy: Yea, agreed. I personally don't care what style we enforce - so long as it's consistent ;)
17:16:27 The indent ones (E126, E128) were only breaking for a few really long lines
17:16:43 and I wasn't sure how to change them and still keep it readable
17:17:09 jaycaz: Sounds like we should take the patches you have, and ignore the rest. No point letting the patches rot / go to waste,
17:17:19 I'd say just use your best judgement on the ones we should be following and ignore the arduous ones.
17:17:23 Kiall: So, for the patches I have, should I commit them individually or consolidate them together?
17:17:42 i think this is one case where a consolidated patch is ok
17:17:49 I would do them separately, easier to review, if you have them that way already :)
17:17:52 If not .. 1 is OK
17:18:11 I have them separately.
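The ignore list Kiall mentions lives in the flake8 section of tox.ini. A sketch of what keeping only the agreed-on exclusions might look like (the exact rule set and exclude paths below are illustrative, not the project's actual configuration):

```ini
# Hypothetical tox.ini fragment -- which rules end up ignored is for
# the team to decide; this set is only an example.
[flake8]
# H104: file contains only comments (trivial)
# H404/H405: docstring summary style (already ignoring H404)
# E126/E128: continuation-line indents that were hard to fix readably
ignore = H104,H404,H405,E126,E128
exclude = .venv,.git,.tox,dist,doc,*openstack/common*
```

Rules removed from `ignore` become gate-enforced: any patch violating them fails the flake8 job and cannot merge.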
That might be best because some of them affect a lot of files
17:18:31 H305/307 affects over a hundred files, but only the import statements
17:18:38 and it's mostly just adding newlines
17:18:42 jaycaz: great, if you apply them to your clone 1 after another, and `git review`, it'll create a review for each commit
17:19:49 Okay - Once ^ are applied, we can see a smaller list and decide if there are others we want to tackle / continue to ignore
17:20:01 so, commit them all separately, but to the same branch, and only git review at the end?
17:20:08 jaycaz: exactly
17:20:22 that's pretty nifty. all right then!
17:20:27 betsy / vinod1 / mugsie - since they affect so many files, merge conflicts will be real easy.. if we can review them promptly it'll save jaycaz some pain
17:20:45 true
17:20:45 yup
17:20:51 Okay - Moving on
17:20:55 #topic Zone Ownership Transfer Blueprint
17:21:06 o/
17:21:07 #link https://review.openstack.org/#/c/100267/
17:21:17 #link http://docs-draft.openstack.org/67/100267/5/check/gate-designate-specs-docs/ca4a197/doc/build/html/specs/juno/zone-migration-between-tenants.html
17:21:27 betsy, put up a patch with typos fixed
17:21:42 so have people read the spec? any questions?
17:22:13 BTW - Thanks for being the first to brave the new designate-specs repo ;)
17:22:38 I did get it setup - feels right that I feel the pain first :)
17:22:40 +1 And how did you generate the docs to make it look pretty?
17:22:51 the jobs do that automatically
17:22:52 betsy: the gate will automagically do it
17:23:04 Ah.
I just didn’t know where to look for it
17:23:06 the "gate-designate-docs" job link will point to the rendered docs
17:23:30 it is the gate-designate-specs-docs link that jenkins comments with
17:23:31 (post-merge, they still don't get published anywhere, we'll fix that as soon as infra have somewhere for them to go)
17:23:59 well, github will render them after merge for the time being
17:24:18 So - besides betsy / myself - has anyone else had time to review it yet? No is Ok :)
17:24:20 The BP looks good to me at a glance. Looks like a really cool feature.
17:24:30 i did not yet look at them
17:24:33 i haven't.
17:24:35 looked at it, looks good
17:25:39 cool. can we aim to have it merge soon (next day or 2)?
17:25:42 mugsie - Okay, seems the general opinion is it's fine - maybe get started on it, and we can look again next week?
17:25:58 Not me
17:26:00 after vinod1 / rjrjr_ anyone else has time to read
17:26:18 we dont have to wait for the meeting next week - when ever people have time, just leave comments like a normal review
17:26:33 and when we think it is ok, we can +A it
17:26:34 mugsie: will do.
17:26:34 mugsie: true - half the point of the specs repo I guess :)
17:26:38 yup
17:26:39 mugsie: I really like this new process. Thanks again for setting it up
17:26:42 np
17:27:02 #action all to review owner transfer spec @ https://review.openstack.org/#/c/100267/
17:27:26 If there's no Q's for mugsie - we'll move on...
17:27:44 #topic Cells and Designate
17:27:46 (was: Is it possible to have Designate Sink consume notification from a separate rabbitmq instance than it uses to communicate with Designate central?)
17:28:15 rjrjr_: you added this one I think? :)
17:28:20 Subbu_ is an eBay architect and has joined us for this topic.
17:28:31 #link http://docs.openstack.org/trunk/config-reference/content/section_compute-cells.html
17:28:51 Subbu_: heya, welcome :)
17:29:06 just here as a user/operator - would like to understand the pattern for deploying Designate with compute cells in place
17:29:46 Subbu_: the more users/operators join, the more likely we are to build things that work ;)
17:30:14 Kiall: yes. rjrjr_ and our team was debating the best pattern
17:30:29 so, yesterday, kiall mentioned shovel. i'm wondering if we can't do better. 8^)
17:30:37 well atm it's not possible with the current sink code, but I don't think it should be overly hard (but then again I haven't looked at how oslo.messaging is with multiple connections to diff rmq's)
17:30:43 So - I have to admit, I know very little about Nova Cells. I suppose they could be considered similar to regions, Just with 1 API endpoint for N "cells"?
17:30:57 Kiall: a bit different
17:31:11 since designate-sink currently assumes u have the stuff u consume from and designate on the same rmq connection
17:31:49 top level cell communicates with cell level nova-scheduler via a rabbit
17:31:52 Kiall: well, each cell has different rmq's isn't that correct ?
17:31:55 within each cell there is a different rabbit
17:32:04 correct - to scale out rabbit mess
17:32:10 so u would need kinda 1 sink then pr cell or smth
17:32:24 ekarlso: correct.
17:32:29 how does ceilometer get events from the cells?
17:33:02 we have not looked at Ceilometer in the context of cells yet (it has its own scaling issues)
17:33:09 right
17:33:16 https://blueprints.launchpad.net/ceilometer/+spec/nova-cell-support
17:33:29 looks like cell support has been discussed for ceilometer.
17:33:32 Subbu_: Also - you're specifically interested in designate-sink + cells?
(The original agenda item mentioned sink by name)
17:33:39 The challenge as I understand from rjrjr_ is that the inbound and outbound messages for the sink need to go over different rabbits
17:33:44 correct
17:34:17 yes, we want sink to participate in cell, but communicate with Designate central.
17:34:33 the idea of shovel was mentioned, but that's a bit scary from scalability point of view. The intent of cells is to keep each cell rabbit independent
17:34:34 Okay - About 2 months ago we made the switch from the old oslo.rpc to the new oslo.messaging code, which ekarlso tells me supports multiple connections to different RMQs. But - Our code today doesn't take advantage of this.
17:34:45 (Central and API do not sit in the cell level FYI)
17:34:55 Kiall: well, I haven't looked much at o.m : p
17:34:56 Kiall: that seems like a good idea
17:34:59 but I *think* it would go
17:35:32 could do some time later today or tmrw to find out
17:35:35 Okay - So, designate-sink could be updated to support supplying two sets of RMQ connection details, 1 for talking to Designate, 1 for receiving events.
17:35:42 (in theory.. )
17:35:45 +1
17:36:41 rjrjr_: what do you think?
17:36:53 if possible, i would write up a BP and work on solution.
17:36:57 would that fix cells though? - it would be n+1 connections (where n = # cells)
17:37:13 What if the queue talking to Designate dies, we lose all the messages. we dont persist them in Sink as of now.
17:37:24 mugsie: well, you would deploy Nx designate-sink's into each cell,
17:37:25 mugsie: just two
17:37:38 Kiall: correct, there will be n sinks
17:38:00 dtx00ff: If an exception occurs while processing a message in sink, it will not be ACK'd, so it will remain on the queue for reprocessing.
17:38:22 ekarlso: can you let me know if it is possible with o.m?
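The ACK behaviour Kiall describes to dtx00ff can be illustrated with a toy consumer loop. This is a pure-Python stand-in for the AMQP machinery, not actual designate-sink code: a handler exception means the message is never acknowledged, so it stays on the queue for another delivery attempt.

```python
import collections

# A list-backed stand-in for a RabbitMQ queue holding one notification.
queue = collections.deque(["compute.instance.create.end"])

def handle(event, fail):
    """Hypothetical event handler; `fail` simulates a processing error."""
    if fail:
        raise RuntimeError("handler blew up")
    return "processed " + event

def consume_one(fail):
    event = queue[0]            # peek: delivered, but not yet ACK'd
    try:
        result = handle(event, fail)
    except RuntimeError:
        return None             # no ACK -> message remains queued
    queue.popleft()             # ACK: remove the message for good
    return result

consume_one(fail=True)
print(len(queue))   # still 1: the failed message was not ACK'd
consume_one(fail=False)
print(len(queue))   # 0: processed successfully, then ACK'd
```

The pattern only guards against handler failures; as dtx00ff notes, messages already pulled off a dead broker with no persistence in sink itself are a separate problem.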
17:38:30 rjrjr_: sure, I can try tmrw
17:39:09 ekarlso - yea, since you know the lib better than us ;) if you could give rjrjr_ some pointers, and if it looks like it will solve their problem, we can look into implementing it
17:39:29 very appreciated. 8^)
17:40:08 thank you guys - this makes it possible for us to put it in place :)
17:40:45 No problem - The pain of openstack is the million and three different ways to deploy it.. We should try and support the different patterns we can :)
17:40:57 Okay - Moving on, last item...
17:40:57 :)
17:41:02 #topic Repository Rename
17:41:28 We need to ask infra to rename the repo from stackforge/designate etc to openstack/designate now that we're incubated..
17:41:58 I know we at HP will have some pain and internal tooling to update when this happens, so, I wanted to check with others before raising it with infra...
17:42:33 I've put it on their agenda for next Tuesday - where we can try influence the schedule if necessary.
17:42:39 Anyone have any preference?
17:42:47 It'll break a few little things, everyone will have to change their git remotes. But it shouldn't be that bad.
17:43:33 tsimmons: It'll be a lot more painful than that for us sadly ;)
17:44:00 Oh I don't doubt that. I would say whenever you think you'll be ready is when you should try and get it done.
17:44:27 i agree. we can mitigate the work on our end.
17:45:18 Okay - Sounds like nobody has any preference, great :) I'll attend the infra meet next week to figure things out.. Since it involves downtime for *all of gerrit* across openstack, they'll probably have to try and fit it in with other pending maintenance etc etc. Will let you know next week.
17:45:48 +1
17:45:50 Once it's renamed - I plan on submitting out DevStack plugin into DevStack proper - which should help with "outsiders" experimenting etc
17:45:55 our*
17:46:04 Okay... Moving on
17:46:05 #topic Open Discussion
17:46:18 Anyone have any off agenda topics?
17:46:24 yup
17:46:43 are people happy with the template I uploaded for the specs repo?
17:46:48 #link https://review.openstack.org/#/c/100336/
17:47:09 i haven't had a chance to look at it.
17:47:22 if so, can we get it merged to help the next people who have to write specs
17:47:26 cool
17:47:43 I thought it was good. We can always edit if someone comes up with some new fanciness.
17:47:52 tsimmons: exactly, it's not set in stone
17:48:00 yup
17:48:01 I liked it
17:48:05 betsy / myself have +2'd it - unless people object .. I'll +A tomorrow.
17:48:28 #action kiall Assuming no -1's, +A #100336
17:49:12 Oh, I have a fun one
17:49:30 Kickout migrate to alembic as was mentioned in the incubation meeting
17:49:30 can we start dividing up the server pool work?
17:50:33 mugsie: I was wondering what was going on with the per tenant options work? That looked cool.
17:50:37 ekarlso: I think, if we do that, it should be phased a little :) once your oslo.db switch patch lands, we can move over to the oslo.db migration code @ https://github.com/openstack/oslo.db/tree/master/oslo/db/sqlalchemy/migration_cli
17:50:59 then, that supports sqla-migrate and alembic, so possibly make the move after that.
17:51:08 tsimmons, waiting for objects, and validations on objects
17:51:13 tsimmons: I think lack of reviews :)
17:51:22 (I know I haven't - apologies!)
17:51:27 Kiall: that's already done in the patch for o.db
17:51:29 :)
17:51:35 Oh - I missed that.
17:51:40 mugsie: Cool.
17:52:07 Yeah we have a lot of stuff open right now.
17:52:59 rjrjr_: I *think* the pools stuff was blocked on mDNS, which is finally making progress
17:53:11 I could be wrong - It's been a few weeks since I've given pools some thought.
17:53:42 mugsie: ^ ?
17:54:08 yea -
17:54:18 still very much blocked by mdns
17:54:49 k
17:55:06 eankutse you were working on the mdns-designate-mdns-functional BP, much progress?
I think that's the main one actually blocking pools
17:55:36 Kiall: this will be notify
17:55:40 and axfr
17:55:43 and soa
17:55:53 vinod is working on the Notify
17:56:04 axfr/soa is on hold for a while
17:56:12 could we start "assigning" the pools work in anticipation of mdns being ready?
17:56:14 maybe a couple of weeks
17:56:51 rjrjr_, yes I think we can
17:56:59 rjrjr_: could change by then, though
17:57:03 mugsie: I believe pools can proceed somewhat now?
17:57:06 rjrjr_: we likely can :) mugsie - can you make sure the BPs are refreshed based on the summit sessions over the next few days, then we review early next week, and discuss next meet?
17:57:09 right?
17:57:14 yup
17:57:23 that would be terrific!
17:57:36 I have the bp nearly done - just trying to make sure i didnt forget anything
17:57:43 #action mugsie to validate pools BPs are up to date with summit session notes and decisions before Monday, giving Monday/Tuesday as review time to people ;)
17:57:53 been churning on operational work and darshan and i are committing our time to server pools work.
17:58:07 rjrjr_: cool :)
17:58:17 Okay - Time's up
17:58:31 Thanks all
17:58:48 #action kiall to put dividing up pools BPs on next agenda
17:58:50 TOPIC: Next mid cycle meetup?
17:58:51 #endmeeting
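For reference on the NOTIFY work mentioned above: on the wire, a DNS NOTIFY (RFC 1996) is an ordinary DNS packet with Opcode 4 and the zone in the question section as an SOA query. A minimal hand-rolled sketch of the request side, independent of designate's actual mdns code (the message ID and zone are placeholders):

```python
import struct

def build_notify(zone: str, msg_id: int = 0x1234) -> bytes:
    """Build a minimal DNS NOTIFY request (RFC 1996) for `zone`.

    Header flags: QR=0 (request), Opcode=4 (NOTIFY), AA=1.
    The question carries the zone name with QTYPE=SOA, QCLASS=IN.
    """
    flags = (4 << 11) | (1 << 10)  # Opcode NOTIFY, AA bit set
    # Header: ID, flags, QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT=0
    header = struct.pack("!6H", msg_id, flags, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte
    qname = b"".join(
        struct.pack("!B", len(label)) + label.encode("ascii")
        for label in zone.rstrip(".").split(".")
    ) + b"\x00"
    question = qname + struct.pack("!2H", 6, 1)  # QTYPE=SOA, QCLASS=IN
    return header + question

wire = build_notify("example.com.")
opcode = (struct.unpack("!H", wire[2:4])[0] >> 11) & 0xF
print(opcode)  # 4, the NOTIFY opcode
```

A master would send this UDP datagram to each secondary after a zone change; the secondary replies with the same ID and QR=1, then typically follows up with an SOA check and, if the serial advanced, an AXFR/IXFR.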