18:02:34 #startmeeting keystone
18:02:34 what was the 1970 bad cartoon. I've seen them all
18:02:36 Meeting started Tue Aug 6 18:02:34 2013 UTC and is due to finish in 60 minutes. The chair is dolphm. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:02:37 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:02:40 The meeting name has been set to 'keystone'
18:02:52 henrynash: thanks for last week!
18:03:04 Hi
18:03:05 dolphm: no
18:03:14 dolphm: np (!)
18:03:17 lol
18:03:28 * topol awkward
18:03:38 Hi!
18:03:51 this is turning into a real community effort.
18:03:53 forgive me, i'm still getting back up to speed :)
18:03:57 dolphm: (that'll teach me to type with apple pie and custard in one hand)
18:04:14 henrynash: you should use a plate
18:04:24 dolphm: yes, it is a bit gooey
18:04:26 henrynash: and maybe a desk
18:04:42 henrynash, what's on your *other hand*? :)
18:04:55 dolphm: bad boy
18:05:00 i definitely want to go through havana-3 BPs today, but that can just be open discussion
18:05:01 gyee: bad boy
18:05:20 #topic Critical issues
18:05:36 No critical bugs as of right now
18:05:44 i'm way behind on triaging new bugs though :(
18:06:00 dolphm, I filed one for auth_token middleware, apparently we are using v2.0 for the admin token
18:06:02 if anyone is aware of an issue that's still New, feel free to mention it
18:06:18 gyee: link?
18:06:37 dolphm, I've been doing a bit of triage. Please feel free to assign any LDAP-based bugs you come across to me, including stuff on the mixed Identity backend
18:06:37 https://bugs.launchpad.net/bugs/1207922
18:06:39 Launchpad bug 1207922 in python-keystoneclient "auth_token middleware always use v2.0 to request admin token" [Undecided,Confirmed]
18:06:55 dolphm, I am about to file another one
18:07:07 gyee: that's not immediately breaking something though, right?
18:07:24 we should not be using httplib in auth_token middleware anymore as it does not validate the server cert
18:07:32 that might be a security issue
18:07:32 gyee: there's a bug and bp for that already
18:07:49 gyee: i think it's assigned to jamielennox
18:08:00 dolphm, oh ok, that's good
18:08:01 dolphm, he's working on it, and a lot of client-related issues
18:08:07 ayoung: +1
18:08:34 dolphm, he broke his work up into a series of reviews.
18:08:49 i'll assume there's nothing too exciting going on, but again... i had literally 100+ emails from launchpad about bugs when i got back lol
18:08:50 dolphm, last week I was haranguing the core keystone devs to do more client reviews
18:09:24 so y'all consider yourselves re-harangued: do more client reviews
18:09:25 ayoung: thanks! the client should really get more of everyone's attention than it does
18:09:41 yes absolutely
18:10:03 dolphm, so the problem is that I think people don't really understand how the client is put together until they deep dive into it
18:10:14 * dolphm harangue
18:10:19 ayoung: thankfully diving into it isn't too bad.
18:10:25 dolphm, maybe next week we put a few minutes aside for a walkthrough?
18:10:36 ayoung +1
18:10:42 ayoung: during the meeting?
18:10:47 ayoung: +1
18:11:02 dolphm, it is the only time we have everyone together. Either then, or a special one-off.
18:11:03 I have a call-in number. would we use that?
18:11:19 ayoung: i'd think a special one-off might be easier.
18:11:19 I think it might be time well spent. And, morganfainberg is right, it isn't that bad
18:11:34 or is it an irc based walkthrough?
18:11:42 ayoung: we can set up webex/g2meeting or something with audio as well, might help.
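[Editor's illustration of the httplib concern raised above: Python 2's httplib performed no certificate verification on HTTPS connections. A minimal sketch, using the modern stdlib `ssl` module only to show the verified-vs-unverified distinction; it is not the keystoneclient fix itself.]

```python
import ssl

# A default context enables certificate verification and hostname
# checking -- the behavior auth_token middleware wants from its
# connection to keystone (and which httplib never provided).
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: verification is on
print(ctx.check_hostname)                    # True: hostname check is on

# An unverified context reproduces the old httplib behavior and is
# vulnerable to man-in-the-middle attacks:
insecure = ssl._create_unverified_context()
print(insecure.verify_mode == ssl.CERT_NONE)  # True
```

Passing a context like `ctx` to `http.client.HTTPSConnection(host, context=ctx)` is what makes the request verify the server's certificate.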
18:11:43 topol, I think IRC would be sufficient
18:11:52 sure, if someone wants to coordinate something, this would be the best place to promote it, but i'm not sure it's the best venue to conduct it
18:11:55 i'll defer if you think IRC is sufficient though
18:11:55 I'm partial to elluminate myself
18:12:09 * dolphm always happy to answer questions on irc
18:12:13 dolphm, cool, put down an action item and I'll try to arrange
18:12:20 com'on ppl, topol is offering his bridge
18:12:23 #action ayoung to coordinate client walkthrough
18:12:24 take advantage of it!
18:12:29 gyee +1
18:12:34 ideally, we would get jamielennox there, which makes it a little late for the Europe folks
18:12:36 #action topol to make bridges
18:12:40 topol, gyee +1
18:12:42 for one, I like to hear ayoung's voice
18:12:47 gyee, you lie
18:12:49 make sure he's real
18:12:56 * ayoung lies too
18:12:58 gyee: lol
18:13:04 what's jamielennox's time zone?
18:13:12 dolphm, brisbane australia
18:13:18 oh wow
18:13:19 do we want a web conference as well or just a bridge?
18:13:36 record it and then we can watch it whenever
18:13:38 4:13 AM for him right now
18:13:39 bknudson: +1
18:13:42 bknudson: +1
18:13:43 bknudson: +1
18:13:49 (who put bp notifications on the agenda?)
18:13:58 me
18:14:03 #topic bp notifications
18:14:08 #link https://blueprints.launchpad.net/keystone/+spec/notifications
18:14:08 Got keystone running in apache per ayoung and bknudson notes. Pulled in the latest oslo notifier and dependencies. Applied the logging fix to remove eventlet issues in keystone/openstack/common/local.py. Applied the notifications module and tested with tenants on CUD, first step, tested with the log notifier, then tested with the rpc notifier. Applied a patch to https://github.com/openstack/oslo-incubator/blob/master/openstack/common/rpc/amqp.
18:14:11 dolphm, with the exception of henrynash I think we are all US eastern or later.
Our team meets at 5:30 PM on Monday, and he can make that meeting
18:14:39 the short of it... the current oslo notification implementation works in keystone under apache httpd
18:14:49 for tenant create, update, and delete
18:14:50 lbragstad, nice
18:14:50 lbragstad: yay!
18:14:51 lbragstad: nice.
18:15:00 lbragstad: is the code posted for review?
18:15:04 lbragstad, do I need to remove a -2 somewhere then?
18:15:05 lbragstad: should notifications be targeted at havana-m3 then?
18:15:12 lbragstad: probably lol
18:15:18 ayoung: * ^
18:15:26 no, it's strung together in like 6 commits on my local branches
18:15:36 I have to fix something in oslo
18:15:42 i guess https://blueprints.launchpad.net/keystone/+spec/unified-logging-in-keystone should be targeted first
18:15:52 dolphm: correct
18:15:59 that *needs* to be fixed
18:16:00 lbragstad: is that a realistic goal?
18:16:05 lbragstad, are you planning on submitting them as 6 commits, or squashing?
18:16:07 lbragstad: one or both for m3
18:16:29 I am going to have to clean them up and submit them individually
18:16:38 We have a month. That should be realistic, assuming Oslo moves, and they are pretty responsive.
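[Editor's sketch of the CUD-notification pattern discussed above: a decorator on manager methods that emits a `resource.operation` event after the operation succeeds. This is illustrative only; the names (`notify`, `SENT`, `create_project`) are hypothetical, not keystone's actual notifications module, and a real deployment would hand the payload to an oslo notifier driver instead of a list.]

```python
import functools

SENT = []  # stand-in for a real notifier (log or rpc driver in oslo)

def notify(operation, resource_type):
    """Emit a '<resource_type>.<operation>' event after the wrapped
    function succeeds."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(resource_id, *args, **kwargs):
            result = fn(resource_id, *args, **kwargs)
            SENT.append({'event': '%s.%s' % (resource_type, operation),
                         'resource_id': resource_id})
            return result
        return wrapper
    return decorator

@notify('created', 'project')
def create_project(project_id, data):
    return dict(data, id=project_id)

@notify('deleted', 'project')
def delete_project(project_id):
    return None

create_project('123', {'name': 'demo'})
delete_project('123')
print([e['event'] for e in SENT])  # ['project.created', 'project.deleted']
```

Because the event is emitted only after the wrapped call returns, a failed create/update/delete produces no notification, which is usually what consumers expect.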
18:16:47 grabbing a link
18:16:53 assuming we can be officially unblocked by oslo
18:16:55 here is part of it (the logging fix)
18:17:05 #link https://review.openstack.org/#/c/39934/
18:17:33 that implements unified logging for keystone using the fix for local.py that removes the eventlet dependency
18:17:42 from there, we can sync with the notifier in oslo
18:17:52 sweet
18:18:07 and then I can push the notification module/tests/implementation as a commit on its own
18:18:09 I think there are 3 reviews now pulling in the same stuff from oslo
18:18:17 OK, recommend we target this for H3, then
18:18:17 bknudson: yes
18:18:21 lbragstad: updated bp unified-logging-in-keystone target & impl
18:18:26 dolphm: thank you
18:18:34 I have to look into the jenkins issue
18:19:21 I have some fixes that are in keystone/openstack/common/rpc/amqp.py that need to land in Oslo first
18:19:32 I can get started on filing a bug for that later today
18:19:37 lbragstad: let's leave notifications targeted at 'next' until logging is totally complete, then we can see how much time we have
18:19:48 dolphm: ok, that sounds fair
18:20:30 dolphm, henrynash has an interesting review that seems to slip in under the letter of the law for an acceptable feature
18:20:32 #topic High priority code reviews
18:20:38 link em up
18:20:46 https://review.openstack.org/#/c/39530/
18:20:48 #link https://review.openstack.org/#/c/40170/
18:20:55 #link https://review.openstack.org/#/c/39530/
18:21:37 "Implement domain specific Identity backends"
18:21:48 ayoung: henrynash's change is pretty cool.
18:22:02 dolphm, it has no new API and the config file is 100% backwards compat
18:22:03 so this was already targeted at H3
18:22:22 can we have multiple identity_api providers now?
18:22:35 bknudson, with 39530, yes
18:22:46 but the dependency registry only supports a single one?
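[Editor's sketch of the idea behind the local.py fix referenced above: the module stored request-local state via eventlet's green-thread-local storage, which broke keystone under apache httpd where eventlet isn't running. The fix removes the hard eventlet dependency; one common shape for that, shown here as an assumption rather than the review's exact code, is falling back to the stdlib's `threading.local` when eventlet is unavailable.]

```python
import threading

# Only use eventlet's greenthread-local storage when eventlet is
# importable; otherwise fall back to stdlib thread-local storage.
# Under apache/mod_wsgi (no eventlet), the fallback is what runs.
try:
    from eventlet import corolocal  # optional dependency
    store = corolocal.local()
except ImportError:
    store = threading.local()

store.context = {'request_id': 'req-1'}

seen_in_other_thread = []

def worker():
    # each thread gets its own storage; 'context' is absent here
    seen_in_other_thread.append(hasattr(store, 'context'))

t = threading.Thread(target=worker)
t.start()
t.join()
print(store.context['request_id'])  # the owning thread still sees it
print(seen_in_other_thread)         # the other thread does not
```

The point of the fallback is that code holding per-request state (like a request ID for logging) keeps working whether the server runs on eventlet greenthreads or ordinary OS threads.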
18:22:54 please take a look at the config file changes
18:23:09 but I would like some guidance on the config changes…
18:23:17 there is some need to push up a cleaner API to the oslo code base, but it supports what we want to do
18:23:26 there are two options. I'll link
18:23:34 …the goal is to be able to create a new config structure for each instantiated backend driver
18:23:43 #link https://review.openstack.org/#/c/39530/11/keystone/common/config.py
18:23:47 ayoung: do 11 and the most recent
18:24:00 and
18:24:18 #link https://review.openstack.org/#/c/39530/12/keystone/common/config.py
18:24:27 ayoung: yep, those are the two
18:24:48 ideally, the helper methods like register_cli_int would be on the config object itself
18:24:51 so we could do
18:24:57 conf.register_cli_int
18:25:03 ayoung: oslo's config object?
18:25:09 dolphm, yeah
18:25:18 ayoung: oslo hates that we use those functions at all
18:25:35 dolphm: so the 2nd link is a version that removes them
18:25:36 dolphm, is version 12 how they want us to do it?
18:25:38 henrynash, looks like good stuff at first glance
18:25:50 ayoung: i'd ask markmc, and take his advice :)
18:25:51 oh we have a review to remove those helper functions?
18:25:54 dolphm: while the first one is a version that tries to keep them
18:26:01 gyee: thx
18:26:03 ayoung: henrynash: get markmc on that review!
18:26:26 dolphm: ok
18:26:30 added him
18:26:30 henrynash: thanks
18:26:40 I thought the whole point of oslo config is that you could have command-line overrides for all the options.
18:26:49 bknudson, this is not command line, though
18:26:51 although I've never tried it.
18:26:54 bknudson: that was my understanding.
18:26:54 this is multiple config files
18:27:09 henrynash, care to explain what you are doing in a bit more detail?
18:27:09 right, the command-line overrides the value in the config file
18:27:14 sure
18:27:33 bknudson: in part, yes...
and i'm not sure how i would expect CLI options to interact with / override per-domain config
18:28:04 so we use the identity Manager layer to allow multiplexing of driver backends (e.g., LDAP server 1 for domain A, LDAP server 3 for domain B, the rest share SQL, etc.)
18:28:22 * dolphm [X] multiplexing
18:28:37 more checkboxes for marketing
18:28:48 nice
18:28:52 for each domain that wants its own backend, you create a 'keystone.<domain>.conf' file that just contains the config overrides for the domain
18:29:00 henrynash, very cool!
18:29:30 so the manager picks up all those files, creates a new conf structure for each one and inits the requested driver with it
18:29:59 …hence the need to be able to create a separate conf structure (which is where we came into this discussion)
18:30:56 one possible use of this pattern in the future is multiple SQL datasources
18:30:56 henrynash, besides LDAP, are there any other use cases for this?
18:31:08 henrynash: as long as you can exploit it with something dumb like POST /v3/domains {"domain": {"name": "/../../etc/passwd; #"}}
18:31:28 gyee: well, yes it isn't constrained to ldap… you could have separate SQL drivers if you wanted to keep data in different DBs per domain
18:31:53 dolphm: not sure I follow
18:32:12 do I have to restart Keystone when I create a domain?
18:32:17 henrynash: you're reading paths off the file system that are provided by API users
18:32:25 henrynash: generally that's not a good idea
18:32:41 henrynash, actually, I was thinking that it would be good to be able to split table spaces on module lines, so, say, tokens could go into a separate RDBMS than policy or something, too. I think you are setting up a pattern, and I want people to validate that.
18:32:42 dolphm, some injection attack?
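[Editor's sketch of the Manager-layer multiplexing henrynash describes above: route each identity call to a domain-specific driver when one was configured, otherwise to a shared default. All class and method names here are illustrative stand-ins, not the code under review in 39530.]

```python
# Each driver exposes the same interface; the manager only decides
# which one handles a given domain.

class SqlDriver:
    name = 'sql'
    def get_user(self, user_id):
        return {'id': user_id, 'backend': self.name}

class LdapDriver:
    def __init__(self, url):
        self.name = 'ldap:%s' % url
    def get_user(self, user_id):
        return {'id': user_id, 'backend': self.name}

class IdentityManager:
    def __init__(self, default_driver, domain_drivers=None):
        self.default = default_driver
        self.domain_drivers = domain_drivers or {}

    def _driver_for(self, domain_id):
        # domains without a dedicated config share the default driver
        return self.domain_drivers.get(domain_id, self.default)

    def get_user(self, domain_id, user_id):
        return self._driver_for(domain_id).get_user(user_id)

manager = IdentityManager(
    SqlDriver(),
    domain_drivers={
        'domainA': LdapDriver('ldap://server1'),
        'domainB': LdapDriver('ldap://server3'),
    })

print(manager.get_user('domainA', 'u1')['backend'])  # ldap:ldap://server1
print(manager.get_user('other', 'u2')['backend'])    # sql
```

In the real design, `domain_drivers` would be populated at manager init by scanning the per-domain config files, each of which yields its own conf structure and driver instance.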
:)
18:32:57 henrynash: if the format was keystone.{domain_id}.conf then the file names would be determined by keystone, and not the API user, and we can all be a lot less paranoid
18:33:29 dolphm: I toyed with whether it should be domain_id or domain_name
18:33:31 henrynash: you're also requiring that domain names be encodable in the constraints of the file system... another problem that system-assigned IDs would solve
18:33:39 keystone.{domain drop table tokens;}.conf
18:33:44 * topol just because you are paranoid doesn't mean they aren't out to get you
18:34:02 dolphm: was just concerned over readability… but I'm OK with IDs
18:34:09 bknudson: yes, that is an issue
18:34:43 bknudson: I was going to have a separate extension that provided a new API call to re-init a domain… and have that called by keystone-manage
18:35:00 henrynash: i certainly understand the readability issue
18:35:05 bknudson: that would be the only extension bit of this…
18:35:06 not a big concern since this requires a config option.
18:35:11 when would re-init get called?
18:35:23 henrynash: init?
18:35:28 topol: so today, it's a keystone restart
18:35:37 yep
18:35:37 dolphm: yes, the manager init
18:35:43 henrynash: oh, to initialize drivers and whatnot
18:35:48 and in the future?
18:36:14 henrynash: normally that wouldn't be done through a web api
18:36:20 topol: so I thought for now we might allow keystone-manage to have a "domain-init" function?
18:36:40 dolphm: open to how best to do that
18:36:41 henrynash, OK
18:37:33 henrynash, more fun when we have the need to support nested domains?
18:37:55 dolphm: on the domain_name vs domain_id issue, I also thought that anyone using external servers like LDAP etc. would likely have good domain names
18:37:57 domain is in the assignments backend. It is OK to modify that to have additional information about the domain's config
18:38:01 nested domains, we have a use case for that???
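[Editor's sketch of dolphm's point above: building config file names from system-assigned domain IDs rather than user-supplied domain names closes the path-traversal hole. The helper and directory below are hypothetical illustrations, not keystone code.]

```python
import os
import re
import uuid

CONFIG_DIR = '/etc/keystone/domains'  # illustrative location

# System-assigned IDs (uuid4 hex) are safe to embed in a file name;
# a user-supplied domain *name* like "/../../etc/passwd; #" is not.
_SAFE_ID = re.compile(r'^[0-9a-f]{32}$')

def domain_config_path(domain_id):
    """Build keystone.{domain_id}.conf, refusing anything that is not
    a keystone-generated ID, so API users can't steer the path."""
    if not _SAFE_ID.match(domain_id):
        raise ValueError('not a system-assigned domain ID: %r' % domain_id)
    return os.path.join(CONFIG_DIR, 'keystone.%s.conf' % domain_id)

good = uuid.uuid4().hex
print(domain_config_path(good))

try:
    domain_config_path('/../../etc/passwd; #')
except ValueError as exc:
    print('rejected:', exc)
```

The trade-off raised in the discussion stands: IDs are less readable for operators than names, but they never need escaping and are always encodable on the file system.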
18:38:02 henrynash: i'd make sure it's an issue with people actually deploying keystone before you pursue some complicated proprietary solution to a problem that doesn't actually exist :(
18:38:03 gyee: hmm, indeed :-)
18:38:25 gyee is instigating
18:38:26 gyee: domains all the way down?
18:38:37 henrynash: restarting a keystone process to pick up new config isn't unreasonable
18:38:47 dolphm +1
18:38:57 dolphm: that's what you have today out of the box
18:38:59 (until someone complains :-) )
18:39:09 topol: +1
18:39:34 #topic open discussion
18:39:49 #link https://launchpad.net/keystone/+milestone/havana-3
18:39:55 Do bug triage
18:40:04 i raised the issue on the agenda of the migration being proposed to fix the credentials index
18:40:07 * dolphm will do
18:40:13 the server never supported paging, so I suggest removing it from the spec: https://review.openstack.org/#/c/39828/
18:40:15 #action dolphm to triage all the bugs
18:40:37 dolphm, regarding OS-EP-FILTER
18:40:39 bknudson: thanks!
18:40:44 #link https://review.openstack.org/#/c/33118/
18:41:00 #link https://review.openstack.org/#/c/40170/
18:41:01 henrynash: that review is mysql-only
18:41:16 dolphm, that is not just for you
18:41:18 bknudson: err, do you mean sqlite?
18:41:27 if migrate_engine.name != 'mysql': return
18:41:43 so it only runs if mysql
18:41:56 ayoung: aww, that'd be appreciated lol
18:41:57 dolphm, suggestion: for all bugs that are new, assign them to someone on core to verify
18:42:00 bknudson: ahh, hmm, I thought it was sqlite that was the problem… oh, well
18:42:12 (that's got to go: if migrate_engine.name != 'mysql': return )
18:42:17 dolphm, once we've verified, mark as verified, and then you can triage
18:42:18 I don't understand why only mysql had this problem.
18:42:20 ayoung: how about subscribing y'all as appropriate?
18:42:22 go away
18:42:28 ayoung: and you assign to yourself
18:42:34 topol: the reason is that I think it's only broken for one DB type
18:42:39 dolphm, I've been grabbing ones
18:42:51 mostly around LDAP and identity
18:43:02 ayoung: i don't want to assign bugs to only core, as i don't want to block non-core from feeling like they can contribute fixes
18:43:16 dolphm, fair enough
18:43:17 ayoung: MUCH appreciated
18:43:26 ayoung: like for serious
18:43:34 henrynash, perhaps a comment then mentioning that
18:43:42 bknudson: I think it was a previous change that removed a constraint, and left an index hanging around… but if that is true for mysql, I agree it should be true for postgres etc.
18:44:08 bknudson: confused face in place
18:44:53 henrynash: once the tests are fixed to verify on all engines, I'll give it a whirl.
18:44:58 dolphm, I think we have 97 bugs open that have no one assigned, if I performed the query correctly
18:45:30 Oh, some of those have fix committed
18:45:35 dolphm: my reason to raise it at all is that we had said "no more migrations"… do we allow this one if we think it is fixing a real issue?
18:45:38 henrynash: still planning on working this bp during m3, or should it be untargeted? https://blueprints.launchpad.net/keystone/+spec/pagination-backend-support
18:45:59 dolphm: I am planning to attack it next week
18:46:04 ayoung: it's still a lot of bugs :(
18:46:10 henrynash: cool, just wanted to check
18:46:25 henrynash: it's our only 'not started' .. no pressure ;)
18:46:35 dolphm: :-)
18:46:50 henrynash: i'm lost on the context of your question about migrations though
18:47:02 dolphm: as soon as i have a little breathing room on the dayjob front, I'll hit some of the bugs i can.
18:47:13 morganfainberg: what's your dayjob, anyway?
18:47:16 * topol my money's on henrynash getting it started :-)
18:47:30 dolphm: writing openstack code internally for my company.
18:47:33 dolphm: I thought we had said (maybe I'm wrong) that we had decided no more sql migrations for H3
18:47:52 https://bugs.launchpad.net/keystone/+bugs?field.searchtext=&orderby=-importance&search=Search&field.status%3Alist=NEW&field.importance%3Alist=UNDECIDED&assignee_option=none&field.assignee=&field.bug_reporter=&field.bug_commenter=&field.subscriber=&field.structural_subscriber=&field.tag=&field.tags_combinator=ANY&field.has_cve.used=&field.omit_dupes.used=&field.omit_dupes=on&field.affects_me.used=&field.has_patch.used=&field.
18:47:53 has_branches.used=&field.has_branches=on&field.has_no_branches.used=&field.has_no_branches=on&field.has_blueprints.used=&field.has_blueprints=on&field.has_no_blueprints.used=&field.has_no_blueprints=on
18:47:54 henrynash: i guess i missed that discussion... why not?
18:47:59 ugh, too long
18:48:00 if there's no new features then there's no new migrations
18:48:08 ayoung: bit.ly?
18:48:15 dolphm, yeah, one sec
18:48:18 bknudson: migrations can fix bugs, though
18:48:22 dolphm: For some reason I thought we were trying to have no DB changes between H2 -> H3
18:48:39 dolphm: sounds like I was imagining this… :-)
18:48:45 http://bit.ly/11KfaLF
18:48:55 bknudson: the ec2 -> credentials migration would be another that wouldn't fit
18:49:03 Those should be the ones no one has looked at yet: new, undecided priority
18:49:05 dolphm: https://review.openstack.org/#/c/38367/
18:49:48 henrynash: sounds like something that might happen by coincidence, but i certainly wouldn't -2 any migration review until icehouse or anything
18:50:11 henrynash: bknudson: check out nachi's review above ^
18:50:21 dolphm, I am lost, are we saying no new migrations till Icehouse?
18:50:28 dolphm, so my thought was that extension migrations would go in their own repos, but for core, we would still allow them in the common repo.
18:50:32 dolphm: Ok, fine… I'll chalk that up to eating too much blue cheese late at night....
18:50:34 gyee: no no, i was asking where that notion came from
18:50:53 gyee: if there was a discussion about it, i missed it is all
18:51:06 dolphm, no, that's why I was confused
18:51:16 i suspect it's something we should take review by review though?
18:51:23 agreed
18:51:26 dolphm, that is why I was pushing for the repo split, to make things clearer. But it looks like we are not all of the same mind there.
18:51:40 I like the repo split
18:52:00 if someone wants to use it they'll be able to once the code's in.
18:52:28 ayoung: i'm not opposed to splitting the repo, i'm mostly playing devil's advocate there / don't see an immediate benefit
18:52:47 ayoung: is the repo split targeted for m3?
18:52:52 some people think there should be no extensions.
18:52:53 bknudson, the one concern that dolphm has voiced that is worth repeating here is that with Alembic, we will end up with multiple steps.
18:53:22 s/immediate benefit/immediate benefit for anyone but us/
18:53:28 heh, we are going to have extensions. jaypipes is actually going to resubmit his "regions" change as an extension, even after that discussion
18:53:39 indeed.
18:53:42 ayoung: he hasn't done that though, and i now see why :P
18:53:43 dolphm, so, my thought was that it lets us split out an extension from the main database. I was thinking
18:53:53 for something like kds, that may not belong in Keystone long term
18:54:03 jaypipes, you decided that after a couple of drinks? :)
18:54:03 i assume he's philosophically opposed to authoring an api extension :)
18:54:31 dolphm, no, he is philosophically opposed to wasting time when a core dev decides to roadblock
18:54:34 "couple" -- you are being generous :-)
18:54:36 gyee: no, I was always willing to do what was asked of me... doesn't mean I can't debate it in public though ;)
18:55:12 * dolphm <3 open source community spirit
18:55:48 dolphm, so I see the repo split as the logical result of the focus on extensions.
It lets us keep separate concerns separate
18:56:13 and, if we decide that something should be spun off into its own server, we have a way of deploying just that extension... sort of.
18:56:31 ayoung, speaking of that, shouldn't we be concerned about the repo split with henry's separate-driver-per-domain thingy?
18:56:33 It means that the changes for that extension are not intertwined with the migrations for unrelated code.
18:56:45 gyee, nope. Doesn't affect it
18:56:58 gyee: yes, interesting, how to keystone-manage db_sync with multiple sql backends?
18:57:01 gyee, his is, right now, only LDAP
18:57:09 ayoung: agree, that's certainly a benefit for devs
18:57:26 ayoung: i don't want to inconvenience deployers at all in the process though
18:57:30 dolphm, I was thinking also that we could enable extensions in the future
18:57:36 ayoung: when they don't get anything out of it
18:57:44 bknudson, especially if we are using different backends per domain, we have to make sure db_sync works correctly
18:57:51 so, say KDS becomes a long-term supported extension, we make it a default migration when you run db_sync
18:58:02 gyee: eek, hadn't considered that
18:58:03 right now, there are no default migrations, but it doesn't have to stay that way
18:58:06 gyee: or at least document what you need to do.
18:58:32 ayoung: what do you mean by default migrations?
18:58:49 gyee, if we have that, each would have a migrate_version table (or the alembic equivalent) and we would be able to query it to see what it supported
18:59:03 db_sync --domain xxxx
18:59:05 dolphm, as of the latest patch db_sync now only runs through what is in common
18:59:15 dolphm, and you suggested that it run through everything
18:59:21 I mean it could get ugly if we are not careful
18:59:21 I am thinking of a middle ground
18:59:33 it will run through common, and a list of default supported extensions
19:00:02 so, in icehouse, if we make an extension supported by default, we will run through its migrations when db_sync is run with no parameters
19:00:16 dolphm, this way, an extension is really 0 impact if it is not enabled
19:00:43 #endmeeting