18:00:08 #startmeeting keystone
18:00:09 Meeting started Tue Mar 21 18:00:08 2017 UTC and is due to finish in 60 minutes. The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:12 The meeting name has been set to 'keystone'
18:00:20 o/
18:00:23 * thingee runs
18:00:23 o/
18:00:24 o/
18:00:24 ping agrebennikov, amakarov, annakoppad, antwash, ayoung, bknudson, breton, browne, chrisplo, cmurphy, davechen, dolphm, dstanek, edmondsw, edtubill, gagehugo, henrynash, hrybacki, jamielennox, jaugustine, jgrassler, knikolla, lamt, lbragstad, kbaikov, ktychkova, morgan, nishaYadav, nkinder, notmorgan, portdirect, raildo, ravelar, rderose, rodrigods, roxanaghe, samueldmq, SamYaple, shaleh, spilla, srwilkers,
18:00:25 StefanPaetowJisc, stevemar, topol, shardy, ricolin
18:00:31 o/
18:00:32 agenda #link https://etherpad.openstack.org/p/keystone-weekly-meeting
18:00:52 o/
18:01:15 o/
18:01:20 ehlo
18:01:58 maybe next week we will start doing roll call
18:02:24 yes, that would be useful
18:02:37 i think the list will be half as long after that
18:02:38 and give people an opportunity to opt into the ping list again
18:02:40 o/
18:03:01 * lbragstad makes a note
18:03:23 alright - we have a lot to do today so let's go ahead and get started
18:03:26 i bet you can just look at the speakers in the last 10 meetings
18:03:38 #topic Pike goals: deploy in wsgi
18:03:54 so far there are two community goals that have been accepted for Pike
18:04:06 the first one is deploy in wsgi - which we already support
18:04:09 so that's good
18:04:09 #link https://review.openstack.org/#/c/440840/
18:04:33 #topic Pike goals: python3.5 support
18:04:39 the second is supporting python 3.5 which we already do as well
18:04:40 #link https://governance.openstack.org/tc/goals/pike/python35.html
18:04:57 except python-memcached*
18:04:58 great :)
18:05:15 sort of...sometimes...who knows
18:05:31 notmorgan that's the only case where we don't support py3 deployments, right?
18:06:06 yeah, well also mod_wsgi is weird with py3
18:06:13 but you can make it work...ish
18:06:18 (and i think that would be mitigated after/if we move to pymemcached)
18:06:20 python-memcache has a py3 version....but i think we just don't like it
18:06:24 python3-memcached.noarch
18:06:35 dstanek: it has been spotty in actually working
18:06:42 Summary : Pure python3 memcached client
18:06:42 URL : https://github.com/eguven/python3-memcached
18:06:54 notmorgan: i use it all the time outside of keystone without any issues
18:06:55 behaves badly at times then gets fixed...then doesn't...
18:07:07 basically move to pymemcache
18:07:10 looks to be actively worked on
18:07:10 o/
18:07:18 ayoung: not really
18:07:32 I've talked with the maintainer, he has next to no time on it
18:07:41 https://github.com/eguven/python3-memcached/commit/5539869760b2c13ddf0820df66b97cbb72f99043
18:07:45 I tried to take it over 3 different times
18:07:50 https://github.com/linsomniac/python-memcached
18:08:04 yeah linsomniac has no time
18:08:15 Which was last touched in December
18:08:16 joy
18:08:22 not a bad dude, just overwhelmed
18:08:25 with other work
18:08:37 that happens
18:08:42 pymemcached is maintained by Pinterest
18:08:46 but should it stop us from asserting that pike goal?
18:08:58 or do we need to move to pymemcached?
18:09:07 we should move to pymemcached
18:09:11 or use the goal as an excuse to move to pymemcached this release?
18:09:23 use the goal as the excuse
18:09:26 :)
18:09:31 fair enough
18:09:33 Need to get it into the distributions. Does Debian support it?
18:09:50 if we want it then we should create a spec. it doesn't change the fact that we already assert py3 support
18:09:50 iirc, everyone but fedora
18:10:12 we support py3, except in some cases with other deps
18:10:33 Should not be hard to get the package accepted then
18:11:01 * notmorgan will brb, coffees
18:11:20 https://admin.fedoraproject.org/pkgdb/package/rpms/python-pymemcache/ Fedora has it, too
18:11:51 dstanek well - it would prevent keystone from running in specific python3 environments, so i'm wondering if that is enough to keep us from asserting the goal?
18:12:25 lbragstad: python-memcached is a packaging issue. there is no work for us there.
18:12:27 ayoung: cool.
18:12:30 Yeah we have python3-pymemcache.noarch
18:12:32 or if the goal is only specific to keystone source?
18:12:39 it's just not memcached itself, only the client
18:12:45 dstanek: we need a driver for oslo.cache and a fix in ksm
18:12:49 dstanek: not just packaging
18:12:49 i can't speak to any mod_wsgi issues, but uwsgi is fine with py3
18:13:02 https://koji.fedoraproject.org/koji/packageinfo?packageID=19104
18:13:08 uwsgi works fine, mod_wsgi is odd, but works, you need to be very specific about how you configure it
18:13:15 notmorgan: a driver to use python3-memcached?
18:13:15 oh - i was under the assumption that parts of python-memcached weren't py3 compatible
18:13:20 it assumes py2 (iirc)
18:13:28 So mod_wsgi is the C code part
18:13:29 lbragstad: python-memcache behaves weirdly in py3
18:13:33 ayoung: yes
18:13:44 but you can specify a different interpreter
18:13:50 and it has always seemed to work
18:13:54 uwsgi doesn't care.
18:13:58 and works perfectly
18:14:14 but by default py2 is what mod_wsgi uses
18:14:54 ok - so what i'm hearing is that we should be good to move forward with asserting that goal is completed for keystone
18:15:09 yes.
18:15:19 ok - but we should move at some point
18:15:24 but we *should* ensure we move to pymemcache this cycle if at all possible
18:15:26 s/move/move to pymemcached/
18:15:32 ok
18:15:42 is anyone interested in helping dig into that work
18:15:47 and make sure we have mod_wsgi for keystone configured to run in py3 mode in gate at least for some tests
18:16:05 python3-mod_wsgi
18:16:13 it's a different package in Fedora
18:16:23 the work is 1) write a dogpile/oslo.cache driver, 2) convert ksm to use pymemcached instead of python-memcached *or* to use oslo.cache
18:16:43 3) default keystone to use the new oslo.cache driver instead of the python-memcache one
18:16:47 all should be very easy to do
18:17:13 i can only imagine that a spec is required for #2
18:17:14 (you might need a minor layer of conversion, since pymemcache does not support the same interfaces as python-memcached)
18:17:21 lbragstad: nah.
18:17:29 just do it as a straight up conversion.
18:17:42 this, imo, is a bug if we do a conversion to pymemcache
18:17:46 a spec if we move to oslo.cache
18:18:15 a spec if we move ksm to use oslo.cache?
18:18:20 yeah
18:18:24 because that is a much bigger change
18:18:33 a lot of "setup the region" etc
18:18:37 same things we do in keystone
18:18:53 where direct use of pymemcache is "create translation and instantiate correct object"
18:19:04 no functional changes, no option changes, etc
18:19:11 oslo.cache is a *much* bigger change
18:19:24 notmorgan is there a noticeable advantage to one approach over the other?
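[Editorial note: the "minor layer of conversion" mentioned above is the crux of the drop-in pymemcache option; the trade-off against oslo.cache continues just below. The following is a minimal sketch of what such a translation object might look like. The class name, method set, and single-server handling are hypothetical; only the pymemcache calls themselves (pymemcache.client.base.Client with serializer/deserializer, and get/set/delete) are real library API.]

```python
# Hypothetical shim exposing the subset of the python-memcached interface
# that keystonemiddleware relies on, backed by pymemcache. A sketch only;
# a real conversion would need to match ksm's serialization, multi-server
# handling, and error behavior.
import pickle

from pymemcache.client import base


def _serialize(key, value):
    # pymemcache hands us (key, value) and expects (bytes, flags) back.
    if isinstance(value, bytes):
        return value, 1
    return pickle.dumps(value), 2


def _deserialize(key, value, flags):
    if flags == 1:
        return value
    return pickle.loads(value)


class PyMemcacheCompat(object):
    """Speak python-memcached on the outside, use pymemcache underneath."""

    def __init__(self, servers):
        # python-memcached takes ['host:port', ...]; pymemcache wants a
        # (host, port) tuple. Only the first server is used in this sketch.
        host, port = servers[0].split(':')
        self._client = base.Client((host, int(port)),
                                   serializer=_serialize,
                                   deserializer=_deserialize)

    def get(self, key):
        return self._client.get(key)

    def set(self, key, value, time=0):
        # python-memcached calls the TTL 'time'; pymemcache calls it 'expire'.
        return self._client.set(key, value, expire=time)

    def delete(self, key):
        return self._client.delete(key)
```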
18:19:50 oslo.cache might impact swift more
18:19:53 i could see having everything using oslo.cache consistently being an advantage of simplicity
18:20:01 pymemcache, if you use a simple translation object, would have no impact
18:20:08 oslo.cache is more consistent with *all* of openstack
18:20:18 lbragstad: ++
18:20:32 those are the two sides of the concerns
18:20:34 i prefer oslo.cache
18:20:44 i think pymemcache might be *way* simpler to just drop in
18:20:49 ok - i can work on drafting a spec that details the work and at least propose it for review
18:20:53 depending on the amount of time folks have to dedicate to it
18:21:28 we use python-memcache directly not because we wanted to limit dependencies right?
18:23:30 dstanek or was it because keystonemiddleware's caching implementation pre-dated oslo.cache?
18:23:56 lbragstad: someone told me way back when that it was a dep thing. not sure who though
18:23:57 lbragstad: that's definitely true, but i don't know if it's the *because*
18:24:25 dolphm me either
18:24:31 notmorgan do you know?
18:25:41 we can circle back on this, too
18:25:57 wha... https://review.openstack.org/#/c/268662/
18:26:28 dstanek, yes?
18:26:31 dstanek huh
18:26:47 dstanek looks like jamie was at least planning on moving to oslo.cache
18:26:59 perpetually
18:27:07 that's a good enough answer for me
18:27:20 i can attempt to document this in a keystonemiddleware spec
18:27:49 #action lbragstad to propose spec to keystonemiddleware detailing the steps required to move to oslo.cache
18:28:32 #topic Boston Forum Brainstorming
18:28:37 lbragstad: ++
18:28:43 who all is planning on going to the forum?
18:28:56 lbragstad: there is a linked bug that you touched last too
18:29:01 apparently there are planning sessions and we have a deadline of April 2 to have things submitted
18:29:13 i'm planning on going. just have to work out parking.
18:29:34 #link https://etherpad.openstack.org/p/BOS-Keystone-brainstorming
18:29:40 I might be going
18:29:59 * cmurphy likely going
18:30:05 i just need a rough idea of what our attendance might look like
18:30:09 i'm going
18:30:12 and possible topics
18:30:13 I'll be there
18:30:43 it's a 15 min walk from my office
18:30:48 knikolla nice
18:30:59 i'm planning on organizing this just like we did for the PTG
18:31:10 lbragstad: are you planning on doing keystone sessions?
18:31:12 just start dumping information in the etherpad and i'll go through and organize it
18:31:21 dstanek that's what i'm trying to figure out
18:31:38 given a list of topics, i'll try and organize them into buckets and propose them
18:31:55 lbragstad: i probably won't be in many of those. i'm planning on going to presentations and hallway talking about openstack
18:32:18 and since the deadline is within a couple weeks - it would be nice if everyone threw their ideas down sooner rather than later
18:32:27 I'm presenting on the RBAC proposal, along with knikolla
18:33:00 i was under the assumption that the forum was going to be tailored for operator feedback, so i wasn't expecting to have to organize many keystone specific dev sessions
18:33:04 Hope to have a demo ready for then
18:33:44 lbragstad: as in having a keystone "room" for operators to visit?
18:34:03 dstanek yeah - i was for sure going to try and get that
18:34:17 Will also plan on having a Climbing gym night.
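[Editorial note on the caching thread above, before the forum discussion continues: the "setup the region" work that makes the oslo.cache route heavier is roughly the boilerplate sketched below, mirroring the pattern keystone itself uses with oslo.cache. The backend choice, option values, and cached data are illustrative only, not what a keystonemiddleware spec would mandate.]

```python
# Rough sketch of the "setup the region" boilerplate implied by the
# oslo.cache route. Option values and the cached payload are illustrative.
from oslo_cache import core as cache
from oslo_config import cfg

CONF = cfg.CONF

# Register oslo.cache's [cache] options and build a dogpile.cache region.
cache.configure(CONF)
region = cache.create_region()

# In a real service this happens once at startup, after config is loaded;
# the overrides below stand in for keystone.conf / middleware paste options.
CONF([], project='example')
CONF.set_override('enabled', True, group='cache')
CONF.set_override('backend', 'dogpile.cache.memory', group='cache')
cache.configure_cache_region(CONF, region)

# The configured region then replaces the hand-rolled memcache client.
region.set('token-123', {'project_id': 'abc'})
print(region.get('token-123'))
```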
18:34:33 it seems like a lot of other projects are planning on having dev-like discussions (like the PTG)
18:35:04 i'm actually hoping to learn more about the rest of the ecosystem
18:36:11 ok - maybe i'll swing by the release room and ask specifics in there
18:36:36 because I'm kind of unsure what to plan for based on who is going to be there in comparison to past summits
18:36:59 regardless, if there are things you want to talk about at the forum, please feel free to add them to the etherpad #link https://etherpad.openstack.org/p/BOS-Keystone-brainstorming
18:37:52 anyone have any questions on the forum that they want me to relay?
18:38:02 dstanek: in ksm, we use python-memcached because swift passes us an object in some cases, and 2) history (not dep related)
18:38:10 sorry for the delay
18:38:23 notmorgan no worries, thanks for the update
18:38:33 * notmorgan is chatting with landlord about dishwasher and potential "giving up the ghost" issues.
18:38:41 (henry joined, sorry to be most unfashionably late)
18:38:52 notmorgan: thanks. i wish i could remember who told me it was a dep issue
18:38:53 henrynash! You coming to Boston?
18:39:01 no, unfortunately not
18:39:15 if there aren't any more questions specific to the forum we can move on
18:39:17 henrynash o/
18:39:28 #topic VMT Update: keystonemiddleware diagram and docs
18:39:30 knikolla gagehugo
18:39:36 dstanek: in ksa it would be a dep issue
18:39:50 but we don't cache there
18:40:00 wip draft review is here: https://review.openstack.org/#/c/447139/
18:40:16 which has the updated arch diagram
18:40:36 knikolla gagehugo you two just need reviews on #link https://review.openstack.org/#/c/447139/ ?
18:40:43 as of right now?
18:41:10 yeah, but those docs still need to be filled in more
18:41:19 https://review.openstack.org/#/c/447139/2/doc/source/artifacts/keystonemiddleware/pike/figures/keystonemiddleware_architecture-diagram.png,unified
18:41:26 yeah - i parsed it a bit
18:41:31 gagehugo i need to review it again
18:42:03 um... knikolla isn't the fetch from Memcache prior to the call to keystone?
18:42:04 gagehugo knikolla anything else VMT related?
18:42:32 ayoung: nice catch.
18:42:37 ah yeah
18:42:59 I thought we stored the token validations in memcache, so why else would we be doing memcache stuff if not to see if we have a valid token? Is there another reason?
18:43:29 i'll double check to make sure, but it would make sense that memcache is checked first.
18:43:45 ayoung I probably just put the numbers in the wrong order
18:43:50 Do we also pass on the memcache key for the service to use later on?
18:44:39 ayoung: i do not think so. will check that too, see if they share config sections for that.
18:44:47 so, it should be steps 3, then 4, then 2, with something to indicate that step 2 depends on there being no response in step 4
18:45:06 sounds like we can keep this going in the review
18:45:27 yeah
18:45:33 ok
18:45:42 gagehugo knikolla ayoung thanks!
18:45:49 #topic pike specs
18:45:55 #info 3.5 weeks until Spec Proposal Freeze
18:46:01 #info 11 weeks until Spec Freeze
18:46:07 RBAC in middleware should be there
18:46:17 it's already approved, but in future state
18:46:18 lets spend the last 15 minutes on spec
18:46:22 specs*
18:46:30 ayoung i think it's in ongoing
18:46:37 #topic pike specs: Project Tags
18:46:43 gagehugo o/
18:46:50 o/
18:46:52 #link https://review.openstack.org/#/c/431785/
18:46:56 this one is looking good
18:47:01 lbragstad, right, and knikolla is picking up active development of the server side piece. It was working in Nov, but then has lain fallow
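[Editorial aside on the diagram-ordering question from the VMT discussion above (memcache checked before keystone is called, step 2 only on a miss in step 4): a simplified, hypothetical rendering of that flow. The class and method names are illustrative, not keystonemiddleware's actual internals.]

```python
# Simplified, hypothetical sketch of the token validation order discussed in
# the review: memcache is consulted first, and keystone is only contacted
# when the cache has no answer. Not ksm's real code.
class TokenValidator(object):
    def __init__(self, cache, identity_server):
        self._cache = cache                       # e.g. a memcache client
        self._identity_server = identity_server   # talks to keystone

    def validate(self, token_id):
        # The memcache lookup ("step 4" in the review discussion).
        data = self._cache.get(token_id)
        if data is not None:
            return data

        # Only on a cache miss do we call keystone ("step 2").
        data = self._identity_server.verify_token(token_id)

        # Cache the validation result for subsequent requests.
        self._cache.set(token_id, data)
        return data
```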
18:47:05 i only had a couple last minute/minor questions
18:47:26 I am fine with limiting tags by # per request
18:47:36 I think I am actually OK with this. I've seen how Kubernetes uses labels, and I think this will be done in somewhat the same way
18:47:40 gagehugo do we have an idea of what that number should be?
18:47:58 the real question is priority:
18:48:01 nova uses 50 for instances
18:48:19 gagehugo they limit the total number of tags an instance can have to 50?
18:48:33 no that was per request
18:48:37 from that schema
18:48:38 IE: who can set the tag. And I think that was the issue we had last time. I could see a scalability issue with number.
18:48:38 i.e. bulk tag options are limited to 50 as a part of API validation?
18:48:58 ayoung yeah - that was something edleafe was describing to me the other day
18:49:04 https://specs.openstack.org/openstack/nova-specs/specs/kilo/approved/tag-instances.html#rest-api-impact
18:49:13 it's better to have a bunch of smaller tags than to have one *massive* tag
18:49:19 cuz I think the way that you explained it to me before it was a huge gaping security hole
18:49:33 which begs another question, do we want to do more strict validation on the length/size of individual tags?
18:49:48 yes. yes yes
18:49:54 strictissississimo
18:50:07 make sure they are URL safe
18:50:14 sure
18:50:21 i think that's a requirement of the API WG
18:50:32 er, a guideline that they suggest for tag implementation
18:50:39 implementations*
18:50:50 gagehugo, BTW... you do realize that this is going to be a security nightmare, right?
18:51:10 people are going to want to be able to tag their own projects, but that will get in the way of the "official" tags
18:51:19 which will have security/billing info implications
18:51:24 ayoung right now only a project admin is going to be able to modify tags
18:51:39 lbragstad, define "project admin"
18:51:50 "god-mode" admin
18:52:04 whatever admin we use in the rest of our policy file
18:52:09 lbragstad, admin and is_admin_project=True?
18:52:28 lbragstad, gagehugo what if....
18:52:34 2 distinct entities
18:52:49 one which is the labels that a user can put on their own resources, the other which are admin only
18:52:52 ayoung it should require the same rules as https://github.com/openstack/keystone/blob/master/etc/policy.json#L40-L41
18:53:11 at least until we have a better way to define granularity in policy that allows us to get around the security concerns
18:53:14 lbragstad, three little words:
18:53:17 hierarchical
18:53:18 multi
18:53:22 tenancy
18:53:26 sure
18:53:32 all the same issues come up here
18:53:45 I expect that to make this more complicated
18:53:46 a global set of tags is going to be a nuisance
18:54:06 lbragstad, I wonder if tags should be scoped to domains.
18:54:08 lets ask henrynash
18:54:13 hmm
18:54:15 henrynash, should tags be scoped to domains?
18:54:23 we're going to run into similar permission issues with the limits proposal
18:54:26 hmm
18:54:48 on tags my cut feel is probably yes
18:54:52 gut
18:55:15 i don't know how you'd do that
18:55:23 dstanek, me either
18:55:37 if i want to see all the projects that i have access to that are tagged with 'dev' - which dev?
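[Editorial note making the validation points above concrete (per-request cap, length limits, URL-safe characters): a hypothetical request-body schema for bulk project tag updates. The 50-item cap follows the nova precedent cited above; the length bound and character pattern are illustrative guesses, not what the keystone spec settled on.]

```python
# Hypothetical schema sketching the validation discussed above: cap the
# number of tags per request, bound individual tag length, and keep tags
# URL-safe. Numbers and pattern are illustrative only.
import jsonschema

project_tags_schema = {
    'type': 'object',
    'properties': {
        'tags': {
            'type': 'array',
            'maxItems': 50,          # per-request cap, following nova
            'items': {
                'type': 'string',
                'minLength': 1,
                'maxLength': 255,    # illustrative upper bound
                # URL-safe-ish: unreserved characters plus ':' only.
                'pattern': '^[A-Za-z0-9._~:-]+$',
            },
        },
    },
    'required': ['tags'],
    'additionalProperties': False,
}

# Example: this body passes validation...
jsonschema.validate({'tags': ['dev', 'billing:gold']}, project_tags_schema)
# ...while a tag containing '/' or '?' would raise jsonschema.ValidationError.
```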
18:55:47 gagehugo, lets plan on a pretty heavy brainstorming session on this at the summit
18:55:58 ayoung: ok
18:56:08 to me tags are really an arbitrary string added by the user that has permission to modify the resource
18:56:21 dstanek ++
18:56:38 dstanek: ++ that is the intended goal
18:56:42 dstanek, if the tag is used for some billing purposes, then you want to limit who can add that tag to any resource
18:56:45 or remove it
18:56:45 that's how i was thinking of it
18:57:21 so, if the end goal is to be able to tag all "high-cost" projects you can't do that without removing the ability to set tags from the project admin
18:57:38 i would think that would be up to the deployer to make sure if they are using billing tags that they control who has that access
18:58:01 lbragstad: how would they do that?
18:58:03 so, if you want tags as a way to be able to self organize, that is a very different use case than the "tag all projects across domains for billing purposes"
18:58:23 dstanek currently our project api requires an admin
18:58:32 ayoung: ++
18:58:39 dstanek i'd keep the tags api for projects consistent with that
18:58:57 lbragstad: but you could have domain admins be able to edit and then there's a security hole for the billing usecase
18:59:47 hmmm...i have to think about this a little more. i just got done reading the spec and was pretty happy
18:59:47 dstanek can domain admins have the admin role?
18:59:53 #link https://github.com/openstack/keystone/blob/master/etc/policy.json#L40-L41
19:00:05 not my topic today again, eh?
19:00:21 lbragstad: that is our default policy and not the policy everyone necessarily uses
19:00:23 Time's up
19:00:33 breton wanna talk about it in -keystone?
19:00:34 dstanek right
19:00:36 #endmeeting
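[Closing editorial note on the split floated above (self-service labels vs. admin-only "official"/billing tags): if that idea were expressed with oslo.policy defaults, it might look something like the sketch below. The rule names, targets, and check strings are entirely hypothetical; keystone's default policy at the time was simply the flat admin_required rule linked above for project updates.]

```python
# Entirely hypothetical sketch of the two-tier idea discussed above,
# expressed as oslo.policy defaults. 'admin_required' is assumed to be
# defined elsewhere, as it is in keystone's policy file.
from oslo_policy import policy

tag_policies = [
    # Self-service tags: anyone with a management role on the project itself.
    policy.RuleDefault(
        name='identity:update_project_tags',
        check_str='rule:admin_required or '
                  '(role:project_admin and project_id:%(target.project.id)s)',
        description='Add or remove self-service tags on a project.'),
    # "Official" tags (billing, security classification): cloud admin only.
    policy.RuleDefault(
        name='identity:update_project_official_tags',
        check_str='rule:admin_required',
        description='Add or remove deployer-controlled tags on a project.'),
]
```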