21:02:51 #startmeeting crossproject
21:02:52 \o
21:02:52 o/
21:02:52 Meeting started Tue Jun 16 21:02:51 2015 UTC and is due to finish in 60 minutes. The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:02:52 o/
21:02:53 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:02:56 The meeting name has been set to 'crossproject'
21:02:56 Today's agenda:
21:03:00 #link http://wiki.openstack.org/Meetings/CrossProjectMeeting
21:03:06 hi
21:03:10 o/
21:03:10 stevemar: ^^
21:03:12 #topic Horizontal teams announcements
21:03:14 o/
21:03:16 o/
21:03:21 On the release management front, liberty-1 is next week
21:03:22 o/
21:03:33 #info for projects we manage that are using development milestones, we expect the PTL (or release liaison) to show up next Tuesday to sync with us during office hours
21:03:40 0800-1000 UTC or 1800-2000 UTC in #openstack-relmgt-office
21:03:51 That will let us double-check implemented blueprints and discuss when to tag
21:03:58 Questions on that ?
21:04:10 o/
21:04:26 o/
21:04:30 nope
21:04:30 #info QA: grenade external plugins open for business in the big tent - http://lists.openstack.org/pipermail/openstack-dev/2015-June/066583.html
21:04:58 I brought that up previously in this meeting, just wanted to make sure teams saw that the information is out there on how to get started
21:05:49 sdague: very cool — thanks for the info.
21:05:57 * jroll arrives a bit late
21:06:04 ttx: hi=)
21:06:18 if there are any questions, now is good, or put them on the list
21:06:25 I know at least a couple of people on trove who were previously working on a grenade job — will pass the info along to make sure they see it.
21:06:45 #info Oslo: Request from Oslo team for Liberty Cycle - http://lists.openstack.org/pipermail/openstack-dev/2015-June/067131.html
21:07:11 A handful of items for projects that use oslo, please take a look
21:08:24 Other horizontal teams announcements ?
21:08:24 * dims not sure if this was vertical or horizontal :)
21:08:31 It is HORIZONTAL
21:08:38 :)
21:08:48 o/
21:08:50 dims: that's a good tick list, appreciated!
21:08:52 I think I may merge the two sections for clarity :)
21:09:05 dims: oslo is a diagonal effort
21:09:35 haha
21:09:37 (i have no idea what that means, but it sounds cool)
21:09:42 Rockyg: is that an "I have an announcement" or an "I'm here" sort of o/ ?
21:09:44 o/
21:09:47 thanks johnthetubaguy
21:09:59 dims: ++ will follow up.
21:10:05 thanks SlickNik
21:10:08 * edleafe is just saying hi
21:10:13 * Rockyg is lurking, but here in support of API standards
21:10:23 #topic Server versioning changes (dhellmann)
21:10:28 #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/067006.html
21:10:33 dhellmann: ohai
21:10:49 here!
21:10:57 As we have discussed a couple of times, we’re switching the server projects to semver versioning.
21:11:01 #link http://lists.openstack.org/pipermail/openstack-dev/2015-May/065211.html
21:11:05 #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/thread.html#65278
21:11:13 I have submitted patches to all of the projects managed by the release team:
21:11:15 #link https://review.openstack.org/#/q/topic:semver-releases,n,z
21:11:16 If you agree with the numbering scheme, we need those to land before the L-1 milestone next week.
21:11:17 They’re failing right now because the versions go backwards. I’ll be pushing alpha tags to correct that tomorrow.
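A minimal sketch of the ordering problem dhellmann describes, using the Python packaging library (the actual gate validation tooling may differ):

```python
# Under PEP 440 ordering, the new semver numbers sort *before* the
# old date-based ones, so the proposed versions appear to go backwards.
from packaging.version import Version

print(Version("12.0.0") > Version("2015.1.0"))    # False: "backwards"

# A pre-release tag such as 12.0.0.0a1 still sorts below its final
# release, which is why alpha tags can be pushed as an intermediate step:
print(Version("12.0.0.0a1") < Version("12.0.0"))  # True
```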
21:11:34 Does anyone have questions about the versions proposed, or anything else related to this change?
21:11:53 what is a non-backwards compat change (requires major version bump) for server projects? The interface to keystone is, for example, keystone-manage.
21:12:01 The REST API has its own versioning (V2.0, V3)
21:12:10 yeah, some of that is still up for discussion
21:12:31 some proposals have been when migrations are squashed, meaning that upgrades have to pass through that version
21:12:40 bknudson: indeed; I'd argue any of: removing something from the API; changing the contract of the CLI, including adding new required config options or removing config options
21:12:44 also we've discussed tagging a new major version each cycle, just because
21:13:12 bknudson: or more broadly 'if users or deployers need to care, then it's incompatible and we should signal that'
21:13:20 config file changes would be a backward compatibility concern
21:13:26 note that for projects under the development-milestone regime, they will just use X.0.0 like they used YYYY.Z.0
21:13:40 dhellmann: upgrade is the big one for me with my Nova hat on, we have lots of compat code for live-upgrades we want to keep dropping each release
21:13:43 they would not switch to semver
21:14:05 that is more for projects that would do intermediary releases
21:14:06 johnthetubaguy: yes, right
21:14:09 we bumped the major version in swift when we made a major change that would cause data unavailability if you downgraded
21:14:13 dhellmann: but we have a few months to answer that question, luckily
21:14:21 notmyname: another good one
21:15:09 dhellmann: anything else on that topic ?
21:15:14 so we'll figure those things out when the time comes, and I'll keep notes from all of these ideas
21:15:18 nope, that's it unless there are more questions
21:15:32 one item which has come up is that we've got recent-ish security advisories mentioning upcoming scheduled releases explicitly. we'll need errata to correct all those if the renumbering implementation is not delayed to after the next scheduled releases
21:16:09 mainly wanting to confirm how soon this is planned for the api server projects so that we can plan accordingly
21:16:10 fungi: our plan is to have those in place to tag X.0.0b1 instead of 2015.2.0b1 at liberty-1
21:16:46 okay, so we're not changing tag sequence on any existing stable branches
21:16:53 not at all
21:16:54 no, this is just for liberty and forward
21:17:10 that covers my concerns. thanks!
21:17:13 kilo would likely still generate 2015.1.Z
21:17:27 has to
21:17:31 doing otherwise would be even more confusing
21:17:43 yep, that makes sense. thanks
21:17:43 ++
21:17:47 alright, moving on then
21:17:50 oh, I did also post about this to the operators list
21:17:50 #topic Clarification on the return code when a server has a hard-coded length limit
21:17:52 #link http://lists.openstack.org/pipermail/openstack-operators/2015-June/007390.html
21:17:57 oops
21:18:04 The API working group has one new guideline that is entering the freeze period:
21:18:07 sorry about that
21:18:08 #link https://review.openstack.org/#/c/181784/
21:18:19 They would like PTLs/CPLs, and other interested parties, to take a look at these reviews.
21:18:27 They will use lazy consensus at the end of the freeze to merge them if there are no objections.
21:18:38 * nikhil_k giving one review
21:18:43 We can use a bit of time in this meeting to discuss it, if you have comments
21:18:51 etoews: you there ?
21:18:52 i'm all ears
21:19:33 shouldn't it be 413?
21:20:34 nikhil_k: that is if the request entity is too large, not the response
21:20:37 might want to differentiate it from 413 Request Entity Too Large and 414 Request-URI Too Long
21:21:16 morganfainberg: ++
21:21:18 nikhil_k: the advisory for the WG proposal is saying if the response is too large - you're getting a 400 back, not a request or uri too long
21:21:21 Hrm. And 416 is bytes only.
21:21:24 krotscheck: yep
21:21:35 krotscheck: only on a range request
21:21:50 this is either a 500 (bad idea) or a 400 (more correct)
21:21:59 right, 400 is fine
21:22:01 morganfainberg: Well, it's technically a range request, it's just not a range of bytes #semantics
21:22:05 indeed
21:22:21 spec says: If API limits the length of collection type property, the return code should be **400 Bad Request** when the request exceeds the length limit.
21:22:22 wait, where are we seeing this is about range requests
21:22:23 Doesn't this imply that the request is too large: "If API limits the length of collection type property, the return code should be **400 Bad Request** when the request exceeds the length limit."
21:22:25 there's very little benefit in some of these fine-grained codes
21:22:29 looks to me like... what nikhil_k said.
21:22:44 just wondering - do we only care about the response code and not about any extra info that should be returned?
21:23:04 so we return 400 but seems like there should also be some info that allows the client to recover
21:23:08 (ignore me, I'm misreading things)
21:23:14 they don't know which field was too long otherwise
21:23:31 or maybe you don't want to tell them for security reasons
21:23:45 bknudson: this guideline only covers the status code
21:23:54 all HTTP errors are meant to come with a body that describes the error in as much detail as the server wants to
21:24:06 there's another guideline for errors
21:24:09 the status code is purely for programmatic flow control on the client
21:24:16 bknudson: i want to say we need to address that second part independently of the status code. what should the extra data returned be? it could be combined or be separate
21:24:17 this is an error
21:24:19 lifeless: ++
21:24:22 it's not a 500
21:24:27 i should say the error format in a response body
21:24:39 400 is less confusing than overloading some special value wrongly, which seemed to be the general idea we are heading towards, with more details in the body of the response
21:24:42 so, we've specified the status code, now we need to handle the other data (lifeless ++)
21:24:51 right; my point is just that 413 and 414 won't help a client go 'oh too many tags'
21:24:55 500 = server error, so let's rule that out, it's a client-based error
21:24:58 johnthetubaguy: ++
21:24:59 johnthetubaguy: I agree ++
21:25:01 and so 400 is entirely appropriate
21:25:01 lifeless: +1
21:25:47 Anyway, feels like you can comment (or +1) on the review
21:25:56 So this API guideline tweak is entirely correct AFAICT. It might not go far enough, but as etoews says there are guidelines already for the body in this case.
21:25:59 this is talking about the # of elements in a collection?
21:26:01 for reference, here's the guideline for errors #link https://review.openstack.org/#/c/167793/
21:26:04 and not the length of a string or something?
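To make the guideline concrete, here is a hedged sketch of a handler enforcing a collection length limit with a 400 and a descriptive error body, per the discussion above. It is not taken from the review; MAX_TAGS and the error-body shape are illustrative.

```python
import json

MAX_TAGS = 50  # hypothetical server-side limit on the collection


def put_tags(request_body):
    """Sketch of the guideline: exceeding a collection length limit is a
    client error, so return 400 Bad Request (not 413/414, which describe
    the request entity size or URI length, and not 500, a server error),
    with a body telling the client how to recover."""
    tags = json.loads(request_body).get("tags", [])
    if len(tags) > MAX_TAGS:
        error = {
            "code": 400,
            "title": "Bad Request",
            "detail": "at most %d tags are allowed, %d were provided"
                      % (MAX_TAGS, len(tags)),
        }
        return 400, json.dumps({"errors": [error]})
    # ... store the tags ...
    return 200, json.dumps({"tags": tags})


print(put_tags(json.dumps({"tags": ["tag-%d" % i for i in range(60)]})))
```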
21:26:26 bknudson: correct; https://review.openstack.org/#/c/181784/8/guidelines/tags.rst - expand the upper context
21:26:36 the specific context here is a PUT
21:26:42 I think the prose could be clearer
21:26:52 but it's reasonably sane if you read more context
21:26:54 I assume you also don't want a tag that's 50 MB ?
21:27:11 that might trigger a 413 :)
21:27:55 alright, I think we are back to where we started, so now is as good a time as any to move on
21:28:07 :)
21:28:08 I was reading that one wrong
21:28:09 go on the review and comment if you find anything
21:28:26 #topic Library release ACL changes
21:28:34 #link https://review.openstack.org/189856
21:28:40 dhellmann: you again
21:29:19 ttx: should that include KeystoneAuth once that is a thing?
21:29:28 dhellmann: ^ cc
21:29:35 morganfainberg: likely
21:29:40 We have a patch up to change the tagging permissions for projects that look like libraries:
21:29:40 #link https://review.openstack.org/189856
21:29:40 I will be submitting another for the projects managed by the release team but not part of the old “integrated release”.
21:29:40 And then I plan to encourage the -infra team to prioritize those reviews so we can make the cut-over.
21:29:40 There is also a spec up discussing automation for reviews for tags.
21:29:41 #link https://review.openstack.org/191193
21:29:42 Please take a few minutes to read through that and comment.
21:29:49 ttx: sounds good, will put that on the backburner for as soon as we're ready for a 1.0
21:30:04 morganfainberg: yeah, we can add that
21:30:24 dhellmann: yeah nothing to do yet, we're not ready for it to roll under tighter release management
21:30:33 dhellmann: but i'll keep it in mind
21:31:20 morganfainberg: ok, there are a couple of steps, including marking it release:managed in governance and updating the ACLs file
21:31:26 also for reference and rationale:
21:31:27 #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/066346.html
21:31:47 dhellmann: i'll also run our rationale by you again when we're ready on how the project works (a little different than some of the other libs)
21:31:53 dhellmann: but this can all be delayed.
21:31:54 morganfainberg: sounds good
21:32:04 dhellmann: i want to move middleware under this model for sure sooner vs later though
21:32:08 dhellmann: keystonemiddleware*
21:32:22 dhellmann: i can ping you to get that added here / what else we need to do offline though
21:32:34 morganfainberg: let's chat tomorrow
21:32:39 dhellmann: sounds good
21:32:58 dhellmann: I'm fine on all the oslo libs moving over to library-release, as I assume dims and you talk regularly
21:32:58 other comments on that ?
21:33:14 anteaya: dims is on the release team, too, so he'll be handling those releases
21:33:14 dhellmann: how can I see that all the other projects agree to this change?
21:33:19 great
21:33:24 anteaya: this is their chance to disagree :-)
21:33:28 so +1 on all oslo projects
21:33:33 * anteaya listens
21:34:06 fwiw nobody objected on the ML thread nor on the review so far.
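As a rough illustration of the kind of check the tag-automation spec (https://review.openstack.org/191193) could perform, here is a sketch validating that a proposed tag is a sane semver increment. The function names and the exact increment rules are assumptions, not taken from the spec.

```python
from packaging.version import Version


def _xyz(v):
    # normalize a release tuple to (major, minor, patch)
    return (v.release + (0, 0, 0))[:3]


def is_valid_next_version(latest, proposed):
    lv, pv = Version(latest), Version(proposed)
    if pv <= lv:
        return False  # tags must move forward
    major, minor, patch = _xyz(lv)
    # allow exactly one of: patch bump, minor bump, major bump
    return _xyz(pv) in {
        (major, minor, patch + 1),
        (major, minor + 1, 0),
        (major + 1, 0, 0),
    }


print(is_valid_next_version("1.2.3", "1.3.0"))  # True
print(is_valid_next_version("1.2.3", "3.0.0"))  # False: skips 2.x
print(is_valid_next_version("1.2.3", "1.2.2"))  # False: backwards
```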
21:34:17 anteaya: but we're doing this for all projects managed by the release team, and those project teams have all agreed to that already, so it shouldn't be too big of an issue for anyone
21:34:19 dhellmann: I commented on that review to say I am good with this for python-novaclient
21:34:26 johnthetubaguy: nice, thank you
21:34:52 a team owning the consistency checks sounds good to me, helps us to not screw up
21:34:53 also easy to fix if some PTL ends up disagreeing next week
21:35:02 johnthetubaguy: thanks
21:35:07 dhellmann: okey dokey
21:35:12 * anteaya +2's the patch
21:35:15 yep, ACLs are far from being etched in stone
21:35:33 ok, I guess we can move on
21:35:38 #topic Vertical teams announcements
21:35:39 also keep in mind that as we get more of this automated, it'll feel a bit more natural
21:35:42 just as long as we can point confused people at the announcement, i'm good
21:35:44 #info Swift has slightly changed its core team structure. There is now a separate swiftclient-core in addition to swift-core - http://lists.openstack.org/pipermail/openstack-dev/2015-June/066982.html
21:35:47 fungi: ++
21:35:59 SlickNik: you had something to bring up, didn't you?
21:36:21 dhellmann: yes
21:36:24 I just wanted to follow up on one of the recent email threads on the mailing list that specifically affects Trove, and possibly other OpenStack services.
21:36:26 notmyname: any time window for the next swift release ?
21:36:34 (yet ?)
21:36:35 Referring to http://lists.openstack.org/pipermail/openstack-dev/2015-June/065731.html (Protected openstack resources)
21:36:57 I think dhellmann and some others made some good points about deploying instances into a special tenant, and isolating them -- which seems to make sense to me, and we're going with this approach with Trove for now.
21:37:04 ttx: I wish it would have happened already. there's one patch that needs to get in. I'll let you/dhellmann know as soon as I know something
21:37:14 notmyname: sounds good
21:37:19 However, even with this approach there were a few concerns that some of us had, so I wanted to bring them up to see what other folks thought of them (not sure if some of these are unfounded):
21:37:38 dhellmann: if we come up with tag automation we could actually apply it to intermediary-released projects like swift too
21:37:53 1. Deploying into a different tenant prevents sharing of resources across tenants -- for instance if I need to share a keypair, or security group with my instance I need to duplicate it in the other tenant. (Neutron) Ports are a common resource here that come to mind, but neutron has a way of attaching ports defined in one tenant to instances in another (admin / advanced-service role).
21:38:02 ttx: yes, I plan to rename release_library.sh to release_project.sh or something similar
21:38:08 that would streamline the release communication (currently a mix of IRC pings/emails)
21:38:26 ttx: yep, one step at a time :-)
21:39:02 SlickNik: is this a situation where being explicit about what is shared/visible could be considered a feature, though?
21:39:22 "I have 5 keys, but only 1 should be used for trove, so I only need to install that one in the second tenant"
21:40:25 dhellmann: We've got the use case for sec-groups in Trove, and the approach we're planning to take is similar — i.e. have an API to install that particular sec-group rule in Trove.
21:40:29 I keep wondering about hierarchical tenants and things, but it feels overcomplicated
21:40:43 johnthetubaguy: Agree, me too.
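For readers unfamiliar with the "special tenant" approach under discussion, a minimal sketch follows. It is not Trove's actual code, the credential names and tenant layout are hypothetical, and it is written against keystoneauth1, which was still being split out of keystoneclient at the time of this meeting.

```python
# The service boots guest instances with its own credentials in its
# own project, so the customer's tenant never owns the instance or
# its boot image (keeping secrets such as the bug 1445295 rabbit
# password out of the customer's reach). All names are illustrative.
from keystoneauth1 import loading, session
from novaclient import client as nova_client


def service_nova():
    auth = loading.get_plugin_loader('password').load_from_options(
        auth_url='https://keystone.example.com/v3',
        username='trove-service',        # dedicated service user
        password='SERVICE_PASSWORD',
        project_name='trove-instances',  # dedicated service tenant
        user_domain_id='default',
        project_domain_id='default',
    )
    return nova_client.Client('2', session=session.Session(auth=auth))


nova = service_nova()
server = nova.servers.create(name='trove-guest-1234',
                             image='IMAGE_UUID', flavor='FLAVOR_ID')
```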
21:40:57 so we have done this already, in a way, with swift and glance and how nova uses glance to store images
21:40:59 Don't have a clearcut solution yet.
21:41:02 part of the concern is to be able to consistently mitigate situations like bug 1445295
21:41:03 bug 1445295 in Trove "Guestagent config leaks rabbit password" [Undecided,New] https://launchpad.net/bugs/1445295 - Assigned to Amrith (amrith)
21:41:32 SlickNik: I still wonder if switching to multi-tenant messaging would be less effort
21:41:59 trove has "secrets" in its instance boot images. if those are in a customer-controlled tenant, then the tenant has access to download that image and get at the gooey goodness therein
21:42:17 fungi: that is frightening
21:42:27 fungi: yeah, they can't have access to this
21:42:32 locking all this down would be whack-a-mole
21:42:34 and i'm sure we have plenty of similar situations with other service platforms which rely on nova instances
21:42:41 stevebaker: unfortunately switching to multi-tenant messaging for the guest still wouldn't solve this issue — you'd still have to deploy in a special trove tenant.
21:42:47 nikhil_k: the glance and swift idea where you need both a service token and a user token to access something, how is that going?
21:42:58 fungi: +1
21:43:22 johnthetubaguy: done on the swift side
21:43:40 johnthetubaguy: was in the kilo release
21:43:43 johnthetubaguy: the work has some proof-of-concept impl
21:43:44 fungi: yeah, we solved it in akanda by having no secrets in the service vm and restricting access to the agent with a private network. Apparently that won't work for all of trove's use cases.
21:43:57 demonstrated during the previous mid-cycle
21:43:57 notmyname: ah, OK, so how does that model work, would it work here for trove?
21:44:49 johnthetubaguy: not sure I understand the question. the model is simply "you need 2 tokens and they must both be valid (client and service tokens)"
21:45:08 dhellmann: I think we're gravitating towards an akanda-like solution at the moment — although I would love to know more about the swift / glance dual-token implementation.
21:45:26 SlickNik: yes, it sounds like there might be another option to consider
21:45:27 notmyname: I was meaning what tenant has the resource? given the tokens are from different tenants I guess?
21:45:29 SlickNik: let's sync offline on that
21:45:42 notmyname / nikhil_k: Will chat with you guys offline about that.
21:45:54 nikhil_k: Sounds good, thanks!
21:45:59 so nova has a lock instance method
21:46:17 I am thinking a service could create a VM in some "locked" state with a dual token system
21:46:26 johnthetubaguy: http://docs.openstack.org/developer/swift/overview_backing_store.html
21:46:58 notmyname: thanks for the link
21:47:02 I guess the main thing I worry about is, what is wrong with just having all the VMs live in another tenant, to make sure we do fix the limitations with that model in anything else that gets planned
21:47:51 johnthetubaguy: Yes, so a couple of other concerns that came up with having a special tenant were around rate-limits for it, and scale.
21:48:03 the other supporting use case which seems reasonable is that it's convenient for billing/quota purposes to have them technically count toward the customer's tenant
21:48:22 fungi: yes that too.
21:48:43 SlickNik: have enough to follow up off-meeting ?
21:48:46 So, I think I have some good direction here — don't want to rat hole on this too much.
21:48:52 ttx: yes
21:48:53 cool
21:48:55 hogepodge: I think you had something to mention ?
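Before the next topic, a sketch of the composite-token model notmyname linked above (the Swift backing-store pattern): the request carries both the end user's token and the service's own token, and the proxy requires both to be valid. The URL, container name, and token values below are placeholders.

```python
import requests

USER_TOKEN = 'USER_TOKEN_PLACEHOLDER'        # proves the user asked for it
SERVICE_TOKEN = 'SERVICE_TOKEN_PLACEHOLDER'  # proves the service performed it

# Store a backup in a service-owned container; Swift checks both the
# X-Auth-Token and X-Service-Token headers before allowing access, so
# neither the user nor the service alone can read the data.
resp = requests.put(
    'https://swift.example.com/v1/AUTH_service/backups/instance-1234/db.tar',
    headers={'X-Auth-Token': USER_TOKEN,
             'X-Service-Token': SERVICE_TOKEN},
    data=b'backup bytes here',
)
resp.raise_for_status()
```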
21:49:01 Yes,
21:49:13 SlickNik: I did comment on that in the spec I think, let's follow up later, I am half asleep right now I am afraid
21:49:17 I've been working on Defcore and Interoperability.
21:49:34 fungi: billing and quota can be handled if the app creating the instance has some other object that can be counted and that corresponds to the instance(s)
21:49:34 and doing some awesome work at it
21:49:46 We've run into Glance API version issues with testing.
21:49:49 dhellmann: I think that's the gist of my comment in the spec
21:49:55 dhellmann: agreed, but that does mean additional complication on the billing side
21:50:02 A lot of the issues are captured in this review
21:50:05 #link https://review.openstack.org/#/c/189867/
21:50:12 * nikhil_k clicks
21:50:23 fungi: it's more complicated, but it's also explicit that you're charging for or applying a quota to a special thing, so we found it to pay off at dreamhost
21:50:30 hogepodge: mostly around nova not supporting glance v2, right ?
21:50:37 V1 is more widely deployed in testing (particularly with Nova) and the client.
21:50:42 Yes.
21:51:01 johnthetubaguy: where are we standing on this ? Glance v2 support in Nova ?
21:51:05 OK, so nova only has an image API that's glance v1 compatible, right?
21:51:22 ttx: I think jaypipes has offered to add support for glance v2
21:51:22 dhellmann: though i think the "locked" instances idea also can be used to interpret the specialness and report it distinctly to the customer as such
21:51:23 johnthetubaguy: that's correct. There's an open blueprint that was too late for Kilo
21:51:26 v2 support in Nova, coming up in L1
21:51:30 hope to get more reviews
21:51:31 But it is supposed to be worked on in Liberty
21:51:43 #link https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api
21:51:44 fungi: that could be true
21:51:50 nikhil says L1, which means one week
21:51:51 johnthetubaguy: flwang is working on it atm
21:52:08 nikhil_k: excellent
21:52:11 nikhil_k: OK, we need to get that approved for liberty, it's not approved right now, and the deadline is next week sometime
21:52:15 ttx: I just meant code, sorry. The functionality may not be merged that soon
21:52:18 johnthetubaguy: we'll need to talk about nova defaulting to v2 cinder as well. v2 support already exists.
21:52:28 nikhil_k: ah
21:52:32 nikhil_k: flwang: Is there anything I can help out with?
21:52:35 nikhil_k: I think jaypipes said he would drive that, I should follow up with him
21:52:37 johnthetubaguy: gotcha, on my list of TODOs now
21:52:47 hogepodge: so I am still confused about the interop question and nova
21:52:54 hogepodge: I will add you to the email loop going for us all working on it
21:53:14 we have an image-v1-compatible API; we have zero plans of adding an image v2 API in nova
21:53:21 johnthetubaguy: sounds good. me too then, review help would be much appreciated!
21:53:27 what's the issue with the tests here?
21:53:34 johnthetubaguy: we're using Tempest to drive interoperability testing. Right now we don't test image APIs because there are vendors who object to v1 being required in the Tempest tests.
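To ground the v1/v2 distinction being discussed, here is a short sketch of a client talking to the Images v2 API directly, as opposed to going through Nova's image proxy (which only exposes a glance-v1-shaped API). The endpoint and token values are placeholders.

```python
from glanceclient import Client as GlanceClient

# Native Images v2 call: this is what interop tests would ideally
# exercise, and what some vendors expose exclusively.
glance = GlanceClient('2', endpoint='https://glance.example.com',
                      token='USER_TOKEN')
for image in glance.images.list():
    print(image['id'], image['name'])

# By contrast, Tempest currently drives images through Nova's proxy
# (e.g. novaclient's nova.images.list()), which is backed by glance
# v1 -- hence the DefCore friction when a cloud disables v1.
```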
21:53:51 johnthetubaguy: which is a side effect of Nova only supporting v1
21:54:11 hogepodge: erm, not sure I get the implication there
21:54:22 * nikhil_k wants to clarify that email is only to track blockers for people already assigned and not for some secret communication
21:54:32 some vendors implement only the glance v2 API customer-facing
21:54:49 by doing things like setting the v1 API rate limit to an impossibly low value like 0
21:55:02 fungi: agreed, due to the security issues in glance v1, if you expose glance v1 directly
21:55:11 yet tempest tests use v1 because that's what nova needs
21:55:34 nova only needs that internally exposed
21:55:45 and then there are other vendors who have not implemented glance v2 support yet
21:55:47 shouldn't tempest tests use v2 when testing glance external APIs ?
21:55:59 they should
21:56:10 ttx: glance isn't a required component of defcore
21:56:12 but glance v1 is exposed in some cases
21:56:18 ttx: so we use the nova proxy
21:56:26 hmm, ok
21:56:53 * ttx has trouble context-switching at midnight
21:56:55 ttx: we could consider image as a required component. keystone was out until just a couple of months ago
21:57:19 I can add this to the topics for next week if we want more time.
21:57:26 hogepodge: so the other issue is that the tests require the upload of an image, which is another issue I guess?
21:57:31 hogepodge: can we do this async
21:57:38 I think it's a complex discussion and yes, more time can't hurt
21:57:40 hogepodge: I can't think straight this late
21:57:46 johnthetubaguy: that's an important capability imo
21:57:59 hogepodge: maybe engage with John during the week
21:58:13 so I support nova supporting glance v2, as we had planned for kilo
21:58:19 defcore meeting tomorrow too if anyone wants to participate.
21:58:21 and we can put it back on the agenda next week
21:58:29 the bigger issue is working out how to support some of the glance v1 APIs that don't work in glance v2
21:58:29 johnthetubaguy: +1 to that
21:58:40 as nova has to have a glance v1 API exposed
21:58:40 ahh, yep, i guess the meat of the problem is that defcore (via tempest) was expecting the nova image subcommands to work, which they presumably won't without glance v2 support in nova
21:58:42 long story
21:58:52 fungi: +1 yes
21:59:09 fungi: the nova image API should always work, otherwise you can't download an image in nova
21:59:13 at least in providers who have actually disabled glance v1
21:59:43 so sounds like it's more nuanced in that case
21:59:53 fungi: so if glance v1 is disabled, I don't think you can launch a VM, although it's possible that's hypervisor-dependent, I can't remember right now
22:00:05 got it. anyway, out of time
22:00:13 #topic Open discussion
22:00:19 last word ?
22:00:43 "fnord"
22:00:43 #endmeeting