21:02:51 <ttx> #startmeeting crossproject
21:02:52 <stevebaker> \o
21:02:52 <david-lyle> o/
21:02:52 <openstack> Meeting started Tue Jun 16 21:02:51 2015 UTC and is due to finish in 60 minutes.  The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:02:52 <pshige> o/
21:02:53 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:02:56 <openstack> The meeting name has been set to 'crossproject'
21:02:56 <ttx> Today's agenda:
21:03:00 <ttx> #link http://wiki.openstack.org/Meetings/CrossProjectMeeting
21:03:06 <bknudson> hi
21:03:10 <etoews> o/
21:03:10 <morganfainberg> stevemar: ^^
21:03:12 <ttx> #topic Horizontal teams announcements
21:03:14 <johnthetubaguy> o/
21:03:16 <nikhil_k> o/
21:03:21 <ttx> On the release management front, liberty-1 is next week
21:03:22 <dhellmann> o/
21:03:33 <ttx> #info for projects we manage that are using development milestones, we expect the PTL (or release liaison) to show up next Tuesday to sync with us during office hours
21:03:40 <ttx> 0800-1000 UTC or 1800-2000 UTC in #openstack-relmgt-office
21:03:51 <ttx> That will let us double-check implemented blueprints and discuss when to tag
21:03:58 <ttx> Questions on that ?
21:04:10 <dims> o/
21:04:26 <redrobot> o/
21:04:30 <j^2> nope
21:04:30 <sdague> #info QA: grenade external plugins open for business in the big tent - http://lists.openstack.org/pipermail/openstack-dev/2015-June/066583.html
21:04:58 <sdague> I brought that up previously in this meeting, just wanted to make sure teams saw the information was out there in how to get started
21:05:49 <SlickNik> sdague: very cool — thanks for the info.
21:05:57 * jroll arrives a bit late
21:06:04 <boris-42> ttx: hi=)
21:06:18 <sdague> if there are any questions, now is good, or put on the list
21:06:23 <sdague> </end>
21:06:25 <SlickNik> I know at least a couple of people on trove who were previously working on a grenade job — will pass the info along to make sure they see it.
21:06:45 <dims> #info Oslo: Request from Oslo team for Liberty Cycle - http://lists.openstack.org/pipermail/openstack-dev/2015-June/067131.html
21:07:11 <dims> A handful of items for projects who use oslo, please take a look
21:08:24 <ttx> Other horizontal teams announcements ?
21:08:24 * dims not sure if this was vertical or horizontal :)
21:08:31 <ttx> It is HORIZONTAL
21:08:38 <dims> :)
21:08:48 <Rockyg> o/
21:08:50 <johnthetubaguy> dims: thats a good tick list, appreciated!
21:08:52 <ttx> I think I may merge the two sections for clarity :)
21:09:05 <fungi> dims: oslo is a diagonal effort
21:09:35 <dims> haha
21:09:37 <fungi> (i have no idea what that means, but it sounds cool)
21:09:42 <ttx> Rockyg: is that a "I have announcement" or "I'm here" sort of o/ ?
21:09:44 <edleafe> o/
21:09:47 <dims> thanks johnthetubaguy
21:09:59 <SlickNik> dims: ++ will follow up.
21:10:05 <dims> thanks SlickNik
21:10:08 * edleafe is just saying hi
21:10:13 * Rockyg is lurking but here in support of API standards
21:10:23 <ttx> #topic Server versioning changes (dhellmann)
21:10:28 <ttx> #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/067006.html
21:10:33 <ttx> dhellmann: ohai
21:10:49 <dhellmann> here!
21:10:57 <dhellmann> As we have discussed a couple of times, we’re switching the server projects to semver versioning.
21:11:01 <dhellmann> #link http://lists.openstack.org/pipermail/openstack-dev/2015-May/065211.html
21:11:05 <dhellmann> #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/thread.html#65278
21:11:13 <dhellmann> I have submitted patches to all of the projects managed by the release team:
21:11:15 <dhellmann> #link https://review.openstack.org/#/q/topic:semver-releases,n,z
21:11:16 <dhellmann> If you agree with the numbering scheme, we need those to land before the L-1 milestone next week.
21:11:17 <dhellmann> They’re failing right now because the versions go backwards. I’ll be pushing alpha tags to correct that tomorrow.
21:11:34 <dhellmann> Does anyone have questions about the versions proposed, or anything else related to this change?
21:11:53 <bknudson> what is a non-backwards compat change (requires major version bump) for server projects? The interface to keystone is, for example, keystone-manage.
21:12:01 <bknudson> The REST API has its own versioning (V2.0, V3)
21:12:10 <dhellmann> yeah, some of that is still up for discussion
21:12:31 <dhellmann> one proposal has been to bump the major version when migrations are squashed, meaning that upgrades have to pass through that version
21:12:40 <lifeless> bknudson: indeed; I'd argue any of: removing something from the API; changing the contract of the CLI, including adding new required config options or removing config options
21:12:44 <dhellmann> also we've discussed tagging a new major version each cycle, just because
21:13:12 <lifeless> bknudson: or more broadly 'if users or deployers need to care, then its incompatible and we should signal that'
21:13:20 <SpamapS> config file changes would be backward compatability concerned
21:13:26 <ttx> note that for projects under the development-milestone regime, they will just use X.0.0 like they used YYYY.Z.0
21:13:40 <johnthetubaguy> dhellmann: upgrade is the big one for me with my Nova hat on, we have lots of compat code for live-upgrades we want to keep dropping each release
21:13:43 <ttx> they would not switch to semver
21:14:05 <ttx> that is more for projects that would do intermediary releases
21:14:06 <dhellmann> johnthetubaguy: yes, right
21:14:09 <notmyname> we bumped the major version in swift when we made a major change that would cause data unavaialbility if you downgraded
21:14:13 <johnthetubaguy> dhellmann: but we have a few months to answer that question, luckily
21:14:21 <dhellmann> notmyname: another good one
21:15:09 <ttx> dhellmann: anything else on that topic ?
21:15:14 <dhellmann> so we'll figure those things out when the time comes, and I'll keep notes from all of these ideas
21:15:18 <dhellmann> nope, that's it unless there are more questions
21:15:32 <fungi> one item which has come up is that we've got recent-ish security advisories mentioning upcoming scheduled releases explicitly. we'll need errata to correct all those if the renumbering implementation is not delayed to after the next scheduled releases
21:16:09 <fungi> mainly wanting to confirm how soon this is planned for the api server projects so that we can plan accordingly
21:16:10 <ttx> fungi: our plan is to have those in place to tag X.0.0b1 instead of 2015.2.0b1 at liberty-1
21:16:46 <fungi> okay, so we're not changing tag sequence on any existing stable branches
21:16:53 <ttx> not at all
21:16:54 <dhellmann> no, this is just for liberty and forward
21:17:10 <fungi> that covers my concerns. thanks!
21:17:13 <ttx> kilo would likely still generate 2015.1.Z
21:17:27 <lifeless> has to
21:17:31 <ttx> doing otherwise would be even more confusing
21:17:43 <fungi> yep, that makes sense. thanks
21:17:43 <dims> ++
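The ordering problem dhellmann alludes to above ("the versions go backwards") can be sketched as follows; the version numbers and the `parse` helper are purely illustrative, not part of any release tooling:

```python
def parse(version):
    # Compare release numbers as integer tuples, the way packaging
    # tools order plain dotted versions.
    return tuple(int(part) for part in version.split("."))

# A date-based kilo number sorts "ahead" of a semver-style liberty
# number, so the switch naively makes versions appear to go backwards:
assert parse("2015.1.0") > parse("13.0.0")

# Within the new scheme itself, tags order normally again:
assert parse("13.0.0") < parse("13.0.1") < parse("14.0.0")
```

This is why alpha/pre-release tags are needed to bridge the transition, and why existing stable branches keep their old numbering.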
21:17:47 <ttx> alright, moving on then
21:17:50 <dhellmann> oh, I did also post about this to the operators list
21:17:50 <ttx> #topic Clarification on the return code when a server has a hard coded length limit
21:17:52 <dhellmann> #link http://lists.openstack.org/pipermail/openstack-operators/2015-June/007390.html
21:17:57 <ttx> oops
21:18:04 <ttx> The API working group has one new guideline that is entering the freeze period:
21:18:07 <dhellmann> sorry about that
21:18:08 <ttx> #link https://review.openstack.org/#/c/181784/
21:18:19 <ttx> They would like PTLs/CPLs, and other interested parties, to take a look at these reviews.
21:18:27 <ttx> They will use lazy consensus at the end of the freeze to merge them if there are no objections.
21:18:38 * nikhil_k giving one review
21:18:43 <ttx> We can use a bit of time in this meeting to discuss it, if you have comments
21:18:51 <ttx> etoews: you there ?
21:18:52 <etoews> i'm all ears
21:19:33 <nikhil_k> shouldn't it be 413?
21:20:34 <morganfainberg> nikhil_k: that is if the request entity is too large, not the response
21:20:37 <bknudson> might want to differentiate from 413 Request Entity Too Large and 414 Request-URI Too Long
21:21:16 <ttx> morganfainberg: ++
21:21:18 <morganfainberg> nikhil_k: the advisory for the WG proposal is saying if the response is too large - you're getting a 400 back, not a request or uri too long
21:21:21 <krotscheck> Hrm. And 416 is bytes only.
21:21:24 <morganfainberg> krotscheck: yep
21:21:35 <morganfainberg> krotscheck: only on a range request
21:21:50 <morganfainberg> this is either a 500 (bad idea) or a 400 (more correct)
21:21:59 <ttx> right 400 is fine
21:22:01 <krotscheck> morganfainberg: Well, it's technically a range request, it's just not a range of bytes #semantics
21:22:05 <krotscheck> indeed
21:22:21 <nikhil_k> spec says: If API limits the length of collection type property, the return code should be **400 Bad Request** when the request exceeds the length limit.
21:22:22 <jroll> wait, where are we seeing this is about range requests
21:22:23 <SlickNik> Doesn't this imply that the request is too large: "If API limits the length of collection type property, the return code
21:22:23 <SlickNik> should be **400 Bad Request** when the request exceeds the length limit."
21:22:25 <lifeless> there's very little benefit in some of these fine-grained codes
21:22:29 <jroll> looks to me like... what nikhil_k said.
21:22:44 <bknudson> just wondering - do we only care about the response code and not about any extra info that should be returned?
21:23:04 <bknudson> so we return 400 but seems like there should also be some info that allows the client to recover
21:23:08 <jroll> (ignore me, I'm misreading things)
21:23:14 <bknudson> they don't know which field was too long otherwise
21:23:31 <bknudson> or maybe you don't want to tell them for security reasons
21:23:45 <etoews> bknudson: this guideline only covers the status code
21:23:45 <lifeless> all HTTP errors are meant to come with a body that describes the problem in as much detail as the server wants to provide
21:24:06 <etoews> there's another guideline for errors
21:24:09 <lifeless> the status code is purely for programmatic flow control on the client
21:24:16 <morganfainberg> bknudson: i want to say we need to address that second part independently of the status code. what should the extra data returned be? it could be combined or be separate
21:24:17 <lifeless> this is an error
21:24:19 <Rockyg> lifeless: ++
21:24:22 <lifeless> its not a 500
21:24:27 <etoews> i should say the error format in a response body
21:24:39 <johnthetubaguy> 400 is less confusing than overloading some special value wrongly, which seemed to be the general idea we are heading towards, with more details in the body of the response
21:24:42 <morganfainberg> so, we've specified the status code, now we need to handle the other data (lifeless ++)
21:24:51 <lifeless> right; my point is just that 413 and 414 won't help a client go 'oh too many tags'
21:24:55 <johnthetubaguy> 500 = server error, so lets rule that out, its a client based error
21:24:58 <etoews> johnthetubaguy: ++
21:24:59 <SlickNik> johnthetubaguy: I agree ++
21:25:01 <lifeless> and so 400 is entirely appropriate
21:25:01 <johnthetubaguy> lifeless: +1
21:25:47 <ttx> Anyway, feels like you can comment (or +1) on the review
21:25:56 <lifeless> So this API guideline tweak is entirely correct AFAICT. It might not go far enough, but as etoews says there are guidelines already for the body in this case.
21:25:59 <bknudson> this is talking about the # of elements in a collection?
21:26:01 <etoews> for reference, here's the guideline for errors #link https://review.openstack.org/#/c/167793/
21:26:04 <bknudson> and not the length of a string or something?
21:26:26 <lifeless> bknudson: correct; https://review.openstack.org/#/c/181784/8/guidelines/tags.rst - expand the upper context
21:26:36 <lifeless> the specific context here is a PUT
21:26:42 <lifeless> I think the prose could be clearer
21:26:52 <lifeless> but its reasonably sane if you read more context
21:26:54 <bknudson> I assume you also don't want a tag that's 50 MB ?
21:27:11 <lifeless> that might trigger a 413 :)
21:27:55 <ttx> alright, I think we are back to where we started, so now is as good as any time to move on
21:28:07 <nikhil_k> :)
21:28:08 <notmyname> I was reading that one wrong
21:28:09 <ttx> go on the review and comment if you find anything
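The consensus above — return 400 rather than 413/414, with recovery details in the body per the separate errors guideline — can be sketched as a minimal handler; `MAX_TAGS`, `put_tags`, and the error-body fields are invented for illustration, not taken from the guideline text:

```python
import json

MAX_TAGS = 50  # hypothetical server-side limit on the tag collection


def put_tags(tags):
    """Return an (HTTP status, JSON body) pair for a PUT of a tag list."""
    if len(tags) > MAX_TAGS:
        # 400, not 413/414: the request is well-formed, the client just
        # exceeded an API-defined limit, and the body says which limit
        # so the client can recover.
        body = {"errors": [{
            "code": "tags.too_many",
            "detail": "at most %d tags allowed, got %d" % (MAX_TAGS, len(tags)),
        }]}
        return 400, json.dumps(body)
    return 200, json.dumps({"tags": tags})
```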
21:28:26 <ttx> #topic Library release ACL changes
21:28:34 <ttx> #link https://review.openstack.org/189856
21:28:40 <ttx> dhellmann: you again
21:29:19 <morganfainberg> ttx: should that include KeystoneAuth once that is a thing?
21:29:28 <morganfainberg> dhellmann: ^ cc
21:29:35 <ttx> morganfainberg: likely
21:29:40 <dhellmann> We have a patch up to change the tagging permissions for projects that look like libraries:
21:29:40 <dhellmann> #link https://review.openstack.org/189856
21:29:40 <dhellmann> I will be submitting another for the projects managed by the release team but not part of the old “integrated release”.
21:29:40 <dhellmann> And then I plan to encourage the -infra team to prioritize those reviews so we can make the cut-over.
21:29:40 <dhellmann> There is also a spec up discussing automation for reviews for tags.
21:29:41 <dhellmann> #link https://review.openstack.org/191193
21:29:42 <dhellmann> Please take a few minutes to read through that and comment.
21:29:49 <morganfainberg> ttx: sounds good, will put that on the backburner for as soon as we're ready for a 1.0
21:30:04 <dhellmann> morganfainberg: yeah, we can add that
21:30:24 <morganfainberg> dhellmann: yeah nothing to do yet, we're not ready for it to roll under tighter release mngmnt
21:30:33 <morganfainberg> dhellmann: but i'll keep it in mind
21:31:20 <dhellmann> morganfainberg: ok, there are a couple of steps, including marking it release:managed in governance and updating the ACLs file
21:31:26 <ttx> also for reference and rationale:
21:31:27 <ttx> #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/066346.html
21:31:47 <morganfainberg> dhellmann: i'll also run our rationale by you again when we're ready on how the project works (a little different than some of the other libs)
21:31:53 <morganfainberg> dhellmann: but this can all be delayed.
21:31:54 <dhellmann> morganfainberg: sounds good
21:32:04 <morganfainberg> dhellmann: i want to move middleware under this model for sure sooner vs later though
21:32:08 <morganfainberg> dhellmann: keystonemiddleware*
21:32:22 <morganfainberg> dhellmann: i can ping you to get that added here / what else we need to do offline though
21:32:34 <dhellmann> morganfainberg: let's chat tomorrow
21:32:39 <morganfainberg> dhellmann: sounds good
21:32:58 <anteaya> dhellmann: I'm fine on all the oslo libs moving over to library-release, as I assume dims and you talk regularly
21:32:58 <ttx> other comments on that ?
21:33:14 <dhellmann> anteaya: dims is on the release team, too, so he'll be handling those releases
21:33:14 <anteaya> dhellmann: how can I see that all the other projects agree to this change?
21:33:19 <anteaya> great
21:33:24 <dhellmann> anteaya: this is their chance to disagree :-)
21:33:28 <anteaya> so +1 on all oslo projects
21:33:33 * anteaya listens
21:34:06 <ttx> fwiw nobody objected on the ML thread nor on the review so far.
21:34:17 <dhellmann> anteaya: but we're doing this for all projects managed by the release team, and those project teams have all agreed to that already, so it shouldn't be too big of an issue for anyone
21:34:19 <johnthetubaguy> dhellmann: I commented on that review to say I am good with this for python-novaclient
21:34:26 <dhellmann> johnthetubaguy: nice, thank you
21:34:52 <johnthetubaguy> a team owning the consistency checks sounds good to me, helps us to not screw up
21:34:53 <ttx> also easy to fix if some PTL ends up disagreeing next week
21:35:02 <anteaya> johnthetubaguy: thanks
21:35:07 <anteaya> dhellmann: okey dokey
21:35:12 * anteaya +2's the patch
21:35:15 <fungi> yep, acls are far from being etched in stone
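For readers unfamiliar with what "tagging permissions" look like in practice, the change under review amounts to a Gerrit `project.config` stanza of roughly this shape (illustrative fragment only; the exact group names and repositories are in the review itself):

```ini
# project.config for a library repository (illustrative)
[access "refs/tags/*"]
    # only the release team's group may push signed release tags
    pushSignedTag = group library-release
```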
21:35:33 <ttx> ok, I guess we can move on
21:35:38 <ttx> #topic Vertical teams announcements
21:35:39 <dhellmann> also keep in mind that as we get more of this automated, it'll feel a bit more natural
21:35:42 <fungi> just as long as we can point confused people at the announcement, i'm good
21:35:44 <notmyname> #info Swift has slightly changed its core team structure. There is now a separate swiftclient-core in addition to swift-core - http://lists.openstack.org/pipermail/openstack-dev/2015-June/066982.html
21:35:47 <dhellmann> fungi: ++
21:35:59 <dhellmann> SlickNik: you had something to bring up, didn't you?
21:36:21 <SlickNik> dhellmann: yes
21:36:24 <SlickNik> I just wanted to follow up on one of the recent email threads on the mailing list that affects Trove in specific, and possibly other OpenStack services.
21:36:26 <ttx> notmyname: any time window for the next swift release ?
21:36:34 <ttx> (yet ?)
21:36:35 <SlickNik> Referring to http://lists.openstack.org/pipermail/openstack-dev/2015-June/065731.html (Protected openstack resources)
21:36:57 <SlickNik> I think dhellmann and some others made some good points about deploying instances into a special tenant, and isolating them -- which seems to make sense to me, and we're going with this approach with Trove for now.
21:37:04 <notmyname> ttx: I wish it would have happened already. there's one patch that needs to get in. I'll let you/dhellmann know as soon as I know something
21:37:14 <ttx> notmyname: sounds good
21:37:19 <SlickNik> However, even with this approach there were a few concerns that some of us had and so wanted to bring them up to see what other folks thought of them (not sure if some of these are unfounded):
21:37:38 <ttx> dhellmann: if we come up with tag automation we could actually apply it to intermediary-released projects like swift too
21:37:53 <SlickNik> 1. Deploying into a different tenant prevents sharing of resources across tenants -- for instance, if I need to share a keypair or security group with my instance, I need to duplicate it in the other tenant. (Neutron) Ports are a common resource here that come to mind, but neutron has a way of attaching ports defined in one tenant to instances in another (admin / advanced-service role).
21:38:02 <dhellmann> ttx: yes, I plan to rename release_library.sh to release_project.sh or something similar
21:38:08 <ttx> that would streamline the release communication (currently a mix of IRC pings/emails)
21:38:26 <dhellmann> ttx: yep, one step at a time :-)
21:39:02 <dhellmann> SlickNik: is this a situation where being explicit about what is shared/visible could be considered a feature, though?
21:39:22 <dhellmann> "I have 5 keys, but only 1 should be used for trove, so I only need to install that one in the second tenant"
21:40:25 <SlickNik> dhellmann: We've got the use case for sec-groups in Trove, and the approach we're planning to take is similar — i.e. have an API to install that particular sec-group rule in Trove.
21:40:29 <johnthetubaguy> I keep wondering about hierarchical tenants and things, but it feels overcomplicated
21:40:43 <SlickNik> johnthetubaguy: Agree, me too.
21:40:57 <johnthetubaguy> so we have done this already, in a way, with swift and glance and how nova uses glance to store images
21:40:59 <SlickNik> Don't have a clearcut solution yet.
21:41:02 <fungi> part of the concern is to be able to consistently mitigate situations like bug 1445295
21:41:03 <openstack> bug 1445295 in Trove "Guestagent config leaks rabbit password" [Undecided,New] https://launchpad.net/bugs/1445295 - Assigned to Amrith (amrith)
21:41:32 <stevebaker> SlickNik: I still wonder if switching to multi-tenant messaging would be less effort
21:41:59 <fungi> trove has "secrets" in its instance boot images. if those are in a customer-controlled tenant, then the tenant has access to download that image and get at the gooey goodness therein
21:42:17 <morganfainberg> fungi: that is frightening
21:42:27 <johnthetubaguy> fungi: yeah, they can't have access to this
21:42:32 <stevebaker> locking all this down would be whack-a-mole
21:42:34 <fungi> and i'm sure we have plenty of similar situations with other service platforms which rely on nova instances
21:42:41 <SlickNik> stevebaker: unfortunately switching to multi-tenant messaging for the guest still wouldn't solve this issue — you'd still have to deploy in a special trove tenant.
21:42:47 <johnthetubaguy> nikhil_k: the glance and swift idea where you need both a service token and a user token to access something, how is that going?
21:42:58 <johnthetubaguy> fungi: +1
21:43:22 <notmyname> johnthetubaguy: done on the swift side
21:43:40 <notmyname> johnthetubaguy: was in the kilo release
21:43:43 <nikhil_k> johnthetubaguy: the work has some proof of concept impl
21:43:44 <dhellmann> fungi: yeah, we solved it in akanda by having no secrets in the service vm and restricting access to the agent with a private network. Apparently that won't work for all of trove's use cases.
21:43:57 <nikhil_k> demonstrated during the previous mid-cycle
21:43:57 <johnthetubaguy> notmyname: ah, OK, so how does that model work, would it work here for trove?
21:44:49 <notmyname> johnthetubaguy: not sure I understand the question. the model is simply "you need 2 tokens and they must both be valid (client and service tokens)"
21:45:08 <SlickNik> dhellmann: I think we're gravitating towards an akanda-like solution at the moment — although I would love to know more about the swift / glance dual token implementation.
21:45:26 <dhellmann> SlickNik: yes, it sounds like there might be another option to consider
21:45:27 <johnthetubaguy> notmyname: I was meaning: which tenant has the resource? given the tokens are from different tenants, I guess?
21:45:29 <nikhil_k> SlickNik: let's sync offline on that
21:45:42 <SlickNik> notmyname / nikhil_k: Will chat with you guys offline about that.
21:45:54 <SlickNik> nikhil_k: Sounds good, thanks!
21:45:59 <johnthetubaguy> so nova has a lock instance method
21:46:17 <johnthetubaguy> I am thinking a service could create a VM in some "locked" state with a dual token system
21:46:26 <notmyname> johnthetubaguy: http://docs.openstack.org/developer/swift/overview_backing_store.html
21:46:58 <SlickNik> notmyname: thanks for the link
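The two-token model notmyname describes — "you need 2 tokens and they must both be valid" — boils down to a request carrying both the end user's token and the service's own token. A sketch, with the function name and token values invented for illustration:

```python
def service_request_headers(user_token, service_token):
    # Swift's backing-store model: the proxy must validate BOTH tokens
    # before granting access to the service-prefixed account, so a
    # leaked user token alone is not enough to reach the data.
    return {
        "X-Auth-Token": user_token,        # the end user's token
        "X-Service-Token": service_token,  # the service's own token
    }
```

The header names follow the service-token convention used by Swift and keystonemiddleware; whether the same pattern maps cleanly onto trove's guest instances is exactly the open question in this discussion.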
21:47:02 <johnthetubaguy> I guess the main thing I worry about is: what is wrong with just having all the VMs live in another tenant? we should make sure we fix the limitations of that model in anything else that gets planned
21:47:51 <SlickNik> johnthetubaguy: Yes so a couple of other concerns that came up with having a special tenant were around rate-limits for it, and scale.
21:48:03 <fungi> the other supporting use case which seems reasonable is that it's convenient for billing/quota purposes to have them technically count toward the customer's tenant
21:48:22 <SlickNik> fungi: yes that too.
21:48:43 <ttx> SlickNik: have enough to follow-up off-meeting ?
21:48:46 <SlickNik> So, I think I have some good direction here — don't want to rat hole on this too much.
21:48:52 <SlickNik> ttx: yes
21:48:53 <ttx> cool
21:48:55 <ttx> hogepodge: I think you had something to mention ?
21:49:01 <hogepodge> Yes,
21:49:13 <johnthetubaguy> SlickNik: I did comment on that in the spec I think, lets follow up later, I am half asleep right now I am afraid
21:49:17 <hogepodge> I've been working on Defcore and Interoperability.
21:49:34 <dhellmann> fungi: billing and quota can be handled if the app creating the instance has some other object that can be counted and that corresponds to the instance(s)
21:49:34 <ttx> and doing some awesome work at it
21:49:46 <hogepodge> We've run into Glance API version issues with testing.
21:49:49 <johnthetubaguy> dhellmann: I think thats the gist of my comment in the spec
21:49:55 <fungi> dhellmann: agreed, but that does mean additional complication on the billing side
21:50:02 <hogepodge> A lot of the issues are captured in this review
21:50:05 <hogepodge> #link https://review.openstack.org/#/c/189867/
21:50:12 * nikhil_k clicks
21:50:23 <dhellmann> fungi: it's more complicated, but it's also explicit that you're charging for or applying a quota to a special thing, so we found it to pay off at dreamhost
21:50:30 <ttx> hogepodge: mostly around nova not supporting glance v2, right ?
21:50:37 <hogepodge> V1 is more widely deployed in testing (particularly with Nova) and the client.
21:50:42 <hogepodge> Yes.
21:51:01 <ttx> johnthetubaguy: where are we standing on this ? Glance v2 support in Nova ?
21:51:05 <johnthetubaguy> OK, so nova only has an image API that's glance v1 compatible, right?
21:51:22 <johnthetubaguy> ttx: I think jaypipes has offered to add support for glance v2
21:51:22 <fungi> dhellmann: though i think the "locked" instances idea also can be used to interpret the specialness and report it distinctly to the customer as such
21:51:23 <hogepodge> johnthetubaguy: that's correct. There's an open blueprint that was too late for Kilo
21:51:26 <nikhil_k> v2 support in Nova, coming up in L1
21:51:30 <nikhil_k> hope to get more reviews
21:51:31 <hogepodge> But is supposed to be worked on in Liberty
21:51:43 <hogepodge> #link https://blueprints.launchpad.net/nova/+spec/use-glance-v2-api
21:51:44 <dhellmann> fungi: that could be true
21:51:50 <ttx> nikhil says L1, which means one week
21:51:51 <nikhil_k> johnthetubaguy: flwang is working on it atm
21:52:08 <hogepodge> nikhil_k: excellent
21:52:11 <johnthetubaguy> nikhil_k: OK, we need to get that approved for liberty, its not approved right now, and the deadline is next week sometime
21:52:15 <nikhil_k> ttx: I just meant code sorry. The functionality may not be merged that soon
21:52:18 <thingee> johnthetubaguy: we'll need to talk about nova defaulting v2 cinder as well. v2 support already exists.
21:52:28 <ttx> nikhil_k: ah
21:52:32 <hogepodge> nikhil_k: flwang: Is there anything I can help out with?
21:52:35 <johnthetubaguy> nikhil_k: I think jaypipes said he would drive that, I should follow up with him
21:52:37 <nikhil_k> johnthetubaguy: gotcha, on my list of TODOS now
21:52:47 <johnthetubaguy> hogepodge: so I am still confused about the interop question and nova
21:52:54 <nikhil_k> hogepodge: I will add you to the email loop going for us all working on it
21:53:14 <johnthetubaguy> we have an image v1 compatible API; we have zero plans of adding an image v2 API in nova
21:53:21 <nikhil_k> johnthetubaguy: sounds good. me too then, review helps would be much appreciated!
21:53:27 <johnthetubaguy> what's the issue with the tests here?
21:53:34 <hogepodge> johnthetubaguy: we're using Tempest to drive interoperability testing. Right now we don't test image apis because there are vendors who object to v1 being required in the Tempest tests.
21:53:51 <hogepodge> johnthetubaguy: which is a side effect of Nova only supporting v1
21:54:11 <johnthetubaguy> hogepodge: erm, not sure I get the implication there
21:54:22 * nikhil_k wants to clarify that email is only to track blockers for people already assigned and not for some secret communication
21:54:32 <fungi> some vendors implement only the glance v2 api customer-facing
21:54:49 <fungi> by doing things like setting the v1 api rate limit to an impossibly low value like 0
21:55:02 <johnthetubaguy> fungi: agreed, due to the security issues in glance v1, if you expose glance v1 directly
21:55:11 <fungi> yet tempest tests using v1 because that's what nova needs
21:55:34 <johnthetubaguy> nova only needs that internally exposed
21:55:45 <fungi> and then there are other vendors who have not implemented glance v2 support yet
21:55:47 <ttx> shouldn't tempest tests use v2 when testing glance external APIs ?
21:55:59 <nikhil_k> they should
21:56:10 <hogepodge> ttx glance isn't a required component of defcore
21:56:12 <nikhil_k> but glance v1 is exposed in some cases
21:56:18 <hogepodge> ttx: so we use the nova proxy
21:56:26 <ttx> hmm, ok
21:56:53 * ttx has trouble context-switching at midnight
21:56:55 <hogepodge> ttx: we could consider image as a required component. keystone was out until just a couple of months ago
21:57:19 <hogepodge> I can add this to the topics for next week if we want more time.
21:57:26 <johnthetubaguy> hogepodge: so the other issue is the tests require the upload of an image, which is another issue I guess?
21:57:31 <johnthetubaguy> hogepodge: can we do this async
21:57:38 <ttx> I think it's a complex discussion and yes, more time can't hurt
21:57:40 <johnthetubaguy> hogepodge: I can't think straight this late
21:57:46 <hogepodge> johnthetubaguy: that's an important capability imo
21:57:59 <ttx> hogepodge: maybe engage with John during the week
21:58:13 <johnthetubaguy> so I support nova supporting glance v2, as we had planned for kilo
21:58:19 <hogepodge> defcore meeting tomorrow too if anyone wants to participate.
21:58:21 <ttx> and we can put it back on agenda next week
21:58:29 <johnthetubaguy> the bigger issue is working out how to support some of the glance v1 APIs that don't work in glance v2
21:58:29 <hogepodge> johnthetubaguy: +1 to that
21:58:40 <johnthetubaguy> as nova has to have a glance v1 API exposed
21:58:40 <fungi> ahh, yep, i guess the meat of the problem is that defcore (via tempest) was expecting the nova image subcommands to work, which they presumably won't without glance v2 support in nova
21:58:42 <johnthetubaguy> long story
21:58:52 <hogepodge> fungi: +1 yes
21:59:09 <johnthetubaguy> fungi: the nova image API should always work, otherwise you can't download an image in nova
21:59:13 <fungi> at least in providers who have actually disabled glance v1
21:59:43 <fungi> so sounds like it's more nuanced in that case
21:59:53 <johnthetubaguy> fungi: so if glance v1 is disabled, I don't think you can launch a VM, although it's possible that's hypervisor dependent, I can't remember right now
22:00:05 <fungi> got it. anyway, out of time
22:00:13 <ttx> #topic Open discussion
22:00:19 <ttx> last word ?
22:00:43 <ttx> "fnord"
22:00:43 <ttx> #endmeeting