21:01:43 <nikhil_k> #startmeeting crossproject
21:01:44 <openstack> Meeting started Tue Jul 14 21:01:43 2015 UTC and is due to finish in 60 minutes.  The chair is nikhil_k. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:01:45 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:01:47 <openstack> The meeting name has been set to 'crossproject'
21:01:50 <EmilienM> o/
21:01:50 <tpatil> Hi
21:01:52 <janonymous> o/
21:01:52 <notmyname> here
21:02:00 <annegentle> here
21:02:02 <elmiko> yo/
21:02:04 <edleafe> o/
21:02:05 <nikhil_k> courtesy ping for david-lyle flaper87 dims ttx johnthetubaguy rakhmerov
21:02:09 <nikhil_k> courtesy ping for smelikyan morganfainberg bswartz slagle adrian_otto mestery
21:02:11 <nikhil_k> courtesy ping for kiall jeblair thinrichs j^2 stevebaker mtreinish Daisy
21:02:15 <nikhil_k> courtesy ping for notmyname dtroyer isviridov gordc SlickNik loquacities thingee
21:02:18 <nikhil_k> courtesy ping for hyakuhei redrobot TravT emilienm SergeyLukjanov devananda
21:02:18 <johnthetubaguy> o/
21:02:20 <j^2> o/
21:02:21 <nikhil_k> courtesy ping for boris-42
21:02:22 <david-lyle> o/
21:02:23 <redrobot> o/
21:02:24 <thingee> o/
21:02:27 <loquacities> o/
21:02:37 * fungi is mostly here for this
21:02:39 <kfox1111> o/
21:02:43 <kzaitsev_mb> o/
21:02:44 * dhellmann is going to have to duck out soon
21:02:55 <dtroyer_zz> o/
21:03:01 <jecarey> o/
21:03:11 <rockyg> o/
21:03:12 <jroll> \o
21:03:24 <nikhil_k> #info Agenda: https://wiki.openstack.org/wiki/Meetings/CrossProjectMeeting
21:03:45 <nikhil_k> Before we get started
21:04:12 <mtreinish> o/
21:04:19 <boris-42> hi
21:04:22 <nikhil_k> I wanted to ask if anyone is available to sign up to chair next week's meeting
21:04:56 <ttx> Won't be able to cover this one if nobody signs up
21:05:06 * EmilienM is on holiday
21:05:09 <ttx> so we'd likely skip the meeting if nobody volunteers :)
21:05:09 <annegentle> do it, it's fun and rewarding!
21:05:27 <EmilienM> annegentle: I put my name for August
21:05:34 <annegentle> thanks EmilienM
21:05:48 <hogepodge> o/
21:05:56 <fungi> "cancelled due to oscon"
21:06:39 <ttx> ok, cancelled it is, then!
21:06:57 <nikhil_k> #agreed we will skip the meeting next week
21:07:02 * fungi hangs a "gone conferencing" sign on the meeting room door
21:07:24 <nikhil_k> Let's get started.
21:07:33 <nikhil_k> #topic Team announcements (horizontal, vertical, diagonal)
21:07:50 <nikhil_k> #info horizontal announcements
21:08:00 <nikhil_k> any news on this?
21:08:00 <ttx> On release management side, I don't have anything specific to announce this week
21:08:06 <ttx> dhellmann: anything ?
21:08:19 <dhellmann> nothing from me this week
21:08:19 * ttx blames fireworks
21:08:24 <mtreinish> nothing this week for qa
21:08:55 <johnthetubaguy> Nova wise, we have our midcycle next week, no other big news
21:09:10 <nikhil_k> I don't have a separate section for the API_WG specs, thought we can cover them here
21:09:38 <elmiko> nikhil_k: we don't have any new specs to announce at this time. we will have more up for review later this week.
21:09:50 <nikhil_k> thanks elmiko
21:10:01 <nikhil_k> #info vertical announcements
21:10:20 <nikhil_k> Any other midcycles to announce?
21:10:42 <jroll> ironic (finally) came to a consensus on how we're doing independent release cycles, and will be doing one shortly after we finish up one feature in flight
21:10:55 * notmyname is working on getting a new python-swiftclient release
21:11:10 <notmyname> also, Swift will be doing a midcycle hackathon next month
21:11:43 <jeblair> oh, on the release front, the infra spec for the automation around release tagging landed: http://specs.openstack.org/openstack-infra/infra-specs/specs/centralize-release-tagging.html
21:11:58 <jroll> oh, and ironic midcycle is 8/12-14 in seattle
21:12:20 <hogepodge> defcore mid-cycle in two weeks
21:12:31 <nikhil_k> notmyname: mind adding the info here #link https://wiki.openstack.org/wiki/Sprints ?
21:12:36 <hogepodge> July 29/30 in Austin
21:13:17 <EmilienM> nikhil_k: we have a virtual one
21:13:20 <EmilienM> wiki is updated
21:13:35 <EmilienM> puppet openstack group has a sprint (or midcycle) in september
21:13:46 <david-lyle> horizon midcycle is next week
21:14:30 <nikhil_k> good stuff
21:14:36 <gordc> since everyone is saying something, ceilometer had ours last week
21:15:24 <david-lyle> timely announcement ;)
21:15:24 <nikhil_k> I guess we are short of announcements this week so we don't have a diagonal one
21:15:41 <nikhil_k> But great to hear all the midcycle shout outs
21:15:58 <nikhil_k> Moving on..
21:16:12 <nikhil_k> A couple of cross project specs
21:16:17 <nikhil_k> first up
21:16:19 <nikhil_k> #topic Replace eventlet + monkey-patching with ??
21:16:30 <nikhil_k> #link https://review.openstack.org/#/c/164035
21:17:03 <nikhil_k> There are a decent number of comments on it already, but it would be nice to have diverse feedback
21:18:10 <nikhil_k> Are there any crazy ideas for a replacement?
21:18:58 <nikhil_k> I like that John has written let's propose the most dramatic change first
21:20:09 <johnthetubaguy> this is josh right?
21:20:31 <nikhil_k> Joshua Harlow, yes
21:20:57 <johnthetubaguy> I know glyph had some good ideas about slowly transitioning to twisted or asyncio by using a non-greenlet hub
21:21:09 <nikhil_k> #help as a first work item on the spec: Request PTLs' feedback/analysis on their own projects' thread-safety.
21:21:33 <jroll> johnthetubaguy: indeed, I just pointed glyph at this spec
21:21:48 <johnthetubaguy> jroll: me too, dropped him an email, cools
21:22:05 <johnthetubaguy> honestly we need to identify actual issues really
21:22:24 <johnthetubaguy> I am interested in swift's performance issues, I should catch up with the go folks about that I guess
21:22:55 <notmyname> I haven't looked over this in detail yet, but yeah, I'm pretty concerned about Swift being able to work under a thread/fork model
21:23:04 <johnthetubaguy> well, my big issue was the DB problems, but it seems like we are making those go away, a little bit
21:23:10 <nikhil_k> I think we have some in Glance right up front
21:23:25 <notmyname> ie today's swift clusters are required to handle many thousands of req/sec.
21:24:51 <johnthetubaguy> anyways, dhellmann has some great comments on there, I should take a look
21:26:52 <jroll> I feel like this spec may be too large of a change (with too many unknowns) to reasonably discuss and come to any consensus in this meeting
21:27:06 <nikhil_k> yeah, seems like this spec needs some use cases from OpenStack projects
21:27:13 <jroll> clearly eventlet has some problems
21:27:18 <jroll> threads and asyncio also have problems
21:27:32 <jroll> we need to do more evaluation of the real-world usefulness of switching
21:27:50 <johnthetubaguy> jroll: right, I don't see anything compelling enough in the spec
21:28:07 <jroll> johnthetubaguy: agree
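For readers skimming the log, the status quo the spec proposes replacing looks roughly like this (a minimal illustrative sketch, not code from the spec or any project). eventlet.monkey_patch() rewrites blocking stdlib calls to yield to green threads, which is why the per-project thread-safety analysis requested above matters:

    import eventlet
    eventlet.monkey_patch()  # must run before importing modules that use the patched stdlib

    import time


    def worker(n):
        # after monkey-patching, time.sleep() cooperatively yields to
        # other green threads instead of blocking the whole process
        time.sleep(1)
        return n * 2


    # a GreenPool multiplexes many green threads onto a single OS thread
    pool = eventlet.GreenPool(size=100)
    print(list(pool.imap(worker, range(10))))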
21:28:53 <nikhil_k> thanks guys
21:28:58 <nikhil_k> moving on..
21:29:02 <nikhil_k> #topic Service Catalog updates
21:29:09 <nikhil_k> #link https://review.openstack.org/#/c/181393
21:29:14 <annegentle> holla
21:29:36 <annegentle> exciting topic, I know!
21:30:20 <nikhil_k> Looks neatly written ;-)
21:30:31 <annegentle> I agree ha ha :)
21:30:55 <nikhil_k> +1 on Standard required naming for endpoints
21:30:55 <annegentle> as usual, naming is the hardest part, but I think we can get a lot done towards consistency in the catalog
21:31:53 <johnthetubaguy> annegentle: I think you hit all the key issues there
21:32:27 <annegentle> sdague as partner-in-crime, er, collab
21:32:51 <annegentle> I hope we got everything, the etherpad got hosed, cough cough, but I think it's good-to-go
21:33:07 <jroll> kind-of-side-question: is there a way for keystone to send different catalogs for internal/external use?
21:33:25 <jroll> (I know little-to-nothing about catalogs, feel free to tell me that's a dumb question)
21:33:34 <annegentle> jroll: hm, I dunno. dolphm or morgan would know
21:33:43 <annegentle> jroll: it would certainly neaten up things
21:34:07 <johnthetubaguy> jroll: I need to look at some of that for the nova config
21:34:09 <jroll> annegentle: as an operator, that feels kind of key to me
21:34:12 <johnthetubaguy> there are the three urls I think
21:34:19 <dolphm> jroll: sort of. there's a feature called endpoint filtering that lets you whitelist certain endpoints for certain projects
21:34:45 <dolphm> jroll: just don't mistake it for security by obscurity!
21:34:48 <mtreinish> jroll: there are different endpoint types you can set for a service
21:35:07 <johnthetubaguy> it's the handing-it-out-to-the-end-user issue that's not so good
21:35:14 <johnthetubaguy> it only being in nova config is handy
21:35:21 <jroll> dolphm: of course, more like "nova should use this neutron endpoint that's only accessible internally"
21:35:44 <johnthetubaguy> dolphm +1 on the obscurity thing, but this is more URLs on an internal network that are of no use to end users
21:35:47 <jroll> I'm not opposed to continuing to put the url in config files, but this spec specifically calls that out as bad
21:36:01 <johnthetubaguy> jroll: yeah, I was adding a comment on that just now
21:36:01 <dolphm> jroll: that's the intention of the "internal" endpoint interface type
21:36:26 <johnthetubaguy> dolphm: maybe the filtering stops us handing that out to the user, and that's the bit that's handy
21:36:32 <jroll> dolphm: ok, cool. I want to make sure that doesn't get nuked here :)
21:36:40 <johnthetubaguy> dolphm: for glance it's a round-robin list of IPs in some cases
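For context on the interface types discussed above: a keystone v3 token's catalog carries one URL per interface per service, so deployers can point services at internal URLs that end users never see. The shape of one catalog entry, with made-up values for illustration:

    # one service entry from a v3 token catalog; URLs and region are invented
    catalog_entry = {
        "type": "compute",
        "name": "nova",
        "endpoints": [
            {"interface": "public", "region": "RegionOne",
             "url": "https://compute.example.com/v2.1"},
            {"interface": "internal", "region": "RegionOne",
             "url": "http://10.0.0.5:8774/v2.1"},
            {"interface": "admin", "region": "RegionOne",
             "url": "http://10.0.0.5:8774/v2.1"},
        ],
    }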
21:36:45 <nikhil_k> is this a good place to bring up the sub-project endpoints possibility?
21:37:11 <nikhil_k> I may just put that comment in the spec
21:37:31 <annegentle> nikhil_k: sure, would like an example
21:37:47 <annegentle> I'm glad this made it on the cross project agenda so I could get a sense of whether we've all looked at it
21:37:54 <annegentle> so it can go another week or so
21:38:00 <johnthetubaguy> annegentle: ah, so one possible issue
21:38:06 <johnthetubaguy> if you use keystone auth for a non-openstack project
21:38:12 <johnthetubaguy> you might get name clashes
21:38:22 <johnthetubaguy> in the catalog
21:38:30 <johnthetubaguy> like there could be two sorts of compute APIs
21:38:40 <nikhil_k> given we will establish a naming criterion it would be nice to assume that we may have more than one endpoint per project-scope
21:38:46 <annegentle> johnthetubaguy: yeah I'm not sure what to do about that, alluded to it with the tie-in to projects.yaml for service name
21:38:57 <devananda> on the keystone "give different endpoints for internal/external use" -- morganfainberg ?
21:38:59 <nikhil_k> I think the namespacing like the one above should help ^ ?
21:39:04 <johnthetubaguy> annegentle: I was just thinking os-compute rather than compute, but it's messy :(
21:39:25 <jroll> devananda: I think dolphm answered that for me :)
21:39:25 <annegentle> johnthetubaguy: yah
21:39:37 <devananda> jroll: oh, cool. i missed it in the scrollback then
21:39:49 <dolphm> morganfainberg is traveling today
21:39:55 <dolphm> our midcycle starts tomorrow
21:40:16 <annegentle> definitely comment on the review and I'll respond
21:40:27 <devananda> re: service names vs. project names -- i think this is more complex than we're giving it credit for
21:40:32 <nikhil_k> annegentle: "The tying between auth and getting a service catalog seems unnecessary" -- this might be tricky in some cases
21:41:18 <annegentle> devananda: seriously...
21:41:24 <nikhil_k> I think there are some value-added services that we keep optional and that may be exposed to, say, premium users
21:41:28 <devananda> "might get name clashes" is unacceptable. two different services with different REST APIs both registering with the same name in keystone? .....
21:41:55 <nikhil_k> in such cases exposing the endpoint might be irrelevant
21:42:17 <annegentle> devananda: I think we need to tie to governance/projects.yaml to ensure unique
21:42:34 <johnthetubaguy> devananda: so it's actually happened already, I think, we use the same auth for our old cloud and openstack cloud, I think the old one is called "compute" already, but I could be wrong
21:42:44 <devananda> annegentle: I agree. Except that puts the TC back in the seat of blessing things
21:42:55 <johnthetubaguy> annegentle: well I am talking non-openstack projects using keystone really
21:43:19 <devananda> johnthetubaguy: there are a lot of new openstack projects that don't have official service names, right?
21:43:25 <johnthetubaguy> "our" in this context being Rackspace, I need to stop using pronouns
21:43:30 <johnthetubaguy> devananda: very true, containers?
21:43:36 * devananda checks his assumption in the gov repo
21:44:16 <johnthetubaguy> devananda: well, I am thinking about the competing ones that are not yet registered as such, the folks looking in the door of the tent, or something
21:44:45 <nikhil_k> If someone has a single deployment (under the same keystone) as hybrid (pub + priv), then name clashes are evident
21:45:07 <johnthetubaguy> nikhil_k: that would be different regions I think
21:45:10 <kfox1111> there should be uniqueness per region though.
21:45:16 <devananda> exactly - unique per region
21:45:24 <annegentle> johnthetubaguy: yeah it's the competing ones I was thinking of orignally
21:45:30 <annegentle> (I can't spell)
21:45:46 <devananda> but if I had two computes in the same region, with different REST APIs, well, that's broken
21:45:54 <devananda> anyway, perhaps we're bikeshedding now
21:45:59 <kfox1111> x-container until approved? :)
21:46:05 <nikhil_k> hmm, I don't have a real-world use case where the same operator uses the same DC for pub and priv, so I'll take the region argument
21:46:11 <annegentle> devananda: we have that at Rackspace now with "first gen compute" and "next gen compute" and yea it is confusing
21:46:22 <johnthetubaguy> devananda: yeah, so Rackspace has that, sort of, but anyways, fun fun
21:46:27 <annegentle> :)
21:46:42 <nikhil_k> I guess we've enough spice to see the spec move forward in some direction
21:46:46 <annegentle> sure, thanks nikhil_k
21:47:06 <nikhil_k> I see tpatil sneaked in his etherpad
21:47:11 <tpatil> hi
21:47:15 <nikhil_k> #topic return request-id to caller
21:47:19 <tpatil> In the last meeting, lifeless suggested returning the x-openstack-request-id back to the caller in the response itself.
21:47:23 <nikhil_k> #link https://etherpad.openstack.org/p/request-id
21:47:28 <tpatil> We have analyzed and documented all cinder client methods' information in a Google spreadsheet
21:47:33 <tpatil> #link: https://docs.google.com/spreadsheets/d/1al6_XBHgKT8-N7HS7j_L2H5c4CFD0fB8xT93z6REkSk/edit?usp=sharing
21:47:43 <tpatil> There are 6 different values returned from volume/snapshots methods
21:47:51 <tpatil> list, dict, resource class object, None, Tuple(Response object, None), Cinder exception
21:48:00 <tpatil> For each of the above return values, we have identified what changes are required to pass response headers containing x-openstack-request-id back to the caller. You can find all that information on the etherpad
21:48:30 <tpatil> I would like to talk about a few limitations of solution #3
21:48:55 <tpatil> Deleting metadata is one big problem, as internally it deletes metadata keys one by one, so it’s not possible to return the response header back to the caller
21:49:11 <tpatil> Retrieving x-openstack-request-id from the response is not uniform across methods
21:49:23 <tpatil> You can see example of how to get x-openstack-request-id here
21:49:28 <tpatil> https://review.openstack.org/#/c/201434/2/nova/volume/cinder.py, refer to _cinder_volumes method.
21:49:48 <tpatil> POC: python-cinderclient patch: https://review.openstack.org/#/c/201428/
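The wrapper idea behind solution #3 and the ListWithHeader class tpatil mentions below looks roughly like this (a hand-written sketch, not the actual POC patch; client_get is a hypothetical stand-in for the client's HTTP helper):

    # sketch only: a return value that still behaves like a list but also
    # carries the id(s) from the x-openstack-request-id response header
    class ListWithHeader(list):
        def __init__(self, values, request_ids):
            super(ListWithHeader, self).__init__(values)
            self.request_ids = request_ids  # always a list, even for one call


    def list_snapshots(client_get):
        # client_get is a hypothetical helper returning (response, body)
        resp, body = client_get('/snapshots')
        req_id = resp.headers.get('x-openstack-request-id')
        return ListWithHeader(body['snapshots'], [req_id] if req_id else [])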
21:49:56 <johnthetubaguy> tpatil: I thought we were thinking about a list of request-ids, for the general case? I can't remember why that was a bad idea now
21:50:06 <lifeless> johnthetubaguy: I don't see why it would be a bad idea
21:50:15 <lifeless> clearly one-call-one-id is the special case
21:50:50 <tpatil> johnthetubaguy: for snapshot list, there will be a single request, so we have added ListWithHeader, which will contain the response header
21:52:07 <jroll> I think I like option one, where get_request_id() returns a list of request IDs
21:52:11 <johnthetubaguy> tpatil: sorry, not sure I get the comment, we can return a list of request-ids in all cases I think
21:52:25 <johnthetubaguy> jroll: agreed we should have one option
21:52:52 <tpatil> johnthetubaguy: Are you talking about deleting metadata case?
21:53:01 <jroll> tpatil: all cases
21:53:01 <johnthetubaguy> tpatil: talking about all cases really
21:53:05 <jroll> a list can have one item
21:53:16 <jroll> if there's one request, return a list of one request ID
21:53:35 <jroll> in the order the requests were made
21:54:00 <johnthetubaguy> yeah, I fell asleep just before the meeting, so only half awake, but I thought we discussed this a few weeks back, and went for a list of request-ids, I figured we found something bad with that and went back to a single request-id, but my mind is probably playing tricks on me
21:54:11 <lifeless> nope
21:54:32 <lifeless> the main thing last week was the discussion about return-with or separate-call or trigger-event
21:55:08 <johnthetubaguy> yeah, it was maybe the week before we got into request-ids vs a list, but it probably got buried somewhere
21:55:10 <nikhil_k> and I believe we wanted to return objects?
21:55:20 <jroll> return-with seems bad for the obvious reason that the return value may have different types
21:55:46 <jroll> separate-call seems like the most straightforward option, from a dev standpoint
21:55:55 <tpatil> jroll: that's one concern we have
21:56:15 <johnthetubaguy> I thought the idea was we pass the return value to a function that extracts the value for the user?
21:56:21 <johnthetubaguy> so we get the best of both worlds
21:56:23 <lifeless> jroll: so it's hugely racy and hard to get right, from a dev standpoint
21:56:40 <nikhil_k> I agree, it's very racy with a separate call
21:56:53 <lifeless> johnthetubaguy: yeah, or just an attribute at a well known place...
21:57:00 <lifeless> return values of None, True and False are where its tricky
21:57:02 <jroll> lifeless: thing = client.do_a_thing(); req_id = thing.request_ids()  # is racy and hard?
21:57:23 <lifeless> jroll: Thats return-with
21:57:40 <kfox1111> we going to have time for open discussion before the end?
21:57:42 <jroll> oh. ohhhh. I totally had this backward.
21:57:54 <lifeless> jroll: thing = client.do_a_thing(); req_ids = client.get_request_ids() # thats separate-call
21:57:55 <johnthetubaguy> jroll: I think the issue before was client.do_a_thing() client.get_request_id()
21:58:02 <johnthetubaguy> yeah, that
21:58:04 <nikhil_k> Ok, I think we have some interest for open discussion
21:58:09 <jroll> lifeless: yeah, got it. it doesn't return a response object
21:58:09 <tpatil> Request everyone to please add your feedback to the etherpad so that I can include it in the specs
21:58:19 <tpatil> by next meeting
21:58:26 <johnthetubaguy> so I think we have support for #3 here?
21:58:27 <nikhil_k> Looks like we need to move this discussion to the etherpad #link https://etherpad.openstack.org/p/request-id
21:58:38 <johnthetubaguy> we are just arguing about what it is called
21:58:38 <nikhil_k> tpatil: no meeting next week (pls scroll back)
21:58:45 <johnthetubaguy> but maybe thats just me?
21:58:50 <tpatil> I know, next-to-next meeting
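To make the race concrete (all names below are illustrative, not from any real client): with separate-call the ids live on the shared client object, so two threads can interleave between making a call and fetching its ids; with return-with, as in the ListWithHeader sketch above, the ids travel on the return value and no such window exists:

    import threading


    class SeparateCallClient(object):
        """separate-call shape: the last call overwrites the shared ids"""

        def __init__(self):
            self._last_request_ids = []

        def do_a_thing(self, req_id):
            self._last_request_ids = [req_id]
            return ['result']

        def get_request_ids(self):
            return self._last_request_ids


    client = SeparateCallClient()


    def call(req_id):
        client.do_a_thing(req_id)
        # if the other thread's do_a_thing() ran in between, we see its ids
        print('%s -> %s' % (req_id, client.get_request_ids()))


    threads = [threading.Thread(target=call, args=(r,)) for r in ('req-1', 'req-2')]
    for t in threads:
        t.start()
    for t in threads:
        t.join()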
21:59:03 <nikhil_k> #topic Open Discussion
21:59:12 <nikhil_k> maybe we can roll over a minute or two if needed
21:59:17 <kfox1111> The instance user spec could use some attention: https://review.openstack.org/#/c/186617/
21:59:18 <nikhil_k> kfox1111 ^
22:00:04 <annegentle> don't forget, tomorrow's the deadline for the Call for Speakers https://www.openstack.org/summit/tokyo-2015/call-for-speakers/
22:00:08 <annegentle> #link https://www.openstack.org/summit/tokyo-2015/call-for-speakers/
22:00:53 <hogepodge> I mentioned it earlier, but anyone who is interested in interop and testing standards please think about attending the defcore mid-cycle #link https://etherpad.openstack.org/p/DefCoreFlag.MidCycle
22:01:23 <elmiko> kfox1111: i'll take a look, sahara could really use something like this
22:01:35 <kfox1111> elmiko: thanks.
22:01:52 <kfox1111> yeah. most of the projects that build on top of VMs seem to need it.
22:02:04 <nikhil_k> Alright, we are out of time now.
22:02:04 <johnthetubaguy> kfox1111: thanks for moving this to the backlog, it would be good to get eyes on these ideas
22:02:06 <elmiko> i thought it was a cool idea last time this came up
22:02:12 <nikhil_k> Thanks for joining, have a great day/evening!
22:02:22 <nikhil_k> #endmeeting