17:00:22 <devananda> #startmeeting ironic
17:00:23 <openstack> Meeting started Mon Jul 13 17:00:22 2015 UTC and is due to finish in 60 minutes.  The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:24 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:27 <TheJulia> o/
17:00:28 <openstack> The meeting name has been set to 'ironic'
17:00:29 <lucasagomes> o/
17:00:30 <NobodyCam> o/
17:00:32 <devananda> #chair NobodyCam
17:00:32 <openstack> Current chairs: NobodyCam devananda
17:00:35 <stendulker> o/
17:00:37 <krtaylor> o/
17:00:57 <jroll> oh hello
17:01:17 <devananda> the agenda, though light, is here: https://wiki.openstack.org/wiki/Meetings/Ironic
17:01:32 <JoshNang> o/
17:01:47 <devananda> and I need to apologize for my absence last week and lack of preparation for the meeting today.
17:01:56 <devananda> #topic announcements
17:02:15 <rloo> o/
17:02:20 <rameshg87> o/
17:02:26 <devananda> probably the biggest thing to announce today is just a reminder for our midcycle
17:02:44 <wanyen> o/
17:02:55 <NobodyCam> :)
17:03:41 <NobodyCam> has everyone filled out the Mid-Cycle Lunch questions?
17:03:59 <devananda> we've got an etherpad started, though very light at this point
17:04:06 <devananda> #link https://etherpad.openstack.org/p/ironic-liberty-midcycle
17:04:09 <jroll> reminder that lunch questions and an invite for tuesday dinner/drinks are here
17:04:13 <jroll> ah, deva beat me
17:04:22 <NobodyCam> :)
17:04:36 <devananda> and if you haven't, please "buy" a free ticket from eventbrite so I can track attendees with the site coordinators
17:04:50 <jroll> perhaps we should start tracking what we want to hack on?
17:04:54 * NobodyCam thinks he has but is not sure
17:04:55 <jroll> or is it too early?
17:05:16 <devananda> #link https://www.eventbrite.com/e/openstack-ironic-sprint-august-2015-tickets-17533862254
17:05:19 <devananda> jroll: not too early at all
17:05:38 <jroll> k :)
17:05:47 <devananda> NobodyCam: you would have gotten a confirmation email from eventbrite ...
17:06:15 <NobodyCam> :)
17:06:32 * devananda checks attendee list
17:06:36 <devananda> NobodyCam: no - you have not signed up
17:06:42 <NobodyCam> oh
17:06:50 <NobodyCam> does BadCub have a +1?
17:07:01 <devananda> NobodyCam: this doesn't take +1's
17:07:02 <BadCub> NobodyCam: yes
17:07:06 <jroll> lol
17:07:17 <BadCub> I ordered two tickets if memory serves
17:07:18 <devananda> BadCub: oooh. you *do* list this as 2 tickets
17:07:24 <devananda> please don't do that :)
17:07:27 <NobodyCam> lol
17:07:36 * devananda wonders how he can disable that
17:07:38 <BadCub> ugh
17:07:45 <jroll> maybe BadCub was ordering two chairs so he can put his feet up
17:07:52 <BadCub> hehehe
17:07:53 * NobodyCam will sign up to buy the free ticket
17:07:57 <jroll> something something lazy PMs
17:07:59 <jroll> :P
17:08:02 <devananda> jroll: that seems reasonable
17:08:07 <NobodyCam> lol
17:08:19 <devananda> ok - any other announcements from folks?
17:08:40 <lucasagomes> just a reminder python-ironicclient gate is broken :-(
17:08:51 <lucasagomes> there's a patch fixing it but gate is pretty slow right now
17:09:05 <lucasagomes> #link https://review.openstack.org/201043
17:09:23 <devananda> lucasagomes: thanks!
17:09:39 <jroll> lucasagomes: thanks, wasn't sure if you and ruby had decided how you wanted to order that :P
17:09:46 <devananda> lucasagomes: seems like that would affect other projects, no?
17:09:57 * jroll +2
17:10:02 <lucasagomes> devananda, yup it did. It affected pretty much all projects
17:10:05 <jroll> devananda: it's just unit tests
17:10:12 <jroll> ironic/nova are already fixed up though
17:10:13 <lucasagomes> ironic is already fixed, but I forgot to look at python-ironicclient on friday
17:10:18 <lucasagomes> just found out it was broken this morning
17:10:18 <devananda> gotcha
17:10:31 <devananda> ok, moving on
17:10:31 <rloo> jroll: yeah, lucasagomes and I have a plan :)
17:10:34 <devananda> #topic subteam reports
17:10:38 <jroll> mock the world, break the world.
17:10:39 <lucasagomes> jroll, re order, me and rloo are working on it
17:10:50 <jroll> lucasagomes: rloo ok :)
17:11:09 <rloo> jroll: it is actually, we didn't use mock properly, and mock is now telling us :)
17:11:09 <jroll> networking subteam report: these specs are *so* ready.
17:11:21 <jroll> rloo: right, I know :)
17:11:24 <dtantsur> one more announcement: don't forget to submit your summit talk ;)
17:11:30 * dtantsur already did
17:11:39 * jroll assumes dtantsur is giving a talk about microversions
17:11:44 <devananda> jroll: I'm going to dig into that spec again today, i promise
17:11:45 <lucasagomes> lol
17:11:48 <dtantsur> LOOOL :D
17:11:50 <devananda> lol!
17:12:02 <dtantsur> something with gifs, like in Vancouver
17:12:12 <jroll> heh
17:12:38 <devananda> dtantsur: oh speaking of microversions, you should review my update: https://review.openstack.org/#/c/196320
17:13:00 <dtantsur> will do! let the flame war begin :)
17:13:10 <jroll> devananda: devref in specs? :/
17:13:22 <jroll> why isn't devref in ironic tree?
17:13:36 <devananda> jroll: a) we should have a devref in ironic tree (mostly just reorg of what's there)
17:13:43 <devananda> jroll: b) because we have very long lived specs
17:13:57 <devananda> which are aspirational and not completed in one (or two) cycles
17:14:00 <jroll> devananda: okay
17:14:01 <jroll> right
17:14:10 * lucasagomes adds to his todo list
17:14:15 <rloo> why not call them 'long-lived' then?
17:14:32 <devananda> rloo: I'm not tied to the name "devref"
17:14:48 <devananda> but 'aspirational' doesn't seem to instill confidence in our users :P
17:15:01 * rloo will look/comment in the patch itself later :)
17:15:18 <devananda> other subteams want to chime in?
17:15:22 <dtantsur> yep
17:15:40 <dtantsur> I'd like our simple inspector gate to join ironic experimental pipeline https://review.openstack.org/#/c/198381/
17:15:55 <dtantsur> with a goal of eventually joining other pipelines :)
17:16:01 <wanyen> the secure boot for pxe-ilo spec has been there for a very long time, please review
17:16:05 <devananda> dtantsur: ++
17:16:30 <NobodyCam> dtantsur: will add to my review list but ++ on the idea
17:16:36 <dtantsur> thnx!
17:17:01 <lucasagomes> oh talking about gate, devananda I think this is waiting for you https://review.openstack.org/#/c/199494/
17:17:18 <devananda> any updates on docs or qa? or those folks still out on PTO ?
17:17:24 <lucasagomes> making pxe_ipa gate jobs voting (it's been running reliably since March)
17:17:38 <Seth__> I'd like to help
17:17:53 <NobodyCam> I know jlvillal is out
17:18:17 <devananda> lucasagomes: ack, adding to my list
17:18:24 <rloo> wrt docs, sigh. https://review.openstack.org/#/c/191900/. you know how they/we use 'bare metal service' vs 'ironic'
17:18:40 <rloo> lana seems open to using just 'ironic' instead of 'bare metal services' in the install guide
17:18:56 <rloo> i'm not quite sure that makes sense but am mentioning it
17:19:46 <devananda> i thought that was just about service name capitalization -- not about whether to use project vs service name?
17:20:11 <rloo> devananda: well, the install guide is being 'cleaned up' in that patch. and we use both 'bare metal service' and 'ironic' in that guide.
17:20:29 <rloo> devananda: so i asked them if they were cleaning it up, why they left some 'ironic's around...
17:20:44 <rloo> devananda: i suspect i should just stick with reviewing code
17:21:13 <devananda> hrmm
17:21:23 <rloo> devananda: specifically, line 1645 for comments: https://review.openstack.org/#/c/191900/6/doc/source/deploy/install-guide.rst
17:21:28 <devananda> so I will give it a skim, but overall I'd like the docs team to help us
17:21:39 <rloo> devananda: yeah, i was hoping the doc team would help us...
17:21:46 <devananda> under the assumption that they know more about making words that non-developers will understand than I do
17:21:54 <devananda> so I think this is them trying to help us
17:22:47 <devananda> rloo: ok, let's discuss this outside the meeting
17:23:00 <rloo> devananda: i'm fine if you make an executive decision :)
17:23:03 <devananda> i need to read the discussion on that doc change...
17:23:04 * NobodyCam adds to his list of open tabs
17:23:32 <devananda> going to time box this section since we have the etherpad status, too
17:23:39 <devananda> thanks, all, for the reports :)
17:23:54 <devananda> #topic API retries
17:24:21 <devananda> hrm, this item on the agenda doesn't follow the format for agenda items
17:24:49 <lucasagomes> Oh why? I have added it
17:24:54 <devananda> lucasagomes: it's your bug report -- https://bugs.launchpad.net/ironic/+bug/1472565
17:24:54 <openstack> Launchpad bug 1461140 in Ironic "duplicate for #1472565 conflict (HTTP 409) incorrect for some cases" [Undecided,New] - Assigned to Ruby Loo (rloo)
17:25:12 <devananda> ah, great. the floor is yours :)
17:25:15 <lucasagomes> yeah later on I found out it was a duplicate. But I kept this link because I put some suggestions there
17:25:41 <lucasagomes> So basically our client retries on every 409 (Conflict)
17:26:05 * dtantsur wants it to do more retries btw..
17:26:06 <lucasagomes> but in some situations I think it makes no sense to retry, for example, when one tries to create a port whose MAC address is already registered
17:26:24 <lucasagomes> this is not something that the server will fix up eventually so we shouldn't retry
17:26:25 <rloo> or if you try to create a node with an existing name :-(
17:26:30 <lucasagomes> yeah
17:26:41 <lucasagomes> I added two suggestions about how to fix it in the bug
17:26:41 <jroll> so I tend to think that client auto-retries are just a band-aid, an anti-pattern if you will
17:26:48 <jroll> and we should just fix the real issue
17:26:56 <jroll> which is that the number of locks is too damn high
17:27:03 <devananda> jroll: ++
17:27:14 <rloo> well, sometimes you just have to wait...
17:27:15 <dtantsur> I did it because it's hard to use Ironic right now without retries, but I'm open for better fix :)
17:27:30 <jroll> rloo: sure, and the error message should indicate that :)
17:27:31 <dtantsur> yeah, today I saw hardware where power on request took 17 seconds
17:27:40 <lucasagomes> right, so one suggestion would be to use a header Retry-After
17:27:41 <lucasagomes> http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.37
17:28:08 <devananda> dtantsur: that shouldn't block the client, though
17:28:17 <lucasagomes> and the client would look at it and would only retry in case the header is specified. The value of that header is the number of seconds the client should wait before it retries
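
A minimal sketch of suggestion 1 as lucasagomes describes it, using a toy WSGI app rather than Ironic's real Pecan/WSME API code (paths and data here are hypothetical): the server sends Retry-After only for transient conflicts such as NodeLocked, and omits it for permanent ones such as a duplicate MAC, so clients can tell which 409s are worth retrying.

```python
from wsgiref.simple_server import make_server

LOCKED_NODES = {'node-1'}  # hypothetical: nodes currently reserved by a conductor


def app(environ, start_response):
    path = environ.get('PATH_INFO', '')
    if path.startswith('/v1/nodes/') and path.split('/')[3] in LOCKED_NODES:
        # Transient conflict: the lock will go away, so invite a retry.
        start_response('409 Conflict', [('Content-Type', 'text/plain'),
                                        ('Retry-After', '2')])
        return [b'Node locked by conductor, retry later\n']
    if path == '/v1/ports':
        # Permanent conflict (e.g. duplicate MAC): no Retry-After,
        # because retrying the same request can never succeed.
        start_response('409 Conflict', [('Content-Type', 'text/plain')])
        return [b'A port with that MAC address already exists\n']
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'ok\n']


if __name__ == '__main__':
    make_server('', 6385, app).serve_forever()
```
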
17:28:20 <dtantsur> devananda, but nothing is possible while this happens
17:28:22 <NobodyCam> lucasagomes: could we switch some of the 409s to 406s?
17:28:24 <devananda> dtantsur: hardware IS slow though
17:28:38 <dtantsur> * nothing = no operations except for get
17:28:44 <lucasagomes> NobodyCam, another option would be to change the return code yes.
17:28:44 <devananda> dtantsur: nothing is possible *for that Node, because it's locked by the driver during that time?
17:28:47 <lucasagomes> I suggested 422 for that
17:28:54 <rloo> NobodyCam: that's the second suggestion, don't use 409 for the non-retries.
17:28:58 <devananda> dtantsur: or nothing is possible *at all* because the conductor is frozen?
17:29:09 <dtantsur> devananda, sorry, late evening :) for this node obviously
17:29:14 <devananda> dtantsur: ok :)
17:29:27 <devananda> dtantsur: there's a bug with the dell driver that blocks even other nodes
17:29:46 <dtantsur> yeah, yeah.. no, that's about one node
17:29:55 <jroll> I also tend to think 409 is a bad status code for "node is locked", it's not a client error which 4xx designates
17:30:01 <devananda> so re: 409, I agree that we're overloading the meaning of Conflict
17:30:12 <devananda> jroll: right
17:30:30 <lucasagomes> IMHO I believe 409 is correct for the situations we described, re creating a port with a duplicated mac address
17:30:35 <devananda> 409 is the correct error for duplicate mac, duplicate name, things like that
17:30:42 <jroll> agree
17:30:50 <devananda> I think it's also the correct error for invalid state transitions
17:30:54 <lucasagomes> yes
17:31:16 <lucasagomes> this is even merged in the API guidelines, to use 409 for async operations
17:31:18 <dtantsur> 503 service unavailable then? looks a bit too much, but maybe..
17:31:23 <lucasagomes> when you try to start something which is already started
17:31:29 <devananda> dtantsur: no - that means the service as a whole is down
17:31:35 <dtantsur> yep
17:31:36 <devananda> dtantsur: gateways and proxies will interpret that
17:31:39 <jroll> yeah, the hard part is that no 5xx codes really fit well
17:31:49 <lucasagomes> that's why I like suggestion 1), because then we can indicate whether we should retry or not on 409
17:32:08 <dtantsur> 520 Unknown Error? :D
17:32:08 <devananda> are there cases that 409 is incorrect for, aside from NodeLocked ?
17:32:14 <lucasagomes> dtantsur, we know the error
17:32:41 <dtantsur> yep, just no other codes fit even remotely IMO
17:32:57 <jroll> to reiterate, I don't believe that retrying is good behavior for the client. besides the fact that it's just slapping a bandaid on the problem, what if the node is locked because it's doing some operation that changes the state of the node, after which maybe you don't want your request to go through?
17:33:19 <dtantsur> well, I do want
17:33:31 <lucasagomes> jroll, right, yeah that's why I think suggestion 1) would be good. Because it gives the server the power to say
17:33:36 <lucasagomes> this is retryable and this is not
17:33:36 <rloo> according to https://en.wikipedia.org/wiki/List_of_HTTP_status_codes#4xx_Client_Error, 403 == locked?
17:33:37 <dtantsur> I have a script that invokes series of operations, and unless previous one failed I'd like to proceed
17:33:47 <lucasagomes> the client just needs to respect the header
17:33:56 <NobodyCam> rloo 423?
17:33:58 <jroll> lucasagomes: if we do (1) I still don't think the client should retry
17:34:04 <rloo> NobodyCam: oh yeah, 423.
17:34:12 <jroll> hah. 423 sounds reasonable :P
17:34:22 <dtantsur> jroll, so you suggest everyone to continue implementing own retries? nova, inspector, downstream scripts...
17:34:59 <devananda> are there any other cases?
17:35:27 <jroll> dtantsur: yes. or perhaps we add a method to the python client, or an argument or whatever, to make it retry. but I don't think it should retry by default.
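
A sketch of what jroll's opt-in behavior could look like client-side when combined with the Retry-After idea; max_retries is a hypothetical parameter and requests stands in for python-ironicclient's HTTP layer, so neither is the real client API.

```python
import time

import requests  # stands in for python-ironicclient's HTTP layer


def request_with_retries(method, url, max_retries=0, **kwargs):
    """Retry a 409 only when the caller opted in (max_retries > 0) AND the
    server marked the conflict as transient via a Retry-After header."""
    resp = requests.request(method, url, **kwargs)
    for _ in range(max_retries):
        retry_after = resp.headers.get('Retry-After')
        if resp.status_code != 409 or retry_after is None:
            break  # success, or a permanent conflict: never retry those
        time.sleep(int(retry_after))
        resp = requests.request(method, url, **kwargs)
    return resp
```
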
17:35:40 <devananda> if the only issue is around NodeLocked - perhaps the solution is, after all, to move those retries into the API, and after some #, return a timeout
17:35:41 <rloo> devananda: cases for retrying? I think only when node is locked? or out of workers?
17:35:50 <devananda> rloo: ah - out of workers, yes
17:36:00 <dtantsur> devananda, ++ the best solution IMO
17:36:01 <devananda> that's a great example of a server-side issue
17:36:05 <jroll> devananda: the conductor already retries on NodeLocked, iirc
17:36:08 <dtantsur> out of workers gives 503, no?
17:36:15 <devananda> rloo: and really highlights that this isn't a 4xx error at all
17:36:17 <dtantsur> jroll, not always
17:36:32 <dtantsur> jroll, IIRC node-update fails early if it detects lock presence
17:36:40 <devananda> NodeLocked and ConductorOutOfWorkers are transient server-side errors
17:37:04 <jroll> dtantsur: that's the api I guess
17:37:08 <jroll> but the conductor always retries
17:37:11 <jroll> https://github.com/openstack/ironic/blob/master/ironic/conductor/task_manager.py#L191
17:37:27 <devananda> I mean, that's not what I thought a year or two ago, but that is becoming clear
17:37:33 <dtantsur> yeah, that's true. but e.g. in inspector node-update fails if something is going on with a node
17:37:38 <rloo> yes, NoFreeConductorWorker == 503
17:37:55 <jroll> I think the real solution is to lock less
17:38:00 <jroll> power sync loop shouldn't lock
17:38:08 <NobodyCam> jroll: ++
17:38:15 <jroll> agent heartbeats probably shouldn't lock by default
17:38:18 <dtantsur> jroll, what about power on/off?
17:38:18 <devananda> the problem with 423 is that it represents a REST API client's ability to lock a node, which we do not expose
17:38:23 <jroll> that eliminates 90%
17:38:32 <jroll> dtantsur: I'm not sure
17:38:42 <jroll> probably should? I'd have to look at it more
17:38:46 <devananda> jroll: ++ to power sync loop using shared lock, escalating to exclusive lock IFF it needs to power on/off the node
17:38:58 <devananda> anyone want to file bug & fix that ^ ?
17:38:59 <dtantsur> jroll, but that's the source of problems in my today's case (power on/off taking 17 seconds)
17:39:01 <rloo> devananda: well, the client indirectly locks by issuing a request that causes a lock on the node.
17:39:21 <jroll> dtantsur: I feel like that's not normal. you should RMA that machine. :)
17:39:29 <devananda> jroll: agent heartbeat locks because it goes through vendor passthru
17:39:41 <jroll> devananda: right, passthru shouldn't lock by default.
17:39:43 <devananda> jroll: it's a great test case, though!
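
A sketch of devananda's shared-then-escalate suggestion for the power sync loop; acquire(shared=True) and upgrade_lock() mirror Ironic's TaskManager API of this era, but treat the details as illustrative rather than exact.

```python
from ironic.conductor import task_manager


def _sync_power_state(context, node_id):
    # Shared lock for the read-only comparison: other work on the node
    # is not blocked while we poll the BMC, even if it takes seconds.
    with task_manager.acquire(context, node_id, shared=True) as task:
        actual = task.driver.power.get_power_state(task)
        if actual == task.node.power_state:
            return  # in sync, and we never held an exclusive lock
        # Only escalate to an exclusive lock when we actually need to act.
        task.upgrade_lock()
        task.driver.power.set_power_state(task, task.node.power_state)
```
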
17:40:17 <dtantsur> jroll, then it should be an explicit error, but I don't want people to report bugs about "node locked error" :)
17:40:33 <jroll> ironic has real issues that make it hard to use and we're just patching it over by making clients retry automatically
17:40:47 <jroll> which doesn't help for people not using the official client, either
17:41:02 <jroll> dtantsur: then we should make the error messages better, too.
17:41:02 <dtantsur> we can retry in API, as devananda suggested above..
17:41:02 <lucasagomes> right
17:41:12 <devananda> I would like to time box this discussion -- these are all very good points and I think we agree on the problems
17:41:26 <lucasagomes> yeah I would like at least an action plan for it
17:41:40 <dtantsur> do people like idea of retries on API level?
17:41:42 <lucasagomes> I can take a look at stop locking the nodes in some parts
17:41:43 <devananda> lucasagomes: do you have time / want to coordinate fixing these issues?
17:41:49 * dtantsur can write a spec
17:41:56 <lucasagomes> devananda, yes
17:42:02 <jroll> dtantsur: I'm not sure what "retry on api level" means. the api retries the rpc calls?
17:42:05 <lucasagomes> I think rloo was/is looking at solving it too
17:42:14 <dtantsur> jroll, at first glance, yes
17:42:32 <devananda> I would like to see an outline of these problems -- unfortunately, yea, a spec is probably the right way to go, just to make it digestible to everyone
17:42:44 <devananda> because this is going to affect several areas of the project
17:42:54 <jroll> dtantsur: the rpc calls that lock a thing already retry, I think we just need to remove the 'reservation' check in node-update etc
17:43:02 <rloo> to be clear then, until there is a spec etc we shouldn't make any more changes, like extending the retrying at the api level?
17:43:31 <dtantsur> jroll, I have to research more, I can't say for sure right now
17:43:40 <jroll> dtantsur: same
17:43:41 <devananda> rloo: lest we all try to solve this in different ways, probably a good idea
17:44:05 <devananda> #agreed we all feel that there are issues with the current locking model, especially around 409 Conflict and NodeLocked
17:44:08 <jroll> rloo: devananda +1
17:44:22 <lucasagomes> right, let's investigate which areas we may be overusing the locking of nodes
17:44:25 <devananda> #agreed lucas and dmitry are going to put a plan together to address these
17:44:30 <lucasagomes> as jroll has pointed out
17:44:31 <dtantsur> ack
17:44:33 <rloo> makes sense. are we good with LOCKED = 409? I don't think so.
17:44:39 <devananda> thanks much!
17:44:57 <dtantsur> rloo, changing an error code is a breaking change btw
17:45:01 <devananda> #topic open discussion
17:45:10 <rloo> dtantsur: I know. will leave that to the spec to discuss :)
17:45:17 <devananda> dtantsur would, i'm sure, like to say some things about API versions
17:45:19 <lucasagomes> dtantsur, yeah we probably will need to use microversions for it
17:45:28 <dtantsur> \o/
17:45:32 <devananda> I have some strong opinions as well on them, which I wrote into a revision of the old spec
17:45:46 <devananda> #link https://review.openstack.org/196320
17:46:13 <jroll> I don't understand the -compatible header
17:46:37 <jroll> idk if you want to explain here or in the patch
17:46:54 <devananda> jroll: see the ref material in the patch, it's explained there
17:47:25 <jroll> devananda: I don't see any new references?
17:47:27 <devananda> wait, no it's not :(
17:47:31 <devananda> urgh. one sec
17:47:44 <jroll> hah
17:47:44 <devananda> jroll: http://www.gnu.org/software/libtool/manual/libtool.html#Updating-version-info
17:48:07 * dtantsur always hated libtool versioning
17:48:36 <jroll> devananda: ctrl+f compatible gives me nothing relevant
17:49:01 <jroll> this says bump the version if you change the api
17:49:08 <dtantsur> the only thing that we're trying to achieve with hiding features is to prevent people from "cheating" and not requesting the correct version, right?
17:49:08 <devananda> also, before I forget, I want to bring up the topic of meeting times again
17:49:49 <devananda> I did a poll on this a while back, and got ~17 responses
17:49:53 <NobodyCam> the night time meetings are very hard for me to attend
17:50:02 <jroll> dtantsur: IMO it's valuable because you can know exactly what versions they are in, and thus if your ironic has them or not
17:50:08 <NobodyCam> esp with daylight savings time
17:50:12 <jroll> dtantsur: in other words I like sean's take on it
17:50:21 <jroll> dtantsur: though I don't think we have time to talk about this atm
17:50:24 <devananda> NobodyCam: I've missed several of the 0500 GMT meetings as well
17:50:30 <dtantsur> jroll, it's not about hiding features, it's about stating versions. but yeah, better on the spec.
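
For reference, the negotiation under discussion is driven by a request header; below is a quick example of a client pinning a version. The header names match what Ironic's API uses, though the exact semantics are what the spec under review is debating, and the endpoint is hypothetical.

```python
import requests

IRONIC = 'http://ironic.example.com:6385'  # hypothetical endpoint

# Ask the server to behave as a specific API version.
resp = requests.get(IRONIC + '/v1/nodes',
                    headers={'X-OpenStack-Ironic-API-Version': '1.9'})

# The server echoes the version it served and advertises its supported
# range, which is what makes client-side negotiation possible.
print(resp.headers.get('X-OpenStack-Ironic-API-Version'))
print(resp.headers.get('X-OpenStack-Ironic-API-Minimum-Version'),
      resp.headers.get('X-OpenStack-Ironic-API-Maximum-Version'))
```
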
17:50:52 <lucasagomes> ++ to not talk about micro versioning now
17:50:55 * NobodyCam tends to fall asleep with laptop in his lap :-p
17:51:36 <devananda> the responses were more in favor of keeping the meeting, even though I do not feel that the 0500 meetings are productive
17:51:54 <lucasagomes> I have never attended the 0500 meeting because the time is just too bad for me. Would be good to listen to the people that attend it, see if they find it useful or not
17:52:04 <dtantsur> I answered "keep" because I didn't want people to be excluded. But if core team does not attend them, then I'd change my vote..
17:52:08 <devananda> I'm mentioning it now in case anyone wants to discuss -- i'm going to write up my thoughts and post to the ML (it's much overdue)
17:52:08 <jroll> I agree that 0500 meetings aren't typically productive
17:52:17 <rloo> devananda: for the ones that wanted to keep the meeting -- are they happy keeping the meeting if no cores attend?
17:52:20 <devananda> dtantsur: yea, usually the core team isn't there
17:52:28 <devananda> rloo: probably not :)
17:52:35 <lucasagomes> ++ for ML
17:52:38 <devananda> we do have 2 cores in that timezone, though
17:52:45 <rloo> that's the problem with polls... can't get to the nitty gritty details.
17:52:46 <devananda> or, well, not in US/EU
17:52:50 <devananda> yea
17:52:51 <rameshg87> I am one of them .. (almost sleepy now)
17:52:57 <devananda> rameshg87: indeed :)
17:53:10 <devananda> rameshg87: also hi there!
17:53:19 <NobodyCam> rameshg87: thank you for being here :)
17:53:25 <lucasagomes> we could ask Haomeng if he would be able to attend that meeting more often
17:53:37 <rameshg87> I too typically feel nothing much happens in the 0500 meeting. I would personally rather prefer this time every week :)
17:53:58 <BadCub> I personally don't do the 0500 mtg at all.
17:54:14 <Seth__> + for ML
17:54:19 <devananda> rameshg87: you're the most active core that that meeting is attempting to serve -- and if you'd rather just have this time, that makes it easy
17:54:47 <devananda> we completely miss mrda-away with this time, however. *sigh*
17:54:48 <rameshg87> devananda: I am all for this time rather than having a not-much-of-a-meeting at 0500 GMT
17:54:58 <devananda> rameshg87: thanks
17:55:10 <rloo> is there some other time that works?
17:55:17 <devananda> ok - i appreciate everyone's feedback. will get a post up shortly
17:55:50 <devananda> rloo: there is no time that works for everyone, and this seems to work for the majority pretty well, and we're all used to it
17:56:12 <devananda> also, 5 minutes left - and it's open discussion :)
17:56:15 <rloo> devananda: well, i mean another time that works for most cores + others that can't make this meeting.
17:56:48 <rloo> dtantsur has something: https://review.openstack.org/#/c/166386/
17:57:04 <rloo> of course, it is microversion related
17:57:12 <dtantsur> my beloved microversions :)
17:57:27 <rloo> dtantsur: what do you want us to do about that? +1?
17:57:48 <devananda> dtantsur: fwiw, I would like to just call it "API version negotiation"
17:57:54 <devananda> because there's really nothing "micro" about it
17:57:56 <dtantsur> I'm mostly just bringing attention, if someone has time to help them land it
17:58:08 <jroll> ++ for versions
17:58:26 <dtantsur> what about milliversions?
17:58:32 <devananda> :P
17:58:51 <lucasagomes> names are hard, tho yeah negotiation makes more sense at least in my understanding
17:58:51 <gabriel-bezerra> dtantsur: http://hintjens.com/blog:85
17:59:02 <gabriel-bezerra> "The End of Software Versions"
17:59:20 <devananda> dtantsur: shorthand "μv"
18:00:01 <dtantsur> oh awesome!
18:00:02 <jroll> ok I gotta run, thanks everyone
18:00:07 <NobodyCam> thats time
18:00:14 <NobodyCam> great meeting all
18:00:15 <dtantsur> hmm right
18:00:16 <devananda> cheers - thanks everyone! see you next time!
18:00:17 <dtantsur> thanks!
18:00:20 <lucasagomes> thanks
18:00:22 <NobodyCam> :)
18:00:22 <rameshg87> bye
18:00:27 <devananda> #endmeeting