17:00:05 <NobodyCam> #startmeeting Ironic
17:00:05 <NobodyCam> #chair devananda
17:00:06 <openstack> Meeting started Mon Jul 27 17:00:05 2015 UTC and is due to finish in 60 minutes.  The chair is NobodyCam. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:07 <NobodyCam> Welcome everyone to the Ironic meeting.
17:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:10 <openstack> The meeting name has been set to 'ironic'
17:00:11 <openstack> Current chairs: NobodyCam devananda
17:00:20 <NobodyCam> Of course the agenda can be found at:
17:00:20 <NobodyCam> #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
17:00:23 <jroll> \o
17:00:27 <lucasagomes> hi
17:00:29 <devananda> g'morning / afternoon / evening, everyone :)
17:00:33 <NobodyCam> #topic Greetings, roll-call and announcements
17:00:33 <NobodyCam> Roll-call: Who's here for the Ironic Meeting?
17:00:33 <BadCub> morning folks
17:00:41 <NobodyCam> good morning everyone
17:00:48 <Serge> Morning
17:00:52 <rloo> o/
17:01:08 <sinval_> morning
17:01:10 <NobodyCam> great to see everyone here
17:01:10 <jroll> I just added an open discussion thing btw, so whoever is leading pls refresh agenda :)
17:01:14 <NobodyCam> #topic announcements:
17:01:15 <NobodyCam> Ironic Mid-cycle is August 12th thru 15th in Seattle
17:01:32 <devananda> we currently have 23 sign-ups for the midcycle
17:01:45 <NobodyCam> FYI: I believe the address on the Eventbrite ticket is incorrect: 701 Pike St vs 701 East Pike St
17:02:01 <devananda> if you haven't already, please jot your name down on the etherpad (https://etherpad.openstack.org/p/ironic-liberty-midcycle) and "buy" a ticket (link is at the top)
17:02:12 <devananda> NobodyCam: eek!
17:02:25 * devananda checks
17:02:56 <jroll> devananda: I'll be walking 10 extra blocks thanks to that :P
17:03:17 <NobodyCam> https://www.google.com/maps/dir/701+Pike+St,+Seattle,+WA+98101/Edge+of+the+Circle+Books,+701+E+Pike+St,+Seattle,+WA+98122,+United+States/@47.6125878,-122.3319583,16z/data=!3m1!4b1!4m13!4m12!1m5!1m1!1s0x54906ab5024ab26f:0xd7c85864845c80c1!2m2!1d-122.3327557!2d47.6114473!1m5!1m1!1s0x54906acb9d4dd5d7:0x22368ac5213de5ea!2m2!1d-122.3232484!2d47.61395
17:03:23 <BadCub> form to give input on food-y related things for mid-cycle is: http://goo.gl/forms/MTCOHedcYi
17:03:32 <BadCub> I only have 15 responses so far
17:03:33 <devananda> NobodyCam: you are correct ...
17:04:06 <devananda> #info Address for the midcycle is WA State Convention Center, 701 Pike St.  ** Not East Pike **
17:04:18 <NobodyCam> anyone have the midcycle etherpad link handy??
17:04:37 <devananda> #link https://etherpad.openstack.org/p/ironic-liberty-midcycle
17:04:38 <jroll> #link https://etherpad.openstack.org/p/ironic-liberty-midcycle
17:04:41 <jroll> :|
17:04:47 <NobodyCam> :) ty both
17:05:17 <NobodyCam> any other announcements
17:05:33 <jroll> so I just realized all but one item on the "what do folks want to hack on" list are from me
17:05:42 <devananda> NobodyCam: yes - the meeting times
17:05:51 <jroll> so folks if there's things you want to get done, please add them
17:06:02 <NobodyCam> oh devananda ++
17:06:42 <devananda> I sent out a follow-up email to the ML about two weeks ago now (in response to the last meeting we had at this time)
17:06:45 <devananda> #link http://lists.openstack.org/pipermail/openstack-dev/2015-July/069363.html
17:06:49 <dtantsur> jroll, I'm looking forward to you all fixing the driver composition spec for me :)
17:06:52 <devananda> that got only one response on the ML (thanks, jroll )
17:06:59 <jroll> dtantsur: :)
17:07:04 <devananda> tldr; the alternating meeting times aren't working for us
17:07:22 <lucasagomes> yeah I haven't replied cause I've never attended the other meeting
17:07:25 <devananda> heh
17:07:40 <lucasagomes> but if the people that attend think it's not productive I'm +1 to remove it
17:07:43 <NobodyCam> for me the night time meeting is hard to attend as I fall asleep before it starts
17:07:48 <devananda> given that, in two weeks, no one has said "please don't cancel them" I'm going to, well, cancel them
17:08:00 <rloo> presumably all/most people in this meeting are fine with changing to have it now. maybe bring this up at the other date/time?
17:08:08 <devananda> #info alternate meeting times are going to stop after the midcycle
17:08:21 <BadCub> I don't attend the late night meetings at all
17:08:27 <NobodyCam> thank you devananda for the #info
17:08:45 <devananda> rloo: there wasn't even a meeting last week -- I cancelled it via the mailing list, with a reminder to discuss it if it was a problem, and no one did
17:09:09 <devananda> I, for one, am looking forward to having a meeting with all of you every week, instead of every other week
17:09:16 <dtantsur> ++
17:09:18 <devananda> NobodyCam: that's all for my announcements :)
17:09:46 <jroll> +1
17:09:59 <NobodyCam> anyone else?
17:10:08 <NobodyCam> if not moving on.
17:10:20 <NobodyCam> #topic SubTeam: status report
17:10:20 <NobodyCam> Posted on Whiteboard
17:10:21 <NobodyCam> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:10:44 <NobodyCam> thank you for the updates.. not a whole lot new there
17:10:55 <NobodyCam> anything to go over?
17:11:07 <dtantsur> please have a look at "testing" updates
17:11:07 <NobodyCam> ^^ re: the updates
17:11:09 <rloo> yay for neutron/ironic specs landing!
17:11:37 <NobodyCam> awesome work on that everyone!
17:11:40 <jroll> so, can we talk about the soft power / nmi interrupt thing that's currently under irmc?
17:11:55 <jroll> that's something that all drivers should be able to support, and there's a spec underway to do so
17:11:58 <devananda> jroll, Sukhdev: thank you (&others) for the work on the neutron ML2 spec. I'm really happy with where it is at now (approved, that is :) )
17:12:12 <devananda> jroll: yes pls
17:12:13 <jroll> I'm of the opinion that we shouldn't land a vendor passthru from something actively being worked on with a proper api
17:12:45 <dtantsur> will pxe_ipmitool support soft power?
17:13:02 <jroll> dtantsur: why not? ipmi supports it
17:13:02 <devananda> dtantsur: in principle, it could
17:13:13 * dtantsur just does not know :)
17:13:22 <jroll> ipmitool [...] power soft  # iirc
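[Editor's note: a minimal sketch of the "ipmitool ... power soft" command jroll mentions, showing how a driver might shell out to request an ACPI soft power-off. The real ironic ipmitool driver builds its command line differently; the helper name here is illustrative.]

```python
# Hedged sketch: how a driver might build the ipmitool argv for a soft
# power-off. build_soft_off_cmd is an illustrative name, not ironic's API.
def build_soft_off_cmd(address, username, password):
    """Return the ipmitool argv for an ACPI soft power-off ("power soft")."""
    return [
        'ipmitool',
        '-I', 'lanplus',   # IPMI v2.0 over LAN
        '-H', address,     # BMC address
        '-U', username,
        '-P', password,
        'power', 'soft',   # graceful (ACPI) shutdown, vs "power off"
    ]

# e.g. subprocess.check_call(build_soft_off_cmd('10.0.0.5', 'admin', 'secret'))
```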
17:13:39 <devananda> there are challenges inherent in supporting it, but those are the same regardless of driver
17:13:39 <lucasagomes> jroll, hmm not sure what the problem would be with landing those vendor passthrus, yeah it would be good to have the vendors work together on a proper implementation
17:13:44 <lucasagomes> but we can't force it
17:13:45 <devananda> SNMP may not be able to (I'm not sure)
17:13:48 <rloo> jroll: what's the spec you are mentioning?
17:13:54 <jroll> lucasagomes: it's *already being worked on*
17:14:03 <NobodyCam> devananda: I believe you are correct
17:14:03 <devananda> but any modern device with a BMC should be able to support it
17:14:07 <dtantsur> then I'd rather make it a proper API
17:14:15 <jroll> rloo: https://review.openstack.org//#/c/186700/
17:14:23 <jroll> lucasagomes: ^^
17:14:33 <NobodyCam> +1 for proper api
17:14:43 <lucasagomes> jroll, right, but still we don't offer any guarantees for vendor passthru
17:14:45 <jroll> it's being worked on via vendor passthru to support a customer request sooner
17:14:49 <lucasagomes> meaning that we don't have to keep backward compat
17:14:58 <lucasagomes> once the proper api lands we can just delete the vendor passthru ones
17:15:02 <jroll> lucasagomes: sure, but do we want to land 1000 lines of code only to remove it shortly after?
17:15:08 <devananda> fwiw, I believe AMT also supports it
17:15:18 <rloo> given the time it takes to approve a spec and get it working, and that naohiro is the author of the spec AND the irmc code, I'm fine with his code. who knows when that code will get approved.
17:15:21 <lucasagomes> jroll, sure yeah I agree, I just don't think it's a big deal
17:15:34 <jroll> I'm also concerned that the same person is working on the passthru api AND the official API, and incentive to do it properly drops if the passthru code lands
17:15:54 <jroll> also, most of the code in the passthru is things that will need to be done in the general case (poll for power state etc)
17:16:12 <dtantsur> ++
17:16:14 <devananda> I'm fine in principle for vendors to do new and challenging things in /vendor_passthru/
17:16:20 <devananda> we've already seen that with raid and vlan things
17:16:29 <jroll> lucasagomes: I've already spent a lot of time reviewing the passthru implementation and it hasn't gotten very far. I'm not inclined to spend that much time on both implementations
17:16:34 <lucasagomes> yeah the same person working on both is def odd
17:16:48 <devananda> however, this is really a common feature of almost all hardware
17:17:20 <dtantsur> The stated reason is that their customers want it *now*. I'm not sure it's a compelling reason for us...
17:17:20 <devananda> eg, it's not a matter of only one or two vendors' hardware supporting it
17:17:33 <jroll> dtantsur: right, implement out of tree in that case
17:17:33 <devananda> so it's odd for only one driver to support it
17:17:39 <lucasagomes> dtantsur, ++
17:18:00 <rloo> remind me again, what's the process for drivers proposing vendor-passthrus? do they have to write a spec or bug, or just propose code?
17:18:10 <jroll> just code afaik, rloo
17:18:11 <dtantsur> last time it was "just propose"
17:18:11 <lucasagomes> rloo, just propose the code
17:18:15 <devananda> rloo: just code in general
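[Editor's note: for context on the vendor-passthru mechanism being debated, here is a simplified standalone mimic of the decorator pattern ironic uses (the real decorator lives in ironic.drivers.base); class and method names below are illustrative only.]

```python
# Hedged sketch of the vendor-passthru pattern: driver authors decorate
# VendorInterface methods, and the API tier discovers them via metadata.
# This is a simplified mimic, not ironic's actual implementation.
def passthru(http_methods):
    """Mark a method as a vendor passthru endpoint (simplified mimic)."""
    def decorator(func):
        func._vendor_metadata = {'http_methods': http_methods}
        return func
    return decorator

class ExampleVendor:
    @passthru(['POST'])
    def soft_power_off(self, task, **kwargs):
        # a vendor-specific soft-off implementation would live here
        return 'soft off requested'

# The API tier can then enumerate passthru methods by their metadata:
methods = {name: getattr(ExampleVendor, name)._vendor_metadata
           for name in dir(ExampleVendor)
           if hasattr(getattr(ExampleVendor, name), '_vendor_metadata')}
```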
17:18:32 <jroll> do we want to like, vote on this or something?
17:19:28 <devananda> on the one hand, I'm of the mind that we shouldn't block a vendor who wants to do something that is implemented purely in /vendor_passthru/ -- and really, we can't, because they can just do it downstream
17:19:42 <rloo> so if naohiro had not proposed a spec, would we be asking this question?
17:19:51 <devananda> on the other hand, as ya'll already said, it's odd to have the same person writing the same feature in both vendor_passthru/ and in the common API
17:20:01 <jroll> rloo: I believe it was proposed in response to me asking the question :)
17:20:17 <lucasagomes> jroll, vote whether we should not land vendor passthru stuff if there's a spec proposed/merged?
17:20:22 <devananda> others have asked for this feature to be commonly supported in the past, but no one else stepped up to write it yet
17:20:34 <jroll> lucasagomes: vote on this case, idk abuot 'always'
17:20:41 <dtantsur> "they can just do it downstream" is a good reason not to rush landing something we have doubts about, at least as far as vendor passthru is concerned
17:20:56 <jroll> right, and I'm concerned we're spending a lot of dev/review cycles on this
17:20:56 <rloo> i think the "right" thing to do (to save everyone time etc) is for us to decide now that we'll support the spec and review/approve it in a timely fashion, and not review the vendor-passthru solution.
17:21:12 <jroll> if you look at the patch there's a lot of back-and-forth and at least one full rewrite
17:21:22 <rloo> but if we cannot give some sort of time frame for the spec, it seems unfair to block the code as is.
17:21:37 <dtantsur> rloo, neither can we give any sort of time frame for the patch
17:21:52 <dtantsur> I suspect more people would care about a generic thingy
17:22:17 <jroll> ^^
17:22:23 <rloo> dtantsur: yeah. that's true, no time frame for the patch. so ahhh, why do we need to vote on it. we could just not review it. personally decide I mean. (Oh, I said it out loud.)
17:22:46 <jroll> rloo: I think we need to give the author clear direction rather than just ignoring the patch
17:22:50 <dtantsur> rloo, we can (and many do), but this particular approach is not honest IMO
17:23:02 <jroll> people have ignored that patch enough already :)
17:23:04 <devananda> rloo: I think jroll and I may be the only ones who have reviewed the patch ... because it's a driver-specific patch to vendor_passthru and, well, like you said ....
17:23:12 <NobodyCam> I'm ok with landing the VP code as long as it gets updated to the new API in a timely manner
17:23:25 <dtantsur> nobody can guarantee
17:23:45 <jroll> I think it will take about the same amount of time to land the VP and the non-VP, so why not point effort at the more general case?
17:23:49 <devananda> rloo: so, there is a corresponding question for us as core reviewers -- do we hold /vendor_passthru/ to the same level of quality? do our users expect that of us?
17:23:59 <lucasagomes> right and this case in particular being the same author
17:24:03 <jroll> unless we just approve anything that passes tests, which it is right now
17:24:04 <lucasagomes> I think it makes sense to work on the spec
17:24:07 <devananda> rloo: if not, then why ignore the patch instead of just rubber stamping it? if so, then, well, we should have a different discussoin
17:24:16 <rloo> devananda and jroll: in your opinion then, since you've reviewed the patch, do you think it is worth spending core-reviewer time reviewing that, or reviewing the spec/future code?
17:24:19 <jroll> I'd like to timebox this to 17:30 btw, if devananda is cool with that
17:24:28 <NobodyCam> TY jroll
17:24:28 <devananda> jroll: ++ to timeboxing
17:24:37 <jroll> rloo: I don't think it is worth the time, that's why I brought it up :)
17:24:45 <Madasi> since they have the same author, we would prefer they spend the dev time on the general spec
17:24:54 <lucasagomes> Madasi, ++
17:24:59 <devananda> rloo: if this implementation were not vendor_passthru, it would definitely receive discussion and debate
17:24:59 <lucasagomes> yeah for this case this is fine ^^
17:25:00 <dtantsur> devananda, we won't be doing a person a good service if we land a patch that has problems
17:25:00 <rloo> ok, so i trust jroll's opinion. lets go with the spec then.
17:25:32 <rloo> i wasn't going to review that patch since there seem to be higher priority patches to review anyway.
17:25:37 <jroll> rloo: whoa, let's not go that far (trusting my opinion)
17:25:40 <jroll> :P
17:25:42 <dtantsur> lol
17:25:57 <jroll> alright, are we in agreement then?
17:26:01 <jroll> I can put a note on the patch
17:26:05 <NobodyCam> do we want a #agreed on this
17:26:06 <lucasagomes> ++
17:26:10 <dtantsur> ++
17:26:12 <rloo> jroll: trusting your opinion wrt effort to review? I think it is safe to do that :)
17:26:16 <jroll> +1 for #agreed
17:26:20 <devananda> rloo: I have applied a level of review to it that is less than I apply to core changes, but still thinking of "will this work for and provide value to the user?"
17:26:32 <devananda> jroll: what's the vote?
17:26:43 <jroll> we can vote instead
17:26:49 <rloo> devananda: is it a problem for the vendor to add this passthru out-of-tree?
17:26:58 <lucasagomes> devananda, if we should block that VP patch and focus on the spec for this particular change
17:27:11 <jroll> the vote sounds like > 50%
17:27:48 <rloo> jroll: don't you have to do some sort of  # vote thingy?
17:27:55 <devananda> #vote should we accept a /vendor_passthru/ change for SOFT_OFF, or require this particular functionality be implemented as a common feature first? Yes = "allow v_p" No = "require common code"
17:28:00 <devananda> #startvote should we accept a /vendor_passthru/ change for SOFT_OFF, or require this particular functionality be implemented as a common feature first? Yes = "allow v_p" No = "require common code"
17:28:01 <openstack> Begin voting on: should we accept a /vendor_passthru/ change for SOFT_OFF, or require this particular functionality be implemented as a common feature first? Valid vote options are Yes, allow, v_p, No, require, common, code, .
17:28:02 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
17:28:08 <devananda> #undo
17:28:09 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0xacb8c10>
17:28:12 <devananda> that didn't work, one sec
17:28:18 <devananda> #startvote should we accept a /vendor_passthru/ change for SOFT_OFF, or require this particular functionality be implemented as a common feature first? Yes No
17:28:19 <openstack> Already voting on 'should we accept a /vendor_passthru/ change for SOFT_OFF, or require this particular functionality be implemented as a common feature first'
17:28:28 <devananda> um..
17:28:29 <devananda> #endvote
17:28:30 <openstack> Voted on "should we accept a /vendor_passthru/ change for SOFT_OFF, or require this particular functionality be implemented as a common feature first?" Results are
17:28:34 <devananda> #startvote should we accept a /vendor_passthru/ change for SOFT_OFF, or require this particular functionality be implemented as a common feature first? Yes No
17:28:35 <openstack> Begin voting on: should we accept a /vendor_passthru/ change for SOFT_OFF, or require this particular functionality be implemented as a common feature first? Valid vote options are Yes, No.
17:28:36 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
17:28:47 <jroll> #vote no
17:28:52 <devananda> yes == "allow vp implementation"
17:28:56 <devananda> no == "require common change"
17:28:56 <lucasagomes> #vote no
17:28:58 <rloo> #vote no
17:28:58 <dtantsur> #vote no
17:28:59 <Madasi> #vote no
17:29:14 <TheJulia> #vote no
17:29:18 <JoshNang> #vote no
17:29:20 <NobodyCam> #vote no
17:29:21 <BadCub> #vote no
17:29:25 * jroll drafts comment
17:29:27 <natorious> #vote no
17:29:28 <thiagop> #vote no
17:29:31 <jroll> thanks everyone
17:29:37 <stendulker> #vote no
17:29:37 <NobodyCam> TY jroll
17:29:38 * rloo feels like changing vote just to be different
17:29:38 <krotscheck> o/
17:29:50 <NobodyCam> lol so hipster rloo
17:29:57 * TheJulia seconds this idea that rloo has
17:29:59 <devananda> giving it 30 seconds more
17:30:02 <lucasagomes> alright 18:30
17:30:06 <Nisha> #vote no
17:30:39 <devananda> #endvote
17:30:40 <openstack> Voted on "should we accept a /vendor_passthru/ change for SOFT_OFF, or require this particular functionality be implemented as a common feature first?" Results are
17:30:41 <openstack> No (13): TheJulia, Madasi, lucasagomes, rloo, BadCub, jroll, stendulker, dtantsur, Nisha, NobodyCam, JoshNang, natorious, thiagop
17:30:46 <devananda> all righty then
17:30:52 <NobodyCam> :)
17:31:02 * jroll updates review
17:31:03 <rloo> by the way, even though we voted -- it wasn't on the agenda that we'd discuss this, so it makes me a bit uneasy.
17:31:20 <NobodyCam> good to move on
17:31:21 <lucasagomes> yeah sounded like an open discussion thing
17:31:35 <NobodyCam> #topic Let's make a release
17:31:36 <lucasagomes> but alright, at least it sorted
17:31:36 <NobodyCam> jroll, that's you.
17:31:44 <dtantsur> release \o/
17:31:45 <jroll> sorry, I saw it on the subteam thing and had to ask
17:31:47 <jroll> ok so!
17:32:04 <jroll> we last released a server version at the end of kilo
17:32:08 <NobodyCam> jroll: agenda was light today so all is good
17:32:11 <jroll> it's been a few months, we've landed a bunch of code
17:32:14 <devananda> rloo: agreed. and that highlights that I (and collectively we all) should get better at putting things on the agenda
17:32:23 <jroll> I'd like to propose we release a new version of the server this week
17:32:32 <devananda> rloo: I hope that going back to weekly same-time meetings will help .the lack of continuity has been hard ...
17:32:40 <jroll> this will involve switching to semver, which appears non-trivial but I may be wrong.
17:32:58 <jroll> there's concerns that the addition of enroll will break users
17:32:59 <rloo> devananda: here's hopin' :)
17:33:04 <devananda> jroll: the semver switch got embroiled in a debate about packaging and epochs a few weeks ago -- i need to follow up with dhellmann on that
17:33:11 <devananda> jroll: unless you have updates to share?
17:33:32 <jroll> however currently, with the current client, the enroll thing is opt-in only, so I'd not like to wait on that discussion for a release.
17:33:35 <jroll> please discuss :)
17:33:41 <jroll> devananda: I don't, I need to investigate more
17:33:50 <devananda> jroll: also, I have been pretty vocal about my concern with the change to default state == ENROLL, and pushing that into the client's responsibility is not acceptable IMO
17:34:09 <jroll> devananda: folks have released liberty-1 in other projects, so I believe that may be solved.
17:34:21 <devananda> I failed to catch this during the spec review and apologize for not bringing it up / noticing it sooner
17:34:25 <jroll> devananda: right, so. I don't believe that is a breaking change.
17:34:49 <dtantsur> we also discussed it on the summit...
17:35:00 <jroll> devananda: it's opt-in right now, it's a small change to any automated systems that depend on that behavior
17:35:09 <devananda> dtantsur: I don't understand why there'd be any pushback to an opt-in approach as I mentioned on the ML
17:35:23 <jroll> devananda: if users choose to use that version of the API, they should be aware of the repercussions
17:35:38 <jroll> this *is* opt-in, by *choosing that API version*
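[Editor's note: a minimal sketch of the "opt-in by choosing an API version" point. Ironic clients pin a microversion via the X-OpenStack-Ironic-API-Version request header, so requesting 1.11 (ENROLL as initial node state) is an explicit choice; the helper name is illustrative.]

```python
# Hedged sketch: how a client pins an ironic API microversion per request.
# version_headers is an illustrative helper, not part of any client library.
def version_headers(version):
    """Headers a client might send to pin an ironic API microversion."""
    return {
        'X-OpenStack-Ironic-API-Version': version,
        'Accept': 'application/json',
    }

# Pinning below 1.11 keeps the pre-ENROLL node-create behavior:
legacy = version_headers('1.10')
# Opting in to 1.11 means newly created nodes start in the ENROLL state:
enroll = version_headers('1.11')
```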
17:35:42 <dtantsur> I don't want people to opt-out our changes in the state machine
17:35:51 <dtantsur> jroll, exactly. we have one opt-in already, let's not have more
17:35:57 <lucasagomes> yeah
17:36:17 <lucasagomes> I have thought and discussed about it for a while last week with dtantsur and jroll on IRC
17:36:29 <lucasagomes> IMO in the long run I believe that having the version mandatory would be the best approach
17:36:39 <lucasagomes> for the next release I think we should still pin at a specific version prior to ENROLL
17:36:40 <rloo> why didn't you guys summarize in that email thread?
17:36:51 <devananda> it's not opt-in if the client ever changes the version it defaults to
17:36:52 <jroll> if we're going to spend this time debating if 1.11 breaks users, then let's forget about releasing and take this back to the ML
17:36:54 <lucasagomes> but have a warning message to alert about the behavior change
17:37:10 <jroll> ok
17:37:24 <jroll> I'm not going to waste this meeting having this conversation for the tenth time
17:37:26 <rloo> jroll: for the release, there's going to be a corresponding client release, right?
17:37:33 <dtantsur> devananda, it is our approach to versioning that is broken, not enroll patch..
17:37:41 <jroll> let's just move on because we're never going to actually release with this discussion ongoing
17:37:43 <dtantsur> (me starts his favorite song)
17:37:50 <jroll> dtantsur: +1
17:37:56 <devananda> jroll: agreed - let's move on
17:37:57 <jroll> or our lack of approach
17:37:58 <devananda> dtantsur: also agreed
17:38:01 <NobodyCam> ack
17:38:08 <lucasagomes> yeah alright ML seems better indeed
17:38:14 <devananda> and midcycle
17:38:16 <rloo> can we have an action item for jroll + semver?
17:38:19 <jroll> so
17:38:19 <NobodyCam> #topic Open Discussion / Food For Thought
17:38:20 <jroll> wait
17:38:23 <jroll> wait
17:38:32 <NobodyCam> https://review.openstack.org/#/c/193439
17:38:35 <dtantsur> devananda, we can temporary disable 1.11 for the release. but we'll have to solve this problem in the future anyway
17:38:37 <jroll> we're going to wait for the midcycle to even consider releasing a thing?
17:38:40 <jroll> seriously?
17:38:46 <devananda> jroll: not what i meant
17:39:06 <devananda> jroll: just that the topic of versioning the API is probably something we'll discuss, quite possibly over many drinks, at the midcycle
17:39:16 <jroll> we tried to change our release model to release faster, now we've found out we don't know wtf versioned APIs mean and so block a release for a month (now) and probably longer
17:39:17 <dtantsur> so pity I won't be there :D
17:39:19 <devananda> but I hit enter too soon and then the topic changed
17:39:38 <NobodyCam> we can change back
17:39:43 <jroll> it's fine, go ahead
17:39:49 <jroll> I just wanted to ask / point that out
17:39:49 <lucasagomes> let's try to sort it out on the ML
17:39:53 <rloo> if we were doing liberty-1, wouldn't we just have done it regardless of the enroll stuff/state now?
17:39:58 <lucasagomes> jroll, mind summarizing the ideas today and sending an email?
17:40:04 <dtantsur> rloo, we would
17:40:05 <jroll> I will try, yes
17:40:09 <lucasagomes> we can continue from there, I don't think it should take much time
17:40:16 <lucasagomes> thanks
17:40:28 <NobodyCam> https://review.openstack.org/#/c/193439
17:40:34 <NobodyCam> sinval, you here?
17:40:38 <sinval_> yep
17:40:45 <NobodyCam> you're up
17:40:48 <NobodyCam> :)
17:41:02 * krotscheck is trying to get bethelwell to the midcycle, because webclient things.
17:41:07 <sinval_> so, a port can be created via a POST /v1/ports, with 'node_uuid' and 'address' specified in the body
17:41:33 <sinval_> this patch should add the possibility to create ports using node_name if the user wants it
17:41:50 <devananda> sinval_: node_name isn't a property of a PORT, however, and shouldn't be
17:42:02 <sinval_> hum
17:42:08 <lucasagomes> sinval_, so the advantage it brings is that it's easier to script? node-create ... name=blah && port-create -n blah
17:42:23 <sinval_> lucasagomes: yep
17:42:28 <lucasagomes> anything else?
17:42:37 <devananda> sinval_: the client could easily GET /nodes/<name> to fetch the UUID and use that in the BODY of POST /v1/ports/
17:42:39 <sinval_> i don't think so
17:42:43 <lucasagomes> cause we still can parse the return value from node-create and figure out the node uuid
17:42:48 <jroll> well
17:43:01 <lucasagomes> I see the advantage with the names
17:43:03 <jroll> so I think 'port-create --node-name blah' should be a thing
17:43:08 <rloo> what are we discussing? whether to allow the use of node name for port create?
17:43:08 <jroll> it makes the client a bit nicer
17:43:13 <lucasagomes> yeah
17:43:20 <jroll> and we can get_node_by_name() or whatever on the backend
17:43:24 <lucasagomes> so the thing is node_uuid in the ports is just part of the API object
17:43:31 <jroll> and set port.node_uuid
17:43:32 <lucasagomes> it's generated from node_id that is part of the RPC object
17:43:42 <rloo> well, regardless of how we have coded things now, does the idea of using node name make sense or not?
17:43:43 <sinval_> yes
17:43:43 <jroll> or port.node_id, I guess
17:43:45 <lucasagomes> I don't see the problem of having node_name in the ports for the API
17:43:58 <lucasagomes> we could keep both node_uuid and node_name
17:44:14 <devananda> lucasagomes: ah right. so the API tier can detect whether the supplied resource is uuid-like, and if not, try to look up the node by its name
17:44:18 <NobodyCam> we support names for nodes; I kinda feel we should be able to ref that name just like we do uuid, everywhere uuid would be used.. imho
17:44:22 <thiagop> do we need to keep them both, or is node_name unique?
17:44:24 <lucasagomes> if people like the idea of having a port created using the name
17:44:29 <devananda> so there's no need to do anything different in the client either
17:44:34 <lucasagomes> devananda, yup
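[Editor's note: a minimal sketch of the approach devananda and lucasagomes describe, where the API tier accepts one node identifier and checks whether it is uuid-like before falling back to a name lookup. The function names and the toy lookup table are illustrative; ironic uses a similar check from oslo's uuidutils.]

```python
import uuid

# Hedged sketch: detect whether a supplied identifier is uuid-like; if not,
# treat it as a node name. resolve_node_ident and nodes_by_name are
# illustrative, not ironic's actual code.
def is_uuid_like(value):
    """Return True if value parses as a canonical hyphenated UUID string."""
    try:
        return str(uuid.UUID(value)).lower() == value.lower()
    except (ValueError, AttributeError, TypeError):
        return False

def resolve_node_ident(ident, nodes_by_name):
    """Map a uuid-or-name identifier to a node uuid (toy lookup table)."""
    if is_uuid_like(ident):
        return ident
    return nodes_by_name[ident]  # a KeyError would become a 404 in a real API
```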
17:44:49 <lucasagomes> devananda, we can do that, so one could create a port either using the uuid or name
17:45:01 <thiagop> I'm good with having both of them
17:45:01 <lucasagomes> it won't be saved on the db or anything
17:45:04 <devananda> rloo: from a user POV, yes -- using a Node name to refer to one or more Ports is helpful
17:45:06 <lucasagomes> so if name changes on the node that's grand
17:45:28 <NobodyCam> lucasagomes: I like that
17:45:29 <lucasagomes> #link https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/port.py#L57-L75
17:45:30 <rloo> i'm a bit lost then, trying to figure out what we're discussing, need to agree with?
17:45:41 <sinval_> on the last patch, Ramakrishnan suggested that we should have two fields in the port object, one that already exists (node_uuid) and a new one (node_name) and both can now be used to create a port
17:45:51 <devananda> but we shouldn't store the node_name on the Port object, nor return it in the API. and as lucasagomes points out, we don't even need to modify the client or add a new --node-name option
17:46:02 <rloo> oh, we don't need the port object to save the node name. just the node id.
17:46:11 <rloo> like devananda said ++
17:46:11 <NobodyCam> ++
17:46:15 <devananda> sinval_: so, the Port DB object only stores the node_id (NOT uuid) today
17:46:17 <Madasi> is node_name unique?
17:46:24 <NobodyCam> yes
17:46:30 <lucasagomes> Madasi, it is, but it can be changed
17:46:35 <devananda> sinval_: the API tier returns the UUID to the user, since ID is internal only.
17:46:38 <lucasagomes> so it's not a canonical id for the resource
17:46:41 <devananda> sinval_: i do not see any reason for that to be changed
17:47:00 <sinval_> devananda: ok
17:47:13 <thiagop> if it can be changed, then we have to keep the option to use uuid
17:47:16 <devananda> Madasi: node.name is unique on the nodes table, but it is not used for referential integrity -- the node.id field is the referential key
17:47:33 <NobodyCam> so did we just #agree on lucasagomes' idea?
17:47:41 <devananda> NobodyCam: I support it
17:47:46 <Madasi> devananda: thanks, makes sense
17:47:48 <jroll> so one thing
17:47:54 <jroll> I think the problem with passing name there
17:48:03 <jroll> is the parameter is node_uuid, not node
17:48:10 <jroll> so node_uuid=my_name is weird
17:48:35 <NobodyCam> jroll: so update the param to node_id_name?
17:48:38 <rloo> jroll: didn't we have a similar issue with the node API? we changed/added 'node' I think, and still support 'node_uuid'?
17:48:56 <rloo> jroll: where 'node' can be uuid or name.
17:49:05 <sinval_> in the patch i changed the 'node_uuid' to 'node'
17:49:05 <thiagop> +1 to rloo 's idea
17:49:10 <jroll> rloo: indeed, I like that here too
17:49:19 <devananda> rloo: we changed that internally to simplify code paths
17:49:31 <devananda> the REST API for PUT still accepts both "uuid" and "name" fields -- and both are optional
17:49:38 <devananda> if not supplied, "uuid" is auto generated
17:50:10 <devananda> that's not the case for Ports, where the REST API requires a node_uuid parameter
17:50:14 <devananda> http://docs.openstack.org/developer/ironic/webapi/v1.html#Port
17:50:16 <NobodyCam> we have another item I do want to get to
17:50:27 * lucasagomes has an item as well
17:50:28 <rloo> devananda: yeah, that's for node-create. i was thinking of other node operations, where we had specified uuid before, but can specify uuid or name now.
17:50:31 <devananda> so jroll is correct about the REST API changes
17:50:44 <devananda> rloo: right - but the question here is for Port create
17:51:05 <devananda> let's come back to this on the review so we can move on now
17:51:19 <NobodyCam> ack
17:51:27 <NobodyCam> so then: Scheduling things
17:51:27 <NobodyCam> should we build a filtering API or a scheduler or both? Would be really useful both for nova and standalone installations.
17:51:37 <NobodyCam> jroll: thats you again :)
17:51:55 <jroll> ohai
17:51:56 <NobodyCam> #link https://review.openstack.org/#/c/204641
17:51:57 <devananda> oh ... this is much more than a 9 minute discussion
17:52:05 <NobodyCam> #link https://review.openstack.org/#/c/194453/
17:52:08 <lucasagomes> uuu yeah that will take some time
17:52:22 <devananda> also, I'm very strongly of the opinion that a placement / reservation / proper scheduler implementation inside ironic is absolutely crazypants
17:52:25 <NobodyCam> so table to ML or next meeting?
17:52:39 <jroll> devananda: so every standalone installation should build their own scheduler?
17:52:43 <lucasagomes> I left some suggestions on the spec, didn't catch up with the answers yet
17:53:14 <devananda> jroll: ironic should expose enough data to integrate with other schedulers. and in the absence of that, perhaps we integrate with *A* scheduler by default
17:53:17 <jroll> so what this came from was the nova midcycle
17:53:18 <devananda> right now that's Nova's scheduler
17:53:28 <lucasagomes> jroll, if the API offers a very fine grained way to query stuff, maybe the "give me a node with X,Y,Z" could even live in the client?
17:53:36 * lucasagomes reads the answer on the spec
17:53:37 <jroll> where we basically decided that if we want to be able to have more than one nova-compute, we couldn't use nova's scheduler
17:53:40 <dtantsur> lucasagomes, that would be racy
17:53:45 <devananda> so yes. if you want a scheduler AND you dont want to use nova's scheduler, then umm... use another one?
17:53:53 <lucasagomes> dtantsur, client already retries
17:54:08 <jroll> because the failure to have more than one nova-compute comes from nova scheduling a given node
17:54:21 <lucasagomes> dtantsur, it could be part of the set-provision-state active... or we can come up with another command to that
17:54:22 <jroll> and we want to remove nova scheduling a specific node at all
17:54:23 <dtantsur> lucasagomes, it's useless to retry as it is now
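The client-side flow lucasagomes and dtantsur are debating can be sketched as "filter, pick one, retry on conflict". A minimal simulation with an in-memory stand-in for the API (all names here are invented for illustration; neither `filter_nodes` nor `claim` exists in the real client) shows where the race lives: in the window between reading the candidate list and claiming a node.

```python
class ConflictError(Exception):
    """Stand-in for an HTTP 409 returned when another client claimed the node first."""

class FakeIronicAPI:
    """In-memory stand-in for a hypothetical filter/claim API."""
    def __init__(self, nodes):
        # node name -> properties; claimed nodes are removed from the pool
        self.nodes = dict(nodes)

    def filter_nodes(self, **query):
        # "give me a list of things matching <query>" -- a pure read
        return [n for n, props in self.nodes.items()
                if all(props.get(k) == v for k, v in query.items())]

    def claim(self, node):
        # the racy part: between filter and claim, another client may win
        if node not in self.nodes:
            raise ConflictError(node)
        del self.nodes[node]
        return node

def acquire_node(api, retries=3, **query):
    """Client-side scheduling: filter, pick the first match, retry on conflict."""
    for _ in range(retries):
        candidates = api.filter_nodes(**query)
        if not candidates:
            return None
        try:
            return api.claim(candidates[0])
        except ConflictError:
            continue  # someone else claimed it; re-filter and try again
    return None
```

With many concurrent clients the retry loop converges eventually, but every conflict costs a round trip, which is the "racy" cost being weighed against a server-side reservation.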
17:54:32 <devananda> jroll: I thought we had solved that in a different way previously, so I do not understand where this is coming from
17:55:07 <NobodyCam> *5 (five) minutes left*
17:55:08 <devananda> jroll: that makes no sense. nova-scheduler's job is to schedule the placement of a request on a resource that meets those needs
17:55:09 <jroll> devananda: the way we solved it was basically the same, except we scheduled a flavor and let the driver choose a node within that flavor at random
17:55:19 <devananda> jroll: sure. that's workable
17:55:37 <devananda> jroll: how did you get from that to "ironic should implement a scheduler" ?
17:56:07 <jroll> devananda: because a nova-compute per flavor can get pretty crazy pretty quick, and I think this is a better way to do it
17:56:09 <lucasagomes> devananda, jroll dtantsur one thing is true for both approaches... we need to make the json fields indexable
17:56:16 <jroll> devananda: to be clear, this could be "ironic implements a filter api"
17:56:22 <jroll> and then the driver chooses one at random
17:56:33 <NobodyCam> lucasagomes: ++
17:56:39 <devananda> lucasagomes: yes
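The "make the json fields indexable" point can be illustrated with a toy schema: a JSON blob column is opaque to the database, so any filter over it scans every row, while a field promoted to a real column can be indexed. The schema and column names below are illustrative only, not Ironic's actual schema.

```python
import sqlite3

# Properties stored as an opaque JSON blob cannot use an index; a field
# that filter queries need (here, cpu_arch) is promoted to a real column.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE nodes (
    uuid TEXT PRIMARY KEY,
    properties TEXT,          -- opaque JSON blob: not indexable
    cpu_arch TEXT             -- promoted field: indexable
)""")
conn.execute("CREATE INDEX ix_nodes_cpu_arch ON nodes (cpu_arch)")
conn.execute("INSERT INTO nodes VALUES (?, ?, ?)",
             ("node-1", '{"cpu_arch": "x86_64", "ram": 32768}', "x86_64"))

# A filter API can now answer this with an index lookup instead of a scan:
rows = conn.execute(
    "SELECT uuid FROM nodes WHERE cpu_arch = ?", ("x86_64",)).fetchall()
```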
17:56:41 <lucasagomes> devananda, jroll dtantsur could we start on having a spec for that? As an atomic change and then we can go on whether we expose it via API or have a new endpoint for it
17:56:48 <jroll> but that ends with races, and I think ironic can do smarter things if the scheduling is in the backend
17:57:14 <jroll> lucasagomes: yeah, totally, but I do want to start the conversation about where a scheduler lives
17:57:19 <lucasagomes> jroll, ++
17:57:28 <dtantsur> lucasagomes, I would even say we should eventually kill instance_info and properties with fire
17:57:31 <devananda> jroll: "give me a list of things matching <query>" is very different than "reserve N things for me that match <query>"
17:57:36 <dtantsur> and have them as properly defined fields
17:57:47 <jroll> devananda: it really isn't that different
17:57:47 <lucasagomes> I see the race advantage on having an api endpoint. But I also see an advantage on having the filters on the API so it's more pluggable for other systems
17:57:57 <lucasagomes> e.g. the dashboard can use the filters
17:57:58 <devananda> jroll: it's completely different from a distributed systems POV
17:58:06 <NobodyCam> *2 (two) minutes*
17:58:08 <devananda> one is access to indexable data
17:58:08 <lucasagomes> (tho ofc both could be implemented, one doesn't block the other)
17:58:19 <jroll> devananda: right, the latter is less racy, sounds like a win to me
17:58:22 <devananda> the other is synchronization and cluster management
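The distinction devananda is drawing shows up clearly in the shape of the two operations: a filter is a read over indexable data that callers may race on, while a reservation must match and remove atomically on the server side. A minimal sketch under those assumptions (all names hypothetical, with a lock standing in for whatever synchronization a real implementation would use):

```python
import threading

class NodePool:
    """Toy server-side pool contrasting a read-only filter with an atomic reserve."""
    def __init__(self, nodes):
        self._nodes = dict(nodes)   # node name -> properties
        self._lock = threading.Lock()

    def filter(self, **query):
        """Access to indexable data: no state change, callers may race."""
        with self._lock:
            return [n for n, p in self._nodes.items()
                    if all(p.get(k) == v for k, v in query.items())]

    def reserve(self, count, **query):
        """Synchronization: match and remove N nodes atomically, or fail whole."""
        with self._lock:
            matches = [n for n, p in self._nodes.items()
                       if all(p.get(k) == v for k, v in query.items())]
            if len(matches) < count:
                return []           # all-or-nothing: no partial reservation
            for n in matches[:count]:
                del self._nodes[n]
            return matches[:count]
```

`filter` is the uncontroversial endpoint everyone agrees on; `reserve` is the piece that pulls cluster-management responsibility into the service, which is the architectural shift being debated.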
17:58:28 <jroll> mhmmm
17:58:48 <jroll> you don't think a distributed system should be managing its own cluster?
17:58:54 <devananda> jroll: oh - I'm not saying it's not better -- it's just a different problem space
17:59:01 <jroll> ok
17:59:19 <NobodyCam> * less than 1 (one) minute *
17:59:20 <devananda> and I have been keeping Ironic out of that space thus far
17:59:32 <jroll> anyway, I'd love it if folks started thinking about this
17:59:39 <lucasagomes> jroll, ++
17:59:41 <devananda> in large part because that's what the rest of openstack (ostensibly) did
17:59:42 <jroll> I'll reduce my spec to just the filter API thing
17:59:45 <jroll> and we can go from there
17:59:59 <NobodyCam> jroll: sounds good TY
18:00:03 <NobodyCam> and that's time
18:00:04 <lucasagomes> jroll, indexable things?
18:00:06 <dtantsur> jroll, ++ filtering looks pretty uncontroversial (but please include capabilities)
18:00:08 <dtantsur> :)
18:00:09 <NobodyCam> Thank you everyone
18:00:11 <NobodyCam> great meeting
18:00:18 <devananda> so it is an architectural shift for us (and for openstack) if ironic goes there. worth discussing, certainly
18:00:29 <devananda> jroll: but yea, filter API is ++ from everyone afaict
18:00:35 <lucasagomes> thanks for the meeting everyone
18:00:36 <devananda> NobodyCam: thanks for keeping us on time :)
18:00:42 <jroll> lucasagomes: yeah, being able to index things and build an api around it
18:00:42 <NobodyCam> :-p
18:00:45 <jroll> thanks everyone
18:00:49 <jroll> see y'all later
18:00:49 <sinval_> thank you guys
18:00:50 <lucasagomes> jroll, ++ cool
18:00:52 <dtantsur> thanks
18:00:55 <NobodyCam> see everyone back in channel
18:01:08 <NobodyCam> #endmeeting