17:00:25 <jroll> #startmeeting ironic
17:00:26 <openstack> Meeting started Mon Nov 16 17:00:25 2015 UTC and is due to finish in 60 minutes.  The chair is jroll. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:27 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:29 <devananda> o/
17:00:29 <openstack> The meeting name has been set to 'ironic'
17:00:30 <dtantsur> o/
17:00:32 <jroll> hi everyone!
17:00:35 <vdrok> o/
17:00:36 <sambetts> o/ hey
17:00:36 <lucasagomes> hello all
17:00:37 <krtaylor> o/
17:00:38 <mariojv> \o
17:00:38 <mjturek1> o/
17:00:41 <yuriyz> o/
17:00:42 <pc_m> /me lurking
17:00:44 <cinerama> o/
17:00:45 <Nisha> o/
17:00:45 <TheJulia> o/
17:00:56 <rloo> o/
17:00:57 <cdearborn> o/
17:01:11 <jroll> #topic Announcements and reminders
17:01:17 <rameshg87> o/
17:01:31 <rpioso> o/
17:01:37 <vsaienko> o/
17:01:52 <jroll> just a quick reminder that we're now using reno for release notes - please remember to submit release notes with patches where needed, and also to remember to check for them in reviews
17:02:02 <jroll> see doug's thread on the ML for more details about it
17:02:13 <dtantsur> for all changes? only substantial changes?
17:02:40 <jroll> dtantsur: yeah, primarily substantial changes - think about what we already put in our release notes
17:02:55 <rloo> jroll: is there a patch to add release notes for stuff that landed in M* already?
17:03:27 <jroll> rloo: not yet, I'll be doing that this week
17:04:05 * krtaylor needs to read more about it
17:04:20 <lucasagomes> #link https://review.openstack.org/#/c/242147/
17:04:23 <lucasagomes> for those interested
17:04:25 <rloo> jroll: thx
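[Editor's note: for context on the reno workflow mentioned above — a minimal sketch of what a reno release note looks like. The slug, hash, and note text here are invented for illustration; `reno new <slug>` generates the file name, and the section keys (`features`, `upgrade`, `fixes`, etc.) come from reno's conventions.]

```yaml
# releasenotes/notes/example-fix-1234abcd.yaml
# (file created by running: reno new example-fix)
fixes:
  - |
    Example note describing a bug fix that users should know about.
upgrade:
  - |
    Example note describing an operator-visible upgrade impact.
```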
17:04:50 <jroll> anyone have other announcements?
17:05:06 <pas-ha> o/
17:05:21 <lucasagomes> jroll, maybe about mid-cycle
17:05:46 <lucasagomes> folks please take a look at the ML (will grab the link) and vote on whether or not to have a mid-cycle this cycle
17:05:58 <rloo> jroll: maybe about your comment about a release soon?
17:06:05 <jroll> ++
17:06:07 <davidlenwell_> o/
17:06:15 <lucasagomes> #link http://lists.openstack.org/pipermail/openstack-dev/2015-November/079119.html
17:06:16 <lucasagomes> that's it
17:06:19 <jroll> rloo: that was just a stray thought, I need to look at some things before I commit to it :)
17:06:29 <rloo> jroll: ok then :D
17:07:11 <jroll> thanks for that lucasagomes
17:07:16 * jroll moves on
17:07:23 <jroll> #topic Subteam status reports
17:07:31 <jroll> I'll give everyone (and myself) a moment to review
17:07:35 <jroll> #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:08:26 <rloo> jroll: wrt boot interface, yeah, one patch landed. let me find it...
17:09:33 <rameshg87> rloo: https://review.openstack.org/#/c/216538/ (not sure if you wanted to put it here)
17:09:52 <rloo> rameshg87: i put it in the etherpad. thx.
17:10:41 <jroll> jlvillal: any updates on nova things? (/me suspects you aren't here and that's fine)
17:10:50 <jroll> krotscheck: betherly: any updates on frontend stuff?
17:11:29 <rloo> wrt py 2.6 and ironic client. is there some rule or something that says how long we have to support py 2.6?
17:11:51 <dtantsur> rloo, as long as the last release supports it (aka Juno)
17:12:04 <dtantsur> i.e. when Juno EOL's, we're fine to say goodbye to it
17:12:49 <rloo> dtantsur: thx. lintan ^^ answer to your question.
17:12:51 <jroll> well, folks were keeping 2.6 around for clients too
17:12:53 <pas-ha> rloo, probably when last RHEL w/o default Py27 is EOL'd
17:13:03 <devananda> dtantsur: that brings up a point we need to discuss -- when do we EOL ironic's juno support?
17:13:34 <lucasagomes> I think clients are supposed to continue supporting 2.6 AFAIUI
17:13:38 <dtantsur> devananda, max(official OpenStack EOL, when nobody actually is ready to support it)
17:13:42 <jroll> but sounds like the plan is to drop 2.6 for clients now, based on https://review.openstack.org/#/c/244275/
17:13:45 <dtantsur> lucasagomes, jroll, I was addressing it
17:13:50 <devananda> I believe Kilo was our first officially supported release, so we ought to be able to drop py26 now, actually
17:13:59 <jroll> devananda: this is about the client
17:14:06 <devananda> jroll: oh. nvm :)
17:14:08 <jroll> we've dropped 2.6 for the server already
17:14:12 <dtantsur> lucasagomes, jroll, ironic has not supported 2.6 for a while. client - when Juno goes EOL
17:14:13 <jroll> afaik
17:14:21 <jroll> ok, thanks dtantsur :D
17:14:23 <devananda> jroll: yah, I thought we had ...
17:14:28 <dtantsur> (that what I learned last time)
17:14:31 * devananda now notices the word "client" in rloo's question
17:14:51 <davidlenwell_> devananda needs more coffeee
17:15:05 <lucasagomes> alright cool
17:15:06 <rloo> so regardless of dropping py26 from oslo, we agree we won't drop from client until after we EOL juno.
17:15:18 <dtantsur> rloo, if oslo also drops it, we have to as well
17:15:25 <pas-ha> rloo, client uses oslo stuff
17:15:40 <rloo> dtantsur, pas-ha: oh right!
17:15:49 <jroll> juno EOL is happening real soon now, fwiw
17:15:58 <jroll> which is likely why this is coming up
17:16:12 * devananda notes that there is no stable/juno branch of python-ironicclient
17:16:18 <dtantsur> yeah, so as soon as our first Oslo dep drops Py26, we have to drop it as well, essentially..
17:16:32 <dtantsur> devananda, I think stable branches for clients appeared in Kilo
17:16:34 <pas-ha> devananda, stable branches in clients appeared in Kilo only AFAIR
17:16:42 <devananda> right
17:16:53 <devananda> I'm clearly missing something (besides coffee)
17:17:02 <jroll> devananda: there's no technical link
17:17:12 <dtantsur> coffee fixes most of the problems, have some :)
17:17:16 <jroll> devananda: but folks apparently agreed to drop 2.6 in clients at the same time as juno eol
17:17:39 <devananda> jroll: huh
17:18:12 <jroll> devananda: I suspect part of that was to not support 2.6 in infra just for clients
17:18:23 <rloo> jroll: does that mean one of us should +1 to indicate ironic is good with it? https://review.openstack.org/#/c/244275/
17:18:34 * dtantsur welcomes freeing some infra resources
17:19:07 <jroll> rloo: yeah, let me verify some things and then I will +1 that and post the same for ironicclient
17:19:25 <rloo> jroll: thx. i'll add your AI to the subteam report :)
17:19:30 <jroll> and probably an ironicclient release with a major version bump to indicate it
17:19:41 <rloo> jroll: oh, you already added something so we're good
17:20:30 <jroll> :)
17:20:54 <jroll> anything else on subteam report things?
17:21:46 <jroll> #topic Open discussion
17:21:50 <jroll> dtantsur: I know you had a thing here
17:22:08 <vsaienko> I have a question about ironic multi-tenant testing in community. Do we have any thought how to perform it on CI?
17:22:31 <dtantsur> jroll, yeah, thanks
17:22:34 <jroll> vsaienko: define "multi-tenant testing"
17:23:00 <dtantsur> I'd like to mention that we're somewhat stuck in designing a proper OpenStackClient interface for ironic
17:23:16 <dtantsur> and we need more (MOAR!) opinions: http://lists.openstack.org/pipermail/openstack-dev/2015-November/078998.html
17:23:20 <pas-ha> jroll, that is how are we going to test the new Ironic/Neutron feature on gates
17:23:26 <devananda> dtantsur: moar is better :)
17:23:53 <jroll> vsaienko: pas-ha: oh, we've had a lot of discussion about that within the subteam. people have some ideas and are working on them afaik
* lucasagomes has to comment back on the CLI stuff
17:24:03 <dtantsur> right now nearly everyone has their own idea how to do it, so I don't even know how to reach a consensus there
17:24:17 <lucasagomes> but I kinda like sambetts's suggestion there
17:24:26 <dtantsur> maybe we need a spec, maybe some voting, whatever...
17:24:39 <rloo> dtantsur: i admit, i haven't gotten around to looking at openstackclient and how that translates to the new openstack baremetal commands. Is it clear except for this particular 'provision' one?
17:24:40 <pas-ha> jroll, we should probably then ask on ironic-neutron subteam meeting then
17:24:42 <dtantsur> I'm not asking to solve the problem right now, but it would be cool at least to find a direction
17:25:00 <dtantsur> rloo, CRUD operations are more or less clear.. it's power and provision that are troublesome
17:25:07 <rloo> dtantsur: i would have preferred a spec or something that showed all openstack baremetal * corresponding to existing ironic * commands.
17:25:13 <dtantsur> OSC has guidelines for CRUD commands
17:25:18 <jroll> vsaienko: pas-ha: tl;dr create an isolated tenant network in ovs land and make sure things happen as expected (can't access control plane etc)
17:25:32 <pas-ha> jroll, thanks
17:25:43 <jroll> np
17:25:54 <rloo> dtantsur: it seems like it would be easier (for me) if I saw such a list and could ok it, then it just needs to be implemented. are we doing them each command at a time?
17:26:01 <devananda> dtantsur: is there a writeup of the broader discussion on OSC somewhere? I was following it pre-summit but lost track since then
17:26:30 <dtantsur> devananda, I don't think so... I kind of agree with rloo, we need a spec on it
17:26:48 <dtantsur> rloo, thrash has a series of patches, with node CRUD already landed IIRC
17:26:57 <jroll> dtantsur: so, I'll go back to the thread this week, but I tend to think of it like english. "openstack baremetal power off uuid", openstack baremetal deploy uuid, etc. but idk if that matches with their guidelines
17:27:07 <devananda> jroll: ++
17:27:24 <dtantsur> I like it,
17:27:32 <dtantsur> deploy==state active, right?
17:27:40 <jroll> yeah
17:27:48 <devananda> for OSC, I'm inclined to go with simple-is-better and hide some of the complexity we might expose in the library
17:27:56 <dtantsur> for example?
17:27:57 <jroll> +1
17:28:16 <jroll> manage, inspect, provide
17:28:25 <jroll> teardown
17:28:43 <dtantsur> jroll, is it a list of commands you suggest?
17:29:03 <jroll> dtantsur: I guess, rough brainstorming, I can formalize my thoughts in email
17:29:26 <lucasagomes> jroll,  http://lists.openstack.org/pipermail/openstack-dev/2015-November/079029.html
17:29:28 <dtantsur> jroll, please do. it sounds like a good compromise, and it will move the whole process
17:29:36 <jroll> most of our verbs seem fine, as is, delete is weird
17:29:38 <jroll> I will
17:29:42 <devananda> I'll catch up on the ML thread and respond as well
17:29:43 <dtantsur> because these patches have been around since the end of Kilo
17:29:43 <jroll> sorry for being mostly afk last week :(
17:29:49 <sambetts> jroll: that's going towards the suggestion I had, I did a version covering all the current CLI functionality
17:29:50 <devananda> looks like a lot of discussion happened last week while I was travelling
17:29:50 <lucasagomes> something like that?
17:30:04 <jroll> sambetts: right
17:30:11 <dtantsur> jroll, yeah, deleted and active look weird, +1
17:30:26 <devananda> "activate" also sounds weird in that context
17:30:46 <jroll> dtantsur: note that when we want to make a node active, the api request is "deploy"
17:30:48 <jroll> iirc
17:30:50 <dtantsur> I'm ok with s/active/deploy/
17:31:14 <sambetts> yeah, that makes sense to me
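[Editor's note: the verb-style OSC interface being discussed above might look like the following. These command shapes are proposals under discussion in the linked ML thread, not an existing or final interface.]

```shell
# Hypothetical "plain english" command shapes -- none of these are final
openstack baremetal deploy <uuid>      # today's provision-state "active"
openstack baremetal power off <uuid>
openstack baremetal manage <uuid>
openstack baremetal inspect <uuid>
openstack baremetal provide <uuid>
```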
17:31:59 <vdrok> I have a question about ci testing of drivers that are not yet merged - should ci be set up before they are merged in tree?
17:32:19 <dtantsur> one moment, do we more or less agree on the OSC design?
17:32:24 <vdrok> e.g. lenovo driver that is being proposed
17:32:49 <dtantsur> if so, I leave it up to jroll and devananda to communicate, right?
17:32:52 <jroll> dtantsur: I think we have enough agreement to post that to the ML
17:32:55 <jroll> and yeah that's fine
17:33:11 <dtantsur> thanks, I'm giving the mic to vdrok now :)
17:33:13 <jroll> vdrok: great question, I've been asking myself the same thing recently
17:33:14 <rloo> could we get that in a spec, eg similar to what sam did in http://lists.openstack.org/pipermail/openstack-dev/2015-November/079029.html
17:33:36 <devananda> vdrok: i think before we can require that of new drivers, we need to have that set up for existing drivers
17:33:49 <lucasagomes> devananda, ++
17:33:53 <devananda> vdrok: but in the fullness of time, yes, I think requiring CI as part of a driver submission would be reasonable
17:33:58 <jroll> devananda: yeah, mostly agree
17:34:09 <jroll> I think this cycle, don't require it but make them aware of the deadlines we're setting
17:34:10 <devananda> krtaylor: thoughts ^ ?
17:34:10 <lucasagomes> we gave other drivers 2 cycles (counting this one) to setup the CI
17:34:14 <jroll> next cycle, totally require it
17:34:19 <devananda> jroll: right
17:34:29 <lucasagomes> so I think we could accept new drivers for now (beginning of this cycle)
17:34:41 <lucasagomes> and let them know that they have to work on the CI soon
17:34:46 <rloo> jroll: would a subteam/something about third party CI help? don't we need to inform folks first? if that doesn't happen soon, the 2-cycle clock will start later
17:34:48 <lucasagomes> jroll, ++ for next cycle
17:34:55 <devananda> rloo: we have one already :)
17:35:01 <vdrok> jroll, yup, makes sense
17:35:09 <jroll> rloo: comms going out this week btw
17:35:17 <lucasagomes> rloo, there's a cross project group about 3rd party ci
17:35:21 * lucasagomes finds the wiki
17:35:43 <jroll> (this week is later than we wanted, but thingee and myself got busy)
17:35:46 <lucasagomes> #link https://wiki.openstack.org/wiki/Meetings/ThirdParty
17:35:52 <krtaylor> sorry, I am in another meeting too, high latency
17:36:01 <rloo> ok, just want to make sure we don't forget to do whatever (eg communicate) in a timely fashion :)
17:36:35 <jroll> totally
17:36:45 <krtaylor> yes, we need to, regardless of the spec status
17:36:48 <rloo> to be clear, I mean communicating that we expect CI from third party drivers, and this is the plan/timeframe for it to happen, etc.
17:36:58 <jroll> yep
17:37:04 <jroll> there's already drafts
17:37:13 <jroll> we should also land that spec this week
17:37:18 <rloo> thx jroll
17:37:22 <krtaylor> jroll, if you are busy, I can work with thingee and send out the first email
17:37:48 <krtaylor> jroll, agreed, I'll get a revision to the spec today
17:37:51 <jroll> krtaylor: nah, I got it. the hard part has been both of us traveling, I'm not going anywhere until thanksgiving day :D
17:37:59 <krtaylor> perfect
17:38:00 <sambetts> rloo: https://etherpad.openstack.org/p/IronicCI
17:38:06 <jroll> thank you for the offer though :)
17:39:47 <pas-ha> I would remind of an idea from summit to register/enable *all* drivers on the gate, at least in a simple non-voting job
17:40:04 <sambetts> pas-ha: Its in the spec :)
17:40:11 <pas-ha> cuz some are totally broken ATM
17:40:34 <pas-ha> sambetts, cool, thx
17:40:38 <jroll> wait
17:40:51 <jroll> I think this is different
17:41:01 <pas-ha> yes, I just thought so too
17:41:17 <jroll> what pas-ha is talking about is running a job with enabled_drivers=everything,we,have,
17:41:26 <pas-ha> exactly
17:41:32 <jroll> because apparently enabling irmc breaks conductor startup right now
17:41:39 <jroll> which I totally forgot about
17:41:44 <sambetts> oooooh! thats an interesting one!
17:41:46 <jroll> and will add to my todo list to deal with right now
17:41:47 <devananda> ++
17:41:52 <jroll> thanks for the reminder
17:41:59 <pas-ha> jroll, np
17:42:08 <jroll> deal with may mean delegating fwiw :P
17:42:27 <pas-ha> jroll, happy to help
17:42:46 <jroll> cool, I will ping if needed. thanks
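[Editor's note: a sketch of the "enable all drivers" idea discussed above — a hypothetical non-voting job could start the conductor with every in-tree driver enabled via ironic.conf. The exact driver list here is illustrative, not exhaustive.]

```ini
# /etc/ironic/ironic.conf (sketch -- illustrative driver list)
[DEFAULT]
enabled_drivers = pxe_ipmitool,agent_ipmitool,pxe_irmc,...
```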
17:42:48 <lucasagomes> ouch
17:44:36 <devananda> on the topic of 3rd party CI, I'd like to see the guidelines on it (what we expect, how folks should set it up, etc) in the developer docs, not in the specs repo or wiki
17:44:38 <pas-ha> basically you don't even have to _enable_ all, just install deps for all/as many as possible
17:44:57 <jroll> devananda: yes, that's the goal. the spec is informational and to get consensus on the details of the plan
17:45:56 <devananda> pas-ha: drivers are not in requirements.txt (or variants thereof). what you're suggesting would basically mean we need to add driver-requirements.txt to openstack global requirements
17:46:14 <devananda> because we can't install anything from pip that isn't in g-r within infra gate
17:46:24 <pas-ha> devananda, ouch
17:46:29 <devananda> on top of that, several drivers do not have any installable packages
17:46:36 <jroll> yeah, it gets weird
17:46:43 <devananda> eg, they need to be built from source. or the driver author only releases an SDK ...
17:46:48 <jroll> but I'd like to see what we can do here
17:47:01 <dtantsur> devananda, I think we can install anything, if it's only voting on our project, and infra does not have to fix it ;)
17:47:05 <devananda> we actually pulled a few drivers out of g-r about two cycles ago
17:47:12 <devananda> dtantsur: nope
17:47:14 <dtantsur> but yeah, many drivers are not on PyPI
17:47:21 <jroll> I'd rather this not vote
17:47:48 <pas-ha> jroll, sure, as any breakage in a third-party lib could wedge the gate
17:47:53 <devananda> dtantsur: if it's voting on Ironic, it can affect other projects, and so infra will care
17:47:53 <jroll> yeah
17:48:10 <jroll> it's unclear to me if nv jobs can have non-g-r things
17:48:13 <dtantsur> devananda, ok, I meant to say "only run on ironic and is never voting"
17:48:23 <devananda> dtantsur: ah. if non-voting, maybe :)
17:48:49 <jroll> so like I said, I'll poke around and investigate this
17:48:50 <dtantsur> jroll, the idea of g-r is to not have gate broken due to problems with dependencies. non-voting jobs can't break gate
17:49:00 <dtantsur> (at least as I understand the whole system)
17:49:01 <rloo> when/if we have third party CI, this issue will be tested there right?
17:49:01 <pas-ha> a simple check to install as many deps as possible and just start Ironic up
17:49:09 <pas-ha> rloo, sure
17:49:11 <jroll> rloo: yep
17:49:19 <rloo> do we want to spend time now dealing with it then?
17:49:37 <rloo> I mean, dealing with it outside of third-party CI?
17:50:06 <vdrok> the deadline for it is end of n, so I think it would be good to have something now
17:50:12 <devananda> rloo: actually no. third-party CI will only need to install each drivers' requirements
17:50:27 <devananda> this is about installing _all_ drivers' requirements in the same env --- and making sure they don't conflict
17:50:33 <pas-ha> rloo, depends on how quick we'd like to give vendors at least some visibility
17:50:38 <jroll> mmm, so if things conflict that's a problem, gr
17:50:41 <rloo> devananda: oh, yes, that is a different issue
17:50:44 <jroll> but
17:51:01 <jroll> the current issue, as it was explained to me, is just installing scciclient breaks the conductor
17:51:16 <pas-ha> jroll, yep
17:51:24 <devananda> jroll: right. and given how bad pip is at dependency resolution, we're not going to detect conflicts between drivers' dependencies unless they're in gr (and even then, only in some cases)
17:51:31 <pas-ha> as of Liberty at least
17:51:44 <devananda> jroll: hm. ok. if it's that simple, then that drivers' CI should catch it
17:51:55 <jroll> devananda: I mean, both seem important
17:52:15 <jroll> idk, need to investigate offline
17:52:22 <jroll> just talking about it won't fix it :)
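[Editor's note: the check pas-ha describes above might be as simple as the following sketch — install ironic's optional driver dependencies (tracked in driver-requirements.txt in the ironic tree) and verify the conductor still starts. The config-file path is illustrative.]

```shell
# Sketch: install all optional driver deps, then confirm the
# conductor can start (this is where the scciclient breakage shows up)
pip install -r driver-requirements.txt
ironic-conductor --config-file /etc/ironic/ironic.conf
```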
17:54:11 <jroll> anything else or should we close this down?
17:54:31 <rloo> so... what? AI for jroll to file bug/look into 'that' issue wrt irmc. do we want to also enable/import all packages for all drivers too?
17:54:53 <jroll> yeah, I'm going to investigate both
17:55:04 <rloo> jroll: ok thx :)
17:55:06 <sambetts> I think we should add it to the agenda for Wednesday's CI meeting
17:55:16 <jroll> +1 do iiit
17:55:23 <sambetts> will do :)
17:55:24 <rloo> file bugs please
17:55:37 <rloo> or specs or ??
17:55:40 <jroll> good point
17:55:46 <jroll> pas-ha: have you filed a bug for this yet?
17:55:57 <jroll> if not, please do
17:56:02 <pas-ha> not AFAIK, will retry and do
17:56:11 <jroll> thanks, reproduction steps would be helpful
17:57:15 <rloo> 3 minutes left
17:57:31 * jroll waits patiently
17:58:08 * rloo wonders what jroll is waiting for. let's end early...
17:58:12 <jroll> meh, going to call that a wrap
17:58:14 <jroll> heh
17:58:16 <lucasagomes> :-)
17:58:16 <jroll> #endmeeting