19:00:27 #startmeeting ironic
19:00:28 Meeting started Mon Jun 24 19:00:27 2013 UTC. The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:29 o/
19:00:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:32 The meeting name has been set to 'ironic'
19:00:50 o/
19:00:54 #topic agenda
19:00:59 #link https://wiki.openstack.org/wiki/Meetings/Ironic
19:01:15 hi all
19:01:15 agenda looks pretty similar, though i see one new thing
19:01:25 :)
19:01:39 and just a reminder, everyone's welcome to add items to the agenda any time :)
19:01:58 #topic object models
19:02:41 we're almost done with these
19:02:52 node and port are in tree
19:02:53 yup
19:02:57 yay
19:03:02 and chassis
19:03:04 and chassis
19:03:07 :)
19:03:11 great work everyone!
19:03:38 there's a review up for the driver model (thanks JimJiang1)
19:03:41 #link https://review.openstack.org/#/c/33920/
19:04:04 and i'd like us to take a few minutes to talk about the long-term functionality of a "driver" object model
19:04:45 i have some ideas, which i'll try to type quickly, but questions and opinions are welcome, of course
19:05:25 as i see it, a driver object would necessarily represent a record in the db, and so we would have a 1:1 relationship between a db record and a "driver"
19:05:40 but we could have multiple instances of the same driver running, eg on different manager (conductor) hosts
19:06:04 also, what will the driver object / db record actually be used for?
19:06:43 via the API, we'll need to expose some information _about_ drivers, such as what driver_info properties they require, but that doesn't require a db record...
19:06:46 that's the question I also have for this meeting
19:08:03 driver info can be taken from settings
19:08:09 I guess
19:08:12 right
19:08:42 I have questions on the drivers too. there may be different kinds of baremetal nodes in a cloud, should the driver info be node based?
19:08:51 there could be an assumption (or a requirement) that driver configuration is consistent across a cluster
19:09:11 devananda, how much of the configuration? including secrets?
19:09:32 ah, let me rephrase my last sentence
19:09:47 s/driver configuration/static configuration, eg. what is in the config file/
19:09:56 i didn't mean the per-node driver info :)
19:10:22 linggao: ironic supports multiple drivers simultaneously
19:10:55 linggao: so if, for example, you had some iLo and some DRAC hardware (and drivers for those existed, which they do not today) you could easily run both kinds of hardware in the same ironic cluster
19:11:06 (that's one of our primary goals)
19:11:40 so we have driver configuration (eg in ironic.conf) and we have per-node driver_info (stored in the database)
19:11:41 cool. but where is the info stored for the node?
19:11:57 linggao: in the db: the ironic.nodes.driver_info field
19:12:17 do we have a db schema defined somewhere?
19:12:22 yes
19:12:31 ironic/db/sqlalchemy/*
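(For context: the per-node driver_info field named above is free-form; each driver defines the keys it needs. A hypothetical value for a node managed over IPMI and deployed via PXE might look like the sketch below. The key names are illustrative only, not the actual schema.)

    # Hypothetical contents of a node's driver_info column (stored as JSON).
    # Key names are illustrative; each driver defines its own required keys.
    driver_info = {
        # consumed by an IPMI power interface
        "ipmi_address": "10.0.0.42",
        "ipmi_username": "admin",
        "ipmi_password": "secret",
        # consumed by a PXE deploy interface
        "pxe_deploy_kernel": "<glance uuid of deploy kernel>",
        "pxe_deploy_ramdisk": "<glance uuid of deploy ramdisk>",
    }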
19:13:19 would the driver object allow quicker searching... ie, can this conductor (manager) handle a drac node?
19:13:54 "can this conductor" -- here I think you're exposing a detail that the API shouldn't expose
19:14:09 "can this ironic API handle a drac node" -- that we should expose
19:14:17 devananda: +1
19:14:26 and then internally, the API may need to route the request to a conductor which can handle that node
19:14:27 +1
19:14:51 but I don't think we need to expose individual conductor (or driver) instances outside of the API
19:14:57 s/need/should/
19:15:46 That's still not clear for me: why do we need a db record for a driver?
19:15:46 then it seems we are leaning away from a "driver" object
19:16:04 so, do we really need a db object for drivers? or can we just use RPC fan-out between API and Conductor when we need to keep them aware of each other's capabilities?
19:16:24 ok... sounds like all 3 of us are leaning away from a driver db/object :)
19:16:46 devananda: exactly :)
19:17:37 JimJiang1: i realize you implemented something based on a BP that I approved -- and we just said "no" to it. I'm sorry -- your patch is good though :)
19:18:01 #action devananda to review all the BPs more carefully
19:18:17 JimJiang1: don't give up friend :)
19:18:22 * NobodyCam notes the "HARD HAT REQUIRED"
19:18:50 moving on
19:18:51 ok :)
19:18:54 #topic API and RPC stuff
19:19:06 I had a proposal about this today
19:19:16 When all of you were sleeping
19:19:22 :D
19:19:23 :)
19:19:53 I think we should create an api.controllers.v1 package, just like we did with the objects
19:20:14 I was just going to point out that I landed a reworked ironic/api/controllers/v1.py based on the node object model
19:20:14 Otherwise we will have a loooong module
19:20:25 along with a basic framework for unit testing the API
19:20:37 and romcheg, I totally agree. that was going to be my next thing :)
19:21:24 I was hoping martyn would be around to start working on the API, but i think he was on vacation for a while?
19:21:34 devananda: I was going to start working on the api, so please publish that as soon as you're done with it
19:21:43 romcheg: it's done
19:22:06 Ah, I haven't seen the computer in the last few hours
19:22:12 #link https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1.py#L50
19:22:20 romcheg: i landed it late last week
19:22:52 I am working on adding the RPC methods for certain node actions that require lock coordination
19:22:55 Ah, no, I don't mean that
19:22:59 for example "update"
19:23:01 oh?
19:23:25 Currently we have a v1.py module which will contain all the controllers, right?
19:24:02 well, currently, yes, but i think refactoring that to a v1/{nodes,interfaces,etc}.py module is fine
19:24:14 That's what I mean
19:24:25 i haven't started on that refactoring :)
19:24:28 i think it's a great idea
19:25:44 is that an action item?
19:25:51 sure
19:25:55 yup
19:26:12 #action romcheg to refactor api/controllers/v1.py into a more maintainable modular structure
19:26:15 also
19:26:51 #action devananda to implement RPC layer for API actions that require lock management
19:26:56 #link https://review.openstack.org/#/c/34115/
19:27:03 is an initial draft. no tests yet...
19:27:31 but basically, some API actions need to be passed to the conductor, or else we get nasty race conditions (like updating a node while a conductor is deploying it!)
19:27:37 so i'm working on that
19:27:54 any questions on api/rpc stuff?
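(To illustrate the race being discussed: the point of routing "update" through the conductor is that the conductor can serialize access to a node. A minimal sketch of the idea follows; acquire_lock, get_node, and save_node are hypothetical names, not the actual conductor API.)

    # Sketch only: why node updates go through the conductor instead of the
    # API writing to the db directly. All names here are hypothetical.
    def update_node(conductor, node_uuid, patch):
        # The conductor serializes access to the node, so an update cannot
        # interleave with an in-progress deploy of the same node.
        with conductor.acquire_lock(node_uuid):
            node = conductor.db.get_node(node_uuid)
            node.update(patch)
            conductor.db.save_node(node)
        # Without the lock, the API could modify the node's db record in the
        # middle of a deploy and leave it in an inconsistent state.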
19:28:26 #topic image utils & pxe driver
19:28:27 Currently it's quite clear for me
19:28:49 GheRivero: looks like you're making good progress!
19:29:28 image utils are on the waiting queue... maybe a couple of tests needed, and the split of the patch with the needed openstack/common bits out
19:29:32 by image utils, do you mean diskimage-builder?
19:29:54 or is it in ironic?
19:30:08 I think that's a separate project yet
19:30:18 linggao: a service/wrapper around the python glance client to retrieve the images, kernels, and ids needed by pxe
19:30:25 linggao: no. i mean this: https://blueprints.launchpad.net/ironic/+spec/image-tools
19:31:09 GheRivero: awesome. anything I can do to help move the glanceclient patch?
19:31:37 maybe knocking on some doors to get more reviews
19:31:47 as far as the logistics of getting data to the node, is the general level of expectation linux-only, no uefi, tftp down kernel and initrd?
19:32:15 #action devananda to get more eyes on the glanceclient image-tools patch
19:32:31 jbjohnso: for the initial release, yes
19:32:52 ok, keep in mind that the boot filename will likely warrant being changeable
19:32:58 and conditional on the dhcp request
19:33:09 jbjohnso: Ironic will support >1 method later on
19:33:10 jbjohnso: other methods are quite interesting, but this is already well understood in this space, and we have a working codebase in the nova-baremetal driver
19:33:35 devananda, we have nic vendors that cannot pxe boot in 'BIOS' mode fyi..
19:33:55 jbjohnso: in nova-baremetal, the boot filename is keyed by a combination of the nova instance UUID (which is passed to the machine via DHCP BOOT) and the MAC addresses of all the physical NICs of that machine
19:34:26 jbjohnso: interesting, but i don't think that directly impacts our workflow
19:34:33 devananda, but that means the server must know ahead of time whether the node will attempt uefi or pxe boot; it may be best to have the payload adapt to however the node behaves
19:34:48 just food for thought
19:34:53 ack
19:34:58 noted
19:35:29 devananda, I would paste the generated isc dhcp stuff that xcat makes
19:35:44 but too lazy to pastebin and not evil enough to subject irc to it
19:35:48 when we start implementing uefi and/or ipxe, we'll have to consider such things
19:35:54 but elsif option client-architecture = 00:07
19:35:57 etc etc etc
19:36:30 jbjohnso: thanks for being the right amount of evil
19:36:31 jbjohnso and devananda: is that what node.driver_info is for in the db?
19:36:39 jbjohnso: i'm hoping someone with more knowledge than I will dig into the ironic code at that point and add it ;)
19:37:05 linggao: node.driver_info in the db is for things like the IPMI credentials, PXE image sources, and so on
19:37:35 basically, information which is specific to that driver, that other drivers may not need, and therefore is not a standard requirement of Ironic itself
19:37:39 another thing, I noted that nodes with identical contents currently seem to copy the same initrd/kernel over and over?
19:37:41 but does it also define what kind of driver a node will use?
19:37:49 e.g. deploying 80 nodes means 80 dupe copies of the kernel and initrd?
19:37:54 on the server, that is
19:37:59 jbjohnso: yes :(
19:38:01 jbjohnso: yeah... for now.
19:38:08 it's on the ToDo list
19:38:19 jbjohnso: there are some notes from GheRivero in his patch about optimizing that. ^^ :)
19:38:27 use xCAT ;)
19:39:06 ok, before we run out of time, let's move on. we can talk more about PXE in open discussion :)
19:39:08 ok
19:39:16 ok
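(On the 80-duplicate-copies problem above: one common fix is to keep a single master copy per Glance image UUID and hard-link it into each node's TFTP directory. The sketch below shows only that idea; the glance download call is hypothetical, and this is not necessarily how GheRivero's patch optimizes it.)

    import os

    # Sketch: de-duplicate kernels/initrds by caching one master copy per
    # Glance image UUID and hard-linking it per node. Illustrative only.
    def fetch_image(glance, image_uuid, node_path,
                    cache_dir='/tftpboot/master_images'):
        master = os.path.join(cache_dir, image_uuid)
        if not os.path.exists(master):
            glance.download(image_uuid, master)  # hypothetical client call
        os.link(master, node_path)  # 80 nodes -> 80 hard links, 1 copy on disk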
19:39:19 #topic ironic diskimage-builder element
19:39:26 NobodyCam: you're up! how's it going?
19:39:46 just wanted to get it out there that I am working on this
19:40:03 I am working on getting the manager to stand up
19:40:07 yay NobodyCam
19:40:19 the api starts in the logs.. but is untested
19:40:46 I figure just about the time I get it all working, the conductor patch will land
19:40:59 heh
19:41:18 but we should have a dib element to start an ironic server
19:41:46 well done
19:41:51 1 release is aimed at linking to the TripleO boot stack element
19:42:03 s/1/1st/
19:42:10 one of the things i'm particularly eager to see come from that is linking ironic with keystone auth.
19:42:43 :)
19:42:51 right now, the ironic API has no auth, and there's no access control implemented at the conductor or db layers yet
19:43:47 `keystone-client auth $USER $TOKEN | echo $?`
19:43:51 won't cut it?
19:44:03 :-p
19:44:04 :p
19:44:20 :)
19:44:51 actually, i think a simple "require all API requests to be from a valid OpenStack admin account" is sufficient for access control. no non-admin should be running ironic commands directly, and we can allow nova to temporarily escalate permissions when it deploys a node
19:45:28 so i think just getting that in the API is a good start :)
19:45:47 *that = validating the supplied keystone token
19:46:14 #topic open discussion
19:46:20 Can take a look at keystone
19:46:26 *I
19:47:01 open discussion: review 34132
19:47:07 question: the ironic/drivers/modules directory, what are those files for?
19:47:23 are they temporary?
19:47:33 none of the other projects include tox or testr ...
19:47:55 pxe.py is under both ironic/drivers and ironic/drivers/modules
19:48:03 NobodyCam: Yes, that was my concern
19:48:16 linggao: no, that is where module code lives.. such as ssh
19:48:19 Tests can be run without those files
19:48:44 romcheg: others also work from the command line
19:48:49 my python ipmi implementation is in a temporary public home: https://sourceforge.net/p/xcat/python-ipmi/ci/master/tree/
19:49:06 #link http://docs.openstack.org/developer/ironic/api/ironic.drivers.base.html
19:49:17 linggao: that doc ^ describes the driver interfaces
19:49:20 if anyone wants to comment and either like it or laugh mercilessly at it
19:49:31 if I write a power driver for jbjohnso's native ipmi, where should it be checked in under?
19:49:39 linggao: tl;dr a driver implements a set of interfaces. each drivers/modules/ file implements one (or more) interfaces
19:49:57 jbjohnso: I don't think we are a laugh-mercilessly kind of crowd
19:50:02 linggao: so a native_ipmi power driver would be created, eg, ironic/drivers/modules/native_ipmi.py
19:50:03 I have it on good authority that my python code resembles perl too strongly
19:50:16 :)
19:50:22 :)
19:50:25 linggao: so a native_ipmi power driver *interface* would be created, eg, ironic/drivers/modules/native_ipmi.py
19:50:48 jbjohnso: mine resembles FOXPRO code, I've been told
19:50:50 linggao: and then you would add those interfaces to driver classes, eg, ironic/drivers/native_ipmi_pxe.py
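(Putting that layout in code: roughly what a power interface module and a composed driver class would look like, following the driver-interface doc linked above. The method names and signatures here are assumptions drawn from that doc, not verified against ironic.drivers.base.)

    from ironic.drivers import base

    # Sketch of ironic/drivers/modules/native_ipmi.py: a power *interface*
    # backed by the native python-ipmi library. Signatures are assumed from
    # the ironic.drivers.base doc linked above; check it for the real ones.
    class NativeIPMIPower(base.PowerInterface):

        def validate(self, node):
            pass  # e.g. check node.driver_info for the BMC address/credentials

        def get_power_state(self, task, node):
            pass  # query the BMC via the native ipmi library

        def set_power_state(self, task, node, power_state):
            pass  # power the node on or off via the BMC

        def reboot(self, task, node):
            pass  # power-cycle the node

    # ...and a sketch of ironic/drivers/native_ipmi_pxe.py: a driver class
    # that composes interfaces (attribute names again illustrative):
    #
    # class NativeIPMIPXEDriver(base.BaseDriver):
    #     def __init__(self):
    #         self.power = NativeIPMIPower()
    #         self.deploy = pxe.PXEDeploy()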
19:51:39 speaking of the native_ipmi driver...
19:51:52 #action devananda to create stackforge repo and import the native ipmi driver from sourceforge
19:51:53 ipmi_syncexample.py is something to peruse
19:52:01 ipmi_command.py should be mostly readable
19:52:04 #action devananda to create stackforge repo and import the native ipmi library from sourceforge
19:52:08 ipmi_session.py.... there be dragons...
19:52:34 jbjohnso: have you created a gerrit account?
19:52:39 devananda, yes
19:52:54 devananda, all that should be in order, I'm an officially blessed openstack contributor
19:53:51 jbjohnso: great. what's your name in gerrit? i want to make sure you're -core for the ipmi library
19:54:06 a few quick guesses and i haven't found you yet
19:54:45 jbjohnso: ok, msg me after the meeting :)
19:54:46 jbjohnso: Welcome to the family http://risovach.ru/upload/2012/11/generator/krestnyy-otec_4556647_orig_.jpeg
19:55:07 LOL
19:55:09 devananda, logged into review.openstack.org as jbjohnso@us.ibm.com
19:55:20 ack, ty
19:55:30 devananda, maybe I hadn't logged in there yet...
19:55:56 jbjohnso: searching for users in gerrit is painful if you don't know their _exact_ address
19:55:58 devananda, anyway, appreciate it. give me a place to git remote add origin and I'll push
19:56:01 five minute bell
19:56:36 oh, two quick announcements from me :)
19:56:44 1 - i have a patch up to rename "manager" to "conductor"
19:56:52 shouldn't be a surprise - i think we talked about this a few weeks ago
19:57:21 2 - i will be at the europython conf next week. i haven't looked at the schedule so i'm not sure if anything will conflict with this meeting time
19:57:43 yay europython, are you speaking?
19:57:52 NobodyCam: mind running things if i'm not able to make it?
19:58:01 not at all :)
19:58:08 anteaya: not that i'm presently aware of
19:58:16 cool
19:58:22 booth duty!!!!
19:58:24 perhaps the hallway track
19:58:25 NobodyCam: thanks :)
19:58:39 both ^_^
19:58:47 * NobodyCam wants swag
19:59:13 cool. time's just about up -- thanks everyone!
19:59:21 good meeting :)
19:59:21 Thanks!
19:59:27 thank you
19:59:32 #endmeeting