19:01:26 #startmeeting Ironic
19:01:27 Meeting started Mon Jan 27 19:01:26 2014 UTC and is due to finish in 60 minutes. The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:29 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:31 The meeting name has been set to 'ironic'
19:01:33 #chair NobodyCam
19:01:34 Current chairs: NobodyCam devananda
19:01:45 As usual, our agenda's here -- https://wiki.openstack.org/wiki/Meetings/Ironic
19:01:51 Welcome all
19:01:58 o/
19:01:59 short agenda today :)
19:02:20 could end up being a short meeting, but who knows if we'll get sidetracked :)
19:02:42 small announcement: we cut a milestone last week, as with all the other projects
19:02:53 Saw the email about that
19:02:54 o/
19:03:23 o/
19:03:51 this was more about getting the process down than actually producing a stable build, but it was, afaict, stable for the things we've implemented so far
19:04:16 things are looking good to me for our progress towards graduation -- keep up the good work!
19:04:35 mostly, we need deploy to really work. we're so close but could really use folks focusing on that right now
19:04:49 let's take a look at open BPs and high-priority bugs
19:04:52 #topic blueprints
19:05:20 #link https://launchpad.net/ironic/+milestone/icehouse-3
19:05:27 * devananda waits on slow connection
19:05:50 ok, we have 2 essential BPs for i3 not done yet
19:06:20 romcheg: think you'll be able to start on the db migration from nova-baremetal soon? or have you already and I missed it?
19:06:40 devananda: I'll submit all I did this week
19:06:45 dkehn: i've seen a lot of comments (and left some) on your neutron patch. anything more I can do to facilitate getting that done ASAP?
19:06:50 romcheg: great, thanks!
19:07:00 romcheg: awesome
19:07:21 working on it at present, lifeless has left comments on it concerning how it will be used, which is good
19:07:40 linggao: do i remember correctly that you're working on https://blueprints.launchpad.net/ironic/+spec/serial-console-access ?
19:07:48 it really depends upon the reviews, I'm taking care of them as I get them
19:07:52 linggao: or working with sun on that, i mean
19:08:11 devananda, Sun Jing is working on it.
19:08:29 also, has anyone gotten the neutron patch to work in a deployment, in conjunction with the nova driver yet?
19:08:51 nope, I'm setting up tripleo to start working on that right now
19:09:07 linggao: ah, sorry. I don't usually see sun at the meetings, just you
19:09:12 * NobodyCam notes that I should have a new rev of the driver up today. I tried (and failed) to have it up for the meeting
19:09:13 probably will have something to say about it by tomorrow (the day's almost over here)
19:09:23 lucasagomes: awesome, ty
19:09:39 devananda, it's her night time -- 3am. :-)
19:09:55 She will finish the console with ipmitool.
19:09:58 linggao: ah! that's a good reason not to be in the meetings :)
19:10:10 linggao: ok, thanks
19:10:20 She has transferred her other blueprints to me, but not this one.
19:11:00 fwiw, the other 3 BPs currently targeted to i3 may get bumped -- depending on review bandwidth, etc
19:11:45 the ephemeral partition one I think is already in good progress, there's two patches waiting for review that make ironic
19:11:58 create an ephemeral partition at deploy time
19:12:14 lucasagomes: awesome work on that, btw. i know it's mostly just porting from nova, but still ... :)
19:12:14 and the next one teaches ironic how to preserve that partition if you redeploy the machine
19:12:27 yea, mostly adapting the nova code to ironic
19:12:28 lucasagomes: do you recall the deploy timeout issue we chatted about a week or so ago? did we get a bug or bp for that?
19:12:42 and maybe we need to add a redeploy to the ironic driver as well
19:12:53 NobodyCam, there's a bug for that
19:12:56 lifeless: pong
19:12:58 devananda, opened
19:13:03 lifeless: pong
19:13:08 lucasagomes: let's chat about redeploy towards the end of the meeting
19:13:11 * lucasagomes tries to find the link
19:13:12 lucasagomes: ack TY
19:13:21 lucasagomes: i would like to know what you mean, but let's not sidetrack yet :)
19:13:38 sorry, wrong window
19:13:40 any other questions/comments/blockers on blueprints, before we move on?
19:13:44 NobodyCam, https://bugs.launchpad.net/ironic/+bug/1270986
19:13:45 dendrobates, ack
19:13:53 lol
19:13:58 #link https://bugs.launchpad.net/ironic/+bug/1270986
19:14:00 TY lucasagomes
19:14:08 devananda, ack*
19:14:24 #topic bugs
19:14:56 first thing is, i'd like to ask that any developers filing bugs, who have a sense of the project's needs (as y'all probably do by now)
19:14:59 also triage them
19:15:04 devananda: I believe the above bug should be i-3
19:15:10 #link https://wiki.openstack.org/wiki/BugTriage
19:16:04 so, e.g., if you spot a problem and file a bug, please set the status to triaged and the importance to the appropriate level
19:16:19 see that link for a description of the different meanings of the fields
19:16:45 and if you're working on fixing a bug, please set the milestone target if you think you'll have a fix by a certain time
19:16:49 thanks :)
19:17:19 any particular open bugs that folks want to bring up?
19:17:34 either you're blocked fixing it, or you think the priority is wrong (too low? too high?)
19:17:56 or you think it's essential to fix by i3 but it's not listed on https://launchpad.net/ironic/+milestone/icehouse-3 ...
19:18:06 the timeout one is tricky
19:18:13 #link https://bugs.launchpad.net/ironic/+bug/1270986
19:18:14 yea
19:18:18 what is the target date for i3?
19:18:31 march 6
19:18:41 I see.
19:18:52 there's a feature freeze deadline of feb 22, i believe
19:18:53 which means mar 4 to get the changes in?
19:18:56 see the ML for the discussions
19:19:05 'cause in order to use a periodic task you will have to break the lock to be able to acquire the node object and set its provision_state to error
19:19:14 not only that, we will also have to stop the job
19:19:18 which is running in the background
19:19:26 basically, that means any major new code needs to be proposed ~2 weeks before the FF so folks can have time to review it and iterate
19:19:46 hm, i should have announced that at the beginning...
19:19:47 So after March 6, no more code check-ins?
19:20:12 linggao: after mar 6, only bug fixes and docs, unless you get a feature-freeze exception to continue working on an essential feature for the release
19:20:28 got it.
19:20:45 lucasagomes: exactly. and i don't think we've got a solution yet for "interrupt the background job"
19:20:53 feature-freeze exceptions come from the TC only?
19:21:03 NobodyCam: PTL
19:21:06 devananda, yup, I was thinking about somehow using max_lobur_cell's patches
19:21:07 ack
19:21:14 to have an interface to stop the thread
19:21:30 lucasagomes: that creates the thread, but does it let us SIG_INT the thread?
19:21:42 devananda, lucasagomes, it's possible to cancel a background job
19:21:49 max_lobur_cell: awesome - how?
19:21:57 yea, if we had a thread pool or something
19:22:11 where I could get that future object and somehow call a stop()
19:22:15 I'd like to discuss it after
19:22:20 exactly
19:22:25 max_lobur_cell: sounds good
19:22:28 idk about sending a signal to the thread
19:22:34 unless we do it with multiprocessing
19:22:35 but there are other ways
19:22:44 that would allow us to send a signal to stop that process
19:23:08 lucasagomes: right - i was using SIG_INT as an expression, not literally :)
19:23:17 gotcha
19:23:27 well, we can use the signal python lib for that then
19:23:29 I think
19:23:37 so that is the only _critical_ bug that we have open today, which is great
19:23:48 that would make things more difficult to port ironic to windows, for example
19:24:01 I'll ping you guys after
19:24:04 fwiw, critical bug == hold off the release until it's fixed
19:24:05 as signal is unix-only... but we don't have to worry about that now anyway
19:24:18 and i think this bug is bad enough that we must fix it before the Icehouse release
19:24:20 once I get back to my laptop :-)
19:24:36 devananda: ++
19:24:44 any other bugs folks are concerned about?
19:24:58 *are particularly concerned about
19:26:19 ok, moving on ...
19:26:33 #topic Tempest tests
19:26:51 we're still waiting on -infra to merge the tempest API functional tests into the pipeline
19:27:17 they were really busy last week... :)
19:27:33 ya,
19:27:49 other than that, I'm hoping agordeev2 // romcheg // etc might have some updates on devstack/tempest functional testing with VMs?
19:28:10 devananda: right! i'm fully concentrated on it
19:28:35 agordeev2: awesome! how's it going? need anything from me?
19:29:37 devananda: it's going fine. I think we'll have a working patchset before the middle of i3 :)
19:30:00 devananda: no, no need for anything from you. thank you
19:30:17 agordeev2: ok, thanks!
19:30:22 #topic nova driver
19:30:25 NobodyCam: that's you :)
19:30:29 yep
19:30:57 I have an update to push up as soon as I correctly rebase the driver
19:31:24 I am getting the node to the deploying state
19:31:40 I hope to be testing with dkehn's patch today
19:32:00 awesome
19:32:16 any idea when it'll be ready for review by the nova team?
19:32:47 if things go well today, I hope to jump into test writing
19:32:50 NobodyCam: look at https://review.openstack.org/#/c/66071/11, working on it a bit to deal with lifeless's review
19:33:19 so maybe by next meeting? (i hope)
19:33:35 dkehn: ack, will look after the meeting
19:34:06 NobodyCam: need anything from me / others?
19:34:24 oh, question, ya
19:34:36 we use mock
19:34:47 nova appears to still be using mox
19:34:55 any pref there?
19:34:58 yea
19:35:01 mock :)
19:35:04 :)
19:35:12 all projects should be moving towards py3 compat
19:35:20 mox is one of the things holding nova back
19:35:28 that's what I thought
19:35:38 is mox in or out?
19:35:47 devananda +
19:35:52 mox is not py3 compat, last i heard
19:36:19 it definitely shouldn't be used in ironic, and if we're adding unit tests to other projects (eg, nova) it's nice of us to use mock instead
19:36:22 thx
19:36:39 any questions for me on the nova driver?
19:37:17 not from me. anyone else?
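Below is a minimal, purely illustrative Python sketch of the cooperative-cancellation approach floated above for bug 1270986: a background worker that checks a stop flag between units of work, so a deploy-timeout handler can interrupt it without relying on the Unix-only signal module. All class and method names here are hypothetical, not Ironic's actual implementation.

```python
# Illustrative sketch only (assumed names, not Ironic code): one way to make a
# long-running background deploy job cancellable. The worker checks a stop
# event between units of work, so a timeout handler can interrupt it without
# OS signals.
import threading
import time


class CancellableJob(object):
    def __init__(self, work_units=100):
        self._stop = threading.Event()
        self._work_units = work_units
        self._thread = threading.Thread(target=self._run)

    def start(self):
        self._thread.start()

    def stop(self, timeout=None):
        # Ask the job to stop, then wait for the worker thread to exit.
        self._stop.set()
        self._thread.join(timeout)

    def _run(self):
        for _ in range(self._work_units):
            if self._stop.is_set():
                # A timeout handler asked us to abort; clean up and return.
                return
            time.sleep(0.1)  # stand-in for one unit of deploy work


if __name__ == '__main__':
    job = CancellableJob()
    job.start()
    job.stop(timeout=5)  # e.g. invoked by a periodic task when the deploy times out
```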
19:37:55 i'm going to skip the next agenda item (devstack) since agordeev2 already gave an update on it, unless anyone has questions on that topic
19:38:18 #topic tripleo
19:38:42 several of the patches to add ironic support to tripleo have merged; there's at least one more still in progress
19:38:58 #link https://review.openstack.org/#/c/66461/1
19:39:10 #link https://review.openstack.org/#/c/69013/1
19:39:11 also, lucasagomes raised a question in channel earlier
19:39:23 whether we're adding ironic to the seed or undercloud VMs, and why
19:39:40 so i'd like to clarify that in case others are wondering
19:40:06 the seed has some extra bits (init-no-cloud-init, or something like that) so it can run without cloud-init available
19:40:13 I have an update ALMOST ready for 66461
19:40:26 If you pardon the stupidity, what would be the use case for baking Ironic into the seed/UC VMs?
19:40:26 i believe the elements are otherwise the same as for the undercloud
19:40:44 so we're adding it to the undercloud first and, once that's working, will add it to the seed
19:40:45 matty_dubs: a working ironic
19:40:55 matty_dubs: it's replacing nova's baremetal driver
19:41:11 matty_dubs: so instead of the seed and undercloud using nova+baremetal, it will use nova+ironic
19:41:15 s/it/they/
19:41:17 Oh, whoops, _under_cloud. I'm with you now. :)
19:41:22 :)
19:41:23 :)
19:41:27 :)
19:41:55 anyone else experimenting with tripleo + ironic?
19:42:06 questions? comments?
19:42:33 I'm ostensibly at the intersection of the two, though not too heavily involved with TripleO at the moment.
19:42:48 Now that I don't have undercloud and overcloud confused (it's Monday!), it seems like it would be pretty useful.
19:43:04 matty_dubs: are you working on tuskar?
19:43:24 devananda: I came from Tuskar, and my focus is starting to shift to Ironic. Largely in support of Tuskar.
19:43:38 matty_dubs: great. welcome!
19:43:42 Thanks! :)
19:44:11 matty_dubs: you'll probably want to poke at our API a bunch, and get tuskar to talk directly to it for enrolling / status / etc
19:45:01 Yes, was looking at precisely that last week :)
19:45:45 if there's nothing else specific to our progress to integrate with tripleo, let's just open the floor
19:45:48 #topic open discussion
19:46:41 just wanted to say AWESOME job everyone
19:46:47 devananda, at the beginning of the meeting I said that we probably will have to add a way to "redeploy" in the ironic driver for nova
19:46:49 I meant rebuild
19:47:11 lucasagomes: how would you see that getting used
19:47:23 s/how/when/
19:47:49 NobodyCam, I think TripleO is already planning to use that this cycle
19:48:06 lucasagomes: as in, upgrade the image
19:48:09 lifeless sent an email to the openstack-dev list today, I think
19:48:11 for upgrades
19:48:12 dendrobates, yes
19:48:13 :)
19:48:18 devananda, yes
19:48:18 lucasagomes: or "put a new image on the same node without wiping the ephemeral"
19:48:21 ok
19:48:27 yea, update the deployed image
19:48:32 without removing all the user data
19:48:46 there was a patch up to implement a "redeploy", which would ostensibly re-trigger a deploy when the previous attempt failed mid-way
19:49:21 devananda: have the link handy?
19:49:22 i blocked that one because, if something fails, i think exposing the equivalent of F5 is terrible
19:49:36 I see... that's useful as well :) but yea, I confused the words; I meant rebuild instead of redeploy
19:49:46 lucasagomes: actually i think it's anti-useful
19:50:24 why not just "delete" + "deploy"?
19:50:36 but anyway, for tripleo's needs, re-image is very useful
19:50:57 devananda, well, if the redeploy is smart enough to tell the scheduler to pick another node (ignore the one that failed, not try to redeploy on it)
19:51:04 I think it's kinda the same as delete and deploy again
19:51:17 kinda like an alias for both actions
19:51:27 lucasagomes: at the nova level, that will happen with automatic scheduler retries
19:51:56 lucasagomes: actually, i don't think the scheduler is issuing deletes for the failures, but i'll check
19:52:10 #action devananda to check if nova-scheduler issues DELETE for failed attempts prior to RETRY
19:52:30 devananda, right, hmm... it would be important for it to call delete
19:52:39 so we can do a tear_down() and remove all the config files
19:52:45 generated for that node that failed to deploy
19:52:51 lucasagomes: ironic doesn't need to expose retry in its API because the logic for that is external. we don't have an internal scheduler today
19:52:55 including the auth_token etc
19:52:58 lucasagomes: exactly
19:53:10 but we do need to expose "reimage"
19:53:28 that can't be accomplished with any other combination of actions today
19:53:52 I see
19:53:57 'reimage' is for the upgrade use case, where you want to leave guests intact, but upgrade the operating system / OpenStack install, correct?
19:54:21 matty_dubs: yep
19:54:59 devananda: would that be needed for graduation?
19:55:04 NobodyCam: nope
19:55:08 yea, that logic about "redeploy" should live in the ironic driver, for example, not in our API directly
19:55:16 however, tripleo needs it for their story
19:55:24 lucasagomes: ++
19:55:24 the driver can call the delete+deploy, but understand both actions as "redeploy"
19:56:30 lucasagomes: new bug that you'll be interested in: https://bugs.launchpad.net/ironic/+bug/1272599
19:56:51 i had some time to chat with krow late on friday and he pointed that issue out
19:57:39 hmm, will take a look
19:58:35 devananda, this only affects ironic!?
19:58:37 two-minute bell
19:58:46 I bet it affects many other projects
19:58:48 lucasagomes: heh. probably affects other projects too ...
19:59:03 our factory resources lib is used by many others
19:59:04 yea
19:59:04 Is caching a POST RFC-legal?
19:59:36 matty_dubs, idk, I will google it
19:59:40 but that looks hmm, odd
19:59:43 yea
19:59:44 lucasagomes: Ah, I can. Was mostly just curious/surprised.
20:00:01 That's time, folks
20:00:02 lucasagomes: we also talked about 200 vs 201 vs 202 return codes
20:00:16 thanks everyone! see you next week :)
20:00:19 #endmeeting
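For readers following the rebuild discussion in the open-discussion topic, here is a hypothetical sketch of the idea that the driver can expose "rebuild" as tear_down plus deploy while preserving the ephemeral partition. The class and method names are invented for illustration and do not reflect the actual nova or ironic driver interfaces.

```python
# Hypothetical illustration of the rebuild-as-delete-plus-deploy idea from the
# open discussion; names are invented for this sketch and do not reflect the
# real nova/ironic driver APIs.
class HypotheticalBareMetalDriver(object):
    def deploy(self, node, image, preserve_ephemeral=False):
        """Write the image to the node, optionally keeping the ephemeral partition."""
        raise NotImplementedError

    def tear_down(self, node):
        """Remove per-node config (PXE files, auth_token, etc.) and power down."""
        raise NotImplementedError

    def rebuild(self, node, image):
        # Re-image the node with a new OS image while leaving the ephemeral
        # partition (user data) intact -- the TripleO upgrade use case.
        self.tear_down(node)
        self.deploy(node, image, preserve_ephemeral=True)
```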