16:59:57 #startmeeting ironic
16:59:58 Meeting started Mon Feb 1 16:59:57 2016 UTC and is due to finish in 60 minutes. The chair is jroll. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:02 The meeting name has been set to 'ironic'
17:00:04 3 seconds early \o
17:00:06 o/
17:00:09 hi everyone
17:00:15 o/
17:00:20 o/
17:00:23 o/
17:00:27 o/
17:00:29 o/
17:00:31 o/
17:00:45 o/
17:00:50 o/
17:01:01 o/
17:01:08 o/
17:01:08 o/
17:01:15 \o
17:01:20 o/
17:01:25 #topic announcements and reminders
17:01:34 so, before we start announcing things
17:01:47 I've been, of my own accord, helping out a lot downstream the last few weeks
17:01:58 and not spending enough time upstream
17:02:02 o/
17:02:04 and that isn't fair to you all
17:02:10 so I wanted to apologize for that
17:02:15 o/
17:02:20 and thank the folks that have been driving the project forward in the meantime
17:02:21 o/
17:02:31 so thank you all for that
17:02:45 I still have some loose ends but should be contributing more from here on out
17:03:08 and with that, announcements.
17:03:17 our gate is down due to the devstack/keystone v3 fallout
17:03:33 o/
17:03:37 o/
17:03:49 that is being reverted; we've also added v3 support to ironicclient, which is released, and we're waiting on a global-requirements patch, which is failing due to pypi mirrors being out of sync
17:03:52 Thanks jroll. I know for a lot of us other work does get in the way of upstream.
17:04:10 pas-ha had a patch with keystone
17:04:16 0/
17:04:17 I would like to introduce myself to the group. I work at Dell with cdearborn. I'll be participating in the mid-cycle and attending the April Summit. Looking forward to working with you.
17:04:18 o/
17:04:30 hi rpioso, welcome :)
17:04:53 any other announcements?
17:04:55 hi rpioso and welcome.
17:05:12 rpioso: welcome!
17:05:13 jroll, rloo: Thx :)
17:05:19 also, a reminder to focus on our priorities - gate improvements, neutron integration work, manual cleaning
17:05:23 howdy folks
17:05:31 I'd *love* to get manual cleaning out the door and do an ironic release this week
17:05:38 but, the gate may prevent that, so ya know
17:05:43 jroll: want to remind folks about the midcycle dates?
17:05:52 yep
17:06:02 Only a day left to submit a talk proposal for the summit
17:06:22 reminder that our midcycle is happening ONLINE on February 16-18
17:06:30 please add your topics and RSVP here: https://etherpad.openstack.org/p/ironic-mitaka-midcycle
17:06:45 #info reminder that our midcycle is happening ONLINE on February 16-18
17:06:59 I'm still working on the a/v situation, I expect to have a thing picked out by next week
17:07:14 devananda: Thx!
17:07:25 #info reminder to focus on our priorities - gate improvements, neutron integration work, manual cleaning
17:07:40 #chair devananda
17:07:41 Current chairs: devananda jroll
17:07:45 thanks for reminding me of the bot commands :P
17:07:50 np :)
17:08:21 anything else here?
17:08:47 #topic subteam status reports
17:08:54 as always, these are on the whiteboard:
17:08:56 #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:09:05 o/
17:09:08 I'll give folks a few minutes to review and ask questions
17:09:40 wow, dtantsur is giving us a monthly summary for bug stats now :)
17:10:02 what needs to be done in manual cleaning?
17:10:10 zer0c00l: review
17:10:33 zer0c00l: reviews / landing the patches
17:10:53 jroll: if the gate gets unbroken for long enough, I'd like to finish landing the tempest-lib migration
17:11:01 we were so close two weeks ago, then *boom*
17:11:08 devananda: +1
17:11:15 The reviews specified in the spec are all abandoned
17:11:24 jroll: wrt network isolation, I guess you still want us to get the ironic parts merged asap even though the nova part is delayed till Newton?
17:11:47 rloo: so, it will work with nova if we can land a patch to bump the API version nova uses
17:11:51 but the portgroups won't work
17:11:56 which is mega :(
17:12:02 rloo: yes. if we don't, it'll take even longer
17:12:12 and yeah, what deva sais
17:12:14 said*
17:12:24 zer0c00l: wrt manual cleaning: https://review.openstack.org/#/q/topic:bug/1526290
17:12:31 it makes sense in nova's world for them to wait until after we do a release with the new APIs
17:12:44 if we had been able to do that earlier this cycle, landing the changes in Nova now'ish would be fine with them
17:12:45 devananda, jroll: got it. am on it too :)
17:12:59 rloo: yah. thanks - your reviews on the neutron integration have been good
17:13:09 +1, ty rloo
17:13:57 rloo: please start +2'ing any of those patches you feel are good enough. I'll start approving everything up to the REST API changes soon
17:14:10 devananda: will do
17:14:21 like jroll, I hope to be wrapping up my downstream stuff and will be starting on the networking stuff
17:15:15 is this the best jumping-in point: https://etherpad.openstack.org/p/ironic-neutron-mid-cycle ?
17:15:35 NobodyCam: https://review.openstack.org/#/q/status:open+project:openstack/ironic+branch:master+topic:bug/1526403
17:15:44 besides reviewing, the gate seems to be the biggest headache. is the tinyipa stuff high priority? (maybe it was mentioned)
17:16:08 I'd like to get it landed soon, yes
17:16:11 rloo: the gate has been flat-out broken by several other things, but yes
17:16:14 TY jroll :)
17:16:20 at least so we can play with it and see how it does
17:16:21 whoop
17:16:23 and make a decision from there
17:16:32 the tinyipa stuff should help. we can't pivot to it immediately, but we should land it, and get a non-voting job going that uses it
17:16:34 so we can collect data
17:16:41 yep
17:16:59 I thought we were actually doing pretty well for like 36 hours before the keystone v3 thing happened.
17:17:34 jlvillal: I think things were still randomly timing out, though. so let's get tinyipa in there.
17:17:54 And big thanks to dtantsur for fixing the "< something" bug last week. That was a killer
17:17:56 there should be a rule about not merging devstack changes before/on a weekend.
17:18:13 ++
17:18:22 +1 :)
17:18:32 jlvillal: yah, for, like, 36 hours ...
17:18:57 yes, let's land tinyipa
17:18:59 :)
17:19:22 I like the idea of tinyipa, as I have plans of having three IPA instances for the Grenade job...
17:20:26 krtaylor: wrt 3rd party CI, is the problem with the third-party providers and/or should we give guidance wrt the pairing? (i don't actually know what the trouble is, just reading your notes in the report)
17:21:15 rloo, I am having trouble checking to see if everyone has registered to meet the m-2 milestone
17:21:45 krtaylor: can you contact the folks directly, assuming we have contact info?
17:21:56 I need to ping thingee and see if I can get a list of emails. I thought we had it in an etherpad somewhere, but I can't find it for the life of me
17:22:11 krtaylor: Isn't that what the third-party wiki was for?
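An aside on the manual cleaning patches linked above (topic bug/1526290): the feature under review is driven through the existing provision-state endpoint. The sketch below is only an illustration of the API shape described in the spec, not code from the patches; the endpoint, token, node UUID, and clean step are placeholders, and 1.15 is the API microversion the Mitaka-era manual cleaning work targeted.

```python
# Hypothetical illustration of invoking manual cleaning via the REST API
# (PUT /v1/nodes/<node>/states/provision with target=clean); endpoint,
# token, and node UUID below are placeholders, not real values.
import json
import requests

IRONIC = "http://ironic.example.com:6385"        # placeholder endpoint
NODE = "1be26c0b-03f2-4d2e-ae87-c02d7f33c123"    # placeholder node UUID

resp = requests.put(
    f"{IRONIC}/v1/nodes/{NODE}/states/provision",
    headers={
        "Content-Type": "application/json",
        "X-Auth-Token": "ADMIN_TOKEN",           # placeholder token
        # manual cleaning sits behind a new API microversion; 1.15 is
        # the version the Mitaka-era patches targeted
        "X-OpenStack-Ironic-API-Version": "1.15",
    },
    data=json.dumps({
        "target": "clean",
        # each step names a driver interface and a step on that interface
        "clean_steps": [
            {"interface": "deploy", "step": "erase_devices"},
        ],
    }),
)
resp.raise_for_status()  # 202 Accepted on success; cleaning runs async
```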
17:22:42 we have partial info in several places for ironic
17:22:50 It would be great if someone could take a look and verify that we've met the milestone, or let us know if there are dangling chads that need to be addressed
17:23:21 that third-party CI page isn't that useful at first glance for seeing which systems are related to ironic: https://wiki.openstack.org/wiki/ThirdPartySystems
17:23:25 cdearborn, I did review yours and it lgtm
17:23:26 krtaylor: I much prefer the stackalytics driver list for working out who/what has CI etc
17:23:39 krtaylor: hey, meant to sync up with you last week, but was at an offsite. I kind of dropped the ball on getting the communication going. Can we sync up again at the ironic qa meeting?
17:24:00 krtaylor, thx very much - appreciate it!
17:24:07 sambetts, but I'm not sure that is complete, it is just the systems that have registered with stackalytics
17:24:29 thingee, no worries, absolutely, I'll ping you after
17:24:31 Yeah, I wish it was the standard instead of that wiki page though -> http://stackalytics.com/report/driverlog?project_id=openstack%2Fironic
17:24:45 definitely not complete
17:25:43 sambetts: ++
17:25:56 krtaylor: registering with stackalytics should be a requirement
17:26:03 agreed, we can push on this in the -qa meeting
17:26:20 but the infra requirement is the thirdpartysystems page
17:26:28 hrmm
17:26:35 perhaps the page could be organized by project?
17:26:59 #link https://wiki.openstack.org/wiki/ThirdPartySystems
17:27:02 krtaylor: the stackalytics page has a 'CI' column. instead of a checkmark, could it have a link to their corresponding third-party-CI wiki?
17:27:15 kr/win 43
17:27:18 doh
17:27:21 devananda, it would be really hard, the systems span projects
17:27:27 krtaylor: ah, gotcha
17:28:02 krtaylor: how about another column on thirdpartysystems that lists the projects?
17:28:09 rloo, well, it kinda is hard to get folks to update that if it is not a requirement
17:28:09 krtaylor: I see different entries by many companies, one per project
17:28:27 can we take this to open discussion or something else?
17:28:28 also, we're side-tracking -- this is a discussion to bring up with infra
17:28:30 krtaylor: we can make it an ironic requirement
17:28:30 yea
17:28:49 oops. sorry, moving on now... :)
17:28:59 ok, moving on
17:29:03 #topic should we support a new feature to accept header 'X-Openstack-Request-ID' as request_id?
17:29:04 devananda, yes, we (powerkvm) were in stackalytics, but got removed for some reason, so I don't trust it a whole lot
17:29:10 #link https://etherpad.openstack.org/p/Ironic-openstack-request-id
17:29:13 lintan_: this is you
17:29:21 thanks jroll
17:29:40 most of what I want to say is on the etherpad
17:29:47 https://etherpad.openstack.org/p/Ironic-openstack-request-id
17:30:19 I want to get a decision here
17:30:33 right, so, the main question is, should we accept a request id and use it as our own
17:30:47 one thing to note, I don't believe we log request IDs, do we?
17:31:04 jroll: does ironic use request IDs now?
17:31:13 rloo: no
17:31:14 yes, we have the request id
17:31:18 I believe we have them in the context
17:31:21 but do not log them
17:31:24 jroll, AFAICT the idea is so you can track a user's request across all the services that it touches, for debugging
17:31:24 which sounds like a Really Good Idea
17:31:25 using oslo context
17:31:30 which is mildly infuriating every time I realize it
17:31:34 mgould: I'm getting there...
17:31:34 so +1 to accepting them and logging them
17:31:38 hm, last time I checked, we just ignore the header
17:31:42 so 1) we need to log them before we do anything
17:31:59 2) I think we *should* accept a request ID from the api client
17:32:05 there are very specific ways to accept the header -- we DO NOT want to accept and log whatever header is passed in
17:32:09 3) we should make our nova driver send them
17:32:39 jroll: we should generate our own request-id if one is not passed in, and log that
17:32:49 right.
17:32:49 I agree with all that jroll just said.
17:32:55 we already generate one, afaik
17:33:00 we just don't log them
17:33:05 let me dig for a moment to find the discussion on accepting request-ids from API clients
17:33:14 jroll: oh. that should be easy to fix then
17:33:33 right
17:33:55 I don't think Ironic has to log them itself, it is possible to show them in the log using the oslo thing
17:34:11 so if there is a request ID we use it, and if there isn't we generate one. and that is all in the X-OpenStack-Request-Id?
17:34:28 honestly, I think we should have an RFE that describes what we want/need to do
17:34:33 lintan_: yea, oslo knows how to log them, but we're not passing this correctly
17:34:43 rloo: there are cross-project specs already done for this. we don't need another one ...
17:35:06 devananda: it goes to a question I had for jroll last week. I have no idea which cross-project specs ironic has decided to follow.
17:35:08 rloo: unless you mean an RFE bug to track it, in which case I completely agree
17:35:21 +1 for RFE to track it
17:35:24 devananda: and even if ironic follows them, yeah, it would be good to know the work involved to adopt.
17:35:53 devananda: so I don't mean to track the work, but also a description of the work that needs to be done.
17:35:59 rloo: gotcha
17:36:02 that's fair
17:36:07 devananda: s/track/just track/
17:36:19 basically, what we decide now or whatever, should go in that rfe.
17:36:22 I just do not want us re-designing it and ending up diverging unnecessarily
17:36:26 devananda, the question is, according to the cross-project spec, no one will pass X-OpenStack-Request-Id to another project
17:36:30 devananda: definitely.
17:36:54 http://lists.openstack.org/pipermail/openstack-dev/2016-January/085176.html
17:36:56 lintan_: that is ... not what was discussed in the design summit on this a while back :(
17:36:59 lintan_: is it no one *will*, or no one *must*?
17:37:15 lintan_: in other words, is it optional to pass it, or are we not allowed to pass it?
17:37:16 that mail talks about logging 3 different request ids
17:38:02 maybe we should wait til rocky/someone writes that spec
17:38:19 I guess that makes it *possible* to track requests across services, but it's a lot harder than just grepping all the logs for a single string
17:38:20 yeah, let's not go logging 3 req-ids yet
17:39:33 hmmm, it seems that a spec or an rfe should be done before we get agreement
17:40:09 I can put one up with how I see it working
17:40:14 if nobody is opposed to that
17:40:27 or rather, with the work I see that needs to be done
17:40:28 jroll: as long as you do the high priority stuff first :)
17:40:47 I'm just writing the RFE, not the code :P
17:40:56 OK, I will also continue on the work
17:41:04 this looks like the best description I've seen so far: https://etherpad.openstack.org/p/request-id // http://specs.openstack.org/openstack/openstack-specs/specs/return-request-id.html
17:41:34 first step is both returning this header and logging it
17:41:47 the "return" is a python client thing
17:41:51 https://blueprints.launchpad.net/nova/+spec/cross-service-request-id seems pretty clear that the request-ID changes at service boundaries :-(
17:41:51 yep
17:41:55 jroll: well, also API response
17:41:56 I believe our api responses return it
17:41:58 nope
17:42:00 fun
17:42:17 mgould: that is unfortunate
17:42:21 mgould: that's so old, this work is to try to help unwind it
17:42:25 omg that is old
17:42:26 ignore that BP
17:42:27 yea
17:42:36 jroll, ah, great
17:42:40 closed 2 years ago
17:42:42 whew :-)
17:42:58 I'll try to add the header to our api's response
17:43:17 mgould: see https://etherpad.openstack.org/p/icehouse-summit-nova-cross-project-request-ids which is ALSO two years old
17:43:24 lintan_: awesome
17:43:26 #agreed jroll to write an RFE with a list of work to do
17:43:28 but it has a better description of the problem folks want to solve
17:43:50 but I get confused about whether we should accept an external request-id
17:44:19 right
17:44:29 lintan_: right. let's not accept external request ids yet
17:44:33 why not.
17:44:36 this is something not expected by that cross-project spec or by other projects like neutron/cinder
17:44:39 jroll: see that etherpad
17:44:45 it is not trivial
17:44:45 so we have to accept an external ID, generate our own, log both, then tag all our logs with *our* ID
17:44:47 that would be a HUGE improvement if we passed them between nova and ironic
17:44:58 jroll: rfe in relation to processing and logging the id, as in completely unrelated to https://bugs.launchpad.net/ironic/+bug/1505119 ?
17:45:00 Launchpad bug 1505119 in Ironic "[RFE] Ironic is missing a header X-Openstack-Request-Id in API response" [Wishlist,In progress] - Assigned to Tan Lin (tan-lin-good)
17:45:01 jroll: I totally agree. but it's also a HUGE problem to accept unsigned headers
17:45:06 I mean
17:45:07 and debuggers have to recursively follow the chain of ID changes
17:45:21 if an admin wants to pass a request ID
17:45:29 that is "wrong" for whatever reason
17:45:32 do we care?
17:45:32 jroll: nope. bad idea.
17:45:35 yes we care
17:45:39 why
17:45:41 what if I send a 4k header
17:45:48 you can do that anyway
17:45:52 n00b question: do we currently sign headers?
17:45:56 4k header with an exploit
17:45:59 mgould: nope
17:46:01 TheJulia: exactly
17:46:03 validate it looks like "req-$uuid" and move on
17:46:17 nope
17:46:19 jroll: see https://etherpad.openstack.org/p/icehouse-summit-nova-cross-project-request-ids
17:46:30 we discussed this at length at a cross-project summit a few times
17:46:34 so, how can we exploit a system by reading a string?
17:46:40 that's what confuses me
17:46:43 let's not rehash that right now ...
17:46:44 jroll: /req-[0-9a-f]{48}/ or something?
17:47:00 we should be generating, logging, and returning the header now
17:47:07 and sort out the cross-project bits after that
17:47:10 ++
17:47:24 because one step at a time ....
17:47:35 ++
17:47:39 sure
17:47:50 TheJulia: you're right, the rfe exists, I may add to that
17:48:01 OK, generating, logging and returning :)
17:48:11 lintan_: thanks
17:48:35 jroll: just wanted to make sure the agreed note was not specific and that the rfe already existed :)
17:48:35 :) my pleasure
17:48:35 thanks lintan_
17:48:50 TheJulia: yeah, I forgot about it, ty
17:48:53 moving on then
17:48:55 np
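Before the topic change below, it may help to pin down what "generating, logging, and returning" plus "validate it looks like req-$uuid" would mean in practice. This is a minimal sketch under those assumptions only, not the patch tracked by the RFE; the exact regex, header handling, and logging wiring were all left open in the discussion above.

```python
# Minimal sketch of the "generate, log, return" plan discussed above.
# The strict req-<uuid> pattern follows the "validate it looks like
# req-$uuid and move on" suggestion; it is not the shape of any actual
# ironic patch.
import logging
import re
import uuid

LOG = logging.getLogger(__name__)

REQUEST_ID_HEADER = "X-Openstack-Request-Id"
# Accept only req-<uuid>-shaped values; reject anything else (e.g. a 4k blob).
_REQ_ID_RE = re.compile(r"^req-[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}$")


def ensure_request_id(headers):
    """Return a safe request ID: the caller's if well-formed, else a new one."""
    incoming = headers.get(REQUEST_ID_HEADER, "")
    if _REQ_ID_RE.match(incoming):
        return incoming
    return "req-%s" % uuid.uuid4()


def handle(request_headers, response_headers):
    req_id = ensure_request_id(request_headers)
    # In ironic this would ride on the oslo.context RequestContext, so that
    # oslo.log's logging_context_format_string can prefix every line with it.
    LOG.info("[%s] processing request", req_id)
    # ... do the real work ...
    response_headers[REQUEST_ID_HEADER] = req_id  # the "returning" step
    return response_headers
```

Whether an externally supplied ID should ever be trusted at all (versus always generating a fresh one and logging both) was explicitly deferred to the cross-project follow-up.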
17:48:57 #topic open discussion
17:49:01 anyone have a thing?
17:49:03 11 minutes
17:49:31 Any Ironic-related summit talks I should be prepared to vote for? :)
17:49:48 jlvillal: submission deadline is today
17:49:50 jroll: the tinyipa project-config patch got a +2 from Andreas earlier today
17:49:55 devananda: Actually tomorrow
17:49:56 so the list isn't up yet
17:50:00 jlvillal: ah, right
17:50:01 It was extended.
17:50:13 sambetts: cool
17:50:20 There was a discussion on adding a 'tar' format to glance
17:50:30 i want to bring it to your attention
17:50:54 unfortunately I won't be able to attend this mid-cycle, but rpioso will be there
17:50:55 zer0c00l: as in tarball deployments?
17:51:02 Basically glance suggested that we use 'os_tarball' instead of 'tar' to avoid confusion
17:51:05 NobodyCam: yes
17:51:08 cdearborn: It is 'virtual', as a note
17:51:27 zer0c00l: I've been meaning to reply to that thread
17:51:30 And they would approve the glance spec to add 'tar' after ironic approves the tar-payload spec
17:51:39 jroll: sure, please do.
17:51:47 zer0c00l: tl;dr, I don't see the use case? why haven't people asked for this feature in virt?
17:51:48 jlvillal, yup - have partner meetings the entire time
17:51:50 zer0c00l: neat
17:52:08 zer0c00l: I would think if tarballs were super useful like this, people would have wanted them in the past
17:52:14 jroll: clone-a-server?
17:52:25 * devananda is guessing
17:52:34 devananda: the spec says "they're easier to build"; dunno if I buy that
17:52:44 jroll: hm. yea, I don't buy that either
17:52:51 jroll: it is. You installall the packages in a chroot
17:52:54 and compress them
17:53:01 *install all the packages
17:53:07 that's all I've really heard too... "they're easier"
17:53:16 zer0c00l: that's what DIB does ... except it outputs a qcow, not a tgz
17:53:20 http://libguestfs.org/virt-make-fs.1.html
17:53:25 Are they faster too?
17:53:40 I don't see the point in building an entire feature to solve what virt-make-fs already solves
17:54:05 glance has discussed a few times creating an image-format-conversion service
17:54:11 seems like a reasonable add-on to me, not a core feature
17:54:55 zer0c00l: is there a compelling reason why tarballs can't be converted to .img / .qcow?
17:55:11 prior to uploading, I mean
17:55:24 devananda: just curious, can we use .qcow2 as an ironic image format?
17:55:29 yes
17:55:31 zer0c00l: yah
17:55:58 devananda: I haven't tried converting the tar format. At Yahoo we do OS releases as tarballs
17:56:14 we have these tarballs from back in 2008
17:56:27 zer0c00l: I hope you're patching the kernels in there .....
17:56:34 I have to check and see if those tarballs can be converted to qcow2, and the implications
17:56:44 sure we do
17:56:47 :)
17:57:26 it's just easier to add this feature and get it working than converting 20+ images to qcow2
17:57:34 zer0c00l: it is not easier
17:57:46 yeah, definitely disagree
17:57:51 okay
17:57:54 zer0c00l: ++ not easier
17:57:54 zer0c00l: converting 20 images is MUCH better than causing two projects to adopt and carry support for a few image format
17:57:54 totally disagree
17:58:06 :)
17:58:07 okay
17:58:08 s/few/new/
17:58:14 does anyone oppose abandoning this spec, then?
17:58:22 jroll: nope
17:58:42 * jroll will do it shortly if he doesn't hear otherwise
17:58:49 * mgould would still like to understand why it's wanted
17:58:54 if that is the only reason, then yeah, i don't think we need that spec.
17:58:55 i just wish this had happened earlier
17:59:00 this discussion
17:59:00 are they smaller? fs-agnostic? anything else?
17:59:05 *one* minute
17:59:09 they are fs-agnostic
17:59:10 yes
17:59:15 I'd like to get your opinions on https://bugs.launchpad.net/ironic/+bug/1538653 ; would like to get a precedent-setting decision on whether 202+Location header endpoints for async requests are OK/preferred
17:59:15 Launchpad bug 1538653 in Ironic "fix redirection of async endpoints response codes from "202 - Accepted" to "303 - See other"" [Wishlist,Opinion]
17:59:18 you can create any fs you want
17:59:26 zer0c00l: we've been paying attention to the priority work, sorry :(
17:59:27 that is one point i would like to make
17:59:36 so we're out of time
17:59:40 jroll: we need fs-agnostic os images too
17:59:41 let's continue on the spec
17:59:46 jroll: sure
17:59:53 thanks all, good meeting
17:59:57 Thank you
17:59:59 #endmeeting
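A postscript on the tarball thread: the conversion route devananda pointed at uses the virt-make-fs tool from libguestfs (linked above), which turns a root-filesystem tarball into a disk image in one step. Below is a sketch of wrapping it from Python; the file names and sizing are placeholders, and the flags shown are those documented in the virt-make-fs man page.

```python
# Sketch of the conversion path suggested above: turn a root-filesystem
# tarball into a qcow2 image with virt-make-fs (libguestfs), then upload
# it to glance as an ordinary qcow2. Paths and sizing are placeholders.
import subprocess

TARBALL = "os-release-2008.tar.gz"   # placeholder input tarball
IMAGE = "os-release-2008.qcow2"      # placeholder output image

subprocess.run(
    [
        "virt-make-fs",
        "--format=qcow2",   # output disk image format
        "--type=ext4",      # filesystem to create inside the image
        "--size=+1G",       # grow 1 GiB beyond the tarball contents
        TARBALL,
        IMAGE,
    ],
    check=True,
)

# The result is an ordinary qcow2, so the usual glance upload applies, e.g.:
#   glance image-create --name my-image --disk-format qcow2 \
#       --container-format bare --file os-release-2008.qcow2
```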