17:00:23 #startmeeting ironic
17:00:23 Meeting started Mon Dec 7 17:00:23 2015 UTC and is due to finish in 60 minutes. The chair is jroll. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:27 \o
17:00:27 The meeting name has been set to 'ironic'
17:00:30 o/
17:00:30 \o/
17:00:33 hey there everyone :)
17:00:34 o/
17:00:36 \o
17:00:39 o/
17:00:42 o/
17:00:43 o/
17:00:44 as always our agenda is here:
17:00:44 o/
17:00:46 #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
17:00:51 o/
17:00:52 o/
17:01:03 \o
17:01:04 o/
17:01:08 let's jump right in!
17:01:14 #topic Announcements / Reminders
17:01:20 splash...
17:01:21 a couple of things here:
17:01:48 1) ironic 4.2.2 was released friday with a very important security fix, thanks to brad morgan for reporting that
17:01:57 o/
17:02:05 2) ironic 4.3.0 was released today with a whole bunch of awesomeness, including that bug fix
17:02:14 email announcements coming today for those
17:02:18 nice
17:02:24 <[1]cdearborn> o/
17:02:29 3) tripleo CI jobs are coming back - please do pay attention to those
17:02:32 o/
17:02:32 clap clap. (what took so long?)
17:02:40 anyone have other announcements or questions on those?
17:02:58 jroll: ^ question on 2
17:03:10 jroll: anything we could have done to speed up the release?
17:03:42 jroll: i mean from the time the security bug patch was out etc. not sure what happened.
17:03:45 I didn't push the release button until after the security fix had merged, and that got stuck on a devstack change for about 24hrs
17:03:48 ah.
17:03:54 o/
17:04:10 I have a question on 2) too, where is that going to be stored, stable/4.3.0 ??
17:04:18 yeah, we had a bug in devstack where the erase_devices clean step (which is SUPER SLOW in devstack) wasn't configured to not run
17:04:29 and so when we fixed the bug to actually run clean steps, it blew up in the gate
17:04:51 jroll: oh, i thought that devstack-turn-off-clean-step got merged last Thurs. guess it took longer :-(
17:04:52 sambetts: no stable branch for the intermediate releases, just a tag. stable/mitaka will be a thing at the end of the cycle
17:05:07 yeah, and we need to figure out a way to test the clean steps on gate so we don't have another bug like that sneaking in the code again
17:05:15 do we have a bug opened for that ^ ?
17:05:23 o/
17:05:26 rloo: well, the gate was slow, so the patch merged friday. and then release team had questions about the version number, so release got there today
17:05:27 lucasagomes: ++ i was just going to ask!
17:05:35 rloo: the gate queue was fairly long and some rechecks for other issues were needed
17:05:39 lucasagomes: I don't believe so, mind filing one?
17:05:40 +1 maybe partial cleaning on DS
17:05:50 jroll, I will do
17:05:55 jroll, devananda: ok, not much we could have done then. should be smoother next time :)
17:05:58 thanks dude
17:06:03 no problemo!
17:06:04 rloo: hope so :D
17:06:15 anything else here?
17:06:36 #topic subteam status reports
17:06:41 as always, reports are here:
17:06:43 #link https://etherpad.openstack.org/p/IronicWhiteBoard
17:07:01 I'll give folks a minute to review those, or fill them out :P
17:07:05 o/
17:07:40 what's the critical bug?
17:07:56 rloo, IPA gate breakage
17:08:03 https://bugs.launchpad.net/ironic-python-agent/+bug/1522756
17:08:03 Launchpad bug 1522756 in ironic-python-agent "pyudev 0.18 changed exception for from_device_file, breaking backward compatibility (and IPA unit tests)" [Critical,In progress] - Assigned to Dmitry Tantsur (divius)
17:08:15 dtantsur: ok, thx. does it 'just' need reviews?
17:08:25 I've got a +2 on ^
17:08:31 rloo, yes, and backport to liberty
17:08:39 simple change, we should get that landed
17:08:50 I'm trying to communicate to the upstream person that this thing should not repeat too often
17:09:28 \o/ for tempest plugin stuff in ironic
17:09:31 dtantsur: guess that's why we have tests :)
17:09:39 I'm also excited to see grenade and devstack code move to our tree
17:10:00 \o/ for bifrost inspection support
17:10:35 krtaylor: updating the third party CI spec this week? I'd like to land that asap
17:10:38 dtantsur: I added a comment to the github issue asking him not to move the discussion to email. Hopefully having >1 person paying attention will help them understand how much we don't want that to break us again :)
17:10:47 i think nova spec freeze was last week? does anyone know which nova specs got approved that are related to ironic?
17:10:58 jroll, me too, I have been very sick
17:11:15 jroll, I'll have a new one within the hour
17:11:21 JayF, yeah, I've reached him/her via email just because I expected an internal email to speed the process a bit (and it really did)
17:11:24 krtaylor: oh, I'm sorry :( I could update it too if you aren't better yet
17:11:34 I'm almost done
17:11:35 JayF, I also suggested my help, if needed
17:11:48 rloo, yes the freeze was last week... I will check the ironic spec
17:11:52 rloo: nova-specs that we care about right now are 1) multiple compute host, and 2) the networking support stuff
17:11:57 wrt CI requirements being communicated with vendor teams -- all the vendors that have intree drivers in ironic?
17:12:26 rloo: all of them that we could find contact info for (and it also went to dev list)
17:12:43 rloo, all that are listed in the ci etherpad
17:12:47 * krtaylor finds that link
17:13:03 rloo: thingee will be keeping on top of that to make sure they are notified - voicemails and all. doing everything he can short of serving them court papers :P
17:13:19 jroll, krtaylor: ok, that's what i wanted to know. thx!
17:13:24 np :)
17:13:53 so, looking at these, we still have a ton of work to do this cycle. we're making good progress but need to keep going hard on all of our priorities
17:14:08 thanks to everyone for the great work so far, and please do keep it up :)
17:14:35 let's be sure to stay on track and not get distracted by all the other shiny things
17:15:00 anything else on the subteam reports?
17:15:05 .. and shiny gate breakages :D
17:15:11 hope not
17:15:24 All the webclient things are in the etherpad if anyone has questions :)
17:15:51 wrt docs, i'm curious to know what 'not in the scope for official page' means. i hope they are thinking about how to include docs from all projects...
17:16:23 rloo: I'm going to send a nice long email about that today - sounds like they are meant to cover the "base" openstack services
17:16:51 jroll, liliars: thx for looking into it and pushing for them to think of the rest of the noncore services
17:16:58 then "developer" in http://docs.openstack.org/developer/ironic/ is not entirely true, right?
17:17:04 rloo: apparently only 'core projects'
17:17:11 dtantsur: right, need to re-org and re-think some things
17:17:14 rloo: yeah np, will keep digging :)
17:17:18 ouch
17:17:32 ditto for http://docs.openstack.org/developer/ironic-inspector/
17:17:35 dtantsur: no, developer is not entirely true. when we put together our docs, it was meant to be temporary until we graduated and our docs were included with 'the rest' of the docs :-(
17:17:41 I think the real issue here is that the docs project hasn't fully embraced the big tent yet (and with good reason, it's hard)
17:18:08 they'll have to double the number of people, I guess...
17:18:11 I had some conversations with docs team in vancouver, but we haven't followed through much from either side on those changes
17:18:33 oooo, i see a cross-project spec coming out of this :D
17:18:39 they don't plan to grow support within the docs group to write or maintain docs for every project in the big tent
17:18:41 rloo: yeah, basically :)
17:18:54 but like other cross-project things, they intend(ed) to grow tooling to enable other projects
17:19:00 shall we move on? we aren't going to solve this problem today :P
17:19:05 i don't expect the docs team to write the docs but i would like some way to get our stuff integrated.
17:19:15 yup, let's move on :)
17:19:18 yeah, that's my goal as well
17:19:21 cools
17:19:22 rloo: agreement on that was basically the outcome 8mo ago
17:19:28 #topic notifications spec - use cases and configurability
17:19:31 #link https://review.openstack.org/#/c/248885/
17:19:34 mariojv: this is you
17:19:49 for folks listening, this is a stuck spec
17:19:59 what's the 'stuck' part?
17:20:02 hi, yes - there was a discussion brought up in irc friday about the different use cases that people would have when notifications are added to ironic
17:20:09 that's what mariojv is going to tell us :)
17:20:24 the important part we were discussing is the payload section (L94)
17:20:57 the question is basically what we would want to send in the payload of a notification for something like node state changes
17:21:23 eg, the entire (sanitized) node object vs. merely the fields relevant to that event
17:21:54 right ^ and there are 2 different use cases that we've considered so far
17:21:56 mariojv: for state changes, original prov/target_prov states, new prov/target_prov states. do we need to send the entire node object?
17:21:58 I thought we would send only the relevant fields (previous state, new state, datetime etc)
17:22:01 * jroll apologizes for not having a chance to catch up on context here
17:22:18 * lucasagomes does the same as jroll
17:22:33 btw, the friday discussion is here:
17:22:35 #link http://eavesdrop.openstack.org/irclogs/%23openstack-ironic/%23openstack-ironic.2015-12-04.log.html#t2015-12-04T21:32:34
17:22:52 I think it boiled down to who the intended consumer is. A) a logging or debugging system; B) another service that will take some action based on the event
17:22:58 one case is where we would want to keep a point in time snapshot of a node as it goes through changes, so that we could store that and keep it in something like elasticsearch to track anomalous behavior
17:22:59 The crux is: If you want to use the notification for debugging, you want as much of the node object as possible (with secrets hidden)
17:23:05 what's the purpose of a notification? to notify. not to contain all info needed for debugging?
17:23:10 another case is B, like devananda mentioned
17:23:12 devananda: I agree with that assertion
17:23:16 if you're using the notification as a "trigger" for other services, you want to keep it slim and only include desired fields
17:23:42 does it make sense to make it configurable? (the payload info?)
17:24:32 I'd say that for production we need as small a payload as possible, so that we don't overload anything (both queue and consumers)
17:24:34 I'd prefer if the payload wasn't configurable, that feels like a world of pain
17:24:40 my assertion is that our primary consumers of this feature are ironic-inspector and nova, and therefore it is (B)
17:24:42 jroll: ++
17:24:52 dtantsur: ++
17:24:54 yeah configuring the payload can be painful because of the versioning etc
17:25:01 scaling rabbit is one of the biggest challenges operators have
17:25:05 ok, nix on configurability.
17:25:12 let's not put more large objects on that bus
17:25:14 then I agree with dtantsur, small payload.
17:25:14 ++
17:25:18 but I think that as for notifications we should keep it as small as possible
17:25:36 e.g. for state change I'd send 1. UUID, 2. previous state, 3. new state, or something like that (maybe even without #2)
17:25:39 so "small payload" means just the relevant changes to a given object?
17:25:39 for debugging, one should look at the logs, no?
17:25:47 dtantsur, yeah, and time that occurred
17:25:49 B makes more sense to me too
17:25:55 jroll, IMO yes
17:25:57 lucasagomes, time is always included in the notification IIRC
17:26:08 dtantsur: i think we would want #2
17:26:11 right yeah it should
17:26:12 to be clear, there are debugging tools that people use today that rely on nova notifications
17:26:17 that are SUPER useful
17:26:24 and those have a ton of data
17:26:28 dtantsur: old and new state are both important to include. we can't assume a listener has processed every message sequentially
17:26:31 jroll, nova sends the whole object?
17:26:37 jroll: that's right. something like instance.create has a lot of data
17:26:55 lucasagomes: I'm not sure if it's the whole object, but it's a significant amount
17:27:04 so instance.create is an interesting example. it makes sense for that to include a lot of data about the instance that was created
17:27:06 mariojv had provided this link wrt what nova does: https://wiki.openstack.org/wiki/SystemUsageData#compute.instance.create..7Bstart.2Cerror.2Cend.7D:
17:27:08 i don't think it's the entire object (would have to check), but it has things like host, image url, tenant, flavor id
17:27:08 what about instance.pause ?
17:27:23 mariojv: all of that is related to the instance creation, so that makes sense
17:27:25 mariojv: got an example instance.create handy?
17:27:30 devananda, right, yeah it sounds like it should send the whole instance as it was created
17:27:30 mariojv: sorry, instance.update
17:27:40 but for smaller things, actions, such as instance is now deployed
17:27:46 jroll: not a raw one, would have to sanitize a bunch of things
17:27:47 nova does send almost everything ?
17:28:11 this is another interesting example: https://wiki.openstack.org/wiki/SystemUsageData#compute.instance.exists:
17:28:14 mariojv: PM it to me and I'll scrub real quick?
17:28:24 jroll: sure
17:28:32 oh, there's the schema
17:28:42 if you look in instance.exists, i think there are a lot of fields that an operator may find useful that aren't directly related to just notifying external services that instances exist
17:28:49 right, so instance.update has a decent amount of data
17:29:02 BUT
17:29:02 we don't need to be the same as nova
17:29:08 if we can provide the diff, so to speak, that may be good enough
17:29:14 mariojv: but does instance.exists provide 'everything' that is available about the instance?
17:29:49 i agree with jroll that we don't need to be the same as nova - i just wanted to bring up the point that it's much more than the minimal needed info in a lot of cases
17:30:17 rloo: i don't think it's everything, but i would have to check more in depth
17:30:23 so I agree we should try to keep it small, but relevant
17:30:37 and not the entire node object
17:30:37 mariojv: i am wondering if what we return for an eg 'ironic node-show' would be enough + any other info pertaining to the event
17:30:48 and I think what is relevant depends on each notification
17:30:55 each notification type*
17:31:00 so maybe we handle these case by case
17:31:01 rloo: i think that would be enough, and i agree it depends on the notification
17:31:08 jroll, ++ yeah it sounds like we should see what notifications we are going to start sending at first
17:31:11 and if a user finds it lacking, we talk about adding to it
17:31:14 for .error notifications, we would probably want more than a common notification that occurs when things are normal
17:31:17 and see what fields are relevant to each
17:31:17 right, so we always return a minimum set.
17:31:28 we have versions for a reason \o/
17:31:31 mariojv: ++
17:32:14 so, does anyone violently disagree with NOT having the entire node object in the payload?
17:32:17 i mean, we're talking about emitting events, but with an eg 'ironic node-show' you should be able to know what is going on with that node from that info.
17:32:25 so in the spec, we can outline some guidelines. 1) for "normal events", send a minimal amount of data that indicates what changed 2) for anomalous events, send enough information to allow debugging with an external tool to occur
17:32:38 +1
17:32:45 sounds good to me
17:32:59 who wanted the entire node object and are they here/ok with what we've discussed?
17:33:23 i think it would be painful to have every notification outlined beforehand in a spec. rloo: i think that's enough information - it's not really "minimal" but it's not everything either
17:33:33 I feel like mariojv was the one that wanted the entire node object :)
17:33:44 ha ha. ok, we're good then!
17:34:18 jroll: yeah, i put it there initially because then notification versions would update automatically if additional fields we wanted were added to the node object
17:34:38 mariojv: by 'anomalous' you mean an error state?
17:34:39 mhm
17:34:41 but i think the disadvantages like overloading a rabbit queue would outway having to maintain notification and node object versions separately
17:34:47 devananda: yes
17:34:54 *outweigh
17:34:57 mariojv: that isn't good -- it will break consumers if a single notification class sometimes has different information
17:35:16 eg, the notification for provision state change: it must always have the same payload (for the same version)
17:35:17 devananda: this would be something like baremetal.node.error
17:35:25 it can't have more data if the new state is error
17:35:40 aren't errors actually part of node transitions?
17:35:41 jroll: there isn't an "error" notification proposed, afaik
17:35:45 dtantsur: exactly
17:35:45 devananda: it can still be versioned, we could have different versioned objects (with version included in payload) for .error and .state_change or whatever we call the normal one
17:35:52 devananda: there isn't a node state transition proposed, either
17:35:53 mariojv: nooo :(
17:36:06 mariojv: versions are not a way to indicate type
17:36:13 devananda, we can have an "error_text" field with None value if the transition is not an error
17:36:25 a "deploy error" would be, AIUI, a state_change notification and an error notification
17:36:32 not one with extra data
17:36:43 jroll: that doesn't seem to be what mariojv was proposing, though
17:36:58 devananda: that's how I interpreted it
17:37:02 devananda's right, i was thinking we just send different data in different cases
17:37:24 mariojv: you mean like what nova does?
17:37:27 oh
17:37:27 in the interest of time, can we solve the anomalous case in the spec?
17:37:38 where we have more than 30 minutes to flesh it out?
17:37:39 i do prefer the idea of having a separate error notification though
17:37:39 rloo: yes
17:37:45 fwiw, I do not feel that error is an anomalous case
17:37:51 but let's continue in the spec, I guess
17:37:52 i'm fine with that jroll
17:38:15 cool
17:38:15 thank you
17:38:31 we're mostly agreed here, I'm going to move on because this next topic is fun even if I haven't formed all my thoughts on it yet
17:38:56 :)
17:39:00 #topic stop using milestones and/or blueprints?
17:39:06 #link http://lists.openstack.org/pipermail/openstack-dev/2015-December/081435.html
17:39:12 * jroll hands dtantsur the microphone
17:39:21 thanks
17:39:33 tl;dr: we're on our own with blueprints, release team does not do anything with them any more
17:39:45 s/blueprints/milestones
17:40:08 create, close, target - we have to do it ourselves, if we feel like
17:40:29 I personally don't (for ironic-inspector), and I'm not sure we want it for ironic
17:40:44 next thought is if we don't use milestones, it reduces value for blueprints
17:40:53 I'd actually like to see all/most projects doing a similar thing, whatever 'similar' is.
17:41:01 I've heard that neutron has replaced blueprints with bugs with "RFE" in the title
17:41:06 sambetts, right ^^
17:41:46 so we have to decide whether we continue doing the same thing without any motivation from the release team
17:41:49 jroll: mariojv rloo: I was the one who originally was supporting full node objects -- I'm very OK with this outcome, just wanted to make sure we acknowledged the use cases we were supporting/rejecting
17:41:51 have they fully replaced BPs with RFEs, or only some?
17:41:59 rloo, right, and if there's no standard yet we should at least try to do the same between ironic projects (inspector, ironic, bifrost etc...)
17:42:06 lucasagomes: ++
17:42:12 JayF: great!
17:42:27 I think the release liaisons for the projects should talk and decide what would be best?
17:42:38 would it be possible to have a meeting with all/most of them?
17:43:14 lucasagomes, well, as to inspector, I'd like to stop doing it, as it consumes my time without a big benefit
17:43:28 unless someone from the inspector team would volunteer, which I doubt
17:43:43 * rloo never could figure out (or probably didn't care) what the blueprint/milestones/etc were for and how necessary they were.
17:43:46 jroll, not sure, that's what sambetts told me. I still see some mitaka blueprints on their page
17:43:50 so, I agree, I don't think blueprints or milestones in launchpad add any value, and they are a pain
17:43:50 dtantsur, right, we are aiming to have a standard across projects right?
17:44:09 I think you guys should push this idea forward since it's time consuming with no real benefits
17:44:13 but there is value in doing the same thing as other projects
17:44:20 lucasagomes, well... release team used to set such a standard, now they stopped doing that
17:44:44 lucasagomes, jroll, re doing the same thing: we're not following the same release pattern, for example..
17:44:50 (we only match swift)
17:45:02 dtantsur: I'm thinking from a contributor's perspective - someone wants to propose a feature, how do they do so?
17:45:09 BPs have provided a few benefits, with a lot of overhead, but the only one left seems to be the link between gerrit reviews and a BP
17:45:17 jroll, but with RFE in the title, which is want they already do, by the way
17:45:20 One thing that was useful (or was it?) was that at release, there was a link where you could see which bps were completed. now we have reno, which doesn't contain links to anything like specs
17:45:22 s/want/what/
17:45:38 eg, if I put "blueprint: foo" in a commit message, a link to that review gets added to the BP page
17:45:49 devananda, the same works for bugs
17:45:54 dtantsur: totally does :)
17:46:02 devananda, yeah that's useful, but I'm ok not having it if that's the only benefit
17:46:03 rloo, it's our fault that we don't put links to specs in reno notes ;)
17:46:04 we (bifrost) use blueprints
17:46:12 just saying that's the only thing we're using it for right now, I think
17:46:30 cinerama: are you using specs as well?
17:46:33 no
17:46:35 dtantsur: do you mean that casual contributors already propose a feature by filing a bug? (I agree, many do)
17:46:36 dtantsur: oh, yes, i just want to make sure we recognize what we 'got' with the old/existing system, and what we'll get with the new way.
17:46:45 jroll, yes, they do
17:46:55 right, ok
17:47:13 it is true that it isn't always clear if something is a feature or a bug. and then, whether it is a big enough feature that requires a spec or not.
17:47:30 jroll: I have seen many new contributors start with a very small BP proposal and get "stuck" at that point
17:47:37 FYI, here is what Neutron is doing http://lists.openstack.org/pipermail/openstack-dev/2015-December/081458.html
17:47:39 devananda: yeah, I see both
17:47:45 right, we get a lot of bugs with title "XXX does not support YYY" or "XXX does not have YYY"
17:47:45 pc_m, thanks
17:47:50 #link http://lists.openstack.org/pipermail/openstack-dev/2015-December/081458.html
17:48:09 pc_m, thanks a lot, that matches what I was told
17:48:48 They've been doing RFEs and if it is complex, follow up with a blueprint.
17:49:20 right, so I don't see much reason not to do that same thing - does anyone disagree?
17:49:37 I think it's worth trying
17:50:28 ++
17:50:31 but also talk to the neutron release liaison, see how it's going for them
17:50:35 jroll: what do we do with all the currently-registered BPs?
17:50:52 #link http://docs.openstack.org/developer/neutron/policies/blueprints.html for details on RFEs and BPs
17:50:52 devananda: move them? kill with fire?
17:50:56 I'd convert them to RFE's and CC the original reporter
17:50:58 you've been cleaning those up, I think
17:51:05 but there are still 144 open BPs
17:51:16 devananda: I started to and lost most of my hope for the world
17:51:23 jroll: welcome to my world :)
17:51:28 144 damn
17:51:39 O_o
17:51:56 how did neutron transition to the new way?
17:52:23 idk, I can chat more about it with mestery and armax
17:52:54 ++
17:53:10 IIRC, they started to do RFEs and BPs were optional. Did it at the start of a release.
17:53:49 jroll: fyi, we can disable blueprints in LP, if or when some transition is complete
17:53:55 indeed
17:53:55 So there was a period of time where you could have BPs.
17:54:46 grace / deprecation period makes sense
17:54:56 ok, so should we agree to give it a try?
17:54:57 pc_m: about how long was that? a full cycle, or ..?
17:55:01 * dtantsur will also disable blueprints for inspector eventually, if nobody from the team volunteers for keeping them
17:55:15 or I should talk to neutron folks more, or?
17:55:16 devananda: I think they started the RFEs in Liberty.
17:55:36 Talk to mestery. He'll know.
17:55:41 cool
17:56:05 thanks for the info pc_m :)
17:56:18 jroll: sure
17:56:56 ok, cool, I'll talk to people and reply to dtantsur
17:57:02 to dtantsur's email, I should say
17:57:06 thanks! :)
17:57:22 thank you for bringing it up :)
17:57:38 #topic open discussion
17:57:38 have about two minutes left if anyone has something quick
17:57:54 I have one thing
17:57:57 jroll, any updates on the virtual midcycle planning?
17:58:03 hmm, go ahead, smoriya_
17:58:06 dtantsur: nope, nothing yet
17:58:12 thanks, dtantsur
17:58:19 I'd like to ask you to review https://review.openstack.org/#/c/200496/
17:58:28 This is the spec for boot from volume support
17:58:42 quick request, we need to make the json fields indexable in the database to build the filter/claim api etc... So if you are interested in that please take a look https://review.openstack.org/#/c/253605/
17:58:49 To support boot from volume, we need to enhance both Ironic and Nova (ironic driver)
17:59:01 That means we need to get the blueprint in Nova approved.
17:59:08 smoriya_, on my list, will try to get to it tomorrow (it's evening for me)
17:59:13 as you know nova-spec freeze was last week
17:59:18 * lucasagomes needs input from people that understand databases better
17:59:21 but there is an exception process that closes 12/11.
17:59:33 dtantsur: great! thanks!
17:59:42 I'd like to propose nova's blueprint as an exception
17:59:53 I don't think we'll be able to finish that feature in nova this cycle, even if the ironic spec lands today
18:00:10 I talked with johnthetubaguy and he said that we don't need a spec for it in Nova but need the blueprint approved.
18:00:12 but I'd love to get that spec done anyway and start working on it
18:00:25 fg
18:00:34 alright, out of time. thanks everyone!
18:00:37 #endmeeting
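
For reference, a minimal sketch (in Python) of the "small payload" provision-state-change notification discussed in the notifications topic above, assuming the fields agreed on in the meeting (node UUID, previous state, new state, time, versioned payload) plus the optional error_text dtantsur floated. Every field and event-type name here is illustrative only, not taken from the actual spec under review.

    # Hypothetical example only -- names are illustrative, not the ones
    # defined in the ironic notifications spec (https://review.openstack.org/#/c/248885/).
    state_change_notification = {
        "event_type": "baremetal.node.provision_state_change",  # assumed event name
        "payload_version": "1.0",  # versioned payload, not the full node object
        "payload": {
            "uuid": "5a48c3e2-0d4f-4f2a-9b8e-3f1c00000000",   # node the event is about (placeholder UUID)
            "previous_provision_state": "deploying",           # old state; kept because listeners may miss messages
            "new_provision_state": "active",                   # new state
            "event_time": "2015-12-07T17:25:36Z",              # when the transition happened
            "error_text": None,                                # only populated for error transitions (assumption)
        },
    }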
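
Similarly, for the Gerrit-to-Launchpad linking devananda and dtantsur mention (reviews showing up on a blueprint or bug page), the commit-message footers conventionally look roughly like the sketch below; the blueprint name and bug number are placeholders, and the exact recognized forms are described in the OpenStack Git commit message guidelines.

    Add example feature X to the conductor

    Short body explaining the change, followed by footers that create the
    Launchpad links:

    Implements: blueprint example-feature-x
    Closes-Bug: #1234567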