16:01:35 <jgriffith> #startmeeting cinder
16:01:36 <openstack> Meeting started Wed Feb 19 16:01:35 2014 UTC and is due to finish in 60 minutes.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:37 <coolsvap> Hello
16:01:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:39 <openstack> The meeting name has been set to 'cinder'
16:01:44 <thingee> o/
16:01:55 <xyang2> hi
16:01:55 <jgriffith> pheww... quite a week
16:02:00 <jgriffith> and it's only Wed :)
16:02:13 <jungleboyj> Indeed.
16:02:18 <jgriffith> Ok... we've got a number of things on the agenda so let's get on it
16:02:25 <jgriffith> #topic I3 Status check/updates
16:02:46 <jgriffith> IMO there's a ton of cruft in here
16:03:06 <jgriffith> #link https://launchpad.net/cinder/+milestone/icehouse-3
16:03:35 <thingee> I agree.
16:03:42 <jgriffith> The BP and bug list should be frozen at this point
16:03:47 <jgriffith> no new proposals
16:03:59 <jgriffith> bugs we can slip to RC's of course
16:04:06 <jgriffith> but feature proposals are done
16:04:17 <jgriffith> so let's focus on those for now
16:04:30 <jgriffith> The way I've been doing this is to sort by priority
16:04:30 <thingee> with the number of reviews already in, I worry about the things that are just "started"
16:04:40 <kmartin> should any BP not in Needs Code Review be pushed to Juno?
16:04:41 <jgriffith> thingee: understood
16:04:55 <jgriffith> thingee: I think we may want to propose dumping some of these
16:05:01 <jgriffith> DuncanT: let's start with yours
16:05:09 <jgriffith> DuncanT: are you actually working on this?
16:05:23 <jgriffith> #link https://blueprints.launchpad.net/cinder/+spec/filtering-weighing-with-driver-supplied-functions
16:05:24 <DuncanT> Yes. I've got code that works but needs tidying up for submission
16:05:42 <DuncanT> Realistically, if it isn't in tomorrow it is not going to be in
16:05:51 <jgriffith> DuncanT: Ok... fair enough
16:06:01 <jgriffith> DuncanT: I'm going to hold you to that when I wake up in the AM :)
16:06:09 <DuncanT> Fair enough
16:06:29 <jgriffith> #link https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api
16:06:52 <jgriffith> rohit404: is this you?
16:06:56 <jgriffith> ^^
16:07:02 <jgriffith> or dosaboy ?
16:07:13 <jgriffith> or nobody
16:07:29 <rohit404> jgriffith: not me
16:07:41 <jgriffith> rohit404: sorry.. Ronen Kat
16:07:50 <jgriffith> anyway... it's been proposed since Jan
16:08:01 <jgriffith> https://review.openstack.org/#/c/69351/
16:08:17 <jgriffith> IMO this one is higher on the priority list for this week
16:08:49 <jgriffith> dosaboy: I think we need to break the BP though between the metadata and export/import
16:08:49 <jungleboyj> Yeah, I have a list of reviews to take a look at.  I should add this.
16:08:55 <thingee> got some drafts on that one.
16:09:23 <thingee> mostly the export-import body key confuses me
16:09:53 <jgriffith> thingee: can you point us to the part you're thinking of?
16:10:04 <thingee> https://review.openstack.org/#/c/69351/5/cinder/api/contrib/backups.py
16:10:07 <thingee> line 325
16:11:15 <avishay> whoops sorry i'm late
16:11:22 <jgriffith> dosaboy: doesn't seem to be around
16:11:26 <dosaboy> jgriffith: implementing import/export without metadata would kind of be a regression since you would not be able to import/export e.g. bootable volumes
16:11:32 <jgriffith> dosaboy: oh... there he be
16:11:36 <dosaboy> aye aye
16:12:12 <dosaboy> i've not had a chance to review that patch yet tbh
16:12:23 <jgriffith> dosaboy: yeah
16:12:35 <jgriffith> so ok, let's review and see if we can get info from Ronen
16:12:51 <jgriffith> dosaboy: my question is the missing parts to complete the BP
16:13:11 <jgriffith> dosaboy: I agree we need to get the metadata import/export landed still
16:13:34 <jungleboyj> jgriffith: I will make sure Ronen is aware we have Qs.
16:13:44 <jgriffith> dosaboy: It's just unclear what actually constitutes this bp being "implemented"
16:13:48 <dosaboy> jgriffith: which bp?
16:13:59 <jgriffith> dosaboy: https://blueprints.launchpad.net/cinder/+spec/cinder-backup-recover-api
16:14:48 <ik__> I'm a newbie here :)
16:15:08 <dosaboy> jgriffith: ok i'll see if I can get that clarified
16:15:17 <jgriffith> dosaboy: thank you sir
16:15:19 <dosaboy> ik__: welcome
16:15:38 <jgriffith> dosaboy: I'd like to separate it out to what we're going to do in Icehouse and reference maybe what's still ongoing
16:15:52 <dosaboy> ok sure
16:15:53 <jungleboyj> ik__: Welcome to the party!
16:15:55 <jgriffith> we've got a number of bp's where we aren't very clear on "when is it done"
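For context on the export/import discussion above: the patch under review (https://review.openstack.org/#/c/69351/) is about exporting a backup "record" that can later be re-imported, e.g. after a cloud is rebuilt. The sketch below only illustrates that idea, assuming a record made of a backup-service identifier plus an opaque encoded blob; the key names and helpers are illustrative, not the final API from the review.

    import base64
    import json

    def export_record(backup):
        # Package up everything another deployment needs to re-import this
        # backup: which backup service wrote it, plus an opaque blob that
        # only that service needs to understand.
        payload = json.dumps(backup).encode('utf-8')
        return {
            'backup_service': backup['service'],
            'backup_url': base64.b64encode(payload).decode('ascii'),
        }

    def import_record(configured_services, backup_service, backup_url):
        # Refuse the import if this deployment doesn't run the service that
        # created the backup, then rebuild the backup entry from the blob.
        if backup_service not in configured_services:
            raise ValueError('backup service %r is not configured here'
                             % backup_service)
        return json.loads(base64.b64decode(backup_url).decode('utf-8'))

    record = export_record({'id': 'b1', 'service': 'swift', 'volume_id': 'v1'})
    restored = import_record(['swift'], record['backup_service'],
                             record['backup_url'])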
16:16:18 <jgriffith> Next...
16:16:31 <jgriffith> bswartz: you here?
16:16:52 <jgriffith> #link https://blueprints.launchpad.net/cinder/+spec/multiple-capability-sets-per-backend
16:17:15 <jgriffith> BP proposed and approved in Dec but no activity
16:17:24 <jgriffith> considering this will not make it
16:17:30 <kmartin> nope, push to Juno, next
16:17:33 <jgriffith> I'll get with bswartz when he's around
16:17:36 <avishay> jgriffith: i think we spoke about this a few weeks ago and bswartz said it was more complicated than he thought, and it would be juno
16:17:52 <DuncanT> He's said before that he's stuck trying to get a clean implementation
16:18:01 <jgriffith> K... done
16:18:04 <jgriffith> thanks
16:18:25 <bswartz> jgriffith: yes
16:18:35 <jgriffith> bswartz: too late we figured it out without you :)
16:18:40 <jgriffith> bswartz: shout if we're wrong
16:18:45 <jgriffith> Next...
16:18:46 <bswartz> hah yes thank you
16:18:51 <jgriffith> #link https://blueprints.launchpad.net/cinder/+spec/per-project-user-quotas-support
16:18:58 <jgriffith> I'm not happy with this one
16:19:02 <jgriffith> two things...
16:19:06 <jgriffith> 1. It's ugly
16:19:10 <jgriffith> 2. Is it really needed
16:19:40 <jgriffith> https://review.openstack.org/#/c/66772/
16:19:50 <jgriffith> Don't know if anybody else has any thoughts on this?
16:20:05 <jgriffith> I have concerns about it for a number of reasons
16:20:26 <DuncanT> It certainly is ugly, and the quota code has proven to be fragile in the past....
16:20:45 <jgriffith> not the least of which being that we have existing quota consistency issues, and piling user quotas (which I don't know how valuable they are anyway) on top makes things worse IMO
16:21:02 <jgriffith> and I'm not sure about the implementation anyway
16:21:17 <jgriffith> Anybody object to pushing it?
16:21:24 <jgriffith> I mean, pushing it out
16:21:26 <DuncanT> I can see the value of the feature but I think we should punt to J since the implementation is not ready
16:21:40 <jgriffith> anybody else?
16:21:41 <avishay> jgriffith: how can you object to its usefulness?  nova has it! :)
16:21:42 <jgriffith> DuncanT: thanks
16:21:51 <jgriffith> avishay: very very poor argument
16:21:58 <jgriffith> avishay: although it's getting used more and more lately
16:22:07 <jgriffith> thingee: has a great cartoon of that
16:22:07 <avishay> jgriffith: agree.  we should invest a bit of effort to clean up quotas first.
16:22:10 <coolsvap> jgriffith: :)
16:22:12 <ameade> what is the usefulness of it exactly? why don't project level quotas suffice?
16:23:11 <DuncanT> ameade: Allowing the tenant to do finer grained quotas inside their tenant is something some users like, e.g. in a public cloud context - means one account can be shared more widely
16:23:52 <avishay> I think push to Juno .. given that quotas are a bit broken, this will also be broken
16:24:00 <jgriffith> Done
16:24:05 <jungleboyj> avishay: +2
16:24:29 <jgriffith> Sorry... I'm slow because I'm typing notes, updating reviews and bp's :)
16:24:42 <jungleboyj> :-)
16:24:45 <avishay> jgriffith: your secretary took the day off? :)
16:25:05 <dosaboy> hehe
16:25:10 * jungleboyj is slow because I am laid out with a stomach bug.  Was so nice of my boys to share.
16:25:29 <jgriffith> avishay: yeah... :)
16:25:37 * DuncanT is just slow.
16:25:40 <avishay> hah
16:26:11 <jungleboyj> :-)
16:26:27 <jgriffith> Ok, there's two more mediums that I think we need to talk about
16:26:35 <jgriffith> #link https://blueprints.launchpad.net/cinder/+spec/local-storage-volume-scheduling
16:26:38 <jgriffith> and
16:26:53 <dosaboy> avishay: you think this is gonna make I-3 and could it include the meta support? (see comment in BP)
16:27:01 <dosaboy> this being - https://review.openstack.org/#/c/73456/
16:27:11 <jgriffith> #link https://blueprints.launchpad.net/cinder/+spec/cinder-force-host
16:27:22 <jgriffith> both have code proposed
16:27:33 <jgriffith> both are kinda fugly
16:27:35 <avishay> dosaboy: TSM driver will not have metadata support in icehouse
16:28:16 <avishay> jgriffith: can we start with the 2nd (force host)?  i think that's easier because nobody liked it
16:28:18 <DuncanT> I'm extremely concerned that the local scheduler is not solving a clearly defined problem and should be thought about more carefully
16:28:28 <jgriffith> avishay: :) sure
16:28:35 <dosaboy> avishay: okey
16:28:37 <jgriffith> avishay: I think that impl gets kicked
16:28:42 <coolsvap> jgriffith: i think we had a round of discussions for force-host
16:28:49 <DuncanT> I like the idea of force host, but not if the implementation is anything other than clean, simple and unobtrusive
16:28:54 <jgriffith> avishay: but we could look at doing an exception to still get the feature into Icehouse
16:29:04 <jgriffith> DuncanT: ^^
16:29:27 <bswartz> I don't see why force host is needed when volume types can achieve the same effect?
16:29:29 <jgriffith> and figure out who/when somebody rewrites it
16:29:31 <DuncanT> The proposed implementation is fugly
16:29:49 <DuncanT> bswartz: admin/testing/similar - nothing tenant facing
16:29:50 <jgriffith> bswartz: yes, that's a debatable point
16:30:01 <avishay> I think that winston put forth some very good objections in the review
16:30:06 <jgriffith> So there are some "holes" here as well
16:30:10 <avishay> bswartz: i agree
16:30:18 <jgriffith> keep in mind we don't expose "host" anywhere really either
16:30:21 <bswartz> DuncanT makes an excellent point
16:30:23 <jgriffith> at least not a mapping
16:30:46 * jgriffith only sees this as something for admin, and still limited
16:30:55 <bswartz> I do get the need for testing
16:30:56 <jgriffith> I think we need to build some better admin tools
16:31:11 <DuncanT> Certainly it isn't worth ugly code
16:31:16 <jgriffith> so I guess this falls lower on priority list
16:31:30 <avishay> I could live without this ever being implemented
16:31:40 <avishay> And if yes, clean and admin-only
16:31:45 <jgriffith> DuncanT: bswartz avishay OK... I'm going to say we punt, but if somebody cares enough to write a clean admin interface into this we can look at it
16:31:55 <jgriffith> and it would be this week or early next
16:32:02 <avishay> jgriffith: how far can you punt it? :)
16:32:05 <jgriffith> otherwise it's not something we seem to really "need"
16:32:13 <jgriffith> avishay: depends on how long of a running start I get
16:32:15 <thingee> avishay: +1 I could live w/o it
16:32:16 <avishay> haha
16:32:20 <DuncanT> If somebody really needs it, they've got to pony up good code...
16:32:26 <jgriffith> ok..
16:32:30 <jgriffith> I'm just going to defer it then
16:32:52 <jgriffith> we need to remember to detail the bp better in Juno (I'll forget) :)
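On bswartz's point that volume types can get much of the same effect as a force-host flag: a type's extra specs (e.g. volume_backend_name) are matched against what each backend reports, so an admin-only type effectively pins volumes to one backend. The snippet below is a deliberately simplified model of that matching, assuming exact-match specs only; the real capabilities filter also handles scoped keys and operators.

    def backend_matches(capabilities, extra_specs):
        # A backend is eligible only if it satisfies every extra spec the
        # volume type asks for (exact matches only in this simplified model).
        return all(capabilities.get(k) == v for k, v in extra_specs.items())

    backends = {
        'host1@lvm1': {'volume_backend_name': 'lvm1', 'free_capacity_gb': 500},
        'host2@lvm2': {'volume_backend_name': 'lvm2', 'free_capacity_gb': 900},
    }
    pin_to_lvm1 = {'volume_backend_name': 'lvm1'}  # extra specs of an admin-only type

    eligible = [host for host, caps in backends.items()
                if backend_matches(caps, pin_to_lvm1)]
    print(eligible)  # ['host1@lvm1'] -- only the pinned backend survives filtering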
16:33:10 <avishay> back to local-storage-volume-scheduling?
16:33:16 <thingee> sure
16:33:19 <avishay> i think thingee and DuncanT had comments here?
16:33:51 <thingee> spoke to jgriffith about it. This seems aligned with what vishy was talking about, along with jgriffith talking about brick
16:34:09 <thingee> if we want to help nova in this regard, this is something we have to move towards.
16:34:10 <DuncanT> I'm concerned that the semantics just aren't defined anywhere... they seem to want ephemeral volumes from the commit message, but aren't implementing that
16:34:51 <jgriffith> So it's not what we actually talked about in Portland and want
16:34:54 <jgriffith> but it's a start
16:34:58 <avishay> DuncanT: why ephemeral?
16:35:06 <jgriffith> DuncanT: it's not ephemeral (even if it reads that way)
16:35:17 <jgriffith> DuncanT: it's really about local attached for perf reasons
16:35:24 <jgriffith> DuncanT: that's really it in a nut-shell
16:35:37 <jgriffith> The ability to schedule local disk resources on the compute node for an instance to use
16:35:45 <avishay> does nova support booting a VM on the same host as a cinder volume?
16:35:45 <DuncanT> But what happens when the instance dies? What are the rules for connecting the volume to a new instance?
16:35:46 <jgriffith> instead of san attached
16:35:52 <jgriffith> DuncanT: same as they are today
16:35:58 <jgriffith> DuncanT: It's still a Cinder volume
16:36:16 <jgriffith> avishay: there's no correlation
16:36:27 <jgriffith> avishay: I mean... there's no shared knowledge
16:36:34 <DuncanT> jgriffith: so you /can/ remote attach it afterwards, on any compute host? That's better... maybe just a docs problem then
16:36:35 <jgriffith> avishay: all this patch does is provide that
16:36:46 <jgriffith> DuncanT: Well... no :(
16:37:03 <jgriffith> DuncanT: so remember we have a "block" driver now that's local disk only
16:37:08 <avishay> jgriffith: i meant to ask what DuncanT asked - if you shut down the VM, can you bring another one up to attach to your volume?
16:37:08 <jgriffith> no iscsi, no target etc
16:37:12 <thingee> jgriffith: oh, well then perhaps I'm still not understanding :)
16:37:13 <bswartz> there was a plan to add the so-called "shared knowledge" to one or both schedulers though wasn't there?
16:37:14 <jgriffith> HOWEVER you make an interesting point
16:37:28 <DuncanT> jgriffith: IMO that isn't a cinder volume...
16:37:33 <jgriffith> it would be interesting to extend the abstraction
16:37:46 <jgriffith> treat it more like a real cinder vol
16:38:00 <DuncanT> jgriffith: Or at least we don't have a rich enough interface to express that
16:38:05 <jgriffith> difference is if it's "local" to the node your provider_location and export is just the dev file
16:38:11 <jgriffith> instead of a target
16:38:11 <DuncanT> 'Island' tried that, right?
16:38:30 <jgriffith> DuncanT: I never really figured out what they were trying ;)
16:38:43 <jgriffith> DuncanT: but yes, I think it was along the same lines
16:38:46 <jgriffith> So anyway...
16:38:49 <DuncanT> My problem is that there's nothing in the return of 'cinder list' that tells me which vms I can / can't connect to
16:38:50 <jgriffith> My thoughts on this are:
16:39:01 <jgriffith> Useful features, needs a bit of thought and cleaning
16:39:10 <jgriffith> I'm ok with letting it ride til the end of the week
16:39:25 <jgriffith> if it's not cleaned up and made mo'betta then it gets deferred
16:39:49 <jgriffith> DuncanT: Yeah... to your point
16:39:52 <DuncanT> I'd really like to hear in detail what is supposed to happen after detach
16:39:59 <thingee> DuncanT: the cinder list comment is good. I think you should raise that in the review
16:40:04 <jgriffith> DuncanT: I'd say go back to my suggestion about how to abstract it so it "CAN" have a target assigned and work like any other cinder volume
16:40:17 <jgriffith> thingee: DuncanT I don't want to do that :(
16:40:28 <jgriffith> thingee: DuncanT I'd rather make it more "cinder'ish"
16:40:37 <DuncanT> I agree - make it more cinderish
16:40:39 <jgriffith> So the patch looks different this way
16:40:44 <avishay> jgriffith: +1
16:41:14 <jgriffith> It becomes more of a filter scheduling deal
16:41:24 <bswartz> My understanding of the proposal was to make it like a regular cinder volume with a hint that allowed you to bypass the iscsi layer when the target and initiator would be on the same box
16:41:26 <DuncanT> So the hint applies, but in every other respect except performance, it is a normal cinder volume
16:41:28 <jgriffith> and attach then determines "hey... can I just do a local attach or do I need an export"
16:41:29 <avishay> i think nova also needs a similar way of saying "launch a VM on the same host as this cinder volume"
16:41:47 <jgriffith> DuncanT: for the most part
16:41:50 <DuncanT> The call out to the nova API in the API server still worries me too
16:41:57 <jgriffith> avishay: yeah, it probably needs to go both ways
16:42:01 <DuncanT> But that is an implementation detail
16:42:10 <jgriffith> I don't want to go too deep on this
16:42:25 <jgriffith> I've been going back and forth on the idea for about a year
16:42:34 <jgriffith> this was what we were aiming for with brick
16:42:46 <jgriffith> but that got completely sideways
16:43:21 <bswartz> whatever happened to brick?
16:43:37 <bswartz> is it split out from cinder yet?
16:43:58 <thingee> :)
16:44:23 <DuncanT> I think a discussion about local volumes needs to start with answering the question how cindery do you want them?
16:44:33 <jgriffith> and there's new stuff in the works for cross-project communication and scheduling
16:44:33 <jgriffith> that solves a lot of this problem
16:44:33 <jgriffith> so I hate to get carried away and invest a ton because I think that stuff is going to land in J
16:44:33 <jgriffith> alright... I'll take a look at this later and update the BP and review
16:44:38 <kmartin> bswartz: not yet, WIP
16:44:38 <jgriffith> if we get it great, if we don't we don't
16:44:38 <jgriffith> agreed?
16:44:41 <jgriffith> bswartz: no, I flat out haven't gotten around to it
16:45:03 <avishay> jgriffith: sounds good
16:45:06 <jgriffith> bswartz: and the LVM code kept changing so much this past cycle I didn't feel it was stable enough to break out
16:45:15 <jgriffith> bswartz: It's J-1 now though :)
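To make jgriffith's "local attach vs. export" point above concrete: the idea is that the volume stays a normal Cinder volume, and only the attach path decides whether a target is needed. Below is a rough, self-contained sketch of that decision, assuming provider_location holds the device path for a locally placed volume; the field names mirror Cinder driver conventions, but this is not the proposed patch.

    def initialize_connection(volume, connector):
        # If the instance lives on the same host as the volume, skip the
        # iSCSI target entirely and hand back the raw device path.
        if connector['host'] == volume['host']:
            return {'driver_volume_type': 'local',
                    'data': {'device_path': volume['provider_location']}}
        # Otherwise behave like any other volume and export a target
        # (placeholder values; a real driver would create the export here).
        return {'driver_volume_type': 'iscsi',
                'data': {'target_portal': '192.0.2.10:3260',
                         'target_iqn': 'iqn.2010-10.org.openstack:%s' % volume['id'],
                         'target_lun': 1}}

    vol = {'id': 'vol-1', 'host': 'compute1', 'provider_location': '/dev/vg0/vol-1'}
    print(initialize_connection(vol, {'host': 'compute1'}))  # local attach
    print(initialize_connection(vol, {'host': 'compute2'}))  # needs an export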
16:45:33 <kmartin> jgriffith: timecheck 15 minutes left
16:45:40 <bswartz> jgriffith: so when can we push the nova guys to use it instead of their crappy attach code?
16:45:57 <jgriffith> Ok.. sorry I took all the time up here
16:46:12 <jgriffith> bswartz: I think hemnafk ported most of the initiator/attach stuff already?
16:46:43 <jgriffith> one more
16:46:47 <jgriffith> #link https://blueprints.launchpad.net/cinder/+spec/when-deleting-volume-dd-performance
16:46:52 <jgriffith> decent enough idea
16:47:03 <jgriffith> but it's been stagnant since october
16:47:09 <jgriffith> defer
16:47:11 <jgriffith> IMO
16:47:24 <jgriffith> not to mention, as eharney points out, there are considerations here
16:47:32 <thingee> sure
16:47:37 <kmartin> jgriffith: +1 defer
16:47:41 <DuncanT> If there's no code and nobody offering it, defer
16:48:04 <avishay> even though the BP seems to contain code, there's no patch :)
16:48:34 <bswartz> I think the patch is in the BP -- it's literally 2 lines
16:48:52 <DuncanT> Needs a config option too
16:48:57 <avishay> and unit test
16:48:58 <bswartz> even so eharney's alternative suggestion seems reasonable
16:49:07 <jungleboyj> DuncanT: +2
16:49:14 <jgriffith> I'll look at it later and consider implementing it
16:49:19 <jgriffith> but for now it's off the table
16:49:48 <avishay> that works
16:49:58 <jgriffith> I'll get with eharney on his stuff later
16:50:14 <jgriffith> My stuff is on the way (need one good day of no crisis or not being sick)
16:50:18 <guitarzan> do people use the cfq scheduler on their volume nodes?
16:50:24 <ik__> jgriffith: need any helping hand there? I've not started here yet.
16:50:36 <jgriffith> ik__: reviews would be fantastic :)
16:50:55 <jgriffith> cfq scheduler?
16:51:02 <guitarzan> for ionice on that blueprint
16:51:31 <ik__> jgriffith: even if I'm a novice? :)
16:51:34 <jgriffith> guitarzan: sorry... don't know what you're saying :)
16:51:38 <avishay> guitarzan: i would assume so
16:51:42 <jgriffith> ik__: best way to learn the code is review :)
16:51:49 <jungleboyj> ik__: We will help you learn!
16:51:51 <guitarzan> avishay: I guess if it helped for that person with the blueprint
16:51:57 <jungleboyj> jgriffith: +2
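For anyone following the dd-performance thread above: the blueprint's two-line change is essentially to wrap the volume wipe in ionice so it runs in the idle I/O class, which (per guitarzan's question) only matters when the device sits behind the CFQ scheduler, and which DuncanT wants behind a config option. A standalone sketch of that idea, not the actual Cinder code, might look like:

    import shutil
    import subprocess

    def clear_volume(dev_path, size_mb, use_ionice=True):
        # Zero the device with dd; this wipe is what makes deletes slow.
        cmd = ['dd', 'if=/dev/zero', 'of=%s' % dev_path,
               'bs=1M', 'count=%d' % size_mb, 'oflag=direct']
        # Optionally (config-driven) run it in the idle I/O class so it
        # doesn't starve I/O to other volumes; only effective under CFQ.
        if use_ionice and shutil.which('ionice'):
            cmd = ['ionice', '-c3'] + cmd
        subprocess.check_call(cmd)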
16:52:05 <jgriffith> Ok, so we're about out of time and I hogged the entire meeting
16:52:14 <jgriffith> thingee:
16:52:17 <jgriffith> You had some items
16:52:20 <DuncanT> ik__: -1 is the best review you can provide
16:52:45 <jgriffith> ik__: be critical
16:52:46 <thingee> yes
16:52:54 <jgriffith> not typos etc but in the code quality
16:53:10 <jgriffith> we've been getting bad about writing ugly code lately IMO
16:53:12 <jgriffith> ok...
16:53:16 <jgriffith> thingee... all yours
16:53:34 <kmartin> driver maintainers: please review and update your cert results here: https://wiki.openstack.org/wiki/Cinder/certified-drivers#Most_Recent_Results_for_Icehouse
16:53:44 <thingee> topic change?
16:53:50 <thingee> milestone consideration for drivers
16:54:03 <jgriffith> #topic milestone consideration for drivers
16:54:11 <thingee> #link https://review.openstack.org/#/c/73745
16:54:47 <thingee> I want to propose something written on how we allow new drivers in.
16:55:09 <thingee> to avoid a backlog in milestone 3 when we should be focusing on stability
16:55:15 <thingee> and documentation
16:55:17 <avishay> thingee: +1
16:55:33 <jungleboyj> thingee: +1
16:55:34 <DuncanT> hear hear
16:55:35 <hemna_> +1
16:55:37 <kmartin> thingee: +1
16:56:03 <jgriffith> Think we all agree, and stated this before but never wrote it in stone :)
16:56:04 <thingee> This is being more strict with maintainers, but in return we should be better about getting driver reviews through in milestone 2
16:56:43 <avishay> thingee: no arguments here :)
16:56:59 <jungleboyj> thingee: Yeah, that means we have to be better about tackling the hard reviews.
16:57:10 <DuncanT> What about requiring a cert run for new drivers?
16:57:12 * ameade is curious about the cinder hackathon
16:57:18 <jungleboyj> :-)  Badges for the cores!
16:57:22 <akerr> ameade: +1
16:57:32 <avishay> DuncanT: different topic - see the wiki
16:58:00 <avishay> DuncanT: https://wiki.openstack.org/wiki/Cinder/certified-drivers
16:58:07 <thingee> DuncanT, avishay: you both asked about the cert tests. I have a review for that https://review.openstack.org/#/c/73691/
16:58:21 <thingee> it needs to be more helpful, as pointed out by jgriffith; otherwise it's good
16:58:37 <avishay> thingee: cool
16:59:01 <thingee> so please comment on those two. let me know what wording should be fixed up. I would like to have this settled before J
16:59:09 <jgriffith> thingee: coolio?
16:59:10 <thingee> and finally hackathon
16:59:11 <avishay> thingee: 2 minutes - want to advertise your cinder 3-day super coding thing?
16:59:14 <jgriffith> thingee: next topic
16:59:19 <jgriffith> #topic hackathon
16:59:34 <thingee> so a hangout will probably be it. Unfortunately spots are limited.
16:59:44 <thingee> if you are going to be dedicated, please join the hangout :)
16:59:55 <thingee> I'll post a link to the room
16:59:59 <hemna_> ok
17:00:02 <thingee> topic or whatever for people to join
17:00:25 * hartsocks waves
17:00:30 <avishay> thingee: can you post before monday?  other time zones can start earlier
17:00:31 <thingee> I would really like to see us get through reviews together and finish some stability bugs.
17:00:31 <jgriffith> hartsocks: :)
17:00:37 <jgriffith> we're going we're going
17:00:37 <thingee> yes!
17:00:48 <avishay> bye all!
17:00:49 <thingee> avishay: I'll likely be up late to start
17:00:51 <thingee> ok done!
17:00:54 <jgriffith> :)
17:00:59 <jgriffith> thanks everyone
17:01:03 <thingee> thanks
17:01:04 <jgriffith> clear out for hartsocks
17:01:07 <jgriffith> #endmeeting