16:00:31 <jgriffith> #startmeeting cinder
16:00:32 <openstack> Meeting started Wed Nov  7 16:00:31 2012 UTC.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:33 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:34 <openstack> The meeting name has been set to 'cinder'
16:00:53 <jgriffith> winston-d: rongze_ around?
16:01:10 <jgriffith> rnirmal: creiht
16:01:11 <winston-d> jgriffith, should be, let me check
16:01:15 <rongze_> hi
16:01:22 <rongze_> no need to check
16:01:26 <jgriffith> rongze_: :)
16:01:58 <creiht> howdy
16:02:04 <jgriffith> creiht: hey there!
16:02:19 <j_king> hello
16:02:27 <jgriffith> j_king: howdy
16:02:35 <jgriffith> alright, we're sure to have some stragglers but...
16:02:38 <jgriffith> let's get started
16:02:48 <jgriffith> #topic gate tests
16:03:08 <jgriffith> For those that didn't notice the past few days
16:03:25 <jgriffith> We had a non-deterministic failure in Cinder
16:03:33 <jgriffith> 401 error when talking to the client
16:04:09 <jgriffith> Good news https://review.openstack.org/#/c/15541/
16:04:29 <jgriffith> I went down a few rat holes with our old friend the zero-out on delete :)
16:04:37 <jgriffith> Which brings me to the point
16:04:53 <jgriffith> Currently what's in the code regarding the secure erase:
16:05:09 <jgriffith> 1. A secure_delete FLAG was added defaulting to True
16:05:26 <jgriffith> 2. Gate configs were changed to set the flag to False
16:05:51 <jgriffith> 3. I dropped the dm_mapper(remove) call (this was the cause of the IO errors in kern.log)
16:06:13 <jgriffith> I'm not crazy about leaving the devstack gate configured the way it is right now
16:06:18 <jgriffith> but wanted thoughts from others?
16:06:26 <jgriffith> am I being paranoid?
16:06:28 <winston-d> i'm fine
16:07:07 <jgriffith> The other side of it is I think I'm going to implement that differently anyway
16:07:34 <jgriffith> Either a simple "cp /dev/zero /dev/mapper/d-n" or maybe something more sophisticated
16:07:45 <jgriffith> Like snapshot /dev/mapper/zero
16:07:51 <jgriffith> (on volume create)
16:08:11 <jgriffith> If anybody has any strong opinions/concerns lemme know
16:08:18 <jgriffith> Or if you have a *better* idea
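For context, a minimal sketch of the kind of flag-guarded zero-out being discussed (the helper name and exact dd invocation are illustrative, not the actual Cinder driver code; in a real deployment the dd would run as root via rootwrap):

```python
import subprocess

def clear_volume(device_path, volume_size_gb, secure_delete=True):
    """Overwrite a logical volume with zeros before the LV is removed."""
    if not secure_delete:
        # e.g. the devstack gate config sets the flag to False and skips the wipe
        return
    # Write exactly volume_size_gb worth of zeros in 1 MiB blocks; passing an
    # explicit count avoids dd erroring out with ENOSPC at the end of the device.
    count = volume_size_gb * 1024
    subprocess.check_call([
        'dd', 'if=/dev/zero', 'of=%s' % device_path,
        'bs=1M', 'count=%d' % count, 'conv=fdatasync',
    ])
```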
16:08:51 <jgriffith> #topic blueprint targeting
16:09:13 <jgriffith> I've been going through the bp's: https://blueprints.launchpad.net/cinder
16:09:23 <jgriffith> talked with rongze_ and winston-d last night on a couple
16:09:40 <jgriffith> but wanted to get some more feedback on the Grizzly ones
16:10:12 <jgriffith> I'm proposing dropping shared-volume https://blueprints.launchpad.net/cinder/+spec/shared-volume
16:10:35 <winston-d> agree
16:10:41 <j_king> indeed
16:10:43 <jgriffith> It would be a nice feature but spanning it across multiple compute nodes would be a problem
16:11:01 <jgriffith> particularly since a lot of targets don't support multi-initiator
16:11:12 <jdurgin1> jgriffith: it can actually be very useful for sharing a large data set among many nodes (read-only)
16:11:12 <jgriffith> Ok... so if no objections, I'm going to drop that one
16:11:24 <jgriffith> jdurgin1: agreed, completely
16:11:39 <jgriffith> jdurgin1: but for the iscsi case it's a bit of an issue to implement
16:12:27 <jdurgin1> I don't mind dropping it for grizzly if no one's interested in it
16:12:48 <jgriffith> jdurgin1: I'd like to see it, just priority wise, not sure
16:12:58 <jdurgin1> yeah, that makes sense
16:13:05 <jgriffith> jdurgin1: So I'll drop it from Grizz, and if we can get it great
16:13:08 <creiht> jgriffith: yeah I'm fine with that
16:13:15 <jgriffith> cool
16:13:38 <rongze_> I agree
16:13:42 <jgriffith> The hard one: https://blueprints.launchpad.net/cinder/+spec/efficient-vm-boot-from-volume
16:14:25 <jgriffith> I'd really like to get something concrete on this one
16:14:47 <jgriffith> the image read/write features get us part way there
16:15:09 <jgriffith> but we need to put some thought into how we can deal with the whole image transfer/caching etc
16:15:20 <jgriffith> I'm thinking Island may help with this?
16:15:51 <rongze_> Does it need to take a snapshot?
16:15:53 <dtynan> we did some stuff on this for QCOW bootable volumes, but it's still not particularly fast.
16:16:06 <creiht> jgriffith: how much of that is backend dependent?
16:16:25 <jgriffith> creiht: That's kind of a problem actually
16:16:34 <jdurgin1> I'm still confused about what this blueprint is proposing to implement
16:16:39 <winston-d> it depends on glance as well
16:16:52 <jgriffith> jdurgin1: yeah, I think the first step is to clean that up
16:16:56 <winston-d> jdurgin1, me too
16:17:02 <jgriffith> jdurgin1: The way I've interpreted it is:
16:17:28 <jgriffith> Allows you to use the same back-end for glance as for your instance_storage
16:17:28 <winston-d> Vincent is not here
16:17:54 <jgriffith> so if a back-end can do things like fast cloning etc etc you can take advantage of it
16:18:21 <jgriffith> but there's some other things that folks brought up at the summit that they'd like to see
16:18:41 <jgriffith> Unfortunately NONE of them really articulated the problem they were trying to solve
16:18:56 <creiht> jgriffith: also, aren't there similar issues for local storage?
16:19:06 <jdurgin1> perhaps someone could write up a specification for that blueprint?
16:19:15 <winston-d> creiht, i think so
16:19:17 <jgriffith> creiht: which issues?
16:19:29 <jgriffith> You mean the cross node problem?
16:19:45 <jgriffith> Or you mean solve it for local storage as well?
16:19:46 <creiht> the having to load images slowly to boot an instance
16:19:47 <winston-d> no, compute node have to copy image from glance to local disk
16:19:53 <jgriffith> creiht: yes
16:20:05 <jgriffith> creiht: sorry, was just using the back-ends as an example
16:20:38 <jgriffith> creiht: That's actually the case that received the most attention at the summit
16:20:48 <creiht> yeah
16:21:02 <jgriffith> creiht: and that's where i wonder if Island might have some ideas we can use
16:21:08 <creiht> cool
16:21:27 <jgriffith> sounds like we're all in the same boat on this blueprint.... needs some detail/direction
16:21:36 <winston-d> how did you do it in the solidfire presentation? i remember you were using solidfire for instance storage/glance as well as cinder back-end
16:21:48 <jgriffith> I'll see what I can do for that, and I think I'll end up targeting G3
16:22:04 <rongze_> cool
16:22:12 <jgriffith> winston-d: a little bit of hacking :)
16:22:22 <jgriffith> winston-d: It wouldn't fit the *general* case though
16:22:33 <jgriffith> winston-d: It was specific for us
16:22:37 <winston-d> jgriffith, ok
16:23:03 <jgriffith> winston-d: Really we had the same problem with sucking images out of glance etc
16:23:25 <jgriffith> winston-d: But we use an SF volume for instance_path
16:23:49 <jgriffith> winston-d: our internals then do some cool stuff to optimize... but anyway
16:24:04 <winston-d> as long as nova/cinder has to talk to glance via the API to get images, there should be some sort of new API to allow nova/cinder to figure out what storage glance is using.
16:24:11 <jgriffith> Ok, I'll work on fleshing that bp out a bit
16:24:35 <jdurgin1> winston-d: that was added in Folsom, it's how the rbd cloning to volume works in Folsom
16:24:51 <jgriffith> jdurgin1: +1
16:25:10 <jgriffith> so in my view that was a first step
16:25:18 <winston-d> jdurgin1, sorry i missed that. mind giving me a pointer?
16:25:28 <jgriffith> I think we can exploit that going forward
16:25:39 <winston-d> jgriffith, yes, please
16:26:44 <jgriffith> The only other two that we should probably talk about
16:26:53 <jdurgin1> #link https://blueprints.launchpad.net/glance/+spec/api-v2-store-access
16:26:55 <jgriffith> iscsi-chap and multi-backend
16:27:05 <jdurgin1> #link https://blueprints.launchpad.net/cinder/+spec/effecient-volumes-from-images
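For context, a minimal sketch of how a driver could use the image location exposed by the Folsom glance v2 work (api-v2-store-access) to pick between a fast clone and a full download; the function name here is hypothetical, and 'direct_url' is only populated when glance is configured with show_image_direct_url=True:

```python
def pick_image_transfer_strategy(image_meta, backend_scheme='rbd'):
    """Return 'clone' when the image already lives on the volume back-end,
    otherwise 'download' (stream it through the glance API)."""
    location = image_meta.get('direct_url') or ''
    if location.startswith(backend_scheme + '://'):
        return 'clone'
    return 'download'

# e.g. an rbd-backed glance image on the same cluster as the cinder back-end:
meta = {'id': 'abc123', 'direct_url': 'rbd://fsid/images/abc123/snap'}
assert pick_image_transfer_strategy(meta) == 'clone'
```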
16:27:44 <dtynan> I'd also like to mention the Glance meta one.
16:27:45 <dtynan> https://blueprints.launchpad.net/cinder/+spec/retain-glance-metadata-for-billing
16:28:12 <jgriffith> dtynan: ahh yes
16:28:16 <dtynan> we've implemented that in our version of diablo and would like to get it upstream
16:28:25 <jgriffith> dtynan: that's great with me
16:28:36 <dtynan> we're targeting G1... ;)
16:28:42 <jgriffith> dtynan: if you can do it that's cool
16:28:51 <jgriffith> dtynan: alright, I'll target it and assign to you
16:29:01 <winston-d> jdurgin1, thx!
16:29:02 <jgriffith> dtynan: G1 is like two weeks away, sure you can hit that?
16:29:18 <dtynan> well....
16:29:37 <jgriffith> dtynan: hmmmmm
16:29:44 <dtynan> we've been busy getting it into production here @ HP
16:29:54 <jgriffith> dtynan: Yup, we're all busy :)
16:29:58 <dtynan> so we know what the issues are...
16:30:07 <jgriffith> dtynan: This is why I left it sitting
16:30:14 <jgriffith> dtynan: So it's your call, and what you can commit to
16:30:20 <dtynan> but from an HP-diablo pov, it'll be in production imminently.
16:30:28 <dtynan> yeah, nothing like a challenge, eh?
16:30:30 <jgriffith> dtynan: My concern is that it just sits there forever
16:30:44 <jgriffith> diablo haha
16:30:47 <dtynan> :)
16:31:07 <jgriffith> dtynan: so tell me what you want to commit to?
16:31:13 <dtynan> yes.
16:31:35 <jgriffith> dtynan: dtynan yes?
16:31:48 <jgriffith> dtynan: Is that yes for G1, or yes for sitting forever?
16:31:49 <jgriffith> :)
16:32:04 <dtynan> yes for G1.
16:32:17 <jgriffith> Okie Dokie... you're officially on the hook
16:32:49 <jgriffith> On to iscsi-chap
16:32:57 <jgriffith> I propose ....
16:33:03 <jgriffith> wait, did I already bring that up
16:33:05 <jgriffith> think I did
16:33:32 <jgriffith> No... I didn't
16:33:40 <jgriffith> I propose that one is dropped
16:33:55 <jgriffith> It's been sitting for a long time and the reality is the back-ends implement it
16:34:04 <jgriffith> Haven't heard too much yelling for it
16:34:30 <jgriffith> if vincent comes back and works on it fine, but we should remove the targetting
16:34:33 <jgriffith> agree?
16:34:45 <j_king> sure
16:35:09 <jdurgin1> fine with me
16:35:13 <winston-d> agree
16:35:17 <jgriffith> The other big items are:
16:35:50 <jgriffith> FibreChannel, multi-back-end and the API general bucket
16:36:12 <jgriffith> Any of the folks from HP, IBM or Brocade here today?
16:36:29 <jgriffith> Ok... multi-backend
16:36:39 <jgriffith> we need to decide on how we want this
16:36:50 <jgriffith> rnirmal: provided a working implementation
16:37:09 <jgriffith> rnirmal: do we go that route or do we do multiple services/processes on a single box
16:37:18 <dtynan> HP.
16:37:30 <jgriffith> dtynan: yeah, wrong HP group :)
16:37:38 <dtynan> :)
16:38:06 <jgriffith> as a refresher: https://review.openstack.org/#/c/11192/
16:38:13 <jgriffith> That's the proposal from rnirmal
16:38:14 <winston-d> jgriffith, i prefer multiple services/processes
16:38:52 <jgriffith> winston-d: That's the direction I'm leaning too
16:39:05 <jdurgin1> that seems a lot easier to configure
16:39:06 <jgriffith> winston-d: It solves some concerns that were raised by others
16:39:17 <jgriffith> and lets us use the volume_type scheduler :)
16:39:24 <j_king> as long as there isn't a high level of co-ordination required, I'd be up for multiple processes
16:39:55 <jgriffith> j_king: My initial thought is something like multiple managers, each running in its own process, one per back-end
16:40:43 <jgriffith> use all the same concepts we use today for multiple volume node configs
16:41:02 <jgriffith> creiht: thoughts?
16:41:37 <j_king> sounds good. I generally prefer the simpler implementation.
16:42:20 <creiht> jgriffith: I don't have a strong opinion here
16:42:22 <jgriffith> my keyboard is going nuts here
16:42:30 <jgriffith> creiht: fair enough
16:42:45 <jgriffith> Ok, let's proceed with multi-process
16:42:55 <rnirmal> jgriffith: sorry came in late
16:42:55 <creiht> but your proposal seems reasonable
16:43:07 <jgriffith> rnirmal: No prob
16:43:25 <jgriffith> rnirmal: We were talking about the multi-back-end implementation
16:43:45 <rnirmal> so multi process within the same manager ?
16:44:02 <jgriffith> rnirmal: I was actually thinking multiple managers
16:44:02 <rnirmal> or multiple managers ?
16:44:09 <winston-d> multi processes in multi managers
16:44:19 <rnirmal> jgriffith: well we don't have to do anything for it
16:44:22 <winston-d> each manager has its own process
16:44:25 <jgriffith> :)
16:44:38 <rnirmal> and the whole reason for the multi-backend was to get away from having to run multiple managers
16:44:45 <rnirmal> winston-d: that's how it is right now
16:45:01 <jgriffith> rnirmal: multiple managers on the same node
16:45:17 <winston-d> rnirmal, i thought your concern was too many volume service nodes to manage
16:46:11 <rnirmal> winston-d: also the services, right... 20 or so init scripts, one for each?
16:46:33 <rnirmal> I'd prefer a single configuration to load all the backends... if we do multiple processes within the same manager... that would be fine too
16:46:45 <winston-d> rnirmal, that can be changed, adding one new binary in bin/ can solve that problem.
16:46:56 <j_king> manager could just load a backend in each process
16:47:00 <jgriffith> TBH if you're looking at 20 back-ends I'd rather have the init scripts than try to get the config files correct
16:47:06 <j_king> just have to config the manager
16:47:57 <rnirmal> jgriffith: but why is a single config harder than 20 configs?
16:48:17 <jgriffith> rnirmal: Why is having an init for each manager harder than 20 configs?
16:48:43 <rnirmal> jgriffith: :) 20 log files to parse... 20 everything
16:48:43 <jgriffith> rnirmal: just an example, maybe a poor one
16:48:58 <jgriffith> rnirmal: hmmm... ok, you've got me there :)
16:49:00 <jdurgin1> separate config files is generally easier for config management to deal with
16:49:27 <winston-d> the complexity comes with the new 'backend' layer in the single-manager design.
16:49:44 <winston-d> i'd rather have separate log files, IMHO
16:50:01 <jdurgin1> winston-d: +1
16:50:11 <rnirmal> winston-d: with what I proposed you can still do that
16:50:18 <rnirmal> it gives the option to do both
16:50:21 <rongze_> winston-d: +1
16:50:28 <j_king> winston-d: you still could with a single manager that manages several backend processes
16:50:32 <rnirmal> you are not tied to any particular way
16:50:35 <j_king> the process just logs
16:50:52 <winston-d> rnirmal, it's true, and i think the multi-manager design can also combine multiple configuration files into one
16:51:08 <winston-d> without adding the complexity of introducing a new 'back-end' layer.
16:51:44 <rnirmal> winston-d: how would you distinguish 20 backends in the current design -> 'host' ?
16:51:49 <rnirmal> that's bad as is
16:52:03 <rnirmal> irrespective we still need the concept of a 'back-end'
16:52:13 <winston-d> rnirmal, via volume-topic
16:52:14 <rnirmal> merely more than just a driver...
16:52:22 <rnirmal> winston-d: how?
16:52:43 <rnirmal> don't tell me to run on multiple hosts
16:52:46 <rnirmal> that's how it is right now
16:52:50 <winston-d> as long as each back-end instance has its own unique volume-topic, the scheduler is able to find it.
16:53:29 <rnirmal> how is that different from using a 'back-end'?
16:53:43 <jgriffith> Ok, seems we have a bit to work through on this :)
16:53:46 <rnirmal> the scheduler either needs to know which 'volume-topic' or which 'backend'
16:54:02 <rnirmal> which 'backend' seems clearer than which 'volume-topic'
16:54:19 <jgriffith> rnirmal: why?
16:54:34 <rnirmal> just canonically
16:54:46 <jgriffith> meh... maybe
16:54:56 <jgriffith> So here's the thing IMO
16:55:06 <jgriffith> There are advantages and disadvantages to both
16:55:22 <jgriffith> What I'm trying to figure out is which is going to be more robust and supportable
16:55:34 <jgriffith> with an emphasis on robust
16:56:16 <winston-d> and able-to-scale
16:56:51 <jgriffith> winston-d: true, but I think both can scale, just not as clean maybe
16:57:12 <winston-d> right
16:57:25 <rnirmal> jgriffith: I'm all for a better solution if we have one :)
16:57:47 <jgriffith> rnirmal: well the problem is I think your solution is great
16:58:11 <jgriffith> rnirmal: that being said I want to look at all possibilities
16:58:28 <jgriffith> So I think what needs to happen is we need to have some code to compare
16:59:01 <jgriffith> so maybe an implementation using multiple managers that at least *functions* so we can look at the two together
16:59:07 <jgriffith> see what comes out of it?
16:59:20 <jgriffith> seem reasonable?
16:59:24 <kmartin> jgriffith: sorry, I'm here an hour late due to Daylight Saving, apparently.
16:59:28 <rnirmal> jgriffith: sure.
16:59:32 <jgriffith> kmartin: you and a few others :)
16:59:39 <kmartin> ;)
16:59:53 <rnirmal> kmartin: got caught in daylight savings as well :)
17:00:23 <thingee> ditto
17:00:39 <jgriffith> anyway, I think there's enough interest and merit in the multi-manager approach that we should at least pursue it a bit
17:00:57 <rnirmal> jgriffith: agreed
17:01:22 <jgriffith> ok, and we always have the first implementation that we can go with when it comes down to it
17:01:31 <jgriffith> alright... pheww
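For context, a rough sketch of the "multiple managers, one process each" option discussed above: a single launcher reads one config file with a section per back-end and forks a manager per section, each listening on its own volume topic so the volume_type scheduler can target it. The launcher, section names, and option names here are invented for illustration, not an agreed design:

```python
import configparser
import multiprocessing

def run_manager(name, opts):
    # One manager per back-end, each with its own message topic so the
    # volume_type scheduler can address it directly.
    topic = 'volume.%s' % name
    driver = opts.get('volume_driver', 'unset')
    print('manager %s: driver=%s topic=%s' % (name, driver, topic))
    # ... start the RPC service here and wait for requests ...

def launch(config_path):
    cfg = configparser.ConfigParser()
    cfg.read(config_path)
    procs = []
    for name in cfg.sections():          # one section == one back-end
        p = multiprocessing.Process(target=run_manager,
                                    args=(name, dict(cfg[name])))
        p.start()
        procs.append(p)
    for p in procs:
        p.join()

if __name__ == '__main__':
    launch('cinder-backends.conf')       # hypothetical single config file
```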
17:01:57 <jgriffith> Ummm... I wanted to save some time to see if folks had bp's they haven't gotten around to filing yet
17:02:14 * jgriffith is looking at creiht
17:02:46 <kmartin> As far as the FC blueprint goes: the detailed version has made its way through HP legal and is now being reviewed by the entire team; we meet again tomorrow morning.
17:02:56 <jgriffith> thingee: btw, wanted to see what your blockers are
17:03:18 <jgriffith> kmartin: wow.. bureaucracy at its finest :)
17:03:33 <kmartin> jgriffith: yep
17:03:47 <thingee> jgriffith: my two bps, list bootable vols and clearer error messages, are blocked by my apiv2 bp
17:03:56 <thingee> should make that clearer on the bps themselves
17:04:01 <jgriffith> thingee: ahhh
17:04:22 <jgriffith> thingee: yeah, we should mark them as such
17:04:36 <thingee> for apiv2 I've posted my github branch which basically has middleware separated out and other api common code. tests pass.
17:04:44 <jgriffith> thingee: and maybe make dependencies instead of listing as blocked
17:04:46 <thingee> it's all in the bp
17:04:56 <thingee> jgriffith: sounds good
17:05:07 <jgriffith> thingee: yeah, was looking at that last night, looks good so far
17:05:14 <jgriffith> thingee: Nice work on that for a week-end :)
17:05:37 <jgriffith> thingee: unless you were just hiding it prior to that :)
17:06:06 <thingee> jgriffith: got about 26 tests failing atm for routing stuff. This is a bigger change than I was hoping for, and a bit of a learning curve for me, but I've learned a lot about paste over the weekend and just gotta get things organized to model glance correctly
17:06:27 <jgriffith> thingee: I hear ya
17:06:28 <ollie1> @jgriffith I believe there are some folks here at HP planning to submit a bp around volume backup to the swift object store
17:06:52 <jgriffith> thingee: make sure you get the fixes for the monkey patching before you try and run with real endpoints :)
17:07:19 <dtynan> meant to mention that volume-backup BP... ;)
17:07:27 <thingee> jgriffith: I hear that
17:07:36 <jgriffith> dtynan: ollie1 noted
17:08:01 <jgriffith> keep in mind, there's some code in island that might be leveraged for this as well
17:08:24 <jgriffith> and the Lunar folks have some interest so they should be updated as well
17:08:35 <dtynan> thingee we did some stuff for list-bootable-volumes but it was through an hp-specific API hook (for now). I'd be interested in reading your BP. (and will do..)
17:09:18 <thingee> dtynan: I grabbed it because it seemed doable for me, but I'll be honest, I haven't started yet with some of the other stuff on my plate.
17:09:23 <ollie1> @jgriffith I'll pass that on
17:10:24 <thingee> I'll be focusing on apiv2 this week. would love to get to it next week if all goes according to plan (heh).
17:10:28 <dtynan> thingee: cool. I'll take a look. we figured an hp-specific API was the best (short term) way of getting the functionality before it's in the mainstream.
17:10:49 <jgriffith> thingee: If G1 is possible that would be ideal
17:11:01 <jgriffith> thingee: we can of course add to it after that, but
17:11:32 <jgriffith> thingee: we should have that in for things to build off of as you pointed out with your other BP's
17:12:00 <jgriffith> kmartin: how are you guys feeling about the FC work?
17:12:19 <jgriffith> kmartin: I know you're meeting tomorrow, but what's the general feel so far?
17:12:24 <thingee> jgriffith: definitely! I'll be focusing hard on making that happen.
17:12:34 <jgriffith> thingee: great, thanks!
17:12:42 <kmartin> jgriffith: the BP looks good and we're shooting for G2
17:12:49 <jgriffith> kmartin: G2?  Really?
17:12:53 <jgriffith> kmartin: Awesome!
17:13:05 <kmartin> that's the plan, though it may slip into G3
17:13:12 <jgriffith> Ok... we're over, anything else?
17:13:17 <jgriffith> kmartin: haha
17:13:24 <jgriffith> I'll target G3 :)
17:13:42 <ollie1> fyi we're also working on the bp to extend volume usage stats: https://blueprints.launchpad.net/cinder/+spec/volume-usage-metering
17:13:46 <kmartin> sounds safer :)
17:14:00 <winston-d> if you guys have time, please take a look at here: https://etherpad.openstack.org/cinder-backend-capability-report
17:14:49 <jgriffith> ollie1: can you update the bp?
17:14:51 <winston-d> that's for back-end to report capability/capacity for scheduling purpose.
17:14:59 <jgriffith> ollie1: and what is your timeline?
17:15:18 <jgriffith> winston-d: thanks for reminding me :)
17:15:23 <ollie1> I'll check that out with the person on it and update the bp
17:15:31 <jgriffith> winston-d: looks like a good start
17:15:46 <dtynan> the guy who did the volume-usage-metering code is on vacation this week.
17:16:05 <jgriffith> ollie1: ok, just FYI I'm going to hack up blueprints tonight with prejudice :)
17:16:30 <jgriffith> dtynan: ahhh
17:16:44 <winston-d> those are just some examples; we should have mandatory ones and optional ones to report.
17:16:44 <jgriffith> alright, well no big deal.  We never stop accepting blue-prints
17:16:47 <dtynan> we'll round him up next week, and get him busy on the upstream work :)
17:16:53 <dtynan> it's implemented internally.
17:16:55 <jgriffith> winston-d: yes, I'm with ya
17:17:13 <jgriffith> winston-d: it's a good model to start and see what we're doing though
17:17:32 <jgriffith> alright.... i should probably wrap this up
17:17:41 <jdurgin1> winston-d: just a heads up, I expect most of those to not make sense (and thus be optional) from a ceph perspective
17:18:07 <jgriffith> jdurgin1: I think we'll tweak this a bit to make more sense, but...
17:18:29 <jgriffith> jdurgin1: I was thinking that for any we say are mandatory but that don't apply, just set None?
17:18:45 <jgriffith> jdurgin1: so *kinda* mandatory
17:18:54 <winston-d> jgriffith, jdurgin1, make sense
17:19:11 <jdurgin1> jgriffith: might as well make them optional imo, but we can discuss later
17:19:13 <jgriffith> jdurgin1: The mandatory part is just that you can be queried and return the field
17:19:50 <jgriffith> jdurgin1: so maybe better language is that you have to implement the report_capabilities method
17:20:02 <winston-d> mandatory is the minimum set of capabilities that allows the built-in filters to work, i think
17:20:04 <jgriffith> jdurgin1: what you put in there has guidelines if you support it
17:20:23 <jgriffith> we can hash it out
17:21:07 <jgriffith> jdurgin1: but I think there's value in the function, and even the basics like driver version etc
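For context, a sketch of what a per-back-end capability report might look like; the field names follow the etherpad discussion but are illustrative rather than a final schema:

```python
def report_capabilities():
    # Mandatory fields: the minimum the built-in scheduler filters would need.
    # Anything that doesn't apply to a given back-end (e.g. a Ceph pool) could
    # simply be reported as None, per the discussion above.
    return {
        'driver_version': '1.0',
        'storage_protocol': 'iSCSI',
        'total_capacity_gb': 1024,
        'free_capacity_gb': 512,
        # optional / back-end specific extras
        'QoS_support': False,
        'compression': None,
    }
```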
17:21:17 <jdurgin1> that makes more sense, but we're over a bit already
17:21:23 <jgriffith> we're way over
17:21:25 <jgriffith> alright
17:21:30 <jgriffith> thanks everyone
17:21:38 <jgriffith> don't forget DST next week :)
17:21:50 <jgriffith> #endmeeting