16:01:17 <jgriffith> #startmeeting cinder
16:01:18 * DuncanT waves
16:01:19 <openstack> Meeting started Wed Jul  2 16:01:17 2014 UTC and is due to finish in 60 minutes.  The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:23 <openstack> The meeting name has been set to 'cinder'
16:01:24 <avishay> hello
16:01:24 <rushiagr> o/
16:01:28 <mtanino> hello
16:01:28 <xyang1> hi
16:01:32 <kmartin> hi
16:01:35 <jgriffith> hey everyone
16:01:47 <tbarron> hello
16:01:56 <jgriffith> I'm going to hijack the agenda for a second cuz I can :)
16:01:57 <DuncanT> Agenda as usual at https://wiki.openstack.org/wiki/CinderMeetings
16:02:02 <jungleboyj> Hello.
16:02:04 <jgriffith> #topic blueprints
16:02:22 <jgriffith> ok... so pop quiz: how many BPs were targeted for J1?
16:02:31 <jgriffith> anyone?
16:02:40 <rushiagr> link plz? :P
16:02:45 <avishay> feels like a trick question
16:02:47 <jgriffith> rushiagr: no link... it's a quiz
16:02:53 <jgriffith> avishay: nahh
16:02:56 <guitarzan> all of them?
16:02:59 <jgriffith> Ok... so it was like 15 at one point
16:03:01 <asselin> hi
16:03:03 <jgriffith> 15!
16:03:08 <jgriffith> Now... how many landed in J1?
16:03:14 <rushiagr> 1?
16:03:17 <mtanino> 2?
16:03:18 <jgriffith> hint.. rhymes with hero
16:03:20 <kmartin> 3
16:03:24 <jungleboyj> Zero.
16:03:27 <avishay> shmero?
16:03:31 <avishay> damn
16:03:34 <xyang1> 0
16:03:36 <zhithuang> :)
16:03:39 <rushiagr> :/
16:03:46 <jgriffith> VERY VERY bad :(
16:03:48 <jgriffith> Soo.....
16:04:04 <jgriffith> https://launchpad.net/cinder/+milestone/juno-2
16:04:15 <jgriffith> Here we are a couple weeks out from J2
16:04:17 <stevemac> hi john
16:04:19 <stevemac> hi guys
16:04:27 <jgriffith> 16 BPs targeted
16:04:38 <jgriffith> and a bunch of them in "unknown" status
16:04:42 <xyang1> we have 5 targeting J-2
16:04:44 <xyang1> not listed there
16:04:46 <jgriffith> we have a problem here
16:04:50 <xyang1> 4 drivers and CG
16:04:53 <jgriffith> xyang1: that makes the problem worse!
16:05:05 <mtanino> I have one BP.
16:05:12 <jgriffith> My point here is....
16:05:17 <xyang1> I added J-2, but it was removed by ttx
16:05:23 <jungleboyj> jgriffith: I will get some updates to mine out there.
16:05:24 <jgriffith> mtanino: yes, and your code is up, thanks
16:05:29 <jgriffith> jungleboyj: thanks
16:05:33 <xyang1> jgriffith: can you target them?
16:05:46 <jgriffith> I need everyone to please please update what you've signed up to work on
16:05:54 <ttx> jgriffith needs to set a priority for it to stick to the milestone
16:06:16 <avishay> ttx is everywhere :)
16:06:21 <jgriffith> xyang1: I'll get to yours, but they're on the bottom of my list
16:06:36 <rushiagr> I can see some drivers regularly getting targeted to the next release, as the progress on them is slow. We can't do anything with them, can we?
16:06:37 <ttx> avishay: say three times my name and I appear
16:06:38 <jgriffith> harlowja_away: when you come back... read my comments above
16:06:45 <jgriffith> jungleboyj: you're on the hook
16:06:46 <anteaya> o/
16:06:47 <avishay> ttx: :)
16:06:49 <jgriffith> today please ;)
16:07:08 <stevemac> jgriffith: we have 3 BPs. How do we get them in?
16:07:10 <jungleboyj> jgriffith: Indeed.
16:07:24 <jgriffith> rushiagr: do you know anything about NetApp's refactor from Alex?
16:07:33 <avishay> jgriffith: so what do you think the problem is? Lack of code coming in, lack of reviews, or review effort not being focused?
16:07:41 <jgriffith> stevemac: hold on... I'll get to that next
16:07:47 <rushiagr> jgriffith: no. I haven't been with NetApp for more than a year.
16:07:59 <jgriffith> rushiagr: oops... sorry I forgot
16:08:05 <rushiagr> jgriffith: np
16:08:22 <stevemac> jgriffith: ok. thanks
16:08:24 <jgriffith> If you sign up for something, please update
16:08:41 <jgriffith> and please let me know if your plans have changed and you're not going to be working on it
16:08:57 <jgriffith> sadly many of the folks these days aren't on IRC or attending meetings, which makes it difficult
16:09:03 <jgriffith> but a warning...
16:09:20 <jgriffith> I'm going to start punting BP's that don't seem to be making any progress or that people don't update me on
16:09:32 <thingee> jgriffith: +1
16:09:33 <jgriffith> I'm going to start doing a weekly house cleaning
16:09:54 <tbarron> jgriffith: netapp in US has mandatory vacation :-) this week
16:09:57 <jgriffith> so if you really care about your BP you need to either make progress or communicate as to why you're not
16:10:04 <jgriffith> tbarron: how nice for them
16:10:08 <Arkady_Kanevsky> are all these BPs reviewed?
16:10:18 <jgriffith> OpenStack is global and doesn't take vacation
16:10:19 <xyang1> jgriffith: should I ping you after the meeting about our BPs?
16:10:19 <tbarron> jgriffith: I'm new to netapp openstack and am lurking :-)
16:10:20 <jgriffith> just sayin :)
16:10:21 <stevemac> tbarron: good for you guys
16:10:37 <jgriffith> xyang1: sure... but I'm on those don't worry
16:10:39 <jgriffith> Ok
16:10:47 <jgriffith> That's my rant for the morning
16:10:54 <jgriffith> and I'll stop whining now :)
16:11:01 <jgriffith> forewarning
16:11:06 <xyang1> jgriffith: thanks
16:11:08 <jgriffith> I'm going to turn in to a bit of a jerk in the coming weeks
16:11:09 * jungleboyj is on vacation right now.  :-)
16:11:21 <jgriffith> Ok... now back to our regularly scheduled program
16:11:44 <jgriffith> #topic batching code cleanup
16:11:46 <Arkady_Kanevsky> John, do we need to do something special for BP for new/updated drivers?
16:11:48 <jgriffith> DuncanT: you're up
16:12:00 <jgriffith> Arkady_Kanevsky: nope, all I need for those is a BP
16:12:11 <DuncanT> Ok, so my point is simple, we keep getting lots of mechanical code cleanups
16:12:13 <jgriffith> I'll get around to prioritizing etc
16:12:16 <Arkady_Kanevsky> +1
16:12:16 <jgriffith> submit your patch
16:12:32 <jgriffith> DuncanT: I like the idea
16:12:33 <DuncanT> Not without value, but they cause merge conflicts for actual features and make those far harder than they need to be
16:13:03 <DuncanT> I'm wondering if there's some tag or something we can add to make it easy to find these again at merge time?
16:13:36 <anteaya> if you all use the same topic you can do a gerrit search on it
16:13:44 <jgriffith> DuncanT: we could surely create a tag
16:13:49 <anteaya> code-cleanup might be one
16:14:13 <jgriffith> maybe something more descriptive and one level deeper
16:14:18 <anteaya> as core you can change the topic of those two patches, I believe
16:14:22 <jgriffith> pep8-hacking-fixes
16:14:28 <jgriffith> py3-updates
16:14:41 <DuncanT> If we can change the topic, then we're golden
16:14:46 <anteaya> then you can find both with one query
16:14:48 <DuncanT> Anybody not like the idea?
16:14:55 <anteaya> I *think* cores can change topics
16:15:01 <anteaya> let me know if I'm wrong
16:15:21 <DuncanT> We can sort out the mechanics outside of this meeting; I just want to know if anybody hates it?
16:15:27 <jgriffith> If I can I don't know how
16:15:29 <joa> nope, sounds fine.
16:15:36 <avishay> sounds ok to me
16:15:41 <asselin> sounds good
16:15:49 <anteaya> jgriffith: we can try after the meeting to see
16:15:54 <jgriffith> DuncanT: I think we have consensus
16:15:57 <jgriffith> anteaya: cool
16:16:00 <DuncanT> Ok, sold. I'll put a note on the mailing list, sort out the details and we can start batching
16:16:07 <DuncanT> I'm done
16:16:10 <jgriffith> DuncanT: awesome!  Nice work
16:16:16 <jungleboyj> jgriffith: I am ok with the plan.
16:16:27 <thingee> jgriffith, anteaya: "cherry-pick to" button I think would do what you want
16:16:32 <jgriffith> #topic 3'rd party ci naming
16:16:37 <jgriffith> asselin: you're on deck
16:16:45 <anteaya> no I don't think cherry pick is it, just topic changes should work
16:16:49 <asselin> so I posted a message on the ml: http://lists.openstack.org/pipermail/openstack-dev/2014-July/039103.html
16:16:52 <anteaya> not changing patch or parents
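To make the batching workflow above concrete: submitters can set a shared Gerrit topic on a cleanup patch (git-review supports a -t/--topic option), and reviewers can then pull every open cleanup change back up in a single query against Gerrit's REST API at merge time. A minimal Python sketch, assuming the requests library is installed; the "code-cleanup" topic name is only illustrative here, not the tag DuncanT ultimately announces on the mailing list.

    import json
    import requests

    GERRIT = "https://review.openstack.org"

    def cleanup_changes(topic="code-cleanup", project="openstack/cinder"):
        # Gerrit prefixes its JSON responses with ")]}'" as an XSSI guard,
        # so drop the first line before parsing.
        query = "status:open project:%s topic:%s" % (project, topic)
        resp = requests.get("%s/changes/" % GERRIT, params={"q": query})
        resp.raise_for_status()
        return json.loads(resp.text.split("\n", 1)[1])

    if __name__ == "__main__":
        for change in cleanup_changes():
            print("%s %s" % (change["_number"], change["subject"]))

The same query string ("status:open project:openstack/cinder topic:code-cleanup") also works directly in the Gerrit web UI search box.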
16:17:18 <asselin> it's a proposal to have a dedicated ci system for each vendor to do cinder-mandated tests.
16:17:41 <asselin> as a way to isolate them from other unofficial tests
16:18:07 <DuncanT> asselin: Some vendors will need multiple, since teams can be totally disjoint between products. Other than that, makes sense to me
16:18:10 <asselin> so that reviewers can quickly know what the +1 and -2 mean
16:18:17 <joa> but we're not the only project which might require "official" tests right ?
16:18:29 <asselin> DuncanT, yes they would need multiple
16:18:30 <xyang1> we have 4
16:18:47 <xyang1> emc-vnx-ci, emc-vmax-ci, emc-vipr-ci, emc-xio-ci
16:18:53 <asselin> ok I see...4 vendor-cinder-ci accounts
16:19:00 <asselin> b/c they're 4 different teams
16:19:03 <joa> Company-[Team or product-]ci ?
16:19:04 <jgriffith> honestly I'd sort of like to go back to my original proposal for all of this that I made back at the summit... but I'll bite my tongue :)
16:19:07 <DuncanT> asselin: Yes
16:19:27 <jungleboyj> asselin: So, this is just proposing individual accounts for each driver?
16:19:30 <xyang1> I got the names from anteaya
16:19:39 <anteaya> yes you did
16:19:57 <jgriffith> jungleboyj: please no :)
16:20:07 <jungleboyj> I.E. ibm-storwize_svc-ci
16:20:15 <jungleboyj> jgriffith: ?
16:20:29 <asselin> so we'll have 1 ci account review per driver?
16:20:35 <jgriffith> jungleboyj: that's been my big fear in all of this
16:20:36 <joa> didnt we want to avoid that ?
16:20:40 <DuncanT> jungleboyj: It is proposing to have 'cinder' in the name of any account that does mandated CI, and not in the name of any account that doesn't, I think
16:20:58 <jgriffith> DuncanT: so here's my understanding
16:21:01 <xyang1> asselin: the reason we have 4 is because we need 4 CI systems to test 4 drivers
16:21:01 <asselin> my proposal was to have 1 per vendor.
16:21:10 <jungleboyj> DuncanT: Oh, ok.
16:21:16 <jgriffith> There are vendors that have Cinder related CI systems as well as Neutron or Nova
16:21:17 <asselin> xyang1, why is that?
16:21:21 <xyang1> asselin: we plan on consolidating them after Juno, long term plan
16:21:25 <joa> DuncanT: yeah but what if I want an account that will do the testing for every openstack project I contributed a 3rd-party to ?
16:21:27 <Arkady_Kanevsky> it will be tricky to do 1 per vendor with completely different products
16:21:38 <jgriffith> In addition there are vendors w/multiple drivers in Cinder (and others)
16:21:39 <xyang1> asselin: we don't have one place that can test all
16:21:50 <Arkady_Kanevsky> Can we have one per product instead?
16:21:52 <jungleboyj> asselin: So, we upload all our results through one account then.
16:22:00 <jgriffith> the goal is an efficient and compact way to have accounts that represent a ci system or systems
16:22:04 <Arkady_Kanevsky> think of Gluster and Ceph, both under RH now.
16:22:13 <anteaya> jgriffith: agreed
16:22:16 <e0ne> asselin: what about the case where a non-vendor wants to create CI for a driver?
16:22:17 <eharney> jungleboyj: then one broken CI job gets everyone's CI at that company turned off
16:22:18 <jgriffith> without having a separate account/system for every driver in a project for those with more than one
16:22:20 <xyang1> asselin: drivers are developed in 4 BU's
16:22:31 <jungleboyj> anteaya: FYI, we are waiting for our account to get approved.  We are close to having storwize results uploadable if we can get the account approved.
16:22:35 <joa> anteaya, jgriffith: +1 :)
16:22:46 <jungleboyj> eharney: That sounds bad.
16:22:52 <asselin> ok, then are we all ok to have one ci account/review per driver?
16:22:56 <xyang1> our ports were opened on Sunday
16:23:03 <anteaya> jungleboyj: I thought ibm-storwize-ci got created
16:23:09 <xyang1> waiting for web server to setup
16:23:10 <jgriffith> asselin: wait...
16:23:19 <jgriffith> asselin: are they separate independent systems?
16:23:22 <asselin> I'd like all of us to be consistent
16:23:23 <jungleboyj> anteaya: Did it?  I will follow up.  Been on vacation this week.
16:23:28 <Arkady_Kanevsky> asselin proposal +1 (one per driver)
16:23:47 <jgriffith> asselin: are they?
16:23:52 <anteaya> jungleboyj: http://lists.openstack.org/pipermail/openstack-infra/2014-July/001470.html
16:23:53 <jgriffith> IMO that's what determines that
16:24:03 <jgriffith> if you have separate CIs then yes, separate accounts
16:24:11 <jgriffith> if you share a single CI then one account IMO
16:24:16 <asselin> jgriffith, don't understand your question
16:24:36 <jgriffith> asselin: You stated separate accounts for each driver
16:24:50 <jgriffith> asselin: I asked... are you implementing independent CI systems for each driver?
16:24:53 <asselin> yes, in that case we'll setup 4 accounts and 4 ci systems, one for each of our drivers
16:25:02 <jgriffith> asselin: fine by me
16:25:11 <jgriffith> I hate it but whatever
16:25:12 <jgriffith> :)
16:25:24 <asselin> and the expectation is that everyone will do the same so we're all consistent
16:25:33 <avishay> jgriffith: you have one in any case right? :)
16:25:33 <anteaya> asselin: your expectation
16:25:34 <xyang1> asselin: you have different CI systems for iSCSI and FC as well?
16:25:37 <joa> everyone ? like every 2rd-party ?
16:25:41 <joa> 3rd*
16:25:46 <anteaya> no one in third party does the same as anyone else
16:25:49 <jgriffith> avishay: unfortunately soon I'll have 3
16:25:55 <jgriffith> but regardless
16:25:56 <avishay> :/
16:26:06 <eharney> can someone please back up a little bit and explain the actual issue the consistency rules are trying to solve/prevent?  or did i miss something?
16:26:16 <asselin> xyang1, good question....not sure right now...
16:26:24 <e0ne> asselin: what about something like 'nonvendorcompany-cinder-ci'?
16:26:24 <eharney> i like consistency but i'm not exactly sure what the goal is here
16:26:28 <thingee> jgriffith: three?
16:26:44 <avishay> does it really matter?
16:26:46 <joa> eharney: well.. The thing is, for one project, we'd love to get only one report for all the drivers of one 3rd-party provider.
16:27:01 <joa> eharney: but the thing is, it does not necessarily match the needs/way of working of some big companies
16:27:03 <jgriffith> I say we punt on this whole thing and go back to my idea of a dashboard
16:27:06 <eharney> joa: why? Ceph and Gluster reports should be combined?
16:27:14 <jgriffith> independent of OpenStack CI
16:27:19 <asselin> e0ne, no that won't be allowed if we do one per driver
16:27:28 <e0ne> :(
16:27:33 <xyang1> asselin: do you know how to mark the results as (non voting)?
16:27:39 <anteaya> jgriffith: I think we are jamming together two things
16:27:52 <anteaya> 1) naming, which has to scale
16:27:53 <jgriffith> anteaya: yeah.. there are a ton of side topics going here
16:28:15 <anteaya> 2) viewing and interpreting results, which needs to be aggregated
16:28:23 <jgriffith> 2 minutes remain for this topic
16:28:38 <anteaya> eharney: the root of the issue is there are a lot of ci accounts: https://etherpad.openstack.org/p/automated-gerrit-account-naming-format
16:28:41 <jgriffith> anteaya: agreed
16:28:45 <anteaya> eharney: and we are getting more all the time
16:28:57 <anteaya> eharney: we need a format to name them so that naming scales
16:29:10 <jgriffith> anteaya: however my proposal was, and is, that it's cinder-owned/specific to Cinder, which helps with the scale problem
16:29:14 <jgriffith> anteaya: makes naming easier
16:29:20 <anteaya> eharney: https://review.openstack.org/#/c/101013/ is one proposal: https://review.openstack.org/#/c/101013/
16:29:27 <jgriffith> anteaya: and cuts the bureaucracy of picking a name
16:29:38 <joa> jgriffith: this would mean one account for cinder-specific CI and at least another account for others CI ?
16:29:39 <jgriffith> because I get to just say "here's what it is" and move on
16:29:40 <anteaya> except for those companies that want to test additional things
16:29:55 <e0ne> asselin: some OpenStack providers want to test cinder with a back-end for special cases, e.g. Mirantis is interested in integrating 3rd party CI for cinder+ceph
16:29:56 <anteaya> or companies where more than one division tests cinder
16:29:58 <jgriffith> joa: no, that's not really the intent necessarily
16:30:05 <jgriffith> but everybody is running off on tangents
16:30:06 <asselin> seems we need to pick the lowest common denominator: one per driver
16:30:11 <joa> jgriffith: okay
16:30:22 <joa> if I could I'd love to only have one acc
16:30:27 <anteaya> me too
16:30:34 <asselin> I think we can aggregate any driver variants, e.g. iSCSI & FC, in a single account
16:30:36 <anteaya> I would love to have only one account per vendor
16:30:37 <jgriffith> why are we making this so difficult?
16:30:53 <jgriffith> we don't have to be perfect
16:30:58 <jgriffith> it doesn't have to be "forever"
16:30:58 <anteaya> jgriffith: because there is an assumption that everyone testing cinder wants to do it the same way
16:31:10 <anteaya> well it kind of does, regarding naming
16:31:13 <jgriffith> anteaya: you're completely missing the point
16:31:20 <jgriffith> I'm not arguing against consistency
16:31:35 <jgriffith> everybody involved here has spent more time arguing about "names" than actually building a CI system
16:31:40 <DuncanT> One account per vendor is a nice to have but doesn't match the realities of some vendors in terms of business units etc
16:31:41 <jgriffith> which is ridiculous
16:31:49 <thingee> jgriffith: +1
16:31:51 <jgriffith> DuncanT: My proposal is you have the option
16:32:02 <jgriffith> If you can do one account per vendor AWESOME
16:32:02 <DuncanT> jgriffith: +10k
16:32:13 <jgriffith> if you can't, and have to do per driver, then frikin do it
16:32:22 <jgriffith> but please stop arguing about it and wasting time
16:32:31 <jungleboyj> jgriffith: +2
16:32:32 <xyang1> jgriffith: +1.  I'd rather focus on getting CI to work end-to-end, rather than spend time changing account names
16:32:32 <anteaya> but what is the solution?
16:32:32 <stevemac> agree with jgriffith.
16:32:33 <joa> +1
16:32:38 <avishay> +30294013982481
16:32:44 <anteaya> so that another hp department can test cinder
16:32:56 <jgriffith> avishay: that's not my problem
16:32:56 <joa> Company-[Team or product-]ci ?  Sounds good to me.
16:32:58 <jgriffith> errr
16:33:01 <jgriffith> avishay: sorry
16:33:02 <DuncanT> anteaya: For now? Call it HP2 for all it matters
16:33:04 <avishay> :)
16:33:15 <anteaya> DuncanT: can you suggest that in the naming patch
16:33:17 <stevemac> yes, companies working on openstack come in all shapes and sizes
16:33:23 <DuncanT> anteaya: Or HP-some-team-name
16:33:29 <anteaya> DuncanT: right
16:33:36 <anteaya> that is what we are suggesting now
16:33:41 <jgriffith> times up
16:33:44 <eharney> don't these all show in Gerrit with a "pretty name" anyway?
16:33:50 <anteaya> so infra isn't upsetting vendors
16:33:59 <joa> eharney: the pretty name is part of the naming scheme
16:34:02 <asselin> ok, thanks. conclusion: {company}-{team or driver}-ci
16:34:02 <e0ne> Duncan: agree with you
16:34:05 * eharney hides
16:34:06 <anteaya> eharney: https://etherpad.openstack.org/p/automated-gerrit-account-naming-format
16:34:07 <asselin> objections?
16:34:10 <joa> eharney: some names contain Jenkins and confuse devs and reviewers
16:34:13 <jgriffith> #topic LVM support VG on shared storage
16:34:19 <jgriffith> mtanino: you around?
16:34:19 <mtanino> o/
16:34:21 <mtanino> Hi
16:34:21 <joa> asselin: agreed :)
16:34:30 <mtanino> I had some discussion about my proposed driver at openstack-dev with avishay and deepakcs.
16:34:35 <mtanino> And they recommended me to discuss the driver at the meeting. So I come here today.
16:34:38 <e0ne> asselin: great!
16:34:41 <avishay> asselin: i don't think that's what jgriffith said...
16:34:54 <mtanino> I would like to have a quick discussion about benefits, comparison to other drivers, performance.
16:34:55 <avishay> asselin: but please take it offline
16:34:57 <jgriffith> avishay: you're right it's not but I've moved on :)
16:34:59 <rushiagr> avishay: asselin: jgriffith said time's up :)
16:35:22 <mtanino> can I move forward?
16:35:24 <avishay> mtanino: please present your proposal
16:35:33 <mtanino> Could you look at P8-P14 of this document?
16:35:38 <mtanino> https://wiki.openstack.org/w/images/0/08/Cinder-Support_LVM_on_a_sharedLU.pdf
16:35:46 <mtanino> There are benefits, comparison to other drivers, performance.
16:36:19 <flip214> mtanino: how do you handle locking, i.e. ensuring that only one host creates a snapshot or LV at a time?
16:36:21 <mtanino> I would like to know whether these benefits make sense for a cinder driver.
16:36:28 <flip214> CLVM is what I'm getting at.
16:36:44 <jgriffith> flip214: +1
16:36:46 <mtanino> Only the cinder node can create, delete, or snapshot volumes in the VG
16:36:51 <flip214> (cluster-LVM, with locking across the cluster)
16:37:05 <mtanino> compute node can only attach a volume to an instance.
16:37:08 <jgriffith> mtanino: it's not a clustered LVM really though
16:37:15 <jgriffith> err... sorry, flip214 ^^
16:37:19 <DuncanT> If only the one host running cinder-volume can do the actions, do you need locking?
16:37:26 <DuncanT> I don't think you do
16:37:32 <jgriffith> flip214: the VG only exists on one device
16:37:44 <flip214> well, we have similar things with customers who run LVM on top of DRBD, eg. for XEN
16:37:50 <avishay> my feeling is that this is a nice idea in theory, but in practice customers won't want to turn their expensive feature-rich storage into a JBOD that is managed by LVM
16:37:58 <jgriffith> flip214: completely different approach
16:37:59 <flip214> they all want to run dual-primary
16:38:05 <jgriffith> DuncanT: I believe you're correct
16:38:11 <jgriffith> DuncanT: ie no need for locking
16:38:13 * joa thinks it reminds him of what he's working on..
16:38:21 <jgriffith> DuncanT: it just *works* the same as LVM today
16:38:22 <DuncanT> avishay: You can use this on top of cheaper, less feature rich arrays too
16:38:24 <flip214> is there a need for thin pool LV?
16:38:31 <jgriffith> cinder node owns it, controls it etc
16:38:44 <mtanino> avishay: So I do not want to replace vendor drivers. Use both the vendor driver and the LVM driver on a case-by-case basis.
16:38:53 <DuncanT> flip214: It supports thin or thick pretty much for free....
16:38:54 <flip214> thin LV might mean that the compute nodes write to the (thin) metadata
16:39:07 <flip214> so synchronization and locking issues *might* arise.
16:39:13 <jgriffith> flip214: again, works the same way the cinder LVM driver does
16:39:21 <DuncanT> flip214: Ah, I see your point, and agree
16:39:23 <flip214> I'm not against this proposal.
16:39:25 <jgriffith> flip214: only difference is you share it across multiple compute nodes
16:39:35 <flip214> I just want to put a word of caution into the discussion
16:39:38 <avishay> mtanino: does this require changes in Cinder other than the driver?
16:39:43 <joa> mtanino: so it does come on top of other (vendor?) drivers ?
16:39:44 <mtanino> flip214: we do not need Thinpool now
16:39:59 <flip214> mtanino: we do, if there should be efficient snapshots.
16:40:00 <jgriffith> mtanino: I'd like to understand what the benefit is?
16:40:15 <DuncanT> avishay: Requires a nova connector change too, but I'd like to see that renamed and put in anyway for personal reasons
16:40:17 <flip214> jgriffith: performance
16:40:20 <jgriffith> mtanino: I don't see the advantage of this over what we do already
16:40:27 <eharney> benefit is you don't use iSCSI to get from the same node back to itself
16:40:27 <flip214> because one indirection via iscsi is not needed anymore
16:40:29 <jgriffith> flip214: nahh... don't think so
16:40:51 <jgriffith> eharney: don't know what you mean by that
16:40:54 <flip214> the thick LVM snapshots are *really* bad if you've got more than 1 on a LV
16:41:07 <jgriffith> flip214: yes we are painfully aware :)
16:41:15 <eharney> it's direct block device attach from LVM<->VM, not LVM<->iSCSI<->iSCSI<->VM
16:41:20 <eharney> right?
16:41:23 <jgriffith> eharney: no
16:41:39 <jgriffith> the device the VG sits on is still an external san attached device
16:41:44 <jgriffith> whether that be iscsi or FC
16:41:54 <jgriffith> you're just mapping/attaching it to all of the compute nodes
16:42:02 <jgriffith> and accessing LVM directly
16:42:03 <flip214> jgriffith: so thin pool LVs are better. I wouldn't want to use them with a shared VG approach, though.
16:42:12 <eharney> which is what i said
16:42:13 <jgriffith> eharney: basically dumping the abstraction
16:42:17 <DuncanT> jgriffith: But now the compute nodes talk directly to the SAN, not funneled through a linux node
16:42:30 <avishay> i think this is STORAGE<->iSCSI<->LVM<->VM, right?
16:42:36 <jgriffith> eharney: ^^
16:42:39 <jgriffith> what avishay said
16:42:40 <mtanino> jgriffith: I think one of the benefits is "Reduce hardware-based storage workload by offloading the workload to software-based volume operations."
16:42:55 <jgriffith> you left out the storage<->iscsi piece which is nice magic
16:43:04 <flip214> it should work if a thin pool is created for every (cinder volume + snapshots)
16:43:10 <jgriffith> mtanino: I don't follow
16:43:17 <mtanino> jgriffith:hmm..
16:43:26 <jungleboyj> mtanino: Makes sense.
16:43:32 <jgriffith> eharney: you see why I disagreed?
16:43:40 <flip214> then only one (compute) node accesses a thin pool at the same time
16:43:46 <jgriffith> eharney: I don't understand the benefit as it doesn't change datapath
16:43:54 <eharney> jgriffith: yes, i missed a step in the doc i was looking at
16:44:26 <jgriffith> The way I interpreted this was just that instead of attaching a volume to the compute node
16:44:32 <jgriffith> you're attaching the entire VG
16:44:44 <jgriffith> doesn't change how data is transferred for the most part
16:44:52 <jgriffith> except for caching/buffering
16:45:16 <jgriffith> just breaks the abstraction and creates yet another layer
16:45:17 <avishay> this feels like a research project rather than something customers will want to use.  on the one hand i'd want to see real customer demand for this, but on the other hand we don't require that for other drivers... don't know
16:45:36 <eharney> looks like it removes a layer to me...
16:45:51 <jgriffith> My only argument at this point is it's a LOT of code and work and I don't know what benefit?
16:46:00 <avishay> eharney: it adds LVM to the existing stack
16:46:02 <eharney> i'll have to think on this some more
16:46:03 <hemna> morning
16:46:05 <avishay> jgriffith: +8
16:46:06 <eharney> i'm clearly missing something
16:46:11 <jgriffith> Other than you can use any SAN device and don't need a driver in OpenStack for it
16:46:16 <jgriffith> which is kind of a win :)
16:46:25 <mtanino> jgriffith: Thank you for your comment.
16:46:29 <thingee> mtanino: do you have any data to back the performance you're claiming?
16:46:48 <avishay> jgriffith: but you're not really using the SAN, you're turning it into a JBOD
16:46:59 <jgriffith> avishay: yeah... that's the beauty of it
16:47:08 <jgriffith> avishay: you use any san device you want
16:47:13 <jgriffith> avishay: treat it like a jbod
16:47:13 <avishay> jgriffith: not taking advantage of storage's QoS, snapshots, etc.  might as well just buy servers with disks.
16:47:19 <mtanino> thingee: I have measured performance in P13 and P14
16:47:24 <hemna> there is nothing preventing you today from doing this
16:47:25 <jgriffith> avishay: even better treat a volume on it like a jbod
16:47:37 <jgriffith> hemna: there's a TON of things preventing it
16:47:43 * thingee checks p14
16:47:46 <hemna> with your backend: create a massive volume, attach it to the cinder node, and create a VG on it. Done.
16:47:46 <mtanino> thingee: https://wiki.openstack.org/w/images/0/08/Cinder-Support_LVM_on_a_sharedLU.pdf
16:47:52 <jgriffith> hemna: but I suspect you're thinking of doing it the existing LVM way
16:48:06 <jgriffith> hemna: yeah, that's what I thought you might be getting at
16:48:11 <hemna> I'm not sure I see a reason for a driver to do this
16:48:32 <jgriffith> hemna: he wants to take it one level deeper and put the entire VG on every compute node
16:48:40 <jgriffith> access LVM directly on the compute node
16:48:56 <joa> sounds like something I discussed with DuncanT
16:48:56 <hemna> ugh
16:49:06 <jgriffith> mtanino: can I ask two questions:
16:49:12 <mtanino> jgriffith: Yes.
16:49:14 <jgriffith> mtanino: one of them is actually thingee 's question
16:49:14 <mtanino> please
16:49:27 <jgriffith> 1. Performance testing/data results
16:49:35 <jgriffith> including details of comparison
16:49:42 <jgriffith> 2. What's the real motivation here?
16:50:00 <jgriffith> Is this realy a performance thing... or is it a way to not have to have specific drivers for san devices?
16:50:24 <mtanino> jgriffith: I measured performance between LVMiSCSI, SharedLVM, and raw FC volume at P13, P14
16:50:36 <hemna> I can't imagine the performance of this would be better than a direct iSCSI/FC attached block device to the compute node.
16:50:53 <jgriffith> mtanino: well... you need more details (or I do)
16:50:59 <jgriffith> mtanino: like how were these things configured
16:51:05 <jgriffith> did you use OpenStack
16:51:10 <thingee> single vm, single volume
16:51:12 <jgriffith> did you use the same backing device
16:51:13 <jgriffith> etc etc
16:51:27 <DuncanT> hemna: It isn't better than that, it *is* better than a fat lun attached to the head node then re-exported
16:51:37 <mtanino> jgriffith: yes. I will try to address your requirements and will post to openstack-dev.
16:51:41 <avishay> mtanino: the results are a bit hard to believe...adding an extra layer has no effect on latency?  is there extra caching that may affect correctness?  what happens with performance of a cloned volume (i.e., test LVM snaps vs your controller's snaps)
16:51:51 <thingee> how do things look with 8 vms, each with their own volume, doing reads/writes
16:52:03 <hemna> avishay, +1
16:52:06 <jgriffith> mtanino: so I'm not necessarily opposed to the idea
16:52:09 <DuncanT> hemna: And I can point you to somebody running 3par like that now, because they want many tiny volumes and 3par runs out too fast
16:52:23 <jgriffith> mtanino: but I think there needs to be some clarity in the motivation and benefits
16:52:26 <jgriffith> as well as costs
16:52:34 <jgriffith> There are drawbacks to this
16:52:34 <guitarzan> mtanino: specifics may give your critics a bit more insight :)
16:52:44 <mtanino> jgriffith: I understand
16:52:49 <jgriffith> DuncanT: +1
16:52:54 <jgriffith> DuncanT: same with equalogic
16:53:08 <jgriffith> DuncanT: and a bunch of people that have backend devices that have no cinder drivers
16:53:25 <hemna> DuncanT, yah I wouldn't deploy in that configuration because it's obviously going to be slow
16:53:32 <tsekiyama> avishay: It's actually removing a layer: the software iSCSI daemon (tgtd) running on the cinder-volume node.
16:53:32 <jgriffith> DuncanT: but there are some risks/problems with the double iscsi-hop as I call it
16:53:36 <hemna> but everyone has their reasons I suppose
16:53:51 <DuncanT> hemna: Better than 'can't use 80% of my capacity' for this customer at least
16:53:57 <thingee> 7 MIN WARNING
16:54:00 <jgriffith> hemna: DuncanT avishay keep in mind their focus here is FC
16:54:02 <jgriffith> not iSCSI
16:54:13 <jgriffith> thingee: thanks!
16:54:16 <harlowja> i have been summoned?
16:54:18 <avishay> tsekiyama: instead of VM-FC-Storage you have VM-LVM-FC-Storage, no?
16:54:24 <mtanino> Thank you so many comments.
16:54:26 <thingee> harlowja: update your bps
16:54:27 <jgriffith> oh.. yeah, this is the last topic anyway :)
16:54:33 <hemna> avishay, yup
16:54:36 <jgriffith> harlowja: what thingee said
16:54:48 <jgriffith> harlowja: and... implement them :)
16:54:54 <harlowja> done
16:54:57 <harlowja> wish granted
16:54:57 <avishay> harlowja: blueprints, not beats-per-second
16:54:59 <thingee> that was easy
16:55:08 <jgriffith> avishay: LOL
16:55:35 <jungleboyj> avishay: Crank up the BPMs
16:55:39 <thingee> jgriffith: alright, so what are we leaving mtanino with?
16:55:42 <harlowja> my modem not fast enough for u thingee ?
16:55:43 <hemna> I dunno, I think if this is simply to overcome a missing cinder volume driver for an FC backend, then spend the effort writing that instead.
16:55:57 <harlowja> 14.4kbps ftw
16:56:11 <joa> btw about the bps, Should I refer my bp somewhere to improve visibility, or should I leave to you guys to review it whenever you have time ?
16:56:12 <jgriffith> hemna: perhaps
16:56:15 <mtanino> thingee: please move next item
16:56:17 <flip214> the part that this proposal is addressing is to *decrease* latency, by removing the iscsi indirection.
16:56:23 <joa> (came a bit late in the first topic)
16:56:26 <hemna> putting LVM between the array and the VM is not going to perform the same.
16:56:26 <jgriffith> hemna: or attach it to the cinder node and use what we have
16:56:45 <avishay> flip214: please explain
16:56:55 <tsekiyama> avishay: ah, I mean when compared to existing iSCSI-LVM driver
16:57:02 <flip214> avishay: before: compute => iscsi => cinder => FC => storage
16:57:02 <thingee> mtanino: that's the last item
16:57:09 <flip214> after: compute => FC => storage
16:57:15 <avishay> flip214: no...
16:57:20 <thingee> and I want to have an idea we can leave you with, because this driver keeps coming up
16:57:21 <mtanino> flip214: Yes, that's correct. Latency decreases compared to LVMiSCSI
16:57:22 <flip214> and splitting up the storage into parts via LVM
16:57:33 <avishay> flip214: before compute->FC->storage, after compute->LVM->FC->Storage
16:57:36 <jgriffith> flip214: I don't think so
16:57:46 <jgriffith> flip214: yeah... what avishay pointed out
16:57:53 <guitarzan> avishay: no, they're comparing to exporting to cinder volume with cinder lvm anyway
16:57:54 <hemna> avishay, yes
16:57:58 <avishay> the interesting comparison is not LVM iSCSI, it's a regular FC driver
16:58:00 <guitarzan> not directly to storage
16:58:13 <guitarzan> avishay: that's not interesting at all, it's obviously going to be worse
16:58:16 <jgriffith> flip214: or.... storage--->FC-->cinder-node--->iscsi--->compute
16:58:47 <avishay> guitarzan: right, so what's the benefit?  is anyone deploying the other way?  does it even work?
16:58:59 <hemna> wouldn't you need to have something cinder-like on the compute host to divvy up the LVM VG to the VMs?
16:59:04 <guitarzan> avishay: you're just getting back to the same "write a cinder driver" answer
16:59:09 <jgriffith> hemna: nope
16:59:10 <guitarzan> which is a fine viewpoint I suppose
16:59:14 <hemna> so it's reimplementing the scheduler/manager/LVM driver on the compute host?
16:59:25 <DuncanT> avishay: One advantage is that it allows you to exceed SAN limits on number of volumes / snaps, and that *is* a real problem for some people
16:59:29 <guitarzan> hemna: no?
16:59:29 <jgriffith> hemna: LVM lets you do some pretty neat stuff that way
16:59:39 <jgriffith> hemna: no
16:59:39 <flip214> look at page 10 (11), "4. Comparison of Proposed LVM volume driver"
16:59:46 <jgriffith> aye aye aye
16:59:48 <hemna> ok, maybe I don't get that part then
16:59:54 <jgriffith> everybody talks, but nobody listens
16:59:56 <jgriffith> :(
17:00:04 <guitarzan> hemna: they have one c-vol managing lvm for the entire vg
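To ground the data-path debate above: in the shared-VG proposal only the cinder-volume node runs lvcreate/lvremove/snapshot against the VG, while a compute node merely activates the LV it has been handed and passes the local device path to the hypervisor, with no tgtd/iSCSI re-export in the middle. A minimal sketch of what a compute-side attach could look like, assuming the SAN LU backing the VG is already mapped to the compute host (e.g. over FC); the class and method names here are hypothetical and are not the actual Nova connector change mtanino is proposing.

    import subprocess

    class SharedLVMConnector(object):
        """Hypothetical compute-side attach for an LV in a shared VG."""

        def connect_volume(self, vg_name, lv_name):
            # Activate only this LV; create/delete/snapshot stay on the
            # cinder-volume node, so no cluster-wide locking happens here.
            subprocess.check_call(
                ["lvchange", "-ay", "%s/%s" % (vg_name, lv_name)])
            # Hand the raw device node straight to the hypervisor.
            return {"type": "block",
                    "path": "/dev/%s/%s" % (vg_name, lv_name)}

        def disconnect_volume(self, vg_name, lv_name):
            # Deactivate so the LV is not left open on this host.
            subprocess.check_call(
                ["lvchange", "-an", "%s/%s" % (vg_name, lv_name)])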
17:00:09 <thingee> times up
17:00:16 <jgriffith> thanks everybody
17:00:18 <jgriffith> good meeting
17:00:19 <anteaya> thanks
17:00:22 <mtanino> thank you.
17:00:22 <flip214> thanks
17:00:22 <jgriffith> #endmeeting cinder