16:01:17 #startmeeting cinder
16:01:18 * DuncanT waves
16:01:19 Meeting started Wed Jul 2 16:01:17 2014 UTC and is due to finish in 60 minutes. The chair is jgriffith. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:20 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:23 The meeting name has been set to 'cinder'
16:01:24 hello
16:01:24 o/
16:01:28 hello
16:01:28 hi
16:01:32 hi
16:01:35 hey everyone
16:01:47 hello
16:01:56 I'm going to hijack the agenda for a second cuz I can :)
16:01:57 Agenda as usual at https://wiki.openstack.org/wiki/CinderMeetings
16:02:02 Hello.
16:02:04 #topic blueprints
16:02:22 ok... so pop quiz: how many BP's were targeted for J1?
16:02:31 anyone?
16:02:40 link plz? :P
16:02:45 feels like a trick question
16:02:47 rushiagr: no link... it's a quiz
16:02:53 avishay: nahh
16:02:56 all of them?
16:02:59 Ok... so it was like 15 at one point
16:03:01 hi
16:03:03 15!
16:03:08 Now... how many landed in J1?
16:03:14 1?
16:03:17 2?
16:03:18 hint.. rhymes with hero
16:03:20 3
16:03:24 Zero.
16:03:27 shmero?
16:03:31 damn
16:03:34 0
16:03:36 :)
16:03:39 :/
16:03:46 VERY VERY bad :(
16:03:48 Soo.....
16:04:04 https://launchpad.net/cinder/+milestone/juno-2
16:04:15 Here we are a couple weeks out from J2
16:04:17 hi john
16:04:19 hi guys
16:04:27 16 BP's targeted
16:04:38 and a bunch of them in "unknown" status
16:04:42 we have 5 targeting J-2
16:04:44 not listed there
16:04:46 we have a problem here
16:04:50 4 drivers and CG
16:04:53 xyang1: that makes the problem worse!
16:05:05 I have one BP.
16:05:12 My point here is....
16:05:17 I added J-2, but it was removed by ttx
16:05:23 jgriffith: I will get some updates to mine out there.
16:05:24 mtanino: yes, and your code is up "thanks"
16:05:29 jungleboyj: thanks
16:05:33 jgriffith: can you target them?
16:05:46 I need everyone to please please update what you've signed up to work on
16:05:54 jgriffith needs to set a priority for it to stick to the milestone
16:06:16 ttx is everywhere :)
16:06:21 xyang1: I'll get to yours, but they're on the bottom of my list
16:06:36 I can see some drivers regularly getting targeted to the next release, as the progress on them is slow. We can't do anything with them, can we?
16:06:37 avishay: say my name three times and I appear
16:06:38 harlowja_away: when you come back... read my comments above
16:06:45 jungleboyj: you're on the hook
16:06:46 o/
16:06:47 ttx: :)
16:06:49 today please ;)
16:07:08 jgriffith: we have 3 bp's. how do we get them in.
16:07:10 jgriffith: Indeed.
16:07:24 rushiagr: do you know anything about netapp's refactor from Alex?
16:07:33 jgriffith: so what do you think the problem is? lack of code coming in, lack of reviews, or review effort not focused?
16:07:41 stevemac: hold on... I'll get to that next
16:07:47 jgriffith: no. I haven't been with netapp for more than a year..
16:07:59 rushiagr: oops... sorry I forgot
16:08:05 jgriffith: np
16:08:22 jgriffith: ok. thanks
16:08:24 If you sign up for something, please update
16:08:41 and please let me know if your plans have changed and you're not going to be working on it
16:08:57 sadly many of the folks these days aren't on IRC or attending meetings which makes it difficult
16:09:03 but a warning...
16:09:20 I'm going to start punting BP's that don't seem to be making any progress or that people don't update me on
16:09:32 jgriffith: +1
16:09:33 I'm going to start doing a weekly house cleaning
16:09:54 jgriffith: netapp in US has mandatory vacation :-) this week
16:09:57 so if you really care about your BP you need to either make progress or communicate as to why you're not
16:10:04 tbarron: how nice for them
16:10:08 are all these BPs reviewed?
16:10:18 OpenStack is global and doesn't take vacation
16:10:19 jgriffith: should I ping you after the meeting about our BPs?
16:10:19 jgriffith: I'm new to netapp openstack and am lurking :-)
16:10:20 just sayin :)
16:10:21 tbarron: good for you guys
16:10:37 xyang1: sure... but I'm on those don't worry
16:10:39 Ok
16:10:47 That's my rant for the morning
16:10:54 and I'll stop whining now :)
16:11:01 forewarning
16:11:06 jgriffith: thanks
16:11:08 I'm going to turn into a bit of a jerk in the coming weeks
16:11:09 * jungleboyj is on vacation right now. :-)
16:11:21 Ok... now back to our regularly scheduled program
16:11:35 # topic batching code cleanup
16:11:44 #topic batching code cleanup
16:11:46 John, do we need to do something special for BPs for new/updated drivers?
16:11:48 DuncanT: you're up
16:12:00 Arkady_Kanevsky: nope, all I need for those is a BP
16:12:11 Ok, so my point is simple, we keep getting lots of mechanical code cleanups
16:12:13 I'll get around to prioritizing etc
16:12:16 +1
16:12:16 submit your patch
16:12:32 DuncanT: I like the idea
16:12:33 Not without value, but they cause merge conflicts for actual features and make those far harder than they need to be
16:13:03 I'm wondering if there's some tag or something we can add to make it easy to find these again at merge time?
16:13:36 if you all use the same topic you can do a gerrit search on it
16:13:44 DuncanT: we could surely create a tag
16:13:49 code-cleanup might be one
16:14:13 maybe something more descriptive and one level deeper
16:14:18 as core you can change the topic of those two patches, I believe
16:14:22 pep8-hacking-fixes
16:14:28 py3-updates
16:14:41 If we can change the topic, then we're golden
16:14:46 then you can find both with one query
16:14:48 Anybody not like the idea?
16:14:55 I *think* cores can change topics
16:15:01 let me know if I'm wrong
16:15:21 We can sort the mechanics out after this meeting, I just want to know if anybody hates it?
16:15:27 If I can I don't know how
16:15:29 nope, sounds fine.
16:15:36 sounds ok to me
16:15:41 sounds good
16:15:49 jgriffith: we can try after the meeting to see
16:15:54 DuncanT: I think we have consensus
16:15:57 anteaya: cool
16:16:00 Ok, sold. I'll put a note on the mailing list, sort out the details and we can start batching
16:16:07 I'm done
16:16:10 DuncanT: awesome! Nice work
16:16:16 jgriffith: I am ok with the plan.
16:16:27 jgriffith, anteaya: the "cherry-pick to" button I think would do what you want
16:16:32 #topic 3rd party CI naming
16:16:37 asselin: you're on deck
16:16:45 no I don't think cherry-pick is it, just topic changes should work
16:16:49 so I posted a message on the ml: http://lists.openstack.org/pipermail/openstack-dev/2014-July/039103.html
16:16:52 not changing the patch or parents
16:17:18 it's a proposal to have a dedicated ci system for each vendor to do cinder-mandated tests.
16:17:41 as a way to isolate them from other unofficial tests
16:18:07 asselin: Some vendors will need multiple, since teams can be totally disjoint between products. Other than that, makes sense to me
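
To make the topic-based batching idea above concrete, here is a minimal sketch (assuming the "code-cleanup" topic name floated in the discussion, not an agreed convention) of listing every open Cinder change that shares a Gerrit topic via Gerrit's REST API:

    import json
    import requests

    GERRIT = "https://review.openstack.org"

    def changes_for_topic(topic, project="openstack/cinder"):
        # Gerrit's change-query endpoint; the response starts with a ")]}'"
        # guard line that has to be stripped before parsing the JSON.
        query = "project:%s topic:%s status:open" % (project, topic)
        resp = requests.get(GERRIT + "/changes/", params={"q": query})
        resp.raise_for_status()
        return json.loads(resp.text.split("\n", 1)[1])

    if __name__ == "__main__":
        for change in changes_for_topic("code-cleanup"):
            print("%6d  %s" % (change["_number"], change["subject"]))

Reviewers (or a periodic job) could run the same query just before a milestone crunch to rebase or fast-track the whole cleanup batch in one pass.
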
16:18:10 so that reviewers can quickly know what the +1 and -2 means
16:18:17 but we're not the only project which might require "official" tests right ?
16:18:29 DuncanT, yes they would need multiple
16:18:30 we have 4
16:18:47 emc-vnx-ci, emc-vmax-ci, emc-vipr-ci, emc-xio-ci
16:18:53 ok I see... 4 vendor-cinder-ci accounts
16:19:00 b/c they're 4 different teams
16:19:03 Company-[Team or product-]ci ?
16:19:04 honestly I'd sort of like to go back to my original proposal for all of this that I made back at the summit... but I'll bite my tongue :)
16:19:07 asselin: Yes
16:19:27 asselin: So, this is just proposing individual accounts for each driver?
16:19:30 I got the names from anteaya
16:19:39 yes you did
16:19:57 jungleboyj: please no :)
16:20:07 I.E. ibm-storwize_svc-ci
16:20:15 jgriffith: ?
16:20:29 so we'll have 1 ci account review per driver?
16:20:35 jungleboyj: that's been my big fear in all of this
16:20:36 didn't we want to avoid that ?
16:20:40 jungleboyj: It is proposing to have 'cinder' in the name of any account that does mandated ci, and not in the name of any account that doesn't, I think
16:20:58 DuncanT: so here's my understanding
16:21:01 asselin: the reason we have 4 is because we need 4 CI systems to test 4 drivers
16:21:01 my proposal was to have 1 per vendor.
16:21:10 DuncanT: Oh, ok.
16:21:16 There are vendors that have Cinder related ci systems as well as Neutron or Nova
16:21:17 xyang1, why is that?
16:21:21 asselin: we plan on consolidating them after Juno, long term plan
16:21:25 DuncanT: yeah but what if I want an account that will do the testing for every openstack project I contributed a 3rd-party driver to ?
16:21:27 it will be tricky to do 1 per vendor with completely different products
16:21:38 In addition there are vendors w/multiple drivers in Cinder (and others)
16:21:39 asselin: we don't have one place that can test all
16:21:50 Can we have one per product instead?
16:21:52 asselin: So, we upload all our results through one account then.
16:22:00 the goal is an efficient and compact way to have accounts that represent a ci system or systems
16:22:04 think of gluster and ceph, both under RH now.
16:22:13 jgriffith: agreed
16:22:16 asselin: what about the case where a non-vendor wants to create ci for a driver?
16:22:17 jungleboyj: then one broken CI job gets everyone's CI at that company turned off
16:22:18 without having a separate account/system for every driver in a project for those with more than one
16:22:20 asselin: drivers are developed in 4 BU's
16:22:31 anteaya: FYI, we are waiting for our account to get approved. We are close to having storwize results uploadable if we can get the account approved.
16:22:35 anteaya, jgriffith: +1 :)
16:22:46 eharney: That sounds bad.
16:22:52 ok, then are we all ok to have one ci account/review per driver?
16:22:56 our ports were opened on Sunday
16:23:03 jungleboyj: I thought ibm-storwize-ci got created
16:23:09 waiting for the web server to be set up
16:23:10 asselin: wait...
16:23:19 asselin: are they separate independent systems?
16:23:22 I'd like all of us to be consistent
16:23:23 anteaya: Did it? I will follow up. Been on vacation this week.
16:23:28 asselin proposal +1 (one per driver)
16:23:47 asselin: are they?
16:23:52 jungleboyj: http://lists.openstack.org/pipermail/openstack-infra/2014-July/001470.html
16:23:53 IMO that's what determines that
16:24:03 if you have separate CI's then yes, separate accounts
16:24:11 if you share a single CI then one account IMO
16:24:16 jgriffith, don't understand your question
16:24:36 asselin: You stated separate accounts for each driver
16:24:50 asselin: I asked... are you implementing independent CI systems for each driver?
16:24:53 yes, in that case we'll set up 4 accounts and 4 ci systems, one for each of our drivers
16:25:02 asselin: fine by me
16:25:11 I hate it but whatever
16:25:12 :)
16:25:24 and the expectation is that everyone will do the same so we're all consistent
16:25:33 jgriffith: you have one in any case right? :)
16:25:33 asselin: your expectation
16:25:34 asselin: you have different CI systems for iSCSI and FC as well?
16:25:37 everyone ? like every 3rd-party ?
16:25:46 no one in third party does the same as anyone else
16:25:49 avishay: unfortunately soon I'll have 3
16:25:55 but regardless
16:25:56 :/
16:26:06 can someone please back up a little bit and explain the actual issue the consistency rules are trying to solve/prevent? or did i miss something?
16:26:16 xyang1, good question....not sure right now...
16:26:24 asselin: what about something like 'nonvendorcompany-cinder-ci'?
16:26:24 i like consistency but i'm not exactly sure what the goal is here
16:26:28 jgriffith: three?
16:26:44 does it really matter?
16:26:46 eharney: well.. the thing is, for one project we'd love to get only one report for all the drivers of one 3rd-party provider.
16:27:01 eharney: but the thing is, it does not necessarily match the needs/way of working of some big companies
16:27:03 I say we punt on this whole thing and go back to my idea of a dashboard
16:27:06 joa: why? Ceph and Gluster reports should be combined?
16:27:14 independent of OpenStack CI
16:27:19 e0ne, no that won't be allowed if we do one per driver
16:27:28 :(
16:27:33 asselin: do you know how to mark the results as (non voting)?
16:27:39 jgriffith: I think we are jamming together two things
16:27:52 1) naming, which has to scale
16:27:53 anteaya: yeah.. there are a ton of side topics going on here
16:28:15 2) viewing and interpreting results, which needs to be aggregated
16:28:23 2 minutes remain for this topic
16:28:38 eharney: the root of the issue is there are a lot of ci accounts: https://etherpad.openstack.org/p/automated-gerrit-account-naming-format
16:28:41 anteaya: agreed
16:28:45 eharney: and we are getting more all the time
16:28:57 eharney: we need a format to name them so that naming scales
16:29:10 anteaya: however my proposal was and is that it's cinder-owned/specific to Cinder, which helps with the scale problem
16:29:14 anteaya: makes naming easier
16:29:20 eharney: https://review.openstack.org/#/c/101013/ is one proposal
16:29:27 anteaya: and cuts the bureaucracy of picking a name
16:29:38 jgriffith: this would mean one account for cinder-specific CI and at least another account for other CI ?
16:29:39 because I get to just say "here's what it is" and move on
16:29:40 except for those companies that want to test additional things
16:29:55 asselin: some openstack providers want to test cinder with a back-end for special cases, e.g. Mirantis is interested in integrating 3rd party ci for cinder+ceph
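
A minimal sketch of the {company}-{team or driver}-ci convention being converged on above; the pattern and helper are purely illustrative (the sample names are the ones quoted in the meeting), not anything infra has mandated:

    import re

    # {company}[-{team or driver}]-ci, lower case, hyphen separated
    CI_NAME = re.compile(r"^[a-z0-9]+(-[a-z0-9_]+)*-ci$")

    def is_valid_ci_name(name):
        """Return True if a Gerrit account name follows the proposed format."""
        return bool(CI_NAME.match(name.lower()))

    # Per-driver accounts mentioned in the discussion:
    for name in ("emc-vnx-ci", "emc-vmax-ci", "emc-vipr-ci", "emc-xio-ci",
                 "ibm-storwize_svc-ci"):
        assert is_valid_ci_name(name)

A vendor with a single team could still use plain company-ci under the same pattern, which keeps the per-vendor and per-driver options compatible.
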
16:29:56 or companies where more than one division tests cinder
16:29:58 joa: no, that's not really the intent necessarily
16:30:05 but everybody is running off on tangents
16:30:06 seems we need to pick the lowest common denominator: one per driver
16:30:11 jgriffith: okay
16:30:22 if I could I'd love to only have one account
16:30:27 me too
16:30:34 I think we can aggregate any driver variants, e.g. iSCSI & FC, in a single account
16:30:36 I would love to have only one account per vendor
16:30:37 why are we making this so difficult?
16:30:53 we don't have to be perfect
16:30:58 it doesn't have to be "forever"
16:30:58 jgriffith: because there is an assumption that everyone testing cinder wants to do it the same way
16:31:10 well it kind of does, regarding naming
16:31:13 anteaya: you're completely missing the point
16:31:20 I'm not arguing against consistency
16:31:35 everybody involved here has spent more time arguing about "names" than actually building a CI system
16:31:40 One account per vendor is a nice-to-have but doesn't match the realities of some vendors in terms of business units etc
16:31:41 which is ridiculous
16:31:49 jgriffith: +1
16:31:51 DuncanT: My proposal is you have the option
16:32:02 If you can do one account per vendor AWESOME
16:32:02 jgriffith: +10k
16:32:13 if you can't and have to do it per driver then frikin do it
16:32:22 but please stop arguing about it and wasting time
16:32:31 jgriffith: +2
16:32:32 jgriffith: +1. I'd rather focus on getting CI to work end-to-end than spend time changing account names
16:32:32 but what is the solution?
16:32:32 agree with jgriffith.
16:32:33 +1
16:32:38 +30294013982481
16:32:44 so that another hp department can test cinder
16:32:56 avishay: that's not my problem
16:32:56 Company-[Team or product-]ci ? Sounds good to me.
16:32:58 errr
16:33:01 avishay: sorry
16:33:02 anteaya: For now? Call it HP2 for all it matters
16:33:04 :)
16:33:15 DuncanT: can you suggest that in the naming patch
16:33:17 yes, companies working on openstack come in all shapes and sizes
16:33:23 anteaya: Or HP-some-team-name
16:33:29 DuncanT: right
16:33:36 that is what we are suggesting now
16:33:41 time's up
16:33:44 don't these all show in Gerrit with a "pretty name" anyway?
16:33:50 so infra isn't upsetting vendors
16:33:59 eharney: the pretty name is part of the naming scheme
16:34:02 ok, thanks. conclusion: {company}-{team or driver}-ci
16:34:02 Duncan: agree with you
16:34:05 * eharney hides
16:34:06 eharney: https://etherpad.openstack.org/p/automated-gerrit-account-naming-format
16:34:07 objections?
16:34:10 eharney: some names contain Jenkins and confuse devs and reviewers
16:34:13 #topic LVM support VG on shared storage
16:34:19 mtanino: you around?
16:34:19 o/
16:34:21 Hi
16:34:21 asselin: agreed :)
16:34:30 I had some discussion about my proposed driver at openstack-dev with avishay and deepakcs.
16:34:35 They recommended I discuss the driver at the meeting, so I came here today.
16:34:38 asselin: great!
16:34:41 asselin: i don't think that's what jgriffith said...
16:34:54 I would like to have a quick discussion about benefits, comparison to other drivers, and performance.
16:34:55 asselin: but please take it offline
16:34:57 avishay: you're right, it's not, but I've moved on :)
16:34:59 avishay: asselin: jgriffith said time's up :)
16:35:22 can I move forward?
16:35:24 mtanino: please present your proposal
16:35:33 Could you look at P8-P14 of this document?
16:35:38 https://wiki.openstack.org/w/images/0/08/Cinder-Support_LVM_on_a_sharedLU.pdf
16:35:46 It covers benefits, comparison to other drivers, and performance.
16:36:19 mtanino: how do you handle locking, i.e. ensuring that only one host creates a snapshot or LV at a time?
16:36:21 I would like to know whether these benefits make sense for a cinder driver.
16:36:28 CLVM is what I'm getting at.
16:36:44 flip214: +1
16:36:46 Only the cinder node can create, delete, or snapshot in the VG
16:36:51 (cluster-LVM, with locking across the cluster)
16:37:05 a compute node can only attach a volume to an instance.
16:37:08 mtanino: it's not a clustered LVM really though
16:37:15 err... sorry, flip214 ^^
16:37:19 If only the one host running cinder-volume can do the actions, do you need locking?
16:37:26 I don't think you do
16:37:32 flip214: the VG only exists on one device
16:37:44 well, we have similar things with customers who run LVM on top of DRBD, e.g. for Xen
16:37:50 my feeling is that this is a nice idea in theory, but in practice customers won't want to turn their expensive feature-rich storage into a JBOD that is managed by LVM
16:37:58 flip214: completely different approach
16:37:59 they all want to run dual-primary
16:38:05 DuncanT: I believe you're correct
16:38:11 DuncanT: i.e. no need for locking
16:38:13 * joa thinks it reminds him of what he's working on..
16:38:21 DuncanT: it just *works* the same as LVM today
16:38:22 avishay: You can use this on top of cheaper, less feature-rich arrays too
16:38:24 is there a need for thin pool LVs?
16:38:31 cinder node owns it, controls it etc
16:38:44 avishay: So I do not want to replace vendor drivers. Use both the vendor driver and the LVM driver on a case-by-case basis.
16:38:53 flip214: It supports thin or thick pretty much for free....
16:38:54 thin LVs might mean that the compute nodes write to the (thin) metadata
16:39:07 so synchronization and locking issues *might* arise.
16:39:13 flip214: again, works the same way the cinder LVM driver does
16:39:21 flip214: Ah, I see your point, and agree
16:39:23 I'm not against this proposal.
16:39:25 flip214: only difference is you share it across multiple compute nodes
16:39:35 I just want to put a word of caution into the discussion
16:39:38 mtanino: does this require changes in Cinder other than the driver?
16:39:43 mtanino: so it does come on top of other (vendor?) drivers ?
16:39:44 flip214: we do not need thin pool now
16:39:59 mtanino: we do, if there are to be efficient snapshots.
16:40:00 mtanino: I'd like to understand what the benefit is?
16:40:15 avishay: Requires a nova connector change too, but I'd like to see that renamed and put in anyway for personal reasons
16:40:17 jgriffith: performance
16:40:20 mtanino: I don't see the advantage of this over what we do already
16:40:27 benefit is you don't use iSCSI to get from the same node back to itself
16:40:27 because one indirection via iSCSI is not needed anymore
16:40:29 flip214: nahh... don't think so
16:40:51 eharney: don't know what you mean by that
16:40:54 the thick LVM snapshots are *really* bad if you've got more than 1 on an LV
16:41:07 flip214: yes we are painfully aware :)
16:41:15 it's direct block device attach from LVM<->VM, not LVM<->iSCSI<->iSCSI<->VM
16:41:20 right?
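
A minimal sketch of the control/data split mtanino describes above, assuming a hypothetical shared VG name; it only illustrates the idea (all LVM metadata changes stay on the cinder-volume node, so no CLVM-style cluster locking is needed, while a compute node merely activates the LV and hands the local block device to the hypervisor) and is not the actual proposed driver code:

    import subprocess

    VG = "shared_vg"  # hypothetical VG carved out of one big SAN LU

    # --- cinder-volume node: the only place LVM metadata is modified ---
    def create_volume(name, size_gb):
        subprocess.check_call(
            ["lvcreate", "-n", name, "-L", "%dg" % size_gb, VG])

    def create_snapshot(origin, snap_name, size_gb):
        subprocess.check_call(
            ["lvcreate", "-s", "-n", snap_name, "-L", "%dg" % size_gb,
             "%s/%s" % (VG, origin)])

    def delete_volume(name):
        subprocess.check_call(["lvremove", "-f", "%s/%s" % (VG, name)])

    # --- compute node: attach-only, never touches LVM metadata ---
    def local_device_path(name):
        # Activate the LV locally and return the block device the hypervisor
        # can attach directly -- no iSCSI re-export from the cinder node.
        subprocess.check_call(["lvchange", "-ay", "%s/%s" % (VG, name)])
        return "/dev/%s/%s" % (VG, name)

Keeping the metadata-changing calls on a single host is what lets the proposal avoid CLVM, which is the locking concern flip214 raises above.
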
16:41:23 eharney: no
16:41:39 the device the VG sits on is still an external SAN-attached device
16:41:44 whether that be iSCSI or FC
16:41:54 you're just mapping/attaching it to all of the compute nodes
16:42:02 and accessing LVM directly
16:42:03 jgriffith: so thin pool LVs are better. I wouldn't want to use them with a shared VG approach, though.
16:42:12 which is what i said
16:42:13 eharney: basically dumping the abstraction
16:42:17 jgriffith: But now the compute nodes talk directly to the SAN, not funneled through a Linux node
16:42:30 i think this is STORAGE<->iSCSI<->LVM<->VM, right?
16:42:36 eharney: ^^
16:42:39 what avishay said
16:42:40 jgriffith: I think one of the benefits is "Reduce hardware-based storage workload by offloading the workload to software-based volume operations."
16:42:55 you left out the storage<->iSCSI piece which is nice magic
16:43:04 it should work if a thin pool is created for every (cinder volume + snapshots)
16:43:10 mtanino: I don't follow
16:43:17 jgriffith: hmm..
16:43:26 mtanino: Makes sense.
16:43:32 eharney: you see why I disagreed?
16:43:40 then only one (compute) node accesses a thin pool at a time
16:43:46 eharney: I don't understand the benefit as it doesn't change the datapath
16:43:54 jgriffith: yes, i missed a step in the doc i was looking at
16:44:26 The way I interpreted this was just that instead of attaching a volume to the compute node
16:44:32 you're attaching the entire VG
16:44:44 doesn't change how data is transferred for the most part
16:44:52 except for caching/buffering
16:45:16 just breaks the abstraction and creates yet another layer
16:45:17 this feels like a research project rather than something customers will want to use. on the one hand i'd want to see real customer demand for this, but on the other hand we don't require that for other drivers... don't know
16:45:36 looks like it removes a layer to me...
16:45:51 My only argument at this point is it's a LOT of code and work and I don't know what the benefit is
16:46:00 eharney: it adds LVM to the existing stack
16:46:02 i'll have to think on this some more
16:46:03 morning
16:46:05 jgriffith: +8
16:46:06 i'm clearly missing something
16:46:11 Other than you can use any SAN device and don't need a driver in OpenStack for it
16:46:16 which is kind of a win :)
16:46:25 jgriffith: Thank you for your comment.
16:46:29 mtanino: do you have any data to back up the performance you're claiming?
16:46:48 jgriffith: but you're not really using the SAN, you're turning it into a JBOD
16:46:59 avishay: yeah... that's the beauty of it
16:47:08 avishay: you use any SAN device you want
16:47:13 avishay: treat it like a JBOD
16:47:13 jgriffith: not taking advantage of the storage's QoS, snapshots, etc. might as well just buy servers with disks.
16:47:19 thingee: I have measured performance in P13 and P14
16:47:24 there is nothing preventing you today from doing this
16:47:25 avishay: even better, treat a volume on it like a JBOD
16:47:37 hemna: there's a TON of things preventing it
16:47:43 * thingee checks p14
16:47:46 with your backend. create a massive volume and attach it to the cinder node and create a VG for it. done.
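
To keep the argument above straight, here is a rough sketch of the two data paths being compared; the hop lists are paraphrased from the discussion (and from avishay's and flip214's before/after diagrams later in the log), not measured or taken from the proposal itself:

    # Existing LVM (iSCSI) driver: the SAN LU is attached to the cinder-volume
    # node, carved up with LVM there, and each LV is re-exported to the compute
    # node over a second iSCSI hop via tgtd.
    EXISTING_LVM_ISCSI = [
        "VM", "compute node iSCSI initiator", "tgtd on cinder-volume node",
        "LVM on cinder-volume node", "FC/iSCSI", "SAN LU",
    ]

    # Proposed shared-VG driver: the same SAN LU is mapped to every compute
    # node and the VM attaches the LV as a local block device, so the tgtd
    # re-export hop disappears (the SAN itself stays in the path).
    PROPOSED_SHARED_VG = [
        "VM", "LVM on compute node", "FC/iSCSI", "SAN LU",
    ]

    if __name__ == "__main__":
        print(" -> ".join(EXISTING_LVM_ISCSI))
        print(" -> ".join(PROPOSED_SHARED_VG))

This is the crux of the disagreement above: compared with a vendor driver it adds an LVM layer, but compared with the existing LVM iSCSI driver it removes the re-export hop.
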
16:47:46 thingee: https://wiki.openstack.org/w/images/0/08/Cinder-Support_LVM_on_a_sharedLU.pdf
16:47:52 hemna: but I suspect you're thinking of doing it the existing LVM way
16:48:06 hemna: yeah, that's what I thought you might be getting at
16:48:11 I'm not sure I see a reason for a driver to do this
16:48:32 hemna: he wants to take it one level deeper and put the entire VG on every compute node
16:48:40 access LVM directly on the compute node
16:48:56 sounds like something I discussed with DuncanT
16:48:56 ugh
16:49:06 mtanino: can I ask two questions:
16:49:12 jgriffith: Yes.
16:49:14 mtanino: one of them is actually thingee's question
16:49:14 please
16:49:27 1. Performance testing/data results
16:49:35 including details of the comparison
16:49:42 2. What's the real motivation here?
16:50:00 Is this really a performance thing... or is it a way to not have to have specific drivers for SAN devices?
16:50:24 jgriffith: I measured performance between LVMiSCSI, SharedLVM, and a raw FC volume at P13, P14
16:50:36 I can't imagine the performance of this would be better than a direct iSCSI/FC attached block device to the compute node.
16:50:53 mtanino: well... you need more details (or I do)
16:50:59 mtanino: like how were these things configured
16:51:05 did you use OpenStack
16:51:10 single VM, single volume
16:51:12 did you use the same backing device
16:51:13 etc etc
16:51:27 hemna: It isn't better than that, it *is* better than a fat LUN attached to the head node and then re-exported
16:51:37 jgriffith: yes. I will try to put together what you're asking for and will post to openstack-dev.
16:51:41 mtanino: the results are a bit hard to believe... adding an extra layer has no effect on latency? is there extra caching that may affect correctness? what happens to the performance of a cloned volume (i.e., test LVM snaps vs your controller's snaps)
16:51:51 how do things look with 8 VMs, each with their own volume, doing reads/writes
16:52:03 avishay, +1
16:52:06 mtanino: so I'm not necessarily opposed to the idea
16:52:09 hemna: And I can point you to somebody running 3par like that now, because they want many tiny volumes and 3par runs out too fast
16:52:23 mtanino: but I think there needs to be some clarity in the motivation and benefits
16:52:26 as well as costs
16:52:34 There are drawbacks to this
16:52:34 mtanino: specifics may give your critics a bit more insight :)
16:52:44 jgriffith: I understand
16:52:49 DuncanT: +1
16:52:54 DuncanT: same with equalogic
16:53:08 DuncanT: and a bunch of people that have backend devices that have no cinder drivers
16:53:25 DuncanT, yah I wouldn't deploy in that configuration because it's obviously going to be slow
16:53:32 avishay: It's actually removing a layer: the software iSCSI daemon (tgtd) running on the cinder-volume node.
16:53:32 DuncanT: but there are some risks/problems with the double iSCSI hop as I call it
16:53:36 but everyone has their reasons I suppose
16:53:51 hemna: Better than 'can't use 80% of my capacity' for this customer at least
16:53:57 7 MIN WARNING
16:54:00 hemna: DuncanT avishay keep in mind their focus here is FC
16:54:02 not iSCSI
16:54:13 thingee: thanks!
16:54:16 i have been summoned?
16:54:18 tsekiyama: instead of VM-FC-Storage you have VM-LVM-FC-Storage, no?
16:54:24 Thank you for so many comments.
16:54:26 harlowja: update your bps
16:54:27 oh.. yeah, this is the last topic anyway :)
16:54:33 avishay, yup
16:54:36 harlowja: what thingee said
16:54:48 harlowja: and... implement them :)
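
For the performance questions raised above (same backing device, known configuration, more than one concurrent worker), a minimal measurement harness could look roughly like the sketch below; the device paths and fio job parameters are placeholders, not the configuration mtanino actually tested:

    import subprocess

    DEVICES = {
        # Placeholder paths -- substitute the real device for each setup.
        # WARNING: fio writes to the device, so use throwaway test volumes.
        "shared-vg-lv": "/dev/shared_vg/vol-test",  # proposed shared-VG attach
        "lvm-iscsi-attach": "/dev/sdX",             # existing LVMiSCSI attach
        "raw-fc-attach": "/dev/sdY",                # raw FC volume attach
    }

    def run_fio(device, jobs=8, runtime=60):
        # Direct I/O, fixed queue depth, N concurrent jobs to roughly mimic
        # "8 VMs, each with their own volume" from the host side.
        return subprocess.check_output([
            "fio", "--name=randrw", "--filename=" + device,
            "--rw=randrw", "--bs=4k", "--direct=1",
            "--ioengine=libaio", "--iodepth=32",
            "--numjobs=%d" % jobs, "--runtime=%d" % runtime,
            "--time_based", "--group_reporting",
        ]).decode()

    if __name__ == "__main__":
        for label, device in sorted(DEVICES.items()):
            print("=== %s ===" % label)
            print(run_fio(device))

Publishing the exact job file and topology alongside the numbers would address most of the "hard to believe" reactions above.
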
16:54:54 done
16:54:57 wish granted
16:54:57 harlowja: blueprints, not beats-per-second
16:54:59 that was easy
16:55:08 avishay: LOL
16:55:35 avishay: Crank up the BPMs
16:55:39 jgriffith: alright, so what are we leaving mtanino with?
16:55:42 my modem not fast enough for u thingee ?
16:55:43 I dunno, I think if this is simply to overcome a missing cinder volume driver for an FC backend, then spend the effort writing that instead.
16:55:57 14.4kbps ftw
16:56:11 btw about the bps, should I refer my bp somewhere to improve visibility, or should I leave it to you guys to review it whenever you have time ?
16:56:12 hemna: perhaps
16:56:15 thingee: please move to the next item
16:56:17 the part that this proposal is addressing is to *decrease* latency, by removing the iSCSI indirection.
16:56:23 (I came in a bit late during the first topic)
16:56:26 putting LVM between the array and the VM is not going to perform the same.
16:56:26 hemna: or attach it to the cinder node and use what we have
16:56:45 flip214: please explain
16:56:55 avishay: ah, I mean when compared to the existing iSCSI-LVM driver
16:57:02 avishay: before: compute => iscsi => cinder => FC => storage
16:57:02 mtanino: that's the last item
16:57:09 after: compute => FC => storage
16:57:15 flip214: no...
16:57:20 and I want to have an idea we can leave you with, because this driver keeps coming up
16:57:21 flip214: Yes, that's correct. Latency decreases compared to LVMiSCSI
16:57:22 and splitting up the storage into parts via LVM
16:57:33 flip214: before compute->FC->storage, after compute->LVM->FC->Storage
16:57:36 flip214: I don't think so
16:57:46 flip214: yeah... what avishay pointed out
16:57:53 avishay: no, they're comparing to exporting to the cinder volume node with cinder LVM anyway
16:57:54 avishay, yes
16:57:58 the interesting comparison is not LVM iSCSI, it's a regular FC driver
16:58:00 not directly to storage
16:58:13 avishay: that's not interesting at all, it's obviously going to be worse
16:58:16 flip214: or.... storage--->FC-->cinder-node--->iscsi--->compute
16:58:47 guitarzan: right, so what's the benefit? is anyone deploying the other way? does it even work?
16:58:59 wouldn't you need to have something cinder-like on the compute host to divvy up the LVM VG to the VMs?
16:59:04 avishay: you're just getting back to the same "write a cinder driver" answer
16:59:09 hemna: nope
16:59:10 which is a fine viewpoint I suppose
16:59:14 so it's reimplementing the scheduler/manager/LVM driver on the compute host?
16:59:25 avishay: One advantage is that it allows you to exceed SAN limits on number of volumes / snaps, and that *is* a real problem for some people
16:59:29 hemna: no?
16:59:29 hemna: LVM lets you do some pretty neat stuff that way
16:59:39 hemna: no
16:59:39 look at page 10 (11), "4. Comparison of Proposed LVM volume driver"
16:59:46 aye aye aye
16:59:48 ok, maybe I don't get that part then
16:59:54 everybody talks, but nobody listens
16:59:56 :(
17:00:04 hemna: they have one c-vol managing LVM for the entire VG
17:00:09 time's up
17:00:16 thanks everybody
17:00:18 good meeting
17:00:19 thanks
17:00:22 thank you.
17:00:22 thanks
17:00:22 #endmeeting cinder