Thursday, 2011-11-10

00:04 *** wwkeyboard has left #openstack-meeting
00:14 *** nati2 has quit IRC
00:15 *** nati2 has joined #openstack-meeting
00:18 *** Guest79151 is now known as med_out
00:18 *** med_out has joined #openstack-meeting
00:25 *** novas0x2a|laptop has quit IRC
00:35 *** zns1 has quit IRC
00:36 *** novas0x2a|laptop has joined #openstack-meeting
00:38 *** dragondm has quit IRC
00:38 *** novas0x2a|laptop has quit IRC
00:43 *** adjohn has joined #openstack-meeting
00:47 *** jakedahn has quit IRC
01:21 *** novas0x2a|laptop has joined #openstack-meeting
01:29 *** sleepsontheflo-1 has joined #openstack-meeting
01:40 *** reed has quit IRC
01:42 *** dwalleck has joined #openstack-meeting
01:55 *** jakedahn has joined #openstack-meeting
02:04 *** bhall has quit IRC
02:17 *** jdurgin has quit IRC
02:21 *** novas0x2a|laptop has quit IRC
02:27 *** vladimir3p has quit IRC
02:29 *** novas0x2a|laptop has joined #openstack-meeting
02:32 *** gyee has quit IRC
02:32 *** novas0x2a|laptop has quit IRC
03:29 *** nati2_ has joined #openstack-meeting
03:32 *** nati2 has quit IRC
04:08 *** mmetheny has quit IRC
04:09 *** mmetheny has joined #openstack-meeting
04:24 *** nati2 has joined #openstack-meeting
04:24 *** nati2_ has quit IRC
05:07 *** bhall has joined #openstack-meeting
05:07 *** bhall has quit IRC
05:07 *** bhall has joined #openstack-meeting
05:11 *** blamar has quit IRC
05:16 *** AlanClark has quit IRC
05:54 *** dwalleck has quit IRC
05:55 *** dwalleck has joined #openstack-meeting
06:01 *** oubiwann has quit IRC
06:02 *** chmouel_ has joined #openstack-meeting
06:02 *** chmouel has quit IRC
06:02 *** oubiwann1 has joined #openstack-meeting
06:02 *** pvo has quit IRC
06:03 *** pvo has joined #openstack-meeting
06:09 *** dwalleck has quit IRC
07:14 *** bhall has quit IRC
07:27 *** nati2_ has joined #openstack-meeting
07:30 *** nati2 has quit IRC
07:38 *** adjohn has quit IRC
07:46 *** adjohn has joined #openstack-meeting
08:06 *** adjohn has quit IRC
09:06 *** jakedahn has quit IRC
09:19 *** sleepsontheflo-1 has quit IRC
09:23 *** jakedahn has joined #openstack-meeting
09:43 *** darraghb has joined #openstack-meeting
09:56 *** dendrobates has quit IRC
10:24 *** dendrobates has joined #openstack-meeting
10:43 *** nati2_ has quit IRC
11:03 *** jakedahn_ has joined #openstack-meeting
11:05 *** jakedahn has quit IRC
11:05 *** jakedahn_ is now known as jakedahn
11:33 *** cmagina has quit IRC
11:34 *** cmagina has joined #openstack-meeting
13:12 *** edconzel has joined #openstack-meeting
13:18 *** shang has joined #openstack-meeting
13:35 *** AlanClark has joined #openstack-meeting
13:41 *** sandywalsh has quit IRC
13:50 *** zul has quit IRC
13:53 *** zul has joined #openstack-meeting
14:13 *** mdomsch has joined #openstack-meeting
14:13 *** sandywalsh_ has joined #openstack-meeting
14:16 *** AlexPro has joined #openstack-meeting
14:17 *** dprince has joined #openstack-meeting
14:17 *** zul has quit IRC
14:19 *** zul has joined #openstack-meeting
14:19 *** zul has quit IRC
14:20 *** chuck__ has joined #openstack-meeting
14:20 *** chuck__ is now known as zul
14:36 *** AlexPro has quit IRC
14:46 *** zns has joined #openstack-meeting
14:52 *** joesavak has joined #openstack-meeting
14:53 *** jsavak has joined #openstack-meeting
14:57 *** joesavak has quit IRC
14:59 *** deshantm_laptop has joined #openstack-meeting
15:39 *** dwalleck has joined #openstack-meeting
15:41 *** joesavak has joined #openstack-meeting
15:42 *** blamar has joined #openstack-meeting
15:42 *** jsavak has quit IRC
15:50 *** dolphm has joined #openstack-meeting
15:53 *** rnirmal has joined #openstack-meeting
15:57 *** danwent has joined #openstack-meeting
15:59 *** adjohn has joined #openstack-meeting
16:04 *** dragondm has joined #openstack-meeting
16:09 *** mmetheny has quit IRC
16:09 *** mmetheny_ has joined #openstack-meeting
16:10 *** oubiwann1 has quit IRC
16:20 *** dwalleck_ has joined #openstack-meeting
16:22 *** dwalleck has quit IRC
16:27 *** chmouel_ is now known as chmouel
16:28 *** vladimir3p has joined #openstack-meeting
16:29 *** gyee has joined #openstack-meeting
16:30 *** dolphm has quit IRC
16:30 *** dolphm has joined #openstack-meeting
16:31 *** dwalleck_ has quit IRC
16:35 *** dolphm_ has joined #openstack-meeting
16:35 *** dolphm has quit IRC
16:39 *** sandywalsh has joined #openstack-meeting
16:42 *** dwalleck has joined #openstack-meeting
16:48 *** joesavak has quit IRC
16:56 *** jog0 has joined #openstack-meeting
16:57 *** jog0 has quit IRC
16:57 *** jog0 has joined #openstack-meeting
16:57 *** reed_ has joined #openstack-meeting
16:58 *** reed_ is now known as reed
17:04 *** dolphm_ has quit IRC
17:05 *** dolphm has joined #openstack-meeting
17:10 *** dolphm has quit IRC
17:14 *** dprince has quit IRC
17:15 *** sleepsontheflo-1 has joined #openstack-meeting
17:24 *** deshantm_laptop has quit IRC
17:31 *** jog0 has left #openstack-meeting
17:32 *** danwent has left #openstack-meeting
17:40 *** jakedahn has quit IRC
17:42 *** devcamcar has joined #openstack-meeting
17:47 *** nati2 has joined #openstack-meeting
17:49 *** jaypipes has quit IRC
17:50 *** jdg has joined #openstack-meeting
17:51 *** dprince has joined #openstack-meeting
17:59 *** jdurgin has joined #openstack-meeting
18:01 *** renuka has joined #openstack-meeting
18:01 <renuka> Hello, shall we start the volume meeting?
18:02 *** blamar has quit IRC
18:02 *** blamar has joined #openstack-meeting
18:03 *** joesavak has joined #openstack-meeting
18:03 *** dricco has joined #openstack-meeting
18:04 <DuncanT> Hi
18:04 <renuka> do we have representation from the volume group?
18:04 <renuka> DuncanT: Hi
18:05 <renuka> I did not have anything specific to bring up in today's meeting... I will start to look at boot-from-volume support in xenapi in the next few days
18:06 <renuka> DuncanT: did you have any updates from the affinity work? Anything HP would like to discuss?
18:07 <jdg> I'm finishing up my work for a SolidFire class in ISCSIDriver, may need some pointers/help from folks regarding submittal.
18:07 <renuka> jdg: would you like to talk about it here today?
18:07 <DuncanT> renuka: I've not done enough with affinity to have a useful update, busy week unfortunately
18:08 <jdg> renuka: sure, if folks have the time and we're not bumping other topics
18:08 *** jsavak has joined #openstack-meeting
18:08 <renuka> #startmeeting
18:08 <openstack> Meeting started Thu Nov 10 18:08:23 2011 UTC. The chair is renuka. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic.
18:08 <renuka> #topic SolidFire volume driver
18:08 *** openstack changes topic to "SolidFire volume driver"
18:08 <DuncanT> I was kind of hoping I could get a quick comment from somebody on https://bugs.launchpad.net/nova/+bug/888649 and whether they thought it was a real bug, if we have time?
18:08 <uvirtbot> Launchpad bug 888649 in nova "Snapshots left in undeletable state" [Undecided,New]
18:09 <jdg> I've implemented a SolidFireISCSIDriver in nova/volume/san.py and done a bit of testing here.
18:10 <jdg> Had a couple of questions regarding reviews, submittal etc
18:10 <jdg> Also wanted to make sure I was not incorrect in my assumptions.
18:11 <renuka> jdg: all ears
18:11 <jdg> Ok..
18:11 <jdg> So we behave a bit differently than others.
18:11 <jdg> In order to create a volume you need to have an established account ID
18:12 *** joesavak has quit IRC
18:12 <jdg> This account ID also includes all of the CHAP settings and information
18:12 *** dwalleck has quit IRC
18:12 <jdg> What I ended up with is that the only methods really implemented are create/delete volume.
18:12 <jdg> We don't have any concept of export, assign etc. When a volume is created it's ready for use.
18:13 *** jaypipes has joined #openstack-meeting
18:13 <jdg> So my proposal was: the OpenStack administrator would create an account on the SF appliance for each compute node
18:13 <jdg> They would also set up /etc/iscsid.conf with the appropriate CHAP settings on each compute node.
18:14 <jdg> The only other thing that would be needed is a FLAG for the account ID to use on each compute node
18:14 <jdg> I didn't want to add anything specific to the base class driver, or the db etc.
18:14 <jdg> Does this sound reasonable?
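[Editor's illustration: the per-node setup jdg proposes would amount to a stanza like the one below in /etc/iscsid.conf. The option names are the standard open-iscsi session-auth settings; the username and password values are placeholders standing in for the CHAP credentials issued with the SF account, not values from the meeting.]

```
# Placeholder CHAP credentials for this compute node's SF account
node.session.auth.authmethod = CHAP
node.session.auth.username = sf-compute-01
node.session.auth.password = example-chap-secret
```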
18:15 <renuka> Why is the account for a compute node, versus how it is normally done, on a per-user basis?
18:15 <jdg> So we have two different accounts we use:
18:15 <jdg> 1. The actual management account to send API commands
18:16 <jdg> 2. A user account associated with each volume that has the CHAP info embedded
18:16 <jdg> Perhaps I overlooked a way to do this with the existing user accounts?
18:17 <jdg> My thought was that since the compute node will actually make the iSCSI connection to the volume and pass it to the VMs via LVM, this seemed to make sense
18:17 <jdg> Did I miss something in how the iSCSI implementation works maybe?
18:17 <renuka> What is typically done is, during the attach call, we have a way of passing connection information (which the volume driver is responsible for) to the compute node that this volume will be attached to
18:18 <jdg> Right, but I have this chicken-and-egg situation. I can't create a volume without an account
18:18 *** novas0x2a|laptop has joined #openstack-meeting
18:18 <jdg> I have to have the account ID at creation time, which includes CHAP info...
18:19 <jdg> #idea What I could do is dynamically create a random account each time.
18:19 <jdg> This would then fit more into the model that you have today.
18:19 <renuka> are you using some proprietary code for auth?
18:19 <jdg> No, it's just CHAP
18:21 <renuka> we don't have a way of associating this info when we create a user today?
18:21 *** shang has quit IRC
18:21 <jdg> Not that I could find
18:21 <DuncanT> jdg: Is there a copy of the driver available at all, to see exactly what you did?
18:22 <renuka> i agree, looking at code might be useful
18:22 <jdg> DuncanT: I'm happy to post it/send it.
18:22 <jdg> There's nothing modified in the existing code, just the addition of my subclass and one added FLAG in san.py
18:23 <renuka> jdg: I am not entirely sure creating random accounts makes sense... sounds more like a hack
18:23 *** dwalleck has joined #openstack-meeting
18:23 <jdg> It's a total hack :)
18:23 <jdg> That's why I thought the account per compute node was a better approach
18:23 *** med_out has quit IRC
18:23 <DuncanT> The flag / account per node sounds reasonable to me
18:23 *** dwalleck has quit IRC
18:24 <renuka> although how would you deal with it when the user's VM moves from one compute node to another
18:24 <renuka> or if the user wants to now attach it to a VM on a different compute node
18:24 <vishy> jdg: you will probably have to maintain a mapping of tenants to backend accounts in your backend
18:24 <jdg> I came up with two ideas, one would be to do a clone of the volume (this is really just a map copy for us so not a big deal)
18:24 <renuka> then you are completely eliminating any kind of auth anyway... so might as well have a single admin account... if the only purpose is to beat some hardware limitation
18:25 <jdg> renuka: and there's the second option (single admin account)
18:25 <vishy> jdg: you could have the driver dynamically create a backend account the first time it sees a given tenant and store the info
18:26 <jdg> vishy: This would be ideal, not sure how this works though?
18:26 <vishy> every request you get contains a context object
18:26 <renuka> vishy: that assumes the user never needs their own credentials
18:26 <vishy> context.project_id
18:26 <vishy> renuka: why would they?
18:26 <jdg> Ahh... so perhaps I could create an account ID based on the project_id
18:26 <vishy> renuka: I don't think you want to give users direct access to the infrastructure that makes the cloud work
18:27 <renuka> jdg: didn't you say the users are created with CHAP info?
18:27 <vishy> jdg: exactly
18:27 <vishy> project_id is the canonical tenant_id from keystone
18:27 <jdg> renuka: yes
18:27 <vishy> so project_id is already verified as the user
18:27 <jdg> When creating the account via our API you need to include the desired CHAP settings.
18:28 <jdg> If I do this via project_id, then I can return the info via the assign/export method.
18:28 <vishy> jdg: so you look up the account based on project_id, and if it doesn't exist...
18:28 <jdg> Yep
18:28 <renuka> isn't it cleaner to just have an extension which adds CHAP info for a user
18:28 <vishy> ...create an account in the backend with a random CHAP password and store it
18:29 <renuka> at the time the user account is created
18:29 <vishy> renuka: that is in keystone
18:29 <vishy> renuka: which means we would have to make a request back to keystone to get the CHAP info
18:29 <renuka> yea, that was my next question... is it worth looking into using keystone for volume?
18:30 <vishy> renuka: long term that might be better, but I don't know if it is worth it short term.
18:30 <jdg> So short term...
18:30 <jdg> Today, who calls the export/assign methods to get the CHAP info after creation? And how is this set up in /etc/iscsid.conf on the compute node?
18:31 <vishy> keystone does support other credential sets. We use them for ec2.
18:31 <vishy> jdg: there is a call called initialize_connection
18:31 <vishy> you pass in an IP address and get back connection info
18:31 <vishy> and the setup on the compute node is different depending on the backend
18:32 <jdg> Ok, so it sounds like the cleanest initial implementation is:
18:32 <jdg> 1. call to create_volume comes in
18:32 <jdg> 2. I use the project_id to check if an account-id exists, if not I create it
18:32 <jdg> 3. create the volume
18:33 <vishy> yes, the one remaining question is, where is the CHAP info stored?
18:33 <jdg> 4. CHAP information is returned to initialize_connection the same as it is today
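[Editor's illustration: the four-step flow jdg lists above can be sketched as a small, self-contained Python stand-in. The class and method names below are illustrative only; `get_or_create_account` and `FakeSolidFireAPI` are assumptions, not the actual nova driver interface or the SolidFire API discussed in the meeting.]

```python
import secrets


class FakeSolidFireAPI:
    """Stand-in for the appliance's management API (illustrative only)."""

    def __init__(self):
        self.accounts = {}  # account name -> CHAP credentials

    def get_or_create_account(self, name):
        # First time we see this tenant, create the account with random
        # CHAP creds; later calls return the stored credentials.
        if name not in self.accounts:
            self.accounts[name] = {
                "username": name,
                "password": secrets.token_hex(8),
            }
        return self.accounts[name]


class SketchSolidFireDriver:
    """Rough sketch of the per-tenant account flow discussed above."""

    def __init__(self, api):
        self.api = api
        self.volumes = {}

    def create_volume(self, project_id, volume_name):
        # Steps 1-3: look up (or create) the backend account keyed on the
        # request context's project_id, then create the volume under it.
        account = self.api.get_or_create_account(project_id)
        self.volumes[volume_name] = {"account": account["username"]}
        return self.volumes[volume_name]

    def initialize_connection(self, volume_name):
        # Step 4: hand the CHAP info back to the compute node at attach time.
        account_name = self.volumes[volume_name]["account"]
        account = self.api.get_or_create_account(account_name)
        return {"auth_method": "CHAP",
                "auth_username": account["username"],
                "auth_password": account["password"]}
```

Creating a second volume for the same project reuses the one backend account, which is the dedup vishy suggests; only the CHAP storage question (db vs. backend lookup) is left open.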
18:33 <vishy> will you have multiple drivers connecting to the same backend?
18:33 <jdg> iSCSI only
18:33 <vishy> sorry, i mean multiple copies of the driver code
18:33 <vishy> as in multiple nova-volume hosts
18:34 <jdg> yes
18:34 <vishy> because if so, the CHAP info needs to be stored in the db
18:34 <vishy> so it can be retrieved from another host if necessary
18:34 *** jakedahn has joined #openstack-meeting
18:34 <jdg> right, but couldn't I do that through model_update?
18:34 <vishy> and there are race conditions that will be a little nasty
18:35 <vishy> CHAP info currently can be stored in a volume but not associated with a project_id
18:35 <vishy> you could do something hacky like look for any volume with the project_id and get the CHAP info from there
18:35 <vishy> (including deleted volumes), but it seems a little fragile
18:35 <renuka> jdg: so just to be clear, you are ok with throwing away the initial CHAP credentials generated when the account was created, and from the first create call onwards, you will be using fake ones?
18:36 <vishy> oh? I didn't get that
18:36 <jdg> renuka: no, unfortunately that's not the case
18:36 <vishy> jdg: so you need to be able to get that initial set of creds again
18:36 <vishy> for the second volume
18:37 <jdg> I don't even need the creds again, I just need an account-id (int)
18:37 <vishy> if there is only one copy of the volume driver, you could just store it on disk.
18:37 <jdg> and it has to exist of course
18:37 <vishy> jdg: oh?
18:37 <vishy> jdg: it will pass the creds back to you later?
18:37 <renuka> here's the thing: if this is all being done simply to beat the hardware, might as well have a single account, no?
18:37 <jdg> Yes, you can do a get_account_info call or something along those lines
18:38 <jdg> renuka: I'm leaning towards this idea, at least for the first pass
18:38 <vishy> jdg: oh, in that case I think that is all fine. You don't need to store the creds at all
18:38 <renuka> jdg: this = single account?
18:38 <jdg> renuka: correct
18:38 <vishy> renuka: I agree, it is kind of security theatre, but at least you can look in the backend and see which volumes belong to which account
18:39 <jdg> So to reiterate:
18:39 <vishy> renuka: even if it isn't inherently more secure than using an account per project
18:39 <jdg> The OpenStack admin sets up the SF appliance and creates some "global" SF volume account
18:40 <jdg> Any compute node that will attach to the SF appliance will use this account for volume creation
18:41 <jdg> I have all kinds of capabilities to create/return account info; the problem is it's all custom, and I don't know how to build an extension that plays nicely with all the other components
18:42 <jdg> Does this still make sense or am I missing something obvious?
18:42 <renuka> jdg: I think the first pass with a single account makes sense at this point
18:43 <DuncanT> Certainly seems to make sense
18:43 <jdg> Ok, thanks. I'll submit what I've done along with a design doc later today
18:44 <renuka> as long as the division between create/delete and attach/detach is clean, I think it can be extended to use account info
18:44 <jdg> Not sure of the process, but perhaps someone can help me with that outside of the meeting later today
18:44 <renuka> once we become more clear on what the driver does
18:44 <jdg> Remember, though, that's the problem: we don't have an attach/detach phase.
18:44 <jdg> We are "ready for use" on creation
18:45 <DuncanT> So your iscsi volumes are all always mounted on every compute host?
18:45 <renuka> jdg: how does a VM on a random compute node connect to the storage?
18:46 <jdg> Sorry, may have started a rat hole. I mean from the SF appliance. There is no separate attach command.
18:46 <jdg> This is where the requirement for the account-id comes into play, because it contains the CHAP info
18:46 <renuka> oh, the attach command we are talking of is a nova thing
18:46 <jdg> Right... figured that out :)
18:47 <jdg> Ok, I'll send my docs and code and hopefully it will clarify
18:47 <jdg> Thanks for walking through it with me!!
18:47 <renuka> yea, what i meant was, as long as creating the volume, and attaching it to the VM on the compute node, have been clearly separated, ...etc
18:47 *** dricco has quit IRC
18:47 <renuka> sure
18:47 *** oubiwann has joined #openstack-meeting
18:47 <jdg> Yes, that part should be very cleanly separated
18:47 <renuka> #action jdg to send out docs for SolidFire driver
18:48 <renuka> #action openstack-volume to review SolidFire design
18:51 <renuka> DuncanT: I haven't tried to repro the snapshot bug
18:51 <renuka> is that affecting you?
18:51 <DuncanT> It is, yes
18:51 <DuncanT> What I'm trying to get input on is what the correct behaviour should be
18:52 <DuncanT> We can have snapshots in existence after the volumes they came from have been deleted just fine
18:52 <DuncanT> LVM for example can't
18:53 *** jakedahn has quit IRC
18:53 *** nati2 has quit IRC
18:53 <DuncanT> Hence I /think/ that the driver for LVM needs to either block the volume delete if there are snapshots, or delete the snapshots.
18:53 <renuka> makes sense
18:53 <DuncanT> I'm happy to provide patches for one behaviour or the other, just wanted some input on which to pick
18:54 <renuka> sounds like a question for the mailing list.
18:54 <DuncanT> Fair enough, I'll post it up
18:54 <vishy> DuncanT: I would think delete, but yes, ask for input from the people using it
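[Editor's illustration: the two behaviours DuncanT proposes for bug 888649, block the delete or cascade it, can be sketched in a toy Python model. This is not the real nova LVM driver; the class, the in-memory state, and the `cascade` flag are all assumptions made for the sketch.]

```python
class SketchLVMDriver:
    """Toy model of an LVM-style backend where snapshots depend on volumes."""

    def __init__(self):
        self.volumes = set()
        self.snapshots = {}  # snapshot name -> parent volume name

    def create_volume(self, name):
        self.volumes.add(name)

    def create_snapshot(self, snap, volume):
        self.snapshots[snap] = volume

    def delete_volume(self, name, cascade=False):
        children = [s for s, v in self.snapshots.items() if v == name]
        if children and not cascade:
            # Option 1: refuse the delete while snapshots still depend on it.
            raise RuntimeError(
                "volume %s still has snapshots: %s" % (name, children))
        # Option 2: delete the dependent snapshots first, then the volume.
        for snap in children:
            del self.snapshots[snap]
        self.volumes.discard(name)
```

Either branch keeps the backend out of the "snapshots left in undeletable state" situation; vishy's preference above corresponds to always passing the cascade behaviour.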
18:55 *** nati2 has joined #openstack-meeting
18:55 <DuncanT> vishy: Cheers
18:55 *** jsavak has quit IRC
18:56 <renuka> anything else before we wrap up?
18:56 <DuncanT> I'm done for now... will post something to the list about snapshot/backup soon, almost got a sane first pass at a design and example code
18:58 <renuka> right, thanks all
18:58 *** reed has quit IRC
18:58 <renuka> #endmeeting
18:58 *** openstack changes topic to "Openstack Meetings: http://wiki.openstack.org/Meetings | Minutes: http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/"
18:58 <openstack> Meeting ended Thu Nov 10 18:58:22 2011 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
18:58 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-11-10-18.08.html
18:58 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-11-10-18.08.txt
18:58 <openstack> Log:            http://eavesdrop.openstack.org/meetings/openstack-meeting/2011/openstack-meeting.2011-11-10-18.08.log.html
18:58 <DuncanT> Thanks renuka. Keeping to time again. If only all meetings managed that :-)
18:59 <jdg> Thanks a bunch, Renuka
18:59 <jdg> And Vishy, all, for helping me out
19:00 *** jdg has quit IRC
19:00 <vishy> yw
19:05 *** jsavak has joined #openstack-meeting
19:10 *** reed has joined #openstack-meeting
19:14 *** jakedahn has joined #openstack-meeting
19:29 *** dwalleck has joined #openstack-meeting
19:31 *** gyee has quit IRC
19:33 *** shang has joined #openstack-meeting
19:34 *** gyee has joined #openstack-meeting
19:34 *** jakedahn has quit IRC
19:35 *** dwalleck has quit IRC
19:44 *** novas0x2a|laptop has quit IRC
19:44 *** novas0x2a|laptop has joined #openstack-meeting
19:45 *** adjohn has quit IRC
19:50 *** dwalleck has joined #openstack-meeting
19:58 *** dprince has quit IRC
20:02 *** n0ano has joined #openstack-meeting
20:05 *** n0ano has left #openstack-meeting
20:09 *** dolphm has joined #openstack-meeting
20:17 *** dolphm has quit IRC
20:18 *** dolphm has joined #openstack-meeting
20:18 *** darraghb has quit IRC
20:22 *** dwalleck has quit IRC
20:23 *** dolphm_ has joined #openstack-meeting
20:23 *** dolphm has quit IRC
20:26 *** dwalleck has joined #openstack-meeting
20:27 *** renuka has quit IRC
20:28 *** jsavak has quit IRC
20:28 *** nati2 has quit IRC
20:29 *** joesavak has joined #openstack-meeting
20:37 *** jeblair has quit IRC
20:42 *** jeblair has joined #openstack-meeting
21:09 *** dwalleck has quit IRC
21:18 *** dwalleck has joined #openstack-meeting
21:23 *** shang has quit IRC
21:26 *** joesavak has quit IRC
21:32 *** joesavak has joined #openstack-meeting
21:33 *** jakedahn has joined #openstack-meeting
21:36 *** shang has joined #openstack-meeting
21:39 *** dolphm_ has quit IRC
21:47 *** dwalleck_ has joined #openstack-meeting
21:47 *** dwalleck has quit IRC
21:47 *** dolphm has joined #openstack-meeting
21:49 *** dwalleck_ has quit IRC
21:52 *** joesavak has quit IRC
21:54 *** nati2 has joined #openstack-meeting
21:59 *** AlanClark has quit IRC
22:02 *** dolphm has quit IRC
22:03 *** dolphm has joined #openstack-meeting
22:07 *** dolphm has quit IRC
22:26 *** df1 has joined #openstack-meeting
22:26 *** dwalleck has joined #openstack-meeting
22:28 *** sandywalsh_ has quit IRC
22:43 *** jog0 has joined #openstack-meeting
22:48 *** edconzel has quit IRC
22:55 *** jakedahn has quit IRC
22:56 *** jakedahn has joined #openstack-meeting
23:01 *** mdomsch has quit IRC
23:01 *** rnirmal has quit IRC
23:07 *** mdomsch has joined #openstack-meeting
23:08 *** sleepsontheflo-1 has quit IRC
23:10 *** jakedahn has quit IRC
23:19 *** dwalleck has quit IRC
23:20 *** dwalleck has joined #openstack-meeting
23:25 *** dwalleck has quit IRC

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!