Tuesday, 2018-11-06

00:51 *** markstur has joined #openstack-manila
00:58 <openstackgerrit> Goutham Pacha Ravi proposed openstack/manila master: [DevRef] Add code review guideline  https://review.openstack.org/609598
01:37 *** erlon has joined #openstack-manila
02:10 *** ianychoi has quit IRC
02:10 *** ianychoi has joined #openstack-manila
02:19 *** toabctl has quit IRC
02:23 *** toabctl has joined #openstack-manila
04:08 *** erlon has quit IRC
04:15 *** markstur has quit IRC
05:07 *** markstur has joined #openstack-manila
06:10 *** markstur has quit IRC
06:24 *** e0ne has joined #openstack-manila
06:44 *** e0ne has quit IRC
06:48 *** e0ne has joined #openstack-manila
07:07 *** e0ne has quit IRC
07:36 *** pcaruana has joined #openstack-manila
09:19 *** luizbag has joined #openstack-manila
09:21 <openstackgerrit> Nguyen Hai Truong proposed openstack/manila master: Add python 3.6 unit test job  https://review.openstack.org/615796
09:53 *** ganso has joined #openstack-manila
10:24 *** erlon has joined #openstack-manila
11:11 *** e0ne has joined #openstack-manila
11:41 <ganso> tbarron: good morning Tom!
11:41 <ganso> tbarron: I replied to your comment in https://review.openstack.org/#/c/391805
11:43 <ganso> tbarron: also, could you please explain in more detail your comment in https://review.openstack.org/#/c/609537/
11:43 <ganso> tbarron: I am not sure I understand it clearly
12:15 <ganso> gouthamr: Jens Harbott just sent an email to the mailing list about neutron migration to Bionic Beaver
12:19 *** erlon has quit IRC
12:29 *** e0ne has quit IRC
12:43 *** e0ne has joined #openstack-manila
13:15 <tbarron> ganso: w.r.t. 391805 I think your spec ought to explain that it's essentially an already-approved spec from Newton, authored by someone else. So far as I can see it has no mention of the original spec.
13:15 <ganso> tbarron: it wasn't approved in newton
13:15 <ganso> tbarron: it never received +2s, never merged
13:18 <tbarron> ganso: I'm looking at https://review.openstack.org/#/c/323646, isn't that the original for your spec?
13:19 <tbarron> ganso: your spec doesn't even mention it though they are the same except for e.g. the work assignee and a few small changes
13:19 <ganso> tbarron: wth I am confused now. Why did gouthamr ever propose https://review.openstack.org/#/c/391805/1 ?
13:20 <ganso> tbarron: I was pushing patchsets to a change gouthamr proposed
13:21 <tbarron> ganso: because I think we never implemented 323646?  He was trying to carry it forward?
13:21 <ganso> tbarron: this is weird, an already approved spec shouldn't need another spec to carry it forward (unless it needs updates)
13:22 <ganso> tbarron: let me compare the text
13:22 <tbarron> ganso: at least you see where I'm coming from now, right?
13:22 <ganso> tbarron: yes
13:22 <tbarron> ganso: cool
13:22 <tbarron> ganso: on the other subject for a sec
13:23 <tbarron> https://review.openstack.org/#/c/609537/
13:23 <tbarron> I think it's a well-written spec but I worry about it hard coding a particular "efficiency policy" in the scheduler.
13:24 <tbarron> It's a reasonable policy, but may not be what every cloud admin wants.
13:24 <ganso> tbarron: yes, this is why I listed an alternative. I thought about this and we could have a discussion to have the alternative become the main proposal
13:24 <tbarron> So I'm suggesting that we may want to consider making these policies configurable.
13:25 <tbarron> I don't know that my suggestion would be the best way, but one way might be to have the list of possible destination pools/backends associate each of these
13:25 <tbarron> with a weight and a max capacity
13:25 <ganso> tbarron: so the "Control the placement optimization by back end" alternative would become the main proposal, which is just adding a boolean config option to driver.py, say "prefer_same_pool"
13:25 <tbarron> (I think this is a design pattern, not my immediate invention)
13:26 <ganso> tbarron: hmmmmmm this is a bit similar to the string value of "create_share_from_snapshot_in_other_backends"
13:26 <tbarron> If all the weights and max capacities are the same then we have a "spread it out" policy.
13:26 <ganso> tbarron: which is the "replication_group" idea borrowed from the replication design. The string value would allow the scheduler to identify compatible backends
13:27 <tbarron> If local pool has high weight and no capacity limit and the others have some lower weight or lower capacity then we have policies more like what you hard-code
13:27 <ganso> tbarron: hmmm so you want to "spread it out" only if it is the same? it will hardly ever be "exactly the same"
13:27 <tbarron> ganso: yeah, I'm suggesting that the string value might be turned into a list of tuples
13:28 <tbarron> backend, pool, weight, max cap
13:28 <tbarron> a list of those
13:28 <ganso> tbarron: the list of tuples may require more maintenance from the admin: every time a new backend is added, it would need to be added to the list in all the stanzas. Whereas using the same string accomplishes the same thing without requiring maintenance
13:29 <tbarron> but anyways I tend to agree that the policy aspect of that spec may need some discussion
13:29 <tbarron> ganso: no question that hard-coding the policy would require less of the cloud admin
13:30 <ganso> tbarron: no no I meant the string value of create_share_from_snapshot_in_other_backends
13:30 <tbarron> but perhaps there could be default values that yield something like your hard-coded policy
13:30 <tbarron> ganso: I think I understand: using the simple string value for create_share_from_snapshot_in_other_backends is simpler than
13:31 <tbarron> having weights and max_cap for each item in the set that this string selects
13:31 <tbarron> I agree that it's simpler.
13:31 <tbarron> But may not yield the result that the cloud admin wants.
13:31 <ganso> tbarron: what you are suggesting is something smarter that decides, based on weight and capacity, whether to create locally or spread it out, instead of an on/off flag that determines a fixed behavior per backend
13:32 <tbarron> ganso: right
13:32 <tbarron> And I don't know what we should decide; it just seems an important thing to settle before approving and implementing this spec.
13:33 <ganso> tbarron: ok, we would to discuss how that rule looks like. As to what would be the threshold weight and capacity values for the scheduler to decide differently
13:33 <ganso> tbarron: s/would to discuss/would need to discuss
13:34 <ganso> tbarron: ok, I will think more about it today and update the spec, promoting the alternative to main proposal
13:34 <tbarron> ganso: agree.  Offhand I would think max cap could be a percentage and weight could be a small integral scale.
13:34 <ganso> tbarron: ok
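[Editor's note: the policy tbarron sketches above (a per-pool weight plus a maximum capacity cap, where equal weights and caps give the "spread it out" behavior and a heavily weighted local pool reproduces the hard-coded "prefer the same pool" behavior) can be illustrated with a small, purely hypothetical Python sketch. None of the names below (CandidatePool, pick_destination, max_cap_pct, provisioned_pct) exist in Manila; they are invented only to show the idea under discussion, not how the spec would implement it.]

```python
# Hypothetical sketch only, not Manila code. Each candidate pool carries the
# (weight, max capacity) pair from the proposed configuration, and the
# scheduler picks the best eligible pool for create-share-from-snapshot.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class CandidatePool:
    name: str                # "backend#pool"
    weight: int              # small integral scale, higher means preferred
    max_cap_pct: int         # stop placing new shares past this provisioned %
    provisioned_pct: float   # how full the pool currently is


def pick_destination(pools: List[CandidatePool]) -> Optional[CandidatePool]:
    """Pick a destination pool for a share created from a snapshot.

    Pools already over their configured max capacity are filtered out; among
    the rest the highest weight wins, and equal weights fall back to the
    least provisioned pool, which is what spreads shares out.
    """
    eligible = [p for p in pools if p.provisioned_pct < p.max_cap_pct]
    if not eligible:
        return None
    return max(eligible, key=lambda p: (p.weight, -p.provisioned_pct))


# "Spread it out": equal weights and caps, so the least provisioned pool wins.
spread = [CandidatePool("alpha#p1", 1, 80, 60.0),
          CandidatePool("beta#p1", 1, 80, 30.0)]
print(pick_destination(spread).name)   # beta#p1

# "Prefer the same pool": the pool holding the snapshot (alpha#p1) gets a high
# weight and no practical cap, so it keeps winning until it fills up.
local = [CandidatePool("alpha#p1", 10, 100, 60.0),
         CandidatePool("beta#p1", 1, 80, 30.0)]
print(pick_destination(local).name)    # alpha#p1
```

Whether such weights and caps would be hard-coded defaults or an admin-visible list of (backend, pool, weight, max cap) entries is exactly the open question in the spec review.]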
13:35 <ganso> tbarron: back to the replication spec, since the change gouthamr proposed was actually renaming the old spec, there would not be an old spec to reference
13:36 <ganso> tbarron: in that case, what should we do? drop this change and consider it approved? or merge the updated one? can we consider it already approved then?
13:47 *** zul has joined #openstack-manila
13:48 *** eharney has joined #openstack-manila
13:49 <ganso> tbarron: just saw your comment in the patch
13:50 <tbarron> ganso: I think you should follow the pattern in https://review.openstack.org/#/c/593391/
13:50 <tbarron> That doesn't require referring to the "other" patch since they are the same :)
13:51 <tbarron> You can then propose a small change to it putting you as the implementer, etc.
13:51 <ganso> tbarron: ok, so I just move them and keep everything the same as the original
13:51 <ganso> tbarron: ok
13:52 <tbarron> ganso: that way it will be clear that this is essentially already approved and people won't have to compare (as I did, and you just did) with the original off in some other place
13:52 <ganso> tbarron: agree, will make the changes. Thanks Tom!
13:53 <tbarron> thank you!
14:04 *** kaisers has quit IRC
14:10 *** kaisers has joined #openstack-manila
14:12 *** dustins has joined #openstack-manila
14:55 *** erlon has joined #openstack-manila
15:06 *** e0ne has quit IRC
15:51 *** markstur has joined #openstack-manila
16:21 <arne_wiebalck> Hi Manila, could someone update me on what the status/plan is for adding manila to the common openstack client?
16:22 <gouthamr> arne_wiebalck: slated for Stein, i'm coordinating the work
16:22 <arne_wiebalck> gouthamr: uh, nice!
16:23 <gouthamr> arne_wiebalck: we should have the code upstream by M2
16:25 *** erlon has quit IRC
16:28 <arne_wiebalck> gouthamr: great, thx!
16:33 <openstackgerrit> Quique Llorente proposed openstack/puppet-manila master: Install python3-manila in Fedora or RedHat > 7  https://review.openstack.org/615932
16:40 <openstackgerrit> Rodrigo Barbieri proposed openstack/manila-specs master: Add spec for Manage-Unmanage of Share Servers  https://review.openstack.org/607342
16:46 <ganso> tbarron: addressed your concerns on 607342, will work on the others now
16:52 *** erlon has joined #openstack-manila
16:56 <openstackgerrit> Rodrigo Barbieri proposed openstack/manila-specs master: Move approved spec to Stein  https://review.openstack.org/391805
16:57 <ganso> tbarron: ^ part 1
17:03 *** erlon has quit IRC
17:09 <openstackgerrit> Rodrigo Barbieri proposed openstack/manila-specs master: Improvements to spec "Extend the design of share networks to span subnets"  https://review.openstack.org/615947
17:09 <ganso> tbarron: ^ part 2
17:32 *** e0ne has joined #openstack-manila
18:03 *** e0ne has quit IRC
18:06 *** e0ne has joined #openstack-manila
18:09 <bswartz> tbarron gouthamr ganso: can we get together this afternoon for a bit to discuss https://review.openstack.org/#/c/391805/
18:10 <ganso> bswartz: yep, I am free, waiting for tbarron and gouthamr
18:24 *** e0ne has quit IRC
18:42 *** ociuhandu has joined #openstack-manila
18:45 *** ociuhandu has quit IRC
18:59 *** dustins has quit IRC
18:59 *** dustins has joined #openstack-manila
19:08 <gouthamr> ganso bswartz: late ack, i can get on a call with you folks, tbarron is OOO today
19:09 <ganso> bswartz, gouthamr: I only have a few minutes left
19:10 <ganso> bswartz, gouthamr: we can start the meeting but then I'll have to leave
19:10 <gouthamr> oh, tomorrow looks good too - tbarron'll be on and off tomorrow and day after, but is really on PTO for the next three weeks, barring his participation at the Berlin Summit
19:11 <ganso> :O
19:15 *** luizbag has quit IRC
19:17 *** markstur has quit IRC
19:39 *** markstur has joined #openstack-manila
19:40 *** markstur has quit IRC
19:40 *** markstur has joined #openstack-manila
19:44 <bswartz> So should the three of us just meet and bring tbarron up to speed later?
19:44 <bswartz> Should we use the Thursday meeting to get into detail on this topic?
19:51 <bswartz> Other options include trying to get a time extension for this spec if we really want to see it land in Stein, or just giving up on meeting the Stein deadline, but continuing to flesh out the proposal for the "T" release
19:54 <gouthamr> bswartz: either works for me.. i haven't looked at ganso's changes deeply, but if there are major concerns, it'd be nice to flush them out right away
19:54 <ganso> bswartz: based on today's discussion with tbarron where we acknowledged that the Replication in DHSS=True spec was approved in Newton, I suppose we can consider it already approved for Stein as is. So the multi-address family improvement would need another approval I guess
20:30 <bswartz> Oh that distinction
20:30 <bswartz> Yes we can separately consider the multi-AZ changes from the multiple-subnets in one AZ changes, but I don't advise it
20:31 <bswartz> There's a reason we kept punting on this work and trying to converge on a perfect design
20:31 <bswartz> If you don't consider both cases you might design yourself into a corner
20:38 *** mmethot_ is now known as mmethot
20:38 <bswartz> ganso gouthamr: ^
21:31 *** pcaruana has quit IRC
22:11 *** dustins has quit IRC
22:51 *** ganso has quit IRC
23:48 *** erlon has joined #openstack-manila
