Thursday, 2020-02-27

*** yamamoto has joined #openstack-meeting-alt00:05
*** yamamoto has quit IRC00:10
*** admcleod has joined #openstack-meeting-alt00:13
*** jamesmcarthur has joined #openstack-meeting-alt00:35
*** gyee has quit IRC00:44
*** jamesmcarthur has quit IRC00:51
*** jamesmcarthur has joined #openstack-meeting-alt00:52
*** jamesmcarthur has quit IRC00:57
*** jamesmcarthur has joined #openstack-meeting-alt01:22
*** jamesmcarthur has quit IRC01:23
*** jamesmcarthur has joined #openstack-meeting-alt01:23
*** masahito has joined #openstack-meeting-alt01:54
*** masahito has quit IRC01:55
*** masahito has joined #openstack-meeting-alt01:55
*** jamesmcarthur has quit IRC01:55
*** masahito has quit IRC01:56
*** yamamoto has joined #openstack-meeting-alt01:56
*** jamesmcarthur has joined #openstack-meeting-alt02:00
*** masahito has joined #openstack-meeting-alt02:02
*** jamesmcarthur has quit IRC02:31
*** ijw has quit IRC02:37
*** jamesmcarthur has joined #openstack-meeting-alt02:43
*** jamesmcarthur has quit IRC02:49
*** yamamoto has quit IRC02:49
*** jamesmcarthur has joined #openstack-meeting-alt02:53
*** diablo_rojo has quit IRC02:57
*** jamesmcarthur has quit IRC03:07
*** jamesmcarthur has joined #openstack-meeting-alt03:31
*** yamamoto has joined #openstack-meeting-alt03:55
*** jamesmcarthur has quit IRC03:58
*** yamamoto has quit IRC04:00
*** yamamoto has joined #openstack-meeting-alt04:13
*** masahito has quit IRC04:59
*** vishakha has joined #openstack-meeting-alt05:21
*** masahito has joined #openstack-meeting-alt05:22
*** rcernin has quit IRC05:33
*** rcernin has joined #openstack-meeting-alt05:33
*** links has joined #openstack-meeting-alt05:53
*** kozhukalov has joined #openstack-meeting-alt06:07
*** rcernin has quit IRC06:24
*** lbragstad has quit IRC06:26
*** haixin has joined #openstack-meeting-alt06:27
*** ccamacho has quit IRC06:49
*** e0ne has joined #openstack-meeting-alt07:16
*** e0ne has quit IRC07:23
*** e0ne has joined #openstack-meeting-alt07:24
*** e0ne has quit IRC07:29
*** e0ne has joined #openstack-meeting-alt07:48
*** e0ne has quit IRC07:52
*** tesseract has joined #openstack-meeting-alt07:53
*** slaweq has joined #openstack-meeting-alt07:53
*** e0ne has joined #openstack-meeting-alt08:06
*** e0ne has quit IRC08:08
*** ccamacho has joined #openstack-meeting-alt08:18
*** masahito has quit IRC08:28
*** apetrich has joined #openstack-meeting-alt09:21
*** yamamoto has quit IRC09:35
*** masahito has joined #openstack-meeting-alt09:41
*** masahito has quit IRC09:56
*** haixin has quit IRC09:56
*** kozhukalov has quit IRC11:17
*** kozhukalov has joined #openstack-meeting-alt11:21
*** kozhukalov has quit IRC11:35
*** kozhukalov has joined #openstack-meeting-alt11:35
*** yamamoto has joined #openstack-meeting-alt11:42
*** kozhukalov has quit IRC11:43
*** yamamoto has quit IRC11:46
*** yamamoto has joined #openstack-meeting-alt12:25
*** jamesmcarthur has joined #openstack-meeting-alt12:36
*** kozhukalov has joined #openstack-meeting-alt12:39
*** jamesmcarthur has quit IRC13:00
*** jamesmcarthur has joined #openstack-meeting-alt13:00
*** jamesmcarthur has quit IRC13:06
*** jamesmcarthur has joined #openstack-meeting-alt13:10
*** rfolco has joined #openstack-meeting-alt13:28
*** rfolco has quit IRC13:29
*** jamesmcarthur has quit IRC13:32
*** jamesmcarthur has joined #openstack-meeting-alt13:32
*** yamamoto has quit IRC13:35
*** e0ne has joined #openstack-meeting-alt13:35
*** lpetrut has joined #openstack-meeting-alt13:36
*** e0ne has quit IRC13:46
*** jamesmcarthur has quit IRC13:47
*** jamesmcarthur_ has joined #openstack-meeting-alt13:47
*** yamamoto has joined #openstack-meeting-alt13:54
*** e0ne has joined #openstack-meeting-alt13:57
*** maaritamm has joined #openstack-meeting-alt13:57
*** lbragstad has joined #openstack-meeting-alt13:59
*** vhari has joined #openstack-meeting-alt14:00
*** e0ne_ has joined #openstack-meeting-alt14:08
*** e0ne has quit IRC14:08
*** yamamoto has quit IRC14:18
*** jamesmcarthur_ has quit IRC14:35
*** jamesmcarthur has joined #openstack-meeting-alt14:40
*** jamesmcarthur has quit IRC14:48
*** danielarthurt has joined #openstack-meeting-alt14:58
*** andrebeltrami has joined #openstack-meeting-alt14:58
gouthamr#startmeeting manila15:01
openstackMeeting started Thu Feb 27 15:01:44 2020 UTC and is due to finish in 60 minutes.  The chair is gouthamr. Information about MeetBot at http://wiki.debian.org/MeetBot.15:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.15:01
*** openstack changes topic to " (Meeting topic: manila)"15:01
openstackThe meeting name has been set to 'manila'15:01
carlosso/15:01
dviroelhi15:01
lsekio/15:01
andrebeltramihi15:01
tbarronHello15:01
vharihey15:01
danielarthurtHi15:02
gouthamrhello o/15:02
maaritammo/15:02
gouthamrAgenda: https://wiki.openstack.org/wiki/Manila/Meetings15:02
gouthamrcourtesy ping: xyang toabctl ganso vkmc amito15:03
gouthamrlet's begin with15:03
gouthamr#topic Announcements15:03
*** openstack changes topic to "Announcements (Meeting topic: manila)"15:03
gouthamr#link https://releases.openstack.org/ussuri/schedule.html15:03
gouthamrWe're two weeks away from manila's Feature Proposal Freeze15:04
*** e0ne_ has quit IRC15:04
amitohey15:04
gouthamrwe're expecting new features to be substantially complete: i.e, unit, functional and integration tests passing by this deadline15:04
gouthamrFeature freeze itself isn't for a month after that - but it gives us enough time for review, rebases and other code churn15:05
gouthamrplease let me know if you anticipate any problems with respect to that..15:06
gouthamrno other announcements for the week15:06
gouthamrdoes anyone else have any?15:06
gouthamr#topic Goals for Victoria15:07
*** openstack changes topic to "Goals for Victoria (Meeting topic: manila)"15:07
gouthamrthe goal search email went out to the ML15:07
gouthamrif you're interested in taking a look:15:08
gouthamr#link http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012396.html15:08
gouthamr#link https://etherpad.openstack.org/p/YVR-v-series-goals15:08
gouthamrwe knew of the zuulv3 goal for a while now, but, we should anticipate another to be picked up by the community15:09
gouthamrif you're interested in proposing one, please do15:09
gouthamrWe're looking ahead to Victoria, and a gentle reminder that we published our planning etherpad for the PTG here:15:10
gouthamr#link https://etherpad.openstack.org/p/vancouver-ptg-manila-planning (Victoria PTG Planning Etherpad)15:11
gouthamrso we'll dive into whatever goals evolve in more detail during the PTG15:11
gouthamrnext up15:12
gouthamr#topic Tracking our work15:12
*** openstack changes topic to "Tracking our work (Meeting topic: manila)"15:12
gouthamrin terms of reviews needing attention15:12
gouthamr#link https://review.opendev.org/#/q/owner:tamm.maari%2540gmail.com+status:open15:13
gouthamrthis is maaritamm's work with manilaclient/OSC ^15:13
gouthamrif we don't find substantial issues, we should aim to get these patches merged by feature proposal freeze15:14
gouthamrmaaritamm's internship is almost coming to an end :(15:14
carloss:/15:14
maaritamm:(15:14
lseki:(15:14
gouthamrfeels like yesterday that she started - i think she's done an incredible job ramping up and learning all the nuances of manilaclient15:15
gouthamrand of OSC15:15
carlossgouthamr ++15:15
dviroelfor sure15:15
maaritammI will still stick around as much as I can though  :)15:15
gouthamrthat's commendable, thank you maaritamm :)15:17
gouthamrplease give these patches all the review attention you can..15:17
gouthamrany other reviews that need our attention?15:18
dviroelwhat about the last rocky backport?15:18
gouthamrdviroel: good point, i'm a little bothered by one issue wrt rocky15:19
gouthamri'd like some closure for it this week - if any patches need to land there, please alert me15:19
gouthamri think i didn't find any that needed to be in the release15:20
*** gyee has joined #openstack-meeting-alt15:21
gouthamrbut, beyond sounding cryptic - we may have a significant bugfix coming that might be appropriate before we make that final release15:21
gouthamrif we can't have that bugfix by the end of the week, i'll +1 this:15:21
gouthamr#link https://review.opendev.org/#/c/709896/ (Rocky final release proposal)15:21
gouthamrdoes that sound sane?15:22
dviroelyes15:22
gouthamron the review, i made it sound like we're going to discuss the issue in this meeting ^15:23
gouthamri'll keep you informed via #openstack-manila15:24
dviroelgouthamr: let us know if you need anything15:24
*** e0ne has joined #openstack-meeting-alt15:24
gouthamrthanks dviroel15:25
gouthamrcool, anything else?15:25
gouthamr#topic Bugs (vhari)15:26
*** openstack changes topic to "Bugs (vhari) (Meeting topic: manila)"15:26
gouthamrlet's hear from vhari!15:26
gouthamr#link: https://etherpad.openstack.org/p/manila-bug-triage-pad-new (Bug Triage etherpad)15:26
vhario/15:26
vharilet's take a look at the 1st bug https://bugs.launchpad.net/manila/+bug/185832815:27
openstackLaunchpad bug 1858328 in Manila "Manila share does not get into "shrinking_possible_data_loss_error" status when shrinking a share" [Low,Confirmed] - Assigned to Douglas Viroel (dviroel)15:27
tbarrondviroel: When I looked at this one, I wondered if the NetApp driver didn't return the expected "possible data loss" error because15:27
tbarronit was of the opinion15:27
tbarronthat it had caught the shrink attempt in time15:27
tbarronand there is no possible data loss15:27
tbarronOf course we don't have an error state in manila that really accommodates that ....15:28
dviroeltbarron: yes, at the end it is aborted.15:29
tbarronif so, then I think we have the question whether NetApp should adapt to manila manager expectations15:30
tbarronand the user will be told there is possible data loss even though15:30
tbarronthere isn't really15:30
tbarronor whether we should adapt the manila manager framework15:30
tbarronwhich only allows AVAILABLE, ERORmumbleSHRINKING_POSSIBLE_DATA_LOSS15:31
gouthamrthere's a comment in the code base that seems appropriate:15:31
gouthamr#link https://opendev.org/openstack/manila/src/commit/188705d58b7022b30955bfa49d7b62ba93b7e9ef/manila/share/manager.py#L3904-L390815:31
gouthamrfeels like dejavu15:32
gouthamrit is strange we'd raise an "error" if the share is perfectly alright15:32
tbarrongouthamr: +115:32
gouthamrbut, we'd possibly have to look at whether drivers don't validate, but detect data loss? (is such a thing possible?)15:32
tbarronlkuchlan is suggesting we set it to AVAILABLE but generate an ERROR user msg15:33
tbarronbut I kinda think maybe we should have an additional error state15:33
gouthamrack, that seems to be u_glide's thought as well, when implementing this state15:33
tbarronand trust the drivers to signal the right thing15:33
tbarronfor each driver, if it decides to indicate the safe error, we15:34
tbarronvalidate that change in code review15:34
tbarronthis would require a corresponding tempest test case change to allow  the new error state15:35
tbarronmaybe several tempest tests and unit tests15:35
gouthamra quick code search confirms that assumption ^15:36
tbarronI haven't checked, perhaps only NetApp is doing a safe check on shrinks15:36
gouthamrapart from the Dell/EMC Unity driver, all drivers check for instances where consumed space is greater than the requested space15:37
gouthamrand return the error15:37
tbarronk15:37
gouthamrthe Unity storage driver seems to perform this validation on their storage system15:37
gouthamr#link https://opendev.org/openstack/manila/src/commit/73b0bccd9f0e3238a153cb9ee461bbaefd6aa6d4/manila/share/drivers/dell_emc/plugins/unity/client.py#L309-L318 (Dell/EMC Unity share shrinking possible data loss)15:38
gouthamreither case, it's a validation and data loss has been prevented15:38
*** jamesmcarthur has joined #openstack-meeting-alt15:38
tbarronyup15:38
dviroelyes, same thing.15:38
gouthamrso this bug is tech debt that we haven't gotten to?15:39
gouthamrwould it make sense to address it uniformly as such, rather than change the NetApp driver?15:39
tbarronseems like it.  How does the tempest test pass in 3rd party CI?15:39
dviroel^ good question, need to take a look15:40
gouthamrlkuchlan's issue seems to be with a scenario test15:40
gouthamrwe don't test for this status in an API test15:40
tbarrongouthamr: I'm for a uniform solution but that doesn't require that all drivers change at once.15:40
tbarronchange the manager to support setting the new state (and the tests)15:40
tbarronthen drivers can do it one by one15:40
gouthamrtbarron: new state?15:41
tbarronas motivated to not alarm their users unnecessarily15:41
tbarrongouthamr: I don't see any other way, do you?15:41
tbarronnew exception and new state15:41
gouthamrit's a bigger fix, but here's what i'm thinking:15:41
tbarronraise SHRINK_REFUSED15:42
gouthamr1) Fix the share manager to raise a user message on this validation error, and set the share status to "available"15:42
gouthamr2) Fix the NetApp driver to return the exception expected by the share manager for this validation so it conforms with the other drivers15:42
tbarronmgr sets STATUS_SHRINKING_ERROR15:42
gouthamr3) Fix our scenario test to expect the share to transition to "available" and for the share size to remain the same15:43
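[editor's note: the three-step plan above could be sketched roughly as follows; the exception name, Share fields, and helper signatures are illustrative assumptions for discussion, not manila's actual code]

```python
# Sketch of the proposed shrink handling: when a driver refuses the shrink
# because the requested size is below consumed space, keep the share
# "available" and record a user message, instead of parking the share in
# an error status. All names here are illustrative, not manila's real API.

STATUS_AVAILABLE = 'available'


class ShareShrinkingPossibleDataLossError(Exception):
    """Raised by a driver that refuses a shrink to prevent data loss."""


class Share:
    def __init__(self, size):
        self.size = size
        self.status = STATUS_AVAILABLE


def shrink_share(share, new_size, driver_shrink, user_messages):
    """Ask the driver to shrink; translate a refusal into a user message."""
    try:
        driver_shrink(share, new_size)
    except ShareShrinkingPossibleDataLossError:
        # The driver caught the problem in time, so nothing was lost:
        # the share stays usable and the user learns why the shrink failed.
        share.status = STATUS_AVAILABLE
        user_messages.append(
            'Shrink to %d GiB refused: requested size is below consumed '
            'space' % new_size)
        return False
    share.size = new_size
    share.status = STATUS_AVAILABLE
    return True
```

A driver that validates consumed space (as the NetApp and Unity drivers referenced above do) would raise the exception itself; the scenario test would then assert the share stays "available" with its original size.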
dviroel+115:43
tbarrondoes this leave us with possible data loss with some back ends but the share is available?15:44
gouthamrtbarron: that'd be the first thing to check15:44
tbarronwell we know a priori that it could, so this solution seems to do the wrong thing from a logical standpoint15:44
gouthamrtbarron: i looked through the drivers now, none of them are forcing a shrink, possibly because we call out the need for this validation15:44
tbarroneven if it might work empirically, with all our drivers15:45
gouthamrin the driver developer guide (or the interface doc)15:45
tbarronwe're keeping a possible data loss state and15:45
tbarronhaving to make sure every driver really prevents it15:45
gouthamr#link https://docs.openstack.org/manila/queens/contributor/driver_requirements.html#share-shrinking (driver expectations)15:45
tbarrontomorrow we get a driver where there might be possible data loss, and we keep a state for that, but never use it?15:46
tbarronit seems to me a kludge and a bad design15:46
gouthamr#link https://opendev.org/openstack/manila/src/commit/14d3e268a05265db53b5cfd19d9a85a3ba73a271/manila/share/driver.py#L1163-L1165 (driver interface doc)15:46
gouthamrtbarron: yeah, not inclined to do that - if a storage system can somehow detect data loss while shrinking a share, it should be able to do so before shrinking the share15:47
tbarronSure, we set the expectations that drivers should do the right thing.  But if we rely entirely on that then15:47
gouthamrtbarron: or while shrinking the share, and refuse to shrink like Unity and NetApp do15:47
tbarronwe can get rid of the possible data loss state entirely15:48
tbarronI think we all agree it's better for a back end to detect and prevent data loss.15:48
gouthamryes, if there is an exception in that path, we'd set the status to "error" and log - allowing operators to take a look anyway15:48
gouthamrokay, dviroel are you still comfortable handling this bug?15:50
dviroelgouthamr: yes, we have danielarthurt looking at it right now15:50
vharidviroel++15:50
tbarronthanks dviroel15:50
gouthamrawesome, ty dviroel danielarthurt - when you have your findings, please summarize on the bug report15:51
dviroelsure thing15:51
*** jakecoll has joined #openstack-meeting-alt15:52
vharicool, think we're almost out of time - passing on the token to gouthamr15:52
gouthamrthis change should be backported, for us to continue using that test case as it is written15:52
gouthamrthank you vhari - this was an interesting one15:52
vhariindeed. yw15:52
gouthamrdviroel tbarron danielarthurt: we can brainstorm further on what that means - changing this behavior does seem like it gets into the grey area between a bugfix and a feature15:53
gouthamrbut, i don't have bright ideas to fix a design issue like this15:54
tbarronyup15:54
dviroel+115:54
gouthamrokay, let's take this discussion to #openstack-manila and to the bug15:55
gouthamrdanielarthurt: i'll subscribe to the bug, but, if you don't see a response from me/tbarron/dviroel after your update - please ping us :)15:55
danielarthurtOk15:56
gouthamrty15:56
gouthamr#topic Open Discussion15:56
*** openstack changes topic to "Open Discussion (Meeting topic: manila)"15:56
andrebeltramidviroel:15:56
dviroelandrebeltrami has nothing to say, btw15:57
dviroellol15:57
lsekilol15:57
carlosshaha15:57
gouthamrhaha, it was okay to gossip about you during open discussion15:57
andrebeltramisorry for that :(15:58
gouthamrlol, np andrebeltrami15:58
gouthamralright folks, let's wrap up and see each other on #openstack-manila15:58
gouthamrthank you for attending15:58
carlossthanks gouthamr15:58
dviroelthanks!15:58
gouthamr#endmeeting15:58
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"15:58
openstackMeeting ended Thu Feb 27 15:58:45 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)15:58
openstackMinutes:        http://eavesdrop.openstack.org/meetings/manila/2020/manila.2020-02-27-15.01.html15:58
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/manila/2020/manila.2020-02-27-15.01.txt15:58
openstackLog:            http://eavesdrop.openstack.org/meetings/manila/2020/manila.2020-02-27-15.01.log.html15:58
*** priteau has joined #openstack-meeting-alt16:00
*** diablo_rojo has joined #openstack-meeting-alt16:00
priteau#startmeeting blazar16:00
openstackMeeting started Thu Feb 27 16:00:57 2020 UTC and is due to finish in 60 minutes.  The chair is priteau. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
*** openstack changes topic to " (Meeting topic: blazar)"16:01
openstackThe meeting name has been set to 'blazar'16:01
priteau#topic Roll call16:01
*** openstack changes topic to "Roll call (Meeting topic: blazar)"16:01
priteauHi jakecoll16:01
jakecollgood morning16:01
priteauIs diurnalist around?16:02
jakecollHe wanted to join today16:02
jakecollI'll ping. He wanted to talk about a spec today.16:02
*** diurnalist has joined #openstack-meeting-alt16:03
diurnalisto/16:03
priteauHi diurnalist16:03
diurnalistHello- I got mixed up with DST16:03
priteauHas it changed in the US already?16:04
priteauNah it's in 10 days16:04
priteauAnyways16:04
jakecollIt messes with the calendar if you haven't downloaded the ics file from the meetings page16:05
priteauAgenda for today:16:05
priteau* Update on specs work16:05
diurnalistok, not sure what happened. i had it written down for 11am16:05
priteau* Upstream contributions16:05
priteau* AOB16:05
jakecollhttp://eavesdrop.openstack.org/#Blazar_Team_Meeting16:05
jakecollYou'll want to add the ics file at this link16:05
priteauImport the ical file and you won't have to ever update it16:05
*** jcoufal has joined #openstack-meeting-alt16:05
diurnalistthx16:06
priteau#topic Update on specs work16:06
*** openstack changes topic to "Update on specs work (Meeting topic: blazar)"16:06
priteauI am happy to say that I finally reviewed your spec diurnalist16:06
priteau#link https://review.opendev.org/#/c/707042/16:06
priteauSorry it took a couple of weeks16:07
priteauOverall I like the approach, just need to iron out some details of the payload16:08
*** ccamacho has quit IRC16:08
priteauLooks like tetsuro is not yet convinced16:09
diurnalistsounds good--I still prefer the service approach, but I'd be willing to follow something like a plugin approach using stevedore. with the downsides that it requires you to (a) use Python and (b) package the module locally (which usually means "manually") when using something like Kolla16:09
diurnalisti'll call out the downsides i see a bit more clearly in the alternatives and add the links you mentioned to nova vendordata2. in that spec they discuss similar things16:10
priteauDid you see my suggestion from today16:10
diurnalistYes16:10
priteauUse the plugin approach, with one of the plugins making calls to the external service16:10
priteauThis way we could have a SimpleEnforcement plugin that does simple things like check max duration16:11
priteauNoopEnforcement would be default16:11
diurnalistah, that's what you meant. yes, that might make sense16:11
priteauExternalEnforcement would be your approach16:11
priteauAnyone could load their CustomEnforcement if they wish, they're responsible for figuring out how to include the code in their container images if using kolla16:12
*** e0ne has quit IRC16:12
diurnalistyes, that's of course more work but i had thought it would likely move in that direction eventually16:12
priteauA downside is that it means more places where the interface might change over time16:12
priteauBut maybe it's not that much more work. After all, we probably want to encapsulate this anyway16:13
diurnalistit also opens the door to some sort of default QuotaEnforcement thing, which i think there is related work being thought about?16:13
priteauThat's right16:13
priteauAlthough one might want QuotaEnforcement + ExternalEnforcement16:13
diurnalist>.<16:14
diurnalisthaha16:14
priteauSo it could be a list of plugins that are called sequentially16:14
diurnalistyeah, like nova's scheduler16:14
priteaulike scheduler filters16:14
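[editor's note: the sequential-filter idea discussed above might look roughly like this; class and method names are assumptions for discussion, not Blazar's eventual interface]

```python
# Sketch of a chain of enforcement filters, consulted in order like nova
# scheduler filters; any filter may veto the lease. NoopEnforcement would
# be the default; an ExternalEnforcement would call out to a service, and
# a QuotaEnforcement could coexist with it in the same list.
# All names here are hypothetical.

class EnforcementException(Exception):
    """Raised by a filter to reject a lease request."""


class BaseEnforcement:
    def check_create(self, context, lease):
        """Raise EnforcementException to veto; return None to allow."""


class NoopEnforcement(BaseEnforcement):
    pass  # allows everything (the proposed default)


class MaxLeaseDurationEnforcement(BaseEnforcement):
    def __init__(self, max_hours):
        self.max_hours = max_hours

    def check_create(self, context, lease):
        if lease['duration_hours'] > self.max_hours:
            raise EnforcementException('lease exceeds max duration')


def enforce_create(filters, context, lease):
    # Filters run sequentially; the first failure rejects the request.
    for f in filters:
        f.check_create(context, lease)
```

Custom filters could be loaded by name (e.g. via stevedore entry points, as mentioned earlier), which is where the packaging concern with Kolla images comes in.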
*** links has quit IRC16:14
priteauThis is making the design a fair amount bigger, but it might be more flexible down the line16:15
priteauAnd if it solves tetsuro's concerns at the same time…16:15
diurnalistyep16:15
diurnalistI like the idea16:15
priteauThe spec should describe the plugin API first then16:16
priteauOr another spec for the plugin API, the current one focusing on a specific implementation16:16
diurnalistProbably two specs makes sense in this case. If you have suggestions of how best to logically link them, I'd be interested16:17
diurnalistI can just put it in 'related' links16:17
diurnalistBut I don't know if something more formal is usually done16:17
priteauI am not aware of any specific approach16:17
priteauReferences would be fine16:18
priteauWhat do you think about the need for identifying cloud & region in case the external service is shared?16:19
diurnalistYes, I think that will be necessary. But, do you think that auth URL will be enough? I wonder if you can get region/domain from the client token.16:20
diurnalistI was just inspecting a token. One can get domain from that, but not region16:22
priteauI wanted to ask you why you were passing the client token to the service16:23
priteauIn one of the vendordata2 specs, they say they should actually be passing a token from nova16:23
diurnalistI guess we don't need to. I initially thought it would be useful to fetch the list of leases under the user. But you can do that with the admin token. So, I guess user_id, project_id, user_domain_id, project_domain_id, and region_name will all have to be sent. Kind of a lot, but not sure how else to do it.16:24
diurnalistand auth_url16:24
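[editor's note: the fields enumerated above could form the request payload Blazar sends to the external service; the field names and values below are illustrative assumptions, not a settled spec]

```python
# Illustrative payload for the external usage-enforcement service, built
# from identifiers Blazar already has, instead of forwarding the user's
# keystone token. Values are placeholders; field names are assumptions.
payload = {
    'auth_url': 'https://keystone.example.org/v3',  # identifies the cloud
    'region_name': 'RegionOne',   # not recoverable from the token itself
    'user_id': 'a-user-id',
    'project_id': 'a-project-id',
    'user_domain_id': 'default',
    'project_domain_id': 'default',
}
```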
priteauWhat's inside a fernet token?16:25
diurnalistproject (id+name+domain), user (id+name+domain), roles attached, expiry, and endpoint catalog16:25
diurnalistand an issue date, and some other bookkeeping things like an audit ID and which auth method was used16:26
priteauAre you sure the catalog is in there?16:27
priteauhttps://docs.openstack.org/keystone/latest/admin/fernet-token-faq.html#why-should-i-choose-fernet-tokens-over-pki-or-pkiz-tokens16:27
diurnalisti'm just saying, if you inspect the token, the catalog is returned16:27
priteau"This issue is mitigated when switching to fernet because fernet tokens are kept under a 250 byte limit."16:27
diurnalistit's likely not encoded in there directly16:27
priteauOh I see16:27
*** tesseract has quit IRC16:27
priteauPKI and PKIZ tokens apparently include the catalog16:27
priteauI think we should avoid transferring those tokens around16:28
diurnalistSure16:28
diurnalistIf everything can be done with the admin token, that is simpler16:28
diurnalistI was more worried that admin APIs might have missing functionality16:29
diurnalistif one needed to call other openstack APIs. a design where the token is decomposed as early as possible in the request chain is better in these distributed systems IMO16:29
priteauI think you can get all the information you need16:30
diurnalistWe do at least need blazar to send some token of its own to the service, so the service knows it's actually blazar16:30
priteauI don't really like the idea of an external service making requests using a user token16:31
diurnalistwhich is in fact another argument for not sending the user token16:31
diurnalistI am not disagreeing16:31
diurnalistI'll update the spec16:32
priteauThanks16:32
priteauI think that's the main points I wanted to raise16:32
priteauAlready quite a lot :-)16:32
priteau#topic Upstream contributions16:33
*** openstack changes topic to "Upstream contributions (Meeting topic: blazar)"16:33
priteaujakecoll: I see you've updated your network reservation patch, thanks16:34
priteauIt's next on my review list16:34
priteauI've also flagged it as a review priority for the rest of the team16:35
jakecollyep. I just finished add allocations to networks on our fork. I'll update that as well.16:35
priteauThat's great16:36
priteauAre you using the allocations API for blazar-dashboard now?16:36
jakecollYes. Just went live on prod 5 minutes ago.16:37
priteauwooo16:37
diurnalistWe've been using it for the hosts dashboard for a few months16:37
diurnalistnow it's in place for all types :)16:37
priteauI didn't realise it was already on the host dashboard16:38
priteauSo except for gathering node types (which is a Chameleon concept), calendar could be database-query-free?16:38
jakecollWell, diurnalist has another spec he wanted to talk about to get us there.16:39
priteauAh, that's what you wanted to talk about?16:39
priteauSorry, I thought it was the usage enforcement one16:39
priteauLet's talk about it then16:39
*** lpetrut has quit IRC16:40
diurnalistYes, sorry16:41
diurnalistSo, one of the improvements we've additionally made to blazar-dashboard is around making resource_properties easier to use16:42
diurnalistRight now you have to know the somewhat arcane invocation needed, especially when combining queries16:42
diurnalistSo we have this resource filter (I know you know this, I'm just expanding for IRC logs)16:42
diurnalistIt's a UI element that lists all the resource property keys that are available for filtering. The user selects which key they want to filter on, and then a list of all possible values for that key are displayed, allowing them to pick one.16:43
diurnalistThis makes discovering possible resource property constraints much easier16:43
priteauAnd so we would need an API to discover these properties, as I think they are not visible to users by default?16:43
diurnalistYes16:43
diurnalistThey _may_ be visible to users by default, I'm not sure what the default host:list or host:get permissions are16:43
diurnalistBut in any case, an API call will likely be much more efficient than the alternative, which is asking for every possible resource and then itemizing all keys/values16:44
diurnalistPlus, I thought it might actually be kind of easy to implement given how Blazar already has support for these extra capabilities, which are arbitrary k/v pairs16:44
priteauIt's admin only by default16:44
diurnalistIt's a bit of a weird API as it'd mainly be used in the dashboard, but if we design it nicely, it would probably be helpful for CLI users as well16:45
diurnalistAh ok, then another reason to do it.16:45
priteauOne thing I would say then16:45
priteauWe may want to extend the DB schema to flag extra cap as user-visible16:45
*** dosaboy has quit IRC16:45
priteauBecause operators may want to reserve some for them16:45
*** jamesmcarthur has quit IRC16:45
diurnalistThat's a good point16:45
priteauYou were thinking of something like GET /os-hosts/properties?16:46
*** jamesmcarthur has joined #openstack-meeting-alt16:46
diurnalistYes, though extensions in to networks and other resources like IPs would also make sense16:46
diurnalistit's a cross-cutting feature in my mind16:47
priteauOf course16:47
priteauThough we already have /os-hosts/allocations16:47
diurnalistI think the API design is a bit odd in that we are re-implementing the same thing on multiple paths, but that's neither here nor there. I think if we define /properties as the path, it can extend os-hosts/ and networks/ and whatever else16:48
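[editor's note: a minimal sketch of what such a properties endpoint could aggregate, including the user-visible flag mentioned earlier; the function name and shapes are assumptions, no such API existed at the time of this discussion]

```python
# Hypothetical aggregation behind a GET .../properties call: collect each
# extra-capability key with its distinct values across resources, so a
# dashboard can offer key/value filter widgets without fetching and
# itemizing every resource. The optional visibility set models
# operator-reserved capabilities that should stay hidden from users.

def list_properties(resources, visible_keys=None):
    props = {}
    for res in resources:
        for key, value in res.items():
            if visible_keys is not None and key not in visible_keys:
                continue  # capability reserved by the operator
            props.setdefault(key, set()).add(value)
    return {k: sorted(v) for k, v in props.items()}
```

The same aggregation would apply unchanged to os-hosts/, networks/, or any other resource type, which is what makes it a cross-cutting feature.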
*** ijw has joined #openstack-meeting-alt16:49
diurnalistto be honest i'm not even sure how easy it is to make it a "general" feature because i think we re-implement all of the DB schemas as well for each resource type. don't we have to re-implement extra caps for example? i need to double-check16:49
*** ijw has quit IRC16:49
priteauextra caps is per-model16:49
*** ijw has joined #openstack-meeting-alt16:49
priteauAlthough I was thinking of abstracting it into a ResourceExtraCap model to make it more reusable16:49
diurnalistI would support that. I think there are other aspects of Blazar that should really be shared logic as well. it's becoming more important as the types of resources are scaling16:50
diurnalistBut from this "properties" API--sounds OK in principle?16:50
diurnalist*for16:50
priteauSounds OK to me16:51
priteauSpec needed of course16:51
diurnalistYep16:51
*** ccamacho has joined #openstack-meeting-alt16:53
priteauWe're getting near the end of the hour16:53
priteau#topic AOB16:53
*** openstack changes topic to "AOB (Meeting topic: blazar)"16:53
priteauAnything else to cover?16:53
*** e0ne has joined #openstack-meeting-alt16:53
diurnalistWe have not yet discussed participating at Vancouver16:54
jakecollI'm good for now. I'll keep a look out for comments on networks plugin16:54
jakecolloh right16:54
diurnalistMaybe next time we will know what the plan is. Pierre, are you planning on being there? Are any other Blazar core members?16:54
priteaudiurnalist: I've requested 0.5 to 1.5 day for Blazar16:54
diurnalistok16:54
priteautetsuro will be there. I don't know yet16:54
priteauStill some months away16:56
diurnalistmmhmm16:56
diurnalistNothing else from me then16:57
priteauWrapping up if there's nothing else?16:57
priteauThanks a lot for the good discussion today16:57
priteauGreat to get contributions from you16:57
priteauNext meeting the time will have changed in your local timezone!16:58
diurnalistCheers!16:58
priteau#endmeeting16:58
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"16:58
openstackMeeting ended Thu Feb 27 16:58:17 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:58
openstackMinutes:        http://eavesdrop.openstack.org/meetings/blazar/2020/blazar.2020-02-27-16.00.html16:58
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/blazar/2020/blazar.2020-02-27-16.00.txt16:58
openstackLog:            http://eavesdrop.openstack.org/meetings/blazar/2020/blazar.2020-02-27-16.00.log.html16:58
*** dosaboy has joined #openstack-meeting-alt17:01
*** diurnalist has quit IRC17:09
*** diurnalist has joined #openstack-meeting-alt17:14
*** igordc has joined #openstack-meeting-alt17:39
*** priteau has quit IRC17:40
*** jakecoll has quit IRC17:42
*** igordc has quit IRC18:01
*** igordc has joined #openstack-meeting-alt18:04
*** danielarthurt has quit IRC18:08
*** macz_ has joined #openstack-meeting-alt18:12
*** macz_ has quit IRC18:14
*** macz_ has joined #openstack-meeting-alt18:15
*** e0ne has quit IRC18:21
*** jcoufal has quit IRC18:26
*** jamesmcarthur has quit IRC18:33
*** jamesmcarthur has joined #openstack-meeting-alt18:33
*** vhari has quit IRC18:52
*** e0ne has joined #openstack-meeting-alt18:57
*** andrebeltrami has quit IRC18:58
*** e0ne has quit IRC18:59
*** e0ne has joined #openstack-meeting-alt19:10
*** jamesmcarthur has quit IRC19:12
*** jamesmcarthur has joined #openstack-meeting-alt19:40
*** gyee has quit IRC19:49
*** gyee has joined #openstack-meeting-alt19:49
*** e0ne has quit IRC20:07
*** diurnalist has quit IRC20:16
*** ijw has quit IRC20:23
*** diurnalist has joined #openstack-meeting-alt20:26
*** kozhukalov has quit IRC20:27
*** kozhukalov has joined #openstack-meeting-alt20:28
*** ijw has joined #openstack-meeting-alt20:48
*** ijw has quit IRC20:53
*** ijw has joined #openstack-meeting-alt20:53
*** ijw has quit IRC20:54
*** ijw has joined #openstack-meeting-alt20:54
*** jamesmcarthur has quit IRC20:54
*** kozhukalov has quit IRC21:08
*** slaweq has quit IRC21:15
*** kozhukalov has joined #openstack-meeting-alt21:40
*** rcernin has joined #openstack-meeting-alt21:44
*** jamesmcarthur has joined #openstack-meeting-alt21:54
*** jamesmcarthur has quit IRC22:08
*** slaweq has joined #openstack-meeting-alt22:11
*** jamesmcarthur has joined #openstack-meeting-alt22:11
*** slaweq has quit IRC22:15
*** slaweq has joined #openstack-meeting-alt22:18
*** ijw has quit IRC22:19
*** slaweq has quit IRC22:22
*** ijw has joined #openstack-meeting-alt22:47
*** jamesmcarthur has quit IRC22:49
*** ijw has quit IRC22:52
*** ijw has joined #openstack-meeting-alt22:52
*** slaweq has joined #openstack-meeting-alt23:11
*** slaweq has quit IRC23:16
*** jamesmcarthur has joined #openstack-meeting-alt23:25
*** kozhukalov has quit IRC23:35

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!