Tuesday, 2017-05-23

openstackgerritRosario Di Somma proposed openstack-infra/shade master: WIP: Add pagination for the list_volumes call  https://review.openstack.org/46692700:29
*** jamielennox is now known as jamielennox|away01:54
*** gkadam has joined #openstack-shade03:51
*** jamielennox|away is now known as jamielennox04:23
*** jamielennox is now known as jamielennox|away05:11
*** jamielennox|away is now known as jamielennox05:51
*** jamielennox is now known as jamielennox|away06:23
*** ioggstream has joined #openstack-shade07:39
*** openstackgerrit has quit IRC08:18
*** cdent has joined #openstack-shade11:26
*** gouthamr_ has joined #openstack-shade11:36
*** cdent has quit IRC11:45
*** cdent has joined #openstack-shade11:50
*** gouthamr_ has quit IRC12:02
*** noshankus has joined #openstack-shade12:08
*** rods has quit IRC12:11
noshankusHi guys, I'm trying to execute operator_cloud.get_compute_usage() from a remote machine; however, it always seems to use the internal API service URL rather than the public one specified in clouds.yml. When using openstack_cloud.get_compute_limits(), it works fine for the same cloud... any ideas?12:11
*** rods has joined #openstack-shade12:14
noshankusLooks like I can specify: operator_instance = shade.operator_cloud(cloud=cloud.name, interface='public'), which does return, but fails to find any resource usage12:17
mordrednoshankus: hrm. that definitely seems like a bug for sure - let me poke for a second12:19
mordredhrm. both of those are still using novaclient under the covers - lemme go check to see if novaclient is doing something _weird_12:20
mordrednoshankus: I do not see anything in the code anywhere that would cause the thing you're talking about - but this is openstack, that doesn't mean anything. :) I'm going to put together a quick local reproduction and see what's up - may ask you to run something in just a bit12:25
noshankus@mordred Sure - thanks12:42
noshankusSo, looking at the code, interface=admin by default - setting to public as above gets a response, but I don't get any usage stats. Do I have to enable the gathering of usage stats somehow?12:44
mordrednoshankus: where are you seeing the interface=admin bit? that should only be for keystone v2 ... oh, wait - I'm looking at master12:45
mordredI believe that part is a bug we fixed that we're about to release12:45
noshankusI see... it looks like when I specify "interface: public" in my extra_config in clouds.yml, openstack_cloud uses the parameter, but operator_cloud does not, so it must be specified explicitly12:46
noshankusYeah - in __init__ def operator_cloud, it checks for "interface" in kwargs - if it's missing, it sets it to admin12:48
noshankusThat's the difference I guess12:48
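
A minimal sketch of what is being described above, assuming the factory behaviour noshankus quotes: shade's operator_cloud() fills in interface='admin' when the caller does not pass one, so asking for the public interface explicitly works around it until the fix lands. The cloud and project names below are placeholders.

    import shade

    # Paraphrase of the factory default described above (not the exact shade source):
    # def operator_cloud(cloud=None, **kwargs):
    #     if 'interface' not in kwargs:
    #         kwargs['interface'] = 'admin'
    #     ...

    # Workaround until the fixed release: ask for the public interface explicitly.
    operator_instance = shade.operator_cloud(cloud='mycloud', interface='public')
    print(operator_instance.get_compute_usage('demo'))
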
*** jamielennox|away is now known as jamielennox12:49
mordredAHA!12:50
mordredthank you12:50
mordredthat's totally a bug12:50
mordred(turns out interface=admin is actually a thing that pretty much should never be used and is really only a thing for keystone v2. we fixed that elsewhere for this release, but I missed the factory function)12:51
noshankus@mordred No problem - happy to help :)12:52
mordredhowever - I'm definitely also getting empty responses back from the cloud I just checked - which is fair, since I've never booted a server in that project - let me check a different project12:54
noshankusStill can't get any usage stats via operator_cloud.get_compute_usage() however.... Any ideas? It does hand off to nova_client12:54
*** openstackgerrit has joined #openstack-shade13:01
openstackgerritMonty Taylor proposed openstack-infra/shade master: Fix get_compute_usage normalization problem  https://review.openstack.org/46722613:01
mordrednoshankus: that should fix the interface issue and a different one that was causing the dict returned to be broken13:01
mordrednoshankus: I'm getting data on projects that had usage, and a bunch of 0s on a project that hasn't had any servers booted in it for the time period13:02
Shrewshrm, we really have no difference between OpenStackCloud and OperatorCloud now13:04
Shrewsexcept for code modularization13:04
noshankus@mordred - cool - I'll have a look and try it out... also noticed in "def get_compute_usage", under "if not proj" the error message uses "name=proj.id" but should use "name_or_id" - you can't get the id of proj if proj doesn't exist :)13:05
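
A rough sketch of the fix being suggested, assuming the surrounding shade code looks roughly like what noshankus quotes; the helper name, exception type, and message wording below are illustrative rather than the exact source.

    from shade import exc

    # Sketch of the corrected lookup-and-raise: build the error message from the
    # name_or_id the caller passed in, because when the lookup fails proj is None
    # and proj.id would itself blow up.
    def _get_project_or_raise(cloud, name_or_id):
        proj = cloud.get_project(name_or_id)
        if not proj:
            raise exc.OpenStackCloudException(
                "Unable to get usage for project: {name}".format(name=name_or_id))
        return proj
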
Shrewsmordred: +3'd if you want to redo the release stuff13:05
mordrednoshankus: haha. good catch.13:08
mordredShrews: and yah - as a general concept that turned out to be not nearly as useful as we may have originally thought13:08
mordredShrews: although at least the modularization puts documentation of functions into a different place so people don't think they can call them when they can't ... but we could also do that just with docstrings13:09
noshankus@mordred do you actually get empty responses, or the message: "Unable to get resources usage for project:"13:12
openstackgerritMonty Taylor proposed openstack-infra/shade master: Fix get_compute_limits error message  https://review.openstack.org/46723113:13
mordrednoshankus: I get empty responses from the cloud, and then I get a dict back from shade with a bunch of 0 values13:14
noshankusStrange, I don't - I always get that message for each project. Do you happen to know if I need to enable resource usage gathering somewhere, or is it auto-built-in?13:15
mordredI don't... if you turn on debug logging, does it give you more info?13:25
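
For reference, one way to turn on the debug logging being asked about here, using shade's simple_logging() helper; the cloud name and project below are placeholders.

    import shade

    # Enable shade's debug-level logging before making any calls.
    shade.simple_logging(debug=True)

    cloud = shade.operator_cloud(cloud='mycloud', interface='public')
    print(cloud.get_compute_usage('demo'))
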
*** cdent has quit IRC13:26
mordrednoshankus: also - we'll be able to give a better error message once we convert this to REST - turns out there are useful error messages in the rest responses that we're not picking up via novaclient atm13:26
mordredShrews: mind doing https://review.openstack.org/#/c/467231/ too?13:28
Shrewsdone13:30
noshankus@mordred - cool thanks, how far out is the REST implementation? Also, don't forget the "def get_compute_usage" patch where "if not proj" -> "name=proj.id" should be "name_or_id" :)13:31
rodsmordred looks like the default value for `osapi_max_limit` in devstack is set to 1000, so it's going to be hard to have a functional test for https://review.openstack.org/#/c/466927/13:34
rods* that's for cinder13:34
noshankus@mordred - nothing special with debug - I can see it ran the task, but still unable to get resource usage. I've been trying some of the other "list_" functions but only some work, like "list_hypervisor", but not "list_endpoints" or "list_services" - running against Mitaka btw13:40
mordrednoshankus: that makes me sad :(13:43
mordredrods: hrm. well - we could override that in our devstack job config I suppose13:44
mordrednoshankus: perhaps the user account you're using doesn't have access to those calls and we're not detecting/communicating that properly?13:46
noshankusHmm, am using the default "admin" account... In fact, when calling list_services() I get a "The resource could not be found. (HTTP 404)" - so maybe the API endpoints are off somehow?13:47
*** pabelanger has joined #openstack-shade13:51
pabelangerohai13:51
pabelangermordred: so, is building caching logic for SSH keys something we'd consider for shade?  Today, there is no easy way to update an existing SSH keypair13:52
*** rcarrillocruz has quit IRC13:55
Shrewspabelanger: what do you mean? you want to avoid the delete_keypair()/create_keypair() to update it?14:01
mordrednoshankus: for list_services - are you using keystone v2 or keystone v3?14:02
pabelangerShrews: since we switched nodepool to use keypairs for infra-root users, I've noticed an issue. We cannot update said keypair, infra-root-keys. So, if we need to rotate a user (because we store 8 keys in infra-root-keys), we have to delete / create the key again with shade (ansible-role-cloud-launcher). Which is okay, but as soon as we delete the key, nodepool will fail to create a server because14:04
pabelangerinfra-root-keys is missing14:04
pabelangerso, looking for a way to do this, without staging 4 patches and nodepool outages14:04
pabelangerhttps://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/openstack/os_keypair.py#L14914:05
pabelangerSo, might be possible to update os_keypair in ansible to do this, just trying to see what the options are14:06
Shrewspabelanger: i'm not familiar with this nodepool change. is there a review up? spec?14:08
mordredpabelanger: I mean, the real problem is that you can't update keypairs - you can only create/delete14:08
pabelangermordred: yes14:08
mordredpabelanger: so what we'd ACTUALLY need to do is create a new keypair, update the nodepool config to use the new keypair, then delete the old keypair14:08
mordredsadly14:08
Shrewswell, there's more than that.14:09
pabelangerShrews: https://review.openstack.org/#/c/455480/14:09
Shrewsif you have nodes ready with the old keypair, they become unreachable. which is why i'm concerned about this change to nodepool14:09
pabelangerwe use this key for the root user only, so nodepool will not be affected. It uses the jenkins / zuul users14:10
rodsmordred ya, that should do it. Not sure what value we should set osapi_max_limit to though, I don't know how big the devstack instance is. Do you see value in a functional test for that change? Should we wait to implement pagination support for all of the list calls?14:10
mordredShrews: good point - you actually need to do this twice14:10
mordredyou need a new keypair with both the old and new keys in it. you need to update the nodepool config with that key name - then you need to wait until all the nodes with the old key name are gone. then you can make a new keypair with only the new keys, then update the nodepool config, wait for servers to go away again, then you can delete the old and temp keypairs14:12
pabelangermordred: yes, that's how we'd need to do it today. Which is fine, just a lot of code churn14:12
mordredit is - but there's no other way to do it - other than eating nodepool boot failures14:12
mordredI'd honestly say just eat the boot failures14:12
mordredand do the delete/re-create quickly14:12
Shrewsyou can only supply a single keypair to nodepool, right? i must be missing something14:13
mordredShrews: the situation is no better for key rotation when the keys are baked into the images14:13
mordredShrews: right. that's why I mentioned updating the nodepool config14:13
pabelangerIf we can avoid updating nodepool.yaml, that would be good. I could live with boot failures for the create / delete window14:14
mordredwe'd ALSO need to be able to tell nodepool about more than one local private key - so that nodepool can have the old and the new key at the same time14:14
mordredor- just update the nodepool copy of the private key when you update the keypairs14:14
pabelangerwhy do we want nodepool to have both keys again?14:15
mordredthere'll be a little window - but test job operation will still work during that time14:15
mordredpabelanger: wait - this is just the keypair with the infra-root keys in it, right?14:15
pabelangeryes, for root SSH user14:16
mordredpabelanger: or is nodepool also actually using key information from this for anything itself?14:16
pabelangerno14:16
pabelangernodepool does not use this key14:16
mordredgotcha. then yah - I think it's fine14:16
pabelangerthat key is still baked into image14:16
mordredjust eat the boot failures for the few seconds during the rotation14:16
pabelangerOkay, so what would that look like?14:17
mordrednova will just return an error that it couldn't boot the node because of an invalid key_name14:17
mordredshade does not sanity-check the key name given to create_server14:18
pabelanger2 patches to cloud_layout.yaml? or add logic to os_keypairs (ansible) to do the delete / create in a single operation14:18
pabelangerotherwise, we have a 1hr (I think) window between cloud launcher runs14:19
mordredpabelanger: GOTCHA - I understand your concern now14:19
*** rcarrillocruz has joined #openstack-shade14:19
mordredpabelanger: yes - I think adding logic to os_keypair that would simulate an "update" with a create/delete - we'll need to add a flag because that's a behavior change14:20
mordredand we'd want people to opt-in to such a change14:20
mordredShrews: ^^ that make sense to you?14:20
pabelangerAgree about flag14:20
mordredso the normal os_keypair will go "hey, I've got one named that, nothing more to do" - with the flag, it'll go "I've got one, let me check to see if its key info matches what I was given, if it does, cool, if not I will delete the existing key and create a new one with the same name"14:21
pabelangerstate latest / updated, something along those lines14:21
mordredyah. something like that14:21
pabelangerk, I'll add it to the list of things to hack on14:21
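
A sketch of the opt-in "replace on change" behaviour mordred describes above, using shade calls that exist today (get_keypair / delete_keypair / create_keypair); the force_update flag and the public_key argument stand in for whatever the os_keypair module option ends up being called.

    def ensure_keypair(cloud, name, public_key, force_update=False):
        # Illustrative flow only - not the actual os_keypair module code.
        existing = cloud.get_keypair(name)
        if existing and force_update and existing['public_key'] != public_key:
            # Brief window here where boots referencing this key_name will fail.
            cloud.delete_keypair(name)
            existing = None
        if not existing:
            cloud.create_keypair(name, public_key)
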
Shrewsso is the zuul key baked into the image then?14:22
pabelangeryes: http://git.openstack.org/cgit/openstack-infra/project-config/tree/nodepool/elements/jenkins-slave/install.d/20-jenkins-slave#n2214:23
Shrewsi guess it'd have to be. i didn't realize that14:23
pabelangerI don't think we all agreed about moving that key to nova14:23
Shrewsok, i think i'm now catching up.14:23
Shrewsmordred: yes, that makes sense. since nodepool retries launches anyway, hopefully there won't be too many launch failures. we could also increase that value14:25
Shrewsor even add some sort of incremental delay between attempts14:26
Shrewspabelanger: ^^^, if you want something to hack on  :)14:26
mordredShrews: ++14:27
openstackgerritMerged openstack-infra/shade master: Fix get_compute_usage normalization problem  https://review.openstack.org/46722614:27
mordredShrews: you're going to love the patch I'm about to push - based on this morning's fun with noshankus14:27
mordredI got annoyed by an interface14:27
*** cdent has joined #openstack-shade14:28
openstackgerritMonty Taylor proposed openstack-infra/shade master: Allow a user to submit start and end time as strings  https://review.openstack.org/46725714:28
mordredShrews: I didn't add a unit test because you can't mock datetime.datetime14:31
Shrewsmordred: problem14:31
Shrews:)14:31
mordredShrews: once we convert that call to rest I believe it'll be easier to test the outcome in the rest call itself14:31
mordredsince the parameter passed to the rest call is a string14:31
mordredShrews: also - please enjoy how nova expects an iso datetime but will actually reject any iso datetime that includes a tz offset14:32
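
A hedged illustration of the two quirks just mentioned, not the patch under review itself: accept start/end either as datetimes or as strings, and hand nova a tz-naive ISO 8601 value since it rejects offsets. The accepted string format here is an assumption.

    import datetime

    def _normalize_usage_time(value):
        # Accept either a datetime or a string (single assumed format for brevity).
        if isinstance(value, str):
            value = datetime.datetime.strptime(value, '%Y-%m-%dT%H:%M:%S')
        # Nova expects ISO 8601 but rejects values that carry a timezone offset,
        # so strip tzinfo before formatting.
        return value.replace(tzinfo=None).isoformat()
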
*** gkadam has quit IRC14:33
noshankus@mordred - I was using v2.0 - switching over to v3 I now get a BadRequest: BadRequest: Expecting to find domain in project - the server could not comply with the request since it is either malformed or otherwise incorrect. The client is assumed to be in error. (HTTP 400)14:36
mordrednoshankus: kk. so - there is a fix for list_services in latest os-client-config master and latest shade master14:40
mordrednoshankus: working on getting both released today so it should all work properly14:41
noshankus@mordred - excellent - and your new patch showed me the light with get_compute_usage() - I was passing in a datetime as a string for the start date! +1 to the simplicity of being able to pass times in as strings :)14:43
noshankus@mordred - thanks for all the help today - your help and time are much appreciated!14:44
mordrednoshankus: yay! thanks for pointing out the issue - this was a real mess14:45
noshankus@mordred - no problem, I'm in the middle of playing around with a lot of this stuff, so if, and as I find more, it's good to know you guys are here14:46
mordredyes - please let us know - if you have problems, they're bugs :)14:46
noshankuscool, will do14:46
openstackgerritMerged openstack-infra/shade master: Fix get_compute_limits error message  https://review.openstack.org/46723114:51
*** cdent has quit IRC14:51
*** cdent has joined #openstack-shade14:52
*** slaweq has joined #openstack-shade15:00
*** slaweq has quit IRC15:52
*** slaweq has joined #openstack-shade15:53
*** slaweq has quit IRC15:58
*** gouthamr has joined #openstack-shade16:35
morganmordred: ok about to start replacing keystoneclient calls now that i am caught up on emails and such post vacation17:04
morganmordred: since i really need a break from de-mocking tests.17:04
*** ioggstream has quit IRC17:08
*** slaweq has joined #openstack-shade17:17
rodsmordred about the functional tests for https://review.openstack.org/#/c/466927/2, I wonder if we should wait to add pagination support to all of the list calls17:19
*** slaweq has quit IRC17:46
*** slaweq has joined #openstack-shade17:46
*** slaweq has quit IRC17:51
*** slaweq has joined #openstack-shade18:28
*** slaweq has quit IRC18:32
mordredmorgan: awesome!18:49
mordredmorgan: so - before you get TOO far ...18:50
mordredmorgan: let's sync up on discovery ... I owe slaweq_ and rods a quick writeup on thoughts on how the discovery api-wg specs apply here ...18:50
mordredalthough keystoneauth already implements version discovery for keystone - so you're going to be _mostly_ fine - or at least fine enough to start and be upwards compat18:51
mordredbut if I can get that braindump out real quick and make sure you agree - it might be good to keep an eye out for gotchas as you work18:51
morgansure19:00
morgani figured i'd work on the easier ksc-ectomy bits. since discovery is already KSA, it wouldn't really be touched19:01
morganfor now.19:01
openstackgerritMonty Taylor proposed openstack-infra/shade master: Allow a user to submit start and end time as strings  https://review.openstack.org/46725719:06
mordredmorgan: ++19:07
mordredmorgan: sounds great to me19:07
*** slaweq has joined #openstack-shade19:08
*** slaweq has quit IRC19:24
*** slaweq has joined #openstack-shade19:25
openstackgerritRosario Di Somma proposed openstack-infra/shade master: Add pagination for the list_volumes call  https://review.openstack.org/46692720:08
*** slaweq has quit IRC20:28
*** slaweq has joined #openstack-shade20:29
*** slaweq has quit IRC20:54
*** slaweq has joined #openstack-shade20:55
*** gouthamr has quit IRC21:05
openstackgerritMonty Taylor proposed openstack-infra/shade master: Pick most recent rather than first fixed address  https://review.openstack.org/46738521:07
-openstackstatus- NOTICE: The logserver has filled up, so jobs are currently aborting with POST_FAILURE results; remediation is underway.21:23
*** ChanServ changes topic to "The logserver has filled up, so jobs are currently aborting with POST_FAILURE results; remediation is underway."21:23
*** slaweq has quit IRC21:30
*** slaweq has joined #openstack-shade21:30
*** slaweq has quit IRC21:35
*** jroll has quit IRC21:47
*** jroll has joined #openstack-shade21:47
*** jroll has quit IRC21:49
*** jroll has joined #openstack-shade21:53
*** cdent has quit IRC22:09
*** gouthamr has joined #openstack-shade22:21
*** slaweq has joined #openstack-shade23:09
*** slaweq has quit IRC23:15
openstackgerritMonty Taylor proposed openstack-infra/shade master: Pick most recent rather than first fixed address  https://review.openstack.org/46738523:19
openstackgerritMonty Taylor proposed openstack-infra/shade master: Pick most recent rather than first fixed address  https://review.openstack.org/46738523:37
mordredrods: so - I think yah, adding it to all of the list calls is likely a whole new effort we should think about. for the volumes, I think it would not be too hard to set osapi_max_limit to something super low like 2 or something... or maybe something like 20 so that we can have most of our tests not hit it - but we can maybe make one test that creates 30 tiny volumes and then lists them to test23:40
mordredthat pagination actually works maybe?23:40
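
A sketch of the functional test mordred is describing, assuming devstack's cinder osapi_max_limit has been lowered (e.g. to 20) in the job config so that 30 volumes span more than one page; the helper name and volume naming are assumptions, and the shade calls used are create_volume() and list_volumes().

    def check_list_volumes_pagination(cloud):
        # Sketch only - assumes cinder's osapi_max_limit was lowered (e.g. to 20)
        # in the devstack job, so 30 volumes force the list call to paginate.
        created = [cloud.create_volume(size=1, name='paging-%d' % i, wait=True)
                   for i in range(30)]
        listed_ids = {v['id'] for v in cloud.list_volumes()}
        missing = [v['id'] for v in created if v['id'] not in listed_ids]
        assert not missing, 'volumes missing from paginated listing: %s' % missing
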
