Tuesday, 2016-02-23

* nibalizer disappears00:03
nibalizersorry clark00:03
*** yolanda has quit IRC00:07
*** dfflanders has quit IRC00:08
clarkbsuch snow00:31
*** rfolco has quit IRC01:07
*** maximo1 has joined #openstack-sprint01:25
*** maximo1 has left #openstack-sprint01:27
med_snow is a good thing02:25
med_unless you have to walk back to your hotel in it02:25
med_and don't have the right shoes02:25
*** baoli has quit IRC02:34
*** baoli has joined #openstack-sprint02:35
*** baoli has quit IRC03:36
fungimeh, it was solid in the air but liquid on the ground. sandals were just fine03:49
*** yolanda has joined #openstack-sprint04:17
*** mrmartin has joined #openstack-sprint05:40
*** mrmartin has quit IRC05:45
*** mrmartin has joined #openstack-sprint05:51
*** mrmartin has quit IRC06:08
*** mrda has quit IRC07:32
*** lucas-dinner is now known as lucasagomes09:27
*** mrmartin has joined #openstack-sprint11:14
*** gabriel has left #openstack-sprint11:30
*** rfolco has joined #openstack-sprint11:42
*** rfolco has quit IRC11:50
*** rfolco has joined #openstack-sprint11:51
*** lucasagomes is now known as lucas-hungry12:16
*** mrmartin has quit IRC12:27
*** baoli has joined #openstack-sprint13:20
*** baoli_ has joined #openstack-sprint13:21
*** baoli has quit IRC13:24
*** lucas-hungry is now known as lucasagomes13:26
*** mrmartin has joined #openstack-sprint13:39
*** mrmartin has quit IRC13:46
*** krtaylor has quit IRC14:01
*** jesusaurus has joined #openstack-sprint14:24
*** jesusaurus has quit IRC14:32
*** mrmartin has joined #openstack-sprint14:53
*** yolanda has quit IRC15:24
crinklepleia2: https://review.openstack.org/#/c/208751/57/hiera/group/baremetal.yaml is the list of machines15:52
pleia2crinkle: thank you!15:52
*** yolanda has joined #openstack-sprint15:57
yolanda clarkb https://review.openstack.org/#/c/275477/ ?16:00
*** dfflanders has joined #openstack-sprint16:05
rcarrillocruzdo we have a clouds.yaml already on the west jumphost?16:07
clarkbrcarrillocruz: no, we put it on the puppetmaster16:08
rcarrillocruzok, so nothing cowboy'd16:08
clarkbwhich is what that change above does16:08
rcarrillocruzreviewing16:08
clarkbrcarrillocruz: I think some people may have personal clouds.yaml but I am not aware of a global one16:08
*** rockstar has joined #openstack-sprint16:12
rockstaro/16:13
*** krtaylor has joined #openstack-sprint16:13
pleia2rockstar is local to us :) we don't have dinner plans for tonight yet, I think some folks were looking at breweries (but nothing was open Mondays)16:14
Clintwell, coopersmith's was open16:19
rockstarYeah, Coops is kinda the old standby.16:20
*** yolanda has quit IRC16:21
*** yolanda has joined #openstack-sprint16:22
nibalizeropenstack user set --project openstackjenkins openstackjenkins16:24
clarkbnibalizer: yolanda rcarrillocruz https://review.openstack.org/28367116:25
clarkbnibalizer: what about openstackci?16:25
clarkbnibalizer: since that is where we will be booting the mirror here shortly16:26
nibalizerclarkb: i ran the command there too16:26
nibalizeropenstack user set --project openstackci openstackci16:26
clarkbkk thanks16:26
nibalizerhiera is set and the projects/users/tenants are set16:27
clarkbnibalizer: we should just need https://review.openstack.org/283671 then https://review.openstack.org/#/c/275477/ then we can boot a mirror16:27
clarkbyolanda: rcarrillocruz ^16:27
*** mrmartin has quit IRC16:28
clarkband those two changes will conflict16:28
clarkbI will rebase16:28
nibalizerok16:30
yolandanibalizer i put a -1 in https://review.openstack.org/#/c/275485, for the same infra domain issue16:30
yolandai can fix it16:31
jeblairmordred: do you have any work in progress to alter launch-node to avoid needing the puppetmaster?16:35
mordredjeblair: I thought that was up on the shade-launch-node patch already16:35
mordredjeblair: but it's possible I imagined that16:36
jeblairmordred: i don't think i wrote such a thing...16:36
mordredno - I thought I added a patch on top of ... one sec16:36
*** yolanda has quit IRC16:36
*** Clint has quit IRC16:37
mordredhttps://review.openstack.org/#/c/247099/16:37
mordredjeblair: ^^16:37
*** Clint has joined #openstack-sprint16:37
jeblairmordred: awesome, thanks :)16:37
rcarrillocruzmordred: confused, i think we agreed last week we would do tenant/user creation with puppet, the rest we would drive with ansible16:38
rcarrillocruzjeblair is telling me you talked about doing all that with ansible?16:38
mordredyes. but I think we can move to that later - I think your tenant creation patch in puppet is fine for now16:40
rcarrillocruzalright, so should i continue with user creation in puppet for now, tackle later with ansible?16:40
mordredrcarrillocruz: we also need to create/manage users in blueboxcloud, but we do not puppet that cloud because we do not run it16:40
rcarrillocruzor just start with user creation with ansible onwards16:40
rcarrillocruzk, i'll up a role/playbook for user creation16:41
rcarrillocruzsystem-config, yah?16:41
mordredrcarrillocruz: so the most sensible thing to me is for us to have an ansible playbook that groks our clouds.yaml and verifies the users exist on the clouds where we manage them16:41
mordredyah16:41
mordredthen we just add an admin account to clouds.yaml and we've got something that handles both infra-cloud and blueboxcloud16:41
pabelangerregarding shade-launch-node.py, is a future step to convert that directly into an ansible playbook? Or simply continue using it as a python script?16:41
jeblairyeah, i brought it up because it sounded like getting credentials to puppet for the user creation might be a little difficult and didn't want to waste work on that16:41
mordredpabelanger: yes. in the future that will be an ansible playbook16:41
rcarrillocruzfwiw:16:42
rcarrillocruzhttp://git.openstack.org/cgit/openstack-infra/infra-ansible/tree/roles/setup_openstack_resources/tasks/main.yml16:42
rcarrillocruzi'll push the same role name to system-config, and put users16:42
rcarrillocruzwe can pile on it later on for other resources16:42
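
A minimal sketch of the playbook mordred describes, assuming the Ansible 2.0 os_user module and illustrative cloud names; the password variable and cloud list are placeholders, not what eventually landed in system-config:

    # ensure our service users exist on every cloud we administer,
    # driven by the same clouds.yaml that openstackclient and shade use
    - hosts: localhost
      connection: local
      tasks:
        - os_user:
            cloud: "{{ item }}"
            name: openstackci
            password: "{{ openstackci_password }}"
            default_project: openstackci
            state: present
          with_items:
            - infracloud-west
            - bluebox
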
pabelangerSo, I don't have anything to work on ATM, so I don't mind hacking on that if needed.16:43
*** yolanda has joined #openstack-sprint16:46
clarkbpabelanger: it is probably worth poking at just because this infra cloud work exposes the holes in how we currently launch things16:47
jeblaircrinkle: i have a question in https://review.openstack.org/23175716:49
nibalizer https://review.openstack.org/27548516:49
nibalizerplease to reviewing16:49
yolandahi, can you review https://review.openstack.org/#/c/140840 ? that will create infra element, so we can use that element to deploy puppet on our infra cloud16:52
crinklejeblair: answered and fixing16:59
jeblairmordred: i'm looking at 140840 with yolanda, and i'm struggling with it being in the project-config repo rather than system-config (or really anywhere else)...17:05
mordredjeblair: yes. I do not like it being in project-config17:06
mordredjeblair: but at the time project-config was the only location where we had elements17:06
yolandacrinkle, i answered your comment for neutron parameterized entries17:07
nibalizerdo these quotas look right17:08
nibalizerhttp://paste.openstack.org/show/487920/ ?17:08
rcarrillocruzhmm, mordred, do you know when 'default_project' was added to os_user17:10
rcarrillocruz?17:10
rcarrillocruzgetting "unsupported parameter for module: default_project" on ansible 2.0.0.017:10
jeblairmordred: my gut says that "infra elements" should be in system-config, and we should use them in bifrost to build our infra cloud controller/compute images.  and then in project-config we should count on them being installed (by puppet or whatever) on the nodepool host, and the nodepool config in project-config should say "use the infra elements and also the devstack and git repo etc elements"17:11
jeblairmordred: does that sound reasonable?17:11
mordredjeblair: yes. I think that is an excellent idea17:13
crinklejeblair: ++17:14
GheRiverojeblair: +117:14
fungimordred: jeblair: i agree17:15
yolandasounds good to me17:15
fungiso basically 231757 is a good next step, and then 140840 should be reproposed to system-config17:16
jeblairwoot, a way forward.  231757 adds something similar to system-config, so we can probably land that and then build on top of it17:16
jeblairfungi: ya17:16
pleia2anteaya: https://review.openstack.org/28367017:16
rcarrillocruznm17:16
rcarrillocruzpebcak17:16
yolandamordred, jeblair, i can do some work on that, because on east servers i was depending on that infra element17:16
anteayapleia2: thanks17:17
crinkleanteaya: https://review.openstack.org/#/c/208751/57/hiera/group/baremetal.yaml17:17
anteayacrinkle: thank you17:17
mordredyolanda, jeblair: once those elements are there and we're happy with them I'll push up an ansible playbook to make and upload base images for all of our clouds using os_image - unless yolanda beats me to it17:17
mordredbecause I think we can make an infra-trusty base image and have it uploaded to all of our openstackci projects and use that as the base image for things we launch with launch-node, yeah?17:18
fungipossibly even have nodepool generate/update those for us in the future, though we'd need to access them in the openstackci tenant and nodepool's uploading images to openstackjenkins so... maybe not17:20
nibalizerhttp://paste.openstack.org/show/487922/17:20
jeblairfungi: yeah, we could either trust nodepool some more, or set up a second nodepool just to do those builds17:21
yolandai like the idea of using nodepool for long lived servers as well17:22
clarkbgreghaynes is hardcore afk but that was the motivation behind more separation between nodepool and image building17:22
clarkbthen you could just have a daemon running wherever updating images for whatever17:22
rcarrillocruzhmm17:22
clarkband consumers would use the glance api to consume the results17:22
rcarrillocruzisn't shrews around17:22
jeblairclarkb: sure, but no further changes are needed for us to start using nodepool in that manner17:23
jeblairmordred: we're having a conversation about quota -- it seems that not only is there a project quota, but there seem to be user quotas too.  i'm not sure i've ever seen this (i don't see it in bluebox)....17:24
jeblairmordred: is this ringing a bell?17:24
mordredyes17:24
mordredso - there are a few things to know here17:24
mordredone is that setting quotas does not error on bad input17:24
mordredso you might be thinking you're setting a quota, but you're not17:25
Clint\o/17:25
clarkbmordred: nova quotas as a key value service17:26
mordredyes. that is EXACTLY what it is17:26
mordredthe parameters tenant-id and user-id are exactly that17:26
mordredthey must be the UUID17:26
mordredso nova quota-update --user-id=$USER_UUID --cores 100 $TENANT_UUID17:26
mordredand if you get uuid's wrong, you're just a bad person and nova will lie to you17:27
jeblairmordred: do you know why this doesn't show up in horizon on bluebox?17:27
mordreddunno. by default I think setting it for tenant is fine17:27
mordreduser is an optional overlay on top of it17:28
mordredif it looks like you set it for the tenatn but you do a nova absolute-limits as that user and do not see the quotas updated17:28
jeblairmordred: nibalizer says that has caused problems.... maybe we need to remove a user quota completely?17:28
mordredyah17:28
mordredthere's no need for it17:28
mordredit likely means you did not actually increase the tenant quota17:28
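
A hedged sketch of the quota workflow mordred outlines: resolve the UUIDs first, set the quota on the tenant, then verify it as the affected user; the per-user overlay he mentions is only needed in special cases. Project names and values here are illustrative.

    TENANT_UUID=$(openstack project show -f value -c id openstackci)
    nova quota-update --cores 100 "$TENANT_UUID"   # tenant-level quota
    nova quota-show --tenant "$TENANT_UUID"        # confirm nova stored it
    nova absolute-limits                           # run as the affected user to verify
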
nibalizerheh17:31
jeblairmordred: you may want to look at the clouds.yaml in https://review.openstack.org/27548517:31
jeblairmordred: the keystonev3 usage looks maybe a little rough around the edges, particularly needing to specify the default domain, which is named default...17:32
nibalizerheh17:34
nibalizerso adding the user quota has always been needed in my experience17:34
nibalizerbut I can look at deleting it17:34
mordredjeblair: why is the domain named default?17:35
nibalizerokay i have deleted the quota17:35
jeblairmordred: i'm told keystonev3 requires a domain, and so it creates one, and it names it default, but none of the clients know that.17:35
mordredSO17:36
yolandafor quotas, i used puppet to set them: https://review.openstack.org/281770 and https://review.openstack.org/283304 did the trick for me17:36
mordredthe last time we talked about this I requested that we create a domain called infra and inside of that domain we created the openstackci and openstackjenkins users and tenants17:37
crinklemordred: why would we not use the default domain for that?17:37
mordredin fact, last time I worked with the users in west I created that domain and those users17:37
mordredcrinkle: because the default domain is syntactic sugar to help with transitions from keystone v217:37
mordredwe do not have keystone v2 transitions17:37
mordredso we should create a domain and put our things in it17:38
jeblairit will be really nice to have this in version control17:38
crinkleit is worth noting that the puppet modules have fairly recently added support for v3 and there are still a lot of issues with using non-default domains17:38
mordredyup17:39
mordredI did, in fact, create an infra domain17:39
jeblairbecause the rumour here in the room was that both users were in a single infra tenant17:39
mordredso - quick terminology thing17:39
mordredusers are not in tenants17:39
jeblairwhich isn't really what we want, but also isn't what you just said you did...17:39
mordredthere are constructs called users and projects17:39
mordredprojects container resources, such as servers17:39
mordredprojects contain resources, such as servers17:40
mordreda user can be granted access to one or more projects17:40
mordreda user and a project each exist inside of a domain17:40
jeblairi understand now.17:40
rcarrillocruzmordred: i missed that domain thing17:41
rcarrillocruzgiven that i put tenant creation in puppet17:41
rcarrillocruzi'll put the domain as well17:42
rcarrillocruzso those are put on infra domain17:42
rcarrillocruz?17:42
clarkbrcarrillocruz: yes that is what mordred is asking for17:42
rcarrillocruzk17:42
mordredthere is a user and a project called 'infra' in the infra domain that can be removed - that was me poking at this earlier17:43
fungiso user:project is a many:many relationship17:43
mordredyes17:43
jeblaircrinkle says the admin user will need to be in the default domain for $puppet-reasons.  so i think default domain with admin user; use that to create infra domain with openstackci and openstackjenkins projects and users.17:43
fungibut within the scope of a domain17:43
mordredjeblair: ++17:43
mordredI think the admin user being in the default domain is a great idea17:43
mordredfungi: actually, a user can be granted access to a project in a different domain17:43
fungioh, so user:domain is also a many:many17:44
jeblairmordred: dear me.17:44
mordredfungi: a domain is purely a namespace inside of which user and project names are unique17:44
mordredfungi: no17:44
mordredfungi: a user has one and only one domain17:44
fungibut can have access to projects within different domains from its own?17:44
fungifreaky17:44
jeblairinternal federation? :)17:44
mordredyes. that is possible - but is more of an admin like thing17:45
fungiso, yeah, just plain namespacing, not limiting scope i guess17:45
mordredit's a way you could have a service user run by something in the cloud that could have the ability to go touch everyone's stuff17:45
mordredlike if you wanted a global auditing user that could run reports on the resources in all of the projects in the cloud17:45
mordrednormally a consumer of the cloud would have an account that would be a domain admin account17:46
mordredand in that domain they would be free to create users and projects as they want to17:46
mordredwithout any knowledge or visibility of what's going on in other domains17:46
mordredthe ability to grant things across domains is a super specialized thing17:46
mordredone of the reasons, btw, that I want us to make an infra domain is largely so that we can all wrap our heads around this construct17:48
mordredwhich seems like good future proofing for our thinking17:48
clarkbI mean17:48
mordredand/or any tooling we write - to make sure it deals properly with clouds that are both multi-tenant and give each user a domain17:48
clarkbthe chances we ever need multiple domains for namespacing to avoid needing knowledge across projects/users seems to be 017:48
nibalizeryep17:48
nibalizerclarkb: has it17:48
nibalizerwe have a functional user and quota set now17:49
nibalizerand domain set in patches that are in the process of landing17:49
nibalizeri think we should roll forward and nerd-out on setting up domains in east when that comes online17:49
mordredI disagree17:50
mordredI think we should delete the user and project in the default domain17:50
mordredrecreate them in the infra domain that has existed the entire time17:50
mordredand set their quota17:51
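
For reference, a hedged sketch of recreating those resources in the infra domain with openstackclient, run as the default-domain admin; the _member_ role matches the commands mordred pastes later in the log, and the rest is illustrative (repeat for openstackjenkins):

    openstack domain create infra
    openstack project create --domain infra openstackci
    openstack user create --domain infra --password-prompt openstackci
    openstack role add --user openstackci --user-domain infra \
        --project openstackci --project-domain infra _member_
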
*** lucasagomes has left #openstack-sprint17:52
nibalizerso the issue with that is that we then have to wait for patches to pull out domain and replace it with infra17:53
rcarrillocruzmordred, nibalizer, clarkb: https://review.openstack.org/28371717:55
nibalizerok17:55
nibalizeri will set the user to the infra domain17:55
nibalizerclarkb will manually change all-clouds.yaml and boot up a mirror17:56
clarkband propose the change to record the manual changes17:56
crinklemordred: yolanda I don't understand https://review.openstack.org/#/c/280720/ , isn't /increasing/ max-connections what we did to fix "too many connections" error? i.e. https://review.openstack.org/#/c/256897/17:56
mordredsorry for the churn - thanks for humoring me on this everybody17:57
jeblairmordred: as bikesheds go it's not the worst17:57
yolandaah the commit message was misleading...17:57
mordredcrinkle, yolanda: the commit message is bad17:57
clarkbmy grump is mostly I have argued with keystone over this directly17:57
yolandayep, going to amend it17:57
clarkband it's incredibly confusing without proper docs and doesn't make sense to the vast majority of your users17:58
clarkbit is good for complicated orgs17:58
clarkbits the neutron network problem all over again17:58
mordredclarkb: sure. but it is the way that it is and us ignoring it and pretending that we're still running keystone v2 is not going to solve anything17:58
yolandaso crinkle, we were talking about that and it sounded like reducing the mysql connections to the recommended settings first was good, then we need to add some changes to nova, neutron... to reduce the connection attempts, and some extra mysql tuning17:59
mordredso bringing up the need for better docs is a great outcome we can produce17:59
crinkleyolanda: okay18:00
yolandacrinkle i'll put that one in -w because there are more settings that need to be tuned there, reducing the connections without actually optimizing some other settings can be even worse18:02
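
A rough illustration of the tuning yolanda describes, with placeholder values: cap connections on the mysql side while shrinking the per-service SQLAlchemy pools through the usual oslo.db options, so the services stop piling up connections.

    # /etc/mysql/my.cnf
    [mysqld]
    max_connections = 1024

    # nova.conf / neutron.conf
    [database]
    max_pool_size = 5
    max_overflow = 10
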
jeblairmordred: how should i run openstackclient on puppetmaster?18:02
mordredjeblair: what do you mean?18:04
jeblairmordred: well, we have instructions on how to use virtualenvs to run various commands, for example https://git.openstack.org/cgit/openstack-infra/system-config/tree/launch/README18:05
crinkleyolanda: cool wfm18:06
jeblairmordred: do we have a venv for openstackclient....or what?18:06
mordredjeblair: we do not - perhaps we should make one18:06
mordredjeblair: or, maybe we should just install python-openstackclient since we have shade globally installed in support of openstack inventory anyway18:06
*** mrmartin has joined #openstack-sprint18:07
jeblairmordred: maybe we already have that?18:07
mordredjeblair: oh - look there - we do18:07
nibalizermordred: jeblair clarkb http://paste.openstack.org/show/487928/18:08
nibalizerdoes that look the way we want18:08
mordredjeblair: https://review.openstack.org/28372918:09
mordrednibalizer: brilliant18:09
Clintyolanda: for http proxy use http://web-proxy.houston.hp.com:8080/ and for all other tcp traffic you can use http://socks-server.fc.hp.com:1080/18:10
Clintor for your web browser you can use the PAC at http://autocache.hp.com/18:10
yolandathx i'll try18:11
Clintwell, actually not all other tcp traffic; you should be able to use ports 22 and 443 without a proxy18:12
clarkbjeblair: mordred nibalizer https://review.openstack.org/283733 does the s/default/infra/18:15
*** yolanda has quit IRC18:18
rcarrillocruzclarkb, nibalizer: mind reviewing https://review.openstack.org/#/c/283717/ ?18:24
rcarrillocruzthe users change i  will need to rebase onto that one18:25
nibalizermore users/project/domains: http://paste.openstack.org/show/487931/18:25
nibalizermordred: plz to review18:25
crinkleI would like to land https://review.openstack.org/#/c/266902 nowish, which will require destroying the three currently active nodes and deleting the existing subnet, is that reasonable or are mirrors etc too far down the road?18:25
*** mrmartin has quit IRC18:25
rcarrillocruzmordred: i'm thinking we should probably split http://git.openstack.org/cgit/openstack-infra/infra-ansible/tree/roles/setup_openstack_resources into its own openstack-infra/setup_openstack_resources repo18:26
rcarrillocruzwhat you think?18:26
rcarrillocruzthen we would put a play on system-config for all clouds we care about18:26
rcarrillocruzthat role would need to be refactored to have cloud specific resources18:26
rcarrillocruzlike per-cloud servers/keypairs/networks/flavors/blah18:27
jeblairclarkb: lgtm18:29
jeblairclarkb, mordred, nibalizer: https://review.openstack.org/283739  adds the infracloud admin user to all-clouds18:29
fungiare we ready to merge 208751 as well? lgtm and already has another +2, previous comments seem addressed, dependency is merged...18:30
pabelangerrcarrillocruz: I prefer we prefix with ansible-role, but breaking out is good18:31
clarkbjeblair: it is actually wrong for other reasons18:34
clarkb"the request yo uhave made requires authentication"18:34
clarkbwhich doesn't indicate if auth failed or was simply not attempted18:34
mordredjeblair: fwiw ... if we want to - os-client-config supports local vendor profiles18:35
mordredjeblair: so we could make an infracloud east and infracloud west profile file and drop them on puppetmaster and nodepool containing auth_url, identity_api_version, auth_type and cacert settings18:35
mordredjeblair: and reference those by name in our clouds.yaml18:36
mordrednot important18:37
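
A hedged example of the os-client-config vendor-profile idea: shared endpoint settings live in a local clouds-public.yaml and clouds.yaml entries reference them by profile name, keeping only credentials inline. Hostnames and paths here are placeholders.

    # /etc/openstack/clouds-public.yaml
    public-clouds:
      infracloud-west:
        auth:
          auth_url: https://<keystone-host>:5000/v3
        identity_api_version: '3'
        cacert: /path/to/infracloud-west-ca.crt

    # clouds.yaml entry then only carries credentials
    clouds:
      openstackci-infracloud-west:
        profile: infracloud-west
        auth:
          username: openstackci
          password: <secret>
          project_name: openstackci
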
clarkbmordred: if you can inspect openstackci-infracloud-west in /etc/openstack/all-clouds.yaml and tell me if anything looks off which would result in "the request you have made requires authentication" that may be helpful18:38
clarkbsince I think what is on disk is correct (puppet is not correct)18:38
clarkbI am running `OS_CLIENT_CONFIG_FILE=/etc/openstack/all-clouds.yaml openstack --os-cloud openstackci-infracloud-west --os-region-name RegionOne --debug catalog list`18:38
mordredclarkb: kk. doing now18:38
clarkbbut flavor list results in the same thing18:38
clarkbis it possible the cacert thing isn't working?18:39
clarkblet me try with --insecure18:39
clarkbnope that didn't change the output18:39
mordredclarkb:18:39
mordred      username:18:40
mordred      password:18:40
mordred      project_name:18:40
mordredthose are empty in /etc/openstack/all-clouds.yaml18:40
clarkbmordred: thats the wrong one18:40
clarkbthat is openstackjenkins (known problem)18:40
mordredoh. ci. one sec18:40
mordredsorry18:40
clarkbnp I was confused initially myself18:40
*** baoli_ has quit IRC18:45
mordredclarkb: there was no role assignment18:48
mordredclarkb: I just ran this by hand:18:48
mordred openstack role add --user b93e0f20dbd84fd8a1e7a46ca086aac7 --project 352c2a7cd67f4633a8b520214f938577 _member_18:48
mordredas the admin user18:49
mordredthat granted the _member_ role on the openstackci project to the openstackci user18:49
mordredI'm not sure how we're doing that with config management at the moment18:50
mordredI have done: openstack role add --user 7dbe0f121e424a74be2eed25399e2c75 --project 894a11e0a16a4c29bb8b884c1c70bf2c _member_18:51
clarkbmordred: rcarrillocruz has some changes but it was manual18:51
mordredwhich is the same for openstackjenkins18:51
mordredk18:51
clarkbmordred: if I run the command I get the same error fwiw18:52
clarkbso that may be a thing but is not the only thing18:54
clarkb2016-02-23 18:53:46.405 14134 TRACE keystone.auth.controllers DomainNotFound: Could not find domain: infra18:56
*** mrmartin has joined #openstack-sprint18:57
nibalizerhttps://review.openstack.org/#/c/208751/60/hiera/group/baremetal.yaml18:58
* clarkb tries to figure out how to list domains18:59
mordredclarkb: sorry19:01
mordredclarkb: I made one change locally that I did not propagate19:02
mordredclarkb: it's project_domain_name and user_domain_name not id19:02
mordredinfra is the name of the domain, not the id of the domain19:02
clarkbok I can make that update19:02
mordred'default' is 'special' because it's not created with openstack domain create - it's injected into the db - so default is both its name and its id19:02
clarkbthank you19:02
fungiaha, and otherwise the "id" would have to be a uuid?19:03
mordredyah19:03
fungimagical19:03
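
Putting mordred's correction together, a sketch of the working all-clouds.yaml entry (auth_url and password are placeholders): named domains are referenced with user_domain_name / project_domain_name, while only the special 'default' domain can also be referenced by id.

    clouds:
      openstackci-infracloud-west:
        region_name: RegionOne
        identity_api_version: '3'
        auth:
          auth_url: https://<keystone-host>:5000/v3
          username: openstackci
          password: <secret>
          project_name: openstackci
          user_domain_name: infra
          project_domain_name: infra
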
mordredfor some reason the unversioned keystone url is not working19:03
mordredah. there we go19:03
*** mrmartin has quit IRC19:04
mordredclarkb: are you editing the file by hand or via puppet?19:04
mordredclarkb: nm. I see the patch :)19:05
clarkbmordred: both, 283733 but manually editing to test before pushing tons of stuff19:05
clarkbok that got it working19:06
clarkbso 283733 should eb what we need19:06
clarkbthen once nibalizer and crinkle say it is safe we can boot a mirror node19:07
rcarrillocruzclarkb: are we good to land https://review.openstack.org/#/c/283717/19:12
rcarrillocruz?19:12
*** yolanda has joined #openstack-sprint19:13
clarkbrcarrillocruz: I think that is a better question for crinkle and nibalizer19:13
clarkbrcarrillocruz: it seems fine but I am not sure19:13
rcarrillocruzcrinkle: ^19:13
crinklercarrillocruz: i think it's good19:17
nibalizerhas anyone set quotas on the users?19:17
nibalizerI have not19:17
nibalizerrcarrillocruz: lgtm19:17
*** sivaramakrishna has joined #openstack-sprint19:30
crinklethis is a potentially scary change that would happen when 208751 lands: Notice: /Stage[main]/Ansible/Package[ansible]/ensure: current_value 1.9.4, should be 2.0.0.2 (noop)19:30
crinklenibalizer: this is a bug https://review.openstack.org/28376619:41
nibalizercrinkle: on what host19:45
*** sivaramakrishna has quit IRC19:45
pabelangercrinkle: do any of your bifrost roles / playbooks use the ansible_user / ansible_ssh_user?19:48
crinklepabelanger: i don't think so, i don't see those anywhere in bifrost19:50
pabelangercrinkle: okay, the only issue I ran into with 2.0.0.2 was https://github.com/ansible/ansible/issues/1366919:51
mordredyah. 2.0 should be safe19:51
mordredand is also desirable19:51
crinklethejulia also confirms it should be safe19:51
clarkbnibalizer: https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img is the image we want I think. Should be added as admin so that all users can see it19:52
nibalizerhttp://paste.openstack.org/show/487922/19:52
nibalizerkk19:52
mordredclarkb: yah. as admin with public=True set19:58
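
A hedged sketch of that image upload: fetch the upstream trusty cloud image and register it as admin with public visibility so every project can boot from it (the image name is an assumption):

    wget https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img
    openstack image create --disk-format qcow2 --container-format bare --public \
        --file trusty-server-cloudimg-amd64-disk1.img ubuntu-trusty
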
yolandahi, before we start using nodepool we need to be sure that https://review.openstack.org/#/c/281310/ lands20:05
yolandawe will have a problem with deleting if not20:05
clarkbexport FQDN=mirror.regionone.infracloud-west.openstack.org20:09
clarkbjeblair: fungi etc ^ that name look correct?20:09
rcarrillocruzmordred: it seems the ansible os modules don't allow modifying quotas? per-project compute, network, etc20:11
rcarrillocruz?20:11
rcarrillocruzor am i missing something20:11
crinklehaha20:11
crinklethou shalt not modify per-project quotas via automation20:11
*** mrmartin has joined #openstack-sprint20:11
rcarrillocruzaka, you can lock yourself out from doing anything useful by mistake :-)20:12
rcarrillocruzit seems i'm left with doing an ansible command: openstack quota blah20:13
GheRiveroI'll take a look to add the quota thing to ansible20:15
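
Until an os_quota-style module exists, a stopgap along the lines rcarrillocruz mentions is just the CLI; the cloud name and numbers are placeholders:

    openstack --os-cloud admin-infracloud-west quota set \
        --cores 100 --instances 50 --ram 204800 openstackci
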
yolandacrinkle so look at my global vars https://review.openstack.org/#/c/267438/ , you will see i add rsyslog to extra packages, it should be useful to have it20:29
crinkleyolanda: sounds good to me20:31
crinkleyolanda: iputils-ping also very very useful :)20:31
rcarrillocruzyeah, that's another one missing20:32
rcarrillocruzso is less20:32
rcarrillocruzvi20:32
rcarrillocruzreally minimal :/20:32
nibalizerrcarrillocruz: this was the output of running your stuff20:43
nibalizerNotice: /Stage[main]/Openstack_project::Infracloud::Controller/Keystone_tenant[openstackjenkins]/description: description changed 'Infra short lived resources' to ''20:43
nibalizerNotice: /Stage[main]/Openstack_project::Infracloud::Controller/Keystone_tenant[openstackci]/description: description changed 'Infra Long Lived Resources' to ''20:43
nibalizerso we lost descriptions (if we care)20:43
rcarrillocruzthx, i can re-add those descriptions to puppet20:44
clarkbhttps://review.openstack.org/283792 should fix puppet on nodepool.o.o21:04
nibalizermordred: clarkb  https://review.openstack.org/283796 should give us a chance at debugging it21:10
jeblairclarkb: could you +A https://review.openstack.org/28373921:12
clarkblooking21:13
clarkbjeblair: gerrit claims "cannot merge"21:13
clarkbjeblair: but before you rebase, we should rename *_domain_id to *_domain_name21:14
clarkbbut maybe it works this way due to the specialness of default mordred was talking about21:14
jeblairclarkb: okay, i was clearly cargo culting from something untested21:14
nibalizeroh i didn't see the cannot merge21:14
nibalizeri also wonder if we need the 'floating ip source' since likely we'll never boot nodes as the admin21:15
jeblairi thought this was worked out already with the other changes21:15
jeblairi'll just go create this from scratch from first principles21:15
clarkbjeblair: it is worked out as of the last change that merged which I thought I linked you to earlier21:16
jeblairclarkb: updated https://review.openstack.org/283739 with my own creation21:33
jeblairclarkb: it looks different21:33
clarkbjeblair: did you test it? yolanda was saying that without the domain stuff it did not work for them in east21:34
jeblairclarkb: i tested it in west21:34
clarkbok21:34
clarkband that does strip out the extra stuff that mordred pointed out21:34
jeblairclarkb: i used 'domain list' and 'server list'21:34
jeblairi thought that would be fairly representative21:34
clarkbyup those should both go through keystone auth so that would be sufficient21:35
jheskethMorning21:35
clarkbgood morning21:36
clarkbOS_CLIENT_CONFIG_FILE=/etc/openstack/all-clouds.yaml21:37
clarkbjeblair: ^21:37
*** baoli has joined #openstack-sprint21:37
fungiif any other infra-core reviewer is okay approving https://review.openstack.org/283768 in system-config i'll watch the run_all and puppet logs21:41
yolandafungi approved21:42
fungithanks yolanda21:43
fungimordred: interesting observation, ansible-inventory.cache seems to group servers by "region" alone, and we're about to have two (soon three) clouds with the same region name. is this purely cosmetic, or likely to lead to any unforeseen issues?21:49
*** mrmartin has quit IRC21:50
jeblairfungi: i think it is cosmetic unless we actually try to use that for something (eg, a hiera group)21:51
jeblairclarkb, nibalizer, crinkle, yolanda: remote:   https://review.openstack.org/283821 Simplify infracloud clouds.yaml21:53
rcarrillocruzclarkb, nibalizer, yolanda, crinkle: https://review.openstack.org/#/c/283816/21:56
rcarrillocruzand also https://review.openstack.org/#/c/283737/21:56
rcarrillocruzmissed a comma in a param in earlier patchset21:56
clarkbrcarrillocruz: see comment on https://review.openstack.org/#/c/283816/121:59
*** dfflanders has quit IRC22:00
jeblairclarkb: remote:   https://review.openstack.org/283825 Make all-clouds.yaml admin readable22:01
anteayajhesketh: nice to see you22:07
anteayajhesketh: I hope everything is going well in your world22:07
jheskethYep not bad thanks22:07
anteayajhesketh: good to hear22:07
jheskethHoping to help out some more today. Lots to catch up on though22:08
anteayajhesketh: lots to catch up on22:08
nibalizerjhesketh: currently we're working on getting the puppet-ansible-apply pipeline to work better, so that it puppets infracloud, so that we can boot nodes22:09
jheskethnibalizer: okay cool22:09
jeblairclarkb: final patch in series: remote:   https://review.openstack.org/283829 Add instructions on using openstackclient22:10
nibalizerthere is also a group working on getting baremetal00 under puppet/ansible management22:11
nibalizerand jeblair is working on getting config files and docs set up so that infra-roots on puppetmaster can run openstackclient22:11
pabelangerSo, it would be interesting to see us run https://github.com/jtaleric/browbeat on infracloud22:18
pabelangerIIRC, there is a playbook to do some performance tuning and ensure settings are configured properly22:19
yolandamordred, i was looking at moving the infra element to system-config. How do you see that working with puppet-openstackci and nodepool? i was thinking of creating an elements directory that gets deployed with puppet to, for example, /etc/system-config/elements, and then making nodepool use that as the element path, along with the one from project-config22:23
clarkbpabelanger: don't worry nodepool is harder on clouds than any other benchmark system if the number of times we break clouds is any indication22:24
pabelangerclarkb: I think there are some roles to check for tuning too. Never used it22:26
clarkbpabelanger: that would be interesting if it could point out places we could do better22:28
pabelangerclarkb: ya, I think it does something like that.  Simple noop checks22:28
*** cdelatte has quit IRC22:29
anteayahttps://storyboard.openstack.org/#!/board/722:34
*** baoli has quit IRC22:43
*** baoli has joined #openstack-sprint22:44
*** baoli has quit IRC22:45
*** baoli has joined #openstack-sprint22:46
*** baoli has quit IRC22:58
*** Clint has quit IRC22:58
*** Daviey has quit IRC22:58
*** dhellmann has quit IRC22:58
*** mjturek1 has quit IRC22:58
*** aarefiev has quit IRC22:58
*** krtaylor has quit IRC22:58
*** craige has quit IRC22:58
*** zhenguo_ has quit IRC22:58
*** clayton has quit IRC22:58
*** sergek has quit IRC22:58
*** devananda has quit IRC22:58
*** yolanda has quit IRC22:58
*** GheRivero has quit IRC22:58
*** ianw has quit IRC22:58
*** jroll has quit IRC22:58
*** dteselkin has quit IRC22:58
*** rfolco has quit IRC22:58
*** SpamapS has quit IRC22:58
*** clarkb has quit IRC22:58
*** _degorenko|afk has quit IRC22:58
*** sweston has quit IRC22:58
*** EmilienM has quit IRC22:58
*** natorious has quit IRC22:58
*** fungi has quit IRC22:58
*** mfisch has quit IRC22:58
*** NobodyCam has quit IRC22:58
*** tristanC has quit IRC22:58
*** yarkot has quit IRC22:58
*** mordred has quit IRC22:58
*** anteaya has quit IRC22:58
*** krotscheck has quit IRC22:58
*** morgabra has quit IRC22:58
*** pleia2 has quit IRC22:58
*** jeblair has quit IRC22:58
*** clif_h has quit IRC22:58
*** crinkle has quit IRC22:58
*** sbadia has quit IRC22:58
*** rockstar has quit IRC22:58
*** nibalizer has quit IRC22:58
*** sbadia has joined #openstack-sprint23:01
*** crinkle has joined #openstack-sprint23:01
*** clif_h has joined #openstack-sprint23:01
*** nibalizer has joined #openstack-sprint23:01
*** rockstar has joined #openstack-sprint23:01
*** aarefiev has joined #openstack-sprint23:01
*** mjturek1 has joined #openstack-sprint23:01
*** dhellmann has joined #openstack-sprint23:01
*** Daviey has joined #openstack-sprint23:01
*** Clint has joined #openstack-sprint23:01
*** devananda has joined #openstack-sprint23:01
*** sergek has joined #openstack-sprint23:01
*** clayton has joined #openstack-sprint23:01
*** zhenguo_ has joined #openstack-sprint23:01
*** craige has joined #openstack-sprint23:01
*** krtaylor has joined #openstack-sprint23:01
*** NobodyCam has joined #openstack-sprint23:01
*** mfisch has joined #openstack-sprint23:01
*** fungi has joined #openstack-sprint23:01
*** natorious has joined #openstack-sprint23:01
*** EmilienM has joined #openstack-sprint23:01
*** sweston has joined #openstack-sprint23:01
*** yolanda has joined #openstack-sprint23:01
*** ianw has joined #openstack-sprint23:01
*** GheRivero has joined #openstack-sprint23:01
*** jroll has joined #openstack-sprint23:01
*** dteselkin has joined #openstack-sprint23:01
*** baoli has joined #openstack-sprint23:01
*** anteaya has joined #openstack-sprint23:03
*** krotscheck has joined #openstack-sprint23:03
*** jeblair has joined #openstack-sprint23:03
*** morgabra has joined #openstack-sprint23:03
*** pleia2 has joined #openstack-sprint23:03
*** tristanC has joined #openstack-sprint23:03
*** yarkot has joined #openstack-sprint23:03
*** mordred has joined #openstack-sprint23:03
*** yolanda has quit IRC23:03
*** GheRivero has quit IRC23:04
*** rfolco has joined #openstack-sprint23:06
*** SpamapS has joined #openstack-sprint23:06
*** clarkb has joined #openstack-sprint23:06
*** _degorenko|afk has joined #openstack-sprint23:06
*** clarkb has quit IRC23:07
*** SpamapS has quit IRC23:07
*** SpamapS has joined #openstack-sprint23:09
*** cdelatte has joined #openstack-sprint23:24
anteayahere is a board I created for the infra-cloud story: https://storyboard.openstack.org/#!/board/823:28
anteayaeveryone should be able to view it23:28
anteayanew tasks need to be added here: https://storyboard.openstack.org/#!/story/200017523:29
anteayathen I can pull it onto the board as a card23:29
*** yolanda has joined #openstack-sprint23:30
*** GheRivero has joined #openstack-sprint23:31
*** clarkb has joined #openstack-sprint23:34
jheskethare all the patches using the infra-cloud topic?23:46
* jhesketh thinks he'll just go reviewing23:46
jheskethalthough let me know if there's something in particular I can help with23:46
nibalizerjhesketh: https://review.openstack.org/#/c/283862/ i'd like to see this reviewed23:48
jheskethnibalizer: did you see Clint's query on that one?23:53
anteayajhesketh: can you see the review lane on the infra-cloud board? https://storyboard.openstack.org/#!/board/823:53
jheskethanteaya: yep, will work through that23:54
anteayajhesketh: the cards aren't mapped to gerrit review ids23:54
anteayabut those cards have been prioritized23:54
jheskethokay I'll try and find the corresponding reviews23:55
anteayajhesketh: you are now an owner of the board23:57
anteayajhesketh: you can move things around in the lanes and add lanes and such23:57
anteayajhesketh: I also made you an owner on my test board: https://storyboard.openstack.org/#!/board/723:58
anteayajhesketh: so you have a place to play23:58
nibalizerjhesketh: krm23:59
nibalizerso how does the del statement work23:59
*** baoli has quit IRC23:59
