Wednesday, 2016-02-24

nibalizerhrm00:00
nibalizerjhesketh: I'll fix it for that00:01
fungidel can be used to remove a key from an associative array, or to delete an arbitrary python object reference for that matter00:10
fungisort of the inverse of variable assignment00:10
* jhesketh 's irc client hasn't been pinging him00:19
jheskethanteaya: cool thanks00:19
jheskethnibalizer: yeah if the object doesn't exist you get an error (key maybe, I can't remember)00:20
Clintyeah, KeyError00:27
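
A quick Python illustration of the del semantics fungi, jhesketh, and Clint cover above:

    d = {'a': 1, 'b': 2}
    del d['a']            # removes key 'a' from the dict
    try:
        del d['missing']
    except KeyError:      # deleting an absent key raises KeyError, as Clint says
        pass
    del d                 # unbinds the name itself; using d now raises NameError
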
nibalizerjhesketh: we are wrapping up00:28
nibalizertalk to you on the other side00:28
jheskethnibalizer: no trouble00:28
jheskethhave a good evening all!00:29
anteayathanks jhesketh, have a great day00:29
jheskethwill do00:29
anteayasee you tomorrow00:29
jhesketh:-)00:29
*** rfolco has quit IRC00:41
*** rfolco has joined #openstack-sprint00:43
*** rfolco has quit IRC00:43
*** imcsk8_ has joined #openstack-sprint01:26
*** baoli has joined #openstack-sprint01:58
*** yolanda has quit IRC02:28
*** sivaramakrishna has joined #openstack-sprint02:46
*** baoli has quit IRC03:55
*** baoli has joined #openstack-sprint04:03
*** yolanda has joined #openstack-sprint04:24
*** baoli has quit IRC04:28
*** baoli has joined #openstack-sprint04:29
*** baoli has quit IRC04:39
clarkbpretty sure that the issue with making a mirror was security groups04:40
clarkbwe have two rules without a source ip range04:40
clarkbthis is also very funky with openstackclient and I do not recommend it04:40
clarkbpuppet is building a mirror now05:02
clarkbif this works we can add dns records in the morning and run nodepool against the cloud05:02
clarkbthe problem with security group rules was we had the defaults: no port 22 ingress, just an intergroup rule, which is useless for external communication. I deleted the two intergroup rules (ipv4 and ipv6) and added two rules that allow all ipv4 and all ipv605:04
clarkbwe typically have to do this in every cloud we setup as users so this is expected05:04
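
clarkb did this with openstackclient (he pastes the exact steps at 16:14 below); a rough shade equivalent, assuming the 'default' group on a neutron-based cloud, would be:

    import shade

    cloud = shade.openstack_cloud(cloud='openstackci-infracloud-west',
                                  region_name='RegionOne')
    sg = cloud.get_security_group('default')
    # delete the intergroup rules (the ones keyed on remote_group_id)
    for rule in sg['security_group_rules']:
        if rule.get('remote_group_id'):
            cloud.delete_security_group_rule(rule['id'])
    # add allow-all rules: no protocol/port restriction means all traffic
    for ethertype in ('IPv4', 'IPv6'):
        cloud.create_security_group_rule(sg['id'], direction='ingress',
                                         ethertype=ethertype)
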
clarkbhttp://15.184.54.245/ tada05:17
clarkbapache did not start after the first reboot, so going to see if that is consistent but its up05:17
clarkbwill be down a few minutes while I reboot05:18
clarkbAH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1. Set the 'ServerName' directive globally to suppress this message. I think that may be the issue; I had to manually restart apache after rebooting05:20
clarkbbut we can figure that out in the morning, otherwise we have a mirror05:20
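
The AH00558 warning itself is harmless, but the usual way to silence it is a global ServerName directive; a sketch, assuming the mirror's eventual DNS name:

    # e.g. in /etc/apache2/apache2.conf or a conf-enabled snippet
    ServerName mirror.regionone.infracloud-west.openstack.org
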
*** openstack has joined #openstack-sprint13:21
*** mrmartin has quit IRC13:55
*** rfolco has joined #openstack-sprint14:01
pabelangerclarkb: nice!14:03
pabelangerYa, I think ansible-launch-node could be used to ensure sane defaults for security groups too14:04
pabelangerat least give a warning if they are missing14:04
pabelangerheading down to lobby for some breakfast14:05
*** AJaeger has joined #openstack-sprint14:19
fungion my way to lobby to meet for pancakes14:26
mordredpabelanger: yes. I agree14:30
*** krtaylor has quit IRC15:02
*** mrmartin has joined #openstack-sprint15:05
*** krtaylor has joined #openstack-sprint15:15
*** yolanda has quit IRC15:24
*** krtaylor has quit IRC15:34
*** krtaylor has joined #openstack-sprint15:47
*** baoli has quit IRC15:58
*** ricky1 has joined #openstack-sprint15:58
*** ricky1 has left #openstack-sprint15:59
*** baoli has joined #openstack-sprint15:59
*** ricky1 has joined #openstack-sprint16:04
mordredmorning everybody - I hit high travel burnout levels and went home instead of joining you in fort collins16:08
mordreddownside - I don't get to beer with everyone16:08
mordredupside - I probably won't snap and kill anyone with an ax16:08
*** dfflanders has joined #openstack-sprint16:10
*** ricky1 has quit IRC16:13
jeblairmordred: we're having a conversation about booting and dns etc16:14
* crinkle approves not being axemurdered16:14
clarkbrcarrillocruz: http://paste.openstack.org/show/488043/ is what I did to openstackci security groups16:14
jeblairmordred: maybe we should get on the pbx?16:14
clarkbrcarrillocruz: basically delete the intergroup rule because it is redundant when you allow all traffic and they don't perform well. Then add some rules to allow all ipv4 and ipv6 traffic16:14
mordredjeblair: yes. maybe so16:15
*** yolanda has joined #openstack-sprint16:16
*** mrmartin has quit IRC16:17
*** yolanda has quit IRC16:18
*** yolanda has joined #openstack-sprint16:19
rcarrillocruzok16:22
rcarrillocruzclarkb:16:22
rcarrillocruzwas able to reconnect to my bouncer16:22
rcarrillocruzthx16:22
mordred/var/lib/puppet/reports/controller00.hpuswest.ic.openstack.org/16:23
mordreddest: /var/lib/puppet/reports/{{ ansible_fqdn }}16:24
mordredsrc: "{{ puppet_logfile }}"16:25
mordredlogfile: "{{ puppet_logfile }}"16:25
clarkbrcarrillocruz: http://paste.openstack.org/show/488043/16:26
rcarrillocruzack16:26
nibalizerohai16:28
nibalizeri am at the security checkpoint thing16:28
nibalizercan someone fetch me?16:28
Clintnibalizer: on it16:28
nibalizerty16:29
mordredhttps://review.openstack.org/28423616:30
jeblairClint: https://review.openstack.org/27549116:30
jeblairclarkb: ^16:30
mordredr = requests.post(endpoint, json=payload, **requests_kwargs)16:36
mordredhttps://review.openstack.org/28424316:42
rcarrillocruzyolanda:  https://review.openstack.org/#/c/284246/16:46
*** baoli_ has joined #openstack-sprint16:46
*** baoli has quit IRC16:48
clarkbrcarrillocruz: you also need to delete the existing group rules16:52
clarkbnova falls over when you add too many hosts to a security group that has group rules16:54
clarkbso you actually need to delete them :/16:54
clarkbrcarrillocruz: it may be easiest, to delete all rules, then add the ingress and egress back in?16:55
rcarrillocruzclarkb: i'm dealing with another thing on east baremetal, where i'm testing things16:58
rcarrillocruzhttp://paste.openstack.org/show/488051/16:59
rcarrillocruzcan you tell me if you have two different 'default' secgroups in west16:59
rcarrillocruzcos that's another edge case i need to fix then16:59
clarkbwow you have two default security groups16:59
jeblairclarkb: sigh16:59
rcarrillocruzyeah , w.t.f.16:59
clarkbrcarrillocruz: are you using admin creds?17:01
clarkbI wonder if those are two different security groups for different users17:01
rcarrillocruzyeah17:01
rcarrillocruzadmin17:01
clarkbrcarrillocruz: I think that is it17:02
clarkbjeblair: https://review.openstack.org/#/c/283944/17:02
rcarrillocruzso what user do you use in west17:02
rcarrillocruzopenstackci17:02
rcarrillocruz?17:02
rcarrillocruzon the clouds.yaml17:02
rcarrillocruz...17:02
clarkbrcarrillocruz: I used openstackci because that is where I booted the mirror, but now we have to do the same for openstackjenkins17:03
clarkbbasically each user needs the same config17:03
*** mjturek1 has left #openstack-sprint17:03
med_Is mordred in FOCO/HPE today?17:05
Clintin a way17:05
clarkb| 355739 | infracloud-west        | ubuntu-trusty    | ubuntu-trusty                        | 1456332856 | None                                 | None                                 | building | 00:00:11:55 |17:06
mordredmed_: nope. I'm at home and have dialed in17:06
mordredclarkb: so - it is possible to configure the global default security group17:07
mordredclarkb: that changes what the definition of the security group is that gets defined on project creation17:07
mordredclarkb: I THINK17:07
mordredclarkb: but I'm not sure it's worth doing that - since as you point out we have to fix this on our other clouds too17:08
clarkbmordred: last time we looked into that I thought we decided we couldn't do it17:08
mordredclarkb: so just having some ansible that makes the security groups be what we want is probably more sane17:08
clarkbbecause neutron is opinionated17:08
mordredoh - that's also possible17:08
fungialso there's always the chance we need to fix something about our security groups after the project is created, and changing the global default only fixes it for projects you create after you change that, as memory serves17:10
mordredyah17:12
clarkbmordred: the latest run seems to have run with your changes in and it failed17:21
clarkbmordred: the file is logged and noted in the puppet_run_all logfile17:21
mordredwoot17:22
clarkbmordred: I am assuming your next thing is to try and post it directly?17:22
mordredclarkb: that is a much nicer error message17:23
mordrednibalizer: regarding that ^^17:23
mordrednibalizer: is there any way that puppetdb is breaking because there is no controller cert in the puppetmaster CA even though we're connecting as puppetmaster?17:24
mordrednibalizer: the error is:17:24
mordredException: [Errno 336265218] _ssl.c:355: error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib"17:24
mordreduse_PrivateKey makes me think something is not authing in the way we think17:25
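
The requests.post fragment mordred pasted at 16:36 carries its TLS material in **requests_kwargs; the shape is roughly this (paths and URL are placeholders, not the actual patch), and a missing or unreadable key file is the classic cause of an SSL_CTX_use_PrivateKey_file "system lib" error:

    import requests

    requests_kwargs = {
        # client cert and key presented to puppetdb
        'cert': ('/var/lib/puppet/ssl/certs/client.pem',
                 '/var/lib/puppet/ssl/private_keys/client.pem'),
        'verify': '/var/lib/puppet/ssl/certs/ca.pem',  # CA bundle to trust
    }
    endpoint = 'https://puppetdb.example.org:8081/...'  # placeholder URL
    payload = {'report': '...'}                         # placeholder body
    r = requests.post(endpoint, json=payload, **requests_kwargs)
    r.raise_for_status()
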
clarkbjeblair: OS_CLIENT_CONFIG_FILE=/etc/openstack/all-clouds.yaml python shade-launch-node.py --cloud openstackci-infracloud-west --region RegionOne --flavor mirror --image ubuntu-trusty --config-drive $FQDN17:34
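
The same boot expressed directly against shade (a sketch; the real shade-launch-node.py in system-config does considerably more, and the server name here merely stands in for $FQDN):

    import os
    import shade

    # os-client-config honors this env var when locating clouds.yaml
    os.environ['OS_CLIENT_CONFIG_FILE'] = '/etc/openstack/all-clouds.yaml'
    cloud = shade.openstack_cloud(cloud='openstackci-infracloud-west',
                                  region_name='RegionOne')
    server = cloud.create_server(
        'mirror.regionone.infracloud-west.openstack.org',
        image=cloud.get_image('ubuntu-trusty'),
        flavor=cloud.get_flavor('mirror'),
        config_drive=True, wait=True)
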
nibalizermordred: woot17:37
nibalizermordred: a little nitpick on one of the patches17:44
nibalizerotherwise lgtm17:44
mordrednibalizer: k17:44
nibalizercan we get another core on 284290 and 28429217:50
rcarrillocruzmordred: was hoping to chat about the ansible creation of initial resources of our clouds thing. I've been pushing random playbooks to system-config (as you probably saw), but I was thinking of pulling the cloud launcher bits from infra-ansible into its own repo, as an ansible role17:56
rcarrillocruzso infra can use17:56
rcarrillocruzinfra-ansible can use it17:56
rcarrillocruzother things can use it17:56
rcarrillocruzthoughts17:56
rcarrillocruz?17:56
rcarrillocruzmy idea is that we would have a big yaml17:56
rcarrillocruzwith all our clouds resources inventory (servers, networks, flavors, etc)17:57
rcarrillocruzand the role would just create all them17:57
rcarrillocruzright now it assumes it's just a run against a single cloud, but could do it multi-cloud17:57
rcarrillocruzclarkb: https://review.openstack.org/#/c/284246/17:57
clarkbrcarrillocruz: that doesn't appear to delete the group rules18:00
rcarrillocruzright, because it's idempotent...18:00
rcarrillocruzif there are, it will just change them18:00
rcarrillocruzif they are not there, it will create them18:01
clarkbrcarrillocruz: where are we deleting them then?18:01
clarkbwe must delete the old inter group rules18:01
clarkbrcarrillocruz: see http://paste.openstack.org/show/488043/18:01
mordrednibalizer: I just made three patches in ansible-puppet, system-config and ansible-puppet that you might want to look at18:01
mordrednibalizer: they're not urgent18:02
nibalizermordred: fure18:02
nibalizersure18:02
mordredrcarrillocruz: yes, I agree with big yaml that the rle would just create them18:02
nibalizeryou might +2a this https://review.openstack.org/#/c/278850/18:02
mordredrcarrillocruz: I mean, I think it's actually just entry in group_vars18:02
mordredrcarrillocruz: but yeah18:02
mordredrcarrillocruz: and also, the entry for each server could totally have cloud: rax or whatever in its entry18:03
rcarrillocruzso18:03
rcarrillocruzhow about18:03
rcarrillocruzi create a github repo18:03
rcarrillocruzwith the split-out role18:03
rcarrillocruzthen i show folks18:03
rcarrillocruzthen we talk about bringing it into openstack-infra18:03
rcarrillocruz?18:03
clarkbwe don't need github18:03
clarkbmaybe I missed something18:03
rcarrillocruzi can go ahead and create a project in that namespace18:03
mordredclarkb: he's saying "why don't I make the split and put it somewhere"18:03
rcarrillocruzright18:04
mordredyah. then we can import it18:04
mordredyes. we need a launch-node role18:05
mordredessentially18:05
mordredso we need a repo called 'ansible-role-launch-node'18:05
jeblairmordred, clarkb, crinkle: remote:   https://review.openstack.org/284325 Set hostname in launch node18:07
jeblair15.184.55.918:13
*** degorenko is now known as _degorenko|afk18:14
mordredrcarrillocruz: this is the role you're talking about splitting right: ubygems/rubygems-mirror ?18:15
mordredgah18:15
rcarrillocruzno18:16
rcarrillocruzdef i'm not a ruby guy18:16
rcarrillocruz:D18:16
rcarrillocruzlet me link you18:16
mordredrcarrillocruz: this is the role you're talking about splitting right: https://git.openstack.org/cgit/openstack-infra/infra-ansible/tree/roles/setup_openstack_resources18:16
rcarrillocruzhttp://git.openstack.org/cgit/openstack-infra/infra-ansible/tree/roles/setup_openstack_resources18:16
rcarrillocruzthat one18:16
mordredyah18:16
rcarrillocruz++18:16
rcarrillocruzso idea would be18:16
rcarrillocruzyou feed in a yaml18:16
rcarrillocruzthat would contain:18:16
rcarrillocruzclouds:18:16
rcarrillocruz  bluebox:18:16
rcarrillocruz    servers:18:17
rcarrillocruz    ....18:17
rcarrillocruz    flavors:18:17
rcarrillocruz    .....18:17
rcarrillocruz  infracloud-west:18:17
rcarrillocruz    servers:18:17
rcarrillocruz    ...18:17
rcarrillocruzso on and so forth18:17
crinklefungi: nibalizer https://review.openstack.org/#/c/28433118:18
clarkbrcarrillocruz: cloud then region then server is the hierarchy I think18:20
rcarrillocruzgood call18:21
rcarrillocruzregion below cloud18:21
rcarrillocruz++18:21
jeblairwho wants to review the grafana change? https://review.openstack.org/28331218:23
AJaegerjeblair: I did...18:26
pabelangerAJaeger: ^18:27
AJaegerpabelanger: What do you mean?18:28
rcarrillocruzyolanda: https://review.openstack.org/#/c/284246/18:29
jeblairAJaeger: thanks! :)18:29
AJaegerpabelanger: wrong channel? Did you want to point out the project-config change?18:29
pabelangerAJaeger: sorry, lagged out.  Was referring to the grafana change18:31
pabelangerhaving some bad wifi atm18:32
AJaegerno worries18:33
mordredfungi: https://git.openstack.org/cgit/openstack-infra/infra-ansible/tree/roles/setup_openstack_resources18:39
mordredgah18:39
mordredfungi: https://github.com/emonty/ansible/commit/c737bd48bc4ee246c378898abb80bacdd80c0e2f18:39
mordredrcarrillocruz, clarkb: I disagree - I think it should be servers: and each server should have cloud and region as params18:40
clarkbmordred: hrm18:40
mordredthis is because the loops in the yaml will be very weird otherwise18:40
mordredregion and cloud are parameters to os_server18:40
clarkbI guess either way works18:40
clarkbah18:40
mordredso it's an easy loop the one way18:41
clarkbyup18:41
fungiexciting! so what conditions cause the list to be duplicated? or is this just belt/suspenders on theory that it might happen?18:41
mordredfungi: no clue. just belt and suspenders :)18:41
fungiokay18:41
fungicurious to see whether this makes it go away18:41
mordredme too18:42
mordredrcarrillocruz, clarkb: also, the yaml can go into group_vars/localhost.yml18:43
mordredrcarrillocruz: and can just be referenced that way18:43
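
In ansible terms mordred's shape is a single os_server task looping over a flat servers list; the same idea sketched with shade (the entries are hypothetical) shows why each entry carrying its own cloud and region makes the loop trivial:

    import shade

    servers = [  # per mordred, this list would live in group_vars/localhost.yml
        {'name': 'mirror01', 'cloud': 'infracloud-west', 'region': 'RegionOne',
         'image': 'ubuntu-trusty', 'flavor': 'mirror'},
        {'name': 'mirror02', 'cloud': 'bluebox', 'region': 'RegionOne',
         'image': 'ubuntu-trusty', 'flavor': 'mirror'},
    ]
    for s in servers:
        cloud = shade.openstack_cloud(cloud=s['cloud'],
                                      region_name=s['region'])
        cloud.create_server(s['name'],
                            image=cloud.get_image(s['image']),
                            flavor=cloud.get_flavor(s['flavor']),
                            wait=True)
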
jeblairfatal: [cacti.openstack.org]: FAILED! => {"changed": false, "failed": true, "msg": "value of logdest must be one of: stdout,syslog, got: ['stdout']"}18:44
jeblairhrm.18:44
jeblairthat kind of looks like an ansible puppet role problem...18:45
clarkbjeblair: ya a type mismatch18:45
clarkbbetween string and list of string18:46
jeblairi don't know why it would only happen when running the playbook manually...18:46
jeblairclarkb: yeah18:46
mordredjeblair: how are you running the playbook manually?18:47
jeblairansible-playbook --limit='cacti.openstack.org:localhost' /tmp/jeblair.yaml18:47
jeblairmordred: /tmp/jeblair.yaml is a copy of the 'else' playbook18:47
jeblairmordred: with !disabled removed from hosts18:47
mordredjeblair: /tmp/jeblair.yaml is going to be missing the group_vars18:47
jeblairbecause cacti is in the disabled host list18:47
mordredjeblair: you need to also copy the group_vars dir from the playbooks dir18:48
mordredjeblair: that said - you have uncovered a bug in the role18:48
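
ansible resolves group_vars/ relative to the playbook's own directory, which is why the copy in /tmp lost them; a sketch of the layout (paths and the playbook filename are assumed):

    .../playbooks/
        remote_puppet_else.yaml   # the 'else' playbook
        group_vars/
            all.yaml              # picked up when run from here
    /tmp/
        jeblair.yaml              # copied playbook: no group_vars/ alongside
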
*** mrmartin has joined #openstack-sprint18:48
mordred            logdest=dict(18:56
mordred                required=False, default=['stdout'],18:56
mordred                choices=['stdout', 'syslog']),18:56
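
That argument spec is the bug: AnsibleModule validates the supplied value against choices, and the list default ['stdout'] can never equal the string 'stdout'. The fix mordred pushes just below presumably amounts to something like:

    # ansible-puppet module argument spec: the default must be a plain
    # string for the choices check to pass (a sketch of the likely fix)
    logdest=dict(
        required=False, default='stdout',
        choices=['stdout', 'syslog']),
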
jeblairclarkb, pleia2, mordred, nibalizer: root@puppetmaster:~# ./kick.sh cacti.openstack.org18:57
nibalizerjeblair: oo18:57
mordredjeblair, nibalizer, clarkb: https://review.openstack.org/28434819:00
mordredjeblair, nibalizer, clarkb: https://review.openstack.org/28435219:06
nibalizerpabelanger: http://paste.openstack.org/show/488081/ should get you started19:09
*** mrmartin has quit IRC19:14
rcarrillocruzclarkb: https://review.openstack.org/#/c/284246/ is good now19:22
rcarrillocruzwe had to debug some random ansible issue: if you set environment on a task, the with_lines command doesn't get it, just the task's shell command :/19:22
pabelangernibalizer: thanks19:27
mordredrcarrillocruz: reviewed19:30
rcarrillocruzmordred: lol19:35
rcarrillocruzi didn't push19:36
rcarrillocruzjust commited locally19:36
rcarrillocruz(facepalm)19:36
mordredheh19:36
rcarrillocruzcheck it out again pls19:36
mordredrcarrillocruz: ossum19:37
pabelangerhttp://grafana.openstack.org/dashboard/db/nodepool-infra-cloud19:54
pabelangerexciting19:54
jeblairmordred, fungi, yolanda: remote:   https://review.openstack.org/284386 Fix omfra admin creds in all-clouds.yaml20:03
fungigah20:04
fungijeblair: i guess this will allow us to perform admin-level actions from the puppetmaster host rather than limiting it to the openstackci user. seems desirable, but i can see why it was previously openstackci for consistency with clouds where we don't have admin access20:07
jeblairfungi: no, it's a new cloud definition: "admin-infracloud-west"20:08
jeblairfungi: we have one for bluebox too20:08
jeblairfungi: we have 2 cloud dfns for most clouds, and 3 for these20:08
fungioh, yep20:09
fungii missed the admin- prefix on the name there20:10
clarkbjeblair: `OS_CLIENT_CONFIG_FILE=/etc/openstack/all-clouds.yaml openstack --os-cloud openstackjenkins-infracloud-west --os-region-name RegionOne server list`20:16
clarkbnote that image list does not work (it gets a 404); nova image-list does work for whatever reason20:17
AJaegerIs nibalizer change to run devstack-trusty in infra cloud fine to merge? https://review.openstack.org/#/c/284391 Then I'll +A...20:17
nibalizerAJaeger: yes20:17
nibalizerwe're sprinting as you know20:18
nibalizerwe think it will work20:18
AJaegernibalizer: I know you're sprinting - just don't know timing ;)20:18
AJaegernibalizer: then let's try it ;)20:18
clarkbmordred: `OS_CLIENT_CONFIG_FILE=/etc/openstack/all-clouds.yaml openstack --os-cloud openstackci-infracloud-west --os-region-name RegionOne server list` on the puppetmaster 404s20:20
mordredclarkb: ossum20:20
clarkbhttps://jenkins03.openstack.org/computer/ubuntu-trusty-infracloud-west-8281553/20:25
clarkbhttps://jenkins03.openstack.org/job/swift-coverage-bindep/2/console20:30
clarkbthe first job on infracloud20:30
AJaeger0.3 jobs ;) in use according to grafana20:31
AJaegerwooot!20:31
nibalizerhttps://review.openstack.org/#/c/284398/ to bump to 10 nodes20:32
jeblairhttps://review.openstack.org/28131020:34
pabelangerhttps://jenkins03.openstack.org/job/swift-coverage-bindep/2/console infracloud-west20:36
pabelangerdoh20:36
pabelangershould read before posting20:37
rcarrillocruz\o/20:42
pabelangershould read before posting20:44
pabelangerfail20:44
Clintjob's done20:51
clarkbyolanda: http://paste.openstack.org/show/488102/20:57
yolandathx20:57
yolandaclarkb http://paste.openstack.org/show/488103/20:59
jheskethMorning21:05
jheskethhow's it going?21:07
clarkbjhesketh: we ran our first job on infracloud via nodepool https://jenkins03.openstack.org/job/swift-coverage-bindep/2/console21:08
jheskethoh exciting :-)21:08
clarkbit was slower than expected so now we are digging into why that may be21:08
jheskethokay21:09
jheskethstill, that's pretty cool21:09
jheskethkudos to all!21:09
rcarrillocruzhttps://pythonhosted.org/python-hpilo/health.html21:15
rcarrillocruzsearch for 'raid' ^21:16
rcarrillocruzcrinkle: ^21:16
clarkbcompute035:/sys/devices/pci0000:00/0000:00:07.0/0000:06:00.0/host4/target4:0:0/4:0:0:021:19
clarkbfungi: ^21:19
fungithanks21:20
clarkbjhesketh: currently looking into what the disk situation is as disk IO (installing packages) was really slow21:22
clarkbwe seem to have confirmed that / sits on a logical volume provided by an HP raid controller with disks in a raid 021:23
clarkbRAID bus controller: Hewlett-Packard Company Smart Array G6 controllers (rev 01)21:24
Clintcciss-vol-status21:24
clarkbcat compute035:/sys/devices/pci0000:00/0000:00:07.0/0000:06:00.0/host4/target4:0:0/4:0:0:0/raid_level21:26
nibalizerSpamapS: ohai21:26
clarkbsays RAID 021:26
SpamapSo/21:26
nibalizerdoo you remember the disk situation of the computers in west?21:26
SpamapSin the past we had some non-uniformity in the hp west rack w.r.t. disk controllers21:26
nibalizerraid etc?21:26
SpamapSMANY had no BBWC21:27
SpamapSsome did21:27
jeblairSpamapS: do you want to voice with us?21:27
SpamapSAt some point we had some upgraded21:27
jeblairlet me rephrase...21:27
SpamapSlol I get it21:27
jeblairSpamapS: if you want to voice conference, we can arrange it; let me know.21:28
SpamapSyeah I can jump in there21:28
jeblairhttps://wiki.openstack.org/wiki/Infrastructure/Conferencing21:29
jeblairroom 601321:29
SpamapSin 2 voice conversations already at the moment, standby21:29
SpamapSdown to one now21:30
SpamapSif I could just get the voices in my head to shut up... ;)21:30
SpamapSjeblair: ok, jitsi-ing into that conf room21:30
jeblairSpamapS: cool; put the voices in your head on mute ;)21:31
ClintP212 controller, single physical 2tb drive in "raid 0"21:36
crinkleSpamapS: ubuntu@15.184.52.321:40
*** krtaylor has quit IRC21:41
clarkbCONF.libvirt.disk_cachemodes21:42
clarkbis what nova reads to set the cache mode21:43
clarkbhttp://docs.openstack.org/kilo/config-reference/content/list-of-compute-config-options.html21:44
clarkbdisk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none is what ps reports we are doing21:45
nibalizer[    4.172895] sd 4:0:0:0: [sda] Write cache: disabled, read cache: disabled, doesn't support DPO or FUA21:45
nibalizerfrom dmesg21:45
clarkbhttps://www.suse.com/documentation/sles11/book_kvm/data/sect1_1_chapter_book_kvm.html explains our options21:47
Clintand SpamapS recommended unsafe21:48
clarkbyup21:48
Clintpuppet-nova: # [*libvirt_disk_cachemodes*]21:51
crinkle^21:51
*** krtaylor has joined #openstack-sprint21:55
rcarrillocruzctrl all show status21:58
crinkleI think https://review.openstack.org/284435 is the setting we want?21:58
crinkleshould probably try running it on one of the computes before running it on all of them21:59
rcarrillocruzhttp://www.lazysystemadmin.com/2012/01/hpacucli-check-raid-information-from.html22:00
rcarrillocruzjeblair:22:00
clarkbcrinkle: file=/var/lib/nova/instances/758549a0-0dcc-4f53-bb98-647cb62b782b/disk,if=none,id=drive-virtio-disk0,format=qcow2,cache=none22:02
clarkbfile=unsafe22:03
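
The ps output shows the instances running with cache=none today; crinkle's 284435 presumably flips that via puppet-nova's libvirt_disk_cachemodes, which renders roughly this nova.conf setting on the computes (a sketch):

    [libvirt]
    # after a restart, qemu's -drive line should show cache=unsafe
    disk_cachemodes = file=unsafe
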
jeblairrcarrillocruz: https://etherpad.openstack.org/p/zmeWythuDL22:12
jeblairmordred: how do i get output from an ansible shell command from a playbook?22:18
jeblairmordred: see the second part of https://etherpad.openstack.org/p/zmeWythuDL22:18
mordredjeblair: looking22:18
mordredjeblair: that's untested - gimme a sec and I'll tell you for real22:19
*** delattec has joined #openstack-sprint22:34
*** cdelatte has quit IRC22:37
clarkb85b1dc74-266c-41c8-aa26-1cee94ea7048 is new instance22:41
clarkbhttps://jenkins01.openstack.org/job/gate-tempest-dsvm-full-ceph/6837/console22:41
*** rfolco has quit IRC22:46
nibalizer2016-02-24 23:09:01.863 5846 WARNING nova.compute.api [req-a825bb40-3dc4-470b-a992-ee6cc1c4d0f8 7dbe0f121e424a74be2eed25399e2c75 894a11e0a16a4c29bb8b884c1c70bf2c - - -] [instance: 85b1dc74-266c-41c8-aa26-1cee94ea7048] instance's host compute035.hpuswest.ic.openstack.org is down, deleting from database23:09
jeblairnibalizer: remote:   https://review.openstack.org/284448 Cacti: add hosts to alternate graph trees23:11
jeblairmordred:  https://review.openstack.org/28432523:17
crinkleroot@controller00:~# nova host-update23:18
crinkleusage: nova host-update [--status <enable|disable>]23:18
crinkle                        [--maintenance <enable|disable>]23:18
crinkle                        <hostname>23:18
crinkle^ i think not fiery enough23:18
rcarrillocruzare we good to land:23:23
rcarrillocruzhttps://review.openstack.org/#/c/283737/23:23
rcarrillocruzhttps://review.openstack.org/#/c/283816/23:24
rcarrillocruzhttps://review.openstack.org/#/c/283870/23:24
rcarrillocruzhttps://review.openstack.org/#/c/284246/23:24
rcarrillocruzwe already marked them as completed on the whiteboard, so we should merge those23:24
fungijeblair: mordred: clarkb: Clint: https://review.openstack.org/284463 addresses my comments on 28432523:28
mordredfungi: wow. /etc/mailname23:28
Clintfungi: why do you want fqdn in /etc/hostname?23:29
*** mestery has quit IRC23:30
*** mestery has joined #openstack-sprint23:35
fungiClint: don't we?23:36
Clinti doubt it23:36
fungioh, we were just making sure the fqdn was in /etc/hosts on the physical hosts23:38
clarkbya we just needed hostname -f to work23:38
fungihrm, but we _do_ want /etc/mailname to have the fqdn then23:38
Clintyes23:38
fungigrr... yep, you're right. this is what i get for confusing /etc/hostname with the bsds' /etc/myname23:40
nibalizerhttps://review.openstack.org/#/c/284465/1 is pretty straightforward23:41
clarkbmordred: 85b1dc74-266c-41c8-aa26-1cee94ea704823:42
clarkbthis instance was killed in the middle of its life for what we think is nova compute changing its name23:42
clarkbbut it wasn't removed from the db, libvirtd was asked to stop it though23:43
mordredhttp://paste.openstack.org/show/488129/23:44
mordredhttp://paste.openstack.org/show/488130/23:45
crinklemordred: okay the other thing is that nova hypervisor-list | grep compute035 has two things23:48
crinkleso one of them needs to go away23:48
jeblairglean: error: unrecognized arguments: --version23:50
clarkbthat instance is no longer in nodepool23:55
mordredcrinkle: http://paste.openstack.org/show/488131/23:55
mordredcrinkle: which one is good?23:55
crinklemordred: this is the information nova has http://paste.openstack.org/show/488132/23:56
crinklemordred: so the one that has state 'down' is bad23:56
mordredcrinkle: so 21 is the bad one23:56
mordredcrinkle: deleted23:57
crinklemordred: lgtm ty23:57
mordredcrinkle: should I take out compute004 as well?23:57
crinklemordred: yes please23:57
rcarrillocruz- name: Disable glean23:59
rcarrillocruz  shell: echo 'manual' > /etc/init/glean.override creates=/etc/init/glean.override23:59
rcarrillocruzjeblair: ^23:59
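
For reference: an upstart override file containing 'manual' prevents the job from starting automatically at boot, and the creates= guard skips the task on later runs once the override file exists, keeping it idempotent.
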
mordredcrinkle: done23:59
crinklemordred: ty23:59
