Tuesday, 2016-05-24

*** jhesketh_ is now known as jhesketh00:02
anteayait sounds convincing to me00:15
nibalizeryah we'd use dns00:18
fungipleia2: anteaya: yes, that's the long and short of it00:18
nibalizerthat way we can start up storyboard02.o.o, get it all working right, then cutover dns real fast as the only change00:18
nibalizeras the standard-unit-of-upgrading00:18
fungipuppetboard would show the name from nova/inventory though00:19
*** bkero has joined #openstack-sprint00:19
nibalizerthat is true00:20
nibalizerand that would be sorta confusing00:20
fungii honestly wonder whether using a separately-zoned subdomain for these wouldn't solve some of our dns automation issues00:20
anteayaokay confusing reigns in puppetboard, but the users can find the thing they are looking for00:20
anteayayay00:20
fungithe launch can set up forward/reverse dns with the ordinal suffix, actual service addresses in the normal openstack.org zone are still maintained by hand though00:21
fungiso storyboard.openstack.org ends up being a cname to storyboard02.infra.openstack.org or whatever00:22
* nibalizer has reviewed everything with the topic trusty-upgrades00:22
nibalizeranybody got a patch they are about to hit submit on before I turn into a pumpkin?00:22
pleia2nibalizer: no no, a baseball00:22
fungiwe could (much more easily) delegate the infra.openstack.org zone to something maintained by automation that way00:22
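A sketch of the split fungi is describing, assuming BIND-style zone files; the nameserver names and address below are placeholders:

    ; openstack.org zone -- still maintained by hand
    storyboard      IN  CNAME  storyboard02.infra.openstack.org.
    ; delegate the infra.openstack.org subdomain to automation-managed nameservers
    infra           IN  NS     ns1.automation.example.
    infra           IN  NS     ns2.automation.example.

    ; infra.openstack.org zone -- forward/reverse records written by launch tooling
    storyboard02    IN  A      203.0.113.10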
nibalizerpleia2: are we bringing a glove?00:23
pleia2nibalizer: I am not00:23
nibalizeri have been told to get a giants cap00:23
nibalizerwe can use that00:23
funginibalizer: pumpkinize thyself00:23
pleia2fungi: I think we're going down a rabbit hole with delegated dns00:23
fungimaybe00:23
pleia2nibalizer: there you go :)00:23
fungidns automation is unlikely to happen in the nearish future otherwise though00:23
pleia2yeah00:23
fungiit's the problem of two parties who aren't tightly coupled and don't coordinate both being responsible for maintaining the openstack.org domain, each with separate needs/goals00:24
fungii'm unsure i'll be able to convince the other group who co-maintain that domain with us that we can move the whole thing somewhere with a better api00:25
fungibut if we limit our automation updates to a subdomain, we can delegate that somewhere else they don't need to worry about00:25
pleia2yeah, that's fair00:26
fungithat's the only reason i'm even considering it00:26
fungianyway, food for thought. that's something we can pretty easily switch to later without any significant change in the naming plan00:26
fungi_if_ we discover that lack of autoconfigured dns records is a blocker for automating the rest00:27
* pleia2 nods00:27
*** baoli has quit IRC00:32
*** baoli has joined #openstack-sprint00:33
*** baoli has quit IRC00:33
fungioh, here's one possible gotcha with the ordinal suffix00:33
fungido any of our puppet modules do special things with $::fqdn?00:33
*** baoli has joined #openstack-sprint00:34
fungilike, assume it's the site name for a name-based virtual host?00:34
fungiwe've been operating on the fqdn==service paradigm in some places for long enough we may have baked that assumption into configuration management00:35
bkeroYeah, there's stuff like this:00:37
bkero  class { 'openstack_project::mirror':00:37
bkero    vhost_name => $::fqdn,00:37
bkeroand stuff like this:00:37
bkero    controller_public_address        => $::fqdn,00:37
bkero    openstackci_password             => hiera('openstackci_infracloud_password'),00:38
fungithat will all need fixing as we come across it00:40
bkerooh definitely00:43
bkeroopenstack_project::mirror, openstack_project::storyboard, openstack_project::storyboard::dev, openstack_project::infracloud::controller are the classes that need updating (these are the entries that use ::fqdn in site.pp)00:45
bkeroinfracloud::controller might be able to get away with still using it, but I'm not familiar enough to say. Storyboard not so much.00:46
bkerocrinkle: ^00:46
*** rfolco has quit IRC00:54
*** rfolco has joined #openstack-sprint00:56
*** anteaya has quit IRC01:18
*** cdelatte has quit IRC01:37
*** baoli has quit IRC03:05
*** rfolco has quit IRC03:10
*** baoli has joined #openstack-sprint03:44
*** sivaramakrishna has joined #openstack-sprint04:26
*** baoli has quit IRC05:16
*** morgabra has quit IRC06:15
*** zhenguo_ has quit IRC06:16
*** zhenguo_ has joined #openstack-sprint06:18
*** med_ has quit IRC06:21
*** morgabra has joined #openstack-sprint06:23
*** med_ has joined #openstack-sprint06:25
*** med_ has quit IRC06:25
*** med_ has joined #openstack-sprint06:25
*** ig0r_ has joined #openstack-sprint08:40
*** sivaramakrishna has quit IRC09:34
*** cdelatte has joined #openstack-sprint11:03
*** ig0r_ has quit IRC11:34
*** rfolco has joined #openstack-sprint11:54
*** ig0r_ has joined #openstack-sprint11:56
*** pabelanger has joined #openstack-sprint12:09
pabelangermorning12:22
pabelangerGoing to start on zuul-mergers this morning12:23
pabelangerlooks like we can launch zm09.o.o then cycle out each other server12:23
pabelangeror take zm01.o.o out of service12:24
pabelangerand upgrade to trusty12:24
*** baoli has joined #openstack-sprint12:55
*** baoli_ has joined #openstack-sprint12:56
*** baoli has quit IRC13:00
fungibkero: and that's just in the system-config repo. taking the puppet-storyboard repo for example, we have 4 different places in its manifests where $::fqdn is used directly13:17
fungiso while the new naming plan has merit, i think there's a lot of cleanup work to be done before we can get there13:18
pabelangerHmm13:38
pabelangerso, I am confused. Where does group_names come from? https://github.com/openstack-infra/ansible-puppet/blob/master/tasks/main.yml#L1713:38
pabelangeror how is it setup13:39
fungipabelanger: http://docs.ansible.com/ansible/playbooks_variables.html#magic-variables-and-how-to-access-information-about-other-hosts13:54
fungi"group_names is a list (array) of all the groups the current host is in"13:55
fungiso ansible creates that based on our group definitions13:55
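One quick way to see what group_names resolves to for a given host is an ad-hoc debug call from puppetmaster.o.o; the host name here is just an example from this conversation:

    # prints the list of inventory groups the host belongs to
    ansible zm01.openstack.org -m debug -a "var=group_names"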
pabelangerfungi: Ya, I found that a few mins ago. Should have posted a follow up.  Basically debugging why I cannot launch zm09.o.o and it looks to be a bug in your hiera data handling13:55
fungimy hiera data handling is just fine, thank you ;)13:55
pabelangerwe fail to copy /etc/puppet/hieradata/production/group yaml files13:56
fungiahh, as in during launch-node.py or in general?13:56
pabelangerya, launch-node.py13:56
fungii'm not surprised. sounds like a bug/oversight13:58
pabelangerI think there are a few issues, testing again as root to see what happens14:14
*** anteaya has joined #openstack-sprint14:14
jeblairpabelanger: i'd just drop zm01 and keep the numbers the same.  we should be able to handle it.14:17
pabelangerjeblair: Sure, I can do that.  Is there a process to put zuul-merger into shutdown mode or just stopping the service is enough14:18
pabelangerAlso, it looks like our permissions on /etc/puppet/hieradata need to be updated, running launch-node.py as non-root user fails to populate some hieradata on the remote node.14:19
jeblairpabelanger: nope, just stop14:20
pabelangerjeblair: ack14:21
jeblairnibalizer: the enumerated hostnames don't help with making cutover faster -- in your storyboard example, we would still need to cutover the production database, etc.  the only things the hostnames help with is not having to use uuids temporarily while we have 2 servers.14:22
pabelangerokay, zm01.o.o service stopped14:31
pabelangerlaunching zm01.o.o on trusty14:32
*** anteaya has quit IRC14:42
*** bkero has quit IRC14:42
pabelangerokay, new server up, but had these errors at the end of launch-node.py: http://paste.openstack.org/show/498627/14:44
pabelangerrunning /usr/local/bin/expand-groups.sh manually worked as expected14:48
pabelangerjeblair: fungi: What is the procedure for the original zm01.o.o.  Should I delete it or suspend it after DNS records have been updated?14:49
jeblairpabelanger: i think you can delete it14:50
fungipabelanger: yeah, delete in nova14:50
pabelangerjeblair: ack14:50
fungiand in dns14:50
pabelangerfungi: ack14:50
jeblairanyone have any problems with cacti?  if not, i'll delete the old one now14:50
fungioh, wait, you're just contracting and then expanding, rather than the other way around14:50
pabelangerjeblair: nope, looks to be working here14:50
fungiso no dns deletion needed, just dns changes i guess14:50
pabelangerfungi: okay, great14:51
fungijeblair: cacti seems fine to me14:54
pabelangerokay, I've decreased the TTL for zm02-zm08 to 5mins. Will help bring server online faster15:03
pabelangerjust waiting for zm01 to be picked up before moving on15:03
*** cdelatte has quit IRC15:35
*** yujunz has joined #openstack-sprint15:36
*** yujunz has quit IRC15:38
jeblairdeleted old cacti15:39
*** bkero has joined #openstack-sprint15:40
*** delattec has joined #openstack-sprint15:45
*** anteaya has joined #openstack-sprint15:45
pabelangerjeblair: just a heads up, I had to run: ssh-keygen -f "/root/.ssh/known_hosts" -R cacti.openstack.org on puppetmaster.o.o15:54
pabelangerjeblair: ansible was failing to SSH into the new server15:54
pabelangerI had to do the same with zm01.o.o15:54
fungiyep, host replacements require ssh key cleanup15:55
fungithat's normal15:55
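The same cleanup, sketched as a loop for the rest of the zm replacements planned above and run on puppetmaster.o.o; adjust the range to whichever hosts have actually been rebuilt:

    for n in 1 2 3 4 5 6 7 8; do
        ssh-keygen -f /root/.ssh/known_hosts -R "zm0${n}.openstack.org"
    done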
*** morgabra has quit IRC15:55
*** morgabra has joined #openstack-sprint15:55
jeblairpabelanger: ah, thanks i forgot about that15:56
jeblairpabelanger: though i had disabled it in ansible, or so i thought15:56
jeblairpabelanger: was it trying to do that recently?15:56
pabelangerjeblair: Ya, I just noticed it this run15:56
jeblairpabelanger: hrm15:56
jeblairit is in the emergency file15:56
pabelangerjeblair: not sure how long it has been trying15:56
jeblair2fb22df0-c176-4885-9a66-5735519c719b # new cacti15:56
jeblaircdaf8e04-9fce-4b70-af56-38b3208fe4b4 # old cacti15:56
fungias we discovered with the ssh timeouts a while back, ansible will still ssh into "disabled" hosts, it just won't run puppet apply15:56
pabelangerodd15:56
jeblairfungi: oh15:57
pabelangerAh, didn't know that15:57
*** delattec has quit IRC15:57
*** anteaya has quit IRC15:57
jeblairi have removed it, so it should now be normal15:57
fungiso related to my reply on the server naming thread, i'm planning to replace storyboard.openstack.org today with another storyboard.openstack.org and not storyboard01.openstack.org because i don't want to drag out our current upgrade efforts by insisting that we refactor our puppet modules to support arbitrary hostnames at the same time16:02
fungiif replacing a non-numbered host with a numbered one works for certain classes/modules then i'm not opposed, and we should probably avoid baking further hostname assumptions into our config management, but for manifests which require potentially disruptive work to support that i'd rather separate that work from the upgrade work16:04
*** anteaya has joined #openstack-sprint16:06
*** delattec has joined #openstack-sprint16:06
pabelangerfungi: what's the fix? rather than use ::fqdn, change it out for the actual hostname?16:07
pabelangernot sure that would work either... let me think about it for a bit16:10
fungipabelanger: the "fix" is to pass the name you want in as a parameter wherever you instantiate the module/class in question16:12
fungisome of the hits for $::fqdn are just default values, so we may already be plumbed for it in a lot of places, but we're not necessarily passing those in today because we can just rely on the default16:13
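A minimal sketch of that approach using the mirror class quoted earlier; the node name and vhost value are illustrative, not the real site.pp entries:

    # in the module: $::fqdn survives only as a default
    class openstack_project::mirror (
      $vhost_name = $::fqdn,
    ) {
      # vhost configuration consumes $vhost_name, never $::fqdn directly
    }

    # in site.pp: pass the service name explicitly, so the instance can be
    # mirror01.* while the vhost keeps answering as mirror.*
    node 'mirror01.openstack.org' {
      class { 'openstack_project::mirror':
        vhost_name => 'mirror.openstack.org',
      }
    }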
anteayathe freenode netsplits are taking me into the far corners of the universe, which while fun, may mean I disappear mid-conversation16:15
anteayadon't take it personally16:15
anteayaI'm trying to stay current by reading the logs16:15
pabelangerfungi: so, looks like we needed to delete the original zm01.o.o DNS entries.  Since we ended up with 2 of them16:17
anteayaand I support using storyboard.openstack.org rather than 0116:17
pabelangerfungi: I would have expected the original to be overwritten like you mentioned too16:17
fungipabelanger: oh, yeah as i said modify the existing dns entries16:18
fungipabelanger: you obviously shouldn't add new ones16:18
fungithat ends up turning it into a round-robin16:19
pabelangerfungi: Yup, I'll do that moving forward16:19
*** ig0r_ has quit IRC16:24
*** delatte has joined #openstack-sprint16:43
*** delattec has quit IRC16:46
pabelangerHmm16:49
pabelangergraphite.o.o might be acting up again16:52
fungiwe get into an oom condition there semi-regularly16:53
anteayawhat do you mean when you say acting up16:53
pabelangerhttp://grafana.openstack.org/dashboard/db/zuul-status?panelId=16&fullscreen doesn't appear to be updating any more16:53
fungisome services not running? any recent oom killer entries in dmesg -T output?16:53
pabelangerfungi: let me check16:53
pabelangerlast segfault was May 16 for dmesg16:55
pabelangermaybe it isn't graphite but nodepool16:56
pabelangerHmm16:57
pabelanger2016-05-24 16:23:37,996 INFO nodepool.NodePool: Target jenkins01 is offline16:57
pabelangeris last log entry for nodepool16:57
pabelangerswitching to openstack-infra16:57
fungiahh, yep, i put jenkins01 into prepare for shutdown yesterday as i had a glut of some 100 nodes in a ready state not running any jobs while we were under a backlog, and i didn't remember to restart it once it finished running the few jobs it did have in progress16:58
nibalizergood morning17:05
anteayamorning nibalizer17:05
bkerohowdy17:08
pabelangerzm01.o.o looks to be good, I manually ran ansible17:11
pabelangeransible-playbook -vvv -f 10 /opt/system-config/production/playbooks/remote_puppet_else.yaml --limit zm01.openstack.org17:11
pabelangergoing to start on zm02.o.o shortly17:11
pabelangerlooks like zuul.o.o firewall is still referencing the old IP address of zm01.o.o. Waiting to see if puppet notices the difference17:20
nibalizerpabelanger: woot17:22
pabelangerfungi: I cannot remember, is iptables smart enough to refresh DNS every X mins or should I look to manually reload the rules17:24
fungiyou have to manually reload. host resolution is done when the rules are parsed17:26
pabelangerokay, I suspected that was the issue17:26
pabelangerfungi: which method do we use to reload iptables?17:26
pleia2probably should use service restart17:27
fungithat aside, i'm not a fan of iptables rules that rely on host resolution. that means a compromised dns (without dnssec, mitm is relatively trivial) can adjust your firewall rules17:27
pleia2or reload for iptables17:27
pabelangerYa, I tend to do service reload iptables &17:27
pabelangeror restart17:27
fungiin at least most places we put ip addresses in our iptables rules17:27
pabelangerbut background it to make SSH happy17:27
pleia2screen++17:28
pabelangerpleia2: good choice17:28
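A sketch of the reload being discussed, assuming the iptables-persistent layout on these Ubuntu hosts; running it detached (screen here) is the precaution mentioned above, so a mid-reload hiccup can't strand your own SSH session:

    # re-parse the saved rules so hostnames in them get re-resolved
    screen -dm sh -c 'iptables-restore < /etc/iptables/rules.v4'
    # or via the init script
    screen -dm service iptables-persistent restart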
pabelangerokay, zm01.o.o now attached to gearman17:29
pabelangermoving on to zm02.o.o17:30
*** baoli_ has quit IRC17:40
pleia2do we have a story going for things to fix in xenial upgrade path? I couldn't find one, and I know I'm getting ahead of ourselves, but I have a bunch of errors over here on my plate I want to dump somewhere17:41
pleia2could just create another etherpad too17:41
pleia2noted in current etherpad, but planet may need to wait until our xenial upgrade because of broken planet-venus package17:42
pleia2for fun I quickly tried it on a xenial system, but our server manifest needs a fair amount of love for 16.0417:42
pleia2moving on to test services on static17:44
pabelangerzm02.o.o online, and processing gearman events17:49
pabelangerokay, moving to zm03.o.o now17:52
anteayapleia2: I think a story for xenial upgrades is a good idea18:02
pleia2ok, I'll put it in system-config18:03
anteayagood idea18:05
pabelangerzm03.o.o up and running now18:10
anteayanice work18:10
pabelangermoving on to zm04.o.o now18:12
pabelangercould almost write an ansible playbook for this!18:12
pabelangergoing real smooth now18:12
*** ig0r_ has joined #openstack-sprint18:23
pabelangergreat! 4/8 zuul-mergers are now ubuntu-trusty18:31
fungihrm, one issue with following the launch readme to the letter. it says to use /opt/system-config/production/launch but seems to want permission to modify ../playbooks/remote_puppet_adhoc.retry from there18:35
fungijeblair: were you running from a local clone of system-config to which your account had write access maybe?18:35
jeblairfungi: that is possible18:35
pabelangerfungi: I've been using a local clone also18:36
fungigiving that a try now18:36
jeblairfungi, pabelanger: we can disable retry files in the ansible config...18:38
fungilooking good so far. i wonder if there's a flag it needs to tell it to write retry files somewhere other than the playbooks directory, or whether we should just switch the docs18:39
fungioh, or that!18:39
pabelangerjeblair: ++18:39
jeblairdo we want to disable them everywhere?  (i think probably so; i've never used them in our context)18:39
pabelangerI've never used a .retry yet18:39
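If they do get disabled, it is a one-line change; the environment variable works per invocation, and the ansible.cfg setting (shown as a comment) makes it permanent:

    # one-off, e.g. in a wrapper around the launch script
    export ANSIBLE_RETRY_FILES_ENABLED=False
    # or persistently, in the ansible.cfg used on puppetmaster.o.o:
    #   [defaults]
    #   retry_files_enabled = False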
funginew problem...18:39
fungi[WARNING]: log file at /var/log/ansible.log is not writeable and we cannot create it, aborting18:39
jeblairfungi: i've seen that but i thought it was not fatal18:40
pabelangerYa, we need to update the permissions to puppet:puppet18:40
pabelangerI am also launching a server ATM18:40
pabelangerwonder if that is the reason18:40
fungioh, hrm yeah looks like i failed for some other unspecified reason. need to run with --keep and try again so i can see the puppet logs18:40
pabelangerI had too many issues running as non-root this morning.  I've since moved to running launch-node.py as root18:42
pabelangerI believe there is a permission issue on /etc/puppet/hieradata for non-root users18:43
pabelangerI haven't debugged it more18:43
pabelangerI've also had to patch launch-node with the following hack: http://paste.openstack.org/show/498717/18:44
pabelangerotherwise group hieradata didn't seem to copy properly18:44
*** baoli has joined #openstack-sprint19:27
pabelangertaking a break after the meeting for a few minutes, before deciding which host to do next.19:35
anteayapabelanger: makes sense, nice work getting them all done19:36
pabelangeranteaya: thanks. People who did the puppet manifests deserve the credit, I just pushed buttons :D19:38
*** baoli has quit IRC19:39
*** ig0r_ has quit IRC19:39
anteayapabelanger: pushing the buttons helps19:43
pabelangerokay, going to work on zuul-dev.o.o now20:06
*** baoli has joined #openstack-sprint20:07
anteayapabelanger: cool20:09
pabelangerokay, zuul-dev.o.o online and running ubuntu-trusty20:23
anteayayay20:23
*** baoli has quit IRC20:29
*** baoli has joined #openstack-sprint20:29
*** baoli has quit IRC20:34
*** baoli has joined #openstack-sprint20:35
pabelangerokay, so it looks like graphite.o.o is ready. Moving to it20:37
pabelangergoing to need some help migrating the volume for it however20:38
anteayago go graphite20:42
pabelangerI believe we'll need to do http://docs.openstack.org/infra/system-config/sysadmin.html#cinder-volume-management first on the current graphite.o.o server20:42
pabelangerfungi: jeblair: do you have a moment to confirm ^. I need to do those steps for graphite.o.o to persist data20:44
anteayapabelanger: not sure about them but I'm in the tc meeting, 15 minutes remaining20:44
pabelangeranteaya: ack20:44
fungipabelanger: yes, you'll need to deactivate the logical volumes on the current production server, detach them and attach them to the replacement server20:45
pabelangerI stand corrected, it seems there already is a volume20:46
pabelanger| 505ff749-bf4a-4881-8a4e-ff2f50d1e0ca | graphite.openstack.org/main02                        | in-use    | 1024 | Attached to graphite.openstack.org on /dev/xvde         |20:46
fungithere are more steps than that but it's the gist (umount /var/lib/graphite/storage, vgchange -a n main, openstack server volume detach...)20:46
jeblairpabelanger: right, there should be an existing volume and what fungi said.20:47
pabelangerfungi: okay, let me get the replacement server up first, confirm puppet is happy, then review the steps to migrate the volume20:47
fungiwe haven't directly documented moving a volume from one server to another but that document gets you basic familiarity with the toolset there20:49
fungigist is to make sure you don't yank volumes out from under a server and corrupt them20:50
pabelangerright20:50
pabelangerokay, new server is online20:50
fungimake sure they're firmly settled and taken out of service at both the filesystem level and volume group level before detaching them at the cinder level20:50
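Pulling the steps from this conversation into one sketch; the volume shown is the one listed above (main02), the second volume gets the same treatment, NEW_SERVER_UUID is a placeholder, and the nova/openstack client syntax may vary by version:

    # on the old graphite server: stop apache/carbon-cache/statsd first, then
    umount /var/lib/graphite/storage
    vgchange -a n main

    # from a host with cloud credentials
    nova volume-detach graphite.openstack.org 505ff749-bf4a-4881-8a4e-ff2f50d1e0ca
    openstack volume show 505ff749-bf4a-4881-8a4e-ff2f50d1e0ca -f value -c status   # wait for "available"
    nova volume-attach NEW_SERVER_UUID 505ff749-bf4a-4881-8a4e-ff2f50d1e0ca

    # on the new server
    vgchange -a y main
    mount /dev/main/graphite /var/lib/graphite/storage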
pabelangerlet me stop graphite from running on the old server20:51
jeblairpabelanger: stop apache, carbon-cache, statsd20:51
pabelangerjeblair: thanks20:52
pabelangerokay, unmount /var/lib/graphite/storage next20:52
pabelangerumount*20:53
pabelangervgchange -a n main next20:53
pabelanger  0 logical volume(s) in volume group "main" now active20:54
pabelangerfungi: confirming, that is correct? ^20:54
fungipabelanger: yep. you can also confirm with vgs20:55
fungiodd that lvs still shows the graphite logvol20:57
fungii don't remember if it's supposed to or not20:58
pabelangerI'm not familiar enough with vgs: http://paste.openstack.org/show/498746/20:58
pabelangerI assume that looks correct?20:58
anteayapabelanger: i have opened the paste and stared at it20:59
anteayaI have no idea what it should look like though21:00
pabelangerfungi: jeblair: okay, I see 2 volumes attached to graphite.o.o using openstack volume list21:00
*** rfolco has quit IRC21:03
pabelangerfungi: pvs shows both volumes on the original host21:05
pabelangerwhich I believe is expected21:05
fungii wonder if that's https://launchpad.net/bugs/1088081 happening21:05
openstackLaunchpad bug 1088081 in lvm2 (Ubuntu) "udev rules make it impossible to deactivate lvm volume group with vgchange -an" [High,Confirmed] - Assigned to Dimitri John Ledkov (xnox)21:05
jeblairfungi, pabelanger: that looks correct to me21:06
jeblairwhat's wrong?21:06
fungioh, i simply can't remember whether lvs and vgs show the logvol and vg even after making the vg unavailable with vgchange21:06
pabelangerNothings wrong, I am just confirming things are correct.  First time detaching/attaching IPs21:06
pabelangererr21:07
pabelangervolumes21:07
jeblairyou use lvs to confirm that it's inactive -- the "o" attr in lvs says that it's "open" (which == active)21:07
jeblairthat is absent in lvs on graphite, so we're good21:07
jeblair  LV       VG   Attr   LSize Origin Snap%  Move Log Copy%  Convert21:07
jeblair  graphite main -wi--- 2.00t21:07
jeblaircompare to another host:21:07
jeblair  LV    VG   Attr      LSize  Pool Origin Data%  Move Log Copy%  Convert21:07
jeblair  cacti main -wi-ao--- 20.00g21:07
pabelangergreat, continuing to detach the volume from openstack21:07
jeblair(oh, a for active, o for open == mounted.  but yeah, you want them both gone)21:08
fungiahh, for some reason i was thinking there was an equivalent flag in the vgs output but not finding it in the manpage21:09
pabelangerokay, currently detaching volumes21:20
*** baoli has quit IRC21:25
*** baoli has joined #openstack-sprint21:26
pabelangerstill detaching21:35
pabelangerfungi: is detaching a volume a time consuming process?21:46
anteayaI'm not of the belief detaching a volume takes a long time21:49
anteayaI don't recall it taking this long for other processes I have witnessed21:49
pabelangerwe are up to 45mins now21:52
pabelangerto detach 2 volumes from graphite.o.o21:52
anteayathat seems odd21:52
anteayaI don't have anything helpful to offer21:53
anteayahave you spent any time in the cinder channel before?21:53
anteayathey are a nice group21:53
pabelangerI have not21:53
anteayamight be your excuse to introduce yourself21:53
fungipabelanger: it normally isn't time-consuming. maybe try halting the instance?21:53
anteaya<-- going for a walk, back later21:53
fungii wonder if it's waiting for activity to "stop" on the volume (i've not seen that happen before afaik)21:54
pabelangerfungi: Ya, I am not sure. pvs and lvs return empty now21:55
pabelangermy knowledge is lacking atm21:55
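While the detach is pending, the state can at least be polled from the API side; the ID is the volume listed earlier, and "detaching" should eventually settle to "available":

    watch -n 30 'openstack volume show 505ff749-bf4a-4881-8a4e-ff2f50d1e0ca -f value -c status'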
*** baoli has quit IRC21:59
*** baoli has joined #openstack-sprint21:59
pabelangerfungi: jeblair: I have to step away for a few minutes.  Still waiting for volumes to detach (detaching is current status).  Will need some guidance on the reattach process, once openstack is happy again21:59
pabelangerin the mean time, graphite.o.o is still down21:59
jeblairpabelanger: wow, that wasn't instantaneous?21:59
fungimay be necessary to open a ticket with fanatical support22:00
jeblairhuh, i don't see an option for online chat support22:08
jeblairsubmitted ticket 160524-dfw-000368922:10
pabelangerback22:14
pabelangerjeblair: nope :(22:14
pabelangerperhaps because I did both volumes back to back?22:15
pabelangerokay, stepping away for a bit.  Going to update status log with current outage22:28
*** SotK has quit IRC22:41
*** SotK has joined #openstack-sprint22:43
*** baoli has quit IRC22:55
anteayaI am much less attractive to the bugs than others are this season, I wonder if it is the garlic I have been taking as immune support23:33
anteayaif so, double happiness23:33
*** yuikotakadamori has left #openstack-sprint23:34
fungithe other trick locals were fond of in the mountains where i grew up was to eat the wild onions that proliferated there23:35
anteayaah yes23:35
anteayasulfur23:35
fungibut yeah i've heard that lots of garlic can have a similar effect23:35
anteayanice and smelly23:35
anteayaI have not skimped on the onions this week either23:36
anteayathough they are the tame variety, not nearly as pungent as the wild23:36
jhesketho/23:39
fungii have the storyboard replacement booted again. will try switching to it shortly23:40
anteayajhesketh: great thank you23:45
anteayajhesketh: so here is the etherpad: https://etherpad.openstack.org/p/newton-infra-distro-upgrade-plans23:46
anteayafungi: yay23:46
anteayajhesketh: and line 64 is the apps entry23:46
anteayaand docado is the person who is the ptl for the apps service23:46
anteayaand fungi and I talked to him about what needs to be done to switch over23:47
anteayaI'll get the url for that conversation23:47
anteayajhesketh: http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2016-05-24.log.html#t2016-05-24T20:07:4223:47
anteayaand unfortunately both he and I are horrible shades of yellow in that rendering23:48
anteayaI'm going with the log version but at least you have the timestamp23:48
*** ianw has joined #openstack-sprint23:49
* jhesketh gets up to speed23:50
anteayajhesketh: take your time I'm between dinner courses23:50
anteayaI'll be back in a bit for questions23:50
anteayanot that I can answer them, but I can listen23:50
jhesketh:-)23:51
anteaya:)23:51
