Monday, 2018-03-19

00:03 *** flemingo has joined #openstack-ansible
00:07 *** flemingo has quit IRC
00:09 *** odyssey4me has quit IRC
00:09 *** odyssey4me has joined #openstack-ansible
00:18 *** savvas has joined #openstack-ansible
00:19 *** flemingo has joined #openstack-ansible
00:23 *** savvas has quit IRC
00:23 *** flemingo has quit IRC
00:34 <openstackgerrit> Merged openstack/openstack-ansible master: Isolate the Ansible bootstrap  https://review.openstack.org/547526
00:36 *** savvas has joined #openstack-ansible
00:40 *** savvas has quit IRC
00:42 *** flemingo has joined #openstack-ansible
00:47 *** flemingo has quit IRC
00:54 *** savvas has joined #openstack-ansible
00:57 *** flemingo has joined #openstack-ansible
00:58 *** savvas has quit IRC
01:01 *** flemingo has quit IRC
01:06 *** flemingo has joined #openstack-ansible
01:10 *** flemingo has quit IRC
01:21 *** flemingo has joined #openstack-ansible
01:25 *** flemingo has quit IRC
01:27 *** flemingo has joined #openstack-ansible
01:31 *** flemingo has quit IRC
01:48 *** flemingo has joined #openstack-ansible
01:53 *** flemingo has quit IRC
01:58 *** flemingo has joined #openstack-ansible
02:03 *** flemingo has quit IRC
02:07 *** PIT0Q0cmbrnt has joined #openstack-ansible
02:08 *** flemingo has joined #openstack-ansible
02:12 *** flemingo has quit IRC
02:23 *** flemingo has joined #openstack-ansible
02:27 *** flemingo has quit IRC
02:33 *** pmannidi has joined #openstack-ansible
02:41 *** flemingo has joined #openstack-ansible
02:43 *** esberglu has joined #openstack-ansible
02:46 *** flemingo has quit IRC
02:53 *** savvas has joined #openstack-ansible
02:57 *** savvas has quit IRC
02:58 *** esberglu has quit IRC
03:01 <openstackgerrit> Kevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Move the image prep script into a template file  https://review.openstack.org/554064
03:21 <openstackgerrit> Kevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Convert lxc_hosts role to use simple download URL  https://review.openstack.org/554003
03:31 *** flemingo has joined #openstack-ansible
03:35 *** flemingo has quit IRC
03:38 *** kstev has joined #openstack-ansible
03:38 *** pmannidi has quit IRC
03:39 *** esberglu has joined #openstack-ansible
03:40 <openstackgerrit> Logan V proposed openstack/openstack-ansible-lxc_container_create master: Collect physical host facts if missing  https://review.openstack.org/554105
03:41 *** kstev is now known as kstevzzz
03:42 *** pmannidi has joined #openstack-ansible
03:43 *** esberglu has quit IRC
03:45 <openstackgerrit> Logan V proposed openstack/openstack-ansible master: Do not collect physical host facts in playbook  https://review.openstack.org/554109
03:46 *** udesale has joined #openstack-ansible
03:46 <mhayden> ORLY
03:47 *** flemingo has joined #openstack-ansible
03:51 *** flemingo has quit IRC
03:59 *** flemingo has joined #openstack-ansible
04:04 *** flemingo has quit IRC
04:08 <openstackgerrit> Kevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Convert systemd services to common role(s)  https://review.openstack.org/554119
04:08 <cloudnull> mhayden: ohai!
04:08 <cloudnull> hows it?
04:10 *** flemingo has joined #openstack-ansible
04:12 <openstackgerrit> Kevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Convert lxc_hosts role to use simple download URL  https://review.openstack.org/554003
04:12 <prometheanfire> ohai
04:14 <openstackgerrit> Merged openstack/openstack-ansible-openstack_hosts stable/queens: Fix openstack_host_module_file typo  https://review.openstack.org/550852
04:14 *** flemingo has quit IRC
04:23 <openstackgerrit> Kevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Convert systemd services to common role(s)  https://review.openstack.org/554119
04:29 <openstackgerrit> Kevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Convert lxc_hosts role to use simple download URL  https://review.openstack.org/554003
04:30 *** flemingo has joined #openstack-ansible
04:34 *** flemingo has quit IRC
04:34 *** indistylo has joined #openstack-ansible
05:32 *** flemingo has joined #openstack-ansible
05:37 *** flemingo has quit IRC
05:38 *** ANKITA has joined #openstack-ansible
05:46 *** flemingo has joined #openstack-ansible
05:49 *** openstackgerrit has quit IRC
05:50 *** flemingo has quit IRC
05:51 *** udesale has quit IRC
05:52 *** udesale has joined #openstack-ansible
06:08 *** flemingo has joined #openstack-ansible
06:13 *** flemingo has quit IRC
06:19 *** flemingo has joined #openstack-ansible
06:23 *** flemingo has quit IRC
06:26 *** ianychoi__ has joined #openstack-ansible
06:29 *** ianychoi_ has quit IRC
06:30 *** aruns has joined #openstack-ansible
06:32 *** indistylo has quit IRC
06:33 *** niraj_singh has joined #openstack-ansible
06:34 *** udesale_ has joined #openstack-ansible
06:37 *** radeks has joined #openstack-ansible
06:38 *** udesale has quit IRC
06:38 *** flemingo has joined #openstack-ansible
06:43 *** flemingo has quit IRC
06:46 *** aruns has quit IRC
06:47 *** udesale_ is now known as udesale
06:48 *** flemingo has joined #openstack-ansible
06:53 *** flemingo has quit IRC
06:58 *** pmannidi has quit IRC
07:11 *** sep_ has joined #openstack-ansible
07:13 *** sep has quit IRC
07:15 *** armaan has quit IRC
07:15 *** armaan has joined #openstack-ansible
07:16 *** ANKITA has quit IRC
07:18 *** udesale_ has joined #openstack-ansible
07:18 *** openstackgerrit has joined #openstack-ansible
07:18 *** udesale has quit IRC
07:18 <openstackgerrit> Merged openstack/openstack-ansible master: Stop running get-ansible-role-requirements with -vvv  https://review.openstack.org/552908
07:21 *** yolanda_ is now known as yolanda
07:31 *** aojea has joined #openstack-ansible
07:38 *** flemingo has joined #openstack-ansible
07:39 *** pcaruana has joined #openstack-ansible
07:40 <openstackgerrit> Jean-Philippe Evrard proposed openstack/openstack-ansible stable/ocata: Update all SHAs for 15.1.18  https://review.openstack.org/550599
07:40 <openstackgerrit> Jean-Philippe Evrard proposed openstack/openstack-ansible stable/pike: Update all SHAs for 16.0.10  https://review.openstack.org/550593
07:42 *** indistylo has joined #openstack-ansible
07:42 <Taseer> evrardjp: good morning. Any idea, how I might be able to resolve the congress-tempest-plugin ?
07:42 <evrardjp> Taseer: good morning
07:42 *** flemingo has quit IRC
07:42 <Taseer> updating SHA's did not work..
07:42 <evrardjp> hey
07:42 <evrardjp> let's check together
07:42 <Taseer> sure
07:43 <evrardjp> I see they have merged something to update their requirements in master yesterday
07:43 <Taseer> http://logs.openstack.org/71/503971/43/check/openstack-ansible-deploy-congress-ubuntu-xenial/ac59ea3/job-output.txt.gz#_2018-03-17_07_28_33_316306
07:43 <Taseer> logs ^
07:44 <evrardjp> https://review.openstack.org/#/c/553171/2
07:44 <Taseer> so, should we trigger the pipeline ?
07:45 *** ANKITA has joined #openstack-ansible
07:48 <Taseer> evrardjp: the above patch is for congress, not for congress-tempest-plugin
07:50 <evrardjp> Taseer: I don't see any problem in the patch.
07:50 <evrardjp> Taseer: that's true
07:51 <Taseer> but according to the logs, it is failing on congress-tempest-plugin dependencies
07:51 <evrardjp> will check on the requirements of hacking in congress-tempest-plugin
07:51 <Taseer> ok..
07:51 <olivierbourdon38> morning everyone, am I right thinking CI is somehow broken today ? My patch was passing CI 2 days agao an I can not understand why it is failing now looking at the logs. Thanks for your time and help
07:51 <evrardjp> https://github.com/openstack/congress-tempest-plugin/blob/master/test-requirements.txt#L5
07:52 <evrardjp> olivierbourdon38: which branch, and which patch please?
07:52 <evrardjp> I guess master?
07:52 <olivierbourdon38> master + https://review.openstack.org/#/c/553881/
07:54 *** flemingo has joined #openstack-ansible
07:55 <Taseer> evrardjp: I see... I guess that is what needs to change, the only version that I see on PyPI is v1.0.0
07:55 *** indistylo has quit IRC
07:55 <evrardjp> Taseer: I remember discussions about the hacking lib
07:56 <evrardjp> I can't remember what's the deal with it though. Was it removed from everywhere, or now used everywhere? ... I have to dig
07:56 <evrardjp> but at least let's ask for consistency there
07:57 <evrardjp> olivierbourdon38: both check and gates failed on tempest, which is never good, let's check a little more in details
07:57 <olivierbourdon38> yep, definitely agreed, however could not understand real failure reason looking into the logs :-(
07:58 *** mbuil has joined #openstack-ansible
07:58 <Taseer> evrardjp: let me try asking in congress. but I don't get a very positive response there
07:58 <evrardjp> :(
07:58 <Taseer> they lack people like you :)
07:58 *** flemingo has quit IRC
07:59 <evrardjp> olivierbourdon38: we had disk space issues recently
07:59 <evrardjp> olivierbourdon38: the thing is, checking at ARA/ansible logs is unfriendly, so I usually check on tempest testr output
07:59 <evrardjp> which says okay here, with no data. So VERY weird.
08:00 <evrardjp> olivierbourdon38: Request timed out
08:00 <evrardjp> failed to reach available status (current creating) within the required time (300 s)
08:00 <olivierbourdon38> ok so I should trigger recheck then ?
08:00 <evrardjp> olivierbourdon38: I did it
08:00 <olivierbourdon38> cool thanks
08:01 <evrardjp> olivierbourdon38: since the spectre meltdown mitigation, this comes up more often
08:01 *** epalper has joined #openstack-ansible
08:01 <evrardjp> I wonder if we can make the timeout higher.
08:01 <evrardjp> also I wonder if we've not a race condition in tempest
08:02 <evrardjp> olivierbourdon38: could you file a bug for these, so we keep a record of the issue?
08:02 <evrardjp> saying you've got issues on timeouts and things like that for the last 2 days
08:04 *** flemingo has joined #openstack-ansible
08:05 *** hamza21 has joined #openstack-ansible
08:08 <evrardjp> Taseer: checking at the versions of hacking in keystone, the constraint is different, but what should be installed is the same: it should both be 0.12
08:08 *** flemingo has quit IRC
08:08 <evrardjp> Taseer: now the question is why do we install this
08:09 <Taseer> this where we need congress folks to help us...
08:10 <olivierbourdon38> :q
08:11 <olivierbourdon38> sorry wrong window
08:16 *** gokhan has joined #openstack-ansible
08:19 <evrardjp> Taseer: could you run this on your machine and see which wheels are built for the tempest venv?
08:20 <evrardjp> Taseer: I've checked, we've decided to use test-requirements for some packages that needed it, so because congress-tempest-plugin ships with one, we install it.
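For reference, a minimal sketch of checking what the tempest venv actually pulled in and what the plugin pins, assuming the usual OSA venv layout under /openstack/venvs and the GitHub mirror of the plugin (both paths/URLs are assumptions -- adjust to your deployment):

    # Look at the tempest venv the os_tempest role built (venv path pattern is an assumption).
    venv=$(ls -d /openstack/venvs/tempest-* 2>/dev/null | tail -n 1)
    "${venv}/bin/pip" freeze | grep -iE 'hacking|tempest|congress'

    # Compare with the pin the plugin ships in its own test-requirements.txt
    # (mirror URL is an assumption; the repository may have moved since).
    curl -s https://raw.githubusercontent.com/openstack/congress-tempest-plugin/master/test-requirements.txt \
      | grep -i hacking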
08:20 <evrardjp> the question is now is are we in developer mode and all
08:20 <evrardjp> (hint: in the integrated, we shouldn't)
08:20 <evrardjp> let me check that in your logs
08:20 <evrardjp> oh yes it doesn't matter
08:23 *** udesale_ has quit IRC
08:23 <hwoarang> good morning
08:26 <evrardjp> Taseer: I am still on it there is something wrong there: Tempest installation for keystone skips the install of test-requirements, where congress doesn't
08:27 *** osnaya has joined #openstack-ansible
08:27 <evrardjp> while both have tempest plugins which holds test-requirements.txt
08:28 *** flaviosr has joined #openstack-ansible
08:29 <evrardjp> Taseer: I am afraid you've digged up a bigger problem.
08:29 <evrardjp> I'd say thank you but now I realize I have more work, so now I don't thank you anymore.
08:29 <evrardjp> :p
08:30 <osnaya> @evrardjp I have a single controller node and 2 computes I am deploying OSA Pike directly on Ubuntu Host (no containers)... setup-hosts.yml was successful. setup-infrasructure.yml has 1 err. wsrep is OFF. In my case I have a single controller node, is WSREP OFF ok. Can I proceed with setup-openstack.yml run? Or WSREP needs to be ON even for a single node controller for MariaDB?
08:30 *** flemingo has joined #openstack-ansible
08:30 *** udesale_ has joined #openstack-ansible
08:31 <evrardjp> osnaya: Did the setup infra fail?
08:31 <evrardjp> if it failed on galera, your infra is incomplete
08:31 <evrardjp> you have to make the playbook pass
08:32 <evrardjp> running with one node shouldn't be a problem for the playbook run
08:32 <osnaya> setup-infra says failed... on checking there is a single failure with WSREP is OFF....
08:32 <evrardjp> osnaya: could you paste the log ?
08:32 <evrardjp> full log of the run
08:32 <Taseer> evrardjp: let me know if I can help you in the problem
08:32 <osnaya> yes
08:33 <evrardjp> Taseer: haha thanks. But I guess I will double check with odyssey4me before starting this, to know if I woke up too early or ...
08:33 <Taseer> that we, we both will be equally grateful
08:33 <evrardjp> for me your patch is okay, and I will vote positively.
08:34 <Taseer> you made my day, saying this ^
08:34 *** dgonzalez has left #openstack-ansible
08:34 <evrardjp> :)
08:35 *** flemingo has quit IRC
08:37 *** osnaya has quit IRC
08:41 <evrardjp> mnaser: good suggestion on the inotify
08:41 <evrardjp> :D
08:43 *** flemingo has joined #openstack-ansible
08:44 *** foutatoro has joined #openstack-ansible
08:48 *** flemingo has quit IRC
08:56 <openstackgerrit> Jean-Philippe Evrard proposed openstack/openstack-ansible-os_keystone stable/ocata: Use the good configuration directive for memcache servers  https://review.openstack.org/554170
08:57 *** holser__ has joined #openstack-ansible
08:59 *** shardy has joined #openstack-ansible
09:02 *** flemingo has quit IRC
09:06 *** ibmko has quit IRC
09:06 <evrardjp> hwoarang: good morning!
09:06 <evrardjp> how are things?
09:06 *** sar has joined #openstack-ansible
09:06 <hwoarang> like every monday morning...
09:06 <hwoarang> sucks!
09:07 <hwoarang> what about you? :)
09:07 *** mpranjic has quit IRC
09:07 <evrardjp> I am so tired.
09:07 <evrardjp> :)
09:07 <hwoarang> lol nice way to start the week
09:07 *** rpittau has joined #openstack-ansible
09:08 *** rpittau has quit IRC
09:08 <evrardjp> I realized friday I was paying with a prepaid account a hosted server, and that account's quota expired yesterday. Ofc, I was unavailable for the whole week-end so I started to work on this at 9PM yesterday :(
09:09 <evrardjp> I am now in negative balance and I still have to migrate part of my stuff to another server... A good night sleep I could say!
09:09 *** osnaya has joined #openstack-ansible
09:10 *** rpittau has joined #openstack-ansible
09:10 <evrardjp> (Also I didn't realize I wrongly set the RAM of this "cloud server" to 4G instead of 1GB, that's why the quota expired ... 4 times as fast.
09:10 <evrardjp> anyway!
09:10 <evrardjp> how are things for this monday?
09:11 <evrardjp> hwoarang: Taseer might have revelled a bad bug in our tempest testing, I need another pair of eyes to make sure I am not wrongly dreaming.
09:11 <evrardjp> We can discuss after coffee and all that jazz
09:13 <osnaya> @evrardjp for some reason paste doesn't let me cut&paste the whole log... do I need to do partial in multiple times? any idea?
09:13 <evrardjp> pastebinit <filename>
09:13 <evrardjp> should work
09:13 <evrardjp> else just do the setup-infra playbook log
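For reference, a minimal sketch of that suggestion -- re-run at low verbosity, keep the full log, and paste it with pastebinit (the log path and paste service are just examples):

    cd /opt/openstack-ansible/playbooks

    # Re-run with a single -v and keep an untruncated copy of the output.
    openstack-ansible setup-infrastructure.yml -v 2>&1 | tee /tmp/setup-infrastructure.log

    # Share the whole file rather than copy/pasting from the terminal.
    pastebinit -b http://paste.openstack.org /tmp/setup-infrastructure.log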
09:14 <jrosser> evrardjp: i'd like to keep this moving as it's potentially v.bad https://bugs.launchpad.net/openstack-ansible/+bug/1756091
09:14 <openstack> Launchpad bug 1756091 in openstack-ansible "dynamic inventory changes cause container ip to continually change" [Undecided,New]
09:14 <jrosser> i think we've shown it certainly affects master, the other branches need investigation i think
09:15 <evrardjp> jrosser: you're right
09:15 <evrardjp> jrosser: wasn't there a patch from jamespo ?
09:15 <jrosser> https://review.openstack.org/#/c/553726/
09:16 <jrosser> are there any tests for the inventory that would catch this
09:16 <evrardjp> I think it got enough eyes, if it's not working it wasn't catched enough by tests
09:17 <evrardjp> jrosser: we are a serious problem with the inventory coverage.
09:17 <evrardjp> I will be working on the CLI I've demonstrated at the ptg
09:17 <evrardjp> I just am lacking time
09:18 <openstackgerrit> Merged openstack/openstack-ansible-lxc_container_create master: Add lxc.haltsignal to container configs  https://review.openstack.org/553949
09:18 <foutatoro> hi all, does somebody know how to deploy openstack multi region or multi site with OSAD ?
09:18 <hwoarang> evrardjp: oh :/ sorry about that
09:18 <osnaya> @evrardjp here is the log of setup-infrastructure with Pike single controller node+2 computes on Ubuntu 16.04 host is_metal true (no containers...)
09:18 <hwoarang> evrardjp: opnfv stuff should be done in a bit
09:18 <osnaya> @evrardjp http://paste.openstack.org/show/704304/
09:18 <evrardjp> jrosser: I think at some point we have to stop band-aiding this inventory, and fixing it for real. And that's what I am trying to do.
09:19 <evrardjp> hwoarang: I think Taseer has done the right thing, so we are good. I think we are not doing the right thing for the other tempest plugins, example: keystone.
09:19 <evrardjp> :(
09:19 <evrardjp> so to do the right thing, I will break things :(
09:20 <evrardjp> foutatoro: we have multiple deployers that do differently, depending on your use case. logan- jmccrory might help you, but they are U.S. based.
09:20 *** mpranjic has joined #openstack-ansible
09:20 <evrardjp> just fyi
09:20 <evrardjp> osnaya: you can run on -v for a lighter log :)
09:21 <evrardjp> osnaya: also I don't see a failure in that log?
09:21 <evrardjp> or is my search skillz wrong?
09:22 <osnaya> @evrardjp the last line of the log EXIT NOTICE [Playbook execution failure] **************************************
09:23 <osnaya> @evrardjp is my interpretation wrong?
09:23 *** chhagarw has joined #openstack-ansible
09:23 <openstackgerrit> Olivier Bourdon proposed openstack/openstack-ansible master: Add more infos into error message  https://review.openstack.org/553881
09:24 <osnaya> @evrardjp just curious now ... how do u check for failure? is there a specific key word or return code?
09:24 <foutatoro> evrardjp: thanks for this information
09:27 <osnaya> @evrardjp I looked for "TASK [galera_server : Check that WSREP is ready] .... FAILED - RETRYING...", is there a diff way to find failures?
09:29 <osnaya> @evrardjp .... please clarify.... so is my interpretation wrong? Is there a failure or no failure in the log?
09:30 *** udesale_ has quit IRC
09:32 *** udesale has joined #openstack-ansible
09:33 <osnaya> @evrardjp please clarify... since I thought there is a failure.... so breaking my head to debug and fix it before running setup-openstack.yml as next step...
09:37 *** exodusftw has quit IRC
09:38 *** ibmko has joined #openstack-ansible
09:39 *** exodusftw has joined #openstack-ansible
09:40 *** osnaya has quit IRC
09:41 *** indistylo has joined #openstack-ansible
09:43 *** gus has quit IRC
09:44 *** gus has joined #openstack-ansible
09:50 *** osnaya has joined #openstack-ansible
09:51 <evrardjp> cloudnull: I need to run an idea through you in respect to the inventory, nspawn/lxc container creation, could you ping me when available?
09:52 <osnaya> @evrardjp sorry... got errored out on freenode connectivity.... not sure if you provided comments on my query on the log for setup-infrastructure
09:52 <evrardjp> osnaya: I think your paste was truncated
09:52 <evrardjp> I dont see anything about FAILED when I search in my browser
09:52 <osnaya> @evrardjp let me paste it again
09:54 *** admin0 has joined #openstack-ansible
09:54 <admin0> morning
09:55 <evrardjp> jrosser: jamespo I have a few intermediate steps I just thought of for the inventory, I'll move all of that into a spec, for clarity
09:55 <osnaya> @evrardjp you are right... log got truncated.... this time I cut and paste the last excerpt of the log with the error.... http://paste.openstack.org/show/704319/
09:55 <evrardjp> osnaya: you can't do a -v instead of -vvv?
09:56 <evrardjp> osnaya: I still don't see the FAILED
09:56 <osnaya> @evrardjp I gave the log from the last run, do you want me to run it again with -v? that will take probably about 20min before I can generate another log
09:57 *** flemingo has joined #openstack-ansible
09:57 <evrardjp> osnaya: maybe I am tired, but I don't see anything wrong in your last two pastes. Yes, run it again with only -v , and then pastebinit <logfile>
09:57 <evrardjp> pastebinit is not truncating things, it works
09:59 <osnaya> @evrardjp OK, I gave the last 2 TASKs in the log now... http://paste.openstack.org/show/704320/
10:01 *** flemingo has quit IRC
10:03 <evrardjp> osnaya: I think wsrep ready should be ON even if you have one node.
10:03 <evrardjp> so there is something wrong in your environment
10:03 <evrardjp> osnaya: is that prod?
10:03 <evrardjp> if it's a new deploy, i'd delete the containers, and re-create
10:03 <osnaya> @evrardjp it is in lab
10:04 <evrardjp> just delete and recreate
10:04 <evrardjp> the galera part
10:04 <evrardjp> lxc-container-destroy.yml --limit galera_all
10:04 <osnaya> @evrardjp i am deploying with no containers... directly on Ubuntu host (with isMetal=true)...
10:04 <evrardjp> oh yeah
10:04 <evrardjp> well
10:04 <evrardjp> remove galera package from your host
10:05 <evrardjp> apt purge :)
10:05 <evrardjp> check your sources too
10:05 <evrardjp> apt sources
10:06 <evrardjp> osnaya: probably worth knowing what's wrong with your mysql too
10:06 <evrardjp> but to make sure everything is clean, I'd remove mysql and mariadb from your host
10:07 <evrardjp> and from the apt sources
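A rough sketch of that clean-up on a metal host before re-running the galera playbook (the package globs and paths are typical for an Ubuntu MariaDB/Galera install and are assumptions; only wipe the datadir on a throw-away lab node):

    # See what is actually installed and where it came from.
    dpkg -l | grep -Ei 'mariadb|galera|mysql'
    grep -ri 'mariadb\|galera' /etc/apt/sources.list /etc/apt/sources.list.d/ 2>/dev/null

    # Stop and purge, then (lab only!) remove the old datadir so galera can bootstrap fresh.
    systemctl stop mysql || true
    apt-get -y purge 'mariadb*' 'galera*'
    rm -rf /var/lib/mysql

    # Re-run just the galera part of setup-infrastructure.
    cd /opt/openstack-ansible/playbooks
    openstack-ansible galera-install.yml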
10:07 <osnaya> @evrard on the local host I logged in to mysql with sudo user and did some basic commands, created a db and table and deleted it...
10:08 <osnaya> @evrardjp so if I re-run setup-infrastructure will it remove it and re-install? or do I have to purge them manually and re-run?
10:08 <evrardjp> osnaya: show status like '%wsrep%'; ?
10:08 <evrardjp> osnaya: purge manually
10:08 <evrardjp> we ensure things are installed, not that things are removed and then re-installed, that would be inefficient for most deploys
10:10 <osnaya> @evrardjp show status like %wsrep% http://paste.openstack.org/show/704326/
10:10 <evrardjp> yeah that's terribad
10:10 <evrardjp> purge and re-install
10:10 <evrardjp> check your apt sources
10:10 <evrardjp> there is something wrong there
10:11 <evrardjp> admin0: good morning
10:12 <osnaya> @evrardjp so I need to purge mysql and galera both... how about memcached and rabbit.... do I need to purge them too and re-run, or they are ok to re-run on exisitng?
10:14 <osnaya> @evrardjp is there a sample you can provide on what to expect for status like 'wsrep%', so I can refer upon re-run
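The sample never gets pasted in-channel, but for reference, a healthy single-node Galera typically reports something like the values sketched below (typical Galera output, not taken from this deployment):

    mysql -e "SHOW STATUS LIKE 'wsrep%';" \
      | grep -E 'wsrep_ready|wsrep_connected|wsrep_cluster_size|wsrep_local_state_comment'
    # Typical healthy single-node values:
    #   wsrep_ready                 ON
    #   wsrep_connected             ON
    #   wsrep_cluster_size          1
    #   wsrep_local_state_comment   Synced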
10:15 *** flemingo has joined #openstack-ansible
10:17 <odyssey4me> evrardjp Taseer I think I mentioned this before, but trying to make the integrated build tempest test things is a bit complicated and is not something we've really catered properly for. We need to revisit the whole mechanism to ensure that it does the right things for tempest and the plugins. I suspect that the plugins need to be installed unconstrained, or maybe some do and some don't. For now I'd recommend just
10:17 <odyssey4me> avoiding trying to use that for now and dealing with it as a secondary problem.
10:17 <odyssey4me> But it's up to you really. I've not got time right now to dig into it.
10:19 <admin0> osnaya, wsrep not working ? check if any firewalls, ports connectivity issues
10:19 <admin0> the desired output is already in the documentation ..
10:20 *** flemingo has quit IRC
10:20 <admin0> check also if you can manually start it out and see if they connect
10:23 <osnaya> @admin0 I am deploying on a single controller node... can u please explain the last line above... manually start and see if they connect?
10:23 <admin0> aah
10:23 <admin0> thought multi nodes
10:23 <admin0> how many galera containers are created ?
10:24 <osnaya> @admin0 I am deploying with isMetal=true (on Ubuntu host - NO containers....)
10:25 <evrardjp> odyssey4me: well, I wonder if we should install the test-requirements of the tempest plugins
10:25 <osnaya> @admin0 Pike.... It can be done on Ubuntu host directly right? following the docs...
10:25 <evrardjp> we should probably not
10:25 <evrardjp> or make it optional
10:25 <admin0> osnaya, if its single node, there is no need for wsrep to work
10:25 <admin0> if mysql starts OK, you are good to go
10:25 <evrardjp> if we make it optional, we can probably skip some things that are only used for the development of the plugins
10:26 <evrardjp> odyssey4me: but my concern is completely other
10:26 <odyssey4me> evrardjp IIRC we do if they exist?
10:26 <evrardjp> that's what we do indeed
10:26 <evrardjp> but they exist
10:26 <odyssey4me> yes, and that was necessary at the time
10:26 <evrardjp> that's not what I meant
10:26 <admin0> osnaya, when you restart mysql manually, does it boot fine ?
10:26 <odyssey4me> I've for a long time thought that we need to re-think how we implement the use of tempest.
10:26 <osnaya> @evrardjp @admin0 that was my initial thought... that WSREP is meant for multinode cluster, in my case it is a single node so even if WSREP is OFF, it shouldn't matter....
10:26 <evrardjp> I meant that's useful if you want to code on the plugins, not to use them themselves.
10:27 <odyssey4me> evrardjp yeah, but in the first plugin implementations we did we *had* to install the test requirements
10:27 <evrardjp> odyssey4me: I think we should just rely on requirements, and optionally (on top of if they exist), install them if someone wants
10:27 <admin0> for the playbooks checking stuff, it could matter .. but for you to move ahead, it should not matter if you confirm mysql can reboot fine by itself
10:27 <evrardjp> odyssey4me: that's true I remember that
10:27 <odyssey4me> maybe we need an extra flag to allow enabling/disabling that install per plugin
10:27 <evrardjp> that's whY it think we should just add a new flag
10:27 <evrardjp> ahah my idea exactly
10:27 <osnaya> @admin0 on the localhost, I did user sudo login and check mysql... mariadb --- I can create a db, a table and deleted it
10:27 <odyssey4me> default to true, but allow a flag to set if false
10:27 <evrardjp> odyssey4me: but that's not the most concerning story
10:28 <admin0> osnaya, on other hand, i do not see practically any benefit running mysql on metal as opposed to containers .. unless its on a node dedicated for just database and not shared with anything else
10:28 <evrardjp> the most concerning story is that this whole series of tasks are skipped for the main build
10:28 *** flemingo has joined #openstack-ansible
10:28 <evrardjp> so... I wonder how the hell does tempest work properly... It's just a ticking time bomb
10:28 <evrardjp> let me explain what I mean here:
10:29 <osnaya> @admin0 in my case, I am not supposed to use any containers for this first phase.... we may try with containers in the later phse
10:29 <evrardjp> we don't generate anything for tempest_plugins in openstack-ansible/ bootstrap aio
10:29 <evrardjp> so we are using defaults of the os_tempest role, which has tempest_plugins: []
10:29 <evrardjp> (I haven't found somehting on group vars)
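For context on that default: a deployer can override the empty tempest_plugins list from user_variables; a hedged sketch follows (the per-entry keys are an assumption -- check the os_tempest defaults on your branch for the exact schema):

    cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
    # Assumed schema for os_tempest: one entry per tempest plugin to install.
    tempest_plugins:
      - name: congress-tempest-plugin
        repo: https://git.openstack.org/openstack/congress-tempest-plugin
        branch: master
    EOF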
10:30 <evrardjp> so it means this is why the tasks are skipped into the main build
10:30 <evrardjp> but... at the same time
10:30 <evrardjp> https://github.com/openstack/keystone/tree/master/keystone_tempest_plugin
10:30 <admin0> osnaya, whoever gave you that requirement, you need to go back and say .. if you want me to use OSA and no containers, you need dedicated server for DB
10:30 <evrardjp> that folder still exists but is empty
10:30 <admin0> 3 of them
10:30 <odyssey4me> evrardjp it's because we're still only executing tests which tempest has built-in, so yes it's a ticking time bomb
10:31 <evrardjp> ok
10:31 <evrardjp> that's what I thought
10:31 <evrardjp> thanks for confirming
10:31 <odyssey4me> historically all tests were in tempest itself, but they're slowly migrating out - when they're all out, we'll be testing nothing
10:31 <evrardjp> I will change that to have the same logic as what we have for the whitelist
10:31 <evrardjp> if something keystone is deployed, use this plugin
10:32 <odyssey4me> we set the whitelist in the group vars IIRC, which overrides the defaults in the tempest role
10:32 <evrardjp> the whitelist is fine, it's more the plugins that don't exist
10:32 <admin0> osnaya, the whole reason we have containers is to ensure our playbooks works and gives a production like platform that can be upgraded/updated/maintained etc .. yes sure, you can hack your way around it .. but not a lot .. because these are edge cases that people have not teste well, nor would they be tested well and properly
10:32 <evrardjp> we should have them
10:32 <odyssey4me> but yes - when I did the overhaul to use the new tempest method of testing for newton, the plugins stuff wasn't very mature
10:32 <evrardjp> we can just re-use the same reasoning.
10:32 *** flemingo has quit IRC
10:32 <odyssey4me> now that there're more plugins, we need to overhaul how they're used and how it's decided which tests to use
10:33 <openstackgerrit> Markos Chandras (hwoarang) proposed openstack/openstack-ansible-tests master: test-ansible-env-prep: Fix ARA installation for tests  https://review.openstack.org/554192
10:33 <osnaya> @admin0 wow... is that a strict requirement, right now I have 1 controller and 1 compute.... so can I proceed or not?
10:33 <admin0> osnaya, is this a test or a production platfrom ?
10:33 <odyssey4me> it's probably best not to make that hold back the congress integration though
10:33 <evrardjp> odyssey4me: agreed
10:33 <admin0> how big is the controller box ( in terms of cpu/ram/disk ) ?
10:33 <evrardjp> but now it tests nothing, which is consuming infra for nothing.
10:34 <osnaya> @admin0 currently it is in lab as a prototype, but the idea is to use it in production later with more servers...
10:34 <evrardjp> osnaya: running with just one galera on bare metal is possible.
10:34 <admin0> osnaya, its not a strict requirement .. i am just saying if you go beyond what everyone has tested and how the software works, then we are not able to help properly, nor would you get maximum benefit of the software
10:34 <evrardjp> osnaya: I used it. Just if you're saying only one node, avoid haproxy and all that jazz.
10:34 <odyssey4me> evrardjp it may test nothing with tempest, but it tests convergance - which is as good as many of the other things we test in the integrated build
10:35 <evrardjp> odyssey4me: agreed, but it now shows a failure. So we should maybe have tempest_run false temporarily.
10:35 <odyssey4me> evrardjp I'm not sure how congress integrates with other things, but perhaps the best thing for Taseer to work on is those integrations?
10:35 <evrardjp> agreed
10:37 <admin0> if your goal is to just get to a point where you can say "yeah it works"  ..but in that case breaking the how to maintain, how to upgrade, how to maintain HA, how to update etc, then its not a logical choice is it ? also if later your galo is to use containers and use it like prod, what is the reason to not do it right away
10:37 <admin0> why waste your time trying to make it work in a manner that you will throw it later away
10:37 <osnaya> @evrardjp @admin0 @odyssey4me --- in that case, I have a serious question, since I am evaluating use of OSA for deployment of OSA on Ubuntu host (strictly NO containers, right now), which will be used in production in the next phase on host. Should I even proceed with the excercise?
10:38 *** flemingo has joined #openstack-ansible
10:39 <odyssey4me> osnaya the advice from admin0 is sounds - best is to learn the way everyone else uses it, so that you get familiar with it... then once you're familiar with it, try using more edge cases
10:39 <odyssey4me> osnaya at the moment, a no-container build for us is still a bit of an edge case
10:39 <odyssey4me> we know it works, but most of our users do not use it that way
10:39 <evrardjp> osnaya: deploying on bare metal is supported, and tested in queens/master. We still haven't integrated tests into Pike.
10:40 <odyssey4me> if you're confident in your own ability to troubleshoot and work things out, then go ahead
10:40 <admin0> osnaya, if you stick to the guidelines, OSA will automatically do stuff for you in 10 minutes .. you can still go your own path, but that might take 10 hours for you to fix ( because we cannot help ) and operations wise, its not a good choice to even proceed ..
10:40 <odyssey4me> but yes, we worked out the kinks in queens - pike may still have some of them
10:40 <evrardjp> osnaya: our dna is for the operational experience, and with machine container that experience is best. Look at what you just had with mariadb: If you had a container, you'd just destroy and recreate
10:40 <admin0> best way i can say is to first do what OSA is meant to do .. do it like 10 times .. then yes, you will have enough confidence and knowledge to hack your own way around it
10:40 <evrardjp> admin0: ++
10:41 <odyssey4me> ++
10:41 <evrardjp> osnaya: it's not you can't do it, it's just we've been there. :)
10:41 <admin0> i hacked ceph before and now openvswitch+linuxbridge hybrid setup
10:42 *** flemingo has quit IRC
10:43 <admin0> osnaya, running a cloud is not installation itself .. its how you manage it , how you maintain it, how you provide SLA on it, how confident you are around it , how you update it, how you fix security flaws, how confident are you that when you are on vacation and your junior does a "oops" command, how confident are you that when your manager calls, you can fix it within an hour .. there are a lot of things around a certain choice made
10:46 <admin0> you can do your own way, and we can try to help .. but in the bigger picture, you are wasting your own time, company's time, our time in trying to right away do something else .. your path may be the best and we are wrong, but then you can go that path, and then help us make this better that it fits your use case as well .. that means you need to understand it first how it works, the playbooks,fix yourself why galera playbook is wiating for wsrep
10:46 <admin0> .. those stuff
10:49 <admin0> ultimately your way will also work with a lot of hacks . but then it will be like those 40 minutes ubuntu+juju presentation where they click here and there and at the end of 40 minutes, they will show a working horizon .. leaving everyone with "ok .. now what" .. imagine your setup works today .. single node . no containers .. what is next ? then what ? because you have to update it, backup it, make changes to it.. don't think about just
10:49 <admin0> installation .. say there is a new security stuff out tomorrow that yuo have to update only neutron .. if container, you can update just neutron .. if no container, what if that neutron change lib will break nova ? you need to think ahead these stuff ..
10:49 <admin0> i can speak on these and know, because i have seen them and faced them
10:50 <openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_glance master: [WIP] Use a common python build role for source builds  https://review.openstack.org/551344
10:50 <openstackgerrit> Merged openstack/openstack-ansible master: Stop inventory constantly giving containers new IP  https://review.openstack.org/553726
10:50 <openstackgerrit> Merged openstack/openstack-ansible-os_neutron master: Move the roles defined in meta to tasks  https://review.openstack.org/550004
10:51 <admin0> I am not not discouraging you ..  i am just saying i have "fell down that cliff and broke my leg" and preventing you to do the same
10:51 <openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/queens: Stop inventory constantly giving containers new IP  https://review.openstack.org/554196
10:51 <osnaya> @evrardjp @admin0 @odyssey4me thanks guys for the tips... I guess since in the current phase, I can't use containers for the project, so I probably would need to write my own ansible scripts to deploy services on the host directly, I guess? Unless I can convince the decision makers to use containers.....
10:51 *** savvas has joined #openstack-ansible
10:51 <admin0> osnaya, i think i can sell OSA even to redhat/mirantis reps .. any chance i can talk to them :)
10:52 <admin0> i have convinced a lot of people to go away with their purple unicorn written here self made ansible to embrace OSA ..
10:52 <osnaya> @admin0 you mean osa using containers ..... right?
10:53 <admin0> yes
10:53 <admin0> use it the way its supposed to be used ..
10:53 <admin0> because its not 1 person flipping a coin and saying "heads: containers, tails: no containers" .. there is a lot of thought on why containers are used and why people see benefit in them and embrace OSA
10:54 <osnaya> @admin0 so doing it as edge case may be more debug than really using OSA for benefits.... since the mainstream Dev/QA is both focussed for container based deployments...
10:55 <admin0> osnaya, look everywhere .  osa, mirantis, redhat everyone moving to containers based model .. its there for a reason
10:55 <admin0> you can fight the change now ( blackberry android example ) but either way .. after a lot of man hours wasted .. you your decision makers have to give in
10:55 <osnaya> @admin0 so I will try and convince for using containers first, but for whatever reasons, if we have to stick to deploying it on host directly, then writing our ansible scripts may be a better path.... I guess
10:56 <odyssey4me> osnaya your original issue was that wsrep was off?
10:56 <admin0> osnaya, its not .. if you write your own script . then you have to maintain your own script, document them, test them , when new flaw comes out, write the upgrade yourself, when new upgrade comes, write your own upgrade script every 6 months
10:57 <admin0> when new guy comes, waste time on onboarding
10:57 <admin0> we did all that
10:57 <odyssey4me> did that fail the playbook? because wsrep being off would be normal in a single-node environment, and we test single node environments in all our testing, so that seems odd
10:57 <admin0> now after OSA, its easy .. all documentaton is out there, test is out here, feature is out there
10:57 *** chason has quit IRC
10:57 <openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible stable/pike: Update existing container_networks  https://review.openstack.org/528357
10:58 *** chason has joined #openstack-ansible
10:58 <admin0> i was in a multi-week multi dozen stakeholder round table on osa vs self vs others .. i can talk for ours
10:58 <admin0> good news: OSA prevailed :)
10:58 *** ansmith has quit IRC
10:58 <odyssey4me> jrosser jamespo I've combined jamespo's fix into andymccr's original fix - see https://review.openstack.org/528357 - hopefully that passes gating and also passes the muster in your test environment?
10:59 *** A1ve5 has quit IRC
10:59 <admin0> hmm.. maybe i should write a documention on OSA selection process and choices to consider :)
11:00 <admin0> i mean openstack vendor selection process :)
11:00 <admin0> with a pros and cons of every approach
11:00 <odyssey4me> admin0 not a bad idea, this is not an uncommon thing
11:00 <admin0> i will do that :)
11:00 <odyssey4me> in our docs we obviously can't do that, because we can't be opinionated - but you can in your docs ;)
11:00 <jrosser> i wonder if theres some containers confusion here becasue everyone thinks that means docker/k8s when actually in the OSA case it couldnt be more different
11:01 <odyssey4me> jrosser yeah, possibly
11:01 <jrosser> you could just substitute container/vm for OSA and have exactly the same discussion
11:01 <admin0> i will make the decision from an operations prospective .. for a guy tasked with deploying and maintaining cloud for the whole organization
11:01 <jrosser> from a conceptual point of view
11:01 <odyssey4me> osnaya the rejection of the use of containers may be a misunderstanding - think of the way we use containers as virtual machines
11:02 <admin0> my role exactly in the company where i have to give .. these are the X reasons why  I selected OSA vs redhat vs mirantis vs self vs others
11:02 <odyssey4me> that's exactly how we build and treat them
11:02 *** flemingo has joined #openstack-ansible
11:02 <osnaya> @admin0 yep that would be a good idea, I wrote few points myself and that is how I started with evaluating OSA ... in the first place
11:03 <odyssey4me> osnaya all of that said though, an all metal environment should still work - I'd recommend using Queens if possible because we actually have tested that for all-metal and have the sample configs to help
11:03 <odyssey4me> Pike will probably work, but there may be some bugs to work through so it'll slow things down
11:04 <osnaya> @admin0 @odyssey4me BTW, for isMetal=true and single controller, in that case WSREP=OFF should not have been considered failure, right?
11:04 <odyssey4me> osnaya with or without containers, a single galera node should not cause a galera cluster failure
11:04 <openstackgerrit> Manuel Buil proposed openstack/openstack-ansible-os_neutron master: Provide a dynamic repo for ODL  https://review.openstack.org/549288
11:04 <Taseer> odyssey4me: what did you mean about "integrations
11:04 <odyssey4me> if it does, then that's a bug
11:04 <Taseer> ?
11:05 <odyssey4me> Taseer well, how does congress get used? does it have some sort of inter-service communication that can be instrumented?
11:05 <odyssey4me> ie does it get used by nova/neutron/whatever
11:05 <odyssey4me> osnaya to be clear though, if there's only one node then there is no replicatiojn, so WSREP=OFF is expected
11:06 <Taseer> odyssey4me: how would it solve the hacking dependency problem ?
11:06 <odyssey4me> it just shouldn't cause a failure in the play or any services
11:06 <olivierbourdon38> evrardjp thx seems like https://review.openstack.org/#/c/553881/ is now ok with regards to latest comments and CI
11:07 <odyssey4me> Taseer it does not - solving that is something we can figure out in another stream... evrardjp and I discussed that earlier and he has an idea for that... my suggestion was that you focus more on how congress is used while evrardjp tries his idea out
11:07 *** flemingo has quit IRC
11:07 <Taseer> odyssey4me: ok, will do
11:09 <admin0> osnaya, ansible is meant to run a command and evaluate output .. plus if you look into the playbooks they have all tags, multiple smaller playbooks .. so you can use -vvvv to run . see where its stuck and what it is trying to do .. and then for some reasons if it fails but you know is OK. run it manually .and then in the next run, move foward by skipping that step
11:09 <odyssey4me> olivierbourdon38 LGTM, but made a suggestion to improve readability. Those long lines aren't fun to work through.
11:12 <jamespo> odyssey4me: looks good to me, should still fix our issue and stop the IP reassignment
11:12 <openstackgerrit> Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_glance master: [WIP] Use a common python build role for source builds  https://review.openstack.org/551344
11:12 <openstackgerrit> Olivier Bourdon proposed openstack/openstack-ansible master: Add more infos into error message  https://review.openstack.org/553881
11:13 <jamespo> odyssey4me: just to note that my section there will only add/update things set, it won't take anything away
11:14 <ioni> is there a way to still keep neutron agents inside container?
11:14 <odyssey4me> jamespo yeah, that's always been a thing though as far as I know, so we're not changing anything
11:14 <ioni> on qeens
11:14 <odyssey4me> ioni sure, if you want to - just add an env.d override to set is_metal false for them
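A sketch of the env.d override being described, written as a drop-in under /etc/openstack_deploy (the neutron_agents_container key is an assumption taken from the stock env.d/neutron.yml -- verify it against the tree you deploy from):

    mkdir -p /etc/openstack_deploy/env.d
    cat > /etc/openstack_deploy/env.d/neutron.yml <<'EOF'
    # Keep the neutron agents in an LXC container instead of on the host.
    container_skel:
      neutron_agents_container:
        properties:
          is_metal: false
    EOF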
11:14 <jamespo> odyssey4me OK cool, we could add another loop to remove keys missing from the updated side
11:15 <jamespo> if anyone wanted it
11:15 <ioni> odyssey4me, cool. so before was is_metal false the default one and now is true
11:15 <osnaya> @admin0 the last run I did was with -vvv .... here is the last few tasks log (setup-infrastructure.yml run showed failed at the end summary).
11:15 <odyssey4me> ioni however, in the long term it's better to move to having them on the host - that way you don't lose your L3 whenever a container's config is changed
11:15 <osnaya> @admin0 http://paste.openstack.org/show/704320/
11:16 <ioni> odyssey4me, is there a downside on using agents directly on controller?
11:16 <osnaya> @admin0 it shows failed=1 (looking thru for FAILED.... I found on WSREP ready task...
11:16 <ioni> apart from being a mess
11:16 *** flemingo has joined #openstack-ansible
11:17 <admin0> osnaya, cat setup-infra playbook and you will see its made up of multiple ones .. run it specific then skip it in the next step if mysql is running
11:17 <odyssey4me> ioni well, most ideally you'd want to dedicate some hosts for the networking (that way it won't be a mess, and it'll scale better)... but I'm not aware of any down side - it may be better to discuss that with cloudnull / jamesdenton though
11:18 <admin0> ioni, i use agents directly on the controller
11:18 <admin0> for performance because on very high usage, the container itself become a bottleneck
11:18 <ioni> admin0, indeed
11:18 <admin0> i do not use dedicated network nodes ..  all network agents are on_metal on compute nodes
11:18 <admin0> so that i do not have a single/limited points of failure
11:19 <admin0> when a node needs to be maintained, i just migrate the routers/agents to another compute
11:19 <ioni> admin0, i only use for now for dhcp
11:19 <ioni> to give ips to the instances automatically
11:19 <ioni> no routers or haproxy
11:19 <admin0> and for our NFV tests, i also need OVS ..so is_metal is the only way for me to proceed as well
11:20 <ioni> i use linuxbridge :)
11:20 <admin0> i use hybrid :)
11:20 <admin0> i was a linuxbridge only guy who got hired to a OVS only office
11:21 <admin0> so we are on a hybrid mode with OSA :)
11:21 <openstackgerrit> Merged openstack/openstack-ansible-os_keystone stable/ocata: Use the good configuration directive for memcache servers  https://review.openstack.org/554170
11:21 <admin0> best of both worlds :)
11:21 <ioni> admin0, can i pm you
11:21 <ioni> ?
11:21 *** flemingo has quit IRC
11:21 <admin0> of course you can
11:22 <admin0> btw this is my hybrid setup: http://www.openstackfaq.com/openstack-ansible-with-openvswitch-pike/
11:22 <odyssey4me> osnaya hmm, I see that wsrep is showing off and that task is set to fail if that's the case - let me try and validate that against some recent test tasks to see if that's normal or not
11:23 <osnaya> @admin0 quick q... with odyssey4me's suggestion, with a single controller node, WSREP OFF should be OK.... In that case I ran the next os-keystone.yml and see this error... http://paste.openstack.org/show/704359/... just wanted to see if mysql is used in teh next step and does it go thru...
11:24 <admin0> you missed the chat where we said .. try to restart mysql manually and see if you can connect
11:24 <admin0> and if yes, skip that step and run the rest of the galera-install.yml playbook
11:24 <odyssey4me> osnaya ok, so if we look at a very recent test which was executed, we can see in http://logs.openstack.org/93/550593/2/check/openstack-ansible-deploy-aio-ubuntu-xenial/efde112/job-output.txt.gz that the task 'Check that WSREP is ready' was skipped 3 times then resulted in OK on the 4th. Does that match your log output?
11:27 *** flemingo has joined #openstack-ansible
11:28 <osnaya> @odyssey4me I am running Pike.... In my case I see only one retry and then it says fatal infra1(host) failed... see log excerpt @http://paste.openstack.org/show/704320/
11:30 *** flaviosr has quit IRC
11:31 <odyssey4me> osnaya compare your log and the log I liked (which is from pike), look carefully through your whole log sequence to see where it failed exactly
11:31 <odyssey4me> that will help figure out why it failed
11:31 <odyssey4me> your result should also have seen that task executed 4 times - if it was only once, twice of 3 times then we have more information to work with
11:31 *** flemingo has quit IRC
11:33 <osnaya> @odyssey4me ok, I will compare the log tar you pointed above with my run and check it... Pls confirm the tar log is above is from Pike, right?
11:34 <odyssey4me> osnaya yes, it's from https://review.openstack.org/550593 which is the most recent patch we have to the integrated build for pike
11:34 *** alefra has joined #openstack-ansible
11:34 <osnaya> @odyssey4me was that run done with 1 controller node as well?
11:35 <odyssey4me> osnaya yes
11:35 *** chason has quit IRC
11:35 *** chason has joined #openstack-ansible
11:35 <osnaya> @odyssey4me thanks, will take a look
11:35 <odyssey4me> osnaya if you have access to build a single VM with 8GB RAM and 8vCPU, then I'd suggest building an AIO to compare your environment with to troubleshoot
11:36 <odyssey4me> osnaya the AIO uses containers, but is a very well known good state - it may help troubleshooting to get an AIO up to compare states with for your build
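For reference, the AIO comparison build being suggested is roughly the standard quickstart flow (pick the branch you are comparing against):

    # On a throw-away VM with roughly 8 vCPU / 8GB RAM and plenty of disk.
    git clone https://git.openstack.org/openstack/openstack-ansible /opt/openstack-ansible
    cd /opt/openstack-ansible
    git checkout stable/pike          # or stable/queens

    scripts/bootstrap-ansible.sh
    scripts/bootstrap-aio.sh

    cd playbooks
    openstack-ansible setup-hosts.yml
    openstack-ansible setup-infrastructure.yml
    openstack-ansible setup-openstack.yml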
11:41 *** osnaya has quit IRC
11:42 <evrardjp[m]> great dmsimard
11:46 *** ANKITA has quit IRC
11:46 *** holser__ has left #openstack-ansible
11:47 *** flemingo has joined #openstack-ansible
11:49 *** foutatoro has quit IRC
11:51 *** armaan has quit IRC
11:52 *** logan- has quit IRC
11:52 *** armaan has joined #openstack-ansible
11:52 *** flemingo has quit IRC
11:52 *** logan- has joined #openstack-ansible
11:53 *** armaan has quit IRC
11:53 *** armaan has joined #openstack-ansible
11:57 <jrosser> is it right that rabbitmq is listening on the TLS and non-TLS ports? also web management UI port?
12:00 <odyssey4me> jrosser it is how it's implemented as a standard default right now, yes - not to say it couldn't change
12:00 *** Sha0000 has joined #openstack-ansible
12:01 <jrosser> odyssey4me: i was just having a quick poke around what was/wasnt SSL - i've only given it a cert/key for the external endpoint
12:01 *** flemingo has joined #openstack-ansible
12:01 <jrosser> but it seems that the internal stuff is ssl, but with a self signed cert
12:02 <odyssey4me> jrosser yep
12:02 <jrosser> i did then wonder how on earth i would approach making internal cert/keys, given you dont know the container names up front.....
12:06 *** flemingo has quit IRC
12:08 <evrardjp> jrosser: that's also what I think we should change :)
12:08 <evrardjp> but that's a longer question we have there
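A quick way to see which RabbitMQ listeners are actually up and what certificate the TLS one presents -- 5672, 5671 and 15672 are the stock AMQP, AMQPS and management-UI ports (the container address below is a placeholder):

    # Inside the rabbitmq container, or against its address from a host that can reach it.
    ss -tlnp | grep -E ':(5671|5672|15672)\s'

    # Inspect the certificate presented on the TLS listener.
    openssl s_client -connect <rabbitmq_container_address>:5671 </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -issuer -dates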
*** hamza21 has quit IRC12:14
*** chason has quit IRC12:19
*** chason has joined #openstack-ansible12:19
*** kstevzzz has quit IRC12:24
*** Guest2363844 has joined #openstack-ansible12:25
*** chason has quit IRC12:25
*** chason has joined #openstack-ansible12:25
*** hamza21 has joined #openstack-ansible12:26
*** flemingo has joined #openstack-ansible12:26
*** savvas_ has joined #openstack-ansible12:29
*** tobberydberg_ has joined #openstack-ansible12:30
*** flemingo has quit IRC12:31
*** savvas has quit IRC12:32
*** tobberydberg_ has quit IRC12:35
*** aruns__ has joined #openstack-ansible12:37
*** indistylo has quit IRC12:39
*** flaviosr has joined #openstack-ansible12:40
openstackgerritMarkos Chandras (hwoarang) proposed openstack/openstack-ansible master: zuul: Make openSUSE a voting job  https://review.openstack.org/55297712:41
*** odyssey4me has quit IRC12:42
*** odyssey4me has joined #openstack-ansible12:42
*** Sha0000 has quit IRC12:42
*** ansmith has joined #openstack-ansible12:45
openstackgerritMerged openstack/openstack-ansible-plugins stable/pike: Reduce interactions by nearly 50%  https://review.openstack.org/55016112:52
*** armaan has quit IRC12:53
*** rromans has quit IRC12:56
*** rromans has joined #openstack-ansible12:57
*** RandomTech has joined #openstack-ansible13:02
*** ibmko has quit IRC13:02
openstackgerritMerged openstack/openstack-ansible-lxc_container_create master: Use hostnamectl to set the container hostname  https://review.openstack.org/55391113:13
niraj_singhIs there any document to add git repository specially for ansible roles or can we follow https://docs.openstack.org/infra/manual/creators.html13:14
*** ibmko has joined #openstack-ansible13:20
RandomTechHello, is anyone here able to help me troubleshoot what i believe is a horizon or glance issue13:21
*** aruns__ has quit IRC13:25
*** aruns__ has joined #openstack-ansible13:25
*** shardy is now known as shardy_mtg13:28
openstackgerritMerged openstack/openstack-ansible-os_swift stable/ocata: Switch pypy functional test to experimental  https://review.openstack.org/55374113:29
odyssey4meniraj_singh please liaise with evrardjp for that13:32
niraj_singhodyssey4me: ok13:32
openstackgerritJakub Jursa proposed openstack/openstack-ansible-plugins master: Fixed insecure passing in keystone module  https://review.openstack.org/55422613:32
openstackgerritJakub Jursa proposed openstack/openstack-ansible-plugins master: Fixed insecure passing in keystone module  https://review.openstack.org/55422613:36
*** foutatoro has joined #openstack-ansible13:36
niraj_singhevrardjp: Is there any document to add git repository specially for ansible roles or can we follow https://docs.openstack.org/infra/manual/creators.html13:37
evrardjpcongrats on gokhan joining us as a core for the telemetry roles!13:37
jafehacongratulations! gokhan, i'm really happy you take care of the telemetry stack13:38
openstackgerritMerged openstack/openstack-ansible-openstack_hosts stable/queens: Remove host_need_pip  https://review.openstack.org/54776913:39
gokhanthanks evrardjp jafeha . I will struggle to work a lot :)13:40
RandomTechcongrats gokhan13:41
openstackgerritMerged openstack/ansible-hardening master: Updated from global requirements  https://review.openstack.org/55145013:42
*** mardim has quit IRC13:42
*** hamzy__ is now known as hamzy13:43
*** aruns__ has quit IRC13:44
evrardjpjafeha: he is joining mnaser too :)13:45
evrardjpgood team!13:45
evrardjpniraj_singh: I can do this for you, what are your plans?13:46
niraj_singheverardjp: my plan is to add a ansible role for masakari project.13:47
evrardjpoh ok13:47
evrardjpgreat!13:47
jafehaeven better, i love this news13:47
niraj_singhthanks13:47
*** esberglu has joined #openstack-ansible13:47
jafehawhat a great team here <313:48
evrardjpniraj_singh: I think you have a role already right? Is that for maskari and masakari-monitors, or is that two different roles?13:48
evrardjpto know how many things to import13:49
niraj_singheverdjp: i dont have role. But i have created a POC for masakari role. So i can start with it.13:49
gokhanthanks RandomTech ;)13:49
evrardjplet's sync in PM13:49
evrardjpthis way we don't pollute this channel with administrative details13:50
*** aruns__ has joined #openstack-ansible13:50
admin0niraj_singh, \o/ for masakari :D13:51
admin0when will it be ready :D ?13:51
admin0:D13:51
admin0i can help with test, validations, what-if questions, edge case tests13:52
openstackgerritManuel Buil proposed openstack/openstack-ansible-os_neutron master: Allow choosing different versions of ODL to deploy  https://review.openstack.org/54928813:53
openstackgerritMerged openstack/openstack-ansible-os_rally master: Add CentOS deployment support  https://review.openstack.org/55210713:54
*** mardim has joined #openstack-ansible13:59
*** hamza21 has quit IRC14:01
*** TxGirlGeek has joined #openstack-ansible14:02
*** flemingo has joined #openstack-ansible14:03
evrardjpadmin0: I'd like if you could review the spec then14:03
evrardjpwhen the spec will be up14:03
admin0ok14:04
*** flemingo has quit IRC14:08
cloudnullmornings14:18
*** ibmko has quit IRC14:18
SamYapleo714:20
evrardjpgood morning my dear cloudnull and SamYaple !14:22
evrardjpcloudnull: do you know where I was this week-end?14:22
evrardjpYou have 2 guesses!14:22
*** aruns__ has quit IRC14:25
*** admin0 has quit IRC14:26
pabelangerso, after fighting ansible over the weekend, I was able to use both lxc_hosts and lxc_container_create roles to bootstrap lxc containers on my host. I did have to give up on the idea of having ansible_user != root for it to work, but once I switched. Things justed worked.14:26
pabelangerquestion I did have, I guess both roles don't support unprivileged lxc containers? Since everything is installed using the root user?14:27
cloudnullpabelanger: yea i dont think we've ever tested unprivleged containers.14:29
evrardjppabelanger: yes the creation of containers is based on root14:29
cloudnulli dont know of anything that were actively doing to block it though14:29
evrardjpbut what you do inside is up to you?14:29
evrardjppabelanger: patches welcome :)14:30
pabelangerokay, that is what I figured, I suspect patches welcome to add it :)14:30
pabelangerhehe14:30
dmsimardbtw I came across this last weekend: https://www.reddit.com/r/ansible/comments/85d9tk/my_weekend_project_a_website_showing_the_output/14:30
dmsimardFor the lazy it's a recording of what every callback does https://rndmh3ro.github.io/14:30
cloudnullevrardjp: did you go to the stella factory ?!?!14:30
dmsimard(There are some cool callbacks I didn't even know existed)14:30
*** michealb has joined #openstack-ansible14:30
evrardjpcloudnull: yes, the city of the stella factory!14:30
evrardjpdmsimard: that's cool14:31
cloudnullthats like mecca right?14:31
RandomTechHello everyone, Im having issues uploading images through horizon, I was able to use the instructions here: http://www.openstackfaq.com/openstack-add-images-to-glance/ to do so through the CLI, however, when i go through horizon it tells me "Unable to create the image." Anyone have ideas on why it may not be working?14:31
evrardjpdmsimard: I have seen their code, but not in action, so that's super ocol14:31
evrardjpcool*14:31
dmsimardevrardjp: right, the code is there, it's documented14:31
evrardjpcloudnull: ofc14:31
evrardjphaha14:31
dmsimardevrardjp: but I mean, I don't necessarily think about enabling the "actionable" callback cause I have no idea what it really does14:32
dmsimardbut then I look at it.. and I'm like woah, that's pretty awesome actually14:32
*** kstev has joined #openstack-ansible14:32
cloudnullpabelanger: ++14:32
dmsimard(actionable only prints unreachable/changed/failed tasks, it leaves out skipped/ok iiuc)14:32
hwoarangcloudnull: hello. i think that network may have left another artifact behind :( the openSUE AIO jobs keep failing at http://logs.openstack.org/77/552977/3/check/openstack-ansible-deploy-aio_basekit-opensuse-423/07f8f78/job-output.txt.gz#_2018-03-19_12_58_20_207519 . This is where we try to access external network from within the container for the first time. What do you think...? the role tests pass fine. it's only AIO that fails.14:33
evrardjppabelanger: we never cared about that due to the nature of what we are doing, so I am interested in your use case :)14:34
pabelangercloudnull: what are your thoughts about creating a container_user setting, to allow the option for inventory to set ansible_user=root (baremetal) and container_user=foo (inside lxc) when sudoers is set up for foo. Today, it is not possible to have them be different.14:35
SamYapleRandomTech: you can check the horizon logs in the horizon container. there is probably something in there describing the issue14:36
pabelangerevrardjp: great! I'll see if I can find time to start a POC on it in the coming days / weeks14:36
SamYapleyou can also log client debug messages by changing local_settings14:36
logan-pabelanger: yeah that would be great to see... sounds similar to https://bugs.launchpad.net/openstack-ansible/+bug/164573214:36
openstackLaunchpad bug 1645732 in openstack-ansible "Allow for ansible become in SSH plugin" [Wishlist,Confirmed]14:36
*** indistylo has joined #openstack-ansible14:36
cloudnullpabelanger: I think that'd be a great idea14:37
cloudnullhwoarang:  interesting14:37
hwoarangcloudnull: but in this patch https://review.openstack.org/#/c/552721/ it seems to work....14:38
hwoarangbut every other AIO opensuse job seems to be failing all the time on that bit of code14:38
cloudnullhum, so maybe we've already fixed it :)14:39
pabelangerlogan-: ++ yah, that is the exact use case I am looking for. In testing this weekend, I hacked 'sudo' into the lxc-attach command and it was working with ansible_user != root. But found some edge cases with a few tasks.  So happy to see there is something already in the form of a feature request14:39
cloudnullI wonder what the difference is14:39
cloudnull...14:39
* cloudnull looking14:39
*** afranc has quit IRC14:39
evrardjppabelanger: great14:40
* evrardjp loves our community14:40
cloudnullpabelanger: https://github.com/openstack/openstack-ansible-plugins/blob/master/connection/ssh.py#L310-L31214:41
cloudnulllooks like you can set the remote user14:41
cloudnullso root / become to host, and remote user into the container14:41
RandomTechSamYaple: im not noticing anything saying an image upload failed in horizon-error.log, but i am noticing that its time seems to be off14:41
cloudnullhttps://github.com/openstack/openstack-ansible-plugins/blob/master/connection/ssh.py#L136-L14714:42
RandomTechdo you think that could potentially cause the issue?14:42
pabelangercloudnull: right, exposing container_user could be a way to allow that to be different14:42
pabelangeralso14:42
pabelangerhttps://github.com/openstack/openstack-ansible-plugins/blob/master/connection/ssh.py#L344 seems to be just duplicate code, since it's already set up in __init__() above14:43
cloudnullpabelanger: ah, I get it now. yes that would be good14:43
pabelangergreat14:43
cloudnullhaha. yup I was just looking at that too14:43
cloudnull:D14:43
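A sketch of how pabelanger's idea could look from the inventory side; the hostname, addresses and the container_user variable itself are hypothetical here (container_user is only being proposed in the reviews that follow, it is not an existing plugin option):

    # host_vars for one container (all names and addresses are placeholders)
    ansible_host: 172.29.236.100     # address of the container
    physical_host: aio1              # LXC host the connection plugin hops through
    ansible_user: root               # SSH user on the physical host
    container_user: foo              # proposed: user to switch to inside the container, with sudoers set up for foo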
*** indistylo has quit IRC14:44
*** Sha000000 has joined #openstack-ansible14:48
*** udesale has quit IRC14:50
SamYapleRandomTech: if the time is out of sync you could be having keystone token timeouts14:51
SamYaplefix the time so it's the same on all nodes14:51
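A quick sketch, not from the channel, for checking whether clocks actually agree across nodes before chasing token errors; it assumes systemd hosts where timedatectl is available:

    - name: Compare clocks across all hosts
      hosts: all
      gather_facts: false
      tasks:
        - name: Show local time and NTP sync state
          command: timedatectl status
          changed_when: false
          register: clock

        - name: Print the result per host
          debug:
            var: clock.stdout_lines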
TahvokWas the meeting moved by one hour?14:51
*** Sha000000 has quit IRC14:51
TahvokBecause the wiki still says 16:00 UTC14:51
*** ibmko has joined #openstack-ansible14:51
SamYapleTahvok: I'm assuming daylight saving time affected this14:53
TahvokNope for us. We are still UTC+2, as we were before the discussion14:53
*** chyka has joined #openstack-ansible14:55
*** Sha000000 has joined #openstack-ansible14:56
RandomTechdoes that cirros test image require the use of DHCP?14:58
logan-reviews on https://review.openstack.org/#/c/549638/, https://review.openstack.org/#/c/554105/, and https://review.openstack.org/#/c/554109/ appreciated14:59
evrardjplogan-: on my way14:59
logan-ty15:00
spotzlogan-: Can I fix your release note one 549638?15:00
logan-yes15:00
logan-please15:01
logan-:)15:01
openstackgerritAmy Marrich (spotz) proposed openstack/openstack-ansible master: Disable ceph-ansible NTP installation  https://review.openstack.org/54963815:01
evrardjplogan-: great patches15:01
*** Sha000000 has quit IRC15:01
evrardjplogan-: are you using ironic by any chance? I seem to recall someone at the ptg starting to use it15:02
spotzevrardjp: logan- Ok the other 2 are done, just waiting for that one to test again15:02
logan-thx spotz, evrardjp15:02
logan-yeah I am looking at ironic15:02
cloudnullevrardjp: http://eavesdrop.openstack.org/irclogs/%23openstack-ansible/latest.log.html#t2018-03-19T09:51:29 -- what's up?15:03
evrardjpok...15:03
*** flemingo has joined #openstack-ansible15:03
evrardjpandymccr: logan- prometheanfire there are deprecations on the ironic role: http://lists.openstack.org/pipermail/openstack-dev/2018-March/128438.html I hope I'll have time to submit patches there. Reviews and tests welcome.15:03
logan-ok thx15:04
openstackgerritMerged openstack/openstack-ansible stable/pike: Update all SHAs for 16.0.10  https://review.openstack.org/55059315:04
openstackgerritMerged openstack/openstack-ansible stable/ocata: Update all SHAs for 15.1.18  https://review.openstack.org/55059915:04
prometheanfireevrardjp: ok, at least we are not using pxe_ssh (to my knowledge), just pxe_ipmitool iirc15:06
evrardjpprometheanfire: yes. the first two.15:06
prometheanfireya'15:06
*** flemingo has quit IRC15:07
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Use local container meta-data  https://review.openstack.org/55403615:07
*** geb has quit IRC15:10
*** galstrom_zzz is now known as galstrom15:11
RandomTechSamYaple: when I go into a horizon controller and type ntptime it doesn't recognize the command. Do the containers have their time managed differently?15:11
logan-RandomTech: the containers don't have access to the system clock, they can only inherit the time. you should see chrony running on the host itself (chronyc tracking will output some status info)15:13
RandomTechlogan-: when i do chronyc tracking on the host it states that it is not installed15:16
*** geb has joined #openstack-ansible15:16
*** flemingo has joined #openstack-ansible15:18
*** Leo_m has joined #openstack-ansible15:21
*** flemingo has quit IRC15:23
logan-RandomTech: do you have anything set to disable the ansible-hardening role from running on your hosts?15:23
RandomTechnot that i know of15:23
logan-https://github.com/openstack/ansible-hardening/blob/238d8728607335f01eecf1fd77c107f6f86a8ed6/tasks/rhel7stig/misc.yml#L252-L284 is where chrony is configured. try running the openstack-hosts-setup.yml playbook and watch the output of those tasks.15:24
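If the hardening play is what manages chrony, the NTP sources can also be pinned from user_variables.yml; a sketch using variable names assumed from the rhel7stig-era ansible-hardening defaults (check the role's defaults/main.yml for your release):

    # user_variables.yml (names assumed from ansible-hardening defaults)
    security_rhel7_enable_chrony: yes
    security_ntp_servers:
      - 0.pool.ntp.org
      - 1.pool.ntp.org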
openstackgerritMerged openstack/openstack-ansible-lxc_hosts master: Remove generic default interfaces  https://review.openstack.org/55395515:25
RandomTechwill it be fine to run that with the test deployment running logan- ?15:25
openstackgerritMerged openstack/openstack-ansible-lxc_container_create master: Collect physical host facts if missing  https://review.openstack.org/55410515:26
logan-i'm not sure what you mean15:28
RandomTechi was just making sure it wouldn't mess anything up with the currently running OpenStack15:28
*** flemingo has joined #openstack-ansible15:28
logan-assuming you are using the same OSA version as you last deployed with, then running the playbook again should basically change nothing15:30
*** chyka has quit IRC15:30
*** chyka has joined #openstack-ansible15:30
*** flemingo has quit IRC15:33
openstackgerritPaul Belanger proposed openstack/openstack-ansible-plugins master: Remove duplicate check of self.container_user  https://review.openstack.org/55426615:35
openstackgerritPaul Belanger proposed openstack/openstack-ansible-plugins master: Add container_user to SSH connection plugin  https://review.openstack.org/55426715:35
pabelangerlogan-: evrardjp: cloudnull: ^ would be interested in thoughts on how we could test that in the gate. I know you have some existing functional tests but I need to be pointed in the right direction15:36
*** savvas_ has quit IRC15:38
*** flemingo has joined #openstack-ansible15:41
*** flemingo has quit IRC15:45
*** Sha0000 has joined #openstack-ansible15:46
*** armaan has joined #openstack-ansible15:46
RandomTechlogan-: i did not see anything about those tasks in openstack-hosts-setup.yml15:47
*** savvas has joined #openstack-ansible15:48
*** hw_wutianwei has quit IRC15:51
*** savvas has quit IRC15:52
*** vnogin has joined #openstack-ansible15:53
*** Sha0000 has quit IRC15:53
*** markvoelker_ has joined #openstack-ansible15:56
*** markvoelker has quit IRC15:56
ioniguys15:57
ioniis it a bug, or is it just not yet added to the changelog?15:57
ionictrl1_heat_api_container-7267e45e             RUNNING 1         onboot, openstack 10.0.3.128, 172.29.225.213                 -15:57
ionictrl1_heat_apis_container-2340bd87            RUNNING 1         onboot, openstack 10.0.3.16, 172.29.224.138                  -15:57
ionictrl1_heat_engine_container-99d743c0          RUNNING 1         onboot, openstack 10.0.3.35, 172.29.224.38                   -15:57
ionii have a new heat-api container15:57
*** savvas has joined #openstack-ansible15:58
ionithat's on queens15:58
*** markvoelker has joined #openstack-ansible15:59
*** markvoelker_ has quit IRC16:01
*** savvas has quit IRC16:02
*** osa_cloud has joined #openstack-ansible16:02
odyssey4mehmm, looks like a group name change there16:02
osa_cloudHi everyone, I would like to know how to deploy a Ceph Object Gateway with ceph-ansible. Is there a playbook or documentation to do this? Thanks ;)16:04
*** pcaruana has quit IRC16:07
*** savvas has joined #openstack-ansible16:07
*** savvas_ has joined #openstack-ansible16:10
logan-RandomTech: sorry it looks like the hardening execution was split into a separate play. try running the security-hardening.yml playbook16:11
*** savvas has quit IRC16:11
openstackgerritMerged openstack/openstack-ansible-lxc_hosts master: Use local container meta-data  https://review.openstack.org/55403616:13
RandomTechhere is the output logan-: http://paste.openstack.org/show/704656/16:15
RandomTechit just says ok for all tasks16:15
openstackgerritLogan V proposed openstack/openstack-ansible-plugins master: Add tests for container_user connection attribute  https://review.openstack.org/55427716:16
logan-pabelanger: ^ maybe?16:16
RandomTechi can use chronyc now logan-16:18
pabelangerlogan-: cool, lets see what happens16:20
*** Sha0000 has joined #openstack-ansible16:20
*** TxGirlGeek has quit IRC16:20
RandomTechstill unable to create the image using horizon logan-16:21
*** osa_cloud has quit IRC16:23
*** vnogin has quit IRC16:23
*** vnogin has joined #openstack-ansible16:24
*** foutatoro has quit IRC16:25
mbuilhwoarang: https://hastebin.com/cuyahadugi.php I think that omit passes "null" when that variable does not exist and ODL role takes "null" as the value16:27
openstackgerritLogan V proposed openstack/openstack-ansible-plugins master: Add tests for container_user connection attribute  https://review.openstack.org/55427716:28
openstackgerritMerged openstack/openstack-ansible-os_nova master: Updated from global requirements  https://review.openstack.org/55222116:29
*** admin0 has joined #openstack-ansible16:32
hwoarangmbuil: i think the ODL role has to handle the empty/missing param like other roles. if the var is empty then supply a default. but that has to happen inside the role16:32
openstackgerritMerged openstack/openstack-ansible stable/queens: Fix BOOTSTRAP_OPTS  https://review.openstack.org/55374416:33
mbuilhwoarang: I changed the role to: suse_rpm_repo_url: "{{ repo_url | default('https://download.opensuse.org/repositories/Virtualization:/NFV/openSUSE_Leap_42.3/') }}"16:33
hwoarangand I think that variables which are expected to be overridden by deployers have to be in defaults/main.yml16:33
hwoarangmbuil: but then you have to maintain the default url in both neutron and ODL roles16:34
mbuilhwoarang: I added that to the ODL role. In os_neutron, I still have repo_url: "{{ odl_repo_url | default(omit) }}"16:34
hwoarangah ok16:34
hwoarangthat would work i think16:35
evrardjpcloudnull: odyssey4me that idea of doing JIT IP allocation can't work. :/ ansible_host is filled in during dynamic inventory, and is used almost everywhere, including group vars, which I guess would require early generation16:35
mbuilhwoarang: it is not :(16:35
mbuilcan you pass me the link where you read that? Maybe I am doing something wrong16:35
hwoarangdefault(omit) doesn't work for role vars as described in the ansible issue16:35
hwoarangi put all the info on the patchset16:35
hwoarangand a link to an existing role which allows deployers to override variables16:36
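In other words, the pattern hwoarang points at is a deployer-overridable default inside the ODL role (e.g. suse_rpm_repo_url in its defaults/main.yml) rather than default(omit) on a role parameter; a sketch of the os_neutron side under that assumption, with the role name assumed to be opendaylight:

    # os_neutron-side sketch: only pass repo_url when the deployer actually set
    # odl_repo_url, and let the ODL role's defaults/main.yml carry the fallback
    - name: Deploy OpenDaylight
      include_role:
        name: opendaylight           # assumed role name
      vars:
        repo_url: "{{ odl_repo_url }}"
      when: odl_repo_url is defined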
ioniodyssey4me, i'll open a bug report16:37
evrardjpI guess we could do it but it's very ugly16:38
evrardjpbecause it's only parsed at runtime, right16:38
odyssey4meevrardjp it could work, but it would require considerable changes to be made16:39
*** mhfoo has joined #openstack-ansible16:39
evrardjpit's kind of a chicken and egg thing.16:39
*** Sha0000 has quit IRC16:40
evrardjpI will think about it a little further16:41
mhfooDo you run the ansible-hardening role before you install applications (apache, monitoring, …) or after? Or does it matter?16:41
*** TxGirlGeek has joined #openstack-ansible16:44
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Move the image prep script into a template file  https://review.openstack.org/55406416:44
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Convert lxc_hosts role to use simple download URL  https://review.openstack.org/55400316:44
cloudnullmhfoo: it doesn't really matter16:45
cloudnullwe run the role before the deployment16:45
*** armaan has quit IRC16:46
cloudnullhowever it could just as easily be run after.16:46
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Optimize tasks for the container backend if defined  https://review.openstack.org/55403516:51
*** shardy_mtg has quit IRC16:52
*** RandomTech has quit IRC16:59
*** vnogin has quit IRC16:59
*** TxGirlGeek has quit IRC17:02
*** TxGirlGeek has joined #openstack-ansible17:03
*** shardy_mtg has joined #openstack-ansible17:08
*** flemingo has joined #openstack-ansible17:10
*** mhfoo has quit IRC17:11
*** TxGirlGeek has quit IRC17:12
*** TxGirlGeek has joined #openstack-ansible17:13
*** mhfoo has joined #openstack-ansible17:13
*** flemingo has quit IRC17:15
*** osnaya has joined #openstack-ansible17:18
osnaya@admin0 Hi r u there?17:18
*** osnaya has quit IRC17:19
*** osnaya has joined #openstack-ansible17:20
*** TxGirlGeek has quit IRC17:22
*** flemingo has joined #openstack-ansible17:23
*** TxGirlGeek has joined #openstack-ansible17:25
*** shardy_mtg has quit IRC17:26
*** flemingo has quit IRC17:27
*** TxGirlGeek has quit IRC17:28
*** TxGirlGeek has joined #openstack-ansible17:29
osnaya@admin0 Hi admin0 are you there?17:29
admin0yep17:29
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Move the image prep script into a template file  https://review.openstack.org/55406417:29
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Move the image prep script into a template file  https://review.openstack.org/55406417:30
osnaya@admin0 per our chat earlier on deployment and containers... in case I arrange for you to speak with a few tech folks here, what times would suit you?17:31
admin0sure :)17:32
admin0osnaya, where in the world are you ? i am in the NL17:32
*** esberglu_ has joined #openstack-ansible17:32
*** vnogin has joined #openstack-ansible17:33
osnaya@admin0 CA, USA17:33
admin03 PM your time17:34
admin0in around 1 hour and 26 mins17:34
*** esberglu has quit IRC17:35
osnaya@admin0 what is the time diff? It is 10:35am local time here... At 3pm in CA, what will the time be for you?17:35
admin0oops .. i was looking into NY time :D17:36
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-plugins stable/queens: Utilise sorted to ensure no random changes  https://review.openstack.org/55430117:36
admin012 noon your time17:36
*** jbadiapa_ has joined #openstack-ansible17:37
osnaya@admin0 Do you think, you can join WebEx session?17:37
admin0sure17:37
admin0skype, zoom, webex :D17:38
admin0all works17:38
*** vnogin has quit IRC17:38
osnaya@admin0 OK. Let me work out the logistics and see the availability. Will confirm back to you. Thanks.17:38
admin0ok17:38
*** TxGirlGeek has quit IRC17:38
*** TxGirlGeek has joined #openstack-ansible17:39
*** jbadiapa has quit IRC17:40
*** TxGirlGeek has joined #openstack-ansible17:41
*** chhagarw has quit IRC17:45
*** ibmko has quit IRC17:46
odyssey4mecloudnull evrardjp care to give this backport a review https://review.openstack.org/554196 ?17:47
evrardjpshould we merge https://review.openstack.org/#/c/528357/ in Pike, or do you want to wait?17:48
evrardjpfor it to be a combined backport17:48
*** jamesdenton has joined #openstack-ansible17:49
*** savvas_ has quit IRC17:49
odyssey4meevrardjp that is a combined backport17:50
*** radeks has quit IRC17:50
logan-cloudnull: maybe we should ditch lxc_cache_map and make something a little less rigid.. like a list of caches to prep17:51
logan-maybe not right now i guess17:52
*** TxGirlGeek has quit IRC17:54
*** savvas has joined #openstack-ansible17:54
openstackgerritMerged openstack/openstack-ansible-plugins master: Utilise sorted to ensure no random changes  https://review.openstack.org/55375717:55
*** TxGirlGeek has joined #openstack-ansible17:56
*** savvas_ has joined #openstack-ansible17:56
*** goldenfri has joined #openstack-ansible17:56
*** savvas has quit IRC17:58
*** poopcat has joined #openstack-ansible17:59
*** osnaya has quit IRC18:02
*** aojea has quit IRC18:02
*** alefra has quit IRC18:04
*** jbadiapa_ is now known as jbadiapa18:05
openstackgerritMerged openstack/openstack-ansible-lxc_hosts master: Optimize tasks for the container backend if defined  https://review.openstack.org/55403518:07
*** TxGirlGeek has quit IRC18:13
*** TxGirlGeek has joined #openstack-ansible18:16
*** flemingo has joined #openstack-ansible18:16
*** esberglu_ is now known as esberglu18:16
*** flemingo has quit IRC18:20
openstackgerritMerged openstack/openstack-ansible stable/queens: Correct is_container when deploying containers  https://review.openstack.org/55377718:21
openstackgerritMerged openstack/openstack-ansible master: Do not collect physical host facts in playbook  https://review.openstack.org/55410918:24
*** osnaya has joined #openstack-ansible18:26
*** osnaya has quit IRC18:27
*** dave-mccowan has joined #openstack-ansible18:27
*** osnaya has joined #openstack-ansible18:28
*** flemingo has joined #openstack-ansible18:28
openstackgerritMerged openstack/openstack-ansible-os_swift stable/pike: Switch pypy functional test to experimental  https://review.openstack.org/55374018:30
*** armaan has joined #openstack-ansible18:31
osnaya@admin0 Hi I am checking internal availability... quick q for you18:31
osnaya@admin0 when it comes to confirming with you about noon Pacific, how do I send you a WebEx invitation? Do you have an email address I can send it to?18:32
*** flemingo has quit IRC18:33
*** mbuil has quit IRC18:34
*** osnaya has quit IRC18:35
*** radeks has joined #openstack-ansible18:46
openstackgerritManuel Buil proposed openstack/openstack-ansible-os_neutron master: Allow choosing different versions of ODL to deploy  https://review.openstack.org/54928818:47
*** flemingo has joined #openstack-ansible18:48
*** openstackgerrit has quit IRC18:48
*** flemingo has quit IRC18:52
*** openstackgerrit has joined #openstack-ansible18:52
openstackgerritMarkos Chandras (hwoarang) proposed openstack/openstack-ansible master: zuul: Make openSUSE a voting job  https://review.openstack.org/55297718:52
*** osnaya has joined #openstack-ansible18:56
osnaya@admin0 the call may not happen today due to unavailability... I will let you know later18:57
*** flemingo has joined #openstack-ansible18:58
*** admin0 has quit IRC18:59
*** flemingo has quit IRC19:02
goldenfriI see that horizon/haproxy is not in a tested scenario for an aio_metal, but has anyone in here tried it?19:05
*** osnaya has quit IRC19:13
*** epalper has quit IRC19:14
*** ivve has quit IRC19:24
*** admin0 has joined #openstack-ansible19:25
admin0did i miss osnaya ?19:26
*** mhfoo has quit IRC19:28
*** mhfoo has joined #openstack-ansible19:29
prometheanfire{0} setUpClass (tempest.api.compute.keypairs.test_keypairs_v22.KeyPairsV22TestJSON) ... SKIPPED: The microversion range[2.2 - latest] of this test is out of the configuration range[None - None].19:37
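That skip simply means tempest.conf has no compute microversion range configured; a sketch of setting one through the os_tempest role's config override mechanism (tempest_tempest_conf_overrides is assumed from the usual <role>_<file>_overrides convention, so verify the name for your branch; the tempest.conf options themselves are standard):

    # user_variables.yml
    tempest_tempest_conf_overrides:
      compute:
        min_microversion: 2.1
        max_microversion: latest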
*** radeks has quit IRC19:40
*** radeks has joined #openstack-ansible19:41
*** deadnull has joined #openstack-ansible19:42
*** armaan has quit IRC19:43
openstackgerritLogan V proposed openstack/ansible-role-systemd_networkd master: Fix gitreview repo name  https://review.openstack.org/55433519:44
*** armaan has joined #openstack-ansible19:44
openstackgerritLogan V proposed openstack/ansible-role-systemd_mount master: Fix gitreview repo name  https://review.openstack.org/55433619:47
openstackgerritLogan V proposed openstack/ansible-role-systemd_service master: Fix gitreview repo name  https://review.openstack.org/55433719:48
*** openstackgerrit has quit IRC19:48
*** flemingo has joined #openstack-ansible19:49
*** bswrchrd has joined #openstack-ansible19:50
*** openstackgerrit has joined #openstack-ansible19:53
openstackgerritLogan V proposed openstack/ansible-role-systemd_networkd master: Correctly support list of string prefixes  https://review.openstack.org/55434019:53
openstackgerritLogan V proposed openstack/ansible-role-systemd_networkd master: Use the correct var to calculate address prefix  https://review.openstack.org/55434219:53
*** flemingo has quit IRC19:54
openstackgerritLogan V proposed openstack/ansible-role-systemd_networkd master: Allow networkd filename override  https://review.openstack.org/55434319:54
*** Leo_m has quit IRC19:56
openstackgerritLogan V proposed openstack/ansible-role-systemd_networkd master: Remove unused handlers  https://review.openstack.org/55434419:56
openstackgerritLogan V proposed openstack/ansible-role-systemd_networkd master: Implement networkd restart as handler  https://review.openstack.org/55434519:57
*** ericzolf has joined #openstack-ansible20:00
*** deadnull has quit IRC20:03
*** radeks_ has joined #openstack-ansible20:05
pabelangerlogan-: odd, if I use the same playbook locally, it works for https://review.openstack.org/55427720:05
pabelangerlogan-: any way to enable extra -vvv to see what is happening?20:05
logan-oh nice, i hadnt looked at the log yet20:06
logan-i guess it works!20:06
logan-yeah there's some way to run it with -vvv, let me see...20:06
pabelangeryah, the user was created, but ansible hit a permission issue in /home/testing20:07
pabelangernot sure why ATM20:07
*** radeks has quit IRC20:08
openstackgerritLogan V proposed openstack/openstack-ansible-plugins master: Add tests for container_user connection attribute  https://review.openstack.org/55427720:09
*** radeks_ has quit IRC20:13
openstackgerritLogan V proposed openstack/ansible-role-systemd_service master: Add base role tests  https://review.openstack.org/55434820:22
openstackgerritLogan V proposed openstack/ansible-role-systemd_mount master: Add base role tests  https://review.openstack.org/55434920:23
openstackgerritLogan V proposed openstack/ansible-role-systemd_networkd master: Add base role tests  https://review.openstack.org/55435020:24
openstackgerritLogan V proposed openstack/openstack-ansible-plugins master: Add tests for container_user connection attribute  https://review.openstack.org/55427720:34
*** TxGirlGeek has quit IRC20:35
*** flemingo has joined #openstack-ansible20:37
mnaserdoes anyone run galera with over 2000 max_connections?20:39
mnaseri'm noticing some weird timeouts once it crosses the 2000 connection mark20:39
*** TxGirlGeek has joined #openstack-ansible20:39
*** mhfoo has quit IRC20:39
mnaserload is fine ... it's just not very responsive once it hits it20:39
*** flemingo has quit IRC20:42
*** TxGirlGeek has quit IRC20:43
*** flemingo has joined #openstack-ansible20:45
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Move the image prep script into a template file  https://review.openstack.org/55406420:45
cloudnullmnaser: I'm sure we've got clusters that meet that criteria, but i don't have access to them at the moment20:49
cloudnullI could go poke some folks for you if you want.20:49
mnaserit feels like there is something that happens once it reaches 2k connections where things stall out, like keystone auth etc20:49
*** flemingo has quit IRC20:50
mnaseri wonder if there is a ulimit thing, but logs show nothing20:50
*** flemingo has joined #openstack-ansible20:51
*** TxGirlGeek has joined #openstack-ansible20:54
*** flemingo has quit IRC20:55
*** TxGirlGeek has quit IRC20:56
*** zerick has quit IRC20:56
*** zerick has joined #openstack-ansible20:56
*** foutatoro has joined #openstack-ansible21:00
*** mhfoo has joined #openstack-ansible21:03
*** flemingo has joined #openstack-ansible21:04
foutatorohi logan-: I would like to know if you have any wiki explaining how to deploy openstack multi-region or multi-site with OSAD?21:04
*** ansmith has quit IRC21:05
*** TxGirlGeek has joined #openstack-ansible21:07
*** flemingo has quit IRC21:09
*** TxGirlGeek has quit IRC21:09
*** TxGirlGeek has joined #openstack-ansible21:12
mnasercloudnull: ha, haproxy had maxconn 2k21:14
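For anyone hitting the same wall: both the MySQL-side and the haproxy-side ceilings need raising; a user_variables.yml sketch, where galera_max_connections comes from the galera_server role defaults and the haproxy variable name is an assumption to check against the haproxy_server role defaults:

    # user_variables.yml sketch
    galera_max_connections: 4096      # MySQL/Galera max_connections
    haproxy_max_connections: 4096     # assumed name for the haproxy maxconn knob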
odyssey4metime for me to head out for the night - cheerio all!21:16
*** ericzolf has quit IRC21:18
*** TxGirlGeek has quit IRC21:19
*** flemingo has joined #openstack-ansible21:20
logan-foutatoro: pretty sure I have a gist with a sample config.. commuting right now but I can link ya in a bit21:21
logan-mnaser: figures lol21:21
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Convert lxc_hosts role to use simple download URL  https://review.openstack.org/55400321:23
cloudnullmnaser: haha, that's kinda funny. but I'm glad it wasn't something terrible that we're doing21:23
*** flemingo has quit IRC21:24
*** TxGirlGeek has joined #openstack-ansible21:25
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Move the image prep script into a template file  https://review.openstack.org/55406421:27
mnaseryeah that was so confusing21:28
*** TxGirlGeek has quit IRC21:30
*** flemingo has joined #openstack-ansible21:33
*** sar has quit IRC21:35
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Convert lxc_hosts role to use simple download URL  https://review.openstack.org/55400321:36
*** flemingo has quit IRC21:37
*** kstev has quit IRC21:50
*** flemingo has joined #openstack-ansible21:55
*** kstev has joined #openstack-ansible21:58
*** BjoernT has joined #openstack-ansible21:59
*** TxGirlGeek has joined #openstack-ansible21:59
*** ansmith has joined #openstack-ansible22:00
*** kstev1 has joined #openstack-ansible22:00
*** flemingo has quit IRC22:00
*** galstrom is now known as galstrom_zzz22:01
*** kstev has quit IRC22:02
*** TxGirlGeek has quit IRC22:04
*** ericzolf has joined #openstack-ansible22:05
*** ericzolf_ has joined #openstack-ansible22:05
*** flemingo has joined #openstack-ansible22:08
*** TxGirlGeek has joined #openstack-ansible22:08
*** ibmko has joined #openstack-ansible22:11
*** ericzolf has quit IRC22:11
*** flemingo has quit IRC22:13
logan-foutatoro: https://gist.github.com/Logan2211/d2c9548f8a405e663ea356adb83b6e9722:15
*** savvas_ has quit IRC22:16
foutatorologan-: thanks. so I don't understand how to distinguish hosts of different regions with this config?22:30
logan-foutatoro: typical way to do this is by having a completely separate OSA inventory per region22:31
logan-for my deployment, the primary region hosts keystone and horizon, and the secondary regions do not have keystone and horizon hosts defined in openstack_user_config22:32
logan-and the vars in that gist are set on the secondary regions so the services deployed in those regions are configured to use the primary region's keystone endpoints22:32
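A rough sketch of the secondary-region side of that setup, following logan-'s description; the hostnames are placeholders and the variable names (keystone_service_*uri, service_region) should be verified against the gist and your OSA release:

    # secondary region user_variables.yml (placeholder endpoints)
    service_region: Region2
    keystone_service_publicuri: "https://keystone.region1.example.com:5000"
    keystone_service_internaluri: "http://keystone-int.region1.example.com:5000"
    keystone_service_adminuri: "http://keystone-int.region1.example.com:35357"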
*** BjoernT has quit IRC22:38
foutatorologan-: thanks so much for this information22:42
*** alefra has joined #openstack-ansible22:52
*** mhfoo has quit IRC22:55
logan-No problem22:58
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Convert lxc_hosts role to use simple download URL  https://review.openstack.org/55400323:05
*** flemingo has joined #openstack-ansible23:09
*** flemingo has quit IRC23:14
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Convert lxc_hosts role to use simple download URL  https://review.openstack.org/55400323:14
*** michealb has quit IRC23:15
*** michealb has joined #openstack-ansible23:16
*** admin0 has quit IRC23:17
*** masber has joined #openstack-ansible23:27
*** chyka has quit IRC23:30
*** chyka has joined #openstack-ansible23:31
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Split the container and host variable files  https://review.openstack.org/55438323:39
*** kstev1 has quit IRC23:39
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Convert lxc_hosts role to use simple download URL  https://review.openstack.org/55400323:40
*** foutatoro has quit IRC23:41
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible master: Add more infos into error message  https://review.openstack.org/55388123:42
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Split the container and host variable files  https://review.openstack.org/55438323:54
openstackgerritKevin Carter (cloudnull) proposed openstack/openstack-ansible-lxc_hosts master: Split the container and host variable files  https://review.openstack.org/55438323:58
