16:00:01 #startmeeting openstack_ansible_meeting
16:00:02 Meeting started Tue Jan 8 16:00:01 2019 UTC and is due to finish in 60 minutes. The chair is mnaser. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:05 The meeting name has been set to 'openstack_ansible_meeting'
16:00:05 #topic rollcall
16:00:06 o/
16:00:28 o/
16:00:32 o/
16:00:34 o/
16:00:36 o/
16:00:52 o/
16:01:16 \o/
16:01:33 anyone have any last week highlights to bring up? i know we've missed 2 weeks because of holidays :)
16:01:35 +/- /o
16:01:45 holidays were a highlight :)
16:01:54 o/
16:03:05 that's the best highlight
16:03:12 #topic Bug triage
16:03:13 catch up time
16:03:22 #link https://bugs.launchpad.net/openstack-ansible/+bug/1810584
16:03:22 Launchpad bug 1810584 in openstack-ansible "openstack-ansible setup-hosts.yml fails in task: lxc_hosts Ensure image has been pre-staged" [Undecided,New]
16:04:10 i can curl that file
16:04:33 3 ppl saying it's a problem, I suppose it's real
16:04:39 "Running the same command a second time, the command succeeds."
16:04:52 weirdly it works in gates in one go
16:04:53 in my experience, cdimage.ubuntu.com isn't the most reliable
16:05:15 yeah i've never seen gate failures around that
16:05:24 maybe we are using an infra mirror?
16:05:29 2 people at the same time too
16:05:35 i don't think we are for this
16:05:43 these are multinode though arent they?
16:05:54 jrosser: they seem to
16:06:22 but IIRC odyssey4me told me RAX has multinode daily periodics now, which would have shown up the issue
16:06:49 _lxc_hosts_container_image_url: "http://cdimage.ubuntu.com/ubuntu-base/releases/18.04/release/ubuntu-base-18.04.1-base-{{ lxc_cache_map.arch }}.tar.gz"
16:06:55 yep, we don't see failures there for MNAIO tests as far as I know
16:07:06 we maybe need to document this as a known issue in case of people not overriding the value
16:07:15 lxc_hosts_container_image_url: "{{ _lxc_hosts_container_image_url }}"
16:07:16 odyssey4me: I suppose you're using RAX mirrors for that
16:07:18 or perhaps we should increase the timeout
16:07:27 odyssey4me: the thing is if you look at the logs
16:07:29 0B downloaded
16:07:29 nope, we're using the upstream sources every time
16:07:33 so.. it's just not downloading ever
16:07:41 well, that's nice :/
16:07:46 oh god
16:07:52 64MiB/s later on the same download on the rerun
16:07:54 look at the nice can of worms?
16:08:13 the 2nd person reported on the same day
16:08:31 well, we can remove the async and add retries I guess - or figure out another way of having async + retries
16:08:46 or we can ditch containers :p
16:08:50 :D
16:09:13 for the sake of this bug, i hate to say it but i guess it's confirmed because we don't have a retry mechanism
16:09:41 even though it's not really our fault, but we don't have recovery from this simple failure
16:10:14 confirmed/low ?
16:10:26 because it's really handling a third party failure
16:10:57 i guess ill go for that.
16:10:58 yeah I'm fine with that triage
16:11:07 #link https://bugs.launchpad.net/openstack-ansible/+bug/1810538
16:11:07 Launchpad bug 1810538 in openstack-ansible "keepalived.service is not enabled" [Undecided,New]
16:11:08 but we need to explain it's not really our fault in the bug :p
16:11:29 uh
16:11:37 i'll add a note
16:12:06 that's weird.
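[Editorial aside on bug 1810584 discussed above: the "remove the async and add retries" idea could look roughly like the sketch below. This is a hedged illustration only — it is not the actual lxc_hosts task (which uses a different implementation), and the destination path and mirror URL are assumptions; only the lxc_hosts_container_image_url variable name comes from the discussion.]

```yaml
# Hedged sketch only - not the lxc_hosts role's real task. Illustrates
# "drop async, add retries" on the image download; paths are illustrative.
- name: Ensure image has been pre-staged (with retries)
  get_url:
    url: "{{ lxc_hosts_container_image_url }}"
    dest: "/tmp/{{ lxc_hosts_container_image_url | basename }}"
  register: image_get
  until: image_get is succeeded
  retries: 5
  delay: 10

# Deployers hitting the cdimage.ubuntu.com flakiness can also override the
# URL in user_variables.yml to point at a local mirror (URL is illustrative):
# lxc_hosts_container_image_url: "http://mirror.example.com/ubuntu-base-18.04.1-base-amd64.tar.gz"
```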
It should not pass my functional testing if it's not started
16:12:16 o/
16:12:22 oh wait
16:12:22 evrardjp: enabled not started
16:12:24 after reboot
16:12:29 yeah I misread
16:12:32 I don't test that
16:12:56 but it's not really my fault if the module is not behaving appropriately
16:12:57 :p
16:13:01 https://github.com/evrardjp/ansible-keepalived/blob/master/tasks/main.yml#L197-L200
16:14:04 curious
16:14:07 i somehow doubt
16:14:14 that `enabled: yes` is broken
16:14:23 and i have a feeling something else is breaking it
16:14:46 anyways, i cant reproduce because i've always seen keepalived go back up
16:14:57 I can't remember if keepalived can be socket activated
16:15:11 and i can see it up here `systemctl is-enabled keepalived.service` on a c7 deploy
16:15:14 in that case a wrong net config would never activate the socket
16:15:33 but i doubt it's that
16:15:46 it would still be enabled but stopped
16:15:48 we need a systemd unit log
16:16:07 I think there is something else going on
16:16:32 asked for more info
16:16:33 the enablement of the service resulting in a failure should be in the journal
16:16:41 thx
16:16:48 incomplete
16:16:54 medium
16:16:54 for now..
16:17:05 #link https://bugs.launchpad.net/openstack-ansible/+bug/1810537
16:17:05 Launchpad bug 1810537 in openstack-ansible "volume creation fails after successful installation" [Undecided,New]
16:17:27 hey isn't that what you were working on cloudnull ?
16:17:31 the fix is included
16:17:44 cc odyssey4me ^
16:17:53 https://review.openstack.org/#/c/628197/
16:18:26 mnaser that may be the issue I was seeing.
16:18:39 cloudnull: can i assign to you? maybe https://review.openstack.org/#/c/628197/ is the fix
16:18:42 it certainly has to do with it
16:19:02 Mohammed Naser proposed openstack/openstack-ansible-os_cinder master: Adds resource_filters.json distribution https://review.openstack.org/628197
16:19:04 added closes-bug
16:19:08 sure
16:19:13 evrardjp: i just did it, lets +2?
16:19:16 haha
16:19:29 this was the error from the cinder-api log I was seeing
16:19:30 https://pasted.tech/pastes/61c4496978d40841ddaf22d0e3ca49936f269a3a.raw
16:19:35 considering noonedeadpunk had it fixed for a while now and we haven't done much :)
16:19:55 when running tempest.api.volume.test_volumes_list.VolumesListTestJSON
16:19:56 cloudnull: while its not the same traceback, it complains about `common.get_enabled_resource_filters`
16:20:00 ++
16:20:06 which seems pretty darn close so..
16:20:13 so it totally could be the same issue, will tinker in a bit
16:20:21 keep https://review.openstack.org/#/c/628197/2 in mind and ill go onto the next
16:20:28 #link https://bugs.launchpad.net/openstack-ansible/+bug/1810533
16:20:29 Launchpad bug 1810533 in openstack-ansible "openstsack-ansible behind a proxy fails when calling apt-key" [Undecided,New]
16:20:29 Oh, never knew that there's a bug for this :)
16:20:35 noonedeadpunk: no problem =)
16:20:49 oh look a proxy issue
16:20:54 * mnaser looks at jrosser
16:21:04 :)
16:21:13 didnt we change these to be in the repo now?
16:21:14 im going to guess this might be happening in cache prep stage
16:21:35 unfortunately it doesn't mention which role
16:21:51 MOAR WORKFLOW
16:21:55 jrosser: that's true we are carrying things
16:22:01 odyssey4me: haha I laughed too
16:22:13 fine ill join in too
16:22:13 BUFFEROVERWORKFLOW
16:22:38 i dont see any 'apt-key' references using codesearch.openstack.org
16:22:44 i guess i can ask where that change was done?
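[Editorial aside on the keepalived bug above: the role task linked in the discussion boils down to a standard systemd enable/start. A minimal sketch of that shape, plus the post-reboot checks mentioned, is below — only `enabled: yes` and the `systemctl is-enabled` check come from the discussion; the task name and parameters here are assumptions, not copied from the linked role.]

```yaml
# Minimal sketch, assuming the role enables the unit roughly like this;
# names are illustrative, not the linked role's actual task.
- name: Ensure keepalived is enabled and running
  service:
    name: keepalived
    enabled: yes
    state: started

# After a reboot, the state reported in the bug can be checked with:
#   systemctl is-enabled keepalived.service
#   journalctl -u keepalived.service
```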
16:22:44 oh yeah, I'll take that - I've done all the patches except the rocky patch for rabbitmq_server
16:22:52 master is all don
16:22:54 *done
16:23:02 odyssey4me: thanks
16:23:03 hehe
16:23:10 okay cool, ill assign then odyssey4me :)
16:23:29 #link https://bugs.launchpad.net/openstack-ansible/+bug/1810319
16:23:30 Launchpad bug 1810319 in openstack-ansible "Can't set gateway for provider network" [Undecided,New]
16:23:38 https://review.openstack.org/#/q/(topic:vendor-gpg-keys+OR+topic:vendor-gpg-keys-stable/rocky)+(status:open+OR+status:merged)
16:24:11 looks like that's dcdamien
16:24:17 i wasnt sure about this one - "specific default gateway" sounds a bit bogus?
16:24:26 i seem to recall looking at the playbooks and could not find any reference to 'gateway' other than docs
16:24:32 sounds like the user wants a static route
16:24:37 ^ that, yes
16:25:35 was it lxc_container_create that made network configs?
16:26:34 I don't think that it ever created default routes. But several times I've faced the issue of missing nat rules on controller nodes after setup-hosts.yml
16:26:47 but it's probably not related
16:27:34 i dunno
16:27:36 long ago we could inject routes to the process. I remember, I wrote it.
16:27:37 i mean we have it listed there
16:27:49 theres an example in the tests https://github.com/openstack/openstack-ansible-lxc_container_create/blob/master/tests/group_vars/all_containers.yml#L17-L26
16:28:15 for static routes yeah
16:28:17 not default
16:28:26 static could easily be 0.0.0.0, no?
16:28:30 missing rules on the host is fixed by lxc-system-manage
16:28:34 well it would conflict with the lxc one
16:28:38 ahh
16:28:41 the 10.8.0.0 or whatever
16:28:44 jamesdenton: having two 0.0.0.0/0 seems to be problematic
16:28:50 for some ppl :p
16:29:13 not sure what we are talking about anymore
16:29:23 unicorns!
16:29:24 i added a comment
16:29:27 and i'll set to invalid i guess
16:29:36 linking to jrosser's example
16:29:47 we are talking about two different things
16:29:49 IMO
16:30:07 user wants to define a gateway => not possible
16:30:18 we're pretty sure user just wants a static route pointing ?? somewhere ??
16:30:34 the default route is always 10.8.0.1 or whatever the lxc host ends up with
16:30:44 we use that for natting and all the other fun stuff we need to do to make sure things work
16:30:55 also
16:31:00 don't we run haproxy inside metal?
16:31:27 i'm pretty sure we do.. right? keepalived and haproxy
16:31:35 Yep
16:31:43 https://github.com/openstack/openstack-ansible-lxc_hosts/blob/a8b96e2e37ffea4b7c3e055b1310b10bb95a7b2a/defaults/main.yml#L106
16:31:54 let me backtrack this into the inventory
16:32:18 haproxy/keepalived are installed on the host for an AIO, yes
16:32:37 so i guess this seems to be a specific scenario that the user came up with
16:32:42 it seems it's not in the inventory anymore.
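[Editorial aside: the tests example linked above expresses extra routes through the container network definition. A rough paraphrase is below — the keys follow the linked example, but the network name and addresses here are illustrative, and (as noted in the discussion) a 0.0.0.0/0 entry would conflict with the default route the LXC host already provides.]

```yaml
# Illustrative paraphrase of the lxc_container_create tests example linked
# above; the network name and addresses are made up.
container_networks:
  management_address:
    address: "172.29.236.100"
    netmask: "255.255.252.0"
    bridge: "br-mgmt"
    interface: "eth1"
    type: "veth"
    static_routes:
      - cidr: "10.100.0.0/16"
        gateway: "172.29.236.1"
```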
16:33:02 running haproxy in containers but then wanting the container to not be wired to the internet lxc network but wired to the public network in this case
16:33:54 i think this is running containerised haproxy, needing eth0 to be natted to install (guess) and the external interface on this new eth14
16:33:59 one could run haproxy and keepalived in containers, and use whatever network interface they want
16:34:40 I think it's a valid issue
16:35:10 not really, because our architecture has a default route of the host
16:35:15 which means that traffic goes in eth14
16:35:20 we probably removed that feature at some point, kept the feature in lxc_hosts, and probably forgot to edit the default template of the inventory.
16:35:21 but on the way out, it hits the default route
16:35:25 imho this is addressed with a static route as i linked
16:36:01 i'll leave this and let's wait for the user to comment next week
16:36:05 mnaser: not sure I understand
16:36:22 evrardjp: just because traffic enters from one interface, does not mean it will exit from the same one
16:36:28 IMO you could have your own NIC that's not natted in the container, and that would require a default route for reason x
16:36:50 if your default route is the physical host that runs the container
16:37:11 well no I meant, if you don't run nat at all
16:37:23 you could have just bridges on the host, and ignore lxc nat.
16:37:37 you could, but that's a whole other use case
16:37:43 anyways, i think we've taken a bit of time on this
16:37:55 i dont wanna burn everyone out with all this stuff for now
16:38:00 evrardjp: yes this is exactly what i do in the new http proxy test, no default route https://review.openstack.org/#/c/625523/
16:38:02 haha true :)
16:38:14 i feel like bug triage drains everyone out and we lose people :(
16:38:24 so enough of that for today
16:38:38 Kevin Carter (cloudnull) proposed openstack/openstack-ansible-os_cinder master: Cleanup files and templates using smart sources https://review.openstack.org/588953
16:38:39 rhel 8 was an interesting subject cloudnull brought up and odyssey4me and evrardjp were discussing earlier
16:38:48 a lot of stuff has been removed which makes containers even harder
16:39:04 * mnaser is using more and more non-containerized centos deploys
16:39:13 mnaser: I checked the link you gave above
16:39:17 with this, cinder as backend for glance is going to be supported by us but only when glance is METAL
16:39:20 (might be worth giving it here)
16:39:28 oh yeah cloudnull posted that
16:39:39 cloudnull> https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8-beta/html/8.0_beta_release_notes/removed_functionality -- for your reading enjoyment :)
16:39:40 mnaser: I am scared that the LVM Python bindings have been removed
16:40:04 yeah guilhermesp also caught a really annoying issue about iscsiadm and containers
16:40:13 things like running glance in containers with cinder backend doesn't work for example
16:40:20 yep
16:40:28 nothing using iscsi will work in a container
16:40:35 learned that the hard way :)
16:40:41 :)
16:40:41 yep hahaha
16:40:45 that's been known for a while iirc (we learned the hard way too)
16:40:47 I suffered a bit but that's ok
16:41:03 so given all that stuff, i'd like to propose moving centos off containers into metal
16:41:09 as a path to eventually maybe make containers an opt-in
16:41:19 but making centos more of a canary to see what happens in there
16:41:32 a second cent gate for it?
16:41:42 mnaser: that sounds fair. I like the approach.
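[Editorial aside on the "glance is METAL" point and the proposal to move CentOS off containers: openstack-ansible already exposes a per-service switch for this via env.d overrides. The sketch below shows the general shape only — the container group name and file path are assumptions; check the env.d files shipped with the release in use for the real names.]

```yaml
# /etc/openstack_deploy/env.d/glance.yml - hedged sketch of an env.d override
# that deploys a service on the host instead of in an LXC container.
# "glance_container" is an assumed group name; verify it against the shipped
# env.d files before using.
container_skel:
  glance_container:
    properties:
      is_metal: true
```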
16:41:58 prometheanfire: we already gate metal and lxc
16:42:07 and they're both somewhat reliable, i guess.
16:42:17 mnaser: have we fine-tuned the default listen interfaces of all the services?
16:42:26 just curious what it'd do to coverage is all
16:42:39 evrardjp: i'll have to pick up that work and maybe JUST maybe multinode metal jobs
16:42:40 if there is no 0.0.0.0 anymore, we can pretty much simplify more things
16:43:12 evrardjp: you need this https://github.com/openstack/openstack-ansible/blob/master/playbooks/listening-port-report.yml
16:43:12 so i was thinking we can combine some of the efforts that odyssey4me has been doing in order to run integrated for roles
16:43:31 so that we don't have to rewrite most functional tests to throw them away later
16:43:33 mnaser: for now I'd say the only part is haproxy because it listens to the same port as some services, so if you put multinode, just put haproxy on a single node, then all the rest into an infra node, then a compute node
16:43:46 a 3 node perfection :)
16:43:49 evrardjp: yea but ideally i'd like haproxy to be colocatable
16:43:57 most deployments don't have a dedicated haproxy node
16:44:04 in my experience at least
16:44:08 people want to reuse their controllers
16:44:12 that's sad, because it helps :p
16:44:49 running everything on its own machine is nice too because that helps, but then we can do vms to make it easier cause they don't need resources .. but machine containers work easier too cause they're lighter
16:44:49 oh.
16:44:50 wait.
16:44:52 :)
16:45:05 :)
16:45:28 maybe we should just use kata containers instead :p
16:45:37 hey
16:45:39 k8s?
16:45:39 i'm not gonna lie
16:45:50 i've thought about using docker containers to run some stuff
16:46:00 mnaser: it makes sense
16:46:10 like imagine how nice it'd be just to pull down memcache, the same way, across all systems
16:46:52 but then ppl will say it's memcache with centos, oh no it's memcache on suse I want, oh no it's memcache on ubuntu I want, OH NO it's not Alpine!
16:47:11 we should learn carefully from the tripleo experience there - seems to have turned messy
16:47:37 I think running everything on metal is simple
16:47:46 if ppl want their own things, they can
16:48:08 not going to make all people happy all the time
16:48:10 for example, they set their ubuntu nodes with lxd networks, and then install on lxd nodes
16:48:19 prometheanfire: that's true
16:48:31 prometheanfire: but we don't have to
16:48:50 i've used every single deployment tool so far by now lol
16:48:55 evrardjp: just making sure we were not trying to, that path leads to destruction :P
16:48:57 if we deliver the minimum amount of code it's easier to maintain in the long run, and people will be happy, even if we lack features
16:49:00 but I'd prefer not to completely reject containers, in favor of metal
16:49:09 noonedeadpunk: nope, we don't want to do that at all
16:49:12 mnaser: fuel too?
16:49:29 evrardjp: fuel, mos, tripleo, kolla-{ansible,k8s}, puppet openstack
16:49:40 noonedeadpunk: it's not about rejecting it, because they make sense... It's about giving them as opt-in
16:49:40 even the tripleo before redhat revived it :)
16:49:50 mnaser: wow, the HP thing?
16:49:54 yep
16:50:05 noonedeadpunk: when you say not rejecting containers, you mean machine containers or app containers
16:50:23 like odyssey4me would have said, at that time dinosaurs were roaming on earth
16:50:38 mnaser I mean machine containers
16:50:49 jrosser ++ totally agree.
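[Editorial aside on the haproxy colocation point above: in practice this comes down to which hosts are listed for the haproxy group in openstack_user_config.yml — pointing it at the existing controllers colocates it, a separate host list gives a dedicated node. A hedged sketch follows; the group name follows the example configs as far as known, and the host names and addresses are illustrative.]

```yaml
# openstack_user_config.yml - hedged sketch: reuse the controllers for
# haproxy/keepalived instead of dedicated load balancer nodes.
# Host names and addresses are illustrative only.
haproxy_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
  infra3:
    ip: 172.29.236.13
```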
16:50:54 yeah, we don't aim to drop it
16:51:06 we have users that are happy with it
16:51:08 Yeah, then bare metal is really nice. But kata looks also pretty interesting. But I don't like docker to be honest..
16:51:08 i'm also here primarily because of the attractive architecture / approach
16:51:18 kata is nothing but a docker runner
16:51:29 it just runs docker containers in vms, that's all
16:51:31 it makes sense to me to have lxc only running on ubuntu, which formally supports it - and nspawn/lxc opt-in for those who want it
16:51:45 yeah, the lxc stuff is pretty hacked up too
16:51:49 because we depend on some other external repo
16:52:05 and the more we stray away from upstream tooling the harder it gets
16:52:23 the urgency is to remove it for centos
16:52:28 when is 8 official?
16:52:50 having discussed all of this, is anyone not too opposed to removing containers from centos to be able to support rhel 8 and make it a bit more maintainable, i guess
16:52:57 Can we blame mhayden for something here? :)
16:53:09 always
16:53:19 I'd really love folks to give the nspawn bits a spin. they need folks to use it and report where it's all broken for them and their environments.
16:53:19 heheh, we miss you :)
16:53:42 mhayden: we should do tacos this week
16:53:59 cloudnull: i really tried to get nspawn working at gate, 17 patch sets later and i was still struggling
16:54:00 wild Major found
16:54:03 cloudnull: I need to bug you about the state of that
16:54:07 and unfortunately we haven't been getting traction on it
16:54:07 FrankZhang: :)
16:54:10 my knowledge in it is limited
16:54:19 to my mind, given that we have a very limited centos/suse support base and people supporting/developing, perhaps we should consider scaling back their support to only without containers
16:54:36 odyssey4me: I would be fine with that.
16:54:37 mnaser: theres an important distinction between "we removed this stuff because rhel8 broke the world" vs. "we removed this stuff because it's the general direction of travel of OSA"
16:54:53 jrosser yep, fair point
16:55:03 i think we have enough confusion already between containers/metal, with very few actually using metal, and we carry the overhead
16:55:18 jrosser: i agree. i don't want to take away from major users whose benefit is containers and the architecture we provide
16:55:18 odyssey4me: to be honest, lxc is supposed to be phased out in favor of new lxd/lxc bindings, right?
16:55:23 mnaser maybe we could spend some time on a mnaio or some env to get nspawn up and running and answer questions.
16:55:39 * cloudnull has access to hardware to do that
16:55:47 if we move to containers being opt-in, then we should change the current dynamic inventory to a simpler inventory plugin which can be easily enabled/disabled
16:55:51 well, i'd be just happy with passing jobs to start with :(
16:56:00 I know we don't want to wait that long to make a decision but this could be a great Forum or PTG discussion
16:56:00 odyssey4me: indeed
16:56:17 odyssey4me: would we still need the current one to support people if they do opt in to containers?
16:56:18 spotz: I tried that though, it didn't bring many supporters
16:56:24 also i think making our default deploy tooling simpler but leaving the complex one available is better
16:56:31 that's why I never worked on the removal :p
16:56:44 most users don't want to throw a /24 at an openstack control plane deployment, they want 3 ips for each of their controllers
16:56:45 mnaser: couldn't it be on the side, in ops repo?
16:56:47 evrardjp: Weird, cause the input would help make a design for direction :(
16:57:00 *if* someone wants to do that, they can do it (and they probably have the knowledge to do it)
16:57:17 spotz: oh no I meant the lack of willingness to change was an input in itself
16:57:20 mnaser: switch the control plane to v6, never run out of addresses :P
16:57:27 ^ :D
16:57:31 baha
16:57:32 then just use haproxy for 624
16:57:35 like or maybe as an idea
16:57:41 :P
16:57:47 we can decouple the dynamic inventory out
16:57:54 we've run around this circle many times, but until it matters enough to someone it's not going to happen
16:58:04 ^
16:58:16 odyssey4me: the ipv6 circle or the metal circle
16:58:18 for now it's easy to make centos metal only, and perhaps also work on some plays to transition any container deployments to metal when doing the upgrade
16:58:33 evrardjp: hehe, then they need to provide help! :)
16:58:41 perhaps for suse we do the same given that we have a low support base and user base
16:58:55 if folks are ok with that, i will do the work
16:59:01 we leave ubuntu as-is until it matters enough to anyone to change up how that's all done
16:59:02 +1
16:59:18 lgtm
16:59:23 +1
16:59:24 awesome
16:59:24 and thanks mnaser for the work
16:59:33 it shouldn't be that hard if we keep lxc around
16:59:35 thank you for your patience with my shenanigans dealing with dinosaurs
16:59:36 :)
16:59:37 for ubuntu
16:59:51 would be sad to see SUSE support / test matrix reduced, but i understand
16:59:55 anyone else have anything in mind?
17:00:04 mnaser I think we can pretty simply just remove centos/suse from the openstack-ansible-tests templates, then add cross-repo integrated build tests to all roles with a metal build
17:00:07 as we won't change inventory, methods of deployments, but still simplify code for centos
17:00:22 odyssey4me: that was my goal pretty much :)
17:00:31 and then figure out upgrades
17:00:35 well
17:00:35 anyways, we're kinda at time
17:00:37 mnaser: I have an item to bring up for notice (after this) :P
17:00:44 it's not enough for clustering roles
17:00:49 whats up prometheanfire ?
17:00:57 the barbican role seems broken
17:00:58 evrardjp: i had an idea for that too :)
17:01:00 orly?
17:01:03 mnaser: multinode?
17:01:06 evrardjp: yep
17:01:16 mnaser: good
17:01:21 oh thats a lot of red
17:01:33 at least in master
17:01:36 fatal: [infra1]: FAILED! => {"changed": false, "cmd": "set -e\n if [ -d /opt/tempest-testing/bin ];\n then\n . /opt/tempest-testing/bin/activate\n fi\n tempest run --whitelist-file /root/workspace/etc/tempest_whitelist.txt", "delta": "0:00:02.464257", "end": "2019-01-03 18:26:46.166077", "msg": "non-zero return code", "rc": 1, "start": "2019-01-03 18:26:43.701820", "stderr": "", "stderr_lines": [], "stdout": "The
17:01:36 specified regex doesn't match with anything", "stdout_lines": ["The specified regex doesn't match with anything"]}
17:01:39 that is due to changes in tempest
17:01:47 maybe arxcruz or chandankumar can help us with that
17:01:49 and there was a patch this morning merged as the first part of addressing that
17:01:58 yep, I've been doing some work on that front too
17:02:16 okay cool, so maybe that is a good canary patch to see if it works or not :)
17:02:17 mnaser: k, thanks
17:02:21 https://review.openstack.org/628979 has been a bit of a work in progress
17:02:21 mnaser: can you point me to the log url
17:02:30 chandankumar: i looked at https://review.openstack.org/#/c/625634/
17:02:47 I also have a few things to add for the record of the meeting:
17:02:52 this https://github.com/openstack/openstack-ansible-os_tempest/commit/25b5533c30e328c80d29348dff0cfc0f2ac5e88f
17:03:04 even if that doesnt fix barbican/designate it's the first step
17:03:07 1) If there is a problem with SUSE packaging, please query in #openstack-rpm-packaging
17:03:09 the previous issue was that distro installs didn't install the plugins to do the tests - now that's happening, nothing's setting the var that enables the plugin to be installed
17:03:53 jrosser: did the barbican/designate failure not get fixed?
17:04:44 Jesse Pretorius (odyssey4me) proposed openstack/openstack-ansible-os_tempest master: Use the inventory to enable/disable services by default https://review.openstack.org/628979
17:04:45 evrardjp: anything else for the record of the meeting? or we can wrap up and keep discussing :)
17:04:54 2) I am bringing the idea of gating OBS Staging->OBS repo using OSA for more stability
17:05:08 yay
17:05:13 that should solve it
17:05:15 thanks mnaser
17:05:24 and everyone!
17:05:27 awesome, thanks evrardjp
17:05:40 even just trying to install openstack using anything would be a good start
17:06:04 ++
17:06:09 okay, we're over but i think we covered most things
17:06:12 i shall end :)
17:06:15 thank you everyone!!
17:06:17 #endmeeting