Thursday, 2018-04-26

*** julim has joined #openstack-helm00:31
*** julim has quit IRC00:50
*** yamamoto has quit IRC00:53
*** yamamoto has joined #openstack-helm00:53
<anticw> mlivadariu: works well to get started  00:59
*** cNilesh has joined #openstack-helm02:03
*** unicell has quit IRC02:07
*** pcarver_ has joined #openstack-helm02:10
*** pcarver has quit IRC02:12
*** ianw_pto has quit IRC02:12
*** ianw has joined #openstack-helm02:13
*** gkadam has joined #openstack-helm02:26
*** cfriesen has joined #openstack-helm02:31
*** julim has joined #openstack-helm02:33
*** julim has quit IRC04:23
*** julim has joined #openstack-helm04:25
*** cfriesen has quit IRC04:38
*** mdih has quit IRC04:43
*** gkadam has quit IRC04:45
*** unicell has joined #openstack-helm05:06
*** unicell1 has joined #openstack-helm05:13
*** unicell has quit IRC05:13
*** gkadam has joined #openstack-helm05:29
*** mdih has joined #openstack-helm05:45
*** evin has joined #openstack-helm06:10
*** lunarlamp has quit IRC06:44
*** lunarlamp has joined #openstack-helm06:45
*** girish has joined #openstack-helm07:10
<girish> Hi Everyone  07:10
<girish> I tried helm to install openstack on kubernetes  07:10
<girish> I followed the documentation at https://docs.openstack.org/openstack-helm/latest/install/multinode.html  07:10
<girish> But mariadb is not coming up  07:10
<girish> it is indefinitely waiting  07:10
<girish> Can anybody help me please?  07:11
<girish> ubuntu@worker1:~$ kubectl logs -n openstack mariadb-0 Error from server (BadRequest): container "mariadb" in pod "mariadb-0" is waiting to start: PodInitializing  07:11
<girish> ubuntu@worker1:~$ kubectl logs -n ceph ceph-rgw-85d66f9658-9ds5r container_linux.go:247: starting container process caused "process_linux.go:359: container init caused \"rootfs_linux.go:54: mounting \\\"/var/lib/kubelet/pods/779d4894-4871-11e8-bb36-020d687e4ca0/volume-subpaths/ceph-bin/ceph-rgw/0\\\" to rootfs \\\"/var/lib/docker/aufs/mnt/8814753f0501c502327689cb84187edc1c52965f4ef73acae6166481ef08aa5f\\\" at \\\"/var/lib/dock  07:12
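The `PodInitializing` status above means an init container is still running or failing, so the main container has no logs yet. A hedged debugging sketch (pod names and namespaces taken from the log above; the init-container name is hypothetical and should be read off the `describe` output):

```shell
# Show pod events and init-container status (names from the log above)
kubectl -n openstack describe pod mariadb-0

# Fetch logs from a specific init container with -c; "init" here is a
# placeholder -- use the real init-container name from describe output
kubectl -n openstack logs mariadb-0 -c init

# OSH charts gate startup on dependencies, so the ceph-rgw mount failure
# shown above is a likely root cause -- check the ceph pods too
kubectl -n ceph get pods -o wide
```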
*** yamamoto has quit IRC07:16
*** yamamoto has joined #openstack-helm07:17
<jayahn> portdirect: r u following up with container white paper?  07:19
*** gkadam has quit IRC07:56
*** yamamoto has quit IRC08:11
*** gkadam has joined #openstack-helm08:20
*** yamamoto has joined #openstack-helm08:27
*** yamamoto has quit IRC10:55
*** lunarlamp has quit IRC11:03
*** lunarlamp has joined #openstack-helm11:11
*** julim has quit IRC11:31
*** yamamoto has joined #openstack-helm11:50
*** yamamoto has quit IRC11:50
*** girish has quit IRC12:08
*** cNilesh has quit IRC12:11
*** yamamoto has joined #openstack-helm12:18
*** mdih has quit IRC12:33
*** cfriesen has joined #openstack-helm13:27
*** yamamoto has quit IRC13:56
*** evin has quit IRC14:14
*** spiette has joined #openstack-helm14:56
*** yamamoto has joined #openstack-helm14:57
*** felipemonteiro_ has joined #openstack-helm14:58
*** felipemonteiro__ has joined #openstack-helm14:59
*** yamamoto has quit IRC15:03
*** felipemonteiro_ has quit IRC15:03
*** jistr|mtgs is now known as jistr15:04
*** spiette has quit IRC15:06
<osh-chatbot3> <alianlianlianlianlian> Hi, I wrote an email-template.tmpl in openstack-helm-infra/prometheus-alertmanager/templates/etc/_email-template.tmpl.tpl and load it via a configmap mounted into the alertmanager template dir. But when I install in debug mode, the alert-template.tmpl has no content. Why? Can helm not load a tpl with special chars?  15:09
<osh-chatbot3> <alianlianlianlianlian> My template file is as below: # cat prometheus-alertmanager/templates/etc/_alert-templates.tmpl.tpl {{ define "__ecms_email_text" }} <table> {{ range .Alerts }} <tr><td>{{ range .Labels.SortedPairs }}{{ if eq .Name "alertname" }}{{ .Value }}{{ end }}{{ end }}</td></tr> <tr><td>{{ range .Labels.SortedPairs }}{{ if eq .Name "severity" }}{{ .Value }}{{ end }}{{ end }}</td></tr> <tr><td>{{ range .Labels.SortedPairs }}{{ if eq .Name "alertgroup" }}{{ .Value }}{{ end }}{{ end }}</td></tr> <tr><td>{{ .StartsAt }}</td></tr> <tr><td>{{ range .Annotations.SortedPairs }}{{ if eq .Name "description" }}{{ .Value }}{{ end }}{{ end }}</td></tr> {{ end }} </table> {{ end }}  {{ define "__ecms_email_subject" }} {{ range .Alerts }} {{ range .Labels.SortedPairs }}{{ if eq .Name "alertname" }}{{ .Value }}{{ end }}{{ end }} {{ end }} {{ end }}  {{ define "email.ecms.subject" }}{{ template "__ecms_email_subject" . }}{{ end }} {{ define "email.ecms.html" }}{{ template "__ecms_email_text" . }}{{ end }}  15:11
<osh-chatbot3> <alianlianlianlianlian> Thank you in advance  15:12
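A likely cause (a hedged guess, not verified against the chart): Helm itself evaluates `{{ ... }}` actions when rendering chart files, so the `{{ define }}`/`{{ range }}` blocks intended for Alertmanager are consumed at install time and the mounted file comes out empty. The braces have to be escaped so they survive Helm rendering, e.g.:

```
{{/* Escaped braces: Helm emits them literally, Alertmanager expands them */}}
{{ "{{" }} define "__ecms_email_subject" {{ "}}" }}
{{ "{{" }} range .Alerts {{ "}}" }}...{{ "{{" }} end {{ "}}" }}
{{ "{{" }} end {{ "}}" }}
```

An alternative is to keep the template under the chart's `files/` directory and include it with `.Files.Get`, so Helm never parses the Alertmanager actions at all.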
*** julim has joined #openstack-helm15:20
<openstackgerrit> Steve Wilkerson proposed openstack/openstack-helm-infra master: WIP: Add ldap support to grafana, update version  https://review.openstack.org/563270  15:24
<openstackgerrit> Tin Lam proposed openstack/openstack-helm master: Add LDAP pool options  https://review.openstack.org/560151  15:38
*** Guest62064 is now known as lamt15:48
*** spiette has joined #openstack-helm15:50
*** gkadam has quit IRC15:53
*** yamamoto has joined #openstack-helm15:59
*** yamamoto has quit IRC16:04
*** julim has quit IRC16:21
<openstackgerrit> Tin Lam proposed openstack/openstack-helm master: Sync mongodb chart  https://review.openstack.org/564548  16:22
*** julim has joined #openstack-helm16:23
*** felipemonteiro__ has quit IRC16:45
*** felipemonteiro has joined #openstack-helm16:45
<openstackgerrit> Pete Birley proposed openstack/openstack-helm master: Libvirt: add dynamic deps on neutron agents as option.  https://review.openstack.org/564561  16:54
<cfriesen> what is the proposed mechanism for openstack-helm to track resources for use with instances using "dedicated" CPUs?  Any host CPUs allocated to such instances must not be used by any k8s pods running on that host.  16:57
*** yamamoto has joined #openstack-helm17:01
<anticw> *not* or *avoided*  17:01
<anticw> i think cpu affinity makes sense, but avoiding things completely causes issues  17:02
*** yamamoto has quit IRC17:06
*** unicell1 has quit IRC17:14
<openstackgerrit> Renis Makadia proposed openstack/openstack-helm master: ceph - split chart into mon, osd and client  https://review.openstack.org/559199  17:33
*** julim has quit IRC17:40
*** julim has joined #openstack-helm17:41
*** felipemonteiro has quit IRC17:51
*** felipemonteiro has joined #openstack-helm17:51
*** unicell has joined #openstack-helm17:52
*** yamamoto has joined #openstack-helm18:02
*** yamamoto has quit IRC18:07
*** felipemonteiro_ has joined #openstack-helm18:17
*** felipemonteiro has quit IRC18:21
*** evin has joined #openstack-helm18:21
*** julim has quit IRC18:26
<openstackgerrit> Pete Birley proposed openstack/openstack-helm master: Deployments: Move all deployment to be HA by default  https://review.openstack.org/564584  18:52
<openstackgerrit> Chris Wedgwood proposed openstack/openstack-helm master: [RFC] neutron: default to OVSHybridIptablesFirewallDriver firewall driver  https://review.openstack.org/564588  19:03
<anticw> SamYaple (or indeed anyone else with experience here) what do you think about ^ ?  19:03
*** yamamoto has joined #openstack-helm19:04
<anticw> perhaps the commit message should reference https://docs.openstack.org/neutron/pike/admin/config-ovsfwdriver.html  19:05
*** yamamoto has quit IRC19:09
<openstackgerrit> Renis Makadia proposed openstack/openstack-helm master: ceph - split chart into mon, osd and client  https://review.openstack.org/559199  19:38
<openstackgerrit> Renis Makadia proposed openstack/openstack-helm master: ceph - split chart into mon, osd and client  https://review.openstack.org/559199  19:42
*** felipemonteiro__ has joined #openstack-helm19:46
*** felipemonteiro_ has quit IRC19:46
<cfriesen> anticw: sorry, just got back from an all-afternoon meeting.  I'm trying to wrap my head around how you'd run a compute node with containerized nova-compute that would be able to run instances with dedicated cpus.  19:52
<cfriesen> anticw: in the non-containerized world this is straightforward: you use isolcpus to isolate the host processes on some "platform" CPUs and then tell nova it's allowed to put guests on the remaining ones.  19:52
<cfriesen> anticw: I can't figure out how this would work with kubernetes, since kubectl seems to assume it owns the whole node  19:53
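The non-containerized split cfriesen describes is typically wired up in two places; a sketch with assumed CPU numbering (8 CPUs, 0-1 kept as "platform" CPUs, 2-7 handed to guests):

```ini
# /etc/default/grub -- keep general host scheduling off the guest CPUs
GRUB_CMDLINE_LINUX="... isolcpus=2-7"

# nova.conf -- only allow nova to place guest vCPUs on the isolated set
[DEFAULT]
vcpu_pin_set = 2-7
```

The CPU ranges are illustrative; the point is that the kernel and nova agree on who owns which cores.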
*** mdih has joined #openstack-helm20:01
*** yamamoto has joined #openstack-helm20:05
<anticw> cfriesen: dedicated means what to you in this case?  20:08
*** yamamoto has quit IRC20:11
<cfriesen> anticw: "hw:cpu_policy=dedicated" in the instance flavor  20:18
<cfriesen> anticw: the intent is to map a guest vCPU 1:1 with a host CPU  20:18
<cfriesen> anticw: this will cause qemu to affine the linux thread corresponding to the guest vCPU to run exclusively on that host CPU, and we want to prevent anything else from running on that host CPU  20:20
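For reference, the flavor side of the policy cfriesen is quoting looks like this (the flavor name is hypothetical):

```shell
# Request 1:1 vCPU-to-pCPU pinning for all instances of this flavor
openstack flavor set m1.pinned --property hw:cpu_policy=dedicated
```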
<anticw> "we want to prevent anything else from running on that host CPU" i think is a bad idea  20:33
<anticw> if you have some work on that physical machine that needs to be run, let it run  20:33
<anticw> even if it's k8s, etc  20:33
<anticw> cpu affinity can steer things away from that, sure, but you don't want to prevent it  20:33
<anticw> or else you get resource starvation or locks held for too long (things break)  20:33
<anticw> so use cpu affinity to steer k8s pods away from those cpus, but don't prevent it  20:34
<cfriesen> anticw: we have this working now with a non-containerized environment.  my question is how to ensure that no k8s pods will ever run on those host CPUs (while also having k8s report the correct amount of resources for that node).  So if I have 8 cpus total and i want to reserve 6 for instances, then k8s should only "see" 2 CPUs for running pods.  20:41
<anticw> cfriesen: and i'm saying that's a bad idea  20:41
<anticw> you don't want to do that  20:41
<anticw> if k8s needs to do something, and you prevent it running ... badness will happen  20:41
<anticw> so don't prevent it, just let cpu affinity have k8s avoid those cpus  20:42
<cfriesen> anticw: when you say "let cpu affinity have k8s avoid those cpus", are you talking about just using taskset or something?  20:43
<anticw> whatever mechanism works best, i haven't looked at what makes sense in this case  20:43
<cfriesen> anticw: if I use taskset, won't k8s still report to the master node that it has 8 cpus when really it can only run on 2?  20:43
<anticw> allow k8s to do stuff if it has to, have it avoid the cpus you dedicate for VMs  20:43
*** evin has quit IRC20:43
<anticw> probably  20:44
<anticw> from a resource accounting PoV perhaps you can have the nova ds on each host 'claim' 6 CPUs?  20:45
<cfriesen> that has potential, I think  20:46
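anticw's 'claim' idea maps onto ordinary pod resource requests: if the nova-compute DaemonSet requests the CPUs that pinned guests will actually burn, the scheduler's per-node accounting stays honest even though qemu, not the pod's own threads, consumes them. A hypothetical fragment of such a container spec (whether the OSH nova chart exposes this as a value is not confirmed here):

```yaml
# Hypothetical: on an 8-CPU host with 6 CPUs dedicated to guests,
# nova-compute claims those 6 so k8s only schedules against the other 2
resources:
  requests:
    cpu: "6"
  limits:
    cpu: "6"
```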
<anticw> your concern is that in a mixed env you'll have expensive pods running taking cycles away from kvm processes?  20:48
<anticw> if so, can you nice things so that kvm gets priority?  20:49
<cfriesen> anticw: in the context of things like software routers we really cannot afford *anything* else running on those host CPUs.  (This is a typical NFV usecase, and the cloud operator will generally do significant work to tune the host to ensure that OS-level work gets done on other host CPUs.)  20:51
*** eeiden has joined #openstack-helm20:51
<cfriesen> anticw: in the non-containerized environment, the rest of the system is affined to a small number of pCPUs, and only qemu threads are allowed to run on the "dedicated" pCPUs  20:52
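On the kubernetes side, the kubelet did grow native mechanisms for this around the time of this log: reserved-resource flags shrink what the node advertises, and the static CPU Manager policy (beta in 1.10) keeps reserved cores for system and kube daemons. A sketch of the relevant kubelet flags, with hypothetical values; this is not something the channel verified:

```shell
# Reserve 2 of 8 CPUs for host/kube work and pin Guaranteed pods
# deterministically on the rest via the static CPU manager
kubelet \
  --cpu-manager-policy=static \
  --kube-reserved=cpu=1 \
  --system-reserved=cpu=1
```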
*** jdandrea has quit IRC20:52
*** yamamoto has joined #openstack-helm21:07
*** yamamoto has quit IRC21:13
*** cfriesen has quit IRC21:22
*** eeiden has quit IRC21:33
<openstackgerrit> Pete Birley proposed openstack/openstack-helm master: Gate: add basic cinder tests to gate  https://review.openstack.org/564632  21:44
<openstackgerrit> Pete Birley proposed openstack/openstack-helm master: Gate: add basic cinder tests to gate  https://review.openstack.org/564632  21:44
<openstackgerrit> Pete Birley proposed openstack/openstack-helm master: Deployments: set max_unavailible to 0  https://review.openstack.org/564633  21:58
*** yamamoto has joined #openstack-helm22:09
*** yamamoto has quit IRC22:14
*** felipemonteiro__ has quit IRC22:28
<openstackgerrit> Pete Birley proposed openstack/openstack-helm master: WIP: Bootstrap: heat  https://review.openstack.org/555932  22:39
<openstackgerrit> Pete Birley proposed openstack/openstack-helm master: WIP: Bootstrap: heat  https://review.openstack.org/555932  23:01
*** yamamoto has joined #openstack-helm23:11
*** yamamoto has quit IRC23:16

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!