In order for collaborative editing and copy/paste to function correctly on Kubernetes, it is vital that all users editing the same document, and all clipboard requests for that document, end up being served by the same pod. With the WOPI protocol, the HTTPS URL includes a unique identifier (WOPISrc) for the document, so load balancing can be done on WOPISrc: every URL that contains the same WOPISrc is sent to the same pod.
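For illustration only: a COOL request carries the WOPISrc as a query parameter that the ingress can hash on. The hostname and file ID below are hypothetical:

```
wss://collabora.example.com/cool/<encoded-document-uri>/ws?WOPISrc=https%3A%2F%2Fnextcloud.example.com%2Fwopi%2Ffiles%2F42
```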
1. Install helm
2. Set up a Kubernetes Ingress Controller
A. Nginx: Install the Nginx Ingress Controller.
B. HAProxy: Install the HAProxy Kubernetes Ingress Controller.
Note:
OpenShift uses a minimized version of HAProxy called Router that does not support all of HAProxy's functionality, but COOL needs its advanced annotations. It is therefore recommended to deploy the HAProxy Kubernetes Ingress in the collabora namespace.
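For reference, a typical helm-based install of either controller might look like the following sketch (the repo URLs are the upstream defaults; release names and namespaces are assumptions to adapt to your cluster):

```sh
# A. Nginx Ingress Controller
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace

# B. HAProxy Kubernetes Ingress Controller, in the collabora namespace as recommended above
helm repo add haproxytech https://haproxytech.github.io/helm-charts
helm install haproxy-ingress haproxytech/kubernetes-ingress --namespace collabora --create-namespace
```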
3. Create a my_values.yaml for the helm chart (if your setup differs, take a look at the chart's defaults in ./collabora-online/values.yaml):
A. HAProxy:
```yaml
replicaCount: 3

ingress:
  enabled: true
  className: "haproxy"
  annotations:
    haproxy.org/timeout-tunnel: "3600s"
    haproxy.org/backend-config-snippet: |
      balance url_param WOPISrc check_post
      hash-type consistent
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific

image:
  tag: "latest"

autoscaling:
  enabled: false

collabora:
  aliasgroups:
    - host: "https://example.integrator.com:443"
  extra_params: --o:ssl.enable=false --o:ssl.termination=true

resources:
  limits:
    cpu: "1800m"
    memory: "2000Mi"
  requests:
    cpu: "1800m"
    memory: "2000Mi"
```
B. Nginx:
```yaml
replicaCount: 3

ingress:
  enabled: true
  className: "nginx"
  annotations:
    nginx.ingress.kubernetes.io/upstream-hash-by: "$arg_WOPISrc"
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific

image:
  tag: "latest"

autoscaling:
  enabled: false

collabora:
  aliasgroups:
    - host: "https://example.integrator.com:443"
  extra_params: --o:ssl.enable=false --o:ssl.termination=true

resources:
  limits:
    cpu: "1800m"
    memory: "2000Mi"
  requests:
    cpu: "1800m"
    memory: "2000Mi"
```
Note:
Horizontal Pod Autoscaling (HPA) is disabled for now, because after scaling it breaks collaborative editing and copy/paste. Therefore, please set replicaCount according to your needs.
If you have a setup with multiple hosts and aliases, set aliasgroups in my_values.yaml:
```yaml
collabora:
  aliasgroups:
    - host: "<protocol>://<host-name>:<port>"
      # if there are no aliases you can ignore the below line
      aliases: ["<protocol>://<its-first-alias>:<port>", "<protocol>://<its-second-alias>:<port>"]
    # more hosts and alias lists are possible
```
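A filled-in sketch, assuming a hypothetical integrator at https://cloud.example.com that is also reachable under one alias:

```yaml
collabora:
  aliasgroups:
    # hypothetical hostnames, for illustration only
    - host: "https://cloud.example.com:443"
      aliases: ["https://office.example.com:443"]
```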
Specify server_name when the hostname is not directly reachable, for example behind a reverse proxy:

```yaml
collabora:
  server_name: <hostname>:<port>
```
On OpenShift, it is recommended to use the HAProxy deployment instead of the default Router. Add className in the ingress block so that OpenShift uses the HAProxy Ingress Controller instead of the Router:

```yaml
ingress:
  className: "haproxy"
```
4. Install the helm chart using the commands below; it should deploy collabora-online:

```sh
helm repo add collabora https://collaboraonline.github.io/online/
helm install --create-namespace --namespace collabora collabora-online collabora/collabora-online -f my_values.yaml
```
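To apply later changes to my_values.yaml without reinstalling, the release can be upgraded in place:

```sh
helm upgrade --namespace collabora collabora-online collabora/collabora-online -f my_values.yaml
```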
5. Follow this step only if you are using the NodePort service type for HAProxy and/or using minikube for the setup; otherwise skip it.
A. The HAProxy service is deployed as NodePort, so we can access it with the node's IP address. To get the node IP:

```sh
minikube ip
```

Example output:

```
192.168.0.106
```
B. Each container port is mapped to a NodePort port via the Service object. To find those ports:

```sh
kubectl get svc --namespace=haproxy-controller
```

Example output:
```
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
haproxy-ingress   NodePort   10.108.214.98   <none>        80:30536/TCP,443:31821/TCP,1024:30480/TCP
```
In this instance, the following ports were mapped: container port 80 to NodePort 30536, port 443 to NodePort 31821, and port 1024 to NodePort 30480.
6. Additional steps if deploying on minikube for testing:
Get the minikube IP:

```sh
minikube ip
```

Example output:

```
192.168.0.106
```

Add the hostname to /etc/hosts:

```
192.168.0.106 chart-example.local
```
To check if everything is set up correctly, you can run:

```sh
curl -I -H 'Host: chart-example.local' 'http://192.168.0.106:30536/'
```

It should return output similar to the below:

```
HTTP/1.1 200 OK
last-modified: Tue, 18 May 2021 10:46:29
user-agent: COOLWSD WOPI Agent 6.4.8
content-length: 2
content-type: text/plain
```
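You can also fetch the WOPI discovery XML that coolwsd serves at /hosting/discovery, to confirm the service answers with its capabilities (IP and port as in the example above):

```sh
curl -H 'Host: chart-example.local' 'http://192.168.0.106:30536/hosting/discovery'
```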
Install kube-prometheus-stack, a collection of Grafana dashboards and Prometheus rules combined with documentation and scripts, to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator.
Enable the Prometheus ServiceMonitor, rules and Grafana in your my_values.yaml:
```yaml
prometheus:
  servicemonitor:
    enabled: true
    labels:
      release: "kube-prometheus-stack"
  rules:
    enabled: true # will deploy alert rules
    additionalLabels:
      release: "kube-prometheus-stack"

grafana:
  dashboards:
    enabled: true # will deploy default dashboards
```
Note:
Use kube-prometheus-stack as the release name when installing the kube-prometheus-stack helm chart, because we have passed the release=kube-prometheus-stack label in our my_values.yaml. For the Grafana dashboards you may need to enable scanning of the correct namespaces (or ALL), which is controlled by sidecar.dashboards.searchNamespace in the Grafana helm chart (Grafana is part of the Prometheus Operator stack, so the setting is grafana.sidecar.dashboards.searchNamespace).
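A typical install that matches this release name could look like the following (the monitoring namespace is an assumption):

```sh
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack --namespace monitoring --create-namespace
```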
For big setups, you may not want to restart every pod just to modify WOPI hosts. It is therefore possible to set up an additional webserver that serves a ConfigMap, using Remote/Dynamic Configuration:
```yaml
collabora:
  env:
    - name: remoteconfigurl
      value: https://dynconfig.public.example.com/config/config.json

dynamicConfig:
  enabled: true

  ingress:
    enabled: true
    annotations:
      "cert-manager.io/issuer": letsencrypt-zprod
    hosts:
      - host: "dynconfig.public.example.com"
    tls:
      - secretName: "collabora-online-dynconfig-tls"
        hosts:
          - "dynconfig.public.example.com"

  configuration:
    kind: "configuration"
    storage:
      wopi:
        alias_groups:
          groups:
            - host: "https://domain1\\.xyz\\.abc\\.com/"
              allow: true
            - host: "https://domain2\\.pqr\\.def\\.com/"
              allow: true
              aliases:
                - "https://domain2\\.ghi\\.leno\\.de/"
```
Note:
In the current state of COOL, the remoteconfigurl for Remote/Dynamic Configuration only works over HTTPS; see wsd/COOLWSD.cpp.
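Once deployed, you can verify that the dynamic configuration is served at the URL configured above:

```sh
curl https://dynconfig.public.example.com/config/config.json
```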
Where are the pods? Are they ready?

```sh
kubectl -n collabora get pod
```

Example output:

```
NAME                                READY   STATUS    RESTARTS   AGE
collabora-online-5fb4869564-dnzmk   1/1     Running   0          28h
collabora-online-5fb4869564-fb4cf   1/1     Running   0          28h
collabora-online-5fb4869564-wbrv2   1/1     Running   0          28h
```
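To inspect the logs of the coolwsd pods (assuming the deployment is named collabora-online, matching the pod names above):

```sh
kubectl -n collabora logs deployment/collabora-online
```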
Which outside host are the multiple coolwsd servers actually answering on?

```sh
kubectl get ingress -n collabora
```

Example output:

```
NAMESPACE   NAME               HOSTS                 ADDRESS   PORTS
collabora   collabora-online   chart-example.local             80
```
To uninstall the helm chart:

```sh
helm uninstall collabora-online -n collabora
```