# Advanced Deployment Guide

{% hint style="info" %}
**Recommended** for **Scalable Activation** Across Organization
{% endhint %}

<details>

<summary>Helm Chart Installation</summary>

## **Helm Chart for Kubernetes**

{% hint style="danger" %}
Whether hosted in the cloud or locally, this installation method is considered an **advanced install option**.
{% endhint %}

You can **install** MOTAR in a high-availability configuration and optionally tune MOTAR components for **advanced deployments** (production, multi-node Kubernetes clusters).

This Helm chart installs MOTAR and all of its dependencies in a Kubernetes cluster. It also employs several non-MOTAR supporting charts, such as:

* A custom Minio chart branched from the Minio public chart.
* The Bitnami PostgreSQL chart.
* The ingress-nginx Kubernetes chart.
* The nats public chart.

### Prerequisites

* [ ] Kubernetes 1.23+
* [ ] Helm 3.8.0+
* [ ] PV provisioner support in the underlying infrastructure
* [ ] ReadWriteMany volumes for deployment scaling

The MOTAR Helm chart supports deployments to Kubernetes clusters hosted with nearly any cloud provider, on local servers, or in self-hosted clouds. As such, many of the values are left for the installer to fill in.

The default **values.yaml** should provide enough information to prepare the installation for your circumstances. If you find anything confusing or unintuitive, please reach out: [contact-us](https://docs.motar.com/support/contact-us "mention").

Below we provide some general **recommended values**, followed by recommended values for specific deployment types.

The values you provide to the deployment override the chart defaults wherever they are set.

{% hint style="info" %}
If MOTAR is already installed, proceed to Section X to upgrade MOTAR with Helm.
{% endhint %}

{% hint style="warning" %}
If using the Dynepic-provided Helm chart and images, you will need to authenticate with the Dynepic registry to pull the Helm chart, and provide a Secret in the cluster containing the dockerconfigjson needed to pull the images. See the **imagePullSecret** section below.
{% endhint %}
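As a sketch of that step, the Secret can be assembled by hand (the secret name `motar-pull-secret` and the user/token values below are placeholders; use the name your values file references). This mirrors the dockerconfigjson that `kubectl create secret docker-registry` would generate:

```sh
# Build the .dockerconfigjson payload Kubernetes expects for a registry
# pull secret (placeholder credentials; substitute your own).
REGISTRY=harbor.dynepic.net
REG_USER=your_user
REG_PASS=your_token
AUTH=$(printf '%s:%s' "$REG_USER" "$REG_PASS" | base64)
printf '{"auths":{"%s":{"auth":"%s"}}}' "$REGISTRY" "$AUTH" > dockerconfig.json

# Then create the Secret from the file (placeholder secret name):
# kubectl -n motar create secret generic motar-pull-secret \
#   --type=kubernetes.io/dockerconfigjson \
#   --from-file=.dockerconfigjson=dockerconfig.json
```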

### Step 1 - Configuration

* **Create** a values file your\_values.yaml in a known directory
  * Replace your\_directory with your details

```sh
touch /your_directory/your_values.yaml
```

* **Add** Installation Values to your\_values.yaml

```sh
nano /your_directory/your_values.yaml
```

* **Add** the following values to your\_values.yaml. Update any values that contain 'you' or 'your'.
* Then, **save** the file with CTRL-X (if using nano).

Sample **values.yaml** (Single Node Using Nodeport)

{% code overflow="wrap" %}

```yaml
global:
  motarImageRegistry: harbor.dynepic.net
  domainName: your_domain.com  
  initialAdminEmail: you@email.com
  environment: development
  mailConfig:
  reportTargets:
    - you@email.com
  securityTargets:
    - you@email.com
  s3Config:
    source: minio
    url: api-minio.your_domain.com

motar:
  ingress:
    className: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer # If using certmanager clusterIssuer
      nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,clientid
      nginx.ingress.kubernetes.io/enable-cors: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
    tls:
      enabled: true
      tlsSecret: "webapps-cert" # Substitute with your tlsSecret
  serviceAccount:
    create: true
    name: motar
  minio:
    ingress:
      enabled: true
      className: nginx
      hostname: api-minio.your_domain.com
      annotations:
        apiVersion: networking.k8s.io/v1
        className: nginx
        cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer # If using CertManager/ClusterIssuer
        nginx.ingress.kubernetes.io/proxy-body-size: "0"
      tls:
        enabled: true
        tlsSecret: "api-minio-cert" # Substitute with your tlsSecret
    consoleIngress:
      enabled: true
      className: "nginx"
      annotations:
        apiVersion: networking.k8s.io/v1
        className: nginx
        cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer # If using CertManager/ClusterIssuer
        nginx.ingress.kubernetes.io/proxy-body-size: "0"
      hosts:
        - console-minio.your_domain.com
      tls:
        enabled: true
        tlsSecret: "console-minio-cert" # Substitute with your tlsSecret

storageClass:
  enabled: true
  provisioner: driver.longhorn.io
  parameters:
    fsType: ext4
    numberOfReplicas: "1"
    staleReplicaTimeout: "30"

ingress-nginx:
  enabled: true
  controller:
    service:
      type: "NodePort"
      nodePorts:
        http: 31080
        https: 31443
minio:
  auth:
    existingSecret: "motar-s3-auth"
    rootUserSecretKey: "rootUser"
    rootPasswordSecretKey: "rootPassword"

postgresql:
  sslDisabled: true
  auth:
    existingSecret: "motar-pg-auth"
    secretKeys:
      adminPasswordKey: "postgres-password"
      motarPasswordKey: "motar-password"
```

{% endcode %}

Sample **values.yaml** (AWS EKS)

<pre class="language-yaml" data-title="aws_values.yaml" data-overflow="wrap"><code class="lang-yaml"><strong>global:
</strong>  motarImageRegistry: harbor.dynepic.net
  domainName: your_domain.com
  initialAdminEmail: you@email.com
  environment: development
  mailConfig:
  reportTargets:
    - you@email.com
  securityTargets:
    - you@email.com
  s3Config:
    source: minio
    url: api-minio.your_domain.com

motar:
  ingress:
    className: nginx
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer # If using certmanager clusterIssuer
      nginx.ingress.kubernetes.io/cors-allow-headers: DNT,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization,clientid
      nginx.ingress.kubernetes.io/enable-cors: "true"
      nginx.ingress.kubernetes.io/proxy-body-size: "0"
    tls:
      enabled: true
      tlsSecret: "webapps-cert" # Substitute with your tlsSecret
  serviceAccount:
    create: true
    name: motar
  minio:
    ingress:
      enabled: true
      className: nginx
      hostname: api-minio.your_domain.com
      annotations:
        apiVersion: networking.k8s.io/v1
        className: nginx
        cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer # If using CertManager/ClusterIssuer
        nginx.ingress.kubernetes.io/proxy-body-size: "0"
      tls:
        enabled: true
        tlsSecret: "api-minio-cert" # Substitute with your tlsSecret
    consoleIngress:
      enabled: true
      className: "nginx"
      annotations:
        apiVersion: networking.k8s.io/v1
        className: nginx
        cert-manager.io/cluster-issuer: letsencrypt-cluster-issuer # If using CertManager/ClusterIssuer
        nginx.ingress.kubernetes.io/proxy-body-size: "0"
      hosts:
        - console-minio.your_domain.com
      tls:
        enabled: true
        tlsSecret: "console-minio-cert" # Substitute with your tlsSecret

storageClass:
  enabled: true
  provisioner: kubernetes.io/aws-ebs
  parameters:
    fsType: ext4
    type: gp3

ingress-nginx:
  enabled: true
  controller:
    config:
      use-proxy-protocol: "false"
    service:
      type: LoadBalancer
      external:
        enabled: false
      internal:
        enabled: true
        externalTrafficPolicy: Local
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-internal: "false"
          service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
          service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
          service.beta.kubernetes.io/aws-load-balancer-type: nlb-ip

minio:
  auth:
    existingSecret: "motar-s3-auth"
    rootUserSecretKey: "rootUser"
    rootPasswordSecretKey: "rootPassword"

postgresql:
  sslDisabled: true
  auth:
    existingSecret: "motar-pg-auth"
    secretKeys:
      adminPasswordKey: "postgres-password"
      motarPasswordKey: "motar-password"    

</code></pre>

### Step 2 - Prepare the namespace

* Create the namespace

```sh
kubectl create ns motar
```

* **Apply** the secrets for the S3 storage and Postgres solutions you intend to use
  * *This can be **external s3** with your own solution or our **internal minio chart**; either case requires secrets for the rootUser and rootPassword*
* You can apply directly from a created **yaml** file like so:

```sh
kubectl apply -f mysecrets.yaml -n motar
```

Example YAML:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: motar-s3-auth
  labels:
    app.kubernetes.io/part-of: motar
type: Opaque
data:
  rootUser: base64enc_value
  rootPassword: base64enc_value
```
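The values under `data` must be base64-encoded. For example, encoding sample credentials (use `-n` so a trailing newline is not encoded into the value):

```sh
# Encode each secret value for the data: section
echo -n 'minioUser' | base64            # prints: bWluaW9Vc2Vy
echo -n 'topsecretPassword' | base64
```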

Example Literals:

{% code overflow="wrap" %}

```sh
kubectl -n motar create secret generic --from-literal=motar-password=supersecret --from-literal=postgres-password=topsecret motar-pg-auth

kubectl -n motar create secret generic --from-literal=rootUser=minioUser --from-literal=rootPassword=topsecretPassword motar-s3-auth
```

{% endcode %}

{% hint style="info" %}
If you would like to apply a **tls certificate** for use on your ingress, this is also the point at which you should do so (if not using **CertManager**).
{% endhint %}
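For example, a TLS secret matching the `tlsSecret: "webapps-cert"` name used in the sample values can be created from your certificate and key files (the filenames here are placeholders):

```sh
kubectl -n motar create secret tls webapps-cert --cert=your_tls.crt --key=your_tls.key
```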

### Step 3 - Deploy the Chart

* Assuming you have created the values file as appropriate above (or for your specific use case), the following commands should be sufficient to get you from start to finish.

{% code overflow="wrap" %}

```sh
kubectl create ns motar
helm install motar --version 3.1.0 oci://harbor.dynepic.net/helmrelease/motar -f your_values.yaml -n motar
```

{% endcode %}
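You can watch the rollout, and later apply values changes (or move to a newer chart release) with `helm upgrade`; this is a sketch, so adjust `--version` to the release you are installing:

```sh
# Watch the pods come up
kubectl get pods -n motar -w

# Later, to apply changed values or a newer chart version:
helm upgrade motar --version 3.1.0 oci://harbor.dynepic.net/helmrelease/motar -f your_values.yaml -n motar
```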

### Step 4 - Access the Site

* At this step, if you have correctly **configured** your networking to allow DNS routing to the hosting device and everything is up and running, you should be able to access [https://admin.motarghost.com](https://admin.motarghost.com) (substituting your own domain).

You have completed Helm setup and may continue to Step [#id-3-access-your-motar-instance](https://docs.motar.com/commercial-use-installation-and-setup-tutorial#id-3-access-your-motar-instance "mention")

## NodePort

{% hint style="warning" %}
If using a NodePort configuration for your ingress controller, you will need to ensure your networking routes traffic to the correct port within your deployed environment. Below is a sample way of setting this up using an nginx reverse proxy.
{% endhint %}

* **Install** the required services.

```bash
sudo apt update
sudo apt install nginx
sudo apt install libnginx-mod-stream
sudo vim /etc/nginx/nginx.conf
```

* **Include** the proxy passthrough.
  * Append `include /etc/nginx/passthrough.conf;` outside the http block. It should look something like this:

```nginx
http {
  # default configuration
}
include /etc/nginx/passthrough.conf;
```

* **Declare** the proxy passthrough.
  * Now we need to create the /etc/nginx/passthrough.conf and we need to ensure the ports match the NodePort used in the value file. It will look something like:

```nginx
stream {
  server {
    listen 80;
    proxy_pass 127.0.0.1:31080;
  }
  server {
    listen 443;
    proxy_pass 127.0.0.1:31443;
  }
}
```
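After both files are in place, validate the configuration and reload nginx so the stream proxy takes effect:

```sh
sudo nginx -t
sudo systemctl reload nginx
```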

* **Forward** http to https with reverse proxy

{% hint style="info" %}
If you would like to ensure all http traffic to your domain name is redirected to https, you will further wish to do the following.
{% endhint %}

* Add the appropriate 301 to the sites-available.

```bash
sudo vim /etc/nginx/sites-available/your.domain
```

```nginx
server {
  if ($host ~ ^[^.]+\.your\.domain$) {
    return 301 https://$host$request_uri;
  }

  listen 80;
}
```

* Remove the default nginx site.

```bash
sudo rm /etc/nginx/sites-enab...
```

### **Troubleshooting FAQ (As Needed)**

* If you are getting ‘bad address at line \[any number] of /etc/dnsmasq.conf’ this could be from not having set a HOST\_IP in the .env file. Please open your .env and add in your HOST\_IP.

{% code title="Example Error:" %}

```sh
dnsmasq     | dnsmasq: bad address at line 680 of /etc/dnsmasq.conf
```

{% endcode %}

</details>

<details>

<summary>MOTAR AI Engine with Zero Trust Orchestration Layer (beta) Installation</summary>

> **Note:** This is an advanced configuration.

Our MOTAR system includes a dedicated query server that requires an LLM. The query server interfaces with the LLM using OLLAMA API endpoints. We provide an OCI image containing all necessary components for the query server. To leverage your NVIDIA GPU, ensure the `nvidia-container-toolkit` is installed. Follow the steps appropriate for your environment \[AI-NVIDIA-TOOLKIT.md].

If you prefer to manage dependencies yourself, the query server also requires a Redis database and the following Docker environment variables:

```env
LLM_API_HOST:  ${LLM_API_HOST} 
PG_USER: ${POSTGRES_USER}
PG_PASS: ${POSTGRES_PASSWORD}
PG_DBNAME: ${POSTGRES_DB}
PG_HOST: ${POSTGRES_HOST}
PG_PORT: ${POSTGRES_PORT}
MINIO_ENDPOINT: ${MINIO_ENDPOINT}
MINIO_ACCESS_KEY: ${MINIO_ROOT_USER}
MINIO_SECRET_KEY: ${MINIO_ROOT_PASSWORD}
REDIS_HOST: ${REDIS_HOST}
REDIS_PORT: ${REDIS_PORT}
REDIS_DB: ${REDIS_DB}
```

* `PG_*` variables: Access to MOTAR's PostgreSQL database.
* `MINIO_*` variables: Credentials for your MOTAR MinIO instance.

***

#### Example: Running OLLAMA and Redis Containers

**OLLAMA:**

```yaml
ollama:
    image: ollama/ollama:latest
    container_name: ollama
    restart: unless-stopped
    runtime: nvidia
    ports:
        - "11434:11434"
    volumes:
        - ollama_models:/models
    environment:
        LOG_LEVEL: debug
        OLLAMA_MODELS: /models
        OLLAMA_DEBUG: 1
    entrypoint: ["/bin/bash"]
    command: ["-c", "dpkg --configure -a && apt update && apt install curl -y && sleep 5 && ollama serve & sleep 15 && ollama pull llama3 && pkill ollama && ollama serve"]
    healthcheck:
        test: ["CMD-SHELL", "curl -sf http://localhost:11434/api/tags | grep llama3"]
        interval: 10s
        timeout: 5s
        retries: 60
        start_period: 15s
```

**Redis:**

```yaml
redis:
    image: redis:7-alpine
    container_name: redis
    restart: unless-stopped
    volumes:
        - redis_data:/data
```
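The two snippets above reference the named volumes `ollama_models` and `redis_data`; if you assemble them into a single compose file, declare those volumes at the top level:

```yaml
volumes:
    ollama_models:
    redis_data:
```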

***

#### MOTAR Configuration (External Query Server)

Ensure MOTAR can access your query server. Example values file (AI section only):

```yaml
motar:
    ai:
        enabled: false  # Only deploys AI dependencies via Helm
        queryServer:
            external:
                enabled: true
                url: http://IP.OF.QUERY.SERVER
                port: 8000
```

***

#### MOTAR Configuration (In-Cluster Deployment)

To run everything inside your cluster, use a node configured for NVIDIA GPU passthrough. Apply a taint (e.g., `ai`) to schedule the query server and OLLAMA. Example values:

```yaml
ai:
    enabled: true
    llmServer:
        NodeSelector:
            node-role.kubernetes.io/ai: "true"
        runtimeClassName: "nvidia"
        tolerations:
            - key: "ai"
              operator: "Equal"
              value: "true"
              effect: "NoSchedule"
        storageSize: 10Gi
        models:
            - llama3:latest
        storageClassName: ""
        resourcesPreset: "gpuMedium"
        resources: {}
    queryServer:
        NodeSelector:
            node-role.kubernetes.io/ai: "true"
        tolerations:
            - key: "ai"
              operator: "Equal"
              value: "true"
              effect: "NoSchedule"
        runtimeClassName: "nvidia"
```
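To match the `NodeSelector` and tolerations above, the GPU node needs the corresponding label and taint applied (the node name below is a placeholder):

```sh
kubectl label nodes your-ai-node node-role.kubernetes.io/ai=true
kubectl taint nodes your-ai-node ai=true:NoSchedule
```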

***

> Adjust values and configuration as needed for your environment and deployment strategy.

</details>

### MOTAR Discovery: Connecting an XR Headset&#x20;

When testing MOTAR locally using a Discovery license, it is possible to connect wireless XR headsets to your instance. However, this configuration is advanced and highly dependent on your current networking hardware, networking knowledge, and configuration permissions. <br>

{% hint style="warning" %}

### Proceed with Caution

While none of the following configurations will "break" your system and can be reversed, they may cause a loss of functionality for some users.

***We don't recommend** this configuration if you are **not** comfortable with altering both your network and the headset configuration.*&#x20;
{% endhint %}

Your network and hardware should meet the following criteria:

* An accessible and configurable router
* Developer permissions on the XR headset
* Admin permissions on the hosting computer

Connecting a wireless XR device to your local MOTAR test install may require the following changes based on your network and headset hardware:

* Router Based Configuration - *requires a configurable network router*
  * Customized record(s) added to the network router
* Device Based Configuration
  * Hosting and configuring files directly on an XR headset
  * Update local host files and/or updates to primary DNS
  * Local Domain name service changes

### Meta Quest 3 Configuration

*Coming soon.*

***

> ### Following the Installation and Tutorial Guide?
>
> If you are following along with the tutorial, you should be ready to access and configure your MOTAR Instance. Click the link below to return:
>
> [#id-3-access-your-motar-instance](https://docs.motar.com/commercial-use-installation-and-setup-tutorial#id-3-access-your-motar-instance "mention")
>
> \
> [#id-3-access-your-motar-instance](https://docs.motar.com/government-use-installation-and-setup-tutorial#id-3-access-your-motar-instance "mention")
