Install Palette VerteX with Management Appliance
Follow the instructions to install Palette VerteX using the VerteX Management Appliance on your infrastructure platform.
Size Guidelines
This section lists resource requirements for VerteX for various capacity levels. In VerteX, the terms small, medium, and large are used to describe the instance size of worker pools that Palette is installed on. The following table lists the resource requirements for each size.
The recommended maximum number of deployed nodes and clusters in the environment should not be exceeded. We have tested the performance of VerteX with the recommended maximum number of deployed nodes and clusters. Exceeding these limits can negatively impact performance and result in instability. The active workload limit refers to the maximum number of active nodes and pods at any given time.
| Size | Total Nodes | Node CPU | Node Memory | Node Storage (Total) | Total Deployed Workload Cluster Nodes | Deployed Clusters with 10 Nodes |
|---|---|---|---|---|---|---|
| Small | 3 | 8 | 16 GB | 750 GB | 1000 | 100 |
| Medium (Recommended) | 3 | 16 | 32 GB | 750 GB | 3000 | 300 |
| Large | 3 | 32 | 64 GB | 750 GB | 5000 | 500 |
The Spectro manifest requires approximately 10 GB of storage. VerteX deployed clusters use the manifest to identify what images to pull for each microservice that makes up VerteX.
Instance Sizing
| Configuration | Active Workload Limit |
|---|---|
| Small | Up to 1000 nodes each with 30 pods (30,000 pods) |
| Medium (Recommended) | Up to 3000 nodes each with 30 pods (90,000 pods) |
| Large | Up to 5000 nodes each with 30 pods (150,000 pods) |
Limitations
- If you choose to use an external registry for your pack bundles, only public image registries are supported.
Prerequisites
- ISO management software installed on your local machine, such as `mkisofs` or `genisoimage`.
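  You can quickly confirm that one of these tools is available before building any ISOs; a minimal check, assuming a Unix-like shell:

  ```shell
  # Prints the path of the first tool found; no output means neither is installed.
  command -v mkisofs || command -v genisoimage
  ```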
- Access to the Artifact Studio to download the Palette VerteX ISO.

  **Tip:** If you do not have access to Artifact Studio, contact your Spectro Cloud representative or open a support ticket.
- Palette VerteX can be installed on a single node or on three nodes. For production environments, we recommend provisioning three nodes in advance for the Palette VerteX installation. We recommend the following resources for each node. Refer to the Size Guidelines section for additional sizing information.
  - 8 CPUs per node.

  - 16 GB memory per node.

  - Two disks per node.
    - The first disk must be 300 GB minimum and 500 GB is recommended. This disk is used for the Palette VerteX ISO stack. You specify the device in the `stylus_config.yaml` file as guided during the Install Palette VerteX steps.

      If the device is not specified, the default value is `auto`. This means the installer selects the largest available drive, which may not be the desired behavior, especially in multi-drive environments.

    - The second disk must be at least 500 GB and is used for the storage pool. The default device selected is `/dev/sdb`. You can change the default device during the cluster creation steps in Local UI.

      **Danger:** The second disk is wiped as part of the installation process. If using an existing disk, ensure that you back up any important data before proceeding.
- At least one removable media connection must be available to attach the Palette VerteX ISO. This can be a physical or virtual connection depending on your infrastructure provider.
- The following network ports must be accessible on each node for Palette to operate successfully.

  - TCP/443: Must be open between all Palette nodes and accessible for user connections to the Palette management cluster.

  - TCP/6443: Outbound traffic from the Palette management cluster to the deployed cluster's Kubernetes API server.
- SSH access must be available to the nodes used for Palette installation.
- Relevant permissions to install Palette on the nodes, including permission to attach or mount an ISO and set nodes to boot from it.

  **Warning:** The ISO is only supported on Unified Extensible Firmware Interface (UEFI) systems. Ensure you configure the nodes to boot from the ISO in UEFI mode.
- Palette Management Appliance supports Secure Boot for Dell servers with UEFI and Hewlett Packard Enterprise iLO 5. Learn how to configure and install Secure Boot for Palette Management Appliance below.
How to install Secure Boot on Hewlett Packard Enterprise iLO 5
Before you begin, ensure that you have iLO 5 access with privileges to launch the remote console and change BIOS settings. You also need the `MOK.der` certificate file on your local computer. Skip to step 4 if you have already downloaded the certificate file.

1. Navigate to Artifact Studio.
2. Select the version corresponding to your VerteX installer. Then, select Show Artifacts. The artifact list appears.
3. Download the MOK Key for Secure Boot file.
4. Power on or reboot the server. When prompted during Power-On Self-Test (POST), press F9 to enter System Utilities.
5. Select System Configuration and press ENTER.
6. Select BIOS/Platform Configuration (RBSU) > Server Security > Secure Boot Settings > Advanced Secure Boot Options.
7. Select DB – Allowed Signatures Database > Enroll Signature. If Secure Boot is currently enabled, the Enroll Signature option will be unavailable. Temporarily disable Secure Boot and repeat the process.
8. Drag the `MOK.der` file from your desktop onto the iLO Remote Console window. iLO mounts it as a virtual USB device automatically.
9. Confirm any prompts.
10. Verify that the new entry appears under DB – Allowed Signatures Database > View Signatures.
11. Press ESC to exit the menus until the Save and Exit option is available.
12. Save your changes. Exit the menu and reboot the server.
How to install Secure Boot on Dell servers with UEFI
Before you begin, you need the `MOK.der` certificate file on your local computer. Skip to step 4 if you have already downloaded the certificate file.

1. Navigate to Artifact Studio.
2. Select the version corresponding to your VerteX installer. Then, select Show Artifacts. The artifact list appears.
3. Download the MOK Key for Secure Boot file.
4. Power on the server. Execute the following command to create a virtual CD/DVD drive containing an ISO file with the `MOK.der` certificate. Alternatively, you can save the file to a FAT32-formatted USB drive. If `mkisofs` is not installed, refer to the alternative command after these steps.

   ```shell
   mkisofs -output key.iso -volid cidata -joliet -rock MOK.der
   ```

5. Reboot the server. When the Dell logo appears, press F2. The System Setup menu opens.
6. Select System BIOS > Boot Settings.
7. Ensure that the Boot Mode is set to UEFI.
8. Press ESC to return to Boot Settings.
9. Select System Security > Secure Boot Settings.
10. Toggle Secure Boot to Enabled and Secure Boot Policy to Custom.
11. Select Secure Boot Custom Policy Settings > Authorized Signature Database (db).
12. Select Import New Entry. Then, select the virtual CD/DVD drive or USB drive containing the `MOK.der` file.
13. Save your changes. Press ESC to return to Authorized Signature Database (db).
14. Select View Entries. The `MOK.der` file shows in the database as DRBD Module Signing.
15. Save your changes. Exit the menu and reboot the server.
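If `mkisofs` is not installed, `genisoimage` (listed in the Prerequisites) accepts the same options for step 4. A minimal equivalent sketch, assuming the `MOK.der` file is in your current directory:

```shell
# Builds the same key.iso using genisoimage instead of mkisofs.
genisoimage -output key.iso -volid cidata -joliet -rock MOK.der
```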
- You can choose to use either an internal Zot registry that comes with Palette or an external registry of your choice. If using an external registry, you will need to provide the following information during the Palette installation process.

  - The DNS/IP endpoint and port for the external registry.
    - Ensure the nodes used to host the Palette management cluster have network access to the external registry server.
  - The username for the registry.
  - The password for the registry.
  - (Optional) The Certificate Authority (CA) certificate that was used to sign the external registry certificate in Base64 format.

  How to get Base64 encoded entries for a certificate

  You can get the Base64 encoded entry from your certificate by using the following command. Replace `<certificate-file>` with the filename of your certificate file.

  ```shell
  base64 --wrap 0 <certificate-file>
  ```
- If you have an Ubuntu Pro subscription, you can provide the Ubuntu Pro token during the Palette installation process. This is optional but recommended for security and compliance purposes.
- A virtual IP address (VIP) must be available for the Palette management cluster. This is assigned during the Palette installation process and is used for load balancing and high availability. The VIP must be accessible to all nodes in the Palette management cluster.

  How to discover free IPs in your environment

  You can discover free IPs in your environment by using a tool like `arping` or `nmap`. For example, you can issue the following command to probe a CIDR block for free IP addresses.

  ```shell
  nmap --unprivileged -sT -Pn 10.10.200.0/24
  ```

  This command will scan the CIDR block and output any hosts it finds.

  Example nmap output

  ```shell
  Nmap scan report for test-worker-pool-cluster2-6655ab7a-tyuio.company.dev (10.10.200.2)
  Host is up.
  All 1000 scanned ports on test-worker-pool-cluster2-6655ab7a-tyuio.company.dev (10.10.200.2) are in ignored states.
  Not shown: 1000 filtered tcp ports (no-response)
  ```

  For any free IP addresses, you can use `arping` to double-check if the IP is available.

  Example arping command

  ```shell
  arping -D -c 4 10.10.200.101
  ```

  Example arping output

  ```shell
  ARPING 10.10.200.101 from 0.0.0.0 ens103
  Sent 4 probes (4 broadcast(s))
  Received 0 response(s)
  ```

  If you receive no responses like the example output above, the IP address is likely free.
Install Palette VerteX
1. Download the Palette VerteX ISO from the Artifact Studio. Refer to the Artifact Studio guide for instructions on how to access and download the ISO.
2. Load the Palette VerteX ISO to a bootable device, such as a USB stick, or upload the ISO to a datastore in your VMware environment. You can use several software tools to create a bootable USB drive, such as balenaEtcher.

   - For VMware vSphere, you can upload the Palette VerteX ISO to a datastore using the vSphere Client or the `govc` CLI tool. Refer to the vSphere or govc documentation for more information.
   - For Bare Metal, you can use tools like `scp` or `rsync` to transfer the Palette VerteX ISO to the nodes, or use a USB drive to boot the nodes from the ISO.
   - For Machine as a Service (MAAS), you can upload and deploy ISOs using Packer. Refer to the MAAS documentation for more information.

   Ensure that the Palette VerteX ISO is accessible to all nodes that will be part of the Palette VerteX management cluster.
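   For the vSphere option, the following is a minimal `govc` sketch for uploading the ISO to a datastore. The connection variables, datastore name, and file paths are placeholders for illustration; adjust them for your environment.

   ```shell
   # Hypothetical connection details and paths - replace with your own values.
   export GOVC_URL=https://vcenter.example.com
   export GOVC_USERNAME=administrator@vsphere.local
   export GOVC_PASSWORD='<your-password>'
   export GOVC_INSECURE=true   # only if vCenter uses a self-signed certificate

   # Upload the ISO to the target datastore.
   govc datastore.upload -ds datastore1 ./palette-vertex.iso iso/palette-vertex.iso
   ```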
3. Attach the Palette VerteX ISO to the nodes and ensure the boot order is set to boot from the Palette VerteX ISO first.

   For example, in VMware vSphere, the VMs will have the Palette VerteX ISO in CD/DVD drive 1. Refer to the documentation of your infrastructure provider for specific instructions on how to attach and boot from an ISO.
4. Restart the nodes to start the installation process.
5. Once the nodes have rebooted and entered the GRand Unified Bootloader (GRUB) menu, select the Palette eXtended Kubernetes Edge Install (manual) option and press ENTER.

   **Caution:** Ensure that you select the option within the first five seconds of the GRUB menu appearing, as it will automatically proceed with the default installation option after this time.
6. Once the nodes have finished booting, in the terminal, issue the following command to list the block devices.

   ```shell
   lsblk --paths
   ```

   Use the output to identify the device name to use for the Palette VerteX ISO stack. For example, `/dev/sda`.

   Example output

   ```shell
   NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
   /dev/loop0    7:0    0    1G  1 loop /run/rootfsbase
   /dev/sda      8:0    0  250G  0 disk
   /dev/sdb      8:16   0 5000G  0 disk
   /dev/sr0     11:0    1 17.3G  0 rom  /run/initramfs/live
   ```
7. If there are any partitions on the device you plan to use for the installation, you must delete them before proceeding. For example, if the device is `/dev/sda`, issue the following command to delete all partitions on the device.

   ```shell
   wipefs --all /dev/sda
   ```

   **Danger:** Deleting partitions will erase all data on the device. Ensure that you back up any important data before proceeding.
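   If you want to review what would be removed before wiping anything, `wipefs` supports a dry run. A minimal sketch, assuming the same example device:

   ```shell
   # --no-act lists the signatures that would be erased without modifying the disk.
   wipefs --no-act --all /dev/sda
   ```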
8. Issue the following command to edit the installation manifest.

   ```shell
   vi /oem/stylus_config.yaml
   ```
9. Add the following `install.device` section to your manifest, replacing `<storage-drive>` with the device name identified in step 6.

   ```yaml
   #cloud-config
   cosign: false
   verify: false
   install:
     device: <storage-drive>
     grub-entry-name: "Palette eXtended Kubernetes Edge"
     system:
       size: 8192
   ...
   ```
10. Save the changes and exit the editor.
11. Issue the following command to start the installation process.

    ```shell
    kairos-agent install
    ```
12. Wait for the installation process to complete. This will take at least 15 minutes, depending on the resources available on the nodes. After completion, the nodes will reboot and display the Palette TUI.
13. In the Palette TUI, provide credentials for the initial account. This account will be used to log in to Local UI and for SSH access to the node.

    | Field | Description |
    |---|---|
    | Username | Provide a username to use for the account. |
    | Password | Enter a password for the account. |
    | Confirm Password | Re-enter the password for confirmation. |

    Press ENTER to continue.
14. In the Palette TUI, the available configuration options are displayed and are described in the next three steps. Use the TAB key or the up and down arrow keys to switch between fields. When you make a change, press ENTER to apply the change. Use ESC to go back.
15. In Hostname, check the existing hostname and, optionally, change it to a new one.
16. In Host Network Adapters, select a network adapter you would like to configure. By default, the network adapters request an IP automatically from the Dynamic Host Configuration Protocol (DHCP) server. The CIDR block of an adapter's possible IP address is displayed in the Host Network Adapters screen without selecting an individual adapter.

    In the configuration page for each adapter, you can change the IP addressing scheme of the adapter and choose a static IP instead of DHCP. In Static IP mode, you will need to provide a static IP address and subnet mask, as well as the address of the default gateway. Specifying a static IP will remove the existing DHCP settings.

    You can also specify the Maximum Transmission Unit (MTU) for your network adapter. The MTU defines the largest size, in bytes, of a packet that can be sent over a network interface without needing to be fragmented.
17. In DNS Configuration, specify the IP address of the primary and alternate name servers. You can optionally specify a search domain.
18. After you are satisfied with the configurations, navigate to Quit and press ENTER to finish the configuration. Press ENTER again on the confirmation prompt.

    After a few seconds, the terminal displays the Device Info and prompts you to provision the device through Local UI.

    **Tip:** If you need to access the Palette TUI again, issue the `palette-tui` command in the terminal.
19. Ensure you complete the configuration on each node before proceeding to the next step.
20. Decide on the host that you plan to use as the leader of the group. Refer to Link Hosts for more information about leader hosts.
21. Access the Local UI of the leader host. Local UI is used to manage the Palette VerteX nodes and perform administrative tasks. It provides a web-based interface for managing the Palette VerteX management cluster.

    In your web browser, go to `https://<node-ip>:5080`. Replace `<node-ip>` with the IP address of your node. If you have changed the default port of the console, replace `5080` with the Local UI port. The address of the Local UI console is also displayed on the terminal screen of the node.

    If you are accessing Local UI for the first time, a security warning may be displayed in your web browser. This is because Local UI uses a self-signed certificate. You can safely ignore this warning and proceed to Local UI.
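    If the page does not load, you can check from another machine whether the endpoint is reachable. A minimal check, assuming `curl` is available and using the same placeholder address; `--insecure` is needed because of the self-signed certificate.

    ```shell
    # Expect an HTTP status code (for example, 200) if Local UI is up.
    curl --insecure --silent --output /dev/null --write-out "%{http_code}\n" https://<node-ip>:5080
    ```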
22. Log in to Local UI using the credentials you provided in the Palette TUI in step 13.
23. (Optional) If you need to configure an HTTP proxy server for the node, follow the steps in the Configure HTTP-Proxy in Local UI guide. When done, proceed to the next step.
24. From the left main menu, click Linked Edge Hosts.
25. Click Generate token. The host begins generating tokens that you will use to link this host with other hosts. The Base64 encoded token contains the IP address of the host, as well as an OTP that will expire in two minutes. Once a token expires, the leader generates another token automatically.
26. Click the Copy button to copy the token.
27. Log in to Local UI on the host that you want to link to the leader host.
28. From the left main menu, click Linked Edge Hosts.
29. Click Link this device to another.
30. In the pop-up box that appears, enter the token you copied from the leader host.
31. Click Confirm.
32. Repeat steps 27-31 for every host you want to link to the leader host.
33. Confirm that all linked hosts appear in the Linked Edge Hosts table. The following columns should show the required statuses.

    | Column | Status |
    |---|---|
    | Status | Ready |
    | Content | Synced |
    | Health | Healthy |

    Content synchronization will take at least five minutes to complete, depending on your network resources.
34. On the left main menu, click Cluster.
35. Click Create cluster.
36. For Basic Information, provide a name for the cluster and optional tags in `key:value` format.
37. In Cluster Profile, the Imported Applications preview section displays the applications that are included with the VerteX Management Appliance. These applications are pre-configured and used to deploy your Palette VerteX management cluster.

    Leave the default options in place and click Next.
38. In Profile Config, configure the cluster profile settings to your requirements. Review the following tables for the available options.

    Cluster Profile Options

    | Option | Description | Type | Default |
    |---|---|---|---|
    | Pod CIDR | The CIDR range for the pod network. This is used to allocate IP addresses to pods in the cluster. | CIDR notation | `100.64.0.0/18` |
    | Service CIDR | The CIDR range for the service network. This is used to allocate IP addresses to services in the cluster. | CIDR notation | `100.64.64.0/18` |
    | Ubuntu Pro Token | (Optional) The token for your Ubuntu Pro subscription. | String | No default |
    | Storage Pool Drive | (Optional) The storage pool device to use for the cluster. As mentioned in the Prerequisites, assign this to your second storage device. | String | `/dev/sdb` |
    | CSI Placement Count | The number of replicas for the Container Storage Interface (CSI) Persistent Volumes (PVs). The accepted values are `1` or `3`. We recommend using 3 to provide high availability for the CSI volumes. This value must match the MongoDB Replicas value. | Integer | `3` |

    Registry Options

    | Option | Description | Type | Default |
    |---|---|---|---|
    | In Cluster Registry | (Optional) `True` - Use the internal Zot registry. `False` - Use an external registry. | Boolean | `True` |
    | Registry Endpoint | The DNS/IP endpoint for the registry. Leave the default entry if using the internal Zot registry, which is a virtual IP address assigned by kube-vip. Adjust if using an external registry. | String | `{{.spectro.system.cluster.kubevip}}` |
    | Registry Port | The port for the registry. The default value can be changed for the internal Zot registry. Adjust if using an external registry. | Integer | `30003` |
    | OCI Registry Base Content Path | (Optional) The base path for the registry content for the internal or external registry. Palette VerteX packs will be stored in this directory. | String | `spectro-content` |
    | OCI Pack Registry Username | If using the internal Zot registry, leave the default username or adjust to your requirements. If using an external registry, provide the appropriate username. | String | `admin` |
    | OCI Pack Registry Password | If using the internal Zot registry, enter a password to your requirements. If using an external registry, provide the appropriate password. | String | No default - must be provided. |
    | OCI Registry Storage Size (GiB) | (Optional) The size of the storage for the OCI registry. This is used to store the images and packs in the registry. The default value is set to 100 GiB, but this should be increased to at least 250 GiB for production environments. | Integer | `100` |
    | OCI Pack Registry Ca Cert | (Optional) Internal Zot registry - Not required. External registry - The CA certificate that was used to sign the external registry certificate. | Base64 encoded string | No default |
    | Image Replacement Rules | (Optional) Set rules for replacing image references when using an external registry. For example, `all: oci-registry-ip:oci-registry-port/spectro-content`. Leave empty if using the internal Zot registry. | String | No default |
    | Root Domain | (Optional) The root domain for the registry. The default is set for the internal Zot registry, which is a virtual IP address assigned by kube-vip. If using an external registry, adjust this to the appropriate domain. | String | `{{.spectro.system.cluster.kubevip}}` |
    | Mongo Replicas | The number of MongoDB replicas to create for the cluster. The accepted values are `1` or `3`. We recommend using 3 to provide high availability for the MongoDB database. This value must match the CSI Placement Count value. | Integer | `3` |
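    If you plan to use an external registry, it can help to confirm that the endpoint is reachable from a machine on the same network as the nodes before deploying. The following is a rough check, assuming the registry exposes the standard OCI distribution API; the placeholder values are illustrative and should match your registry settings.

    ```shell
    # Expect an HTTP 200 (or 401 if the credentials are rejected) from the registry API.
    curl --insecure --user "<registry-username>:<registry-password>" \
      https://<registry-endpoint>:<registry-port>/v2/
    ```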
39. Click Next when you are done.
40. In Cluster Config, configure the following options.

    Cluster Config Options

    | Option | Description | Type | Default |
    |---|---|---|---|
    | Network Time Protocol (NTP) | (Optional) The NTP servers to synchronize time within the cluster. | String | No default |
    | SSH Keys | (Optional) The public SSH keys to access the cluster nodes. Add additional keys by clicking Add Item. | String | No default |
    | Virtual IP Address (VIP) | The virtual IP address for the cluster. This is used for load balancing and high availability. | String | No default |

    Click Next when you are done.
41. In Node Config, configure the following options.

    **Important:** You must assign at least three control plane nodes for high availability. You can remove the worker node pool as it is not required for the Palette VerteX management cluster. When doing this, ensure that the Allow worker capability option is enabled for the control plane node pool.

    Node Pool Options

    Control Plane Pool Options

    | Option | Description | Type | Default |
    |---|---|---|---|
    | Node pool name | The name of the control plane node pool. This will be used to identify the node pool in Palette VerteX. | String | `control-plane-pool` |
    | Allow worker capability | (Optional) Whether to allow workloads to be scheduled on this control plane node pool. Ensure that this is enabled if no worker pool is assigned to the cluster. | Boolean | `True` |
    | Additional Kubernetes Node Labels | (Optional) Tags for the node pool in `key:value` format. These tags can be used to filter and search for node pools in Palette VerteX. | String | No default |
    | Taints | Taints for the node pool in `key=value:effect` format. Taints are used to prevent pods from being scheduled on the nodes in this pool unless they tolerate the taint. | Key = string, Value = string, Effect = string (enum) | No default |

    Worker Pool Options

    | Option | Description | Type | Default |
    |---|---|---|---|
    | Node pool name | The name of the worker node pool. This will be used to identify the node pool in Palette VerteX. | String | `worker-pool` |
    | Additional Kubernetes Node Labels | (Optional) Tags for the node pool in `key:value` format. These tags can be used to filter and search for node pools in Palette VerteX. | String | No default |
    | Taints | Taints for the node pool in `key=value:effect` format. Taints are used to prevent pods from being scheduled on the nodes in this pool unless they tolerate the taint. | Key = string, Value = string, Effect = string (enum) | No default |

    Pool Configuration

    The following options are available for both the control plane and worker node pools. You can configure these options to your requirements. You can also remove worker pools if not needed.

    | Option | Description | Type | Default |
    |---|---|---|---|
    | Architecture | The CPU architecture of the nodes. This is used to ensure compatibility with the applications operating on the nodes. | String (enum) | `amd64` |
    | Add Edge Hosts | Click Add Item and select the other hosts that you installed using the VerteX Management Appliance ISO. These hosts will be added to the node pool. Each pool must contain at least one node. | N/A | Control Plane Pool = Current host selected, Worker Pool = No host selected |
    | NIC Name | The name of the network interface card (NIC) to use for the nodes. Leave on Auto to let the system choose the appropriate NIC, or select one manually from the drop-down menu. | N/A | `Auto` |
    | Host Name | (Optional) The hostname for the nodes. This is used to identify the nodes in the cluster. A generated hostname is provided automatically, which you can adjust to your requirements. | String | `edge-*` |
42. Click Next when you are done.
43. In Review, check that your configuration is correct. If you need to make changes, click on any of the sections in the left sidebar to go back and edit the configuration.

    When you are satisfied with your configuration, click Deploy Cluster. This will start the cluster creation process.

    The cluster creation process will take 20 to 30 minutes to complete. You can monitor progress from the Overview tab on the Cluster page in the left main menu. The cluster is fully provisioned when the status changes to Running and the health status is Healthy.
44. Once the cluster is provisioned, access the Palette VerteX system console using the virtual IP address (VIP) you configured earlier. Open your web browser and go to `https://<vip-address>/system`. Replace `<vip-address>` with the VIP you configured for the cluster.

    The first time you visit the system console, a warning message about an untrusted TLS certificate may appear. This is expected, as you have not yet uploaded your TLS certificate. You can ignore this warning message and proceed.
45. You will be prompted to log in to the Palette VerteX system console. Use `admin` as the username and `admin` as the password. You will be prompted to change the password after logging in.
46. In the Account Info window, provide the following information.

    | Field | Description |
    |---|---|
    | Email address | This is used for notifications and password recovery as well as logging in to the Palette VerteX system console. This will not be active until you configure SMTP settings in the Palette VerteX system console and verify your email address. |
    | Current password | Use `admin` as the current password. |
    | New password | Enter a new password for the account. |
    | Confirm new password | Re-enter the new password for confirmation. |

    Refer to Password Requirements and Security to learn about password requirements.

    After logging in, a summary page is displayed. You now have access to the Palette VerteX system console, where you can manage your Palette VerteX environment.

    If you are accessing the Palette VerteX system console for the first time, a security warning may be displayed in your web browser. This is because Palette VerteX is configured with a self-signed certificate. You can replace the self-signed certificate with your own SSL certificates as guided later in Next Steps.
If your installation is not successful, verify that the piraeus-operator pack was correctly installed. For more information, refer to the Self-Hosted Installation - Troubleshooting guide.
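One way to check whether the piraeus-operator components are present is to list their pods once you have the cluster's admin kubeconfig (downloaded in the Validate steps below). This is a rough check that does not assume a particular namespace.

```shell
# Look for piraeus-operator pods; they should eventually report Running.
kubectl get pods --all-namespaces | grep piraeus
```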
Validate
1. Log in to the Local UI of the leader host using the URL `https://<node-ip>:5080`. Replace `<node-ip>` with the IP address of the leader host. If you have changed the default port of the console, replace `5080` with the Local UI port.
2. In Local UI, click on Cluster in the left main menu.
3. Check that the cluster status is Running and the health status is Healthy. In the Applications section on this page, the listed applications should be in the Running state.
4. On the Cluster page, under Environment, click on the Admin Kubeconfig File to download it to your local machine.
5. On your local machine, open a terminal session and export the `KUBECONFIG` environment variable to point to the downloaded kubeconfig file.

   ```shell
   export KUBECONFIG=/path/to/your/downloaded/kubeconfig
   ```
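   As a quick check that the kubeconfig works before inspecting individual pods, you can list the cluster nodes; all of them should report a Ready status.

   ```shell
   kubectl get nodes
   ```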
6. Issue the following command to verify the Palette installation.

   ```shell
   kubectl get pods --all-namespaces --output custom-columns="NAMESPACE:metadata.namespace,NAME:metadata.name,STATUS:status.phase" \
   | grep --extended-regexp '^(cp-system|hubble-system|ingress-nginx|jet-system|ui-system)\s'
   ```

   Your output should look similar to the following.

```shell
cp-system spectro-cp-ui-5cb6d454f8-bndxb Running
hubble-system auth-5586c867ff-mk6wn Running
hubble-system auth-5586c867ff-xm9mx Running
hubble-system cloud-7bfd6c7f55-bmpkm Running
hubble-system cloud-7bfd6c7f55-tmmjj Running
hubble-system configserver-697cf95f9f-z2tr6 Running
hubble-system event-8566675f7d-l7n8n Running
hubble-system event-8566675f7d-v8cmz Running
hubble-system event-8566675f7d-vtp8m Running
hubble-system foreq-59f8c6c584-47npj Running
hubble-system hashboard-5fcc8f448c-df5rj Running
hubble-system hashboard-5fcc8f448c-xs6mr Running
hubble-system hutil-5b49d6f5bc-5gcqc Running
hubble-system hutil-5b49d6f5bc-plg9j Running
hubble-system memstore-75b7d8bb5b-qtn7w Running
hubble-system mgmt-5874d55cf6-2gh52 Running
hubble-system mongo-0 Running
hubble-system mongo-1 Running
hubble-system mongo-2 Running
hubble-system msgbroker-0 Running
hubble-system msgbroker-1 Running
hubble-system oci-proxy-6bc464cf58-wwksw Running
hubble-system reloader-reloader-59c87c446c-9cvk5 Running
hubble-system specman-0 Running
hubble-system spectro-tunnel-647cf485b-xn87n Running
hubble-system spectrocluster-85bf89dcdb-llsjj Running
hubble-system spectrocluster-85bf89dcdb-m2c8w Running
hubble-system spectrocluster-85bf89dcdb-tp9lk Running
hubble-system spectrocluster-jobs-557bd5b798-fkzbm Running
hubble-system spectrossh-74db5544bf-5t24s Running
hubble-system system-6496bc487-cfchd Running
hubble-system system-6496bc487-mlxjk Running
hubble-system timeseries-7c4d6647b5-ckrnt Running
hubble-system timeseries-7c4d6647b5-jb9tp Running
hubble-system timeseries-7c4d6647b5-nl86q Running
hubble-system user-57f7759745-8fjx8 Running
hubble-system user-57f7759745-hxz4n Running
ingress-nginx ingress-nginx-controller-m5z54 Running
ingress-nginx ingress-nginx-controller-qsf6m Running
ingress-nginx ingress-nginx-controller-w64pz Running
jet-system jet-856db6655-k87k8 Running
ui-system spectro-ui-bcff7f675-lds2l Running
```
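   If the output is long, a quicker way to spot problems is to list only pods that are not in the Running phase; ideally this returns nothing (completed helper jobs may also appear and can be ignored).

   ```shell
   kubectl get pods --all-namespaces --field-selector=status.phase!=Running
   ```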
7. Log in to the Palette VerteX system console using the virtual IP address (VIP) you configured earlier. Open your web browser and go to `https://<vip-address>/system`. Replace `<vip-address>` with the VIP you configured for the cluster.
8. On the login page, use `admin` as the username and the new password you set during the initial login.
9. On the Summary page, check that the On-prem system console is healthy message is displayed.