Wednesday, April 2, 2025

Migrating Nutanix Files Cluster via Async PD to another cluster

If you want to use Async PD for File Server Migration purposes:

Create the Storage Container on the Target Side.

Create a Remote Site from the Source to the Target cluster and vice versa, and enable the Storage Container mappings. Edit the schedules of the default created PD to replicate to the Remote Site.
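
To double-check this part from the CLI, you can list the Remote Sites and Protection Domains on the source cluster (plain ncli; the exact output fields can vary by AOS version):

CVM: ncli remote-site list
CVM: ncli protection-domain list

The Remote Site should show up on both clusters, and the default Files PD (usually named NTNX-&lt;fileserver-name&gt;) should be in the list.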

After Replication has finished, on the Source Site choose the PD and select Migrate.
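
As a quick sanity check after the Migrate, you can confirm from the source cluster that the PD is no longer active there (field names may differ slightly between AOS versions):

CVM: ncli protection-domain list

The Files PD should now show as active on the target cluster and inactive on the source.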

After the Migration has finished, you can go to the Target Site and choose File Servers. There you select the File Server (it needs Activation) and click Activate.

During the Activation process, you have to enter all the IPs again (if you stay on the same networks) or choose new networks and IPs (don't forget to refresh your DNS entries).
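
Once DNS is updated, a simple lookup against the file server name is enough to verify it (fileserver01.example.com is just a placeholder for your Files FQDN); it should return the client IPs you configured during activation:

nslookup fileserver01.example.com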

To clean up the Source Site (Option 1):

Remove the PD containing the Source File Server

Then use PuTTY to connect to the cluster as the nutanix user and run:

CVM: ncli file-server list

Look for your file server; the Status should be "Not Active". Note the UUID (see the grep tip below).

Then run:

CVM: ncli file-server rm uuid=<uuid you noted> force=true

This should clean up your source site.
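
By the way, if the output of ncli file-server list is long, you can grep the plain-text output for the relevant fields (the field labels may be spelled slightly differently depending on the AOS version, hence the case-insensitive match):

CVM: ncli file-server list | grep -iE "name|uuid|status"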

Then look for your Storage Container in the GUI; it should be empty. Remove it (if it is not used by other entities). Maybe you have to use the CLI to remove it:

CVM: ncli ctr ls

CVM: ncli ctr rm name=<NAME> [ignore-small-files=true]

Or you can try to clean everything up in one go (Option 2):

CVM: ncli file-server list

CVM: ncli file-server rm uuid=<uuid> force=true delete-pd-and-snapshots=true delete-container=true
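
Whichever option you use, you can verify afterwards that both the file server and its container are really gone (<container-name> is a placeholder):

CVM: ncli file-server list
CVM: ncli ctr ls | grep -i <container-name>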

Accessing Nutanix Storage Containers for upload/download

If you need to download/upload files to a storage container:

 

  1. Open WinSCP.

  2. Connect to the CVM IP using SFTP protocol and port 2222.

  3. Log in using the admin / Prism Element credentials (not nutanix!).

  4. Enable the option to show hidden files by going to Options > Preferences > Panels and then selecting the “Show hidden files”  option under the common settings.

 

From here you can upload files to the container or download files from it.
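
If you need to do this more often, WinSCP can also be driven from the command line. A minimal sketch (the IP, container name and file path are only examples; -hostkey=* blindly accepts the host key, so only use it if that is acceptable for you, and you will be prompted for the password):

winscp.com /command ^
  "open sftp://admin@10.10.10.30:2222/ -hostkey=*" ^
  "put C:\Temp\image.iso /MyContainer/" ^
  "exit"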

 

Be careful with deleting files here; you should not do that via WinSCP. Use the Nutanix GUI or CLI, and only if you know what you are doing :-)

Sunday, March 16, 2025

Nutanix with ESXi - Unable to boot into Phoenix using One-Click Update with Secure Boot enabled

With some customers who run Nutanix with vSphere, we had the problem that the host could not boot into Phoenix during the One-Click Update.



The problem is that for vSphere 8 the boot mode was changed to UEFI and all TPM-related settings (Secure Boot and Intel TXT) were enabled.

The Secure Boot certificates for Nutanix must then be installed manually.

The Nutanix certificates for Secure Boot are located at:

https://download.nutanix.com/kbattachments/Nutanix_Secure_Boot_v3.cer

These certificates have an expiration date of Mar 22 19:52:03 2031 GMT.

If the node doesn't have the Nutanix certificate installed in its BIOS, Phoenix will not boot while Secure Boot is enabled.
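
Before you start the update, you can check on the ESXi host itself whether Secure Boot is actually active. Recent ESXi versions ship a helper script for this (treat the exact path and option as an assumption and verify on your build):

[root@esxi:~] /usr/lib/vmware/secureboot/bin/secureBoot.py -s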


Friday, January 17, 2025

Problems with HPE Nodes after adding a 2nd CPU

This problem occurred on an HPE DX380 Gen10, but I think it will happen with other vendors too.

After adding the 2nd CPU, the CVM will not boot because the PCI devices get re-enumerated:

[root@AHV1 ~]# virsh list --all

 Id   Name                    State

----------------------------------------

 -    NTNX-CVM   shut off


[root@AHV1 ~]# virsh start NTNX-CVM

error: Failed to start domain 'NTNX-CVM'

error: Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu NTNX-CVM prepare begin -) unexpected exit status 1: [Errno 2] No such file or directory: '/sys/bus/pci/devices/0000:b1:00.0/driver_override'

To resolve this, look on a not-yet-expanded node from the same cluster to see what "b1:00.0" is:

[root@AHV2 ~]# lspci

...
b1:00.0 Serial Attached SCSI controller: Adaptec Smart Storage PQI 12G SAS/PCIe 3 (rev 01)
...
Then look on your expanded node to see which address "Adaptec Smart Storage PQI..." has:

[root@AHV1 ~]# lspci

....
5c:00.0 Serial Attached SCSI controller: Adaptec Smart Storage PQI 12G SAS/PCIe 3 (rev 01)
....

So you got 5c:00.0.
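
Instead of scrolling through the whole lspci output, you can of course filter directly for the controller (the pattern matches the Adaptec controller from this example; adjust it for your hardware):

[root@AHV1 ~]# lspci | grep -i "Smart Storage"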

Then edit the CVM XML on your expanded node:

[root@AHV1 ~]# virsh dumpxml NTNX-CVM   (only to view the XML)

[root@AHV1 ~]# virsh edit NTNX-CVM      (to edit the XML)

ORIGINAL:
.....

<video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0xb1' slot='0x00' function='0x0'/>
      </source>
      <rom bar='off'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
...

EDITED:

....
<video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x5c' slot='0x00' function='0x0'/>
      </source>
      <rom bar='off'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
....
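
Before starting the CVM you can quickly verify that the new address really ended up in the active definition (0x5c is the bus from this example):

[root@AHV1 ~]# virsh dumpxml NTNX-CVM | grep -B 2 -A 2 "0x5c"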

Then you can start your CVM with:

[root@AHV1 ~]# virsh start NTNX-CVM

and it will start :-)
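
Once the CVM is up, it is worth checking that it rejoins the cluster cleanly, e.g. with the usual service checks from the CVM (the output obviously depends on your cluster):

CVM: genesis status
CVM: cluster status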