Thursday, October 16, 2025

Broadcom / VMware: VCSA and Lifecycle Manager Updates cannot be downloaded (due to missing download token)

 

(Important Update: Changes to How You Download VMware Software Binaries - VMware Cloud Foundation (VCF) Blog)

It is necessary to generate a download token in the Broadcom Support Portal to get updates for VCSA or ESXi.

 

Generate Download Token:

  1. Log in to the Broadcom Portal
  2. Go to Quick Links "Generate Download Token"
  3. Select your Site ID and click "Generate Token"

 

 

 

vCenter Update: VCSA Settings

 

  1. Log in to the VCSA VAMI (https://<vcenter_fqdn_or_ip>:5480)
  2. Go to Update
  3. Open Settings
  4. Copy the repository-specific part of the default Download URL (the GUID plus the <current_version>.latest suffix):
    1. https://vapp-updates.vmware.com/vai-catalog/valm/vmw/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXX/<current_version>.latest
  5. Switch to "Specified" and build a new Download URL from your token and the part you copied (see the check below this list):
    1. https://dl.broadcom.com/<downloadToken>/PROD/COMP/VCENTER/vmw/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX/<current_version>.latest
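
If you want to sanity-check the new URL before saving it, you can query it from any machine with internet access. A quick check with curl (assuming the usual VAMI repository layout with a manifest/manifest-latest.xml below the repository root; that path is my assumption, adjust it if you get a 404):

curl -I "https://dl.broadcom.com/<downloadToken>/PROD/COMP/VCENTER/vmw/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXX/<current_version>.latest/manifest/manifest-latest.xml"

HTTP 200 means the token works; 401/403 usually means the token part is wrong or expired.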

 

Now, the Update should work.

 

ℹ️ Extended Info from Broadcom:

  • Further patches automatically update this URL. For example, if you patch 8.0.3.00400 to 8.0.3.00500, the default URL will change to end in 8.0.3.00500.

 

So, no further changes should be necessary.

 

 

See also:

Error: "The vCenter Server is not able to reach the specified URL" vCenter Server patching via VAMI fails to download the updates from online repositories

 

 

 


 

ESXi Update: Lifecycle Manager Settings

 

  1. Log in to the vSphere Client and navigate to Lifecycle Manager
  2. Go to "Settings" - "Administration" - "Patch Setup"
  3. Deactivate all VMware download sources
  4. Add the following new download sources:
    1. https://dl.broadcom.com/<DownloadToken>/PROD/COMP/ESX_HOST/main/vmw-depot-index.xml
    2. https://dl.broadcom.com/<DownloadToken>/PROD/COMP/ESX_HOST/addon-main/vmw-depot-index.xml
    3. https://dl.broadcom.com/<DownloadToken>/PROD/COMP/ESX_HOST/iovp-main/vmw-depot-index.xml
    4. https://dl.broadcom.com/<DownloadToken>/PROD/COMP/ESX_HOST/vmtools-main/vmw-depot-index.xml
  5. Restart the Update Manager service on the VCSA (via SSH: service-control --restart vmware-updatemgr), alternatively via the VCSA GUI
  6. Download updates via "Lifecycle Manager" - "ACTIONS" - "Sync Updates" (you can verify the URLs first, see below)
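
Since these entries point directly at vmw-depot-index.xml files, you can test them with curl before syncing (from any machine with outbound HTTPS access):

curl -I "https://dl.broadcom.com/<DownloadToken>/PROD/COMP/ESX_HOST/main/vmw-depot-index.xml"

HTTP 200 means the token is accepted; HTTP 403 is exactly the error from the KB article linked below.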

 

See also:

"A general system error occurred: Failed to download VIB(s): Error: HTTP Error Code: 403", vLCM fails to download the ESXi patches and images from online repositories

 


 

Problems using LCM for SPP Updates after changing Hosts to UEFI/Secure Boot on HPE hardware prior to Gen11

After changing the boot settings to UEFI/Secure Boot on HPE hardware prior to Gen11, Phoenix is unable to boot due to missing Secure Boot keys, so firmware upgrades via LCM are not possible.

To fix this, download the key:

https://download.nutanix.com/kbattachments/Nutanix_Secure_Boot_v3.cer

Copy it to a USB stick and insert the stick into a physical server USB port (not an iLO virtual USB device).
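
For example, on a Linux machine (formatting the stick with FAT32 is my assumption here, simply because RBSU reliably reads FAT file systems; the mount path is a placeholder):

curl -O https://download.nutanix.com/kbattachments/Nutanix_Secure_Boot_v3.cer
cp Nutanix_Secure_Boot_v3.cer /path/to/usb-stick/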

Then enter RBSU on the affected server:

  1. Enter maintenance mode and reboot the server
  2. Press F9
  3. Enter System Configuration
  4. BIOS/Platform Configuration (RBSU)
  5. Server Security
  6. Secure Boot Settings
  7. Advanced Secure Boot Options
  8. KEK - Key Exchange Key
  9. Enroll KEK Entry
  10. Enroll KEK using File
  11. File Systems on attached Media
  12. Choose your USB device
  13. Locate the file
  14. Commit Changes and Exit


After enrolling, you can view the KEK entries; you should see the Nutanix entry now.

Leave RBSU (F12: Save and Exit) and continue the normal boot.

You are now able to update firmware through LCM.


Friday, October 10, 2025

Change Cluster and Storage Containers from RF2 to RF3

You can change your FT level from 1N/1D to 2N/2D (if you have 5 or more hosts) in the Prism GUI.
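
You can verify the cluster-wide setting afterwards from any CVM; ncli cluster get-redundancy-state should show the current and desired redundancy factor:

nutanix@NTNX-x:~$ ncli cluster get-redundancy-state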


Then you have to change the RF at the storage container level as well.

Show all containers:

nutanix@NTNX-x:~$ ncli
<ncli> ctr list

Change RF to 3:

nutanix@NTNX-x:~$ ncli
<ncli> ctr edit name=CONTAINERNAME rf=3

For the NutanixManagementShare container you have to start ncli with hidden commands enabled (-h true):

nutanix@NTNX-x:~$ ncli -h true
<ncli> ctr edit name=NutanixManagementShare rf=3 force=true
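
Afterwards, ctr list should show "Replication Factor : 3" for the containers you changed (the exact field label can differ between AOS versions):

<ncli> ctr list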





Wednesday, April 2, 2025

Migrating Nutanix Files Cluster via Async PD to another cluster

If you want to use Async PD for File Server Migration purposes:

Create the Storage Container on the Target Side.

Create a Remote Site from the Source to the Target Cluster and vice versa, and enable the Storage Container Mappings. Edit the Schedules of the default created PD to replicate to the Remote Site.
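
You can monitor the replication from a CVM; the protection-domain list-replication-status command should show running and completed replications (output details vary by AOS version):

nutanix@NTNX-x:~$ ncli
<ncli> pd list-replication-status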

After the replication has finished, choose the PD on the Source Site and select Migrate.

After the migration has finished, go to the Target Site and choose File Servers; select the File Server (it needs activation) and click Activate.

During the activation process, you have to enter all the IPs again (if you stay on the same networks) or choose new networks and IPs (don't forget to refresh your DNS entries).

To clean up the Source Site (Option 1):

Remove the PD containing the source File Server (remove the schedule first).

This should remove the inactive File Server as well.

Then use PuTTY to connect to the cluster as nutanix and run:

CVM: ncli file-server list

Look for your File Server: it should be gone; if it is still listed, its status should be Not Active. In that case note the UUID, then run:

CVM: ncli file-server rm uuid=<uuid you noted> force=true

This should clean up your source site.

Then look for your Storage Container in the GUI; it should be empty. Remove it (if it is not used by other entities). You may have to use the CLI to remove it:

CVM: ncli ctr ls

CVM: ncli ctr rm name=<NAME> [ignore-small-files=true]

Or you can try to clean up everything in one step (Option 2):

CVM: ncli file-server list

CVM: ncli file-server rm uuid=<uuid> force=true delete-pd-and-snapshots=true delete-container=true

Accessing Nutanix Storage Containers for upload/download

 If you need to download/upload files to a storage container:

 

  1. Open WinSCP.

  2. Connect to the CVM IP using the SFTP protocol and port 2222.

  3. Log in using the admin / Prism Element credentials (not nutanix!).

  4. Enable the option to show hidden files by going to Options > Preferences > Panels and selecting the "Show hidden files" option under the common settings.

 

From here you can either upload or download files to the container.
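
If you prefer a command line over WinSCP, any standard OpenSSH sftp client works the same way, using the same port and admin credentials (the file names below are just placeholders):

sftp -P 2222 admin@<CVM_IP>
sftp> put <local-file>
sftp> get <remote-file>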

 

Be careful with deleting files here: you should not do that via WinSCP. Use the Nutanix GUI or CLI, and only if you know what you are doing :-)

Sunday, March 16, 2025

Nutanix with ESXi - Unable to boot into Phoenix using One-Click Update with Secure Boot enabled

We had the problem with some customers running Nutanix with vSphere that the host could not boot into Phoenix during the One-Click Update.



The problem is that for vSphere 8 the boot mode was changed to UEFI and all settings for TPM (Secure Boot and Intel TXT) were set.

The Secure Boot certificates for Nutanix must then be installed manually.

The Nutanix certificates for Secure Boot are located at:

https://download.nutanix.com/kbattachments/Nutanix_Secure_Boot_v3.cer

These certificates have an expiration date of Mar 22, 19:52:03 2031 GMT.

If the node doesn't have the Nutanix certificate installed in its BIOS, Phoenix will not boot while Secure Boot is enabled.
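
You can check whether Secure Boot is currently enabled on an ESXi host from its shell; the bundled script should print Enabled or Disabled:

[root@esxi:~] /usr/lib/vmware/secureboot/bin/secureBoot.py -s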


Friday, January 17, 2025

Problems with HPE Nodes after adding 2nd CPU

This problem occurred on HPE DX380 Gen10, but I think it can occur with other vendors too.

After adding the 2nd CPU, the CVM will not boot, as the PCI devices get enumerated anew:

[root@AHV1 ~]# virsh list --all

 Id   Name                    State

----------------------------------------

 -    NTNX-CVM   shut off


[root@AHV1 ~]# virsh start NTNX-CVM

error: Failed to start domain 'NTNX-CVM'

error: Hook script execution failed: internal error: Child process (LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin /etc/libvirt/hooks/qemu NTNX-CVM prepare begin -) unexpected exit status 1: [Errno 2] No such file or directory: '/sys/bus/pci/devices/0000:b1:00.0/driver_override'

To resolve this, look on a not-yet-expanded node from the same cluster to see what "b1:00.0" is:

[root@AHV2 ~]# lspci

...
b1:00.0 Serial Attached SCSI controller: Adaptec Smart Storage PQI 12G SAS/PCIe 3 (rev 01)
...
Then look up which address "Adaptec Smart Storage PQI..." has on your expanded node:

[root@AHV1 ~]# lspci

....
5c:00.0 Serial Attached SCSI controller: Adaptec Smart Storage PQI 12G SAS/PCIe 3 (rev 01)
....

So you got 5c:00.0.
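
Instead of scanning the whole lspci output, you can grep for the controller directly on both nodes:

[root@AHV1 ~]# lspci | grep -i "smart storage"
5c:00.0 Serial Attached SCSI controller: Adaptec Smart Storage PQI 12G SAS/PCIe 3 (rev 01)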

Then edit the CVM XML on your expanded node:

[root@AHV1 ~]# virsh dumpxml NTNX-CVM   (only to view the XML)

[root@AHV1 ~]# virsh edit NTNX-CVM      (to edit the XML)

ORIGINAL:
.....

<video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0xb1' slot='0x00' function='0x0'/>
      </source>
      <rom bar='off'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
...

EDITED:

....
<video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x5c' slot='0x00' function='0x0'/>
      </source>
      <rom bar='off'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
....
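
To double-check the edit before starting the CVM, dump the XML again and look at the hostdev source address, which should now show bus='0x5c':

[root@AHV1 ~]# virsh dumpxml NTNX-CVM | grep -A 2 hostdev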

Then you can start your CVM with:

[root@AHV1 ~]# virsh start NTNX-CVM

and it will start :-)