Wednesday, 22 October 2014
HP Agentless Management Service (AMS) causes "Can't Fork" and MKS errors
If you find that you suddenly can't open a remote console to ANY VMs on a host and are given a "Cannot connect to MKS" error, and trying to do anything on the ESXi console generates a "Can't fork" error, then you need to upgrade the HP Management Agents on your HP ESXi servers. This issue affects ESXi 5.0 and upwards and is outlined in KB2085618.
Update: I can confirm that upgrading to version 10.0.1 as per the article does remove the error. Please note this only impacts servers that run the HP customised ISO / HP-installed management agents with the affected versions listed in the KB.
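To check which AMS build a host is currently running before upgrading, a quick PowerCLI query in the same style as the VIB checks later on this blog should work; hp-ams is the name the agent VIB is usually packaged under, so confirm it matches your image:
# Query the host's VIB list for the HP AMS package and show its version
(Get-EsxCli -VMHost hostname).software.vib.list() | Where-Object {$_.Name -eq "hp-ams"}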
Add ZFS storage space usage to SSH MOTD in Ubuntu 14.04
If you want to change the SSH Message of the Day so that instead of showing the default / (root) drive space it shows the usage of your ZFS pool, you have to do the following. Please note that this was done on Ubuntu 14.04.
First, edit the disk.py file under /usr/lib/python2.7/dist-packages/landscape/lib with your favourite text editor, and at the end of the STABLE_FILESYSTEMS entry add ', "zfs"' as shown in the screenshot below, then save and quit the editor.
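For reference, the edited entry ends up looking something like this; a minimal sketch, as the existing filesystem names (and whether the entry is a plain list or a frozenset) may differ on your landscape-common version:
# /usr/lib/python2.7/dist-packages/landscape/lib/disk.py
# STABLE_FILESYSTEMS with "zfs" appended to the end of the existing names
STABLE_FILESYSTEMS = frozenset(
    ["ext", "ext2", "ext3", "ext4", "reiserfs", "ntfs", "msdos", "dos",
     "vfat", "xfs", "hpfs", "jfs", "ufs", "hfs", "hfsplus", "zfs"])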
Next, edit the disk.py file under /usr/lib/python2.7/dist-packages/landscape/sysinfo with your favourite text editor and find the line main_info = get_filesystem_for_path("/", self._mounts_file, and change the "/" path to the ZFS filesystem you want; in my case it's "/zfs/storage". Also edit root_main_info = get_filesystem_for_path("/"..... and replace the "/" with the same entry as above. These changes can be seen in the screenshot below. Save and then quit the editor. *Note: a recent update removes the root_main_info line, so that edit is no longer required.*
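As a sketch, the edited lines end up roughly like this, using my pool path /zfs/storage (substitute your own); the trailing argument is whatever your version of the file already passes, self._statvfs here:
# /usr/lib/python2.7/dist-packages/landscape/sysinfo/disk.py
# Both calls pointed at the ZFS pool instead of "/"
main_info = get_filesystem_for_path("/zfs/storage", self._mounts_file, self._statvfs)
root_main_info = get_filesystem_for_path("/zfs/storage", self._mounts_file, self._statvfs)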
Done! The next time you SSH in to your server, you'll see your ZFS filesystem space!
Monday, 29 September 2014
HP NMI Error with ESXi - Gen8
In our environment, many of our ESXi hosts (HP DL380 Gen8s) have generated PSODs from NMI events. If you are experiencing this issue, please UPGRADE your iLO 4 firmware to version 1.51. The issue is described in HP advisory c04332584. You can download the latest firmware from HP.
Tuesday, 2 September 2014
Deploying OVA fails with "Failed to Deploy OVF/OVA package: The operation is not supported on the object"
We recently had an issue deploying an OVA/OVF file that was created from one of the virtual machines in our environment. Deploying this OVA failed instantly (after entering the cluster location / datastore) with the following error: "The operation is not supported on the object". After much investigation, examining the /var/log/hostd.log file on the ESXi host we were trying to deploy to turned up the line: "Video Ram size edit is not supported when auto-detect is True." Editing the virtual machine's hardware settings, changing the Video Card from Auto-detect to Specify custom settings (the values can be anything), and then exporting the virtual machine as an OVA again allowed the deployment to work! Very interesting that this setting caused a problem! We have previously also encountered this error when converting a virtual machine to an OVA while it had an ISO mounted.
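To spot other export candidates that would hit the same problem, a rough PowerCLI sketch along these lines can flag VMs whose video card is still set to auto-detect (UseAutoDetect is the vSphere API property on the video card device; adjust as needed):
# Flag VMs whose video card is set to auto-detect video RAM
Get-VM | Where-Object {
    $_.ExtensionData.Config.Hardware.Device |
        Where-Object { $_ -is [VMware.Vim.VirtualMachineVideoCard] -and $_.UseAutoDetect }
} | Select-Object Name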
Thursday, 19 September 2013
PowerCLI script to check VIBs on a host
I work in an environment where ESXi hosts require a lot of additional software to be installed; examples include a specific Cisco VEM module, the Trend DSA filter driver, etc. A simple way to check whether a piece of software is installed on a host, and at what version, is as follows. This example checks for the Trend DSA filter driver.
(Get-EsxCli -VMHost hostname).software.vib.list() | Where-Object {$_.Name -eq "dvfilter-dsa"}
If the module is installed, this returns its details, including the version.
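To see everything installed on a host along with its versions, drop the filter:
# List every installed VIB with its name, version and vendor
(Get-EsxCli -VMHost hostname).software.vib.list() | Select-Object Name, Version, Vendor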
VMware vSphere Storage Appliance (VSA) Installation and Removal Tricks
I've recently had the pleasure of playing around with VMware's vSphere Storage Appliance. I'd like to quickly point out some of the troubles I had with the installation so that, if you come across the same problems, they can be quickly sorted. The installer I was using at the time was "VMware-vsa-all-5.1.3.0-1090545.iso".
If you receive Error 2896 during the VSA Manager installation, it is because of a script problem that doesn't handle servers with multiple drives, specifically if you are trying to install the product on, for example, D:\ rather than C:\.
Browse to %temp%\sva\VSAManager\installTool and edit the runtool.bat file. On the 9th line down, change cd %BUILDDIR% to cd /d %BUILDDIR%; the installation will then proceed.
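The change is a one-flag fix; /d lets cd switch drives as well as directories, which is what breaks on a D:\ install:
rem Before
cd %BUILDDIR%
rem After - /d also changes the current drive when %BUILDDIR% is on another drive
cd /d %BUILDDIR%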
If you receive the error "Cannot create VSA cluster: Adding hosts to HA Cluster failed." during setup through the VSA wizard (from within the VI Client), look at the VSA configuration log and you will see that EVC mode failed to get configured. I was running a pair of AMD hosts, and the wizard was trying to set the EVC mode to amd-rev-e; if I manually created a cluster, there was no issue. The following hack allows the install to complete, and you can then turn on EVC manually on the cluster afterwards. Browse to the installation directory of the vCenter on which you installed VSA Manager, e.g. "C:\program files\vmware\infrastructure\tomcat\webapps\VSAManager\WEB-INF\classes\", and with an administrative Notepad edit the "dev.properties" file, changing the evc.config=true line to evc.config=false. NOTE that this change REQUIRES a restart of the "VMware VirtualCenter Server" and "VMware VirtualCenter Management Webservices" services. You should then be able to run through the VSA setup wizard without a hitch.
Remember, your hosts must have 4 NICs in order for the setup to complete; if you have only a single quad-port card, you will have to do a "Brownfield" network setup, otherwise you will receive the following error: Cannot create VSA cluster: Failed to complete network configuration for specified host: <hostname> , hence revered all other hosts, Could not find 2 NICs on the system.
The setup can be done by following this documentation: VSA Brownfield Setup
One final thing to note: if you are trying to run the cleanup.bat file (when removing the VSA), edit the file so that any line that reads cd %BUILDDIR% becomes cd /d %BUILDDIR%. The system I was on also had no JAVA_HOME variable set, so I set it to "D:\vmware\Infrastructure\jre", and that allowed a nice clean-up.
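In other words, my edited cleanup.bat ended up with something like the following; the JAVA_HOME path is from my system, so point it at wherever your vCenter JRE lives:
rem Set JAVA_HOME to vCenter's bundled JRE (path from my install)
set JAVA_HOME=D:\vmware\Infrastructure\jre
rem As with runtool.bat, make every cd drive-aware
cd /d %BUILDDIR%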
Sunday, 5 May 2013
PowerCLI to enable LockDown Mode on all hosts
Here is a quick and easy way to make sure that Lockdown Mode is enabled on all your hosts using PowerCLI (if your environment dictates it):
(Get-VMHost * | Get-View) | ForEach-Object -Process {$_.EnterLockdownMode()}
That will go through every host and enable Lockdown Mode. If you see the following error while running the script:
Exception calling "EnterLockdownMode" with "0" argument(s): "The administrator permission has already been disabled on the host (except for the vim user)"
It just means that the host already has that setting enabled and it can be ignored.
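To verify the result across all hosts afterwards, a rough sketch like this should do; AdminDisabled is the vSphere 5.x-era API flag that reflects lockdown, so double-check the property name on your version:
# Report each host's lockdown state via the AdminDisabled flag (vSphere 5.x API)
Get-VMHost | Select-Object Name, @{N="Lockdown";E={$_.ExtensionData.Config.AdminDisabled}}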