Updating VMware ESX
Since vSphere has much better iSCSI performance than ESX 3.5 did, we decided to use the full 10Gb of bandwidth to connect the LeftHand iSCSI storage.
Technically this means that we give one FlexNIC 10Gb, which leaves 0Gb to share among the three remaining FlexNICs (per port).
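The per-port allocation rule above can be sketched as a small check. This is a minimal, illustrative model (the function and dictionary names are my own, not an HP or VMware API): the four FlexNICs on a physical Flex-10 port share one 10Gb link, so their allocations can never sum past 10Gb.

```python
# Sketch of the Flex-10 per-port bandwidth rule: all FlexNICs carved out
# of one physical port share a single 10Gb uplink, so their allocated
# bandwidths must sum to at most 10Gb. Names are illustrative only.
PORT_CAPACITY_GB = 10

def remaining_bandwidth(allocations):
    """Return the bandwidth (in Gb) left for the unallocated FlexNICs."""
    used = sum(allocations.values())
    if used > PORT_CAPACITY_GB:
        raise ValueError("FlexNIC allocations exceed the 10Gb port capacity")
    return PORT_CAPACITY_GB - used

# Giving one FlexNIC the full 10Gb leaves 0Gb for the other three:
print(remaining_bandwidth({"1A (iSCSI)": 10}))  # 0
```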
I’m currently working on firmware updates, as it seems this will resolve my issue.

Problem update

For the last few weeks I’ve been swapping e-mails with VMware Support, since updating the HP firmware (Virtual Connect and all the other components) didn’t solve the problem.
First of all, VMware gave me an alternate Broadcom bnx2x driver, which unfortunately didn’t solve the problem either.
The image below shows how the technical design looks now. From a Virtual Connect Manager perspective, we used the following settings in the attached Server Profile (see image below). Please note that we defined all 16 NICs and left 6 of them “Unassigned”.
The “Unassigned” ones are the FlexNICs from Mezzanine Slot 2, which didn’t get any bandwidth assigned to them, as you can see in the “Allocated Bandwidth” column.
Currently the only way to get our connection back is to reboot the whole ESX host.

I’ve written this blog as an add-on to Frank Denneman’s blog about Flex-10, which you can find over here. The goal of this blog is to get a clear view of the Flex-10 port mappings that HP uses to provide their blades with NICs, with a special focus on VMware ESX/vSphere.

Notice that the first 8 vmnics are from the onboard card and the second 8 vmnics are from the Mezzanine card. From within the HP Virtual Connect Manager we can divide the available 10Gb of bandwidth over those 4 FlexNICs; for example, we can give 1A (vmnic0) 1Gb, 1B (vmnic2) 7Gb, and 1C (vmnic4) 1Gb, which leaves us with 1Gb to give to 1D (vmnic6).
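The 1A/1B/1C/1D example above can be written out as a quick sanity check. This is only an illustrative sketch (the data structure is mine, not Virtual Connect's); the FlexNIC-to-vmnic mapping and the 1/7/1 split follow the text.

```python
# Model of the example split: FlexNICs 1A-1C on physical port 1, with the
# vmnic enumeration described in the text (1A -> vmnic0, 1B -> vmnic2,
# 1C -> vmnic4). Structure is illustrative, not an HP/VMware API.
port1 = {
    "1A": {"vmnic": "vmnic0", "gb": 1},
    "1B": {"vmnic": "vmnic2", "gb": 7},
    "1C": {"vmnic": "vmnic4", "gb": 1},
}

allocated = sum(nic["gb"] for nic in port1.values())
left_for_1D = 10 - allocated  # the port's 10Gb minus what 1A-1C consume
print(left_for_1D)  # 1
```

Whatever split you choose, the four allocations per port always have to add up to exactly the 10Gb the physical port provides.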