
Convert dvSwitch to vSwitch and back again


Creating a Distributed Virtual Switch and adding a host is fairly straightforward. I have found, however, that once a host is added to a dvSwitch (or however you wrap your head around the concept) it's a bit more difficult to convert it back, depending on how many virtual adapters are involved. This post documents the steps to add a host to a Distributed Virtual Switch, migrate three VMkernel adapters to it, and then migrate them back to the original Virtual Switch.
CONVERT to dvSwitch

(Fig 1) This screen shows the ESXi-1 host using a vSwitch configured with temporary VMkernel ports. Other than being set for DHCP they serve no real purpose; they exist only for this example.
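Before touching anything it helps to inventory which VMkernel adapters exist and which port groups they sit on. On the ESXi host itself, `esxcfg-vmknic -l` (or `esxcli network ip interface list` on 5.x) prints this. As a sketch — the sample output below is hypothetical, shaped roughly like the client's columns, since I can't paste a live host here — a short awk filter pulls out just the vmk names and their port groups:

```shell
#!/bin/sh
# Hypothetical sample of `esxcfg-vmknic -l` style output; on a real host you
# would pipe the live command instead, e.g.:  esxcfg-vmknic -l | awk 'NR>1 {print $1, $2}'
sample='Interface  Port Group/DVPort   IP Family IP Address
vmk3       Temp-1              IPv4      10.0.0.13
vmk4       Temp-2              IPv4      10.0.0.14
vmk5       Temp-3              IPv4      10.0.0.15'

# Print "vmk<N> <portgroup>" pairs, skipping the header row
echo "$sample" | awk 'NR > 1 { print $1, $2 }'
```

This gives you a quick checklist of adapters to confirm against after each migration step.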

(Fig 2) As you can see in this screen, I already have a dvSwitch configured with various dvPortGroups, and the ESXi-2 server is already a participating host. I am going to add the ESXi-1 host and, when I do, migrate its ports to the dvPG-VirtualMachines port group. Take note of this section.
At this point I'm in the Inventory-Networking view, and I'm going to click the 'Add Host' link in the upper-right corner.

(Fig 3) The Add Host wizard requires selecting a host and the desired NIC. For this example I'm going to select vmnic1 and click Next.

(Fig 4) Now we’ll migrate the virtual adapters from the Virtual Switch to the dvSwitch. Select the virtual adapters to migrate.

(Fig 5) Assign the migrating adapters to a dvPortGroup.

(Fig 6) Here I simply select the dvPortGroup named 'dvPG-VirtualMachines'.

(Fig 7) Now you can see that vmk3, vmk4 and vmk5 have been migrated into the dvPG-VirtualMachines port group on the distributed switch.


So adding a host to a dvSwitch is pretty easy. If there were thousands of virtual adapters to migrate, the convenience of this interface would quickly become obvious. Migrating ports from a dvSwitch back to a standard vSwitch is just as simple, but it's not as automated and can be more time consuming: I haven't found a way to migrate all the virtual adapters back to the vSwitch at the same time, the way we did when migrating them to the dvSwitch.
If someone knows a better or more efficient procedure, please let me know.
(Fig 8) To begin migrating the host back to a standard vSwitch, we first have to move all the adapters off the dvSwitch. To start the process, click the 'Manage Virtual Adapters' link in the upper right.

(Fig 9) Each adapter must be selected individually; then click 'Migrate to Virtual Switch'.
(Fig 10) Select which vSwitch to migrate the virtual adapter to.

There must be a way to migrate all the adapters to a Virtual Switch at the same time, but I haven't been able to figure out how. In this example I had to repeat the process three times to get all the adapters migrated. In a heavy production environment there may be hundreds or thousands of adapters, and I could see this taking a very long time. At that scale one would obviously drop to the command line or a script to get it done.
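One way to script the move at scale is with `esxcli` on the host itself, in the same command family VMware documents for management-network recovery: remove the vmk from its dvPort, then re-add it on a standard port group. A caveat worth hedging: remove/add discards the adapter's IP settings, so they must be reapplied (trivial here since the example vmks are DHCP). The sketch below is a dry run — it only prints the commands it would execute, and the port-group names (Temp-1, etc.) and vSwitch0 are placeholders for this example; review and adapt before running anything for real.

```shell
#!/bin/sh
# Dry-run generator: prints the esxcli commands that would move each VMkernel
# adapter from the dvSwitch back to a standard vSwitch port group.
# Port-group names (Temp-1 ...) and vSwitch0 are placeholders for this example.
VSWITCH="vSwitch0"

gen_migrate_cmds() {
    for pair in "vmk3:Temp-1" "vmk4:Temp-2" "vmk5:Temp-3"; do
        vmk=${pair%%:*}
        pg=${pair##*:}
        # Make sure the target port group exists on the standard switch
        echo "esxcli network vswitch standard portgroup add -p $pg -v $VSWITCH"
        # Remove the adapter from its dvPort, then recreate it on the port group
        echo "esxcli network ip interface remove -i $vmk"
        echo "esxcli network ip interface add -i $vmk -p $pg"
        # Re-apply addressing; the vmks in this example were DHCP
        echo "esxcli network ip interface ipv4 set -i $vmk -t dhcp"
    done
}

gen_migrate_cmds
```

To actually execute, you could pipe the generated commands to `sh` on the host after checking them — though if one of the vmks carries the management interface, do this from the console or DCUI, not over the network you are about to tear down.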
(Fig 11) Once all the virtual adapters have been migrated back to a Virtual Standard Switch, the last thing to do is remove the host from the dvSwitch. For this, go back to the Inventory-Networking view (Ctrl-Shift-H) and select the Hosts tab.

(Fig 12) To remove the host, right-click it and select 'Remove from vNetwork Distributed Switch'. In this example I want to remove the ESXi-1 host, so I followed this step for that host.

(Fig 13) Back in the Inventory-Hosts and Clusters view, on the Configuration tab, you can now see the dvSwitch is no longer associated with the ESXi-1 host.

And that's how to add a host to a Distributed Virtual Switch, migrate the virtual adapters to it, and move everything back to the originating Virtual Standard Switch.


