Saturday, October 1, 2011

VMware vSphere 5 Host Network Designs

*** Updated 30/04/2012 ***

I have now uploaded two 10GbE designs and might create more if they are requested. I can certainly see myself creating some 2 x NIC 10GbE designs, but essentially they would be the same as the two already uploaded. If you have some unique design constraints, please contact me on logiboy123 at gmail dot com.

The following is an anchor page for my vSphere 5 host networking diagrams. The included diagrams are based on 1GbE and 10GbE network infrastructure.

Each of the following links represents a slight variation on the same type of design, where the goals are:
  • Manageability - Easy to deploy, administer, maintain and upgrade.
  • Usability - Highly available, scalable and built for performance.
  • Security - Minimizes risk and is easy to secure.
  • Cost - Solutions are good enough to meet requirements and fit within budgets.

The following base designs should be considered a starting point for your particular design requirements. Feel free to use, modify, edit and copy any of the designs. If you have a particular scenario you would like created please contact me and I will see if I can help you with it.

All designs are based on iSCSI storage networking. For Fibre Channel networks, simply replace the Ethernet storage switches with the relevant fibre switches; the segmentation designs will not apply, so use the isolated designs as your base.
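
As a rough illustration of the iSCSI piece of the isolated designs, the pyVmomi (Python) sketch below builds a dedicated storage vSwitch with two uplinks and pins each iSCSI VMkernel port to a single active uplink, the usual arrangement for iSCSI port binding. The host name, vmnic numbers and IP addresses are placeholders rather than values taken from the diagrams.

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Connect to vCenter (pass an sslContext to SmartConnect if the certificate is self-signed)
    si = SmartConnect(host='vcenter.example.local', user='administrator', pwd='password')
    content = si.RetrieveContent()
    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    ns = hosts.view[0].configManager.networkSystem  # first host only; filter by name in real use

    # Dedicated storage vSwitch isolated on its own pair of uplinks
    ns.AddVirtualSwitch(
        vswitchName='vSwitch-Storage',
        spec=vim.host.VirtualSwitch.Specification(
            numPorts=64,
            bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=['vmnic4', 'vmnic5'])))

    # Two iSCSI VMkernel ports, each pinned to one active uplink (the other uplink is unused)
    for pg_name, active, addr in [('iSCSI-A', 'vmnic4', '10.10.10.11'),
                                  ('iSCSI-B', 'vmnic5', '10.10.10.12')]:
        teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
            policy='failover_explicit',
            nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(activeNic=[active]))
        ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
            name=pg_name, vlanId=0, vswitchName='vSwitch-Storage',
            policy=vim.host.NetworkPolicy(nicTeaming=teaming)))
        ns.AddVirtualNic(portgroup=pg_name, nic=vim.host.VirtualNic.Specification(
            ip=vim.host.IpConfig(dhcp=False, ipAddress=addr, subnetMask='255.255.255.0')))

    Disconnect(si)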


1GbE NIC Designs

6 NICs isolated storage & no Fault Tolerance

6 NICs segmented storage & no Fault Tolerance

8 NICs isolated storage including Fault Tolerance

10 NICs isolated storage & isolated DMZ including Fault Tolerance

10 NICs segmented networks including DMZ & Fault Tolerance

12 NICs segmented networks including DMZ & Fault Tolerance - Highly Resilient Design


10GbE NIC Designs

4 NICs 10GbE segmented networking vSS Design

4 NICs 10GbE segmented networking vDS Design

10 comments:

  1. So what do you use with local storage and VSA?

  2. I haven't had the opportunity to install the VSA at a customer site yet. At this point I'm not sure the VSA is worth the 5k price tag, as you can get a supported iSCSI SAN for less than that.

    In my isolated designs you would simply remove the storage network if you had fibre or local storage and leave all the remaining networking as is.

  3. These are great designs. Do you have them on PDF I can get?

  4. You can email me:

    logiboy123 at gmail dot com

  5. Hi Paul,
    so do we have to set Active/Passive on the virtual machine network? I have mine on Active/Active, with only RSTP enabled on the switches, and I'm running into a problem where VMs on two different hosts (same VLAN) don't talk to each other.

    Replies
    1. Tien,

      The best bet would be to create a discussion in the VMware technical forums. This will ensure that several people are aware of your problem and can assist. I would try to upload screenshots of your configuration to help people troubleshoot the issues.
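
      As an aside, a quick pyVmomi (Python) sketch like the one below will print the effective NIC teaming order of every port group on each host, which is exactly the sort of detail worth pasting into a forum post; the connection details are placeholders.

        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host='vcenter.example.local', user='administrator', pwd='password')
        content = si.RetrieveContent()
        hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)

        # Walk every host and report each port group's computed teaming policy
        for host in hosts.view:
            print(host.name)
            for pg in host.config.network.portgroup:
                teaming = pg.computedPolicy.nicTeaming
                order = teaming.nicOrder if teaming else None
                print('  %-22s policy=%s active=%s standby=%s' % (
                    pg.spec.name,
                    teaming.policy if teaming else 'n/a',
                    list(order.activeNic) if order else [],
                    list(order.standbyNic) if order else []))

        Disconnect(si)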

  6. Hi,

    What do you think would be the best design to make use of blades with 4 x 1Gb NICs, with storage going through FC?

    I'd love to know your thoughts on solutions both with and without FT.

    Many thanks

    Replies
    1. Please use the 6 NICs isolated storage & no Fault Tolerance design for that environment. Simply remove the storage component of the design; what is left will be Management, vMotion and VM Networking.

    2. The only way to get FT in with 4 NICs would be to create a single switch where each of the four traffic types is bound to a single vmnic, i.e.:
      • Management - vmnic0
      • vMotion - vmnic1
      • FT - vmnic2
      • VM Networking - vmnic3
      Then order the standby uplinks accordingly.

      This is not an implementation I would use; rather, I would get the business to buy more hardware. A single 1Gb uplink isn't going to be enough throughput for VM Networking when server densities are higher than roughly 15-25 VMs per host. In fact, my rule of thumb is 1Gb per 20 VMs, and even that will not necessarily cater for all of the peak times. I would get another mezzanine card in the chassis to add more NICs, or upgrade to 10GbE.
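
      For anyone who does go down that road, the pyVmomi (Python) sketch below shows roughly how that single-switch layout could be scripted, with each port group given one active uplink and the remaining uplinks as standby; the VLAN IDs, credentials and host selection are placeholders, not a recommendation.

        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host='vcenter.example.local', user='administrator', pwd='password')
        content = si.RetrieveContent()
        hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
        ns = hosts.view[0].configManager.networkSystem  # first host only; filter by name in real use

        uplinks = ['vmnic0', 'vmnic1', 'vmnic2', 'vmnic3']

        # Give the default vSwitch0 all four uplinks
        ns.UpdateVirtualSwitch(
            vswitchName='vSwitch0',
            spec=vim.host.VirtualSwitch.Specification(
                numPorts=128,
                bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=uplinks)))

        # Traffic type -> (active uplink, VLAN); all other uplinks become standby.
        # The existing Management Network port group would be pinned to vmnic0 in
        # the same way via ns.UpdatePortGroup rather than created here.
        layout = {'vMotion':    ('vmnic1', 20),
                  'FT':         ('vmnic2', 30),
                  'VM Network': ('vmnic3', 40)}

        for name, (active, vlan) in layout.items():
            teaming = vim.host.NetworkPolicy.NicTeamingPolicy(
                policy='failover_explicit',
                nicOrder=vim.host.NetworkPolicy.NicOrderPolicy(
                    activeNic=[active],
                    standbyNic=[n for n in uplinks if n != active]))
            ns.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
                name=name, vlanId=vlan, vswitchName='vSwitch0',
                policy=vim.host.NetworkPolicy(nicTeaming=teaming)))

        # The vMotion and FT VMkernel adapters would then be added to those
        # port groups with ns.AddVirtualNic.
        Disconnect(si)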

  7. Thanks for sharing. I will need to do something similar very soon, and this is a good starting point. I will PM you for the PDFs and make sure that you get all the credit.
