This page documents the network design that underlies that example. The design is derived from real-world web hosting networks that Netomata founder Brent Chapman has designed and implemented for consulting clients. Netomata stands ready to provide a variety of such services to your organization; please contact us to discuss how we can help you.
The network architecture is intended to address the following requirements.
The design should cover a single datacenter installation, with room to grow both within that datacenter (add hosts within the rack, add racks within the datacenter). However, IP addressing and DNS naming conventions should contemplate future growth to multiple datacenters.
The design should be able to tolerate the failure of any single hardware or software component without causing a service outage from an external end user's point of view. "Component" covers everything from power strips and data cables, through hardware devices such as switches and servers, to the software that runs on those devices.
Performance degradation (slower response times, etc.) is deemed an acceptable consequence of a failed component while the component is being repaired. Functional degradation (certain services or data being unavailable) from the end user's point of view is not acceptable.
Functional degradation that isn't visible to end users (for instance, temporary failure of internal management or monitoring tools) might be acceptable, and needs to be considered on a function-by-function basis. Factors that should be considered include potential frequency of failure, expected duration of failure (expected time to diagnose and repair), and potential user-visible indirect consequences of the failure (for instance, failure of a monitoring system delays detection of a second unrelated failure, which is user-visible).
The design needs to provide for multiple separate environments that can operate simultaneously and in parallel, without any interaction or interference between them, such as:
There might need to be multiple environments of any given type, simultaneously; for example, there might need to be multiple testing environments for testing different release trains, or multiple production environments to enable customized demos for particular customers.
Any environment might be a functional duplicate, rather than a literal duplicate, of the standard environment; for instance, server virtualization might be used more heavily in testing, so that a testing environment involves fewer physical machines than a production or staging environment.
Separately, the design should also allow for a corporate environment in the data center, hosting services such as corporate email and the company's general public web site.
The design should allow for up to 16 simultaneous environments.
It should be possible to create and delete environments on demand. There should be a well-defined, automated procedure for setting up a new environment, covering all of the systems (routers, switches, etc.) and services (status monitoring, performance monitoring, etc.) that need to be changed and outlining the changes to be made, as well as a corresponding procedure for deleting an environment that's no longer needed.
Care should be taken to prevent (and, to the extent practical, preclude) any environment from having any dependencies on other environments, especially the Corporate environment. Such dependencies might be precluded, for example, by setting up black hole routes or access control lists in the protected environments to prevent them from reaching any of the other environments. This implies that the addressing plan needs to provide for a clear and straightforward separation of the environments.
The network design needs to enable and facilitate extensive use of server virtualization, particularly in the development, test, and staging environments.
It should be easy to add capacity for a particular server type (web server, app server, etc.) by simply adding more servers (physical or virtual) of the appropriate type. The network design and load balancer configuration plan shouldn't assume that there will be only 2 servers of any given type, though that's generally what is diagrammed.
The growth path should be clear, across a variety of levels (more servers in the rack, more racks in the datacenter, more datacenters).
The network design should enable backups of all production systems via a dedicated backup VLAN. Initially the backup server will be connected via FastE, just like the rest of the servers. However, GigE should be in the design as a growth option, in case future bandwidth needs require it.
The network and servers should be set up such that all configuration and maintenance, other than replacement of failed hardware components and installation of new components, can be done remotely. It should only be necessary to visit the datacenter to replace failed hardware or to install new hardware; it should not be necessary to visit to rearrange cables or anything like that for configuration changes.
All of the Linux servers will have a single dedicated management Ethernet interface for IPMI use, in addition to their main Ethernet interfaces. These management Ethernet interfaces cannot be presumed to be 802.1Q (trunking) capable, and so need to be connected to single-VLAN (non-trunked) Ethernet ports on the switches.
Servers that are not intended to be publicly accessible (for instance, database servers) should be given non-routable private addresses (per RFC 1918), and Network Address Translation should not be used, in order to make it more difficult for these machines to be reached (and potentially attacked) from the outside world.
Cabling between racks should be minimized, both to keep the cable monster under control and to make it possible for expansion racks to be widely separated. Ideally, it shouldn't be necessary to run more than a couple of fiber optic pairs and a couple of Cat6 connections between racks.
It should be possible to access the management interfaces of all components through an independent, out-of-band management system that doesn't rely on any piece of production equipment being functional. This implies a separate, dead-simple management LAN to provide access to serial console ports and IPMI Ethernet interfaces for all components.
This section outlines, in narrative form, the basic concepts of this network design.
Each server (real or virtual) is part of one particular environment. Most environments are intended to be a fully-functional instance of some version of the service, including all necessary servers and software to offer (or test) the service. Some environments are for special purposes, such as administration of real servers (Xen dom0 hosts) or backups. Each environment is assigned an env_id number (ranging from 1 through 50), which is used to calculate VLAN tags, IP addresses, and so forth.
The unique environments include:
|Environment||env_id||Description|
|Production||1||The in-service version of the service that is currently in use by customers|
|Pre-Prod||2||The "next" version of the service that is being prepared for imminent use by customers, where things like data staging and final QA testing take place|
|Demo||3||A fully functional version of the service that is populated with sanitized example data for use during sales demonstrations|
|Dev||4||A developmental version of the service, which is not yet ready for testing and QA, and which is definitely not for use by customers|
|QA||5||A version of the service that is undergoing integration testing and primary QA; not expected to be accessed by customers|
|Test||6||An "experimental" QA environment|
In addition, there are certain "special" environments:
|Environment||env_id||Description|
|Shared||see note 1||Some servers (the admin and backup servers, for example) are shared across all the unique environments; these servers are said to be in the "shared" environment. In addition, all physical servers (Xen dom0 hosts) and network devices have management interfaces in the Shared environment.|
|Corporate||see note 2||Some corporate servers (the company's public web server, corporate email, etc.) are expected to be placed in the data center for convenience and cost efficiency, though these services are completely independent from the unique environments.|
The network architecture is typical for a web-hosting application. It consists of the following components:
Internet connectivity is via 100 Mb/s Ethernet from the data center vendor. Internet connectivity is redundant, from 2 different data center vendor routers. The data center vendor provides a single IP address as the default route for traffic outbound to the Internet; the data center vendor manages which of their routers is currently handling that address via VRRP or HSRP, in a way that is transparent to us.
A redundant firewall connects our rack to the data center provider's network. Network Address Translation (NAT) for our entire network is done by our firewall.
A redundant load balancer spreads load across all replicated servers internally.
Our switch/router pair (a pair of Cisco 3550-48 in an active-active configuration) provides redundant routing interconnectivity for all the equipment in our rack.
Our rack contains a number (3 initially, with room to grow to 24) of 1U servers running Linux and Xen. Each physical server hosts several virtual servers running Linux.
The following diagram shows the basic layout of the network at both Layer 2 (Ethernet) and Layer 3 (IP). Things to note in particular:
The following diagram further illustrates the basic Layer 2 (Ethernet) and Layer 3 (IP) connectivity for each type of host and each environment. Things to note in particular:
The horizontal lines in the diagram above represent particular networks, implemented as VLANs in the Ethernet switches. The VLANs that are solid black (ISP-FW, FW-LB, LB-RTR, Bulk, Management, and IPMI) are shared across all environments; that is, the Bulk VLAN in the Production environment is the same as the Bulk VLAN in the Pre-Prod, Test, Demo, and other environments. The VLAN that is grey (Environment) is unique to each environment; that is, the Production Environment VLAN is a different and distinct network from the Pre-Prod Environment VLAN.
Each VLAN serves a unique function:
Another characteristic shown on this diagram is which connections are routing and which are non-routing (at Layer 3, IP). Where a "non-routing" connection is shown, a host (the server, router, load balancer, or whatever) can communicate directly with other hosts on the same VLAN, but will not pass packets on behalf of some other host (that is, will not act as a Layer 3 IP router). Where a "routing" connection is shown, conversely, the host with the routing connection will pass packets on behalf of other hosts (that is, act as a Layer 3 IP router). This distinction is a security measure, to control which hosts can be contacted from the outside world and to limit the ability of compromised hosts to be used as bases to attack other hosts. (On Red Hat or CentOS systems, routing is disabled by setting "FORWARD_IPV4 = NO" in the /etc/sysconfig/network file; as you can see from the diagram, all of our Red Hat and CentOS systems should be configured that way, since the only routing connections are on the Cisco router itself).
The following step-by-step walk-through of a typical service request from a customer might clarify things.
Note that when the host replies, the packets flow back via much the same path, with one key variation. The default route for all internal servers is to the ".1" address on their subnet, which is the switch (acting as a router). The switch's default route is to the loadbal via the LB-Rtr VLAN. So, while incoming packets flow directly from the loadbal to internal servers, outgoing packets flow from the server to the switch/router (via the Environment VLAN), and from there to the loadbal (via the LB-Rtr VLAN). This little asymmetry is a quirk of the design, but doesn't seem to cause any problems.
VLANs will be numbered as follows:
|1||Cisco default VLAN (nothing should be attached to this)|
|2||ISP-FW ("outside" connection on firewalls; used only between firewalls and ISP)|
|3||FW-LB ("inside" connection on firewalls; used only between firewalls and load balancers)|
|4||LB-RTR (connection between load balancers and routers for rest of VLANs)|
|15||Failover (crossover cable between firewall-1 and firewall-2; VLAN doesn't actually appear on switch, and is only defined here for purpose of reserving IP address space)|
|16||Management (shared among all environments)|
|32||Bulk (shared among all environments)|
|48||IPMI (shared among all environments)|
|80 + env_id||Environment-specific (unique to each environment)|
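As a sketch, the numbering plan above can be captured in a small helper (Python is used here purely for illustration; the shared VLAN tags and the 80 + env_id rule are taken directly from the table):

```python
# Sketch of the VLAN numbering plan described in the table above.
SHARED_VLANS = {
    "default": 1,     # Cisco default VLAN; nothing attached
    "ISP-FW": 2,
    "FW-LB": 3,
    "LB-RTR": 4,
    "Failover": 15,   # reserved for addressing only; not on the switch
    "Management": 16,
    "Bulk": 32,
    "IPMI": 48,
}

def environment_vlan(env_id):
    """Environment-specific VLAN tag: 80 + env_id (env_id runs 1..50)."""
    if not 1 <= env_id <= 50:
        raise ValueError("env_id must be in 1..50")
    return 80 + env_id

assert environment_vlan(1) == 81   # Production
assert environment_vlan(2) == 82   # Pre-Prod
```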
IP addresses for the shared ISP-FW VLAN (VLAN 252) are drawn from the routable IP address allocation provided by our ISP.
We need to make available at least 1 IP address for each environment, for access to that environment's web server by customers using HTTP (TCP port 80) and HTTPS (TCP port 443), leading via NAT to the appropriate load-balanced internal IP address.
We might want to make additional external IP addresses available for each environment; for example, we might want to use a different external IP address (and DNS hostname) for each customer, even if those external addresses all lead to the same environment. Since these external IP addresses are scarce resources, and since the number needed might vary from environment to environment and from time to time, we'll simply keep careful track of our external IP addresses and allocate them individually as needed, rather than set up some sort of scheme for pre-allocating external address space along subnet boundaries or anything like that.
IP addresses for all other VLANs (the FW-LB and LB-RTR VLANs, the shared Bulk and Management VLANs, and the VLANs for each environment) are drawn from Net 10, per RFC 1918.
In general, the structure of an internal IP address is 10.site.VLAN.host where:
So, in general, the IP network assigned to a given VLAN is 10.5.VLAN.0/24. There are specific exceptions, noted elsewhere; for example, the Management VLAN (VLAN 16) is a /20 beginning at 10.5.16.0 and running through 10.5.31.255.
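The 10.site.VLAN.host structure can be sketched with Python's standard `ipaddress` module (the site octet 5 is the value used for site 1 throughout this page's examples):

```python
import ipaddress

SITE_OCTET = 5  # second octet used for site 1 in this page's examples

def vlan_network(vlan, prefix=24, site=SITE_OCTET):
    """IP network for a given VLAN, per the 10.site.VLAN.0/24 convention."""
    return ipaddress.ip_network(f"10.{site}.{vlan}.0/{prefix}")

assert str(vlan_network(81)) == "10.5.81.0/24"             # Production Environment VLAN
assert str(vlan_network(16, prefix=20)) == "10.5.16.0/20"  # Management net exception
```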
The Management IP address of any given device (real host, virtual host, switch, router, etc.) is intended to be its "primary" address, used for monitoring and managing that device. Services on a given host should be accessed through addresses on the Environment subnet, as appropriate, not through the host's Management address. A single host (with a single Management net address) might conceivably be providing services to different environments (with a different environment-specific address for each).
In other words, addresses on the Management (and Bulk) net should map to devices (real or virtual servers, routers, switches, etc.), while addresses on the Environment networks should map to services (even if multiple of those services are provided by the same device).
One exception to this is the addresses for the shared services such as DNS servers, syslog servers, and so forth; these are VIPs on the host(s) actually providing the service, but they are on the Management network so that they are reachable by all devices.
The following addresses on each subnet are reserved:
|0||Reserved by standard IP networking|
|1||Default router VIP (arbitrated between potential default routers via VRRP)|
|2||Direct address of potential default router 1 (participating in VRRP)|
|3||Direct address of potential default router 2 (participating in VRRP)|
|255||Broadcast address (per standard IP networking)|
The Environment subnets (subnets 81-86) are used to isolate the traffic for a particular category of services. For example, one subnet may contain production services and another staging services, and the traffic will be kept separate. At this level, services must not rely on a service in another subnet, so an application server in one subnet cannot reference a database server in another network. However, an environment subnet can reference a service on the Management VLAN; this allows common services like DNS to be made available to all environment subnets.
Within each subnet, addresses are grouped rather loosely by the services provided.
On the Environment net for each environment, the groups are allocated as follows:
|1-9||Reserved for router, firewalls and routing protocols.|
|10-99||Available for assignment to the load balancer|
|100-199||Available for virtual servers|
|200-254||Available for physical servers|
|255||reserved (broadcast address for the entire /24 subnet)|
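The allocation above can be sketched as a helper that builds an environment-subnet address from an env_id and a 4th octet, classifying the octet by the groups in the table (this is illustrative only; the group names are paraphrased from the table):

```python
def env_address(env_id, host_octet, site=5):
    """Address on an environment VLAN: 10.<site>.<80 + env_id>.<host_octet>."""
    groups = [
        (1, 9, "router/firewall/routing"),
        (10, 99, "load balancer"),
        (100, 199, "virtual server"),
        (200, 254, "physical server"),
    ]
    group = next(name for lo, hi, name in groups if lo <= host_octet <= hi)
    return f"10.{site}.{80 + env_id}.{host_octet}", group

# Matches the production App server VIP example below:
assert env_address(1, 17) == ("10.5.81.17", "load balancer")
```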
Here are some examples of IP addresses on the Environment subnet, to illustrate:
|10.5.81.17||load-balanced VIP for the App server (.17), on the environment VLAN for the production environment (environment 1, VLAN 81), at site 1 (original datacenter)|
|10.5.81.18||direct IP for first App server (.18), on the environment VLAN for the production environment (environment 1, VLAN 81), at site 1 (original datacenter)|
|10.5.81.19||direct IP for second App server (.19), on the environment VLAN for the production environment (environment 1, VLAN 81), at site 1 (original datacenter)|
|10.5.81.33||load-balanced VIP for the Reports server (.33), on the environment VLAN for the production environment (environment 1, VLAN 81), at site 1 (original datacenter)|
|10.5.81.34||direct IP for first Reports server (.34), on the environment VLAN for the production environment (environment 1, VLAN 81), at site 1 (original datacenter)|
|10.5.81.35||direct IP for second Reports server (.35), on the environment VLAN for the production environment (environment 1, VLAN 81), at site 1 (original datacenter)|
|10.5.82.65||load-balanced VIP for the Charts server (.65), on the environment VLAN for the pre-production environment (environment 2, VLAN 82), at site 1 (original data center)|
|10.5.82.66||direct IP for the first Charts server (.66), on the environment VLAN for the pre-production environment (environment 2, VLAN 82), at site 1 (original data center)|
|10.5.82.67||direct IP for the second Charts server (.67), on the environment VLAN for the pre-production environment (environment 2, VLAN 82), at site 1 (original data center)|
The Management, Bulk, and IPMI subnets are essentially similar to each other. Most devices are on one, two, or all three, depending on the type of device; they are separate networks only for traffic management purposes, to isolate high-volume backup and data transfer traffic from high-priority management traffic. So, devices that are on more than one of these networks should have similar IP addresses on all.
See the note above on Device versus Service IP addresses. One exception to this is the addresses for the DNS servers; these are VIPs, but are on the Management network so that they are reachable by all devices.
Given that a single rack has either 44 or 48U of space for devices, a /24 (254 usable addresses) should be sufficient to handle the real and virtual devices within a single rack. Multiple adjacent /24s will be used for multiple racks; a /20 would thus provide address space for 16 racks in a single datacenter.
It isn't necessary to be pedantic about this, though, as long as systems are accurately labelled and an accurate list of assigned addresses is maintained; it's probably overkill to change a device's Management IP address solely because it's moved up or down in the rack (or even to a different rack, as long as all the racks are in the same VLAN/subnet).
The Management net is VLAN 16, and is treated as one large flat /20 address space (so 10.5.16.0/20, which encompasses addresses from 10.5.16.0 to 10.5.31.255). Each rack should be given a separate 3rd octet (ranging from .16. for the 1st rack through .31. for the 16th rack), and the first 8 addresses in each /24 (so 10.5.x.0 through 10.5.x.7, where x is 16 to 31) should be reserved for routers and such, so that the big flat /20 subnet can someday be broken into separate per-rack /24 subnets if necessary.
The Bulk net is VLAN 32, and is treated as one large flat /20 address space (so 10.5.32.0/20, which encompasses addresses from 10.5.32.0 to 10.5.47.255). Each rack should be given a separate 3rd octet (ranging from .32. for the 1st rack through .47. for the 16th rack), and the first 8 addresses in each /24 (so 10.5.x.0 through 10.5.x.7, where x is 32 to 47) should be reserved for routers and such, so that the big flat /20 subnet can someday be broken into separate per-rack /24 subnets if necessary.
The IPMI net is VLAN 48, and is treated as one large flat /20 address space (so 10.5.48.0/20, which encompasses addresses from 10.5.48.0 to 10.5.63.255). Each rack should be given a separate 3rd octet (ranging from .48. for the 1st rack through .63. for the 16th rack), and the first 8 addresses in each /24 (so 10.5.x.0 through 10.5.x.7, where x is 48 to 63) should be reserved for routers and such, so that the big flat /20 subnet can someday be broken into separate per-rack /24 subnets if necessary.
The following table gives the addresses on these three nets for various real and virtual hosts ("VIP" denotes a Virtual IP address):
|Management Addr||Bulk Addr||IPMI Addr||Usage|
|10.5.16.1||10.5.32.1||10.5.48.1||Default route VIP (arbitrated via VRRP)|
|10.5.16.2||10.5.32.2||10.5.48.2||Direct IP address for first router (switch-1)|
|10.5.16.3||10.5.32.3||10.5.48.3||Direct IP address for second router (switch-2)|
|10.5.16.4||-||-||Active Firewall (normally firewall-1)|
|10.5.16.5||-||-||Standby Firewall (normally firewall-2)|
|10.5.16.6||-||-||Active Corporate Firewall (normally corpfw-1)|
|10.5.16.7||-||-||Standby Corporate Firewall (normally corpfw-2)|
|10.5.16.9||-||-||First load balancer (loadbal-1)|
|10.5.16.10||-||-||Second load balancer (loadbal-2)|
|10.5.16.17||-||-||DNS server VIP - primary|
|10.5.16.18||-||-||DNS server VIP - secondary|
|10.5.16.25||-||-||Admin Services VIP|
|10.5.16.26||-||-||syslog server VIP|
|10.5.16.27||-||-||SNMP server VIP|
|10.5.16.29||-||-||Database Services VIP|
|10.5.16.30||-||-||Backup Services VIP|
|10.5.16.33||-||-||First Power Controller (pdu-1)|
|10.5.16.34||-||-||Second Power Controller (pdu-2)|
|10.5.16.201-248||10.5.32.201-248||10.5.48.201-248||Real hosts; 4th octet is 200 + rack_position (rack_position ranges from 1 to 48)|
|10.5.16.255||10.5.32.255||10.5.48.255||Reserved for broadcast address|
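The per-rack 3rd-octet scheme and the "4th octet is 200 + rack_position" rule for real hosts can be sketched as follows (a sketch only; base octets and ranges are taken from the descriptions above):

```python
NET_BASE = {"Management": 16, "Bulk": 32, "IPMI": 48}  # 3rd-octet base per /20

def shared_net_host_addr(net, rack, rack_position, site=5):
    """A real host's address on the Management, Bulk, or IPMI /20.

    Rack r (1..16) uses 3rd octet base + r - 1; real hosts use
    4th octet 200 + rack_position (rack_position runs 1..48).
    """
    if not (net in NET_BASE and 1 <= rack <= 16 and 1 <= rack_position <= 48):
        raise ValueError("argument out of range")
    return f"10.{site}.{NET_BASE[net] + rack - 1}.{200 + rack_position}"

# Host at the bottom of rack 1, and the same slot's IPMI address in rack 2:
assert shared_net_host_addr("Management", 1, 1) == "10.5.16.201"
assert shared_net_host_addr("IPMI", 2, 1) == "10.5.49.201"
```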
All IP routing will be done by networking devices (routers, switches, loadbals, firewalls, etc.). All hosts which have interfaces on multiple VLANs should be configured not to act as a router between those networks. (On Red Hat or CentOS systems, routing is disabled by setting "FORWARD_IPV4 = NO" in the /etc/sysconfig/network file.)
Only IPv4 should be used throughout the network; IPv6 should be disabled on all devices, and in particular should not be forwarded by the routers.
Since all routing will be done by routers/switches/loadbals, and since each of those will be connected to all VLANs, there is no need for a dynamic routing protocol internally between routers; each router will have direct knowledge of the status of all networks, since they are all "attached" from each router's point of view.
The router will implement IP Access Control Lists (ACLs) to block traffic between the various Environment networks, in order to preclude inadvertent cross-environment dependencies from being introduced by developers.
Data that needs to be transferred between environments (database exports used to seed one environment from another, for example) can be done via the Bulk network, but steps should be taken to ensure that Bulk net addresses or hostnames don't get encoded into the applications or their config files, in order to prevent cross-environment dependencies via the Bulk net.
NAT (Network Address Translation) will be handled by the firewalls.
Customer-visible services will be assigned an externally-routable IP address. The firewall will translate this to an internal IP address (which will generally be a loadbal VIP, and thus further NAT'd by the loadbal as part of the load balancing process).
NAT will be set up so that all internal hosts can make connections to external IP addresses, regardless of whether or not the internal host provides a customer-visible service. (If we use static NAT for hosts providing customer-visible services, they'll use that same NAT assignment for their outgoing connections; we'll also establish dynamic NAT for all other internal hosts, which don't provide customer-visible services and therefore don't have static NAT assignments.)
Load balancing for all services on all VLANs will be handled via dedicated hardware load balancers. Server pools (including VIP load balancing target addresses for each service, and direct addresses for the actual servers providing each service) will be allocated and managed as indicated above in the section on IP addresses on Environment subnets.
Appropriate and useful hostname-to-address and address-to-hostname translations should be published via DNS, both internally and externally. Certain domains should only be visible internally, as described below. RFC 1918 addresses (i.e., net 10) should never be visible externally, in either hostname-to-address or address-to-hostname DNS data.
All externally-visible hostnames to be used by end users should be in the main ".example.com" domain; for instance:
In addition, any externally-accessible services for employees (for instance, VPN servers) should also be in the ".example.com" domain; for example:
This essentially corresponds to VLAN 252 (ISP-FW).
No internal subdomain names should be resolvable by DNS from the outside world.
Internally, subdomains are used to identify environments, both shared and production-like:
|mgmt.example.com||16||Management net; main administrative interface for all hosts|
|bulk.example.com||32||Bulk net; high-volume data transfer interface for all hosts (for backups, database export/imports, etc.)|
|stage.example.com||82||Pre-Production staging environment|
|corp.example.com||128||Corporate subnet (totally divorced from rest of nets; switches do not have router interfaces on this VLAN, but rather act as merely a "dumb switch" for this VLAN; devices on this subnet use the firewall interface on this subnet as their default route)|
Devices (real or virtual hosts, routers, switches, etc.) should have names in the Management and Bulk subdomains, while services should have names in the Environment subdomains. This is similar to Device versus Service IP addresses.
Since IP addresses on the Management and Bulk VLANs should refer to devices (real or virtual; see Device versus Service IP addresses), so should the hostnames associated with those IP addresses.
In general, hostnames on the Management and Bulk VLANs should identify the type of the device. Since there are typically multiple devices of each type, they should be numbered, starting at 1. So, the form for a hostname is type-N.
So, sample hostnames on the Management VLAN would be:
For the Bulk VLAN, simply substitute ".bulk." for ".mgmt.".
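The type-N device-naming convention can be sketched as follows (a sketch; "switch" is used as the example device type since switch-1 and switch-2 are named elsewhere on this page):

```python
def device_name(dev_type, n, net="mgmt", domain="example.com"):
    """Device hostname on the Management or Bulk VLAN: type-N.net.domain."""
    if net not in ("mgmt", "bulk"):
        raise ValueError("device names belong in the mgmt or bulk subdomains")
    return f"{dev_type}-{n}.{net}.{domain}"

assert device_name("switch", 1) == "switch-1.mgmt.example.com"
assert device_name("switch", 1, net="bulk") == "switch-1.bulk.example.com"
```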
In the future, if the installation grows to multiple racks, each rack could be given its own subdomain under the mgmt.example.com domain. For example, the routers in the first rack would become:
and routers in a second rack would be
And in the further future, if the installation grows to cover multiple data centers, a subdomain could also be added for each data center. For example, you might have routers in 2 racks in the first data center:
and more in 2 racks in the second data center:
It isn't strictly necessary to create these rack and datacenter subdomains until they're needed, though creating them later will require a certain amount of reconfiguration of servers, services, and so forth. That work (and the potential problems that it entails) could be avoided by creating these extra levels now, even though they aren't needed now, at the cost of longer and more cumbersome hostnames now.
Since IP addresses on the Environment VLANs should refer to services (see Device versus Service IP addresses), so should the hostnames associated with those IP addresses.
For each service, there will generally be a VIP that is arbitrated via load balancing (or VRRP, or some other similar method), and a series of direct addresses for the hosts actually providing the service. The structure for hostnames for VIPs should be service, and the structure for hostnames for the direct addresses providing the service should be service-N, where N starts at 1 and counts upwards.
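The service/service-N scheme can be sketched as follows (a sketch; the "app" service name and "prod" environment subdomain are hypothetical placeholders, since this page doesn't spell out the production environment's subdomain):

```python
def service_names(service, n_hosts, env_subdomain, domain="example.com"):
    """VIP name plus direct names service-1 .. service-N for one environment."""
    vip = f"{service}.{env_subdomain}.{domain}"
    directs = [f"{service}-{i}.{env_subdomain}.{domain}"
               for i in range(1, n_hosts + 1)]
    return vip, directs

vip, directs = service_names("app", 2, "prod")
assert vip == "app.prod.example.com"
assert directs == ["app-1.prod.example.com", "app-2.prod.example.com"]
```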
Service names include:
So, the complete set of hostnames for the Production Environment VLAN, assuming each service is provided by 2 hosts, would be:
Appropriate PTR records (address-to-name translation) should be published corresponding to all in-use IP addresses and hostnames.
DNS servers will be run on two different hosts on the Management VLAN, so that they are always accessible to all hosts. All hosts should be configured to use both servers. The servers will be run on virtual IP addresses, so that they can be moved from one host to another, if necessary, without needing to reconfigure other hosts.
|ns1.mgmt.example.com||10.5.16.17||Primary domain name server for all internal domains|
|ns2.mgmt.example.com||10.5.16.18||Secondary domain name server for all internal domains|
External DNS service for publicly-visible domains will be provided through VIPs on the same hosts as the publicly-reachable web servers for the production environment.
IPMI (Intelligent Platform Management Interface) is the console interface for physical device management of the Linux hosts; it's the data center equivalent of keyboard, video, and mouse. It is done over a dedicated Ethernet port on each host. Since these IPMI Ethernet ports cannot be presumed to be 802.1q-capable (i.e., can't do VLAN trunking), they need to be connected to dedicated (non-trunked) ports on the Ethernet switches, which are all grouped into an IPMI VLAN.
Note that the IPMI interface is different from the "management" interface on each host. The IPMI interface lets you talk to the underlying hardware (BIOS, etc.), and is used for troubleshooting, BIOS setting changes, and the like when the host isn't necessarily even booted up yet. The management interface (which is one of several VLAN interfaces that each host has, via 802.1q trunking on the host's main Ethernet ports) is what is used to talk to the operating system on the host, via SSH or SNMP or whatever, when the host is up and running.
There will be out-of-band management for console access to all devices, including the routers and switches. By "out-of-band", we mean that it will be possible to access the console server independently of any piece of managed equipment (particularly including the routers and switches) or the primary Internet connection; the goal is to have a method available to reach the console of any device, so that troubleshooting and (if necessary) reconfiguration can be done without requiring someone to drive to the data center, even if the routers and switches are disabled or the primary Internet connectivity is down.
For all devices which don't have IPMI interfaces (routers, switches, power distribution units, etc.), but have serial console interfaces instead, there will be a console server.
Unless the console server itself provides such functionality, normal access to the consoles for all devices should be through conserver (or something similar) on the admin server, rather than directly via the console server. Using conserver provides a number of advantages over accessing consoles directly through the console server:
In a pinch, consoles could still be accessed directly from the console server, bypassing the admin host.
Console logs for all devices will be captured on the admin host via conserver or similar software, as discussed above in the section on Console access/logging server.
A "syslog.mgmt.example.com" DNS hostname and corresponding VIP should be established on the admin host, syslogd software should be enabled on the admin host, and all devices should be configured to send syslog messages to that address or hostname.
An "snmp.mgmt.example.com" DNS hostname and corresponding VIP should be established on the admin host, SNMP trap software should be enabled on the admin host, and all devices should be configured to send SNMP traps to that address or hostname.
An SNMP trap listener should be established on that address, to capture and log SNMP traps, and something like SNMPTT should be used to translate traps into more meaningful text.
Status monitoring for all devices and services is handled using Nagios running on the admin host.
Performance monitoring for all devices and services is handled using Cricket running on the admin host.
In general, backups will be made across the dedicated Bulk VLAN to the dedicated backup host, at whatever frequency and using whatever method is appropriate for the data in question.
This design supports virtualization of hosts by essentially treating virtual hosts (Xen DomU hosts) and physical hosts (Xen Dom0 hosts) equivalently. In general, physical hosts are accessed and managed through their Management VLAN (10.5.16.x) interfaces, while virtual hosts have interfaces only on the relevant Environment VLAN. Virtual hosts also don't have IPMI management connections, obviously.
[Attachment: Web Hosting Example Diagrams-VLANs.jpg]
[Attachment: Web Hosting Example Diagrams-Connectivity.jpg]