After a very long delay, there is finally a new release of AutoLab.
This version adds support for vSphere 6.5 and 6.7, Horizon View 7.0 and 7.5, and Windows Server 2016.
The version of FreeNAS has been updated, and pfSense replaces FreeSCO as the router. These changes make AutoLab more stable and reliable, at the cost of much larger downloads. Each AutoLab package is around 2GB in size, so you should only download the version you will use.
As always, read the deployment guide with care and have fun with AutoLab.
I have just uploaded a video of deploying AutoLab 2.6 on the Ravello platform. The process is similar to deployment using your own hardware, but has a couple of differences. Make sure you have the Deployment guide to hand as the steps are in there too.
Many AutoLab deployments are inside your firewall on a trusted network, without direct inbound access from the Internet. When AutoLab is deployed on Ravello it is outside your firewall and accessible only over the Internet. Because of this very different security situation, the Ravello build of AutoLab does not publish the NAS VM and its Build share. Usually the various pieces of licensed software are copied onto this share at the start of the build. On Ravello the ESXi and vCenter installer ISOs are uploaded and attached to the NAS VM, and additional Build share files are downloaded from inside the DC after it is built.
You may wish to publish the Build share to upload additional files, then unpublish when you’re done uploading. I don’t recommend leaving the share accessible as there is no security on the share. In a later release we will have better mechanisms for uploads.
Here’s the simple process to make the Build share accessible:
1. In the Canvas select the NAS VM and click the Services tab
2. In the Supplied Services area click the Add button
3. Enter 445 in the Port field and click Save
4. Remember to click Update to apply the configuration change to the application.
As you may have noticed, my VMs were all powered off while I made this change, so I need to power on the NAS before I can upload to it. You can still make these changes while the VMs are running, however there may be a brief outage on the VM you change.
5. On the Summary tab of the NAS select the DNS: field, copy the entire text. This is the Internet accessible address of the NAS VM.
6. In Explorer use the copied text to make up the UNC path of the Build share: \\<copied text>\Build
You can now copy files onto the Build share, or edit files like the Automate.ini file to your requirements. Bear in mind that you are accessing a remote Samba share, so it will be very (very) slow. I plan to build FTP access into the next version for faster uploads.
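If you find Explorer awkward for this, you can also map the share to a drive letter from a command prompt. This is a minimal sketch; <copied text> is the DNS name you copied in step 5, and Z: is just an assumed free drive letter:

net use Z: \\<copied text>\Build
net use Z: /delete

The second command removes the mapping once you have finished copying files.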
7. Once you are finished uploading and editing you should disable access to the share. In the Ravello console select the NAS VM again and click the Services tab. Click the trash can next to the service you added to delete the service. Again click Save and Update to apply the change.
I’m delighted to release AutoLab version 2.6; the biggest new feature is support for vSphere 6.0. You can download the new deployment guide and packages from the AutoLab page.
With vSphere 6, VMware have vastly increased the amount of RAM required to install vCenter and the minimum RAM to run both vCenter and ESXi. This means that you can no longer build the core lab with less than 16GB of RAM. If you want to add a third host, VSAN or View then you will need even more RAM, so it is good that 32GB is more achievable in a low-cost home lab than it was a few years ago.
The other great new feature of AutoLab 2.6 is the ability to use public cloud to host AutoLab, so you may not even need to upgrade your lab to be able to play with AutoLab. I’ve been working with Ravello Systems, a start-up who have built a hypervisor that runs on top of AWS or Google Cloud. This is some very cool magic that I wrote about here. On the Ravello platform you can have a lab that you rent by the hour and only pay for while you’re using it. A three ESXi server AutoLab costs under $3 per hour to run. At that rate you could run the lab for three hours every evening for a month and spend under $300, less than the cost of buying a new machine. Another benefit of Ravello is that you can run multiple labs in parallel, something I often want to do as I’m working on different projects.
In this part of the Home Lab Build series, we’ll step through the creation of a Windows 2012 R2 Domain Controller. While one of the more basic installs, it carries some fairly important tasks within a lab environment. You can find the Visio file for the diagram here.
If you want a basic set up with some kind of identity source, name resolution and a time sync source all in one, building a Windows AD box is going to be on your short list. Also, if you plan on studying for a Microsoft or VMware certification, having a grasp on Active Directory is a must. Like it or loathe it, Windows, and in turn Active Directory, dominates many corporate networks today. So let’s get to it.
At a high level we want to accomplish a few things:
Install Windows 2012 R2 on a new VM
Set an Administrator password
Install VMware Tools
Set a static IP
Set a nameserver
Set a hostname
Disable the local firewall
Enable Remote Desktop Access
Add the Active Directory and DNS roles
Set a Domain Name for the new Domain
Set a Restore Mode password
First up, using the vSphere Desktop Client, create a VM with a Guest OS of Windows Server 2012 (64-bit). Change the NIC from E1000E to VMXNET3 and leave all other “Create New Virtual Machine” wizard settings at their defaults. Using Thin provisioning is a good idea in a lab environment, especially if you’re disk space constrained. If you have more than 2 physical cores on your ESXi hosts, change the vCPU count of your VM to 2, but don’t do this if your lab host only has 2 physical cores. Mount the Windows 2012 R2 ISO to this VM and then power it on.
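If you’d rather script this step than click through the wizard, PowerCLI can create the same VM. This is a minimal sketch, not the exact method from the video; the VM name, host name, ISO path and sizes are assumptions to adapt to your lab:

# create the VM shell with the Windows Server 2012 (64-bit) guest OS type and a thin disk
New-VM -Name DC01 -VMHost esxi01.lab.local -GuestId windows8Server64Guest -NumCpu 2 -MemoryGB 4 -DiskGB 40 -StorageFormat Thin
# swap the default NIC for VMXNET3
Get-VM DC01 | Get-NetworkAdapter | Set-NetworkAdapter -Type Vmxnet3 -Confirm:$false
# attach the Windows 2012 R2 ISO and power on
New-CDDrive -VM DC01 -IsoPath "[datastore1] ISO/Win2012R2.iso" -StartConnected
Start-VM -VM DC01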
Once the Windows installer is booted, select the appropriate language and click the “install now” button. Setup will give you a choice of OS version; in this case we want the standard GUI installation. On the following screen you’ll be asked whether you want to “upgrade” an installation or perform a “custom” install, which actually means “install Windows only”. Select “custom”, then use the whole disk without creating any partitions by just clicking “next”. The installation of the OS will now commence and will take a few minutes (depending on your hardware).
After the install is complete and the server reboots you will be asked to set an Administrator password. Once logged in to the server, VMware Tools is the first thing that should be installed. This will provide the drivers and utilities needed to get the most out of this VM. Specifically, without VMware Tools, the VMXNET3 network card we chose to use does not have default drivers in Windows. Reboot the server once the VMware Tools installation is complete.
The server can now have its network identity created. We’ll set a static IP, a subnet mask, a gateway and a name (DNS) server. We’re actually going to set the DNS server to the localhost IP because this server will have the DNS service running on it. Finally we’ll set a hostname, turn off the local firewall, and then reboot once again.
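The same settings can be applied from an elevated PowerShell prompt if you prefer. A minimal sketch; the interface alias, addresses and hostname below are assumptions for this lab, so substitute your own:

# static IP, mask and gateway on the VMXNET3 adapter (the alias may differ on your build)
New-NetIPAddress -InterfaceAlias "Ethernet0" -IPAddress 192.168.1.10 -PrefixLength 24 -DefaultGateway 192.168.1.1
# point DNS at localhost, since this server will run the DNS service
Set-DnsClientServerAddress -InterfaceAlias "Ethernet0" -ServerAddresses 127.0.0.1
# disable the local firewall on all profiles
Set-NetFirewallProfile -Profile Domain,Private,Public -Enabled False
# rename the server and reboot
Rename-Computer -NewName "DC01" -Restart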
After the server is on the network with the correct details, we will enable the ability to remotely manage it with a Remote Desktop Client and then add the “Active Directory Domain Services” and “DNS Server” roles. As we step through this wizard we will create a new forest with the domain name of “labguides.local” and configure a Directory Services Restore Mode password.
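For reference, the two roles and the new forest from the wizard map to a couple of PowerShell commands. This is a hedged equivalent rather than the exact steps from the video, prompting for the DSRM password instead of embedding it:

# install the AD DS and DNS roles with their management tools
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools
# promote the server to the first domain controller of a new forest
Install-ADDSForest -DomainName "labguides.local" -InstallDns -SafeModeAdministratorPassword (Read-Host "DSRM password" -AsSecureString)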
Finally, once the wizard is over and the server has rebooted, you can log in to the domain with the original Administrator password that was created upon first boot. If you’d like to set your domain up exactly the same as mine, you can grab the script export from my build here.
If you need more information, watch the video for a detailed guide on how to accomplish these tasks.
As part of documenting my home lab (re)build, today I’m going to build an ESXi 6 server and then bootstrap VSAN using a single host’s local disks. If you’re following along with my Home Lab Re-Build series, we’re building the first ESXi host in the diagram.
So why ESXi 6? Well, we want to host some VMs, we want to use just local storage, but we want it to be stable and have the ability to run nested ESXi VMs on top. Using VMware Virtual SAN on a single host provides no data redundancy, so you’ll want to keep that in mind if you’re deciding to go this route. It’s an unsupported configuration, but (in my opinion) really useful in a home lab environment.
First off we’ll wipe the local disks, then we’ll install ESXi 6, set a root password and set up the management network. Once it’s on the network we’ll install the vSphere Desktop Client and configure NTP and SSH. Finally we’ll configure VSAN to use the local disks of this single host. So, let’s get into it.
We’re going to mount the Gnome Partition Editor ISO to give us the ability to wipe the local disks of any existing partition information. This is required when configuring VSAN as it expects blank disks.
Once GParted is loaded we can select each disk and ensure no existing partitions exist. In the video below I initially forgot that we need to create a blank partition table before rebooting the host. Create a new partition table by selecting Device -> Create Partition Table, leave the table type as “msdos” and click Apply. You’ll need to repeat this task for each disk to be used by VSAN.
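The same blank partition table can also be written from the terminal included on the GParted Live ISO. A minimal sketch, assuming /dev/sda and /dev/sdb are the disks destined for VSAN (this destroys all data on those disks):

parted --script /dev/sda mklabel msdos
parted --script /dev/sdb mklabel msdos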
Once the disks have a blank partition table you can install ESXi 6 as normal. I won’t document that here as it’s a fairly basic process and included in the video below. Once ESXi is installed, set the management network and install the new version of the vSphere Desktop Client (browse to the IP of your ESXi host for details). We need SSH / CLI access to be able to bootstrap VSAN, so enable SSH in the vSphere Desktop Client by going to Configuration -> Security Profile -> Services Properties -> SSH -> Options -> Start.
I first heard about enabling VSAN on a single host from William Lam’s post. He’s using it to get vCenter up and running without the need for shared storage, so we’re using it slightly differently but the concept is the same. He’s also got a post on using USB disks in VSAN.
Once logged into the CLI via SSH or the DCUI, run the following commands to set a default VSAN policy to work with only 1 host:
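These are the policy commands from William Lam’s post referenced above; they set forceProvisioning so that objects can be created even though only one host is available:

esxcli vsan policy setdefault -c vdisk -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"
esxcli vsan policy setdefault -c vmnamespace -p "((\"hostFailuresToTolerate\" i1) (\"forceProvisioning\" i1))"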
Now that the default policy will work with a single host, build a new VSAN “cluster”:
esxcli vsan cluster new
Finally, add your SSD and magnetic/SSD capacity disks to the new cluster. You can get the SSD-DISK-ID and HDD-DISK-ID from either the UI (Configuration -> Storage -> Devices -> Right Click -> Copy identifier to clipboard) or from the CLI (esxcli storage core device list):
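# -s takes the cache SSD, -d the capacity disk; substitute the identifiers you copied
esxcli vsan storage add -s SSD-DISK-ID -d HDD-DISK-ID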
Today I want to introduce a series that I’ve been wanting to do for a while: a step-by-step, video based home lab build. This will be the first in a series where I’ll take you through this new home lab build-out so you can follow along if you like. Let’s start out with the gear.
I have 2 primary systems in my home lab that are identical and based on the E5-1620 Xeon chip from Intel. While they have plenty of power for what I need (they are quad-core, 3.6 GHz CPUs), they do use a considerable amount of power, being rated at 130 watts. The CPUs are coupled with 64GB RAM per system, which is probably the biggest limit in my lab today. The RAM is a little older and non-ECC. While it was OK when I first got these systems a couple of years ago, it needs replacing. I use the SuperMicro X9SRH-7TF motherboard, which supports up to 512GB if you get the right type. For me, this board provided 2 great things. First, lots of memory support. Second, onboard 10GbE ports. I hook both of these systems together with the cheapest 10GbE switch I could find, the Netgear XS708E. It’s not fancy, but it pushes packets over copper fast. The systems are housed in the super quiet and minimalist Fractal R4 case. Let’s move on to the layout of the lab I’m going to (re)build.
I’ve quickly drawn up how my home network is set up today and how I’m going to connect that through to my home lab, probably using an NSX or vShield edge. You can see I have 4 ESXi hosts: along with the 2 Supermicro based systems, I also have 2 small HP N36L Microservers. I don’t have a use for them at this stage, but I’m sure I can find something along the way. Storage is both local, in the form of VSAN on the ESXi systems, and network based, on a Synology NAS. In the lower portion of the diagram you can see the 4 VMs that I’m going to build first: an AD box, a database server, and then vCenter with an external PSC. As we go along I’ll add to this diagram anything I decide to include.
And if you have any comments or questions please reach out.
I presented a What’s New in vSphere 6 presentation on last week’s US vBrownBag podcast.
Below is a copy of the recording and deck for those interested.
Earlier this year I recorded a video for the VMUG 2015 Virtual Event. As is often the case with online webinar platforms, the quality of the recording wasn’t as good as we’ve come to expect with the prevalence of online video these days. So, I posted the video (embedded below) to my YouTube channel, just like I did last year. Since that recording I learned a few things thanks to a couple of my colleagues that I want to point out.
Firstly, I mentioned that VSAN is using the VMware-acquired Virsto filesystem. This is incorrect. While VSAN in vSphere 6 does have improved sparseness and caching capabilities, it’s not using Virsto. There is also mention of the 256 datastore limitation being removed by VVOLs; this is also incorrect.
Secondly, but much more exciting, VMware have announced the long-awaited Windows vCenter to Linux vCenter Virtual Appliance Fling. My buddy William Lam (of www.virtuallyghetto.com fame) is pretty excited about this one! I thought it particularly relevant for those watching this video as I had a number of questions at the Virtual Event around this very topic. So head over and grab the fling. I might just do another video of what it looks like to migrate the AutoLab vCenter to a vCSA!
It has been a while, but the day has arrived. AutoLab version 2.0 is available for download. This version doesn’t support a new vSphere release since VMware hasn’t shipped one. AutoLab 2.0 is more of a maintenance and usability release.
The biggest feature is adding support for Windows Server 2012 R2 as the platform for the domain controller and vCenter VMs. Naturally you should make sure the version of vSphere you deploy is supported on top of the version of Windows Server you use.
I have also removed the tagged VLANs, which makes it easier to run multiple AutoLab instances on one ESXi server or to extend one AutoLab across a couple of physical servers if you only have smaller machines.
I’ve also added the ability to customize the password for the administrator accounts, which helps lock down an AutoLab environment.
Go ahead and download the new build from the usual download page and get stuck in. If you haven’t used AutoLab before make sure to read the deployment guide.