In 2006, Amitabh Srivastava was a corporate vice president in Microsoft’s Windows division, working on redefining the organization’s engineering processes. Ray Ozzie had recently been named the company’s chief software architect, and the two had their first meeting late one afternoon in Building 34 on Microsoft’s campus.
Srivastava has always had a rule: If he’s in town, he never misses dinner at home with his family – or if he needs to, he tells his family he’ll be late. That evening, Srivastava lost track of time. He missed dinner and never called home to say he’d be late. The one-hour meeting began at 4 p.m. and went until 8 p.m.
“By the end of that meeting, I was convinced software wouldn’t be shipped as it had been. My personal realization was, ‘I’m working on the wrong thing,’” Srivastava recalled.
At the time, Srivastava’s office was across from Dave Cutler’s, and the two often got to the office early. Soon after the meeting with Ozzie, Srivastava recalls telling Cutler, “I don’t know what needs to be done, but I know there’s something changing dramatically and we’ve got to rethink our approach.”
After a few more weeks of discussions, Srivastava knew Microsoft needed to build an operating system for the cloud, and he identified his first task: Recruit Cutler.
“So I go to Dave and he says ‘I think I’m ready to retire,’” Srivastava said. “I said, ‘Dave, not quite. This is different. This could change the world.’”
Cutler didn’t say yes to Srivastava, but he also didn’t say no. “I had worked with Dave long enough to know that when he didn’t say no right away, that was a good sign.”
Srivastava developed a plan for Cutler and him to visit every team at Microsoft running a cloud service, from MSN and Hotmail to Xbox Live and the company’s cloud data centers.
The due diligence process took a few months as Cutler and Srivastava listened to the pain points and band-aid approaches teams had taken to keep their cloud services running. After the tour, Cutler and Srivastava never had a formal discussion about him joining the team. Cutler was on board.
Two years later, on Oct. 27, 2008, Ozzie stood on stage at the company’s Professional Developer’s Conference in Los Angeles and announced a technology preview of Windows Azure (now Microsoft Azure).
Amitabh Srivastava on stage at PDC 2008, wearing the “Project Red Dog” sneakers that Cutler designed.
Windows Azure Pack’s lifecycle was updated.
Remember that Microsoft Azure Stack is not the new version of Windows Azure Pack (WAP). The Azure Stack release in 2017 won’t kill WAP – the two will co-exist for a long time. Azure Stack and WAP serve different purposes: WAP is a great solution for building IaaS, while Azure Stack is a great platform for running Azure services in your own datacenter. They are totally different inside and very different from the outside – WAP looks like a simplified version of the old Azure Portal, while Azure Stack looks exactly like the new Azure Portal. WAP is built on top of Windows Server and System Center and can run on a wide variety of hardware supported by Windows Server 2012 R2/2016. Azure Stack’s architecture is inspired by Azure, uses the Azure Resource Manager model, and can run only on specific supported hardware.
It’s great that Microsoft supports both approaches. Service providers can use WAP to build a great IaaS solution using the hardware they like or already have, and they can use Azure Stack to build a “little Azure” in their datacenters.
Windows Azure Pack (WAP) is a collection of Windows Azure technologies, available to Microsoft customers at no additional cost for installation into your datacenter. It runs on top of Windows Server 2012 R2 and System Center 2012 R2 and, through the use of Windows Azure technologies, enables you to offer a rich, self-service, multi-tenant cloud that is consistent with the public Windows Azure experience.
Services that are available out of the box with WAP include management portals for tenants and administrators, a service management API, Web Site Clouds, Virtual Machine Clouds, Service Bus Clouds, SQL Server and MySQL database provisioning, and automation.
A region is a geographical area on the planet containing potentially multiple datacenters in close proximity, networked together. Those datacenters are sometimes called availability zones. An availability zone has its own independent power and networking and is set up to be an isolation boundary: if one availability zone goes down, the others continue working. Availability zones are typically connected to each other through very fast, private fiber-optic networks.
Within an availability zone, VMs are deployed on physical machines that are organized in racks, and each rack has its own router. A single physical machine may host multiple virtual machines, and each virtual machine may run multiple containers.
When an incoming request reaches a service endpoint, it is usually first delivered to a load balancer, which routes the traffic to an instance of the service. The goal is to run the code on VMs that are not close to each other – ideally not sharing a rack, power supply, or router – to reduce the chance of a single point of failure. A set of hardware that shares such a single point of failure is called a fault domain.
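As a rough illustration (this is a toy sketch, not Azure’s actual placement algorithm, and all names are hypothetical), spreading service instances across fault domains so that no single rack failure takes down every instance can look like this:

```python
def place_instances(instance_count, fault_domains):
    """Assign each instance to a fault domain round-robin.

    Toy model only: real placement also weighs capacity, update
    domains, and affinity rules.
    """
    placement = {fd: [] for fd in fault_domains}
    for i in range(instance_count):
        fd = fault_domains[i % len(fault_domains)]
        placement[fd].append(f"instance-{i}")
    return placement

# Five instances spread across three fault domains (e.g., three racks).
placement = place_instances(5, ["FD0", "FD1", "FD2"])
for fd, instances in placement.items():
    print(fd, instances)

# If FD0's rack loses power, the instances in FD1 and FD2 keep serving.
surviving = [i for fd, xs in placement.items() if fd != "FD0" for i in xs]
print(surviving)
```

Because no fault domain holds all the instances, losing any one of them still leaves the service running.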
Availability Zones is a high-availability offering that protects your applications and data from datacenter failures. Availability Zones are unique physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. To ensure resiliency, there’s a minimum of three separate zones in all enabled regions. The physical separation of Availability Zones within a region protects applications and data from datacenter failures. Zone-redundant services replicate your applications and data across Availability Zones to protect them from single points of failure.
Azure services that support Availability Zones fall into two categories: zonal services, where you pin a resource to a specific zone (for example, virtual machines or managed disks), and zone-redundant services, where the platform replicates automatically across zones (for example, zone-redundant storage).
To achieve comprehensive business continuity on Azure, build your application architecture using the combination of Availability Zones with Azure region pairs. You can synchronously replicate your applications and data using Availability Zones within an Azure region for high-availability and asynchronously replicate across Azure regions for disaster recovery protection.
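The two replication modes above can be contrasted in a toy model (purely illustrative – not an Azure API; the region and replica names are made up). A zone-redundant write commits to every zone before returning, while the cross-region copy is shipped asynchronously and can lag behind:

```python
class Replica:
    """A minimal stand-in for one copy of the data."""
    def __init__(self, name):
        self.name = name
        self.data = []

primary = Replica("eastus2-zone1")
zone_peer = Replica("eastus2-zone2")   # same region, different zone
region_pair = Replica("centralus")     # paired region, far away

pending = []  # writes not yet shipped cross-region

def write(value):
    # Synchronous: commit to every zone replica before returning.
    primary.data.append(value)
    zone_peer.data.append(value)
    # Asynchronous: queue the cross-region copy for later shipment.
    pending.append(value)

def ship_cross_region():
    region_pair.data.extend(pending)
    pending.clear()

write("order-1")
write("order-2")
print(zone_peer.data)    # both zones already consistent
print(region_pair.data)  # paired region still lags (empty)
ship_cross_region()
print(region_pair.data)  # now caught up
```

The sketch shows the trade-off the paragraph describes: synchronous replication within a region gives zero data loss on a zone failure, while asynchronous replication to the paired region tolerates the higher latency of long-distance links at the cost of a small lag window.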