Sunday, August 27, 2017

All of Microsoft runs in the cloud

"We are the 'first and best' customer of Microsoft products and services"

In an interview, Microsoft CIO Jim DuBois explains that Microsoft IT is the "first and best" customer of Microsoft products and services, supporting the IT infrastructure and applications that run the business of Microsoft.
Microsoft IT consumes Microsoft's own products and services, which are developed and managed by other Microsoft entities. Microsoft IT deploys and tests these products and services internally before they are released to customers.

"Our vision is that all of Microsoft runs in the cloud"

Aligning itself with the organization's vision that "all of Microsoft runs in the cloud", Microsoft IT defined its cloud adoption strategy in 2011 so that it could benefit from the cloud's efficiency, agility, and rapid deployment capabilities.

How did they do it?

Documenting one such instance in a business case study published in September 2015, Microsoft IT describes how it set out to migrate approximately 2,100 internal applications, running on over 40,000 separate operating system instances across 8 data centers worldwide, from on-premises servers to the public cloud [NB: Microsoft IT did away with its own data centers and moved to the same public cloud offering available to external customers, shifting from owning data centers to running things in the public cloud on a pay-as-you-go basis].

Key points of this business case study -

  1. Change from within - forming the Stratus* team - Realizing that supporting a cultural change would be the biggest challenge of the cloud adoption endeavor, Microsoft IT formed a core team, called the Stratus team, to analyze the available cloud capabilities and the applications and their platform requirements. This Stratus team would drive cloud adoption across business units. 
  2. Setting up the Cloud Adoption Factory - Considering each application's technical complexity, business impact, cloud service delivery models (XaaS), and appropriate migration strategies, the Stratus team devised a decision framework to guide the cloud migration effort - the Cloud Adoption Factory. This Cloud Adoption Factory guided Microsoft IT through the identified journey.
  3. Moving to the cloud using Cloud Adoption Factory






The Stratus team, using the Cloud Adoption Factory framework, planned for and executed the applications' cloud migration. Some of the highlights of the exercise are given below:
  • 30% of applications were retired, right-sized, or eliminated through consolidation into a single app or service line, which helped eliminate thousands of physical servers and virtual machines.
  • 15% of applications were replaced by a SaaS offering, e.g. Office 365, SharePoint Online, etc.
  • Less than 5% of applications were to remain on-premises.
  • 50% of applications were identified as either "first to move", "next to move", or "hard or costly to move."
  • For complex applications, the Stratus team used the following criteria to decide whether to lift-and-shift them to IaaS or to re-architect/redevelop them for PaaS or SaaS.
Skills for the new era

There is little doubt that cloud migration will disrupt the classic models of computing. While people still debate the validity of Moore's law, this transformation in computing is shifting IT professionals out of their usual silos and turning them into business process enablers.
All the pictures shown here are taken from Microsoft IT Business Case Study#4073

Term/Definitions: 

  1. *Stratus - A low-altitude cloud formation consisting of a horizontal layer of gray clouds.
  2. Essential characteristics of cloud computing: 1.) On-demand self-service, 2.) Broad network access, 3.) Resource pooling, 4.) Rapid elasticity, 5.) Measured service 
  3. Cloud computing service models: 1.) Software as a Service (SaaS), 2.) Platform as a Service (PaaS), 3.) Infrastructure as a Service (IaaS). 
  4. Cloud computing deployment models: 1.) Private Cloud, 2.) Community Cloud 3.) Public Cloud, 4.) Hybrid Cloud


Tuesday, August 22, 2017

Determining Oversized vs Undersized VMs


Reference : https://d0.awsstatic.com/whitepapers/Demystifying_vCPUs.pdf


How to create a Red Hat Linux instance in AWS custom VPC and connect to it

Objective:
Create an RHEL 7.4 instance in a custom AWS VPC and connect to it from your desktop using PuTTY.

Software setup required:
  1. Have a working AWS account to launch the RHEL instance
  2. Download PuTTY on your PC to connect to the instance
Challenges:

An RHEL instance launched in the default VPC can be pinged or SSH'ed directly from your desktop using PuTTY; however, connecting to an RHEL instance within your own custom VPC requires a few settings to be in place, which in turn requires some understanding of VPC concepts.

Steps:
  1. Create a custom VPC
    1. Create a custom VPC with a name (e.g. Demo_VPC) in a preferred region and give it a CIDR IP pool (e.g. 10.0.0.0/16)
    2. Create an Internet Gateway and attach it to this Demo_VPC.
    3. Create a new route table in this VPC and add an entry that redirects any outgoing traffic to the Internet Gateway (e.g. Destination = 0.0.0.0/0 / Target = Internet Gateway)
    4. Create a subnet within the VPC and associate it with this route table, thus making it a public subnet (this subnet now has a route to reach the Internet Gateway).
  2. Create a Red Hat Linux instance, specify its VPC as the Demo_VPC, and allow a public IP to be auto-assigned to it.
  3. Create a new security group, assign it to the Linux instance, and enable the ICMP/SSH protocols on this instance for the outside world (security groups are stateful, so there is no need to explicitly define outgoing traffic rules).
  4. Download or reuse the earlier downloaded SSH key (convert it to a .ppk key to use it with PuTTY) and connect to the Linux instance using PuTTY.
Note - Don't forget to shut down (stop) the test instance when not in use. A boto3 sketch covering the steps above is given below.
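
For readers who prefer scripting these steps, here is a minimal boto3 (Python) sketch of the same flow. The region, names (Demo_VPC, demo-rhel-sg), CIDR ranges, AMI ID, and key pair name are illustrative assumptions - substitute your own values.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region

# Step 1.1: custom VPC with a CIDR pool
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]
ec2.create_tags(Resources=[vpc_id], Tags=[{"Key": "Name", "Value": "Demo_VPC"}])

# Step 1.2: Internet Gateway attached to the VPC
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Step 1.3: route table with a default route to the Internet Gateway
rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(RouteTableId=rtb_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id)

# Step 1.4: public subnet associated with that route table
subnet_id = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")["Subnet"]["SubnetId"]
ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)

# Step 3: security group allowing ICMP and SSH from anywhere (tighten the CIDR in practice)
sg_id = ec2.create_security_group(
    GroupName="demo-rhel-sg", Description="ICMP and SSH", VpcId=vpc_id
)["GroupId"]
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {"IpProtocol": "icmp", "FromPort": -1, "ToPort": -1,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ],
)

# Step 2: launch the RHEL instance in the public subnet with an auto-assigned public IP
ec2.run_instances(
    ImageId="ami-xxxxxxxx",            # placeholder: an RHEL 7.4 AMI for your region
    InstanceType="t2.micro",
    KeyName="my-keypair",              # placeholder: an existing EC2 key pair
    MinCount=1, MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": subnet_id,
        "AssociatePublicIpAddress": True,
        "Groups": [sg_id],
    }],
)

Once the instance is running, note its public IP from the console and connect with PuTTY using the converted .ppk key, exactly as in step 4.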

Friday, August 18, 2017

Quick Byte - 10 Rules of Good Studying by Dr. Barbara Oakley

Well, this post is a little different from the overall theme of this blog; however, I find it relevant amid the shift in computing practices and the learning opportunities it presents.

So here are the 10 rules of good studying by Prof. Barbara Oakley, shared over the Internet as well as in her popular Coursera course on effective learning:


  1. Use recall
  2. Test yourself
  3. Chunk your problem
  4. Space your repetition
  5. Alternate different problem‐solving techniques during your practice
  6. Take breaks
  7. Use explanatory questioning and simple analogies
  8. Focus
  9. Eat your frogs first
  10. Make a mental contrast

My notes summarizing the course can be seen here - 

Tuesday, August 15, 2017

Enabling ICMP traffic between two EC2 instances under the same custom Security Group



So, can two EC2 instances within the same security group ping (ICMP) each other?

By default - No! As there is no rule defined for inbound traffic that allows ICMP.

To configure this security group so that the two EC2 instances can ping (ICMP) each other, you have to add an inbound ICMP rule whose traffic source is the same security group itself! See the self-referencing entry in the attached image.

Such a security group can be useful to allow and control ICMP traffic between all EC2 instances within the VPC, by ensuring that each EC2 instance has this group attached in addition to its other required security groups.
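
For reference, here is a minimal boto3 (Python) sketch of such a self-referencing rule. The security group ID and region below are placeholders, not values from the screenshot.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
SG_ID = "sg-0123456789abcdef0"                      # placeholder security group ID

# Allow all ICMP traffic whose source is the security group itself, so any
# instance carrying this group can ping any other instance carrying it.
ec2.authorize_security_group_ingress(
    GroupId=SG_ID,
    IpPermissions=[{
        "IpProtocol": "icmp",
        "FromPort": -1,
        "ToPort": -1,
        "UserIdGroupPairs": [{"GroupId": SG_ID}],
    }],
)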

Friday, August 11, 2017

Cloud Computing - The NIST Definitions

Cloud Computing - Excerpts from the NIST document


Cloud Computing:

Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

Essential Characteristics:

On-demand self-service, broad network access, resource pooling, rapid elasticity, and measured service.

Service Models:

Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS).

Deployment Models: 

Private cloud, community cloud, public cloud, and hybrid cloud.

Tech Byte - AWS ELB Notes from AWS Standard Docs (1/10)



A load balancer accepts incoming traffic from clients and routes requests to its registered, healthy EC2 instances in one or more Availability Zones (it also monitors the health of registered instances)
  • Stops routing traffic to unhealthy instances and restarts routing to them once they are found to be healthy again.
  • With a Classic Load Balancer, you register instances with the load balancer. With an Application Load Balancer, you register the instances as targets in a target group and route traffic to a target group (see the sketch after this list).
  • Load Balancer domain names are part of amazonaws.com domain.
  • Each enabled Availability Zone has a load balancer node.
  • The client resolves the load balancer's domain name and is given one or more IP addresses of the load balancer's nodes.
  • As traffic to your application changes over time, Elastic Load Balancing scales your load balancer and updates the DNS entry. Note that the DNS entry also specifies the time-to-live (TTL) as 60 seconds, which ensures that the IP addresses can be remapped quickly in response to changing traffic.
  • The client determines which IP address to use to send requests to the load balancer. 
  • The load balancer node that receives the request selects a healthy registered instance and sends the request to the instance using its private IP address.
  • Classic Load Balancer - the load balancer node that receives the request selects a registered instance using the round robin routing algorithm for TCP listeners and the least outstanding requests routing algorithm for HTTP and HTTPS listeners.
  • Application Load Balancer - the load balancer node that receives the request evaluates the listener rules in priority order to determine which rule to apply, and then selects a target from the target group for the rule action using the round robin routing algorithm. Routing is performed independently for each target group, even when a target is registered with multiple target groups.
  • For HTTP connections - Classic Load Balancers use pre-open connections but Application Load Balancers do not.
  • The nodes of an Internet-facing load balancer have public IP addresses. The DNS name of an Internet-facing load balancer is publicly resolvable to the public IP addresses of the nodes. Therefore, Internet-facing load balancers can route requests from clients over the Internet.
  • The nodes of an internal load balancer have only private IP addresses. The DNS name of an internal load balancer is publicly resolvable to the private IP addresses of the nodes. Therefore, internal load balancers can only route requests from clients with access to the VPC for the load balancer.
  • The instances do not need public IP addresses to receive requests from an internal or an Internet-facing load balancer.
  • If your application has multiple tiers, for example web servers that must be connected to the Internet and database servers that are only connected to the web servers, you can design an architecture that uses both internal and Internet-facing load balancers e.g. Internet-facing load balancer to web servers and an internal load balancer for the database servers.
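
To make the registration difference above concrete, here is a small boto3 (Python) sketch; the load balancer name, target group ARN, instance ID, and region are illustrative placeholders.

import boto3

# Classic Load Balancer: instances are registered with the load balancer itself.
elb = boto3.client("elb", region_name="us-east-1")
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-classic-lb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)

# Application Load Balancer: instances are registered as targets in a target
# group, and listener rules route traffic to that target group.
elbv2 = boto3.client("elbv2", region_name="us-east-1")
elbv2.register_targets(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-targets/0123456789abcdef",
    Targets=[{"Id": "i-0123456789abcdef0", "Port": 80}],
)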

     





Quick Byte - Cross-Zone Load Balancing & Connection Draining options in Classic Load Balancer (AWS)



  • Out of all available EC2 instances, you mark those that will sit behind the Classic Load Balancer.
  • It is a good practice to pick EC2 instances from all AZs to attain maximum fault tolerance.
  • Cross-Zone Load Balancing (disabled by default for a Classic Load Balancer) - 
    • If enabled, traffic is distributed evenly across all registered instances (no matter which AZ they are in), i.e. if 2 AZs have N1 and N2 instances respectively, the entire traffic is distributed evenly amongst all N1+N2 instances.
    • If disabled, traffic is distributed evenly across all enabled Availability Zones, i.e. if 2 AZs with N1 and N2 instances respectively are enabled, both AZs get the same amount of traffic, served by N1 and N2 instances respectively.

  • Connection Draining - When enabled, the load balancer stops sending new requests to a de-registering or unhealthy instance but allows in-flight requests to complete for a specified timeout before the instance is taken out of service (see the sketch below). 
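
Both options can also be toggled programmatically. Below is a minimal boto3 (Python) sketch using modify_load_balancer_attributes; the load balancer name, region, and timeout value are illustrative assumptions.

import boto3

elb = boto3.client("elb", region_name="us-east-1")  # assumed region

elb.modify_load_balancer_attributes(
    LoadBalancerName="my-classic-lb",   # placeholder name
    LoadBalancerAttributes={
        # Spread traffic evenly across all registered instances, regardless of AZ
        "CrossZoneLoadBalancing": {"Enabled": True},
        # Let in-flight requests finish for up to 300 seconds before deregistration
        "ConnectionDraining": {"Enabled": True, "Timeout": 300},
    },
)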

Thursday, August 3, 2017

Tech Byte - APIs




"Amazon today simplified smart locks for Alexa, by adding a Door Lock API to its Alexa Skills Kit. This API, or application program interface, is designed to improve voice commands for lock partners Schlage, Yale, Kwikset, August, Vivint and Z-Wave."


"Ford opens its SYNC 3 platform to developers via API to let mobile apps interface with its cars."



What are APIs?



The term "API" stands for Application Programming Interface. APIs provide an interface for two software applications to communicate over the Internet.

From a business point of view, the APIs are essentially business capabilities published over the Internet for business partners to use.

For example, Twitter, Facebook, Google and many other software giants let programmers communicate with their applications to exploit the offered functionality through published APIs.
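
As a generic illustration, here is a minimal Python sketch of calling a web API over HTTP. The endpoint, parameters, and API key are hypothetical placeholders, not any real provider's API; the requests library is assumed to be installed.

import requests

response = requests.get(
    "https://api.example.com/v1/search",               # hypothetical REST endpoint
    params={"q": "cloud computing", "limit": 10},      # query parameters
    headers={"Authorization": "Bearer YOUR_API_KEY"},  # placeholder credential
    timeout=10,
)
response.raise_for_status()   # fail loudly on HTTP errors
for item in response.json().get("results", []):
    print(item)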




What are some of the benefits these businesses enjoy by offering their data & services through APIs?



Some examples:

  • Twitter, through its APIs, offers its data and value-added services to anyone interested in that data, e.g. survey companies, media houses, publicists, etc., and has free as well as chargeable plans for them. 
  • IBM offers its IBM Watson APIs to allow developers to write bots with artificial intelligence. 
  • Twilio offers its APIs for developers to write telephony applications and charges users every time a call is made using them. 
  • Netflix offers its APIs so that other applications can use them to distribute Netflix content further, thereby increasing Netflix's market presence. 

API Strategy - does such a thing exist?


APIs open new business avenues for existing business assets. They also enable developers to improve existing content distribution systems for profitability; e.g. Netflix is often called an API company for having an effective API strategy for its content distribution.

APIs promote innovation, collaboration, better user experience, and thus more business opportunities for the digital assets of business; so having an effective API strategy in place suits business interests.

So which are the most used APIs out there?

According to a report by DevPost, the following are the most used APIs on the Internet -

Picture courtesy TechCrunch.com


