
Wednesday, January 9, 2019

Elastic Block Storage: Types and Snapshots in AWS

Elastic Block Storage is one of the core storage services of AWS. You can think of Elastic Block Storage as the hard disk of your laptop. Below are some basic points to remember about Elastic Block Storage:

1. Elastic Block Storage works like the hard disk of your laptop: unlike S3, it can only be used by attaching (mounting) it to an EC2 instance.

2. EBS is a persistent storage system, whereas instance storage is non-persistent (ephemeral).

3. EBS volumes range from 1 GiB up to 16 TiB, depending on volume type (magnetic volumes are limited to 1 TiB). If you want more storage, attach multiple EBS volumes to your EC2 instance.

4. Relationship between EBS and EC2: Multiple EBS volumes can be attached to a single EC2 instance, but one EBS volume cannot be attached to multiple EC2 instances simultaneously. On the other hand, one EFS can be attached to multiple EC2 instances.

5. Root EBS: There is only one root EBS volume per instance, and “Delete on Termination” is checked by default for it. The root volume cannot be encrypted directly at launch unless you launch from an encrypted AMI or snapshot.

6. Available only in a single AZ: An EBS volume exists in exactly one Availability Zone; the EC2 instance and the EBS volume must be in the same AZ.

7. Backup to S3: A backup of an EBS volume is called a snapshot. Snapshots are incremental and point-in-time, and they are stored in S3.

8. Since an EBS volume is AZ-specific, to make its data available in another AZ, take a snapshot of it (which is saved to S3) and re-create an EBS volume from that snapshot in the target AZ.

9. Snapshot Sharing: Snapshots can be copied to other regions and shared across AWS accounts. To share a snapshot publicly, it must be unencrypted; snapshots encrypted with the default AWS-managed key cannot be shared at all, while snapshots encrypted with a custom KMS key can be shared with specific accounts along with the key.

10. You can also increase the size of an EBS volume while restoring it from a snapshot.

11. To take a consistent backup of the root EBS volume (where the OS runs), you should stop the instance first. Other EBS volumes can be snapshotted while in use, but it is recommended to pause I/O (unmount the volume or stop the instance) to avoid impacting performance and data integrity.

12. RAID 0, RAID 1, and RAID 10 (a combination of both) are preferred. RAID 5 is discouraged because parity writes consume a significant share of the volume’s IOPS.

13. EBS is automatically replicated within the same AZ.

14. EBS Volume Types

  • General Purpose SSD (gp2) volumes deliver a consistent baseline of 3 IOPS/GiB and can burst to 3,000 IOPS. 
  • Provisioned IOPS SSD (io1) volumes can deliver up to 64,000 IOPS and are best used with EBS-optimized instances. 
  • Throughput Optimized HDD (st1) – for frequently accessed, throughput-intensive data
  • Cold HDD (sc1) – for infrequently accessed data
  • Magnetic volumes (previously called standard volumes) deliver roughly 100 IOPS on average, can burst to hundreds of IOPS, and are the lowest-cost option.

For a detailed comparison of the above EBS volume types, you can go through the official documentation.
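The gp2 baseline figure above can be turned into a quick calculation. This is a hedged sketch: the 3 IOPS/GiB ratio and the 3,000 IOPS burst ceiling come from the list above, while the 100 IOPS floor and 16,000 IOPS cap are assumptions taken from the gp2 documentation and worth double-checking:

```python
def gp2_baseline_iops(size_gib):
    """Baseline IOPS for a gp2 volume: 3 IOPS per GiB,
    floored at 100 IOPS and capped at 16,000 IOPS (assumed limits)."""
    return max(100, min(3 * size_gib, 16_000))

def gp2_can_burst(size_gib):
    # Volumes whose baseline is below 3,000 IOPS can burst up to 3,000.
    return gp2_baseline_iops(size_gib) < 3_000

print(gp2_baseline_iops(20))    # tiny volume: floored at 100
print(gp2_baseline_iops(500))   # 3 IOPS/GiB -> 1500
print(gp2_can_burst(500))       # True: baseline below 3,000
print(gp2_can_burst(2000))      # False: baseline already 6,000
```

So a 500 GiB gp2 volume sustains 1,500 IOPS but can still burst to 3,000, while a 2,000 GiB volume already sits above the burst ceiling.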

Tuesday, January 8, 2019

Difference between Dedicated Host and Dedicated Instance in AWS (Dedicated Host vs Dedicated Instance)

Following are the basic differences between Dedicated Hosts and Dedicated Instances:

Dedicated Instances run on hardware dedicated to a single customer. However, when you stop and restart a Dedicated Instance, it may come up on a different dedicated host, so the physical parameters may not remain the same.

A Dedicated Host is a dedicated physical server on which you can launch your instances. All the physical parameters remain the same when the instances on this host are restarted. You have visibility into how your Dedicated Hosts are utilized, and you can determine how many sockets and cores are installed on the server. These features allow you to minimize licensing costs in a bring-your-own-license (BYOL) scenario and help you address corporate compliance and regulatory requirements. BYOL licenses are often tied to the physical host, so Dedicated Hosts reduce costs by allowing you to use your existing server-bound software licenses.

To summarize, following are the advantages of dedicated hosts:
  1. Save Money on Licensing Costs (BYOL)
  2. Visibility of Sockets and Physical Cores
  3. Help Meet Compliance and Regulatory Requirements
  4. Affinity
  5. Instance Placement Controls

Difference between Fixed Performance and Burstable Performance in EC2 Instances (Fixed Performance vs Burstable Performance)

Following are the basic differences between Fixed Performance and Burstable Performance:

1. Fixed Performance Instances provide consistent CPU performance, whereas Burstable Performance Instances provide a baseline CPU performance under normal workload. When the workload increases, Burstable Performance Instances have the ability to burst, i.e. temporarily increase CPU performance.

2. CPU credits regulate the amount of CPU burst available to an instance. You spend CPU credits to raise CPU performance during a burst period.

Suppose you operate an instance at 100% CPU performance for 5 minutes: you spend 5 (i.e. 5 × 1.0) CPU credits. Similarly, if you run an instance at 50% CPU performance for 5 minutes, you spend 2.5 (i.e. 5 × 0.5) CPU credits.

3. When you launch an instance, you get an initial CPU credit. Each hour, the instance automatically earns a certain number of CPU credits (the amount depends on the instance type). If you don’t burst CPU performance, the earned credits are added to the instance’s CPU Credit Balance.

4. Earned CPU credits carry forward for up to 24 hours, after which they expire. If you run out of CPU credits (i.e. the CPU Credit Balance hits 0), your instance runs at baseline performance.

5. Baseline performance is just 30% per vCPU for a t2.large.

6. Not all EC2 instances support Burstable Performance.

7. Burstable instances are mainly used for microservices, low-latency interactive applications, small and medium databases, virtual desktops, development, build and staging environments, code repositories, and product prototypes.
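The credit arithmetic in point 2 boils down to one line: a CPU credit equals one vCPU running at 100% utilization for one minute. A minimal sketch:

```python
def cpu_credits_spent(minutes, utilization):
    """One CPU credit = one vCPU at 100% utilization for one minute,
    so credits spent scale linearly with both time and utilization."""
    return minutes * utilization

print(cpu_credits_spent(5, 1.0))  # 5 minutes at 100% -> 5.0 credits
print(cpu_credits_spent(5, 0.5))  # 5 minutes at 50%  -> 2.5 credits
```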

AWS EC2 Instance Classes and Types

There are different types of EC2 instances which you can choose for your application. There are varying combinations of CPU, memory, storage (EBS or Instance storage), and networking capacity. Broadly, we can classify EC2 instances as:

1. General Purpose
2. Compute Optimized
3. Memory Optimized
4. Storage Optimized
5. Accelerated Computing / GPU Optimized

1. General Purpose

Classes: T2 (T stands for Tiny) - Burstable Performance, M5 (M stands for Medium) - Fixed Performance

Suited for: Development environments, Small and mid-sized databases, Low-traffic web applications, Build servers, Code repositories, Testing and staging environments, Early product experiments etc.

Instance sizes: nano, micro, small, medium, large, xlarge, 2xlarge, 4xlarge, 8xlarge, 10xlarge

CPU Credits: T2 instances accrue CPU credits when they are idle, and use CPU credits when they are active. T2 instances are a good choice for workloads that don’t use the full CPU often or consistently, but occasionally need to burst (e.g. web servers, developer environments and small databases).

2. Compute Optimized

Classes: C5 (C stands for Compute)

3. Memory Optimized

Classes: X1, R4 (R stands for RAM)

Offers a low price per GiB of RAM compared to other instance families.

4. Storage Optimized

Classes: I3 (I stands for I/O), D2 (D stands for Dense)

Suited for: Data warehousing and other workloads that need high sequential read and write access to large data sets

5. Accelerated Computing / GPU Optimized

Classes: P3, G3 (G stands for Graphics), F1

Uses NVIDIA GPU

Suited for: Low Latency, High Throughput, High IOPS, Machine Learning, Deep Learning, Big Data Analytics, In-memory Analytics, Reporting, Graphics, Gaming, Video Encoding, High Performance Databases, Batch Processing, Streaming, Speech Recognition, Ad Serving, 3D Visualization, Distributed Analytics, Computational Finance, Financial Analytics, Computational Fluid Dynamics, Genomics Research, Drug Discovery, Scientific Modelling, Molecular Modeling, Media Transcoding etc.

Monday, January 7, 2019

Difference between EC2 and Lightsail in AWS (EC2 vs Lightsail)

EC2 and Lightsail are compute services offered by AWS. Broadly, you can say Lightsail is the lighter version of EC2 where you don't need to manually configure underlying infrastructure like EBS, EFS, VPC, Subnets, Security Groups, ACLs etc. Following are the basic points about Lightsail to consider:

1. With EC2, you have to manually configure storage and networking. If you don't want the hassle of managing underlying infrastructure such as storage and networking, you can go with Lightsail.

2. Lightsail is a VPS (Virtual Private Server) service in which the above mentioned infrastructure is in-built.

3. Lightsail provides: 
  • VPS (Virtual Private Server)
  • Storage (like EBS, EFS for EC2)
  • Networking (like VPC, Subnet, SG, NACL for EC2)
  • Load Balancing (like ELB)
  • API (Application Program Interface)
  • Integration with other AWS services via VPC peering
4. Backup: You can easily create a snapshot of your Lightsail VPS.

5. In short, Lightsail is the simpler and lighter version of EC2 with limited functionalities. The target market for Lightsail appears to be those who just want a simple VPS without going into the complexities of EC2. Later on, you can easily switch from Lightsail VPS to EC2.

Saturday, January 5, 2019

AWS IAM: Identity and Access Management in AWS

Identity and Access Management (IAM) is a very useful service offering from AWS. IAM is used to authenticate and authorize users and AWS services to use AWS resources. Below are the basic points to note about AWS IAM:

1. Authentication and Authorization: Control access to AWS resources for your users. For example, a developer should only be able to access compute and storage resources, a DBA should only be able to access database resources, and so on.

2. Components: Users, Groups, Policies, Roles

3. Root User/Account: The user/account with which you created your AWS account. It has all the access. It is advisable not to use the root account for day-to-day work. Instead, create an Admin account with the needed access and keep the root account for emergency purposes.

4. User Access Type: You can provide two types of access to user: Programmatic Access (Access Key ID and Secret Access Key), Console Access (Password).

5. Access Key ID and Secret Access Key: Note down the Secret Access Key when it is created; it is shown only once, and if lost you need to regenerate it.

6. User Login URL: https://your_account_name.signin.aws.amazon.com/console

7. Groups and Policies: Instead of assigning policies to individual users, it is recommended to create a group and assign the policies to that group. Now keep adding/removing users to that group. For example, if you want to create 5 developer accounts and want to assign same policies to them, instead of assigning those policies to individual accounts 5 times, better create a group say “Developer_Group”, assign those policies to this group and add all those 5 users to this group. Later, you can add/remove users to/from this group.

8. Roles: Set of permissions. Assigned to AWS services. For example: Create a role of type “Amazon EC2”, assign permission “AmazonS3FullAccess”. Now assign this role to any EC2 instance (Actions -> Instance Settings -> Attach/Replace IAM Role). Now any application deployed on this EC2 instance will be able to communicate with S3. 

Example: Suppose you have created a web application which uploads a file to S3. If you run this web application on the EC2 instance which has above role assigned, your file will be uploaded to S3 successfully. But if you run the same web application to any other EC2 instance which does not have above roles assigned to it, you will get access denied error. 

Now there are two ways to run this web application successfully on this server. Either assign the above role to this EC2 instance or mention Access Key ID and Secret Access Key (of the user who has S3 access) in your web application code.

9. User vs Role: User and Roles are similar components. We need to attach permission to them. “User” is created for people while “Role” is created for AWS resources.

10. Policies: Policies are permissions. You can also create your own policies using the Policy Generator or by writing JSON. If both an “Allow” and a “Deny” policy apply to a user, “Deny” takes priority.

11. MFA (Multi-Factor Authentication): An extra layer of security, similar to an OTP. You need to set up a Virtual MFA Device. To do this, click to activate a virtual MFA device and a QR code will be displayed; scan this code with an authenticator app such as Google Authenticator; two consecutive authentication codes will be generated; enter them in the console and that’s it. From then on, when a user with MFA logs in to the console, he/she has to provide an MFA code as well.

12. Global Service: IAM is not region specific, it is global service.

13. Eventual Consistency: Changes to settings such as policies, roles, and permissions are eventually consistent.

14. Free to use
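The “Deny wins over Allow” rule from point 10 can be sketched as a toy evaluator. The statement format below is a hypothetical simplification of real IAM policy JSON (real evaluation also considers resources, conditions, wildcards, and more):

```python
def is_allowed(action, statements):
    """Simplified IAM logic: an explicit Deny always wins, an explicit
    Allow is required, and everything else is implicitly denied."""
    effects = [s["Effect"] for s in statements if action in s["Action"]]
    if "Deny" in effects:
        return False           # explicit deny overrides any allow
    return "Allow" in effects  # default (implicit) deny otherwise

policy = [
    {"Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject"]},
    {"Effect": "Deny",  "Action": ["s3:PutObject"]},
]
print(is_allowed("s3:GetObject", policy))     # True
print(is_allowed("s3:PutObject", policy))     # False: Deny wins
print(is_allowed("s3:DeleteObject", policy))  # False: implicit deny
```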

Friday, January 4, 2019

Elastic Beanstalk: PaaS offering from Amazon

Elastic Beanstalk is a simple way to deploy your application on AWS without the hassle of managing the infrastructure yourself. Below are some basic points to remember about Elastic Beanstalk:

1. PaaS offering from Amazon.

2. Platforms supported: PHP, Java, Python, Ruby, Node.js, .NET, Go and Docker

3. Application Deployment: Just upload your application code (packaged code and libraries) and the service automatically handles all the details such as resource provisioning, load balancing, auto-scaling, and monitoring.

4. Resources used by Elastic Beanstalk: Elastic Beanstalk uses core AWS services such as Amazon EC2, Amazon Elastic Container Service (Amazon ECS), Auto Scaling, and Elastic Load Balancing to support your applications.

5. Monitoring and Logging: You can easily monitor and manage the health of your applications. Logs are created and saved in S3.

6. Application Versioning: You can maintain multiple versions of your application. Application versions are saved in S3.

7. Default environment URL: Once you deploy your application, a default URL will be created (for example: your_application_name.elasticbeanstalk.com). If an ELB is not used, the URL will point directly to your EC2 instance.

8. Free of cost: You will be charged only for the resources it launches.

Thursday, January 3, 2019

Difference between Internet Gateway and NAT Gateway (Internet Gateway vs NAT Gateway)

There is a small but often confusing difference between an Internet Gateway and a NAT Gateway. Below are some points to illustrate the difference between the two:

1. Internet Gateway is used to connect a VPC to the internet while a NAT Gateway is used to connect the private subnet to the internet (through public subnet and Internet Gateway).

2. A NAT Gateway cannot work without an Internet Gateway. Your VPC must have an Internet Gateway; otherwise the NAT Gateway has nowhere to direct internet traffic. A NAT Gateway should always be launched in a public subnet that has a route out to the Internet Gateway. Without that route, the NAT Gateway cannot connect the instances in the private subnet to the internet.

3. NAT Gateway and NAT Instances only support IPv4 addresses while Internet Gateway supports both.

4. A NAT Gateway supports only outbound-initiated communication: connections from the private subnet to the internet (return traffic for those connections is allowed, but connections initiated from the internet are not). An Internet Gateway supports both inbound and outbound traffic.

I have written a detailed article on NAT Gateways and NAT Instances below. Hope this helps.

Difference between NAT Instances and NAT Gateways (NAT Instances vs NAT Gateways)

NAT stands for Network Address Translation.

We launch many instances in private subnets in a VPC for security reasons. These instances cannot communicate with the internet directly. But there are many scenarios where they need to connect to the internet, such as patch updates, software installation, or connecting to a Git repository. In these scenarios we need to give these instances a path to the internet.

NAT Instances and NAT Gateways come in handy in these cases. They allow only outbound traffic to the internet and block unsolicited inbound traffic from the internet. This means our instances in the private subnet can initiate connections to the internet, but nobody on the internet can initiate a connection to our instances in the private subnet.

Below are some basic points and differences between NAT Instances and NAT Gateways:

NAT Instances

1. NAT instance is like an EC2 instance and is also launched like an EC2 instance from AWS console.

2. It should always be launched in the public subnet.

3. Once launched, you need to manually disable the source/destination check (this option is available under Actions >> Networking). This is because the NAT instance sends and receives traffic on behalf of other instances, so the source and/or destination of a packet is often not the instance itself.

4. You need to manage this instance yourself like you manage your EC2 instances.

5. NAT instance should be assigned an Elastic IP (but you can also use public IP).

NAT Gateway

1. Managed by AWS (you need to manage NAT instances yourself).

2. Always deploy your NAT Gateway in public subnet.

3. You must allocate Elastic IP to your NAT Gateway (you can allocate public IP to NAT instances).

4. In the main Route Table of your VPC (or the Route Table connected to private subnet), add a route out to this NAT Gateway. Set Destination as 0.0.0.0/0 and set target as NAT Gateway.

5. You cannot assign security groups to NAT Gateway (you can assign security groups to NAT instances).

6. You need one NAT Gateway in each AZ, since a NAT Gateway operates in a single AZ only.

Note: Both NAT Instances and NAT Gateway only support IPv4 traffic (not IPv6).
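The private-subnet route table described in point 4 of the NAT Gateway list can be modeled with Python's ipaddress module. The CIDRs below are hypothetical; longest-prefix matching is how a route table picks between the local VPC route and the 0.0.0.0/0 default route:

```python
import ipaddress

# Hypothetical route table for a private subnet: traffic to the VPC CIDR
# stays local, everything else (0.0.0.0/0) is sent to the NAT Gateway.
ROUTES = [
    ("10.0.0.0/16", "local"),
    ("0.0.0.0/0", "nat-gateway"),
]

def next_hop(dest_ip):
    """Return the target of the most specific (longest-prefix) matching route."""
    ip = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(cidr), target)
               for cidr, target in ROUTES
               if ip in ipaddress.ip_network(cidr)]
    # The route with the longest prefix wins, as in real route tables.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.1.25"))      # "local": stays inside the VPC
print(next_hop("93.184.216.34"))  # "nat-gateway": heads out to the internet
```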

Documentation: NAT Instances, NAT Gateway, Difference between NAT Instance and NAT Gateway

Difference between CloudSearch and ElasticSearch in AWS (CloudSearch vs ElasticSearch)

Both CloudSearch and ElasticSearch use powerful underlying search engines and enable you to search and analyze the data. Both are listed under the Analytics services in AWS console. Below are the basic differences between CloudSearch and ElasticSearch: 

CloudSearch

1. Custom search service for your website or application.

2. CloudSearch uses open source Apache Solr as the underlying search engine.

3. Supports 34 languages and popular search features such as highlighting, autocomplete, and geospatial search.

4. It requires data to be loaded as documents and is good for full-text search, with an understanding of languages and grammar (example, synonyms, words to ignore etc.).

5. You can create a search domain and upload the data that you want to make searchable, and Amazon CloudSearch will automatically provision the required resources and deploy a highly tuned search index.

ElasticSearch

1. The service offers open-source Elasticsearch APIs, managed Kibana, and integrations with Logstash (ELK stack) and other AWS Services, enabling you to securely ingest data from any source and search, analyze, and visualize it in real time.

2. It is commonly used for near real-time visualizations of logs files and data analytics.

3. The service also offers built-in integrations with other AWS services such as Amazon Kinesis Data Firehose, AWS IoT, and Amazon CloudWatch Logs for data ingestion; AWS CloudTrail for auditing; Amazon VPC, AWS KMS, Amazon Cognito, and AWS IAM for security.

Difference between Route53 and ELB in AWS (Route53 vs ELB)

Both Route53 and ELB are used to distribute the network traffic. These AWS services appear similar but there are minor differences between them. 

1. ELB distributes traffic among Multiple Availability Zone but not to multiple Regions. Route53 can distribute traffic among multiple Regions. In short, ELBs are intended to load balance across EC2 instances in a single region whereas DNS load-balancing (Route53) is intended to help balance traffic across regions.

2. Both Route53 and ELB perform health checks and route traffic only to healthy resources. Route53 weighted routing uses health checks and removes unhealthy targets from its list. However, DNS responses are cached, so unhealthy targets may remain in visitors' caches for some time. ELB, by contrast, is not subject to DNS caching and removes unhealthy targets from the target group immediately.

Use both Route53 and ELB: Route53 provides integration with ELB. You can use both Route53 and ELB in your AWS infrastructure. If you have AWS resources in multiple regions, you can use Route53 to balance the load among those regions. Inside the region, you can use ELB to load balance among the instances running in various Availability Zones.

For more details on Route53 and ELB, you can visit my following articles:

Route53: Domain Name System (DNS) from AWS

AWS ELB (Elastic Load Balancer)

Wednesday, January 2, 2019

AWS VPC Security: Difference between Security Group and ACL (Security Group vs ACL)

Security Groups and ACLs (Access Control Lists) provide security to resources launched in a VPC. Below are the basic differences between Security Groups and ACLs:

Security Group

1. Acts as a virtual Firewall at instance level.

2. Security Group acts as first layer of defense in a VPC.

3. One instance can be associated with multiple security groups.

4. Whenever we create a VPC, a default Security Group is created.

5. If we don’t associate an instance with any security group, the default security group (created along with the VPC) is automatically associated with it.

6. Stateful: Return traffic is automatically allowed, regardless of any rules.

7. Supports allow rules only.

8. All rules are evaluated before deciding whether to allow traffic.

9. Applies to an instance only if someone specifies the security group when launching the instance, or associates the security group with the instance later on.

10. Basic ports to remember:

  • SSH - 22 (Mainly for Linux Server)
  • RDP - 3389 (Mainly for Windows Server)
  • SMTP - 25 (Mail Server)
  • HTTP - 80
  • HTTPS - 443
  • All traffic - ports 0–65535
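Points 6–8 above condense into one rule: a security group permits traffic if any allow rule covers it, and implicitly denies the rest. A minimal sketch with hypothetical rules:

```python
def sg_allows(port, allow_rules):
    """Security groups support allow rules only: traffic is permitted
    if ANY rule's port range covers it, otherwise implicitly denied."""
    return any(low <= port <= high for low, high in allow_rules)

# Hypothetical web-server security group: SSH, HTTP, HTTPS.
rules = [(22, 22), (80, 80), (443, 443)]
print(sg_allows(443, rules))   # True: the HTTPS rule matches
print(sg_allows(3389, rules))  # False: no rule, so implicitly denied
```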

NACL (Network Access Control List)

1. Acts as a virtual Firewall at subnet level.

2. NACL acts as second (optional) layer of defense (after Security Group) in VPC.

3. One subnet can be associated with only one NACL while one NACL can be associated with multiple subnets.

4. Whenever we create a VPC, a default NACL is created.

5. If we don’t associate a subnet with any NACL, the default NACL (created along with the VPC) is automatically associated with it.

6. Stateless: Return traffic must be explicitly allowed by rules.

7. Supports allow rules and deny rules.

8. Rules are processed in number order when deciding whether to allow traffic.

9. Automatically applies to all instances in the subnets it's associated with (therefore, you don't have to rely on users to specify the security group).
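Point 8 (rules processed in number order) is the key behavioral difference from security groups. A small sketch with hypothetical rules shows how the lowest-numbered matching rule wins:

```python
def nacl_allows(port, rules):
    """NACL rules are evaluated in ascending rule-number order and the
    FIRST matching rule decides; unmatched traffic hits the implicit
    final '*' rule, which denies it."""
    for _number, low, high, action in sorted(rules):
        if low <= port <= high:
            return action == "ALLOW"
    return False  # implicit '*' rule: deny everything else

rules = [
    (100, 80, 80, "ALLOW"),    # allow HTTP
    (200, 0, 65535, "DENY"),   # deny everything else explicitly
    (150, 22, 22, "ALLOW"),    # allow SSH (150 < 200, so it fires first)
]
print(nacl_allows(80, rules))   # True: rule 100 matches first
print(nacl_allows(22, rules))   # True: rule 150 fires before rule 200
print(nacl_allows(443, rules))  # False: rule 200 denies it
```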

VPC and Subnets: AWS Networking Services

VPC (Virtual Private Cloud) and Subnets are very important concepts under AWS Networking Services. All your AWS resources are defined in a VPC. Below are some basic points about VPC and Subnets:

VPC

1. It is a logically isolated virtual network (sub-cloud) for you in AWS cloud. All your AWS resources are defined in a particular VPC.

2. You can select your own IP addresses, subnets, NACL (Network Access Control List), route tables and network gateways.

3. Whenever you create a VPC, you must define an IP range for it. Both IPv4 and IPv6 CIDR blocks are supported. The IPv4 CIDR block size must be between /16 and /28. The IPv6 CIDR block is a fixed /56; you cannot choose the IPv6 range.

4. One VPC can have multiple Subnets, NACL and Route Tables.

5. Internet Gateway: VPC is composed of subnets. Subnets are private by default. To make any subnet public, Internet Gateway should be associated with that VPC. One VPC cannot have more than one Internet Gateway.

6. VPC Peering: Connect two or more of your VPCs with each other or with a VPC in another AWS account. (Peered VPCs originally had to be in the same region; inter-region peering is now supported.) Example: you can enable VPC Peering between a DEV VPC and a UAT VPC, or between a PROD VPC and a Disaster Recovery VPC. Peering connections are strictly one-to-one between VPCs, and transitive peering is not possible.

7. Whenever we create an account, a default VPC is created.

8. Default Route Table, NACL and Security Group: Whenever we create a VPC, one default Route Table, NACL and Security Group are created. If you don’t associate your subnets with any Route Table or NACL, these defaults get associated with them. Likewise, if you don’t associate your instances with any Security Group, the default Security Group is associated with each instance.

9. Flow Logs: You can associate flow logs with a VPC. They capture information about the IP traffic going to and from network interfaces in your VPC. To enable flow logs, you need an IAM role for flow logs and a log group in CloudWatch.

Subnets

1. Sub-network inside a VPC. It contains sub-range of IP Addresses in a VPC.

2. A Subnet resides in a single AZ; it cannot span multiple AZs.

3. A Subnet can be private or public. Keep your databases in private subnets and your web servers in public subnets.

4. An instance always belongs to a subnet. You cannot have an instance in a VPC which does not belong to any subnet.

5. NACL (Network Access Control List): Optional layer of security at subnet level. Acts as firewall at subnet level (Security Group act as firewall at instance level). One subnet can only be associated with one NACL. One NACL can be associated with multiple subnets.

6. Route Table: Each subnet must be associated with a Route Table. One subnet can have only one Route Table, while one Route Table can be associated with multiple subnets. The network traffic of any instance inside a subnet is dictated by the Route Table attached to that subnet.

7. While creating a subnet, you must specify VPC, CIDR (must be in between the CIDR range of the parent VPC), and Availability Zone.

8. After creating a subnet, you should associate a Route Table and NACL with it. If you don’t do this, then the default Route Table and NACL will get associated with it which was created while creating the VPC.

9. A Subnet is private by default. To make it public, 

  • Define an Internet Gateway (IGW).
  • Attach the IGW to the VPC. An IGW must be attached to a VPC, and one VPC can be attached to only one IGW.
  • Create a Route Table and add an internet route to it (direct 0.0.0.0/0 to the IGW).
  • Explicitly associate the Subnet (which you want to make public) with this Route Table. One Subnet has only one Route Table.
  • Enable “Auto-assign public IPv4 address” on that Subnet. You can also do this setting while launching an instance in the subnet.
  • Ensure the Security Group and NACL are not blocking internet traffic.
  • Now any EC2 instance launched in this Subnet will be able to communicate with the internet.
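The CIDR requirement from point 7 (a subnet's CIDR must fall within the parent VPC's CIDR) is easy to check with Python's ipaddress module; the CIDR blocks below are hypothetical examples:

```python
import ipaddress

def valid_subnet_cidr(vpc_cidr, subnet_cidr):
    """A subnet's CIDR block must lie entirely inside its parent VPC's CIDR."""
    return ipaddress.ip_network(subnet_cidr).subnet_of(
        ipaddress.ip_network(vpc_cidr))

print(valid_subnet_cidr("10.0.0.0/16", "10.0.1.0/24"))     # True
print(valid_subnet_cidr("10.0.0.0/16", "192.168.1.0/24"))  # False
```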

Tuesday, January 1, 2019

AWS Workspace: Desktop as a Service from AWS

AWS Workspaces is a very useful service. You don't need to create and manage VMs for your employees, workers, and contractors spread across the globe; just use AWS Workspaces. This is Desktop as a Service. Below are some basic points about AWS Workspaces:

1. Cloud desktop service (DaaS: Desktop as a Service)

2. You can use Amazon Workspaces to provision either Windows or Linux desktops in just a few minutes and quickly scale to provide thousands of desktops to workers across the globe. 

3. Workspaces helps you eliminate the complexity in managing hardware inventory, OS versions and patches, and Virtual Desktop Infrastructure (VDI), which helps simplify your desktop delivery strategy. 

4. With Amazon Workspaces, your users get a fast, responsive desktop of their choice that they can access anywhere, anytime, from any supported device.

5. Enable bring your own device (BYOD): Amazon Workspaces lets you run a cloud desktop directly on a wide range of devices like PC, Mac, iPad, Kindle Fire, Android tablet, Chromebook, and web browsers like Firefox, and Chrome. 

6. You can integrate these desktops with your company active directory.

7. You can pay either monthly or hourly, just for the Workspaces you launch, which helps you save money when compared to traditional desktops and on-premises VDI solutions.

AWS CloudFormation: Infrastructure as Code

When you need to create a replica of an existing cloud environment in another region or account, just create a template (in JSON/YAML form) from the existing environment and apply it in the other region or account. CloudFormation expresses all your cloud infrastructure as JSON/YAML code. Below are some basic points to remember about CloudFormation:

1. Infrastructure as Code

2. Create replica of your existing cloud environment (infrastructure resources) across multiple accounts and regions.

3. Components:

  • Template (JSON or YAML) (Code of your cloud environment or infrastructure resources) 
  • Stack (Logical collection/grouping of infrastructure resources based on the template code)
  • Changeset (Preview summary of proposed changes to your infrastructure)

4. Use cases: 

  • To copy the current cloud environment to another account or region 
  • To copy Production environment for developers to debug any issue 

5. Cost: CloudFormation itself has no additional cost, but you are charged for the underlying resources it builds.

AWS ELB (Elastic Load Balancer)

ELB (Elastic Load Balancer) balances and distributes traffic among various EC2 instances. Below are some basic points regarding ELB:

1. Elastic Load Balancer can distribute traffic among Multiple Availability Zone but not to multiple Regions.

2. Routes traffic to targets within Amazon Virtual Private Cloud (Amazon VPC) based on the content of the request.

3. Ensures only healthy targets receive traffic. If all of your targets in a single Availability Zone are unhealthy, Elastic Load Balancing will route traffic to healthy targets in other Availability Zones. Once targets have returned to a healthy state, load balancing will automatically resume to the original targets.

4. Hybrid Elastic Load Balancing: Offers ability to load balance across AWS and on-premises resources using the same load balancer. 

5. Application Load Balancer: Best suited for load balancing of HTTP and HTTPS traffic

6. Network Load Balancer: Best suited for load balancing of TCP traffic 

7. Classic Load Balancer: Classic Load Balancer provides basic load balancing across multiple Amazon EC2 instances and operates at both the request level (HTTP/S) and connection level (TCP).

8. Load Balancer can be internal/private to VPC or exposed to internet via Internet Gateway.

Monday, December 31, 2018

Route53: Domain Name System (DNS) from AWS

Route53 is the Domain Name System (DNS) service provided by AWS. Below are some basic points regarding Route53:

1. Domain Name System (DNS): Translates names like www.example.com into the numeric IP addresses like 192.0.2.1.

2. Why "53" in the name? This service is named Route53 because port 53 (TCP/UDP) is the port that handles DNS queries.

3. Routes traffic based on multiple criteria, such as endpoint health, geographic location, and latency. Ensure end users are routed to the closest healthy endpoint for your application.

4. Routing Policies: Simple, Weighted (example: 75% to one server, 25% to other), Latency-based, Failover, Geo-location based.

5. Configure DNS health checks to route traffic to healthy endpoints, or to independently monitor the health of your application and its endpoints. Route53 re-routes your users to an alternate location if your primary application endpoint becomes unavailable.

6. Also offers Domain Name Registration.

7. Record Sets: NS, SOA, A, AAAA, CNAME
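The weighted routing policy from point 4 is simple proportional arithmetic: each record receives a share of DNS queries equal to its weight over the total weight. A minimal sketch:

```python
def traffic_shares(weights):
    """Route53 weighted routing sends each record a fraction of DNS
    queries equal to its weight divided by the sum of all weights."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

print(traffic_shares({"server-a": 75, "server-b": 25}))
# {'server-a': 0.75, 'server-b': 0.25}
```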

CloudFront: Content Delivery Network (CDN) from AWS

CloudFront is the Content Delivery Network service provided by AWS. Below are some basic points regarding CloudFront:

1. Distribution service / Content Delivery Network (CDN) from AWS.

2. Edge Location: The CloudFront network has 160 points of presence (PoPs) as of now.

3. Edge server caches the data to improve latency and lower the load on your origin servers. 

4. Highly Programmable and Customizable content delivery with LAMBDA@EDGE: Lambda@Edge functions, triggered by CloudFront events, extend your custom code across AWS locations worldwide, allowing you to move even complex application logic closer to your end users to improve responsiveness.

5. CDN Origins: S3, EC2, ELB, or any custom HTTP server. (Route53 is a DNS service, not a CloudFront origin.)

6. TTL (Time to Live): How long your content is cached at an edge location, defined in seconds. Default TTL: 24 hours (86,400 seconds); Maximum TTL: 365 days (31,536,000 seconds); Minimum TTL: 0.
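The TTL values above are easy to sanity-check with arithmetic, and the caching behaviour can be modelled with a toy cache entry (this is an illustrative sketch, not CloudFront's actual implementation):

```python
# The TTL values quoted above, expressed in seconds.
DEFAULT_TTL = 24 * 60 * 60        # 24 hours  -> 86400 seconds
MAXIMUM_TTL = 365 * DEFAULT_TTL   # 365 days  -> 31536000 seconds

class CachedObject:
    """Toy model of an edge-cache entry: fresh until its TTL elapses."""
    def __init__(self, cached_at: float, ttl: int = DEFAULT_TTL):
        self.cached_at = cached_at
        self.ttl = ttl

    def is_fresh(self, now: float) -> bool:
        # Fresh content is served from the edge; stale content would be
        # re-fetched from the origin.
        return (now - self.cached_at) < self.ttl

obj = CachedObject(cached_at=0, ttl=DEFAULT_TTL)
print(obj.is_fresh(now=3600))    # one hour later: still cached -> True
print(obj.is_fresh(now=90000))   # past 24 hours: expired -> False
```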

7. Price Class: All Edge Locations (best performance); US, Canada and Europe only; or US, Canada, Europe, Asia and Africa. Price is charged accordingly.

8. Default CloudFront URL: *.cloudfront.net

9. Protocols supported: HTTP and HTTPS (plus RTMP for streaming distributions). FTP is not supported.

10. You can blacklist/whitelist countries using the geo-restriction feature.

11. Clearing cached content from edge locations (invalidation) is chargeable beyond the first 1,000 invalidation paths per month.

Sunday, December 30, 2018

AWS Compute Services: EC2, Elastic Beanstalk, Lambda and ECS

EC2 (Elastic Compute Cloud), Elastic Beanstalk, Lambda and ECS (Elastic Container Service) are the compute service offerings from AWS. Below are some basic points regarding these AWS compute services:

EC2

1. The most commonly used AWS service, called Elastic Compute Cloud.

2. This is the virtual server offering in AWS.

3. Categories of EC2:
  • On-Demand Instances (pay as you go; billed hourly or per second depending on the OS)
  • Spot Instances (bid on spare capacity; choose these when start and end times are flexible)
  • Reserved Instances (1-year or 3-year commitment, cheaper than On-Demand)
  • Scheduled Reserved Instances (capacity reserved on a recurring schedule)
  • Dedicated Hosts and Dedicated Instances
4. EC2 Types:
  • General Purpose (T2, M5)
  • Compute Optimized (C5)
  • Memory Optimized (X1, R4)
  • Storage Optimized (I3, D2)
  • Accelerated Computing / GPU Optimized (P3, G3, F1)
Please note that EC2 is the most important topic in AWS. So, for details, please go through the official documentation.

Elastic Beanstalk

1. A simple way to deploy your application on AWS, without the headache of managing the underlying infrastructure.

2. Just upload your application code and the service automatically handles all the details such as resource provisioning, load balancing, auto-scaling, and monitoring.

3. Supports PHP, Java, Python, Ruby, Node.js, .NET, Go and Docker.

4. Elastic Beanstalk uses core AWS services such as Amazon EC2, Amazon Elastic Container Service (Amazon ECS), Auto Scaling, and Elastic Load Balancing to support your applications.

5. Monitor and manage the health of your applications.

Lambda

1. Lambda lets you run code without managing any server (Go Serverless). Just upload your code to Lambda or write your code in Lambda Code Editor and it takes care of everything required to run it.

2. Any code uploaded to Lambda becomes a Lambda Function. Code should be written in a stateless style. If you need to persist state between invocations, save it to S3, DynamoDB, etc. and retrieve it from there.
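A minimal sketch of what "stateless" means in practice: everything the function needs arrives in the event, and nothing is kept in module-level variables between invocations. The handler below is illustrative; a real function would persist state with boto3 (shown only as a comment to keep the sketch dependency-free, and the bucket name is hypothetical).

```python
import json

def lambda_handler(event, context):
    """Stateless handler: all inputs come from `event`; no state is
    carried over between invocations."""
    items = event.get("items", [])
    total = sum(items)
    # To persist state between invocations, write it out here, e.g. with
    # boto3 to S3 or DynamoDB (omitted so this sketch runs anywhere):
    #   boto3.client("s3").put_object(Bucket="my-state-bucket", ...)
    return {"statusCode": 200, "body": json.dumps({"total": total})}

# Simulated invocation; no AWS account needed:
print(lambda_handler({"items": [1, 2, 3]}, None))
```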

3. Lambda can be directly triggered by AWS services such as S3, DynamoDB, Kinesis, SNS, CloudWatch, API Gateway and Web Applications. Use cases: https://aws.amazon.com/lambda/

4. Languages supported: C#, Java, Python, Ruby, Go, Powershell, Node.js

5. You pay only for the compute time you consume: you are charged for the number of times your code is triggered and for every 100 ms your code executes. There is no charge when your code is not running.
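The 100 ms billing increment means duration is rounded up before pricing. The sketch below shows the arithmetic; the per-GB-second and per-request prices are roughly in line with Lambda's published pricing at the time of writing, but check the current price list before relying on them.

```python
import math

# Assumed prices (verify against the current AWS price list):
PRICE_PER_GB_SECOND = 0.00001667   # compute charge
PRICE_PER_REQUEST = 0.0000002      # $0.20 per million requests

def lambda_cost(duration_ms: float, memory_mb: int, invocations: int) -> float:
    """Estimate cost: duration is billed in 100 ms increments, rounded up."""
    billed_ms = math.ceil(duration_ms / 100) * 100
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000) * invocations
    return gb_seconds * PRICE_PER_GB_SECOND + invocations * PRICE_PER_REQUEST

# A 130 ms run is billed as 200 ms:
print(math.ceil(130 / 100) * 100)  # 200
print(lambda_cost(130, 1024, 1_000_000))  # cost of a million 130 ms, 1 GB runs
```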

ECS

1. Elastic Container Service (ECS) is a container orchestration service that supports Docker containers and allows you to easily run and scale containerized applications on AWS. 

2. Amazon ECS eliminates the need for you to install and operate your own container orchestration software, manage and scale a cluster of virtual machines, or schedule containers on those virtual machines.

3. Containers without Servers: With Fargate, you no longer have to select Amazon EC2 instance types to run your containers.

4. Amazon ECS launches your containers in your own Amazon VPC, allowing you to use your VPC security groups and network ACLs. 

5. With simple API calls, you can launch and stop Docker-enabled applications, query the complete state of your application, and access many familiar features such as IAM roles, security groups, load balancers, Amazon CloudWatch Events, AWS CloudFormation templates, and AWS CloudTrail logs.

Thursday, December 27, 2018

AWS Data Transport Solution: Snowball, Snowball Edge and Snowmobile (Data Truck)

It can cost thousands of dollars to transfer 100 terabytes of data using high-speed Internet. The same 100 terabytes of data can be transferred using two Snowball devices for as little as one-fifth the cost of using the Internet. For example, 100 terabytes of data will take more than 100 days to transfer over a dedicated 100 Mbps connection. That same transfer can be accomplished in less than one week, plus shipping time, using two Snowball devices.
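The "more than 100 days" figure can be sanity-checked with simple arithmetic. The sketch below assumes an effective throughput of about 100 Mbps, which is roughly the link speed that figure implies; real-world throughput varies.

```python
# Back-of-the-envelope check of the Snowball transfer-time comparison.
DATA_BITS = 100e12 * 8          # 100 TB expressed in bits
LINK_BPS = 100e6                # dedicated 100 Mbps connection

seconds = DATA_BITS / LINK_BPS
days = seconds / 86400
print(round(days, 1))           # about 92.6 days at 100% utilisation,
                                # i.e. well over 100 days in practice
```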

Below are some basic points to remember about Snowball: 

1. Snowball is a petabyte-scale data transport solution to transfer large amounts of data into and out of the AWS Cloud. Even with high-speed Internet connections, it can take months to transfer large amounts of data. 

2. One Snowball device can hold approximately 50 TB of data.

3. With Snowball, you don’t need to write any code or purchase any hardware to transfer your data. Create a job in the AWS Management Console ("Console") and a Snowball device will be automatically shipped to you. Once it arrives, attach the device to your local network, download and run the Snowball Client ("Client") to establish a connection, and then use the Client to select the file directories that you want to transfer to the device. The Client will then encrypt and transfer the files to the device at high speed. Once the transfer is complete and the device is ready to be returned, the E Ink shipping label will automatically update and you can track the job status via Amazon Simple Notification Service (SNS), text messages, or directly in the Console.

4. Snowball Edge: 100 TB (storage as well as compute functionality). Local compute equivalent to an EC2 m4.large instance.

5. Snowmobile: Data-truck with storage up to 100 PB.