This article mainly shares my preparation and examination experience for the AWS Advanced Networking Specialty (ANS) certification, reviews AWS network-related services and analyzes their main characteristics and application limitations, and finally offers a brief analysis of AWS strategy and its impact on operators and enterprise networks.
Two months ago, I had some time to set a goal and optimize my knowledge structure. After researching on the Internet for half a day, I chose the AWS Advanced Networking Specialty certification.
AWS Advanced Networking Specialty is a certification launched by Amazon in 2017. It focuses on network requirements across connectivity, routing, reliability, fault tolerance, security, encryption, domain name resolution, CDN, directory services, the network needs of various cloud services (VDI, containers, RDS, big data, database migration, etc.), automated deployment and operation, efficiency and cost, and risk and compliance. The scope is wide and the material has real depth. The exam is scenario-based: it tests your ability to apply knowledge to solve practical problems, not the knowledge itself. There are 65 questions (single and multiple choice) to finish in 170 minutes, and each question presents a business scenario, so you must understand the problem, build a model, and make a choice in about 2.5 minutes. Let's look at the style of the test through two mock questions:
Key knowledge points: VPC Peering does not support Transitive Routing; a Client-to-Site VPN allocates an IP address for the client and performs NAT; generally speaking, NAT only supports one-way connections. The correct answer is B.
Key knowledge points: since Mumbai and Singapore are far from the US East Coast and latency is high, a Transit Hub VPC should be established in the Asia Pacific region; VPC Peering does not support Transitive Routing, so VPN over VPC Peering is needed to connect the two Transit Hub VPCs; for cross-region VPC Peering, AWS provides encryption automatically, so an IPSec VPN is unnecessary and a GRE tunnel is more efficient. (When I did this exercise for the first time, I misread Mumbai as Miami and misunderstood the whole question.) The correct answer is B.
Many people consider AWS ANS difficult, perhaps the most difficult of the nine AWS certifications. A man in the United States who holds six AWS certificates (three Associate: SA, Developer, SysOps; three Professional: SA, DevOps, Big Data) needed three attempts before passing AWS ANS. Looking back, I think the main reasons are as follows:
1) ANS was launched in 2017, and few people have passed it. Compared with other AWS certifications, the quality of the ANS mock exercises on the Internet is not high. While preparing for the exam, I did more than 800 mock exercises (mainly from the Official Study Guide and Whizlabs, many of which are duplicates), but none of them appeared in the actual examination, so the exam genuinely tests your real level.
2) The network is complex. It connects all kinds of cloud services and on-premises systems, involving end-to-end paths and many components. Normally you don't notice it exists, but when something goes wrong, everything seems to be its fault. If you don't thoroughly understand the core networking concepts, a slight change of scenario can leave you confused under pressure.
3) Most people working on cloud computing in enterprises today come from IT and software backgrounds, and few come from IP networking. Naturally, people from different backgrounds perceive the difficulty of AWS ANS differently. My personal experience is that IT and software cover a wide range, while IP networking is deeper and more complex.
After researching on the Internet, I still felt some pressure. Could I succeed? How much time would it take? But objectively, this certification is designed for people like me who have moved from IP networking to IT and cloud computing. If I dared not take the challenge, who could I expect to? Am I an expert or not? Passing a test does not make you an expert, but a real expert should not fear this kind of practical test. So I finally made up my mind: go!
Once the goal was set, I pursued it unswervingly. The next eight weeks were hard, and almost all my available time went into "theoretical study – lab – mock exercises – summary". During the World Cup in Russia, I didn't watch a single match. One day I suddenly received a call from the company's HR: "Mr. Xue, you worked 150 hours of overtime last month. Are you OK?" I said, "It's OK. There's no air conditioning in the dormitory, but the office is cool, and I can read there by myself."
I read thousands of pages of material and took more than 200 pages of notes, but I still didn't feel confident enough, and I postponed the exam date again and again, not only because of the fee of more than USD 300, but also because I worried that failing would shake my self-confidence; after all, I am an expert in the company. Later, I grew tired of preparing: I couldn't look at the materials any more, and I couldn't find large chunks of new knowledge. I couldn't wait any longer, so I signed up for the August 7 exam.
Although I had prepared thoroughly, the examination itself was still tense and exciting. Time flew. Some questions were simply incomprehensible, and I kept hoping the next one would be easier so I could win back some time. I even thought of giving up and submitting the paper as it was. In the end I held on: as long as one minute remained, I would use it, go back, and read the questions. The ANS exam tests not only your professional knowledge but also your psychological resilience and willpower, and whether you can perform steadily under time and environmental pressure.
It took me 140 minutes to answer the 65 questions, 18 of which I marked as uncertain. I then used the remaining 30 minutes to review those 18 questions and revised some of the answers. At the moment before submitting, I was actually quite relaxed. Over the past two months, both in preparation and in the exam room, I had planned carefully, made my best effort, and systematically improved my professional level at the intersection of cloud computing and networking. Whatever the result, I had no regrets; if I failed, it simply meant my level or practice was not yet there. I clicked the mouse, waited a moment, and the screen showed:
“Congratulations, you have successfully completed the AWS Advanced Networking – Specialty exam…”
At that moment my mind was muttering: what is there to congratulate about merely completing the exam? Is there anyone who can't complete it? Did I pass? After leaving the examination room, I turned on my phone and found an email from AWS:
Yes, the score was 84% (the passing line is usually 65%–70%), a very good result, which shows I had carried too heavy a mental burden and over-prepared. In fact, passing the certification exam is enough. Work and exams differ in many ways, and every extra 1% on the exam costs extra effort and time; that time is better spent learning something new or enjoying life!
Cloud computing is not my day job. Over the past three years I have invested a lot of spare time in learning cloud computing technologies while creating opportunities to practice them at work. I am satisfied with my progress. From knowing nothing about cloud computing and simply following others' advice, to forming my own views and passing AWS ANS, a professional cloud computing certification, I can really feel my progress and growth; and this progress comes from the positive changes I have made to meet future challenges. What I have gained is not only confidence, but also broad room for future development.
Next, I'd like to share my experience of the AWS ANS examination from three dimensions:
1) Preparation and examination experience of AWS ANS examination;
2) A review of AWS network-related services, their main characteristics, and some application limitations;
3) A brief analysis of AWS strategies.
1. Preparation and examination experience for the AWS ANS certification examination
Main learning materials:
ANS training videos on A Cloud Guru (main resource), subscription fee USD 29/month, first 7 days free:
ANS training videos on Linux Academy, subscription fee USD 49/month, first 7 days free:
Network-related videos from AWS re:Invent 2017 (main resource)
Related AWS ANS videos organized by netizens: https://www.youtube.com/watch?v=SMvom9QjkPk&list=PLlkukGgpsXyvUbJ85RVD7qNJ1mcGKO4_w&index=1
ANS practice tests provided with the Official Study Guide, more than 200 exercise questions, including:
Whizlabs provides 8 sets of ANS practice tests, more than 600 exercises, USD 29:
The AWS Management Console was my main tool;
I wrote some simple Python web programs and ran them on EC2 instances for testing;
When using PuTTY to log in to an EC2 instance, I needed to configure a proxy server through the company.
Learning plan based on personal situation:
Completed more than 800 mock exercises, summarized them, and deepened my understanding.
Read the Official Study Guide (for the third time) and reviewed more than 200 pages of study notes;
Did some experiments and watched some videos.
1) During the preparation period, keep exercising and maintain your competitive state;
2) The AWS ANS exam covers a large amount of information, so take notes and review regularly;
3) During the exam there is little time to think, so the mainstream business scenarios (VPC routing, the main scenarios and schemes of Hybrid DNS, the main variants of the Transit Hub VPC scheme, EC2 instance VPN, and the horizontal and vertical scaling schemes for EC2-instance VPN gateways) should all be summarized in advance, drawing inferences from each case.
Notes for appointment and examination:
Taking the examination:
Carry two photo IDs; the test center provides a locker for personal belongings;
After entering the examination room you cannot carry any items, but you can ask the test center staff for paper and pens;
The test is computer-based with electronic monitoring and cameras all around;
The exam is mentally intense, so ensure enough sleep and energy in advance and settle your mood;
You are allowed to go to the toilet during the exam; you can prepare a bottle of water and leave it on the way to the toilet.
2. Review of AWS network-related services
AWS's English documentation is very good, providing detailed user guides and FAQs for each service. However, when it comes to how key technologies are implemented, it often glosses over the details. This was the main obstacle I encountered while preparing for the ANS exam: without knowing the underlying implementation, many knowledge points rely on rote memory, which is unpleasant. For the implementation of some key services, I referred to open-source software and online discussions and, combined with my past R&D experience, tried to draw models to deepen understanding and simplify memorization. The following discussion of important and difficult points of AWS network-related services comes from my study notes during the preparation period.
VPC forwarding logic and transitive routing
VPC is presumably implemented with an SDN overlay scheme. It does not support multicast or broadcast; forwarding within a Subnet, forwarding between Subnets, and forwarding from EC2 instances to the various service gateways (IGW, VGW, NAT, DNS, etc.) are all completed in one hop.
VPC places an important restriction on forwarding logic: at least one of a packet's source address and destination address must correspond to an interface within the VPC; otherwise the packet is discarded.
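As a toy illustration of this restriction (the addresses below are made up, and this is a model of the rule, not the AWS implementation), a forwarding check can be sketched as:

```python
from ipaddress import ip_address

# Hypothetical set of interface addresses that exist inside the VPC.
vpc_interfaces = {ip_address("10.0.1.10"), ip_address("10.0.2.20")}

def vpc_allows(src: str, dst: str) -> bool:
    """Forward only if the source or the destination is a VPC interface."""
    return ip_address(src) in vpc_interfaces or ip_address(dst) in vpc_interfaces

print(vpc_allows("10.0.1.10", "8.8.8.8"))       # source is inside the VPC -> forwarded
print(vpc_allows("192.168.0.5", "172.16.0.9"))  # neither endpoint is local -> dropped
```

This is exactly why transit traffic (neither endpoint inside the VPC) cannot pass through, as discussed next.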
An extension of this forwarding logic is that VPC does not support Transitive Routing, mainly for the security and reliability of the AWS cloud network, e.g. avoiding routing loops in tenant-designed VPC networks and source address spoofing. The lack of Transitive Routing affects every aspect of AWS cloud network design and increases the complexity of the overall scheme; it is also the biggest difference from a traditional on-premises network. The following summary is what I compiled from AWS official documents and labs:
Access within the VPC;
Access via VPC Peering (only some instances of other VPCs in the same region can be accessed);
Access via Direct Connect;
Access via VPN.
IGW/EIGW, VGW, VPC Peering, and Gateway VPC Endpoints have no ENI interface inside the VPC, so they can only be accessed from within the VPC.
Although VPC DNS can be reached at the "VPC CIDR + 2" address, it has no ENI interface inside the VPC (the VPC router presumably intercepts DNS packets directly and forwards them to the AWS-managed DNS service), so it too can only be accessed from within the VPC.
Interface VPC Endpoints and EFS do have ENI interfaces and IP addresses inside the VPC; under normal circumstances, accessing them is no different from accessing an external EC2 instance. AWS presumably imposes some access restrictions based on business considerations and technical constraints.
NAT Gateways and NAT instances also have ENI interfaces and IP addresses inside the VPC, but the packets they process are destined for the Internet (not the NAT device itself). Because the VPC does not support transitive routing, they can only be used from within the VPC.
To implement transitive routing on the AWS platform, an overlay scheme is needed that transforms access to (or through) the various service gateways from outside the VPC into requests initiated inside the VPC; each type of service gateway needs its own solution.
To solve VPC DNS Transitive Routing, AWS-managed Simple AD, Active Directory, or self-deployed Unbound can be used to implement Conditional Forwarding.
To solve the Transitive Routing problem of IGW and NAT, an EC2 instance is needed to terminate the VPN tunnel and perform NAT (converting the source address to an address within the VPC CIDR) before accessing the VPC's IGW and NAT.
The VPC Endpoint Transitive Routing problem must be solved at the HTTP application layer, with two possible solutions:
1) Reverse proxy (proxy on the server side): deploy an ELB and a proxy farm inside the VPC. By modifying DNS, access from outside the VPC to the service gateway is redirected to the ELB first, and then reaches the service gateway via the ELB and proxy farm.
2) Forward proxy (proxy on the client side): deploy an ELB and a proxy farm inside the VPC and configure the ELB as the proxy server on the client. The client connects to the ELB first and then reaches the service gateway through the ELB and proxy farm.
Local routing for VPC
The VPC Local Route handles forwarding within the VPC and ensures communication between all its resources. It cannot be modified and cannot be overridden by more specific routes. If you want a software firewall to filter traffic between Subnets, you cannot change the VPC Local Route, but you can achieve it indirectly by changing the routing configuration inside the EC2 instance OS.
You can add a Destination with a larger range than the VPC CIDR to the VPC routing table.
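The two rules above can be sketched together: the Local Route always wins for the VPC CIDR, and other entries are chosen by longest prefix match. A minimal model (the route table, CIDRs, and Target IDs are invented for illustration):

```python
from ipaddress import ip_address, ip_network

# Hypothetical VPC route table: (destination, target).
route_table = [
    (ip_network("10.0.0.0/16"), "local"),        # VPC CIDR, cannot be overridden
    (ip_network("0.0.0.0/0"), "igw-1234"),       # default route to the Internet
    (ip_network("192.168.0.0/16"), "vgw-5678"),  # a range outside the VPC CIDR is allowed
]

def lookup(dst: str) -> str:
    addr = ip_address(dst)
    # The Local Route covers the entire VPC CIDR and is checked first.
    if addr in route_table[0][0]:
        return "local"
    # Remaining routes: longest prefix match wins.
    candidates = [(net, tgt) for net, tgt in route_table[1:] if addr in net]
    return max(candidates, key=lambda c: c[0].prefixlen)[1]

print(lookup("10.0.3.7"))      # intra-VPC traffic -> 'local'
print(lookup("192.168.10.1"))  # more specific than the default -> 'vgw-5678'
print(lookup("1.2.3.4"))       # falls through to the default -> 'igw-1234'
```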
The primary interface of an EC2 instance cannot be detached from the instance. An ENI can be created dynamically in a Subnet, representing a virtual network card; it can be dynamically attached to an EC2 instance (the instance must be in the same AZ as the ENI's Subnet), or detached from one instance and re-attached to another. ENI interfaces can be used for management networks, primary/standby failover, virtual firewalls, etc.
The number of ENI interfaces an EC2 instance supports is limited, and NIC Teaming is not supported.
Cross-account network interfaces:
An EC2 instance in a VPC/Subnet of account A is dynamically attached to an ENI in a VPC/Subnet of account B; the EC2 instance and the ENI must be in the same AZ. This is mainly used for access between AWS managed services and tenant VPCs, including RDS (AWS manages the database, the tenant uses it), Lambda (AWS provides the compute resources and accesses the tenant VPC), Workspaces, etc. The scalability and reliability of this scheme are average.
Usage is controlled: tenants must be whitelisted to use this function.
Source/Destination Check on the ENI interface:
With Source/Destination Check enabled (a consequence of the VPC forwarding logic not supporting Transitive Routing), a packet sent by an EC2 instance must carry the ENI's own IP address as its source, and a packet received must carry the ENI's own IP address as its destination; otherwise the packet is discarded. When an EC2 instance provides NAT, VPN, firewall, or similar functionality, the packets passing through it are usually not its own, so Source/Destination Check must be disabled.
Security groups act on the ENI interface. In a security group's Inbound rules, the port is local and the source IP address is remote; in its Outbound rules, the port and destination IP address are remote. Security groups only support Allow rules.
The default security group, through its self-referencing rule, accepts packets from instances configured with the same security group and allows all outbound packets. A newly created security group initially denies all inbound packets but allows all outbound packets.
Security groups are stateful: as long as a packet is allowed in, its return traffic is allowed out regardless of the Outbound rules, and vice versa. If all traffic on certain ports is allowed in and out, the security group does not need to maintain state.
Due to AWS's internal implementation, existing connections can continue uninterrupted for several days after the corresponding security group rule is removed, so a Network ACL must be configured as well.
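The stateful behavior described above can be modeled as a small connection-tracking table (all rules and flows here are hypothetical; this is a conceptual sketch, not how AWS implements it):

```python
# Inbound rule set: allow HTTPS only (illustrative).
allowed_inbound_ports = {443}

# Connection tracking: flows admitted by an inbound rule are remembered.
conn_track = set()

def inbound(src: str, dst_port: int) -> bool:
    """Admit a new inbound packet only if an inbound Allow rule matches."""
    if dst_port in allowed_inbound_ports:
        conn_track.add((src, dst_port))
        return True
    return False

def outbound_reply(dst: str, src_port: int) -> bool:
    """Return traffic of a tracked connection bypasses the Outbound rules."""
    return (dst, src_port) in conn_track

print(inbound("198.51.100.7", 443))         # allowed by the inbound rule
print(outbound_reply("198.51.100.7", 443))  # reply permitted: connection is tracked
print(outbound_reply("203.0.113.5", 443))   # no tracked connection -> outbound rules apply
```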
Network ACLs act on the Subnet. In a Network ACL's Inbound rules, the port is local and the source IP address is remote; in its Outbound rules, the port and destination IP address are remote. Network ACLs require explicitly configured Allow and Deny rules, matched in rule-number order.
The default Network ACL sends and receives all packets via rule 100 in both the Inbound and Outbound directions. A newly created NACL initially denies all inbound and outbound packets.
Network ACL is stateless.
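The ordered, first-match evaluation of NACL rules can be sketched like this (rule numbers, CIDRs, and ports below are invented; the final catch-all mimics the implicit `*` deny rule):

```python
from ipaddress import ip_address, ip_network

# Hypothetical inbound NACL: (rule number, source CIDR, port or None for any, action).
nacl_inbound = [
    (100, ip_network("0.0.0.0/0"), 443, "allow"),
    (200, ip_network("203.0.113.0/24"), 22, "allow"),
    (32767, ip_network("0.0.0.0/0"), None, "deny"),  # the implicit '*' deny rule
]

def evaluate(src: str, port: int) -> str:
    """Rules are checked in ascending rule-number order; the first match wins."""
    for _, cidr, rule_port, action in sorted(nacl_inbound, key=lambda r: r[0]):
        if ip_address(src) in cidr and (rule_port is None or rule_port == port):
            return action
    return "deny"

print(evaluate("203.0.113.9", 22))   # matched rule 200 -> 'allow'
print(evaluate("198.51.100.1", 22))  # fell through to the '*' rule -> 'deny'
```

Being stateless, the same evaluation would have to be repeated independently for the return traffic in the outbound direction.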
IGW, NAT Gateway/Instance, and EIGW:
A NAT Gateway/Instance simply rewrites a packet's source address to the address of its own ENI (a private address), using the source port number to distinguish user flows. The IGW then converts the packet's private source address to a public or elastic IP address in a 1:1 translation.
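A toy model of that two-stage picture's first stage, source NAT with per-flow port allocation (addresses, ports, and the table layout are all made up for illustration):

```python
import itertools

NAT_ENI_ADDR = "10.0.0.100"          # the NAT device's own private ENI address
_ports = itertools.count(1024)        # next free translated source port
_flow_table = {}                      # original flow -> translated source port

def translate(src, sport, dst, dport):
    """Rewrite the source to the NAT ENI address; a unique port keys each flow."""
    key = (src, sport, dst, dport)
    if key not in _flow_table:
        _flow_table[key] = next(_ports)
    return (NAT_ENI_ADDR, _flow_table[key], dst, dport)

# Two inner hosts using the same source port still get distinct NAT ports.
print(translate("10.0.1.10", 5000, "93.184.216.34", 80))
print(translate("10.0.2.20", 5000, "93.184.216.34", 80))
```

The IGW's 1:1 private-to-public mapping would then be a simple dictionary lookup with no port rewriting.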
NAT Gateway is an AWS-managed service: you cannot modify it, configure a security group on its ENI, configure port forwarding, or allow the outside to initiate connections inward. A NAT Gateway is deployed at the Subnet level, and its performance can scale automatically up to 45 Gbps.
A NAT instance can run third-party software and requires disabling Source/Destination Check. Security groups can be configured on NAT instances, though the Inbound rules are largely meaningless because none of the received packets are intended for the NAT instance itself.
EIGW provides IPv6 with an experience similar to IPv4 NAT: the VPC interior can access the Internet, the Internet cannot access the VPC interior, but no address translation is performed. EIGW is deployed at the VPC level, not the Subnet level.
VPC routing priority:
A VPC's static routing configuration cannot contain two Targets for the same Destination, but static and dynamic routes may conflict, which is resolved by route priority. The AWS network makes routing decisions in two places: the VPC and the VGW.
A VPC has several routing sources: the local CIDR, static configuration, and dynamic injection from the VGW. VPC routing priority is: the local CIDR route, then the longest-matching route (regardless of source), then static routes (Targets may be IGW, VPC Peering, VGW, NAT, ENI, etc.), then BGP routes learned through Direct Connect (Target VGW), then VPN static routes (Target VGW), then BGP routes learned through the VPN Connection (Target VGW).
The VGW also has multiple routing sources: the CIDR of the attached VPC, static routes configured on VPN Connections, and routes learned dynamically via BGP from multiple Direct Connect and VPN Connection peers. VGW routing priority is: local CIDR route, longest-matching route, BGP routes learned through Direct Connect, VPN static routes, then BGP routes learned through VPN Connections.
If multiple conflicting BGP routes exist across multiple Direct Connects or VPN Connections within a VGW, the BGP tie-breaking priority is: Weight (highest wins), Local_Pref (highest wins), aggregate routes, AS_Path (shortest wins), Origin (IGP < EGP < Incomplete), MED (lowest wins), etc.
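That tie-breaking order maps naturally onto a sort key; the sketch below encodes the attributes listed above (the candidate routes and their attribute values are invented, and real BGP best-path selection has further steps this omits):

```python
ORIGIN_RANK = {"IGP": 0, "EGP": 1, "INCOMPLETE": 2}

def best_path_key(route):
    """Lower tuple = more preferred; negation handles 'highest wins' attributes."""
    return (
        -route["weight"],               # highest Weight wins
        -route["local_pref"],           # highest Local_Pref wins
        len(route["as_path"]),          # shortest AS_Path wins
        ORIGIN_RANK[route["origin"]],   # IGP < EGP < Incomplete
        route["med"],                   # lowest MED wins
    )

routes = [
    {"name": "via-DX",  "weight": 0, "local_pref": 200, "as_path": [65001], "origin": "IGP", "med": 10},
    {"name": "via-VPN", "weight": 0, "local_pref": 100, "as_path": [65001], "origin": "IGP", "med": 0},
]
print(min(routes, key=best_path_key)["name"])  # 'via-DX': higher Local_Pref wins
```

This is also why the CGW-side traffic-engineering tricks mentioned later (Local_Pref, AS_Path prepend, MED) work: each one moves a route up or down this comparison.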
After a VGW learns routes via BGP over Direct Connect or BGP over VPN Connection, they can be dynamically injected into the VPC routing table; you can also configure static routes in the VPC routing table with the VGW as Target.
VPC Endpoints exist mainly for security and compliance reasons: they let you access AWS public services without traversing the Internet. They fall into two categories:
The Gateway VPC Endpoint, the earlier implementation, targets S3 and DynamoDB. It injects the public routes of these AWS services into the routing tables of the VPC and Subnets (identified by a pl-xxxxxxxx prefix list as Destination), with the VPC Endpoint as Target (identified as vpce-xxxxxx; it presumably provides a NAT function). You can configure an IAM policy on the VPC Endpoint to restrict access to S3 buckets, and you can configure an IAM policy on the S3 bucket to restrict access to the VPC or VPC Endpoint, but you cannot use policies based on source IP address. In addition, the pl-xxxxxxxx prefix list can be referenced in security group rules, but not in Network ACLs.
The Interface VPC Endpoint, the newer implementation, is based on AWS PrivateLink. For EC2, ELB, Kinesis, and other AWS services, one or more ENI interfaces and IP addresses are created in the consumer VPC, and regional and zonal DNS names are provided for these ENIs (resolvable from the public Internet, returning the private IP addresses). The standard AWS service domain names (e.g. ec2.us-east-2.amazonaws.com) can also be made to resolve to the private IP addresses of these ENIs inside the consumer VPC.
PrivateLink can also publish an Endpoint Service externally: create a Network ELB and back-end servers in the provider VPC and create an Endpoint Service based on the ELB; then create an Interface VPC Endpoint in the consumer VPC that references the provider's Endpoint Service.
Because Network ELB only supports TCP, AWS PrivateLink only supports TCP.
VPC Peering and AWS PrivateLink:
VPC Peering suits two-way communication between the EC2 instances of two VPCs and supports up to 125 peering connections; PrivateLink suits one-way communication and can support thousands of consumer VPCs.
VPC Peering within the same region can reference the peer's security groups and can be configured to resolve the peer's public DNS names to private IP addresses inside the VPC (instead of the elastic or public IP address). With VPC Peering, the Route 53 private hosted zones associated with the peer VPC are not automatically accessible; the association must still be made explicitly. For cross-region VPC Peering, AWS provides encryption automatically.
Site-to-Site VPN uses the VGW; VPN software can also run on an EC2 instance, which requires disabling Source/Destination Check.
The CGW mainly establishes the connection to the VGW; if the CGW is deployed behind a NAT device, it needs to support NAT-T (an IPSec feature that encapsulates IPSec ESP packets in UDP on port 4500).
One VPN Connection comprises two IPSec tunnels; Active/Active or Active/Passive operation is implemented through routing policy:
VGW → CGW traffic: when the CGW advertises routes to the VGW, it can use strategies such as BGP longest-prefix match, AS_Path prepend, or MED.
CGW → VGW traffic: when the CGW advertises routes to the on-premises internal network, it can use strategies such as BGP Weight or Local_Pref.
EC2-instance VPN gateway HA scheme: run two EC2 instances as IPSec gateways and establish tunnels, with EC2-1 as the Target of the on-premises route. An automation script detects failures, modifies the VPC routing table, and switches the Target of the on-premises route to EC2-2.
EC2-instance VPN gateway vertical scaling scheme: run BGP between EC2 instance 1 (for ELB) and three EC2 instances (for IPSec).
EC2-instance VPN gateway horizontal scaling scheme: split IPSec gateways by prefix, e.g. 192.168.0.0/17 via EC2 instance 1 and 192.168.128.0/17 via EC2 instance 2.
Client-to-Site VPN can only run as VPN software on EC2 instances, which usually also provide authentication, IP address allocation, and NAT for the client.
AWS cooperates with hundreds of regional operators worldwide, pushing the PoP down and using DX Routers to reach customers nearby. There are two access schemes:
1) Direct connection over fiber: DX Router — Dark Fiber — CGW, supporting 1 Gbps and 10 Gbps;
2) Via an operator network: DX Router — MPLS PE … MPLS PE — CGW, supporting 50–500 Mbps.
With a Dedicated Connection, multiple VLANs (VIFs) can be configured and the customer is responsible for the LOA-CFA; the customer pays the hourly port fee. A Hosted Connection corresponds to only one VLAN (one VIF), designated by the operator; the operator is responsible for the LOA-CFA, and the customer still pays the hourly port fee.
The CGW connects to the VGW through the DX Router and a Private VIF, runs BGP, and exchanges routes between the VPC and on-premises. The VPC announces only its own CIDR route to the CGW, not statically configured or dynamically injected routes; the CGW may advertise at most 100 routes to the VGW.
The CGW connects to the AWS public network through the DX Router and a Public VIF and runs BGP; community attributes control both the propagation scope of on-premises routes (local region, local continent, global) and the scope of AWS public routes the CGW learns (local region, local continent, global). The CGW may advertise at most 1000 routes to the AWS public network, and AWS will not provide transit service for on-premises public routes; if the CGW uses a private ASN, AS_Path prepend will not work.
A Hosted VIF can be a Public VIF or a Private VIF (bound to the receiver's VPC). Traffic-related fees for a Hosted VIF are borne by the receiver, while the hourly port fee is paid by the owner.
The VGW can act as a CloudHub, providing routing and forwarding among VPN Connections and Direct Connects.
After the CGW connects to a Direct Connect Gateway through the DX Router and a Private VIF, it can reach multiple VGWs across regions. The Direct Connect Gateway's control plane provides a function similar to a BGP route reflector, and its forwarding plane exchanges traffic between the CGW and multiple VGWs (but not VGW-to-VGW or CGW-to-CGW).
After the CGW connects to the AWS public network via the DX Router and a Public VIF, it can establish a VPN Connection with a VGW.
After the CGW connects to a VPC via the DX Router and a Private VIF, it can establish a VPN Connection with EC2 instances inside the VPC. The CGW supports Tunnel VRF functionality: create a VRF and, inside it, access the VGW and VPC via the DX Router and Private VIF to learn the route of the EC2-instance VPN gateway; then, in the CGW's main routing table, create a tunnel (whose source and destination lie in the VRF's address space) to the EC2-instance VPN gateway and exchange service routes over BGP.
VPC DNS and Route 53:
After an EC2 instance is launched in a VPC, it automatically receives a public DNS name and a private DNS name (when enableDnsSupport and enableDnsHostnames are both TRUE). Inside the VPC, the public DNS name resolves to the private IP address.
The VPC DNS service, reached at the "CIDR+2" address, automatically provides lookups for public Internet domain names, VPC resources, and the Route 53 private hosted zones associated with the VPC. It cannot be accessed from outside the VPC (via VPC Peering, VPN, Direct Connect, etc.) and cannot be configured.
Hybrid DNS has a variety of solutions:
1) Simple AD is an AWS-managed AD service that automatically forwards requests to VPC DNS; its configuration cannot be changed to forward requests to on-premises. By configuring a DHCP Option Set, VPC EC2 instances can use Simple AD's DNS service. If the VPC also needs to resolve on-premises domain names, the EC2 instances that need it can install Unbound, pointing to both the on-premises DNS server and Simple AD. The on-premises DNS server can forward to Simple AD so that on-premises can resolve VPC resource names.
2) Microsoft AD is an AWS-managed AD service that can be configured to forward requests to both VPC DNS and on-premises DNS. By configuring a DHCP Option Set, VPC EC2 instances can use Microsoft AD's DNS service to resolve VPC and on-premises names at the same time. The on-premises DNS server can forward to Microsoft AD so that on-premises can resolve VPC resource names.
3) Deploy Unbound as a DNS server in the VPC and implement Conditional Forwarding, which can forward requests to VPC DNS and on-premises DNS. By configuring a DHCP Option Set, VPC EC2 instances can use the Unbound DNS service to resolve VPC and on-premises names at the same time. The on-premises DNS server can forward to Unbound so that on-premises can resolve VPC resource names.
4) Create a Route 53 private hosted zone and associate it with the VPC. Use CloudWatch scheduled events and a Lambda function to periodically mirror the on-premises DNS database into the Route 53 private hosted zone, effectively creating a secondary DNS for on-premises inside the VPC so the VPC can resolve on-premises names.
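The Conditional Forwarding idea that underpins options 1–3 is simply: pick an upstream resolver by domain suffix. A minimal sketch (the zone names and resolver addresses below are hypothetical; a real deployment would express this in Unbound's forward-zone configuration):

```python
# Suffix -> upstream resolver (illustrative values).
FORWARD_ZONES = {
    "corp.example.com.": "192.168.1.53",  # on-premises DNS server
    "ec2.internal.": "10.0.0.2",          # VPC DNS at the CIDR+2 address
}
DEFAULT_RESOLVER = "10.0.0.2"             # everything else goes to VPC DNS

def pick_resolver(qname: str) -> str:
    """Choose the upstream resolver whose zone suffix matches the query name."""
    if not qname.endswith("."):
        qname += "."
    for suffix, server in FORWARD_ZONES.items():
        if qname.endswith(suffix):
            return server
    return DEFAULT_RESOLVER

print(pick_resolver("db.corp.example.com"))        # on-premises names go on-premises
print(pick_resolver("ip-10-0-1-10.ec2.internal"))  # VPC names go to VPC DNS
```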
Route 53 provides domain registration, DNS service, and Health Checks. Route 53 public hosted zones are externally visible; private hosted zones share the global DNS infrastructure with public zones, but Route 53 answers private hosted zone queries only from associated VPCs, so they cannot be reached externally. They are mainly used for Split-Horizon DNS (the same name resolving to different IP addresses inside and outside the VPC).
Route 53 supports Alias records, which act like pointers and give the DNS resolver the experience of a single query; with CNAME, the DNS resolver needs to query twice. CNAME records cannot be added at the Zone Apex (a DNS protocol requirement), but Alias records can. You can create an Alias for an Alias – a pointer to a pointer. When creating an Alias record in a Route 53 private hosted zone, you cannot point it to a resource in a Route 53 public hosted zone.
Users may end up using a DNS resolver that is geographically far away, which defeats the various routing policies of Route 53. EDNS-Client-Subnet is a DNS protocol extension that allows the DNS resolver to pass the user's IP address (subnet) to the DNS server; Route 53 only takes the user's IP address into account when the resolver supports this extension.
Route 53 Health Checks can monitor specific resources, CloudWatch Alarms/Metrics, and other Health Checks. When creating DNS records, you can associate a Health Check (which does not need to be directly related to the record's target), so that query responses take health into account and avoid unhealthy resources.
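As an illustration, a hypothetical change batch for `aws route53 change-resource-record-sets` that ties a PRIMARY failover record to a health check might look like this (all IDs, names and addresses are made up):

```json
{
  "Comment": "Primary failover record associated with a health check (hypothetical)",
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "app.example.com",
      "Type": "A",
      "SetIdentifier": "primary",
      "Failover": "PRIMARY",
      "TTL": 60,
      "HealthCheckId": "abcdef11-2222-3333-4444-555555fedcba",
      "ResourceRecords": [{ "Value": "203.0.113.10" }]
    }
  }]
}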
ELB runs in an AWS-managed VPC, with one or more ENIs created in each Subnet of the consumer VPC (the Subnets require explicit specification).
For an Internet-facing ELB, the ELB's public domain name resolves to Elastic IP or public IP addresses (the IGW translates packets to the ENI's private address); resolution from inside the VPC gives the same result. It must be deployed in a Public Subnet.
For an Internal ELB, the ELB's domain name is still a public domain name, but it resolves to the private IP addresses of the ENIs; it can be deployed in either a Public or a Private Subnet.
Since ELB adds/removes ENIs and private IP addresses, as well as Elastic or public IP addresses, during dynamic scaling, you must use the ELB domain name rather than IP addresses directly. NLB's ENIs and IP addresses are fixed and can be accessed directly.
CLB is the first generation of ELB and is approaching end-of-life. CLB supports both HTTPS/HTTP and TCP/SSL listeners. The SSL listener is mainly used for SSL offloading; if the CLB does not handle SSL termination and the CA certificate, use a TCP 443 listener instead. CLB's HTTPS/HTTP listeners have very limited application-layer HTTP capabilities, supporting only basic sticky sessions, SSL offloading and similar functions; SNI is not supported.
The SSL negotiation configuration (Security Policy) governs negotiation of the SSL connection between client and ELB, including SSL protocols, SSL ciphers, and server order preference. You can use predefined security policies or customize your own.
ALB is a service optimized for HTTP/HTTPS. It supports load balancing based on URL path and HTTP Host header. It supports SNI, so a single IP address can carry multiple SSL certificates. If IP addresses are used as targets, it supports load balancing across VPC and On-premises resources.
NLB is a service optimized for TCP, using direct flow hashing for high performance. If the back-end servers are required to handle SSL termination and CA certificates, NLB is usually used. If IP addresses are used as targets, load balancing across VPC and On-premises resources is supported.
ELB changes the source address of the IP packet. There are two ways ELB can pass the user's IP address to the back-end server:
1) Proxy Protocol, which adds a header to TCP carrying the user's original information. CLB uses Proxy Protocol v1 (text format) and NLB uses Proxy Protocol v2 (binary format).
2) HTTP X-Forwarded-For, which adds a field to the HTTP header carrying the original information (client IP, proxy IP1, proxy IP2, …); used by CLB and ALB.
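A minimal sketch of how a back-end might recover the client IP from X-Forwarded-For, following the header layout described above (client IP first, then each proxy appends its own address):

```python
def client_ip_from_xff(xff_header: str) -> str:
    """Return the original client IP from an X-Forwarded-For header.

    The header is a comma-separated list: "client, proxy1, proxy2, ...".
    The left-most entry is the address the first proxy saw, i.e. the client.
    """
    return xff_header.split(",")[0].strip()

# Example: a request that traversed two proxies before reaching the server
print(client_ip_from_xff("203.0.113.7, 70.41.3.18, 150.172.238.178"))
# → 203.0.113.7
```

Note that the left-most entry is only trustworthy if every hop in the chain is a proxy you control; a malicious client can pre-populate the header.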
NLB can preserve the user's source IP natively, presumably achieved through deep integration of NLB with the VPC router and IGW.
CLB and ALB support security groups, which are actually the security groups of the ENIs in the consumer VPC; the logic of configuring security groups differs between Internet-facing and Internal ELBs. NLB does not support security groups; the same effect can be achieved indirectly by configuring the security groups of the back-end servers.
The IP addresses of CLB and ALB change during dynamic scaling, so when configuring the back-end servers' security group policy, the rules should reference the security group used by the CLB/ALB rather than IP addresses.
CLB and ALB support access logs; NLB does not.
Connection Draining: during Auto Scaling, ELB stops sending new requests to an EC2 instance that is about to be terminated, but allows it to finish processing in-flight sessions. The default timeout is 300 seconds.
S3 Static Web Hosting service only supports HTTP and returns HTML. The URL is generally as follows: http://xgf-bucket-1.s3-website.us-east-2.amazonaws.com/.
S3 API Endpoint service, supports HTTP and HTTPS, returns XML, and the URL is generally: https://s3.us-east-2.amazonaws.com/xgf-bucket-1.
CORS, Cross-Origin Resource Sharing. When an S3 bucket is used for Static Web Hosting and its content is embedded by other sites (for example via XMLHttpRequest in a web page), CORS must be supported so that clients are allowed cross-origin access to the bucket. You need to configure a policy specifying which origins may access this bucket and which operations they may use, such as GET/POST.
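A hedged example of such a policy, in the JSON shape accepted by `aws s3api put-bucket-cors` (the origin is hypothetical):

```json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://www.example.com"],
      "AllowedMethods": ["GET", "POST"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}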
S3 Transfer Acceleration uses CloudFront's global distribution network to download/upload objects over an optimized path. First, enable Transfer Acceleration on the bucket, then use a new web domain name (not the API domain name) – "bucketname.s3-accelerate.amazonaws.com" – to locate the nearest edge node; the principle is similar to a CDN.
The IAM/bucket policies of the S3 bucket and its objects are configured separately. When they are used for web hosting, public access must be allowed.
S3 API Endpoint supports the Signed-URL capability. The general principle is as follows:
1) When external users access AWS over HTTP (a specific URL), AWS needs to be able to identify the sender: verifying the requester's identity, preventing the request from being tampered with, enforcing a request deadline, and so on.
2) Use a hash to make a digest of the request (the URL represents a resource – an image, a web page, etc.), then use a signing key to produce a digital signature of the digest, and place it either in the HTTP Authorization header or in the URL as a query string.
3) The signed URL is sent to the customer, and the customer uses it for access. On receipt, AWS recomputes the digest and signature from the signing key and the request; if they match the signature presented, the request is accepted (AWS knows the request has not been changed and who made it).
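The sign-and-verify flow above can be sketched with a generic HMAC. This is only an illustration of the principle, not AWS's actual SigV4 algorithm; the key and URL are made up:

```python
import hashlib
import hmac

SIGNING_KEY = b"hypothetical-signing-key"  # derived secret, never sent to the client

def sign_url(url: str, expires_at: int) -> str:
    """Append an expiry and an HMAC signature to the URL as query-string params."""
    payload = f"{url}?Expires={expires_at}"
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}&Signature={sig}"

def verify(signed_url: str, now: int) -> bool:
    """Recompute the digest server-side; reject if altered or expired."""
    payload, _, sig = signed_url.rpartition("&Signature=")
    expected = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    expires_at = int(payload.rpartition("?Expires=")[2])
    return hmac.compare_digest(sig, expected) and now < expires_at

url = sign_url("https://example.com/image.png", expires_at=1700000000)
print(verify(url, now=1699999999))        # True: signature intact, not expired
print(verify(url + "x", now=1699999999))  # False: the request was changed
print(verify(url, now=1700000001))        # False: past the deadline
```

The three checks mirror the three requirements in step 1: identity (only a holder of the key can sign), integrity (any change breaks the HMAC), and deadline (the expiry is part of the signed payload).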
The application scenarios of Signed URL and Token differ (in the no-password case): a Signed URL allows "outsiders" to access specific resources and services, identified by URL, for a period of time; a Token allows "outsiders" to obtain temporary permissions and access a group of resources for a period of time.
When using the S3 Static Web Hosting service with an alias, the DNS name must be the same as the bucket name. This is because S3 Static Web Hosting serves many buckets of many accounts from shared endpoints; it finds the correct bucket from the Host field in the HTTP header.
CloudFront, a reverse proxy (proxy server), uses Route 53's geography-based routing to return the nearest resource to the requester.
CloudFront supports web distribution and RTMP distribution.
The domain names distributed by CloudFront are different from those of origin.
Usually, the origin handles dynamic requests and hands the static resources in the web page to CloudFront; you can also submit both dynamic and static requests to CloudFront.
CloudFront can be integrated with S3, ELB, EC2 and third-party servers. When integrated with S3, you can use an OAI to achieve CloudFront-to-origin access control; when integrated with other resources, you can use custom HTTP headers for CloudFront-to-origin access control, making the origin inaccessible to other CloudFront distributions and to non-CloudFront traffic.
When serving private content, you can use a Signed URL (for a single file) or Signed Cookies (for a group of files). The signing key is generally derived from the private key, not the private key itself.
Using CloudFront gives you a DNS domain name. You can use it directly, or create a friendly CNAME record or Alias record (if Route 53 is used). However, you must tell CloudFront the friendly DNS name, because CloudFront determines which distribution a request belongs to from the HTTP Host field.
Between CloudFront and the viewer, you can use HTTP, HTTPS, or Redirect HTTP to HTTPS; between CloudFront and the origin, you can use Match Viewer, HTTP, or HTTPS. The certificate needs to be provisioned in US East and is automatically propagated to all regions of the world.
Lambda@Edge: the processing points are viewer request, origin request, origin response, and viewer response. Use cases include cookie checking, URL rewriting, dynamic modification of custom HTTP headers, and A/B testing (a web-page optimization method: some customers see version A and some see version B, to compare the strengths and weaknesses of the two designs).
Using CloudFront's geo-blocking function: CloudFront uses a GeoIP database to determine the user's location, with an accuracy of about 99.8%; in the Restrictions of a Web Distribution, configure a Whitelist or Blacklist under Geo Restriction; blocked users receive 403 (Forbidden) from CloudFront.
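As a sketch, the Geo Restriction fragment of a CloudFront distribution configuration (as used with `aws cloudfront update-distribution`) might look like this whitelist allowing only two hypothetical countries:

```json
"Restrictions": {
  "GeoRestriction": {
    "RestrictionType": "whitelist",
    "Quantity": 2,
    "Items": ["US", "CA"]
  }
}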
Using a third-party geolocation service (requires support from the origin server): upload the content to an S3 bucket, use an OAI, and serve the private content through CloudFront; write a web application that calls the geolocation service with the user's IP; if allowed, return a Signed URL for the CloudFront-distributed content (CloudFront validates the Signed URL when the user's request arrives); if not allowed, return 403.
ACM – AWS Certificate Manager:
AWS manages TLS certificates and supports CloudFront, ELB, Elastic Beanstalk, API Gateway, etc. You can create CA-issued certificates, or import your own certificates into ACM. Certificates issued by ACM are valid for 13 months and are automatically renewed. ACM is Regional and handled separately in each Region; certificates for CloudFront need to be processed centrally in US East (N. Virginia).
AWS Shield:
The Standard service protects against common attacks (SYN/UDP flooding) at layers 3/4; it is free, always on, and responds dynamically to changes.
The Advanced service covers Route 53 hosted zones, CloudFront distributions, ELB, etc., up to layer 7, and provides attack information. After an enterprise moves to the cloud, horizontally scaled applications can absorb a DoS attack, but the bill shows who has been attacked (EDoS – suffering from DoS economically): a DoS attack may not affect your network, but it will affect your expenses. Shield Advanced provides cost protection against DDoS attacks, but only for Route 53 hosted zones, CloudFront distributions, ELB and similar services. After being attacked, you can deploy AWS WAF (free when using Shield Advanced); you can also contact the DRT (DDoS Response Team) to identify the attack pattern. The DRT can help you deploy AWS WAF; you need to provide a cross-account IAM role.
GuardDuty: an intelligent threat detection service that monitors and protects your AWS accounts and workloads. It analyzes large amounts of data (CloudTrail, VPC Flow Logs, DNS logs, etc.) without requiring probes, so it does not affect the availability or performance of the workload. The analysis is holistic, including accounts.
Inspector: analyzes the VPC environment and identifies security issues. EC2 instances need the Inspector Agent installed to monitor the behavior of the operating system and applications. Scoped to VPC and EC2.
Macie: uses machine learning (ML) to discover, classify and protect sensitive data, mainly data stored in S3.
Xen virtualization and enhanced networking:
Xen is responsible for CPU and memory; Dom0 is responsible for virtual machine management and I/O virtualization. Xen runs on bare metal; Dom0 is equivalent to the host OS, a privileged virtual machine. Xen supports HVM (hardware virtualization, requiring VT-x/VT-d) and PV (paravirtualization: the guest OS kernel is modified, replacing sensitive instructions with function calls). Xen's main operating modes are as follows:
1) PV mode (paravirtualization, all in software): no CPU virtualization support is required; the guest OS kernel is modified to complete CPU and memory virtualization; I/O requests are sent to the real device drivers in Dom0.
2) PV on HVM mode (full virtualization with hardware assist, but paravirtualized I/O): the chip supports CPU and memory virtualization, and I/O requests are sent to the real device drivers in Dom0 (the guest OS's I/O drivers are modified; some standard virtual NICs and drivers are supported by default), bypassing the fully emulated I/O stage – analogous to KVM's virtio scheme.
3) SR-IOV PCI passthrough mode (requires HVM): uses Intel VT-d to assign a PF/VF directly to the guest OS.
PV AMIs and HVM AMIs boot differently: an HVM AMI boots directly from the MBR and can additionally install PV network drivers (mainly for enhanced networking / SR-IOV) to improve I/O performance; a PV AMI uses pv-grub to load menu.lst and then the OS kernel.
Enhanced Networking: uses SR-IOV-capable instance types and requires hardware support from the host; only instance types that support HVM can support enhanced networking. Within a VPC and to the Internet, it supports single-flow 5 Gbps (10 Gbps in a placement group) and multi-flow 10 Gbps or at most 25 Gbps (depending on the hardware NIC – Intel 82599 or ENA).
AMI support is required to enable enhanced networking (if the AMI is not enabled and no drivers are installed, VMs can only use the paravirtual NIC). For instance types using the Intel 82599: the AMI must install the Intel ixgbevf driver and set the sriovNetSupport attribute (the latest AMIs have this set). For instance types using ENA: the AMI must install the ENA driver and set the enaSupport attribute (all the latest AMIs have this set).
Infrastructure as code and developer tools:
The infrastructure is described as code; versions are managed with AWS CodeCommit or GitHub, deployment is done by CloudFormation, and end-to-end collaboration by CodePipeline.
Validation errors – spelling and formatting problems – can be found by preprocessing and do not require a rollback.
Semantic errors can only be found when resources are actually created, and require a rollback.
References and DependsOn affect the order of resource creation.
Retaining resources: set the DeletionPolicy to Retain when the resource is defined in the template, and it is kept when the stack is deleted.
Some resources may be deleted or replaced when a new template is used for an update. When creating a stack, provide a JSON stack policy file to define protections (deny Update:Delete or Update:Replace) that prevent resources from being removed by the new template.
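A sketch of such a stack policy, denying destructive updates on one hypothetical resource while allowing everything else:

```json
{
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": "*",
      "Action": ["Update:Replace", "Update:Delete"],
      "Resource": "LogicalResourceId/ProductionDatabase"
    },
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": "Update:*",
      "Resource": "*"
    }
  ]
}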
Change Sets: for the current stack, create a Change Set, review the differences, and then execute it; this helps manage stack upgrades and prevents destructive updates. Specifically, a new template is provided and, before deployment, compared with the running stack to produce a visualized set of changes, which is finally executed.
Configuring non-AWS resources: CloudFormation can create Custom Resources. When CloudFormation executes a template and creates a custom resource, it can send a message through SNS (a reminder for manual action) or invoke a Lambda function (e.g. configuring a CGW on the customer side via Python and SSH); CloudFormation then provides a signed URL through which the custom resource reports its result (ID, Status). In this way, CloudFormation can manage non-AWS resources as well.
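A sketch of the feedback step: a hypothetical Lambda-backed custom resource builds a response body and PUTs it to the signed URL CloudFormation passed in the event. The field names follow the custom-resource contract; the event values and the provisioning logic itself are made up or omitted:

```python
import json
import urllib.request

def build_response(event: dict, status: str, physical_id: str, data: dict) -> dict:
    """Assemble the result (ID, Status) that CloudFormation expects back."""
    return {
        "Status": status,                   # "SUCCESS" or "FAILED"
        "PhysicalResourceId": physical_id,  # our own ID for the non-AWS resource
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data,                       # values retrievable via Fn::GetAtt
    }

def send_response(event: dict, body: dict) -> None:
    """PUT the JSON body to the pre-signed S3 URL CloudFormation provided."""
    req = urllib.request.Request(
        event["ResponseURL"], data=json.dumps(body).encode(), method="PUT"
    )
    urllib.request.urlopen(req)

# Example event fragment (hypothetical values)
event = {
    "StackId": "arn:aws:cloudformation:us-east-1:111122223333:stack/demo/uuid",
    "RequestId": "req-1234",
    "LogicalResourceId": "OnPremCGW",
    "ResponseURL": "https://cloudformation-custom-resource.s3.amazonaws.com/...",
}
body = build_response(event, "SUCCESS", "cgw-onprem-01", {"TunnelIp": "203.0.113.5"})
print(body["Status"])  # → SUCCESS
```

Until CloudFormation receives this response (or times out), the stack operation stays in progress, which is why the signed-URL callback is mandatory.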
You should create a service role for CloudFormation to create/change/delete stacks; otherwise it uses the IAM permissions of the caller.
CodeCommit, a managed source-control service (private Git repositories); you can still use Git's CLI for version management. New features are developed on branches to avoid conflicts; after review, they are merged into the mainline.
CodePipeline – CD: enables rapid Build – Test – Deploy of updates and SaaS products, fully compatible with Jenkins' capabilities and usage habits – essentially Jenkins on the cloud, provided as SaaS. CodePipeline can respond to triggers from CodeCommit or poll periodically.
Shared Services VPC and Transit VPC:
Application scenario of a Shared Services VPC: a large number of resources are on AWS, and On-premises access goes through a proxy; the proxy controls access between AWS and On-premises. The services provided by a Shared Services VPC include shared services (AD, DNS, database replicas, etc.) and remote-access proxies (for mutual access between Spoke VPCs and On-premises), such as HTTPS or SOCKS proxies, which require managing some resources on AWS.
Transit VPC scenario: a large number of Spoke VPCs access On-premises, it is difficult to move the On-premises resources to AWS, and the routing is complex. EC2-based VPN instances in the Hub VPC connect to the Spoke VPCs' VGWs and to the On-premises CGWs. The VPN connections cannot be omitted: even if VPC Peering exists between the Hub VPC and a Spoke VPC, the VPN connection must still be established; even if Direct Connect exists between On-premises and the Hub VPC, a VPN connection is still needed.
There are four scenarios and implementation schemes of transit VPC:
Scheme 1: two trusted VPCs are directly interconnected through VPC Peering; static routes take priority over VPN/BGP routes, bypassing the Transit VPC Hub.
Scheme 2: mutual trust; On-premises connects directly to the Spoke VPC's VGW through a private VIF and a Direct Connect Gateway. The path is short and its routing priority high, bypassing the Transit VPC Hub.
Scheme 3: the Transit Hub VPC and a remote VPC are interconnected through VPC Peering (providing high bandwidth); the EC2 instance in the Transit Hub VPC still needs to establish IPsec with the EC2 instance in the remote VPC.
Scheme 4: the CGW's VRF connects to the VPC's VGW through a private VIF / DX, and then establishes an IPsec tunnel with the EC2 instance in the VPC (to obtain DX's high bandwidth); this requires the CGW to support Tunnel VRF.
Billing and Data Transfer:
There are three kinds of network related fees: service / port hourly fee, data processing fee and data transmission fee.
VPN connections: charged by Connection-Hour, plus a data transmission fee (in the direction away from AWS).
Direct Connect: charged by Port-Hour plus a data transmission fee (away from AWS). For a Hosted Connection, Port-Hour is charged as soon as you accept it; for a Hosted VIF, the recipient pays the data-transfer fees while Port-Hour is still paid by the owner.
Data Transfer – Internet: between the AWS network and the Internet (assuming you access AWS over the Internet). Traffic into AWS is free; traffic out of AWS is USD 0.09/GB (paid by the owner of the accessed resource), which covers the settlement between AWS and other networks.
Data Transfer – Region to Region: between Regions over the AWS network; no charge in the incoming direction, USD 0.02/GB in the outgoing direction.
CloudFront: normal charge from edge to user; origin in AWS network, no charge for traffic from origin to CloudFront; charge for uploading data, USD0.02/GB.
Data Transfer – Same Region: traffic between AWS public services in the same Region incurs no data-transfer fee (but the AWS public services themselves charge); accessing AWS public services in a different Region incurs both the AWS service fee and data-transfer fees.
Data Transfer-Inter-AZ (not Subnet), two-way charge, USD0.01/GB for each direction.
Data Transfer-VPC Peering: Communication between VPC Peering, EC2 instances of the same Region, two-way charge, USD0.01/GB in each direction.
For Direct Connect access to the AWS network and VPCs, public VIFs and private VIFs involve no port traffic cost. When accessing someone else's resources, the other party pays USD 0.09/GB (away from AWS); when accessing your own resources, the reduced rate of USD 0.02/GB applies (away from AWS).
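The rates quoted in this section can be combined into a toy estimator. The numbers are hard-coded from the text above, are illustrative only (not current AWS pricing), and ignore tiered pricing and free allowances:

```python
# USD/GB rates as quoted in this section (illustrative, not current pricing)
RATES = {
    "internet_out": 0.09,      # AWS -> Internet, paid by the resource owner
    "internet_in": 0.0,        # Internet -> AWS is free
    "region_to_region": 0.02,  # outgoing direction between Regions
    "inter_az": 0.01,          # per direction within a Region
    "vpc_peering": 0.01,       # per direction, same Region
}

def transfer_cost(kind: str, gb: float, bidirectional: bool = False) -> float:
    """Estimate data-transfer cost; inter-AZ/peering traffic is billed both ways."""
    multiplier = 2 if bidirectional else 1
    return round(RATES[kind] * gb * multiplier, 2)

print(transfer_cost("internet_out", 100))                 # 100 GB out to the Internet
print(transfer_cost("inter_az", 50, bidirectional=True))  # 50 GB each way between AZs
```

Running this prints 9.0 and 1.0 – a reminder that inter-AZ chatter, billed in both directions, adds up even at a low per-GB rate.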
3. Brief analysis of AWS strategies
AWS has built an IP backbone network covering the world, connecting all regions (except China and the U.S. government cloud), which is convenient for enterprises to quickly provide services to the outside world and realize the interconnection of internal business.
AWS cooperates with hundreds of regional operators around the world, pushing POP points downward to reach nearby enterprise customers through the Direct Connect service, enabling enterprise customers to move to the cloud with high quality and low cost, build hybrid clouds, access AWS public services, and provide external services.
Through the global IP backbone network and direct connect, combined with various cloud services, AWS has basically built an end-to-end closed-loop system. As long as enterprises access AWS, the traffic can be digested within AWS. Of course, enterprises still need to rely on the interconnection between AWS Internet and other Internet to provide services.
Recently, it has been rumored that AWS will develop enterprise-side boxes, which would be quite natural. At present, AWS's Direct Connect and VPN services must interoperate with dozens of software and hardware products from more than ten On-premises vendors, whose capabilities and configuration parameters all differ. Instead of constantly adapting to these products in the cloud, another approach is to provide its own box and normalize the technology. With its own box on premises as a starting point, AWS could launch more competitive hybrid cloud services, including routing, security and encryption, reliability, DNS resolution, storage solutions and other capabilities.
As traditional enterprises accelerate their move to the cloud, cloud computing will have a profound impact on the entire communications industry.
Impact on the operator market: after enterprises move to the cloud, their internal interconnection will naturally shift from traditional MPLS VPN to cloud private lines; the provisioning speed and service integration of cloud private lines are far better than MPLS VPN. It can be predicted that the long-distance leased-line market will shift from operators to cloud service providers, while cloud service providers will still rely on the local lines and customer access provided by operators. In the future, operators will still lead in the personal/consumer Internet, but cloud service providers with global coverage will dominate the high-value enterprise Internet.
Impact on the enterprise market: after enterprises move to the cloud, they will gradually reduce investment in traditional IT and network equipment and long-distance line rental, and shift to consuming cloud services. Due to economies of scale and high efficiency, every $1 spent in the cloud reduces On-premises investment by $4, so the market of traditional equipment manufacturers and software vendors will gradually be eroded; most enterprise networks will eventually evolve into a home-access-like model.
At the same time, cloud computing will also bring new opportunities to the communications industry. To meet the needs of traditional enterprises, many kinds of cloud routers need to be built for network security and network customization – VPN gateways, VPN access servers, firewalls, web application firewalls, NAT and so on; many traditional equipment manufacturers and new vendors have invested in this field, launched corresponding software products, and integrated them with the mainstream cloud service platforms.
The following figure shows the future enterprise digital infrastructure platform of a multinational company: apart from local access resources, the enterprise's IT, software, network and other resources will all be built on the public cloud platform.