Amazon Web Services (AWS) is a comprehensive and evolving cloud computing platform provided by Amazon. It offers a wide range of services that help organizations build and deploy applications and services in the cloud, including on-demand computing power, storage, databases, machine learning, and analytics, helping businesses scale and grow.
Types of AWS Services Based on Scope
Global Services
These services operate across all AWS Regions and are not confined to a specific geographic area. They are designed to work seamlessly irrespective of region and are managed centrally.
Ex. Amazon CloudFront, AWS Identity and Access Management (IAM)
Regional Services
They are designed to operate within a specific AWS Region. They are deployed separately in each region, allowing for data locality and compliance with regional regulations.
Ex. Amazon EC2, Amazon S3
Regions
A Region is a geographical area that encompasses multiple Availability Zones. AWS's global infrastructure is divided into Regions, each consisting of Availability Zones (AZs), which are clusters of data centers designed for high availability and fault tolerance.
Ex. US East (N. Virginia)
Availability Zones (AZs)
(1 region = 3 or more AZs)
- An Availability Zone is one or more isolated data centers within a specific AWS Region that operate independently but are interconnected through low-latency, high-speed networks.
- Each Availability Zone is designed as an independent failure zone. Ex. us-east-1a, us-east-1b
Point of Presence (POP)
It refers to a physical location or access point where network infrastructure or services, such as data centers, servers, or network equipment, are deployed to enable faster content delivery, reduce latency, and improve network performance.
Ex. Route 53 also uses PoPs for DNS resolution. When a user makes a DNS request, the PoP closest to the user responds to the query, ensuring faster resolution times.
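The nearest-PoP idea above can be sketched in a few lines. This is a toy illustration, not how Route 53 actually resolves queries internally; the PoP names and latency figures are hypothetical.

```python
# Hypothetical measured latencies (ms) from one user to several PoPs.
pops_latency_ms = {"mumbai": 120, "frankfurt": 45, "virginia": 15}

def nearest_pop(latencies_ms):
    """Return the PoP with the lowest measured latency to the user."""
    return min(latencies_ms, key=latencies_ms.get)

nearest_pop(pops_latency_ms)  # this user's query is answered from Virginia
```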
Local Zones
They are extensions of AWS Regions, located in specific geographic locations to bring compute, storage, and other AWS services closer to end users.
Ex. Media streaming, hybrid cloud, gaming
Wavelength Zones
They are extensions of AWS infrastructure designed to provide ultra-low-latency services directly within telecommunication providers' 5G networks.
Ex. AR/VR, IoT, autonomous cars
Compute Services:
- Amazon Elastic Compute Cloud (Amazon EC2)
- Amazon Elastic Container Service (Amazon ECS)
- AWS Lambda
Storage Services:
- Amazon Simple Storage Service (Amazon S3)
- Amazon Elastic Block Store (Amazon EBS)
- Amazon S3 Glacier
Database Services:
- Amazon Relational Database Service (Amazon RDS)
- Amazon DynamoDB
- Amazon Redshift
Networking Services:
- Amazon Virtual Private Cloud (Amazon VPC)
- Amazon Route 53
- AWS Direct Connect
Security Services:
- AWS Identity and Access Management (IAM)
- Amazon GuardDuty
- AWS Key Management Service (KMS)
Analytics Services:
- Amazon Athena
- Amazon EMR
- Amazon Kinesis
Machine Learning Services:
- Amazon SageMaker
- Amazon Comprehend
- Amazon Rekognition
Management and Monitoring Services:
- AWS Management Console
- AWS CloudFormation
- Amazon CloudWatch
Developer Tools:
- AWS CodeCommit
- AWS CodeBuild
- AWS CodeDeploy
Internet of Things (IoT):
- AWS IoT Core
- AWS IoT Greengrass
AWS GovCloud (US)
It is a specialized AWS region designed to host sensitive workloads in the cloud while meeting strict U.S. government compliance and regulatory requirements. It is intended for use by U.S. government agencies, contractors, and other organizations managing regulated data.
Identity and Access Management (IAM)
It is a service provided by AWS (Amazon Web Services) that enables you to manage access to AWS services and resources securely.
IAM allows you to create and manage IAM users, which are individual identities with unique security credentials that can be used to access AWS services.
Accounts:
1) Root account:
- This is the identity created when you first sign up for AWS. It has full administrative access to all AWS services and resources.
- It is recommended to secure the root account with a strong password and enable multi-factor authentication (MFA) to protect it from unauthorized access.
- It is not recommended to use the root account for day-to-day operations or application access.
2) IAM User:
- They are created within the AWS account using IAM.
- They are used to manage access to AWS resources for individuals or applications. They can have different levels of permissions based on the policies attached to them.
- IAM users are safer for everyday use than the root account because their permissions are limited to the policies assigned to them.
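For illustration, an identity-based policy granting read-only access to a single S3 bucket might look like the following (the bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-bucket",
        "arn:aws:s3:::example-bucket/*"
      ]
    }
  ]
}
```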
Amazon CloudFront
It is a Content Delivery Network (CDN), which is built on top of AWS's global network of edge locations.
Edge locations are separate from AWS regions and are specifically designed for caching and delivering content with low latency to users.
It automatically routes each request through the edge location nearest to the user.
Example:
- Your main website in India becomes the origin server for CloudFront.
- When you set up CloudFront, its global edge locations come into play.
- If you access your website from the US, CloudFront serves your request from the nearest edge location to you (e.g., an edge location in the US).
- Initially, if the requested content is not cached at that edge location, CloudFront fetches it from your origin server in India, caches it, and then serves it to you.
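The cache-on-first-request behaviour described in the steps above can be sketched as follows. This is a simplified illustration; the origin_fetch function and the path are stand-ins, not CloudFront's actual implementation.

```python
edge_cache = {}

def origin_fetch(path):
    # Stand-in for a round trip to the origin server in India.
    return f"content-of-{path}"

def serve_from_edge(path):
    """Serve from the edge cache; on a miss, fetch from origin first."""
    if path not in edge_cache:
        edge_cache[path] = origin_fetch(path)  # first request: cache miss
    return edge_cache[path]                    # later requests: cache hit
```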
Amazon SNS (Simple Notification Service)
It is a fully managed messaging service provided by AWS. It enables applications, end-users, and distributed systems to send notifications and messages efficiently at scale. SNS supports both push notifications to subscribers and message-based communication between different parts of an application.
Amazon Braket
It is a fully managed quantum computing service provided by AWS. It allows researchers, developers, and businesses to explore, experiment with, and build applications using quantum computing technologies.
Amazon ElastiCache
- It is a fully managed in-memory data store and caching service offered by AWS.
- It is designed to enhance the performance of web applications by enabling quick access to frequently used data through in-memory caching, which is significantly faster than disk-based databases.
- Integrating ElastiCache into your existing backend typically requires making changes to your backend logic.
Supported engines:
- Redis
- Memcached
Example:
- If your website has to make repetitive queries to the database (e.g., retrieving user profiles, product information, or search results), this can slow down performance.
- You can cache the results of these database queries in Redis or Memcached. The first time the query is made, it fetches data from the database, and subsequent requests fetch data directly from ElastiCache, which is much faster (since it's in-memory).
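The pattern above (often called cache-aside) can be sketched minimally like this. The fake_db dict stands in for your database and cache for Redis or Memcached; a real backend would use a Redis or Memcached client library instead.

```python
fake_db = {"user:1": {"name": "Alice"}}  # hypothetical database records
cache = {}

def get_user(user_id):
    key = f"user:{user_id}"
    if key in cache:          # fast path: in-memory hit
        return cache[key]
    record = fake_db[key]     # slow path: query the database
    cache[key] = record       # cache it for subsequent requests
    return record
```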
AWS Budgets
It is a service provided by AWS that allows you to set custom cost and usage budgets for your AWS resources. You can monitor your usage and costs, get alerts when you are close to reaching or exceeding your budget thresholds, and track your spending over time.
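The threshold-alert idea can be shown with a toy check: notify once actual spend crosses a chosen percentage of the budget. The 80% default here is an arbitrary example, not an AWS default.

```python
def over_threshold(spend, budget, threshold_pct=80):
    """True when spend has reached threshold_pct of the budget."""
    return spend >= budget * threshold_pct / 100
```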
Amazon CloudWatch
It is a monitoring and observability service provided by AWS that enables you to collect, access, and analyze metrics, logs, and events from your AWS resources and applications in real time. CloudWatch monitors backend performance and infrastructure health, whereas a tool like Google Analytics monitors frontend user behavior and engagement.
- Set alarms on any of your metrics to receive notifications when a metric crosses a specified threshold.
- Monitor using your existing system, application, and custom log files.
- Write rules to indicate which events are of interest to your application and what automated action to take.
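The alarm idea in the bullets above can be sketched as a simple threshold check: go to ALARM when the last N datapoints all breach the threshold. This is loosely modelled on CloudWatch's "datapoints to alarm" concept, not its real evaluation algorithm.

```python
def alarm_state(datapoints, threshold, periods=3):
    """ALARM if the last `periods` datapoints all exceed the threshold."""
    recent = datapoints[-periods:]
    breached = len(recent) == periods and all(d > threshold for d in recent)
    return "ALARM" if breached else "OK"
```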
Amazon Simple Storage Service (S3)
It is a scalable object storage service offered by Amazon Web Services (AWS) for storing and retrieving any amount of data at any time, from anywhere on the web. It is commonly used for storing backups, archives, data lakes, media files and logs.
Bucket URL:
https://<bucket-name>.s3.<region>.amazonaws.com/<object-key>
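A small helper can assemble the virtual-hosted-style URL shown above; the bucket, region, and key values below are placeholders.

```python
def s3_object_url(bucket, region, key):
    """Build the virtual-hosted-style object URL for an S3 bucket."""
    return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"

s3_object_url("my-bucket", "us-east-1", "photos/cat.jpg")
```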
Amazon S3 Transfer Acceleration
It is a feature of Amazon Simple Storage Service (S3) that speeds up the upload and download of files to and from S3 buckets by routing traffic through Amazon CloudFront's globally distributed edge locations.
- When you enable transfer acceleration, Amazon S3 directs the upload or download requests to the nearest CloudFront edge location. These edge locations are strategically placed around the world and act as entry points to Amazon S3.
- Once the request is received at an edge location, it is routed through the optimized CloudFront network to the appropriate S3 bucket, significantly reducing latency and increasing the speed of file transfers.
AWS Direct Connect
- It is a cloud service that allows you to establish a dedicated network connection between your on-premises data center, office, or colocation environment and Amazon Web Services (AWS).
- This direct, private connection can improve the performance, security, and reliability of your AWS resources and applications by bypassing the public internet.
- Since the traffic doesn't go through the public internet, Direct Connect offers more consistent network performance compared to typical internet connections. This is especially beneficial for applications requiring high throughput or low-latency communication.
- Direct Connect locations are not part of AZs and are focused on enabling network connectivity between your on-premises infrastructure and AWS.
Amazon Elastic Compute Cloud (Amazon EC2):
It is a web service offered by Amazon Web Services (AWS) that provides resizable compute capacity in the cloud. It allows users to launch and manage virtual servers (instances) on-demand, making it easier to scale applications and manage workloads.
Advantages of Cloud Computing
1) Trade fixed expense for variable expense
2) Benefit from massive economies of scale
3) Stop guessing capacity
4) Increase speed and agility
5) Stop spending money running and maintaining data centers
6) Go global in minutes
Fault Domain
It refers to a logical or physical boundary within an infrastructure that can fail independently without affecting other parts of the system.
AWS Global Accelerator
It is a service provided by Amazon Web Services (AWS) that improves the availability, performance, and reliability of your applications with global users. It is designed to route user traffic to the optimal endpoint based on factors such as the health and geographic location of the users.
AWS Ground Station
It is a fully managed service that enables customers to control satellite communications, process satellite data, and integrate it with AWS services. It simplifies the process of interacting with satellites by providing ground stations as a service, eliminating the need for organizations to build and maintain their own satellite ground station infrastructure.
High Availability:
It refers to the ability of a system or component to remain operational and accessible with minimal downtime, even during failures or maintenance events.
AWS Elastic Load Balancing (ELB)
It is a fully managed service that automatically distributes incoming application traffic across multiple targets (such as Amazon EC2 instances, containers, IP addresses, and Lambda functions) to ensure high availability, fault tolerance, and scalability for your applications. It automatically adjusts to changes in incoming traffic, ensuring that no single instance is overwhelmed, thus improving application performance.
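Traffic distribution can be sketched with the simplest strategy, round robin: each incoming request goes to the next target in turn. This is an illustration of the idea, not ELB's implementation, and the instance IDs are hypothetical.

```python
import itertools

targets = ["i-aaa", "i-bbb", "i-ccc"]   # hypothetical EC2 instance IDs
_rotation = itertools.cycle(targets)

def route_request():
    """Send each incoming request to the next target in rotation."""
    return next(_rotation)
```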
High Elasticity
It refers to the ability of a system or service to automatically scale its resources up or down based on current demand, ensuring optimal performance while minimizing costs.
In the context of cloud computing, elasticity allows you to quickly adapt to changes in workload and traffic patterns by provisioning or de-provisioning resources as needed, without manual intervention.
Auto Scaling Groups (ASG)
They are a feature in AWS that automatically adjusts the number of EC2 instances in response to changes in demand. By using Auto Scaling Groups, you can ensure that your application has the right number of instances available to handle traffic efficiently, improving performance, availability, and cost management.
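The scaling decision can be sketched in the style of target tracking: size the group in proportion to how far actual CPU utilisation is from a target. The 50% target and the size bounds are illustrative choices, not AWS defaults.

```python
import math

def desired_capacity(current, cpu_pct, target_pct=50, min_size=1, max_size=10):
    """Scale the instance count toward the target utilisation, clamped
    to the group's minimum and maximum size."""
    desired = math.ceil(current * cpu_pct / target_pct)
    return max(min_size, min(max_size, desired))
```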
High Fault Tolerance
It refers to the ability of a system to continue operating smoothly and without interruption despite failures in some of its components.
Amazon RDS Multi-AZ
It offers Multi-AZ deployments, where data is replicated across multiple AZs. If the primary database fails, the system automatically fails over to the standby database in another AZ with minimal downtime.
High Durability
- It refers to the ability of a system to reliably store and preserve data over time, ensuring that data remains intact and accessible even in the event of failures, corruption, or other issues.
- In the context of cloud computing and AWS, durability is a measure of how well the cloud platform can protect data against loss or corruption, often by using replication and redundancy techniques.
CloudEndure Disaster Recovery (CloudEndure DR)
- It is a comprehensive, fully automated disaster recovery solution offered by AWS.
- It is designed to minimize downtime and data loss in case of disasters, whether they are natural events, system failures, or other catastrophic incidents.
- It helps businesses quickly recover their workloads on AWS after a disaster by replicating live data and applications to the cloud.
Amazon Machine Image (AMI) (Software configuration)
It is a pre-configured template used to create virtual machines (instances) in EC2. It contains the necessary information to launch an instance, including the operating system, application server, applications, and any associated configurations.
Launching Multiple Instances from the AMI
1) Start by launching a single EC2 instance and configure it with your web application, necessary software (like a web server, database, etc.), and any required settings.
2) Once the instance is fully configured and tested, create an AMI from this instance.
3) In the AWS Management Console, use the AMI to launch 10 new EC2 instances.
Business Continuity Plan (BCP)
- It is a strategic plan that outlines how a business will continue operating during and after a major disruption or disaster. The primary goal of a BCP is to ensure that essential business functions remain operational during crises, allowing the business to recover quickly and effectively.
- A BCP typically addresses various types of potential risks, including natural disasters, cyberattacks, power outages, and any other events that could disrupt normal business operations.
Recovery Point Objective (RPO)
It refers to the maximum amount of data loss that is acceptable during an unplanned disruption or disaster.
It is closely tied to the frequency of backups or data replication
Ex. if the RPO is set to 4 hours, the business can afford to lose up to 4 hours' worth of data, but no more. Any data older than 4 hours would be recovered from backups.
Recovery Time Objective (RTO)
It refers to the maximum allowable time that an application or system can be down after a disaster or outage before it significantly impacts the business.
Ex. if the RTO is 2 hours, the system must be restored within 2 hours after a failure to avoid critical business impact.
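The two objectives can be checked with a small helper: data loss is measured from the last backup, downtime from failure to restoration. The 4-hour RPO and 2-hour RTO defaults mirror the examples in the text; the timestamps are hypothetical.

```python
from datetime import datetime, timedelta

def meets_objectives(last_backup, failed_at, restored_at,
                     rpo=timedelta(hours=4), rto=timedelta(hours=2)):
    """True when both the RPO (data loss) and RTO (downtime) are met."""
    data_loss = failed_at - last_backup
    downtime = restored_at - failed_at
    return data_loss <= rpo and downtime <= rto
```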
AWS Climate Pledge Fund
The AWS Climate Pledge Fund is an investment program initiated by Amazon to accelerate the development and deployment of technologies that help combat climate change. It aligns with The Climate Pledge, Amazon’s commitment to reach net-zero carbon emissions by 2040.
Stay Connected!
If you enjoyed this post, don’t forget to follow me on social media for more updates and insights:
Twitter: madhavganesan
Instagram: madhavganesan
LinkedIn: madhavganesan