Topics covered:
- Cross Account
- Lambda
- Resource Access Manager
- Workspaces
- Organizations
- SAML
- STS
- ECS
- ElastiCache
- Secrets Manager
- Systems Manager
- Parameter Store
- Backup
- Rest of services
This is the final post of the series about the SAA (Solutions Architect Associate) certification.
Cross Account
Definition: Cross-account access makes it easier for you to work productively within a multi-account AWS environment by letting you switch roles within the AWS Management Console. You can sign in to the console using your IAM username and then switch the console to manage another account without having to enter or remember another username or password.
Goal: let users switch between accounts (for example, from the "development" account to the "production" account) by using IAM policies and sts:AssumeRole.
- Create a group in IAM – development
- Create a user in IAM – development
- Log in to Production
- Create “read-write-app-bucket” policy (in Production account)
- Create “UpdateApp” Cross Account Role (in Production account)
- Apply the policy to the new Role
- Log into the Developer account
- Create a new inline policy
- Apply it to Developer Group
- Log in as the Developer user
- Switch accounts
In the Production account:
First, create a policy to allow listing buckets and then assign it to a role of type "Role for Cross-Account Access" in the "production" account. When we create the new role we select the option "Role for Cross-Account Access", name it "MyDevelopersAccess", and then enter the account number of the AWS "development" account.
Policy to allow developers to list and access a specific bucket in production:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "arn:aws:s3:::*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation"
      ],
      "Resource": "arn:aws:s3:::shared-bucket"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::shared-bucket/*"
    }
  ]
}
Then we log back in to the "Development" account and create an "inline" policy for the "Developers" group.
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/MyDevelopersAccess"
  }
}
Now, if we log in as a user of the "Developers" group in the "Development" account, we can switch to the "production" AWS account by assuming the role previously created in the "production" account and list the buckets, without typing or even remembering any password.
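The same switch can also be scripted. A minimal boto3 sketch, assuming the role and bucket names used above (PRODUCTION-ACCOUNT-ID is a placeholder), run with the Developer user's credentials:

import boto3

sts = boto3.client("sts")

# Assume the cross-account role created in the production account
resp = sts.assume_role(
    RoleArn="arn:aws:iam::PRODUCTION-ACCOUNT-ID:role/MyDevelopersAccess",
    RoleSessionName="developer-session",
)
creds = resp["Credentials"]  # temporary AccessKeyId, SecretAccessKey, SessionToken

# Use the temporary credentials to work with the shared bucket in production
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
for obj in s3.list_objects_v2(Bucket="shared-bucket").get("Contents", []):
    print(obj["Key"])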
Lambda
Serverless compute service.
See https://www.rubenortiz.es/2019/09/22/aws-developer-2019/
AWS Resource Access Manager (RAM)
AWS Resource Access Manager (RAM) is a service that enables you to easily and securely share AWS resources with any AWS account or within your AWS Organization. You can share AWS Transit Gateways, Subnets, AWS License Manager configurations, and Amazon Route 53 Resolver rules with RAM.
Trusted access. Share AWS resources with AWS organizations
A company is using AWS Organizations to manage their multi-account and multi-region AWS infrastructure. They are currently doing large-scale automation for their key daily processes to save costs. One of these key processes is sharing specified AWS resources, which an organizational account owns, with other AWS accounts of the company using AWS RAM. There is already an existing service which was previously managed by a separate organization account moderator, who also maintained the specific configuration details.
In this scenario, what could be a simple and effective solution that would allow the service to perform its tasks on the organization accounts on the moderator’s behalf?
You can use trusted access to enable an AWS service that you specify, called the trusted service, to perform tasks in your organization and its accounts on your behalf. This involves granting permissions to the trusted service but does not otherwise affect the permissions for IAM users or roles. When you enable access, the trusted service can create an IAM role called a service-linked role in every account in your organization. That role has a permissions policy that allows the trusted service to do the tasks that are described in that service’s documentation. This enables you to specify settings and configuration details that you would like the trusted service to maintain in your organization’s accounts on your behalf.
AWS Resource Access Manager (AWS RAM) enables you to share specified AWS resources that you own with other AWS accounts. To enable trusted access with AWS Organizations:
From the AWS RAM CLI, use the enable-sharing-with-aws-organizations command.
Name of the IAM service-linked role that can be created in accounts when trusted access is enabled: AWSResourceAccessManagerServiceRolePolicy.
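A minimal boto3 sketch of the same call; it has to be run from the organization's management (master) account:

import boto3

ram = boto3.client("ram")
resp = ram.enable_sharing_with_aws_organizations()
print(resp["returnValue"])  # True when sharing within the organization is enabled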
Workspaces
Amazon WorkSpaces is a cloud-based replacement for a traditional desktop. You can access it from a PC, Mac, Chromebook, iPad, Kindle, or Android device using the free Amazon WorkSpaces client application and credentials set up by an administrator, or with existing AD credentials if Amazon WorkSpaces is integrated with an existing AD domain.
- Windows 7 experience, provided by Windows Server 2008 R2
- customize wallpapers, etc.
- local administrator access
- persistent (not ephemeral)
- data backed up every 12 hours on D:\
- users do not need an AWS account to log in
- you can audit the encryption keys used to protect your data
UPDATE 2019
- you can now restore your workspace to the last known healthy state
AWS Organizations
Enables you to consolidate multiple AWS accounts into an organization that you create. AWS Organizations is an account management service that lets you consolidate multiple AWS accounts into an organization that you create and centrally manage. With Organizations, you can create member accounts and invite existing accounts to join your organization. You can organize those accounts into groups and attach policy-based controls.
- One bill for all AWS accounts (consolidated billing)
- Very easy to track charges
- Volume pricing discount
SCP
AWS Organizations offers policy-based management for multiple AWS accounts. With Organizations, you can create groups of accounts, automate account creation, and apply and manage policies for those groups. Organizations enables you to centrally manage policies across multiple accounts, without requiring custom scripts and manual processes. It allows you to create Service Control Policies (SCPs) that centrally control AWS service use across multiple AWS accounts.
IAM policies let you allow or deny access to AWS services (such as Amazon S3), individual AWS resources (such as a specific S3 bucket), or individual API actions (such as s3:CreateBucket). An IAM policy can be applied only to IAM users, groups, or roles, and it can never restrict the root identity of the AWS account.
By contrast, AWS Organizations let you use service control policies (SCPs) to allow or deny access to particular AWS services for individual AWS accounts, or for groups of accounts within an organizational unit (OU). The specified actions from an attached SCP affect all IAM users, groups, and roles for an account, including the root account identity.
When you apply an SCP to an OU or an individual AWS account, you choose to either enable (whitelist) or disable (blacklist) the specified AWS service. Access to any service that isn’t explicitly allowed by the SCPs associated with an account, its parent OUs, or the master account is denied to the AWS accounts or OUs associated with the SCP. When an SCP is applied to an OU, it is inherited by all of the AWS accounts in that OU.
When you attach SCPs to the root, OUs, or directly to accounts, all policies that affect a given account are evaluated together using the same rules that govern IAM permission policies:
– Any action that has an explicit Deny in an SCP can't be delegated to users or roles in the affected accounts. An explicit Deny statement overrides any Allow that other SCPs might grant.
– Any action that has an explicit Allow in an SCP (such as the default "*" SCP or any other SCP that calls out a specific service or action) can be delegated to users and roles in the affected accounts.
– Any action that isn't explicitly allowed by an SCP is implicitly denied and can't be delegated to users or roles in the affected accounts.
By default, an SCP named FullAWSAccess is attached to every root, OU, and account. This default SCP allows all actions and all services. So in a new organization, until you start creating or manipulating the SCPs, all of your existing IAM permissions continue to operate as they did. As soon as you apply a new or modified SCP to a root or OU that contains an account, the permissions that your users have in that account become filtered by the SCP. Permissions that used to work might now be denied if they're not allowed by the SCP at every level of the hierarchy down to the specified account.
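A hedged boto3 sketch of how an SCP could be created and attached: here a deny-list policy blocks DynamoDB for every account under an OU; the OU id and the denied service are illustrative placeholders.

import boto3, json

org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Deny", "Action": "dynamodb:*", "Resource": "*"}
    ],
}

policy = org.create_policy(
    Name="DenyDynamoDB",
    Description="Example SCP that blocks DynamoDB in the attached accounts",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach the SCP to an OU; it is inherited by every account in that OU
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",  # placeholder OU id
)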
SAML
Imagine a company with users working within on-premises facilities. They want all of their Software Architects to access resources in both environments using their on-premises credentials, which are stored in Active Directory.
Since the company is using Microsoft Active Directory which implements Security Assertion Markup Language (SAML), you can set up a SAML-Based Federation for API Access to your AWS cloud. In this way, you can easily connect to AWS using the login credentials of your on-premises network.
Before you can use SAML 2.0-based federation as described in the preceding scenario and diagram, you must configure your organization’s IdP and your AWS account to trust each other. The general process for configuring this trust is described in the following steps. Inside your organization, you must have an IdP that supports SAML 2.0, like Microsoft Active Directory Federation Service (AD FS, part of Windows Server), Shibboleth, or another compatible SAML 2.0 provider.
Security Token Service (STS)
Understanding the key terms
- Federation: Combining or joining a list of users in one domain (such as IAM) with a list of users in another domain (such as AD, Facebook, etc)
- Identity broker: A service that allows you to take identity from point A and join it (federate it) to point B
- Identity store: AD, Facebook, Google
- Identities: A user of a service like Facebook
After the identity broker checks with the LDAP directory, AWS Security Token Service returns:
- an access key
- a secret access key
- a token
- a duration
A use case scenario would be something like:
“a tech company that you are working for has undertaken a Total Cost Of Ownership (TCO) analysis evaluating the use of Amazon S3 versus acquiring more storage hardware. The result was that all 1200 employees would be granted access to use Amazon S3 for the storage of their personal documents.
How can you set up a solution that incorporates a single sign-on feature from your corporate AD or LDAP directory and also restricts access for each individual user to a designated user folder in an S3 bucket?”
In an enterprise identity federation, you can authenticate users in your organization’s network, and then provide those users access to AWS without creating new AWS identities for them and requiring them to sign in with a separate username and password. This is known as the single sign-on (SSO) approach to temporary access. AWS STS supports open standards like Security Assertion Markup Language (SAML) 2.0, with which you can use Microsoft AD FS to leverage your Microsoft Active Directory. You can also use SAML 2.0 to manage your own solution for federating user identities.
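A minimal boto3 sketch of that exchange, assuming the IdP (for example AD FS) has already returned a base64-encoded SAML assertion; both ARNs and the assertion are placeholders, and obtaining the assertion from the IdP is not shown:

import boto3

saml_assertion_b64 = "BASE64-SAML-RESPONSE-FROM-IDP"  # placeholder value

sts = boto3.client("sts")
resp = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::ACCOUNT-ID:role/ADFS-Production",
    PrincipalArn="arn:aws:iam::ACCOUNT-ID:saml-provider/ADFS",
    SAMLAssertion=saml_assertion_b64,
    DurationSeconds=3600,
)
creds = resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration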
More info about this: https://www.rubenortiz.es/2019/10/23/iam-and-stsassumerole/
ECS
It’s a regional service; you can use one or more AZs across a new or existing VPC to schedule the placement of containers.
ECS Spot Instance Draining Enabled
You can launch your ECS tasks on Spot instances. If Amazon ECS Spot Instance draining is enabled on the instance, ECS receives the Spot Instance interruption notice and places the instance in DRAINING status.
When a container instance is set to DRAINING, Amazon ECS prevents new tasks from being scheduled for placement on the container instance. Service tasks on the draining container instance that are in the PENDING state are stopped immediately. If there are container instances in the cluster that are available, replacement service tasks are started on them. Spot Instance draining is disabled by default and must be manually enabled by adding the line ECS_ENABLE_SPOT_INSTANCE_DRAINING=true to your /etc/ecs/ecs.config file.
Network mode on containers
If the network mode is set to none, the task's containers do not have external connectivity and port mappings can't be specified in the container definition.
If the network mode is bridge, the task utilizes Docker's built-in virtual network, which runs inside each container instance.
If the network mode is host, the task bypasses Docker's built-in virtual network and maps container ports directly to the EC2 instance's network interface. In this mode, you can't run multiple instantiations of the same task on a single container instance when port mappings are used.
If the network mode is awsvpc, the task is allocated an elastic network interface, and you must specify a NetworkConfiguration when you create a service or run a task with the task definition. When you use this network mode in your task definitions, every task that is launched from that task definition gets its own elastic network interface (ENI) and a primary private IP address. The task networking feature simplifies container networking and gives you more control over how containerized applications communicate with each other and other services within your VPCs.
- https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html
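A minimal boto3 sketch of running a task that uses the awsvpc network mode, where a NetworkConfiguration must be supplied; the cluster, task definition, subnet and security group ids are placeholders:

import boto3

ecs = boto3.client("ecs")
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="my-awsvpc-task:1",  # a task definition registered with networkMode awsvpc
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "securityGroups": ["sg-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)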
Quick Terminology
- Task Definition — This a blueprint that describes how a docker container should launch.
- NetworkMode
- none: the task's containers can't have external connectivity and you can't specify port mappings in your container definitions
- bridge: the default
- awsvpc: needed for Fargate; better performance; exposed container ports are mapped directly to the corresponding host port. If the network mode is awsvpc, the task is allocated an elastic network interface, and you must specify a NetworkConfiguration value when you create a service or run a task with the task definition.
- host: better performance; exposed container ports are mapped directly to the corresponding host port
- Task – This is a running container with the settings defined in the Task Definition. It can be thought of as an “instance” of a Task Definition.
- Service — Defines long-running tasks of the same Task Definition. This can be 1 running container or multiple running containers all using the same Task Definition
- Cluster — A logical group of EC2 instances. When an instance launches, the ecs-agent software on the server registers the instance to an ECS Cluster. This is easily configurable by setting the ECS_CLUSTER variable in /etc/ecs/ecs.config.
- Container Instance — This is just an EC2 instance that is part of an ECS Cluster and has docker and the ecs-agent running on it.
ECS Task Definitions
A task definition is required to run Docker containers in Amazon ECS. Task definitions are text files in JSON format that describe one or more containers and how a Docker container should launch. A task definition includes: which Docker images to use with the containers in your task, how much CPU and memory to use with each container, whether containers are linked together in a task, the Docker networking mode, what (if any) ports from the container are mapped to the host container instance, whether the task should continue to run if the container finishes or fails, the command the container should run when it is started, environment variables, which data volumes should be used, and the IAM role. A short boto3 registration sketch follows the parameter list below.
Example of a task definition:
{
  "family": "sinatra-hi",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "tongueroo/sinatra:latest",
      "cpu": 128,
      "memoryReservation": 128,
      "portMappings": [
        { "containerPort": 4567, "protocol": "tcp" }
      ],
      "command": [ "ruby", "hi.rb" ],
      "essential": true
    }
  ]
}
- family: When you register a task definition, you give it a family, which is similar to a name for multiple versions of the task definition, specified with a revision number. The first task definition that is registered into a particular family is given a revision of 1, and any task definitions registered after that are given a sequential revision number.
- Required: Yes
- name: name of container
- Required: Yes
- image: The image used to start a container. This string is passed directly to the Docker daemon. Images in the Docker Hub registry are available by default. You can also specify other repositories with either repository-url/image:tag or repository-url/image@digest.
- Required: Yes
- cpu: The number of cpu units the Amazon ECS container agent will reserve for the container.
- Required: no
- memoryReservation: The soft limit (in MiB) of memory to reserve for the container. When system memory is under contention, Docker attempts to keep the container memory to this soft limit; however, your container can consume more memory when needed, up to either the hard limit specified with the memory parameter (if applicable), or all of the available memory on the container instance, whichever comes first.
- Required: no
- portMappings: Port mappings allow containers to access ports on the host container instance to send or receive traffic.
- Required: no
- containerPort: The port number on the container that is bound to the user-specified or automatically assigned host port. If using containers in a task with the Fargate launch type, exposed ports should be specified using containerPort.
- Required: yes, when portMappings are used
- protocol: The protocol used for the port mapping. Valid values are tcp and udp. The default is tcp.
- Required: no
- command: The command that is passed to the container
- Required: no
- essential: If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential parameter of a container is marked as false, then its failure does not affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential.
- Required: no
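As referenced above, a minimal boto3 sketch that registers the "sinatra-hi" task definition through the API instead of the console:

import boto3

ecs = boto3.client("ecs")
ecs.register_task_definition(
    family="sinatra-hi",
    containerDefinitions=[
        {
            "name": "web",
            "image": "tongueroo/sinatra:latest",
            "cpu": 128,
            "memoryReservation": 128,
            "portMappings": [{"containerPort": 4567, "protocol": "tcp"}],
            "command": ["ruby", "hi.rb"],
            "essential": True,
        }
    ],
)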
Services
- Allows you to run and maintain a specified number of instances of a task definition simultaneously in an ECS cluster.
- It is like Auto Scaling groups for ECS. This is like pods in Kubernetes, I guess.
- If a task fails or stops, the service scheduler launches another instance of your task definition to replace it and maintain the desired number of tasks.
Clusters
A logical group of container instances that you can place tasks on. A default cluster is created for you the first time, but you can create multiple clusters in an account to keep your resources separate. I should think about this like a K8s node? Maybe? Not sure.
- Clusters can contain multiple different container instance types
- Clusters are region-specific
- Container instances can only be part of one cluster at a time.
- You can restrict access to clusters with IAM policies.
ECS Scheduling
- Service Scheduler:
- for long-running tasks like microservices
- ensure tasks are running
- ensure tasks are registered against an ELB
- Custom Scheduler:
- You can create your own schedulers that meet your business needs.
- Leverage third-party schedulers like Blox.
ECS Container Agent
It allows container instances to connect to your cluster. It is included in the Amazon ECS-optimized AMI, but you can also install it on any other EC2 instance that supports the ECS specifications. It will not work with Windows. It comes pre-installed on the special ECS AMIs.
ECS security
- IAM Roles
- EC2 instances use an IAM role to access ECS
- ECS tasks use an IAM role to access services and resources
- Security Groups are attached at the instance level (not the task or the container)
Task Placement Strategy
A task placement strategy is an algorithm for selecting instances for task placement or tasks for termination. Task placement strategies can be specified when either running a task or creating a new service.
Amazon ECS supports the following task placement strategies:
binpack – Place tasks based on the least available amount of CPU or memory. This minimizes the number of instances in use.
random – Place tasks randomly.
spread – Place tasks evenly based on the specified value. Accepted values are attribute key-value pairs, instanceId, or host (see the sketch below).
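A hedged boto3 sketch combining two of these strategies: spread tasks across Availability Zones first, then binpack on memory inside each zone; the cluster and task definition names are placeholders:

import boto3

ecs = boto3.client("ecs")
ecs.run_task(
    cluster="my-cluster",
    taskDefinition="sinatra-hi:1",
    count=4,
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
        {"type": "binpack", "field": "memory"},
    ],
)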
ECS TaskRoles
If you are running Docker containers using Amazon’s Elastic Container Service (ECS), then you don’t want your Docker container relying on the Instance Metadata endpoint of the underlying EC2 Instance. Instead, you can use the Task Metadata endpoint, which serves the same purpose, but is exposed by ECS directly within your Docker container at http://169.254.170.2 (all AWS CLI and SDK tools know to check this endpoint).
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI |jq
https://blog.gruntwork.io/authenticating-to-aws-with-instance-metadata-b6d812a86b40
https://aws.amazon.com/es/blogs/compute/a-guide-to-locally-testing-containers-with-amazon-ecs-local-endpoints-and-docker-compose/
https://docs.aws.amazon.com/AmazonECS/latest/userguide/task-iam-roles.html
Limits
- soft
- 1000 clusters per region
- 1000 instances per cluster
- 1000 services per cluster
- hard
- one ELB per service
- 1000 tasks per service
- max. 10 containers per task definition
- max. 10 tasks per instance (host)
- https://thecode.pub/easy-deploy-your-docker-applications-to-aws-using-ecs-and-fargate-a988a1cc842f
- https://linuxacademy.com/blog/amazon-web-services-2/deploying-a-containerized-flask-application-with-aws-ecs-and-docker/
- https://docs.bitnami.com/aws/how-to/ecs-rds-tutorial/
- https://www.bogotobogo.com/DevOps/AWS/aws-ELB-ALB-Application-Load-Balancer-ECS.php
- https://medium.com/boltops/gentle-introduction-to-how-aws-ecs-works-with-example-tutorial-cea3d27ce63d
Secrets
Amazon ECS enables you to inject sensitive data into your containers by storing your sensitive data in either AWS Secrets Manager secrets or AWS Systems Manager Parameter Store parameters and then referencing them in your container definition. This feature is supported by tasks using both the EC2 and Fargate launch types.
Secrets can be exposed to a container in the following ways:
– To inject sensitive data into your containers as environment variables, use the secrets container definition parameter (see the sketch below).
– To reference sensitive information in the log configuration of a container, use the secretOptions container definition parameter.
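A hedged boto3 sketch of the secrets parameter in a container definition, injecting a Secrets Manager secret and an SSM parameter as environment variables; all ARNs, names and the execution role are placeholders:

import boto3

ecs = boto3.client("ecs")
ecs.register_task_definition(
    family="app-with-secrets",
    # the execution role needs permission to read the referenced secrets/parameters
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "app",
            "image": "my-app:latest",
            "memoryReservation": 256,
            "essential": True,
            "secrets": [
                {"name": "DB_PASSWORD",
                 "valueFrom": "arn:aws:secretsmanager:eu-west-1:123456789012:secret:prod/db-password"},
                {"name": "API_KEY",
                 "valueFrom": "arn:aws:ssm:eu-west-1:123456789012:parameter/prod/api-key"},
            ],
        }
    ],
)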
UPDATE ECS
2019
- now supports per-container swap space: you can control swap space and increase virtual memory by moving inactive pages in memory onto swap space located on disk. You can now control each container's swap configuration even if the containers are running on the same EC2 instance.
2020
- ECS Anywhere and EKS Anywhere: you can now run ECS in your own data center using the same management tools and APIs.
AWS ElastiCache
Choose Memcached if:
- Object caching is your primary goal
- You want to keep things as simple as possible
- You need to scale horizontally (scale out)
Choose Redis if:
- You have advanced data types such as lists, hashes and sets
- You are doing data sorting and ranking (leaderboards)
- You need data persistence
- You need Multi-AZ
- You need Master/Slave replication
- Pub/Sub capabilities are needed
SwapUsage
- if your swap usage in Memcached is above 50 MB, you should increase memcached_connections_overhead
Evictions
An eviction occurs when a new item must be allocated and an old item is removed due to lack of free space in the system.
AWS Secrets Manager
AWS Secrets Manager is an AWS service that makes it easier for you to manage secrets. Secrets can be database credentials, passwords, third-party API keys, and even arbitrary text. You can store and control access to these secrets centrally by using the Secrets Manager console, the Secrets Manager command line interface (CLI), or the Secrets Manager API and SDKs.
Secrets Manager enables you to replace hardcoded credentials in your code (including passwords), with an API call to Secrets Manager to retrieve the secret programmatically. This helps ensure that the secret can’t be compromised by someone examining your code, because the secret simply isn’t there. Also, you can configure Secrets Manager to automatically rotate the secret for you according to a schedule that you specify. This enables you to replace long-term secrets with short-term ones, which helps to significantly reduce the risk of compromise.
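A minimal boto3 sketch of that API call, retrieving a secret at runtime instead of hardcoding it; the secret name is a placeholder:

import boto3

sm = boto3.client("secretsmanager")
secret = sm.get_secret_value(SecretId="prod/db-password")
db_password = secret["SecretString"]  # or secret["SecretBinary"] for binary secrets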
AWS Systems Manager
Parameter Store
AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data management and secrets management. You can store data such as passwords, database strings, and license codes as parameter values. You can store values as plain text or encrypted data. You can then reference values by using the unique name that you specified when you created the parameter. Highly scalable, available, and durable, Parameter Store is backed by the AWS Cloud.
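A minimal boto3 sketch of reading a SecureString parameter and decrypting it with its KMS key; the parameter name is a placeholder:

import boto3

ssm = boto3.client("ssm")
param = ssm.get_parameter(Name="/prod/db/connection-string", WithDecryption=True)
print(param["Parameter"]["Value"])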
Patch Manager
AWS Systems Manager Patch Manager automates the process of patching managed instances with both security-related and other types of updates. You can use Patch Manager to apply patches for both operating systems and applications.
You can patch fleets of Amazon EC2 instances or your on-premises servers and virtual machines (VMs) by operating system type. This includes supported versions of Windows Server, Ubuntu Server, Red Hat Enterprise Linux (RHEL), SUSE Linux Enterprise Server (SLES), CentOS, Amazon Linux, and Amazon Linux 2. You can scan instances to see only a report of missing patches, or you can scan and automatically install all missing patches.
When you run AWS-RunPatchBaseline, you can target managed instances using their instance ID or tags. SSM Agent and Patch Manager will then evaluate which patch baseline to use based on the patch group value that you added to the instance.
You create a patch group by using Amazon EC2 tags. Unlike other tagging scenarios across Systems Manager, a patch group must be defined with the tag key: Patch Group. Note that the key is case-sensitive. You can specify any value, for example, “web servers,” but the key must be Patch Group.
The AWS-DefaultPatchBaseline baseline is primarily used to approve all Windows Server operating system patches that are classified as "CriticalUpdates" or "SecurityUpdates" and that have an MSRC severity of "Critical" or "Important". Patches are auto-approved seven days after release.
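A hedged boto3 sketch of running AWS-RunPatchBaseline against all instances tagged with Patch Group = "web servers" (the tag value is a placeholder); use the Scan operation for a report only, or Install to apply the missing patches:

import boto3

ssm = boto3.client("ssm")
ssm.send_command(
    DocumentName="AWS-RunPatchBaseline",
    Targets=[{"Key": "tag:Patch Group", "Values": ["web servers"]}],
    Parameters={"Operation": ["Scan"]},
)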
A little question about that:
A Solutions Architect has been assigned to develop a workflow to ensure that the required patches of all of their Windows EC2 instances are properly identified and applied automatically. To maintain their system uptime requirements, it is of utmost importance to ensure that the EC2 instance reboots do not occur at the same time on all of their Windows instances. This is to avoid any loss of revenue that could be caused by any unavailability issues of their systems.
Which of the following will meet the above requirements?
Create two Patch Groups with unique tags that you will assign to all of your EC2 Windows Instances. Associate the predefined AWS-DefaultPatchBaseline baseline on both patch groups. Set up two non-overlapping maintenance windows and associate each with a different patch group. Using Patch Group tags, register targets with specific maintenance windows and lastly, assign the AWS-RunPatchBaseline document as a task within each maintenance window which has a different processing start time is the correct answer as it properly uses two Patch Groups, non-overlapping maintenance windows and the AWS-DefaultPatchBaseline baseline to ensure that the EC2 instance reboots do not occur at the same time.
AWS Backup
Features
Centralized service. It consolidates the backup policies of several services from a single place.
Configuration via backup policies. We define different policies for each of the services in our account.
Tag-based backup policies. We assign which resources to back up by means of tags.
Pre-built backup plans let us easily apply best-practice policies. We can also create plans from scratch, setting frequency, lifecycle, etc.
Copy retention. The retention of the different copies made by the service is managed from a single place.
Dashboard with job monitoring. Completed jobs, jobs with problems, applied restores, etc.
Allows encryption via AWS KMS.
Restrict access to the service via access policies. You can define who accesses which backup service, and how, through access policies.
How does it work?
Backup plan: a plan that defines your requirements, including scheduled backups, retention rules and lifecycle rules.
Backup rule:
- configures the frequency at which it runs (12h, Daily, Weekly, Monthly, Custom cron expression)
- for example, every 6 hours: cron(0 5,11,17,23 ? * * *)
- configures the backup window
- default
- Backup windows consist of the time that the backup window begins and the duration of the window in hours. Backup jobs are started within this window. If you are unsure what backup window to use, you can choose to use the default backup window that AWS Backup recommends. The default backup window is set to start at 5 AM UTC and last 8 hours.
- custom
- lifecycle (see the sketch after this list)
- The lifecycle defines when a backup is transitioned to cold storage and when it expires. AWS Backup transitions and expires backups automatically according to the lifecycle that you define. Backups transitioned to cold storage must be stored there for a minimum of 90 days. Therefore, the "expire after days" setting must be 90 days greater than the "transition to cold after days" setting. The "transition to cold after days" setting cannot be changed after a backup has been transitioned to cold.
- † Cold storage is currently supported only for backups of EFS file systems
- cold storage: changes the storage class of the backup from warm (S3 Standard), which offers millisecond access to the copy, to a low-cost cold class implemented via Glacier, which offers a restore time of 3-5 hours
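A hedged boto3 sketch of a plan using the rules and lifecycle described in this list: a backup every 6 hours, transition to cold storage after 30 days and expiry after 120 days (at least 90 days later), with tag-based resource assignment; the vault, IAM role and tag are placeholders:

import boto3

backup = boto3.client("backup")

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "every-6-hours",
        "Rules": [
            {
                "RuleName": "six-hourly",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5,11,17,23 ? * * *)",
                "Lifecycle": {"MoveToColdStorageAfterDays": 30, "DeleteAfterDays": 120},
            }
        ],
    }
)

# Tag-based assignment: back up every supported resource tagged Backup=true
backup.create_backup_selection(
    BackupPlanId=plan["BackupPlanId"],
    BackupSelection={
        "SelectionName": "tagged-resources",
        "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole",
        "ListOfTags": [
            {"ConditionType": "STRINGEQUALS", "ConditionKey": "Backup", "ConditionValue": "true"}
        ],
    },
)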
Price impact
RDS
Backup: $0.095 per GB-month
Restore: free
RDS natively provides FREE backup space equal in size to the provisioned primary storage. Using AWS Backup would add an extra cost in cases where RDS is not currently billing anything for backups, because AWS Backup charges based on the storage used in GB per month.
There is no additional charge for backup storage, up to 100% of your total database storage for a region. (Based upon our experience as database administrators, the vast majority of databases require less raw storage for a backup than for the primary dataset, meaning that most customers will never pay for backup storage.)
EBS
Backup: the price per gigabyte of backup is the same, $0.05 per GB-month
Restore: free
EFS
Backup: $0.06 per GB-month warm / $0.012 per GB-month cold
Restore: $0.024 per GB-month warm / $0.036 per GB-month cold
DynamoDB
Backup: $0.10 per GB
Restore: $0.1836 per GB
*This applies to all backup storage except for on-demand DynamoDB table backups, which create full backups of your Amazon DynamoDB table data and settings.
Services currently available: EBS, RDS, EFS, DynamoDB, Storage Gateway
* AWS Backup currently supports all Amazon RDS database engines except Amazon Aurora.
The vast majority of customers will need backups of RDS (free up to 100% of the primary RDS storage) and EBS (same price as AWS Backup, and EBS already has DLM for backup management).
Links
https://aws.amazon.com/backup/pricing/
https://aws.amazon.com/rds/mysql/pricing/
Update
- 2020
- Cross-Region AWS Backup: now you can back up your data to different regions. A centralized solution to store a copy of backup data more than a region away from production data.
- You can now restore single EFS files instead of the whole EFS file system.
- You can now run scheduled or on-demand automated backups of EC2 instances: instance type, IAM role, VPC, security group, etc. When you back up an EC2 instance, AWS Backup will protect all EBS volumes attached to the instance and attach them to an AMI that stores all parameters from the original EC2 instance except for two (Elastic Inference Accelerator and user data script).
Rest of services
AWS Migration Service
AWS APP MESH
It is a cool service that provides control and monitoring for microservices. It reached general availability in 2019 and makes it really easy to locate errors.
Compatible with Fargate, EC2, ECS and EKS.
AWS Personalize
is a service that allows you to create customized, personal recommendations in your applications with no machine learning expertise required.
UPDATE 2019: GA in August 2019
AWS Comprehend Medical
is a service that uses natural language processing and machine learning to extract information from a variety of sources like doctor notes, patient health records, etc. It makes it easier to analyze patient data.
UPDATE 2019: Available in London, Sydney and Canada
AWS CloudEndure Migration
is a service that allows you to move applications from any physical, virtual, or cloud-based infrastructure into AWS. It simplifies, expedites, and reduces the cost of migrations by offering a highly available lift-and-shift solution for supported Windows versions and the main flavors of Linux.
UPDATE 2019: now available at no additional charge
AWS Security Hub
It is a great new service that helps you understand the security posture of your own AWS infrastructure. Under the shared responsibility model, the customer is responsible for security in the cloud, so Security Hub enables you to get a much better security profile of your AWS infrastructure. It includes a dashboard that aggregates findings from existing security tools like Inspector and GuardDuty.
UPDATE 2019: GA in August 2019.
AWS Control Tower
is a great new service that allows organizations to set up multi-account AWS environments and govern them at scale. It provides an automated landing zone for new accounts, using AWS best practices, structuring your accounts, centralizing logs with CloudTrail, and managing configuration with AWS Config. You also get a dashboard providing centralized visibility across all AWS accounts.
AWS Systems Manager
A new feature called OpsCenter.
2019
- Port forwarding using AWS Systems Manager Session Manager: this feature is similar to the classic SSH tunnel. Port forwarding allows you to forward traffic between your laptop and open ports on your instance.
- https://aws.amazon.com/blogs/aws/new-port-forwarding-using-aws-system-manager-sessions-manager/
AWS EventBridge
EventBridge (announced in 2019) is a serverless event bus that makes it easier to connect applications together: sources can be SaaS applications, your own applications, and AWS services. It delivers a stream of events from your event sources and routes them to your targets using routing rules, allowing you to build real-time, event-driven, distributed application architectures. Effort is reduced because it takes care of event ingestion, delivery, security and event handling. You pay only for events published by non-AWS services; events from AWS services are free.
AWS CHATBOT
A service that allows you to receive automated alerts and notifications from your AWS infrastructure in Slack, etc. It receives alerts via SNS and formats them before sending them to your chat room. It also allows you to execute commands to return diagnostic information.
AWS LAKE FORMATION
Announced in 2018, it is a massive data repository that allows you to ingest, clean and store all your data in one place at scale, and run analytics and complex machine learning algorithms. It is integrated with S3, EMR, Glue and Machine Learning, and can be queried with Redshift and Athena.
UPDATE 2019: general availability
AWS ATHENA
Athena uses Presto, a distributed SQL engine to run queries. It also uses Apache Hive to create, drop, and alter tables and partitions. You can write Hive-compliant DDL statements and ANSI SQL statements in the Athena query editor. You can also use complex joins, window functions and complex datatypes on Athena. Athena uses an approach known as schema-on-read, which allows you to project your schema on to your data at the time you execute a query. This eliminates the need for any data loading or ETL.
Athena charges you by the amount of data scanned per query. You can save on costs and get better performance if you partition the data, compress data, or convert it to columnar formats such as Apache Parquet. For more information, see Athena pricing.
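A minimal boto3 sketch of running a query and writing the results to S3; the database, table and bucket names are placeholders:

import boto3

athena = boto3.client("athena")
resp = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "web_logs"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(resp["QueryExecutionId"])  # poll get_query_execution() until the query finishes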
UPDATE 2019
- Federated Querying
- Enables users to run SQL queries across data stored in relational, non-relational, object and custom data sources. With federated querying, customers can submit a single SQL query that scans data from multiple sources running on-premises or hosted in the cloud.
- Machine Learning Models
- Use pre-built machine learning models created by SageMaker, train your own models, or find and subscribe to model packages.
- Athena workgroups allow you to segregate Athena queries: separate queries between users, groups and teams. Queries running in one workgroup are not visible to another. You can disable workgroups temporarily to prevent queries from executing.
AWS REKOGNITION
UPDATE 2019:
- improved face analysis
- supports PrivateLink: you can send data to AWS services without it ever leaving the AWS network.
AWS STEP FUNCTIONS
It is a service that coordinates all the components of distributed applications in a visual workflow. It manages dependencies, concurrency, etc., and allows orchestrating more complex processes, reusable workflows, etc.
UPDATE 2019: now supports nested workflows
AWS ELASTIC MAP REDUCE
It is a fully managed big data service.
UPDATE 2019
- Block public access. It allows you to secure EMR clusters by preventing the use of security groups that allow unrestricted public access; it is just a checkbox and is applied on a per-region basis.
- New version 5.26.0: 6x performance increase for Apache Spark
AWS SAGEMAKER
Amazon SageMaker gives every developer and data scientist the ability to build, train and deploy machine learning models quickly.
UPDATE 2019: Now supports spot instances
AWS QLDB
A fully managed database that provides an immutable history of all committed changes, which cannot be updated, altered or deleted. It protects the integrity of transactions and provides a complete and verifiable audit trail of data changes in your application.
UPDATE 2019: GA in 2019
AWS TRANSFER SFTP
It is a way to transfer files in and out of S3 using secure FTP (SFTP).
UPDATE 2019: supports logical directories.
AWS TRANSCRIBE
is an automatic speech recognition (ASR) service that adds speech-to-text capability and saves the output of your speech to a text file.
update 2019
- Until now, it used SSE-S3 encryption to encrypt transcripts. Starting today, you can use your own encryption keys from AWS Key Management Service (KMS).
AWS AMPLIFY
framework to develop and deploy mobile serverless applications
UPDATE 2019
- integrated with your Git code repository: pull request previews, with a preview URL for each pull request
AWS DATASYNC
It is an online data transfer service that automates and accelerates copying data between Network File System (NFS) or Server Message Block (SMB) file servers, Amazon S3 buckets and Amazon Elastic File System (EFS).
update 2019
- you can now directly transfer data into any Amazon S3 storage class, control overwrites for existing files or objects, and configure additional data verification checks.
- Price reduction
update 2020
- AWS DataSync can now transfer billions of files or objects between Amazon Simple Storage Service (Amazon S3), Amazon Elastic File System (EFS), or Amazon FSx for Windows File Server, with just a few clicks in the DataSync console.
- For example, you can use this new capability to regularly replicate your Amazon FSx for Windows File Server file systems to a different AWS Region, or move files between your managed file systems and S3 buckets for data processing. DataSync comes with automatic encryption of data in transit, and built-in options such as data integrity verification in transit and at rest.
AWS DATA EXCHANGE
AWS Data Exchange makes it easy to find, subscribe to, and use third-party data in the cloud. Once subscribed to a data product, you can use the AWS Data Exchange API to load data directly into Amazon S3 and then analyze it with a wide variety of AWS analytics and machine learning services.
AWS MSK
MSK stands for Managed Streaming for Apache Kafka, and it is a fully managed service to create Kafka clusters.
UPDATE 2019
- You now have the option to monitor your cluster using Prometheus. Prometheus provides client access to JMX metrics emitted by MSK brokers and Kafka within your Amazon VPC, at no additional cost.
AWS SES
Amazon SES is a cloud-based email sending service designed to help digital marketers and application developers send marketing, notification, and transactional emails. You can use our SMTP interface or one of the AWS SDKs to integrate Amazon SES directly into your own systems and applications.
update 2020
- Now includes a feature called Bring Your Own IP, which makes it possible to use Amazon SES to send email through publicly-routable IP addresses that you already own.
AWS EKS
Amazon EKS is a managed Kubernetes service that makes it easy for you to run Kubernetes on AWS without needing to install, operate, and maintain your own Kubernetes control plane or worker nodes. Amazon EKS is certified Kubernetes conformant, so existing applications running on upstream Kubernetes are compatible with Amazon EKS. You can also easily migrate any standard Kubernetes application to EKS without needing to refactor your code.
UPDATE
- 2020
- 50% price reduction
AWS Config
allows you to assess, audit and evaluate the configuration of your AWS resources
update 2019
- You can now execute remediation actions to address non-compliant resources. For example, you can create an AWS Config rule to check whether any of your S3 buckets allow public access, associate a remediation action with it, and disable public access automatically.
- AWS Config now supports API Gateway. You can track changes to your API Gateway configuration when something stops working: stage configurations, throttling and access logs, as well as API configuration such as the endpoint configuration, version, or the protocol used.
AWS QuickSight
https://aws.amazon.com/es/quicksight/ Amazon QuickSight is an agile business analytics service used to build visualizations, perform ad hoc analysis and quickly get business insights from your data. QuickSight continuously discovers AWS data sources, lets organizations scale to hundreds of thousands of users, and delivers fast, responsive query performance by using a robust in-memory engine (SPICE). It is a BI (business intelligence) tool that enables you to analyze business data with interactive dashboards.