AWS Solutions Architect Associate (SAA) 2018 – IV

Topics covered: RDS, DynamoDB

What is RDS used for?

RDS is for OLTP:
  • SQL Server
  • MySQL
  • PostgreSQL
  • Oracle
  • Aurora
  • MariaDB
DynamoDB – NoSQL
RedShift – OLAP
OLTP (Online Transaction Processing) vs OLAP (Online Analytical Processing)

RDS Storage

General Purpose SSD – General Purpose SSD volumes, also called gp2, offer cost-effective storage that is ideal for a broad range of workloads. These volumes deliver single-digit-millisecond latencies and the ability to burst to 3,000 IOPS for extended periods of time. Baseline performance for these volumes is determined by the volume’s size.

Baseline I/O performance for General Purpose SSD storage is 3 IOPS for each GiB. This relationship means that larger volumes have better performance. For example, baseline performance for a 100-GiB volume is 300 IOPS. Baseline performance for a 1-TiB volume is 3,000 IOPS. And baseline performance for a 5.34-TiB volume is 16,000 IOPS.

Volumes below 1 TiB in size also have the ability to burst to 3,000 IOPS for extended periods of time. Bursting is not relevant for volumes above 1 TiB. The instance’s I/O credit balance determines burst performance. For more information about instance I/O credits, see I/O Credits and Burst Performance.

Many workloads never deplete the burst balance, making General Purpose SSD an ideal storage choice for them. However, some workloads can exhaust the 3,000 IOPS burst credit balance, so you should plan your storage capacity to meet the needs of your workloads.
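The baseline and burst arithmetic above can be sketched in a few lines (a toy model using the figures from this section: 3 IOPS per GiB, a 16,000 IOPS cap, and a 3,000 IOPS burst ceiling; the 5.4 million I/O credit bucket size is an assumption taken from the EBS gp2 docs, not stated above):

```python
from typing import Optional

# Toy model of gp2 performance (assumptions: 3 IOPS per GiB baseline,
# 16,000 IOPS cap, 3,000 IOPS burst ceiling; the 5.4 million I/O credit
# bucket is the EBS gp2 figure, not stated in the text above).
BURST_CREDITS = 5_400_000
BURST_IOPS = 3_000

def baseline_iops(size_gib: int) -> int:
    """Baseline performance: 3 IOPS per GiB, capped at 16,000 IOPS."""
    return min(3 * size_gib, 16_000)

def burst_seconds(size_gib: int) -> Optional[float]:
    """How long a full credit bucket lasts at the burst rate.
    Returns None for volumes whose baseline already meets 3,000 IOPS
    (bursting is irrelevant above ~1 TiB)."""
    base = baseline_iops(size_gib)
    if base >= BURST_IOPS:
        return None
    # Credits drain at (burst - baseline) IOPS while bursting.
    return BURST_CREDITS / (BURST_IOPS - base)

print(baseline_iops(100))    # 300 IOPS for a 100-GiB volume
print(baseline_iops(5334))   # 16,000 IOPS (capped)
print(burst_seconds(100))    # 2000.0 seconds of full-rate burst
```

Note how a 100-GiB volume can sustain the full 3,000 IOPS burst for roughly half an hour before falling back to its 300 IOPS baseline.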

What happens when we modify the storage

RDS maintenance

Auto Minor Version Upgrade is a feature that you can enable to have your database automatically upgraded when a new minor database engine version is available.

Upgrading RDS

There are two kinds of upgrades: major version upgrades and minor version upgrades. In general, a major engine version upgrade can introduce changes that are not compatible with existing applications. In contrast, a minor version upgrade includes only changes that are backward-compatible with existing applications.

The version numbering sequence is specific to each database engine. For example, Amazon RDS MySQL 5.7 and 8.0 are major engine versions, and upgrading from any 5.7 version to any 8.0 version is a major version upgrade. Amazon RDS MySQL versions 5.7.22 and 5.7.23 are minor versions, and upgrading from 5.7.22 to 5.7.23 is a minor version upgrade.
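The classification above can be sketched as a version comparison (assuming MySQL-style numbering, where the first two components identify the major version):

```python
def is_major_upgrade(current: str, target: str) -> bool:
    """True when the upgrade crosses a major engine version boundary.
    Assumes MySQL-style numbering where the first two components
    (e.g. '5.7' in '5.7.22') identify the major version."""
    major = lambda v: tuple(v.split(".")[:2])
    return major(current) != major(target)

print(is_major_upgrade("5.7.22", "8.0.13"))  # True  -> major upgrade
print(is_major_upgrade("5.7.22", "5.7.23"))  # False -> minor upgrade
```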

For major version upgrades, you must manually modify the DB engine version through the AWS Management Console, AWS CLI, or RDS API. For minor version upgrades, you can manually modify the engine version, or you can choose to enable auto minor version upgrades.

A major version upgrade can take longer than a minor version upgrade. You can also run into compatibility issues during or after a major version upgrade, which are very unlikely after a minor version upgrade.

For both minor and major version upgrades, the system takes the following steps during the upgrade process for MySQL and MariaDB for Amazon RDS:

  1. A snapshot is taken (if backups are enabled) while the instance is still running the previous version.
  2. The instance shuts down in a special mode called a slow shutdown to ensure data consistency.
  3. The instance is restarted with mysqld running the new engine version with networking disabled to prevent remote connections.
  4. The mysql_upgrade program runs on the database to convert system and user tables to the new version.
  5. An RDS-specific upgrade script runs to upgrade RDS-managed tables and stored procedures.
  6. The instance is restarted with networking enabled to allow for remote connections.

The main difference between the process for a minor version upgrade and a major version upgrade is the use of the mysql_upgrade program. This program checks all tables (both system and user) on the database. Depending on the state of a table, the program might need to physically recreate the table to conform to the new version. In most cases, no physical changes are needed and mysql_upgrade merely updates metadata in the table to indicate that the table has been upgraded.

In the case of a minor version upgrade, these table checks are usually trivial. The entire process usually takes only a few seconds to a few minutes depending on the number of tables. However, a major version upgrade can require more extensive changes to tables, as discussed later.

Multi-AZ fallacy about upgrades

One other caveat about upgrade downtime is how Multi-AZ fits into the picture. One common fallacy is that a Multi-AZ configuration prevents downtime during an upgrade. We do recommend that you use Multi-AZ for high availability, because it can prevent extended downtime due to hardware failure or a network outage. However, in the case of a MySQL or MariaDB engine upgrade, Multi-AZ doesn’t eliminate downtime: the slow shutdown and the physical changes made on the active server by the mysql_upgrade program require it.

Performance Insights

RDS Backups

There are two different types of backups on AWS: automated backups and database snapshots.

Automated backups

Automated backups allow you to recover your database to any point in time within a “retention period”. The retention period can be between one and 35 days. Automated backups take a full daily snapshot and also store transaction logs throughout the day. When you do a recovery, AWS will first choose the most recent daily backup, and then apply the transaction logs relevant to that day. This allows you to do a point-in-time recovery down to a second, within the retention period.
Automated backups are enabled by default. You get free storage equal to the size of your database. Backups are taken within a defined window. During the backup window, storage I/O may be suspended while your data is being backed up, and you may experience elevated latency.
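The recovery procedure just described (most recent daily snapshot, then transaction-log replay up to the target second) can be sketched with a toy model; the data structures are illustrative, not the RDS API:

```python
# Toy point-in-time recovery: start from the newest snapshot taken at or
# before the target time, then replay transaction-log entries up to it.
# (Illustrative only; real RDS does this server-side.)
def restore_to_point_in_time(snapshots, tx_log, target_ts):
    """snapshots: {timestamp: state-dict}; tx_log: [(timestamp, key, value)]."""
    base_ts = max(ts for ts in snapshots if ts <= target_ts)
    state = dict(snapshots[base_ts])            # restore the daily snapshot
    for ts, key, value in sorted(tx_log):       # replay logs in order
        if base_ts < ts <= target_ts:
            state[key] = value
    return state

snapshots = {0: {"balance": 100}}                       # snapshot at t=0
tx_log = [(5, "balance", 80), (9, "balance", 50)]       # later transactions
print(restore_to_point_in_time(snapshots, tx_log, 7))   # {'balance': 80}
```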


Database snapshots

Database snapshots are done manually. They are stored even after you delete the original RDS instance, unlike automated backups.
Whenever you restore a snapshot or backup, the restored version will be a new RDS instance with a new endpoint.

Point in time recovery

Backups and MultiAZ

I/O suspension doesn’t happen for Multi-AZ. From the AWS RDS user guide: “During the automatic backup window, storage I/O might be briefly suspended while the backup process initializes (typically under a few seconds) and you might experience a brief period of elevated latency. No I/O suspension occurs for Multi-AZ DB deployments because the backup is taken from the standby.”

Enhanced Monitoring

Enhanced Monitoring does not require a reboot. It has an extra cost. It is unrelated to enabling the Performance Schema.

Performance Schema

The Performance Schema does require a reboot. If we have a pending patch to apply and we also want to enable the Performance Schema, we can make the change in the parameter group (0 to 1) and then apply the patch, so both actions happen in a single reboot.

Best practices

Max Connections in SQL Server

There is a “user connections” parameter in SQL Server RDS.
This parameter defines the maximum number of simultaneous user connections. Note that the service may use up to 40 connections for system maintenance.
You can change this parameter as needed.
Unlike MySQL RDS, the maximum number of simultaneous user connections is not tied to the instance class. If we want to handle up to 10K connections, we just need to change the parameter.
In MySQL, the default value of max_connections is set based on instance memory: {DBInstanceClassMemory/12582880}.
But in MySQL you can also change max_connections to a custom value.
In SQL Server, the default value for user connections is 0 (which means no limit).
You can change it to a custom value.
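The MySQL default above can be checked numerically; the divisor is the 12582880 from the parameter-group formula, and the instance memory sizes below are illustrative:

```python
def mysql_default_max_connections(memory_bytes: int) -> int:
    """Default max_connections derived from instance memory, per the
    parameter-group formula {DBInstanceClassMemory/12582880}."""
    return memory_bytes // 12582880

GIB = 1024 ** 3
print(mysql_default_max_connections(1 * GIB))   # 85 for a 1 GiB instance
print(mysql_default_max_connections(16 * GIB))  # 1365 for a 16 GiB instance
```

This is why small MySQL instance classes come with low connection limits by default, while SQL Server's default (0, unlimited) is independent of the instance class.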


RDS Encryption

Amazon RDS encrypted DB instances provide an additional layer of data protection by securing your data from unauthorized access to the underlying storage. You can use Amazon RDS encryption to increase data protection of your applications deployed in the cloud, and to fulfill compliance requirements for data-at-rest encryption.

You can encrypt your Amazon RDS DB instances and snapshots at rest by enabling the encryption option for your Amazon RDS DB instances. Data that is encrypted at rest includes the underlying storage for DB instances, its automated backups, Read Replicas, and snapshots.

Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. After your data is encrypted, Amazon RDS handles authentication of access and decryption of your data transparently with a minimal impact on performance. You don’t need to modify your database client applications to use encryption.

Amazon RDS also supports encrypting an Oracle or SQL Server DB instance with Transparent Data Encryption (TDE). 

TDE automatically encrypts data before it is written to storage, and automatically decrypts data when the data is read from storage.

TDE can be used with encryption at rest, although using TDE and encryption at rest simultaneously might slightly affect the performance of your database. You must manage different keys for each encryption method.

Amazon RDS supports TDE for the following SQL Server versions and editions:

– SQL Server 2017 Enterprise Edition

– SQL Server 2016 Enterprise Edition

– SQL Server 2014 Enterprise Edition

– SQL Server 2012 Enterprise Edition

– SQL Server 2008 R2 Enterprise Edition


  • You can only enable encryption for an Amazon RDS DB instance when you create it, not after the DB instance is created. However, because you can encrypt a copy of an unencrypted DB snapshot, you can effectively add encryption to an unencrypted DB instance. That is, you can create a snapshot of your DB instance, and then create an encrypted copy of that snapshot. You can then restore a DB instance from the encrypted snapshot, and thus you have an encrypted copy of your original DB instance. For more information, see Copying a Snapshot.
  • DB instances that are encrypted can’t be modified to disable encryption.
  • You can’t have an encrypted Read Replica of an unencrypted DB instance or an unencrypted Read Replica of an encrypted DB instance.
  • Encrypted Read Replicas must be encrypted with the same key as the source DB instance.
  • You can’t restore an unencrypted backup or snapshot to an encrypted DB instance.
  • To copy an encrypted snapshot from one AWS Region to another, you must specify the KMS key identifier of the destination AWS Region. This is because KMS encryption keys are specific to the AWS Region that they are created in. The source snapshot remains encrypted throughout the copy process. AWS Key Management Service uses envelope encryption to protect data during the copy process. For more information about envelope encryption, see Envelope Encryption.
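The replica and restore constraints in the list above can be captured as two small predicates (a sketch; plain booleans stand in for instance and snapshot encryption state):

```python
def replica_allowed(source_encrypted: bool, replica_encrypted: bool,
                    same_key: bool = True) -> bool:
    """A Read Replica must match the source's encryption state, and an
    encrypted replica must use the same key as the source instance."""
    if source_encrypted != replica_encrypted:
        return False
    return same_key if source_encrypted else True

def restore_allowed(snapshot_encrypted: bool, target_encrypted: bool) -> bool:
    """You can't restore an unencrypted backup/snapshot to an
    encrypted DB instance."""
    return not (target_encrypted and not snapshot_encrypted)

print(replica_allowed(True, False))       # False: mixed encryption states
print(restore_allowed(False, True))       # False: unencrypted snapshot -> encrypted instance
print(replica_allowed(True, True, True))  # True: encrypted, same key
```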

RDS IAM DB Authentication

IAM database authentication provides the following benefits:

  • Network traffic to and from the database is encrypted using Secure Sockets Layer (SSL).
  • You can use IAM to centrally manage access to your database resources, instead of managing access individually on each DB instance.
  • For applications running on Amazon EC2, you can use profile credentials specific to your EC2 instance to access your database instead of a password, for greater security.



RDS updates

  • New script migration tool from Windows to Linux. You can use it on any virtual machine hosted in AWS or located in your datacenter, as long as SQL Server is 2008 or above, and import to an EC2 instance running SQL Server 2016.
  • Import S3 data into PostgreSQL: csv, text, or binary formats are supported via PostgreSQL’s COPY command.
  • Storage AutoScaling:
    • Automatically scales storage capacity with zero downtime. Applies to MySQL, PostgreSQL, Oracle, and MariaDB. All you have to do is set a maximum storage limit, and autoscaling will monitor storage consumption and scale when required.
    • Can be enabled on existing or new RDS instances at no additional cost.
  • RDS now available on VMware
  • Deletion protection for Aurora and RDS
  • Publish RDS PostgreSQL logs to CloudWatch

AWS Redshift

Redshift is optimized for batched write operations and reading high volumes of data, to minimize I/O and maximize data throughput.
  • Data Warehouse
  • OLAP
  • Single node (160 GB)
  • Multi-node:
    • Leader node, manages client connections
    • Compute nodes, up to 128
  • Columnar Data Storage: instead of using rows, AWS Redshift organizes the data by column.
  • Advanced Compression: columnar data can be compressed much more than row-based data because similar data is stored sequentially on disk. Redshift doesn’t require indexes or materialized views, and uses less space than traditional relational database systems. When loading data into a table, AWS Redshift automatically samples your data and selects the most appropriate compression scheme.
  • Massively Parallel Processing: distributes data and query load across all nodes.
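The columnar-storage and compression point can be illustrated with a toy example: storing a column contiguously puts similar values next to each other, so even simple run-length encoding shrinks it (Redshift’s real encodings are more sophisticated):

```python
from itertools import groupby

rows = [("us", 1), ("us", 2), ("us", 3), ("eu", 4), ("eu", 5)]

# Row storage keeps whole records together; columnar storage keeps
# each column contiguous on disk instead.
columns = {"region": [r[0] for r in rows], "id": [r[1] for r in rows]}

def run_length_encode(values):
    """Similar adjacent values compress well: [(value, run_length), ...]"""
    return [(v, len(list(g))) for v, g in groupby(values)]

print(run_length_encode(columns["region"]))  # [('us', 3), ('eu', 2)]
```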


Pricing:
  • Compute node hours (the total number of hours you run across all your nodes; billed at 1 unit per node per hour. Leader node hours are not charged, just compute nodes.)
  • Data transfer (only within a VPC)
Security:
  • Encrypted in transit using SSL
  • Encrypted at rest using AES-256
  • By default Redshift takes care of key management; you can also manage your own keys through an HSM or AWS Key Management Service
Availability:
  • Currently available in 1 AZ only
  • Can restore snapshots to new AZs in the event of an outage

VPC Enhanced Routing

When you use Amazon Redshift Enhanced VPC Routing, Amazon Redshift forces all COPY and UNLOAD traffic between your cluster and your data repositories through your Amazon VPC. By using Enhanced VPC Routing, you can use standard VPC features, such as VPC security groups, network access control lists (ACLs), VPC endpoints, VPC endpoint policies, internet gateways, and Domain Name System (DNS) servers.

You use these features to tightly manage the flow of data between your Amazon Redshift cluster and other resources. When you use Enhanced VPC Routing to route traffic through your VPC, you can also use VPC flow logs to monitor COPY and UNLOAD traffic. If Enhanced VPC Routing is not enabled, Amazon Redshift routes traffic through the Internet, including traffic to other services within the AWS network.



Redshift updates

  • Automatic workload management for Redshift now makes it easy to maximize query throughput and get consistent performance for your most demanding analytics workloads. Automatic workload management (WLM) uses machine learning to dynamically manage memory and concurrency, helping maximize query throughput. In addition, you can now easily set the priority of your most important queries, even when hundreds of queries are being submitted.

AWS Aurora

  • 5x better performance than MySQL
  • 10 GB of allocated space at start, growing in 10 GB increments up to 64 TB (auto-scaling storage!)
  • Can scale up to 32 vCPUs and 244 GB of RAM; if you scale, there is a maintenance window of a couple of minutes
  • 2 copies of your data in each availability zone, with a minimum of 3 availability zones: 6 copies of your data
  • Designed to handle the loss of up to two copies of data without affecting database write availability, and up to three copies without affecting read availability
  • Storage is self-healing
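The durability figures above correspond to quorum arithmetic over the six copies: writes need 4 of 6 and reads need 3 of 6 (the quorum sizes are Aurora’s published design, sketched here as plain arithmetic):

```python
COPIES = 6
WRITE_QUORUM = 4   # writes need 4 of the 6 copies
READ_QUORUM = 3    # reads need 3 of the 6 copies

def write_available(lost_copies: int) -> bool:
    """Writes survive as long as a write quorum of copies remains."""
    return COPIES - lost_copies >= WRITE_QUORUM

def read_available(lost_copies: int) -> bool:
    """Reads survive as long as a read quorum of copies remains."""
    return COPIES - lost_copies >= READ_QUORUM

print(write_available(2))  # True:  loses two copies, writes still work
print(write_available(3))  # False: three lost copies break the write quorum
print(read_available(3))   # True:  reads survive losing three copies
```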


2 types of replicas are available:
  • Aurora Replicas, up to 15 (basically a separate database kept in sync with your original database)
  • MySQL Read Replicas, up to 5


In case of a problem with the Aurora master, you’ll fail over automatically to your Aurora Replica.
Failover is automatically handled by Amazon Aurora so that your applications can resume database operations as quickly as possible without manual administrative intervention.
If you have an Amazon Aurora Replica in the same or a different Availability Zone, when failing over, Amazon Aurora flips the canonical name record (CNAME) for your DB Instance to point at the healthy replica, which in turn is promoted to become the new primary. Start-to-finish, failover typically completes within 30 seconds.
If you do not have an Amazon Aurora Replica (i.e. single instance), Aurora will first attempt to create a new DB Instance in the same Availability Zone as the original instance. If unable to do so, Aurora will attempt to create a new DB Instance in a different Availability Zone. From start to finish, failover typically completes in under 15 minutes.
Aurora runs on a virtual machine located in one availability zone, for example eu-west-1c; if something fails, you fail over to an Aurora Replica.
DNS failover of the cluster endpoint is automatic.

Backup retention

from 1 to 35 days

Custom Endpoints

Imagine a scenario where there is a requirement to optimize the database workloads in your cluster: you have to direct the write operations of the production traffic to your high-capacity instances, and point the reporting queries sent by your internal staff to the low-capacity instances. How do you achieve that? With custom endpoints.

The custom endpoint provides load-balanced database connections based on criteria other than the read-only or read-write capability of the DB instances. For example, you might define a custom endpoint to connect to instances that use a particular AWS instance class or a particular DB parameter group. Then you might tell particular groups of users about this custom endpoint. For example, you might direct internal users to low-capacity instances for report generation or ad hoc (one-time) querying, and direct production traffic to high-capacity instances.
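A minimal sketch of the routing decision an application might make with custom endpoints (the endpoint DNS names below are hypothetical placeholders, not real cluster endpoints):

```python
# Hypothetical custom endpoints: reporting traffic goes to low-capacity
# instances, production writes go to high-capacity instances.
ENDPOINTS = {
    "reporting":  "reporting.cluster-custom-example.eu-west-1.rds.amazonaws.com",
    "production": "prod.cluster-custom-example.eu-west-1.rds.amazonaws.com",
}

def endpoint_for(workload: str) -> str:
    """Pick the connection endpoint for a given workload type."""
    if workload not in ENDPOINTS:
        raise ValueError(f"unknown workload: {workload}")
    return ENDPOINTS[workload]

print(endpoint_for("reporting"))   # the low-capacity reporting endpoint
```

The point is that the endpoint, not the application, encodes which group of instances serves which workload; Aurora load-balances among the instances behind each custom endpoint.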


  • RedShift – OLAP
  • RDS storage limits: Aurora 64 TiB, MariaDB 16 TB
  • Maximum storage of an RDS volume
  • SG for RDS: port and protocol number
  • Maximum IOPS capacity of an RDS instance is 30K IOPS

AWS DynamoDB

See the following post:


AWS Config

Allows you to assess, audit, and evaluate the configuration of your AWS resources.

Update 2019

You can now execute remediation actions to address non-compliant resources. For example, you can create an AWS Config rule to check whether any of your S3 buckets allow public access, associate a remediation action, and disable public access automatically.
– AWS Config now supports API Gateway. You can now track changes to your API Gateway configuration when something stops working: stage configurations, throttling and access logs, as well as the configuration of the API itself, such as the endpoint configuration, the version, or even the protocol used.

AWS QuickSight

Amazon QuickSight is an agile business analytics service for building visualizations, performing ad hoc analysis, and quickly getting business insights from your data. QuickSight continuously discovers AWS data sources, enables organizations to scale to hundreds of thousands of users, and delivers agile, responsive query performance through a robust in-memory engine (SPICE).
In short: BI, a business intelligence tool which enables you to analyze business data with interactive dashboards.
