AWS Solutions Architect Associate (SAA) 2018 – Part I

In this post I will leave some notes I took while studying for the AWS SAA. I use Evernote to keep notes, but over time I have decided to go back to the blog, since it is a better way to keep my notes up to date. I will update the post little by little. The notes are in English because that is how I took the course.

I take the definitions of the different services either from the AWS documentation or from the comments of the instructor of the course I took.

 Topics covered: S3

AWS S3

  • By default, up to 100 buckets per account
  • Files from 0 bytes to 5 TB
  • Unlimited storage
  • Files are stored in buckets (a bucket is similar to a “folder”)
  • Bucket names must be globally unique
    • https://eu-west-1.amazonaws.com/the-name-of-my-bucket
  • When you upload a file to S3 you will receive an HTTP 200 code
  • Supports Versioning
  • Supports Encryption
  • Lifecycle Management
  • Secure your data – ACL
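
As a quick illustration of the basics with the AWS CLI (the bucket name my-notes-bucket is just a placeholder; bucket names must be globally unique):

# create a bucket in eu-west-1
aws s3 mb s3://my-notes-bucket --region eu-west-1

# upload a file; a successful PUT returns HTTP 200
aws s3 cp notes.txt s3://my-notes-bucket/notes.txt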

AWS S3 DATA CONSISTENCY

  • Read after write consistency for PUTS of new objects: you can read a new file in S3 right after uploading it
  • Eventual consistency for overwrite PUTS and DELETES: changes can take some time to propagate

AWS S3 KEY VALUE

S3 is object based. Objects consist of the following:

  • Key, name of the object
  • Value, the data
  • Version ID, important for the versioning of the object
  • Metadata
  • Subresources
    • bucket policies
    • access control list (ACL)
    • Cross-Origin Resource Sharing (CORS)


AWS S3 Storage class tier and availability

S3

  • AWS S3 was built to deliver 99.99% availability
  • Guarantees 99.9% availability (SLA)
  • Guarantees 99.999999999% durability of S3 objects (11 x 9)
  • Data is stored redundantly across multiple devices in multiple facilities
  • Designed to sustain the loss of two facilities concurrently
  • More cost effective than using S3 RRS

S3 – IA (Infrequently Accessed)

  • Lower fee than S3 Standard, but you are charged a retrieval fee
  • Same low latency and high throughput performance of Standard
  • Designed for durability of 99.999999999% of objects (11 x 9 )
  • Designed for 99.9% availability over a given year

S3 – One Zone IA (Infrequently Accessed One Zone)

  • Same as IA, however:

    • Data is stored in a single availability zone
    • Guarantees 99.999999999% durability (11 x 9)
    • Guarantees 99.5% availability
    • Cost is 20% less than regular S3 IA
    • The S3 One Zone-IA storage class is set at the object level and can exist in the same bucket as S3 Standard and S3 Standard-IA, allowing you to use S3 Lifecycle Policies to automatically transition objects between storage classes without any application changes.
    • Remember that since the S3 One Zone-IA stores data in a single AWS Availability Zone, data stored in this storage class will be lost in the event of Availability Zone destruction.

      The STANDARD_IA and ONEZONE_IA storage classes are suitable for objects larger than 128 KB that you plan to store for at least 30 days. If an object is less than 128 KB, Amazon S3 charges you for 128 KB. If you delete an object before the 30-day minimum, you are charged for 30 days.
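
As a small sketch (the bucket name my-notes-bucket is a placeholder), the storage class is chosen per object at upload time, for example with the AWS CLI:

# upload straight into Standard-IA
aws s3 cp backup.tar.gz s3://my-notes-bucket/backup.tar.gz --storage-class STANDARD_IA

# or into One Zone-IA for data you could recreate if an Availability Zone were lost
aws s3 cp rebuildable.tar.gz s3://my-notes-bucket/rebuildable.tar.gz --storage-class ONEZONE_IA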

 

S3 – RRS (Reduced Redundancy Storage)

  • Guarantees 99.99% durability
  • Guarantees 99.99% availability
  • Used for data that can be recreated if lost
  • *** AWS no longer recommends using this storage class ***

S3 – Glacier

  • Very cheap, use only for archival
  • Takes 3-5 hours to restore from Glacier
  • $0.01 per gigabyte
  • Range Retrieval allows you to retrieve only specified byte ranges. You pay only for the actual data retrieved
  • Retrieval data:
    • Expedited:
      • Expedited Retrieval can be used for occasional requests and typically, data is retrieved between 1-5 minutes (for files < 250 MB).
      • However, the expedited retrieval request is accepted by Glacier only if there is capacity available. If capacity is not available, Glacier will reject the request. To guarantee expedited retrieval availability, you can purchase provisioned capacity
    • Standard:
      • Standard retrievals typically complete within 3-5 hours
    • Bulk:
      • Bulk is the lowest-cost retrieval option and typically completes within 5-12 hours; it can be used to cost-effectively retrieve large amounts of data
  • Data stored in Amazon Glacier is protected by default; only vault owners have access to the Amazon Glacier resources they create.
  • Glacier automatically encrypts using AES 256. It handles the key management for you
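
A minimal sketch of restoring an object that has been transitioned to the Glacier storage class, using the AWS CLI (bucket and key are placeholders; the tier can be Expedited, Standard or Bulk):

# ask Glacier for a temporary copy, available for 7 days, using the Standard tier
aws s3api restore-object \
  --bucket my-notes-bucket \
  --key archive/2017-logs.tar.gz \
  --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'

# check whether the restore has finished
aws s3api head-object --bucket my-notes-bucket --key archive/2017-logs.tar.gz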

AWS S3 CHARGES

  • storage per GB
  • requests (GET, PUT, COPY, etc.)
  • storage management pricing
    • inventory, tags
  • data transfer pricing
    • data transferred out of S3 (data in is free)
    • Transferring data from an EC2 instance to Amazon S3, Amazon Glacier, Amazon DynamoDB, Amazon SES, Amazon SQS, or Amazon SimpleDB in the same AWS Region has no cost at all.
  • Transfer Acceleration

AWS S3 Multipart

With S3 Multipart Upload you can upload objects up to a maximum size of 5 TB, with part sizes from 5 MB to 5 GB (the last part can be smaller than 5 MB).
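
For illustration, the low-level multipart calls look roughly like this with the AWS CLI (in practice aws s3 cp already splits large files into parts automatically; all names and the UploadId are placeholders):

# 1. start the multipart upload and note the UploadId it returns
aws s3api create-multipart-upload --bucket my-notes-bucket --key big-file.bin

# 2. upload each part (5 MB to 5 GB, except the last one)
aws s3api upload-part --bucket my-notes-bucket --key big-file.bin \
  --part-number 1 --body part1.bin --upload-id "<UploadId>"

# 3. assemble the parts into the final object (parts.json lists each part's ETag and number)
aws s3api complete-multipart-upload --bucket my-notes-bucket --key big-file.bin \
  --upload-id "<UploadId>" --multipart-upload file://parts.json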

AWS S3 encryption

There are these options available:
  • Client-side encryption: you encrypt the data on your own machine and then upload it.
  • Server side encryption
    • SSE-S3: AWS manages both the data key and the master key, cheaper than SSE-KMS. Every object is encrypted and there is an additional safeguard: Amazon encrypts the key itself with the master key and regularly rotates the master key. Amazon handles all the keys for you; you don’t have to worry about it.
    • SSE-KMS: AWS manages the data key and you manage the master key, more expensive than SSE-S3
      • Additional audit trail: who used the key, when and where
      • Additional level of transparency: who is decrypting what and when
      • You can use the default key or generate a new one
    • SSE-C: you manage both the data key and the master key
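
A quick sketch of requesting server-side encryption per object with the AWS CLI (bucket, key and KMS key id are placeholders):

# SSE-S3: S3 manages the keys (AES-256)
aws s3 cp secrets.txt s3://my-notes-bucket/secrets.txt --sse AES256

# SSE-KMS: use a KMS master key you control
aws s3 cp secrets.txt s3://my-notes-bucket/secrets.txt --sse aws:kms --sse-kms-key-id <kms-key-id>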

If you want to enforce the use of encryption in your bucket, use an S3 Bucket Policy to deny PUT requests that don’t include the x-amz-server-side-encryption request header.
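
A minimal sketch of such a bucket policy (the bucket name is a placeholder); it denies any PutObject request that arrives without the x-amz-server-side-encryption header:

cat > enforce-sse.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUnencryptedPuts",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-notes-bucket/*",
      "Condition": {
        "Null": {"s3:x-amz-server-side-encryption": "true"}
      }
    }
  ]
}
EOF
aws s3api put-bucket-policy --bucket my-notes-bucket --policy file://enforce-sse.json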

AWS S3 versioning

  • S3 stores all versions of an object, even deleted ones (including all writes, and even if you delete an object)
  • Great backup tool
  • Versioning cannot be disabled, only suspended!
  • Integrated with lifecycle rules
  • Versioning’s MFA Delete capability provides an extra layer of security
!!! Only the owner of an Amazon S3 bucket can permanently delete a version !!!
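
A minimal sketch with the AWS CLI (bucket name is a placeholder):

# turn versioning on (afterwards it can only be suspended, never disabled)
aws s3api put-bucket-versioning --bucket my-notes-bucket \
  --versioning-configuration Status=Enabled

# list every object version, including delete markers
aws s3api list-object-versions --bucket my-notes-bucket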

Tips Versioning Cross Replication

You use the Amazon S3 console to add replication rules to the source bucket. Replication rules define which source bucket objects to replicate and the destination bucket where the replicated objects are stored. You can create rules to replicate all the objects in a bucket or a subset of objects with specific key name prefixes (that is, objects that have names that begin with a common string). A destination bucket can be in the same AWS account as the source bucket, or it can be in a different account. The destination bucket must always be in a different Region than the source bucket.

  • Versioning must be enabled on both the source and destination
  • Regions must be unique
  • Files in an existing bucket are not replicated automatically.
  • You cannot replicate to multiple buckets
  • Delete markers are replicated
  • Deleting individual versions or delete markers will not be replicated
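
A minimal sketch of a cross-region replication setup with the AWS CLI, assuming both buckets already exist with versioning enabled and that an IAM role for replication has been created (all names and ARNs are placeholders):

cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/s3-crr-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"}
    }
  ]
}
EOF
aws s3api put-bucket-replication --bucket my-source-bucket \
  --replication-configuration file://replication.json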

Lifecycles

The expire action retains the current version as a previous version and places a delete marker as the current version.
If you want to permanently delete the previous versions, combine the expire action with the permanently delete previous version actions.
  • Can be used in conjunction with versioning
  • Can be applied to the current version and previous versions
  • Transition to IA (Infrequent Access): objects must be at least 128 KB and at least 30 days past the creation date
  • Move to Glacier 30 days after IA (minimum 60 days after creation)
  • Permanently delete
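
A minimal sketch of a lifecycle configuration matching the list above (bucket name and day counts are placeholders): transition to IA after 30 days, to Glacier 30 days later, and expire after a year:

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "archive-old-objects",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [
        {"Days": 30, "StorageClass": "STANDARD_IA"},
        {"Days": 60, "StorageClass": "GLACIER"}
      ],
      "Expiration": {"Days": 365}
    }
  ]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket my-notes-bucket \
  --lifecycle-configuration file://lifecycle.json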

Webhosting

Here are the prerequisites for routing traffic to a website that is hosted in an Amazon S3 Bucket:
  • An S3 bucket that is configured to host a static website. The bucket must have the same name as your domain or subdomain. For example, if you want to use the subdomain acme.example.com, the name of the bucket must be acme.example.com.
  • A registered domain name. You can use Route 53 as your domain registrar, or you can use a different registrar.
  • Route 53 as the DNS service for the domain. If you register your domain name by using Route 53, we automatically configure Route 53 as the DNS service for the domain.
  • If you need to access some assets that live in a different bucket, remember to use the S3 website URL rather than the regular S3 bucket URL, for example:
    • https://mybucketname.s3-website-eu-west-1.amazonaws.com
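
A minimal sketch with the AWS CLI, assuming the bucket is named after the subdomain and its content is publicly readable:

# enable static website hosting on the bucket
aws s3 website s3://acme.example.com --index-document index.html --error-document error.html

# upload the site content
aws s3 sync ./public s3://acme.example.com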

Events

 The Amazon S3 notification feature enables you to receive notifications when certain events happen in your bucket. To enable notifications, you must first add a notification configuration identifying the events you want Amazon S3 to publish, and the destinations where you want Amazon S3 to send the event notifications.
Amazon S3 supports the following destinations where it can publish events:
  • Amazon Simple Notification Service (Amazon SNS) topic
  • Amazon Simple Queue Service (Amazon SQS) queue
  • AWS Lambda
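
A minimal sketch that publishes object-created events to an SNS topic (topic ARN and bucket name are placeholders; the topic's policy must allow S3 to publish to it):

cat > notification.json <<'EOF'
{
  "TopicConfigurations": [
    {
      "TopicArn": "arn:aws:sns:eu-west-1:123456789012:s3-upload-topic",
      "Events": ["s3:ObjectCreated:*"]
    }
  ]
}
EOF
aws s3api put-bucket-notification-configuration --bucket my-notes-bucket \
  --notification-configuration file://notification.json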

Performance (OUTDATED)***

If you consistently experienced more than 100 PUT/DELETE/LIST requests per second to your bucket, or more than 300 GET requests per second, you probably had to take some action to improve performance, depending on your workload (GET-intensive / not GET-intensive / mixed).

  • Get Intensive Workloads: the best solution is to use CloudFront of course
  • Mix Workloads:
    • the key name of your objects can have an impact on performance
    • S3 uses the key name of the object to determine which partition the object will be stored in
    • sequential key names, prefixed with a timestamp or ordered alphabetically, increase the probability of storing a bunch of objects in the same partition, causing I/O issues
    • adding some randomness to the key name avoids this problem, because S3 will store the objects in different partitions
In 2018 AWS announced a massive improvement in S3 performance, so this guidance is, in practice, no longer needed.
AWS S3 supports up to 3,500 PUT requests per second per prefix
AWS S3 supports up to 5,500 GET requests per second per prefix
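
For completeness, a tiny sketch of the old key-randomization trick (no longer needed after the 2018 improvements): prefixing keys with a short hash keeps them from sorting sequentially, so S3 spreads them across partitions (bucket name is a placeholder).

# derive a 4-character hash prefix from the file name
name="report-2018-05-04.csv"
prefix=$(echo -n "$name" | md5sum | cut -c1-4)
aws s3 cp "$name" "s3://my-notes-bucket/${prefix}-${name}"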

sudo: no tty present and no askpass program specified

A common error when running scripts with sudo as a non-root user. To fix it:

1. On Ubuntu/Debian systems:

#visudo

2. Add a line with the command to be executed:

jenkins ALL=(root) NOPASSWD: /bin/chown -R www-data\:www-data .

3. Or grant it full permissions:

jenkins ALL=(ALL) NOPASSWD: ALL

4. Save the file.
5. We should not see this error any more.
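
To check that the rule works, a quick test: run the command as the jenkins user with sudo -n, which fails immediately instead of prompting if the NOPASSWD rule does not match.

# as the jenkins user (note that the colon is only escaped inside sudoers, not on the command line)
sudo -n /bin/chown -R www-data:www-data .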

s3cmd with an IAM role

s3cmd is a pretty useful binary if you want to do certain tasks with S3 without writing code from scratch. I had a problem with it recently and had to investigate what was going on. List of problems:

I run s3cmd and I get 403 forbidden access

  • first of all, and this is important: if you are managing something inherited, you most likely have a hidden .cfg file with configuration. It is worth changing it, because it may be based on KEYS and you may not see what is going on. To start from scratch you can rename it to a different extension so that, by default, there is no configuration at all
  • define the correct KEYS if you want programmatic access to the resources
  • if you want to work with roles, keep the s3cmd version in mind. If you have an old version it may not work even if you have the correct role
  • create a proper role with access to S3 and attach it to the instance

Example policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::static08",
        "arn:aws:s3:::static08/*"
      ]
    }
  ]
}
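
A minimal check, assuming a reasonably recent s3cmd and an instance profile with the policy above attached to the EC2 instance (the bucket static08 comes from that policy; depending on the version you may still need a dummy config created with s3cmd --configure):

s3cmd --version
# park any legacy key-based config so it does not get in the way
mv ~/.s3cfg ~/.s3cfg.bak 2>/dev/null
# should list the bucket using the instance role, with no keys configured
s3cmd ls s3://static08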

 

Burn windows iso file from mac osx el capitan

After struggling a bit with how to burn a Windows ISO from a Mac, I finally got it right. Basically, forget about just running dd; it won’t work. At least, it didn’t work for me. In fact, all I got when I ran the dd command was a faulty USB with the ISO files on it, unable to start the installation on a laptop. So I had to recover the USB with:

* sudo dd if=/dev/zero of=/dev/disk4 bs=1024k count=2

After that I was able to manage the usb from “Disk Utility”.

You’ll need:

* a Windows ISO file
* unetbootin for Mac
* a USB stick ready for action

What I did was:

* First, open “Disk Utility” and format the USB again with MBR selected
* After that, open unetbootin, select the ISO and burn it to the USB.

This was the key to a successful installation. I set up USB boot in the laptop’s BIOS and then everything worked like a charm.

MacBook reset password

1. Shut down the machine.
2. Power it on while holding COMMAND + R.
3. (It will take longer to boot because it boots from a partition where the recovery tools live.)
4. The usual menu appears, but if we look at the top bar, we can find and click on “Terminal”.
5. In the terminal, type “resetpassword”.
6. Another window will appear to change the password of whichever user we want, and that’s it 🙂

Fixedbyvonnie

Moving WordPress to https

Even though it is a minor topic, I am going to leave a few brief comments on what I had to do; nothing out of this world.

1. Obviously, buy an SSL certificate. For a small personal domain that we only want to encrypt, with no wildcard and no big requirements, a COMODO certificate for €5 a year is enough. In my case the shared hosting and the SSL come from the same provider, so installing it was simple. It varies with each hosting provider, but it is not difficult.

2. Once the installation is done at the hosting level, we have to configure the blog. If we go to Settings > General, we can and must change the WordPress Address (URL) and the Site Address (URL). (If they are greyed out, as in my case, it is because they are set in wp-config.php, and that is the file we must edit.)

3. We make the change and clear the cache if we are using one. It will be strange if we do not run into problems with URLs of the kind “this origin is https but the source is not”. That is most likely a matter of strings in the database pointing to the old (non-https) domain. To deal with that, a free PHP search-and-replace script works perfectly: we upload it to the hosting, configure it and set the strings we want to change, e.g. http://www.midominio.com to https://www.midominio.com (see the wp-cli sketch at the end of this section for an alternative).

4. CDN: if we have CDNs on top of our caching layer, we must also tell them that we have moved the content to https. In my case, CloudFlare comes with a plugin that, given the API KEY, lets you configure settings without having to go to the CloudFlare site itself. The setting that matters here is Automatic HTTPS Rewrites.

By the way, from this same plugin we can also see Google Analytics-style stats. Very useful.

And with this, broadly speaking, we have moved everything to HTTPS. If we have a VPN + SSL on our machine, the connection and browsing already meet fairly decent security standards.
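
As an alternative to the free PHP script, and assuming wp-cli is available on the hosting (an assumption, not what I used originally), the same database replacement can be sketched as:

# dry run first to see how many rows would change
wp search-replace 'http://www.midominio.com' 'https://www.midominio.com' --dry-run

# then run it for real (wp-cli also handles serialized data)
wp search-replace 'http://www.midominio.com' 'https://www.midominio.com'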

Getting the model and serial number of a PC

Sometimes, for whatever reason, we cannot find the model of the hardware we need to repair. It happened to me with a DELL Inspiron; the solution is to open the Windows Start menu and type:

Get the serial number of the machine:

wmic bios get serialnumber

Get the model name of the machine:

wmic csproduct get name

Links
http://www.techdreams.org/tips-tricks/find-model-number-and-serial-number-of-your-computer-using-dos-commands/599-20081208