So yeah: although I’d been using AWS for a good while, I hadn’t been the one doing the actual provisioning. I recently started learning how to configure AWS, and these are my continuing notes.
This post is a whizzbang super-fast careening through EC2/EBS/EFS.
I find EC2 is very much “does what it says on the tin”, so not much to say here except reminders to myself that:
Remember to put #!/bin/bash at the top of your bootstrap (user-data) script!
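A minimal sketch of such a bootstrap script, assuming an Amazon Linux-flavoured instance (the packages installed here are just illustrative):

```shell
#!/bin/bash
# The shebang line above is mandatory: without it, cloud-init
# won't know how to execute the script.
yum update -y
yum install -y httpd
systemctl enable --now httpd
```

You paste this into the "User data" field when launching the instance; it runs once, as root, on first boot.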
So it turns out that if you want to use the AWS CLI (you know, to provision stuff etc.) from inside your EC2 instance, you can’t unless you also provide your AWS credentials. Copying and pasting those to and fro is bad enough, but worse, they get saved (afaics in plain text) inside ~/.aws… which, I’m sure we are all agreed, is very bad. The better way to provide credentials to your instance is to attach an IAM role with the permissions you need:
Creds for the new role instantly propagate to the EC2 instance, and now you can run nifty AWS CLI commands without a credentials file at all.
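For instance, once a role is attached, commands like these should just work from inside the instance (assuming the role grants the relevant permissions; no `aws configure` step needed):

```shell
# Confirms you're running under the instance's role, not pasted-in keys:
aws sts get-caller-identity

# Works only if the attached role grants s3:ListAllMyBuckets:
aws s3 ls
```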
Sometimes, an instance needs to know a bit about itself. Amazon provides this data at a static IP. From inside the instance, do:
> sudo su
> curl http://169.254.169.254/latest/meta-data
This will provide access to a bunch of instance-specific data. If you replace meta-data above with user-data, it’ll dump the startup script used to bootstrap the environment (the one that you supplied, remember?) to the screen.
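A few concrete requests, by way of illustration (individual attributes live at subpaths of meta-data):

```shell
# Specific attributes of this instance:
curl http://169.254.169.254/latest/meta-data/instance-id
curl http://169.254.169.254/latest/meta-data/public-ipv4

# The bootstrap script you supplied at launch:
curl http://169.254.169.254/latest/user-data
```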
These live under EC2 > Elastic Block Store. Don’t foresee (directly) needing these a whole lot in my scenarios, but handy to know that:
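If you do end up needing one from the CLI, a snapshot is a one-liner; a sketch, with a hypothetical volume ID:

```shell
# Snapshot an EBS volume (the volume ID here is made up):
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "nightly backup"
```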
When bootstrapping another VM from an image like this, the virtualization type matters. You generally want HVM (Hardware Virtual Machine) over PV (Paravirtual), for reasons that I admit to snoring through. But generally the types of AMI (images) available to you during EC2 instance creation will be kinda limited with PV. And who wants to be limited? Right?!
If you think I slept through the PV vs HVM argument I promise you I was positively comatose wrt Instance-store-backed VMs.
Elastic File System (EFS) is based on Linux NFS.
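Because it speaks NFS, you mount it like any other NFS share; a sketch, where the filesystem ID and region are hypothetical:

```shell
# Using Amazon's mount helper (on Amazon Linux):
sudo yum install -y amazon-efs-utils
sudo mkdir -p /mnt/efs
sudo mount -t efs fs-0123456789abcdef0:/ /mnt/efs

# Or with a plain NFSv4.1 client:
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs
```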
End of my notes on this.