We've all done it - consulted the Root of All Knowledge (aka "Google") and long since forgotten the results. Do you find yourself searching for the same things repeatedly? I sure do. I've decided once I've done that three or four times it should end up here for ready reference. You may also find my saved installation links useful.
Editing /etc/sudoers (use visudo) and adding an entry such as:
foo ALL=(ALL:ALL) NOPASSWD: ALL
... allows user "foo" to execute ALL commands without a password (not recommended). The final ALL can be replaced with the path to a specific executable for finer-grained control.
Example:
www-data ALL=(ALL:ALL) NOPASSWD: /webthing.bash
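A syntax error in /etc/sudoers can lock you out of sudo entirely, which is the main reason to stick with visudo; it validates the file before installing it. You can also run that check on demand:
sudo visudo -c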
Annoyed by that persistent disconnect every time you step away for a beverage? A simple fix is found in /etc/ssh/sshd_config. Just a few sprays can provide a swift and lasting solution. The two configuration items of note are ClientAliveInterval and ClientAliveCountMax.
ClientAliveInterval is the period of inactivity after which the SSH server sends an alive message to the connected client. ClientAliveCountMax is the number of those messages the server will send without getting a response before it drops the connection.
Ubuntu 22.04, for example, defaults these to 0 and 3 respectively, so no keepalives are sent at all and idle sessions tend to get dropped. Setting ClientAliveInterval to 300 makes the server probe every five minutes, which with ClientAliveCountMax at 3 gives an unresponsive session fifteen minutes before it's cut off. If fifteen minutes isn't enough, raise ClientAliveCountMax further.
ClientAliveInterval 300
ClientAliveCountMax 3
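The new values only take effect once the SSH daemon re-reads its configuration, so restart it after saving (on Ubuntu the unit is called ssh; on some other distributions it's sshd):
sudo systemctl restart ssh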
To disable IPv6 at boot, add ipv6.disable=1 to the GRUB_CMDLINE lines in /etc/default/grub, then regenerate the GRUB configuration and reboot:
sudo nano /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="ipv6.disable=1 quiet splash"
GRUB_CMDLINE_LINUX="ipv6.disable=1"
sudo update-grub
sudo reboot
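After the reboot, a quick sanity check is to look for inet6 addresses; with ipv6.disable=1 on the kernel command line there should be none, not even on loopback, so this should print nothing:
ip a | grep inet6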
To take systemd-resolved out of the picture, disable and stop the service:
sudo systemctl disable systemd-resolved
sudo systemctl stop systemd-resolved
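Keep in mind that /etc/resolv.conf is usually a symlink to the systemd-resolved stub resolver, so with the service stopped name resolution breaks until that file points at a real nameserver. A minimal sketch, using a placeholder address you'd replace with your own resolver:
sudo rm /etc/resolv.conf
echo "nameserver 192.0.2.53" | sudo tee /etc/resolv.conf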
docker ps                          # list running containers
docker image list                  # list local images
docker image rm [image_name]       # remove an image
docker container list              # list containers (add -a to include stopped ones)
docker compose up -d               # start everything in the compose file, detached
docker run -d [image_name]         # run a container from an image, detached
docker exec -it [container] bash   # open an interactive shell inside a running container
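docker compose up -d expects a compose file in the working directory. A minimal sketch; the service name, image and port mapping are just placeholders:
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"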
packer init .                # install the plugins the template requires
packer fmt .                 # format template files in place
packer build [file].pkr.hcl  # build the image described by the template
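For reference, a minimal template the build command could be pointed at; the Docker builder is just an illustrative choice, and the source and image names are placeholders:
packer {
  required_plugins {
    docker = {
      source  = "github.com/hashicorp/docker"
      version = ">= 1.0.0"
    }
  }
}

source "docker" "ubuntu" {
  image  = "ubuntu:22.04"
  commit = true
}

build {
  sources = ["source.docker.ubuntu"]
}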
ansible-inventory -i inventory.yaml --list                # dump the parsed inventory
ansible myhosts -m ping -i inventory.ini                  # ping every host in the myhosts group
ansible-playbook -i inventory.yaml playbook.yaml          # run the playbook
ansible-playbook -i inventory.yaml playbook.yaml --check  # same, but a dry run
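For reference, a minimal inventory.yaml that fits the commands above; the group name matches the myhosts used in the ping example, while the host name and address are placeholders:
myhosts:
  hosts:
    web01:
      ansible_host: 192.0.2.10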
To use AWS EC2 Instance Connect through Ansible, follow these steps.
Edit or create the ~/.ssh/config file and add the following:
Host i-*
    IdentityFile /prod/KEYS/your_ssh_key
    User ubuntu
    ProxyCommand aws ec2-instance-connect open-tunnel --instance-id %h

This configuration tells SSH to use the aws ec2-instance-connect command whenever connecting to an EC2 instance by its instance ID.
Replace /prod/KEYS/your_ssh_key with the path to your actual private key.
Set the correct User (e.g., ubuntu or ec2-user) based on your EC2 instance.
In your Ansible inventory file, list your EC2 instance IDs as the hostnames. For example:
[aws_hosts]
i-0796369b8cd3bfc30

The inventory uses the EC2 instance IDs as the hostnames, which matches the Host i-* rule in your SSH configuration.
In your Ansible playbook, reference the inventory group and set the remote_user to match the SSH configuration.
- hosts: aws_hosts
  remote_user: ubuntu
  tasks:
    - name: Check uptime
      command: uptime
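With those pieces in place, the playbook runs like any other; assuming the inventory above is saved as inventory.ini and the play as playbook.yaml (the same file names used in the earlier examples):
ansible-playbook -i inventory.ini playbook.yaml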
- Revive Stale Bread: If you've got a loaf of bread that's gone a bit hard, sprinkle it with water and put it in the oven for a few minutes. It'll come out almost as good as fresh.
