How to Install Podman Locally with Vagrant?

How I came to consider this solution

It’s very simple: I wanted to try an alternative to Docker Desktop, especially considering their pricing model, but also to have a consistent set of tools on Windows, macOS, and Linux.

I’ll be 100% honest: for now, I’ve only tested everything on Windows. I haven’t had the time to start up an Ubuntu machine, and I’m waiting for my bank to approve the purchase of a new MacBook Pro with an M2 chip.

Technical Stack

  • VirtualBox ➡️ There are quite a few hypervisors on the market. I’ve already tried WSL2, but this time I was looking for something compatible with Windows, Linux, and macOS (I know, I’m weird) and with Vagrant.
  • Vagrant ➡️ It’s 2023, and I’ve made some good resolutions. I’ve never used Vagrant before, but I like HashiCorp. They make good products, so I felt I should see what Vagrant can do for me, because until now I hadn’t seen the value of the tool (for my use case).
  • Ansible ➡️ Well, it’s a staple tool for automating infrastructure. Vagrant offers several provisioners, and I like Ansible and wanted a generic tool.
  • Podman ➡️ If we take a look at the alternatives to Docker Desktop… there aren’t many options. Podman is often mentioned, so I’m going to give it a chance (I also want to test containerd). Podman comes with a “Podman Desktop” that does (almost) the same thing as Docker Desktop, but it’s much more interesting to get hands-on and build everything yourself to validate the tool.

So we’ll end up with a nice virtual machine under VirtualBox with Podman installed on it, plus a remote command that lets us manage Podman from our client machine, all built using Vagrant and Ansible.
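
The Ansible provisioning itself lives in the repository and isn’t reproduced in this article. As a rough idea of what it has to do on the Fedora guest, a minimal playbook might look like the sketch below (purely illustrative: the task layout, file paths, and key file name are my assumptions, not the repository’s actual content).

# Illustrative sketch only, not the repository's actual playbook
- hosts: podman
  become: true
  tasks:
    - name: Install Podman
      ansible.builtin.dnf:
        name: podman
        state: present

    - name: Enable the rootful Podman API socket (targeted later via CONTAINER_HOST)
      ansible.builtin.systemd:
        name: podman.socket
        state: started
        enabled: true

    - name: Authorise the client public key for root (used by Podman Remote over SSH)
      ansible.posix.authorized_key:
        user: root
        key: "{{ lookup('file', 'files/podman.pub') }}"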

Prerequisites

Chocolatey ➡️ I use Chocolatey as a package manager on Windows to maintain my applications, so I’ll use it to install the tools I need (https://chocolatey.org/install).

VirtualBox ➡️ We’ll install VirtualBox and validate the installed version.

Administrator ~
❯❯ choco install virtualbox

Administrator ~
❯❯ vboxmanage --version

7.0.6r155176

Vagrant ➡️ The same goes for Vagrant.

Administrator ~
❯❯ choco install vagrant

Administrator ~
❯❯ vagrant --version

Vagrant 2.3.4
Note 1
I am in an elevated PowerShell terminal.

Install Podman Remote

Podman Remote allows us to remotely manage our future Podman installation from a client machine.

Here, we will set the version to install, download the archive into a “.bin” folder in our home folder (create it first if it doesn’t exist), extract it there, and update the Path environment variable to include this new “.bin” folder.

As a final sanity check, we can verify that the “podman” command is recognised.

tdesa ~
❯❯ $PodmanVersion = "4.4.2"

tdesa ~
❯❯ Invoke-WebRequest -Uri "https://github.com/containers/podman/releases/download/v$PodmanVersion/podman-remote-release-windows_amd64.zip" -OutFile "$Home\.bin\podman.zip"

tdesa ~
❯❯ Expand-Archive -Path "$Home\.bin\podman.zip" -DestinationPath "$Home\.bin"

tdesa ~
❯❯ Move-Item -Path "$Home\.bin\podman-$PodmanVersion\usr\bin\*" -Destination "$Home\.bin" -Force

tdesa ~
❯❯ $Path = [Environment]::GetEnvironmentVariable("PATH", "User") + ";$Home\.bin"

tdesa ~
❯❯ [Environment]::SetEnvironmentVariable("Path", $Path, "User")
tdesa ~
❯❯ podman --version
podman.exe version 4.4.2

Configure VirtualBox network

Using VBoxManage, we will create a host-only interface and a dedicated network for our VMs, reachable from our client machine, and enable the built-in DHCP server on a given range of IPs.

tdesa ~
❯❯ VBoxManage hostonlyif create

0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
Interface 'VirtualBox Host-Only Ethernet Adapter' was successfully created

tdesa ~
❯❯ VBoxManage list hostonlyifs

Name:            VirtualBox Host-Only Ethernet Adapter
GUID:            f8cdbc9e-3610-496d-8c29-f3d2bacf1149
DHCP:            Disabled
IPAddress:       192.168.38.1
NetworkMask:     255.255.255.0
IPV6Address:     fe80::1bdd:d0f1:2803:5ca7
IPV6NetworkMaskPrefixLength: 64
HardwareAddress: 0a:00:27:00:00:10
MediumType:      Ethernet
Wireless:        No
Status:          Up
VBoxNetworkName: HostInterfaceNetworking-VirtualBox Host-Only Ethernet Adapter

tdesa ~
❯❯ VBoxManage hostonlyif ipconfig "VirtualBox Host-Only Ethernet Adapter" --ip="10.10.0.254" --netmask="255.255.255.0"

tdesa ~
❯❯ VBoxManage dhcpserver add --interface="VirtualBox Host-Only Ethernet Adapter" --server-ip="10.10.0.253" --lower-ip=10.10.0.100 --upper-ip=10.10.0.200 --netmask=255.255.255.0 --enable
Note 1
The network will be on a subnet of 10.10.0.0/24, and we’ll set the interface to the IP address 10.10.0.254 and the DHCP server to 10.10.0.253 (IP range from 10.10.0.100 to 10.10.0.200).

Set up the infrastructure

Let’s retrieve the Git repository containing our Vagrant and Ansible sources to build the Podman infrastructure.

tdesa ~\Repositories\GitLab
❯❯ Set-Location -Path "C:\Users\tdesa\Repositories\thibault.desaules@devoteam.com\GitLab"

tdesa Repositories\thibault.desaules@devoteam.com\GitLab
❯❯ git clone https://gitlab.com/thibault.desaules/dear-vagrant-please-build-my-podman-local-environment.git

Cloning into 'dear-vagrant-please-build-my-podman-local-environment'...
remote: Enumerating objects: 69, done.
remote: Counting objects: 100% (69/69), done.
remote: Compressing objects: 100% (39/39), done.
remote: Total 69 (delta 3), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (69/69), 9.12 KiB | 4.56 MiB/s, done.
Resolving deltas: 100% (3/3), done.

tdesa Repositories\thibault.desaules@devoteam.com\GitLab
❯❯ Set-Location -Path ".\dear-vagrant-please-build-my-podman-local-environment"

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant status

Current machine states:

coredns                   not created (virtualbox)
podman                    not created (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.

We install a small Vagrant plugin, vagrant-vbguest, which manages the VirtualBox Guest Additions inside the guests.

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant plugin install vagrant-vbguest

Installing the 'vagrant-vbguest' plugin. This can take a few minutes...
Installed the plugin 'vagrant-vbguest (0.31.0)'!

Then we launch our infrastructure via Vagrant (and we can go get a small coffee).

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant up

Bringing machine 'coredns' up with 'virtualbox' provider...
Bringing machine 'podman' up with 'virtualbox' provider...

[...]

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant status

Current machine states:

coredns                   running (virtualbox)
podman                    running (virtualbox)

This environment represents multiple VMs. The VMs are all listed
above with their current state. For more information about a specific
VM, run `vagrant status NAME`.
Note 1
The repository contains, on the “ansible” side, the public/private key pair that I use. Since this is a local environment on my machine, I have no problem sharing it, but you can also generate your own (before launching Vagrant):
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ ssh-keygen -t ed25519 -f .\ansible\podman\files\podman

[...]
Note 2
Some useful Vagrant commands.
# to install or update a Vagrant plugin
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant plugin install vagrant-vbguest
or
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant plugin update vagrant-vbguest
# to launch the Vagrant infrastructure based on the current Vagrantfile
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant up
# to replay the provisioner (in our case, the Ansible part)
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant reload --provision
or
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant provision
# to update the Vagrant base box defined in the Vagrantfile
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant box update
# to get the Vagrant status
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant status
# to connect to a machine (add the machine name after ssh)
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant ssh
# to stop the Vagrant machines
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant halt
# to destroy the whole infrastructure
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant destroy
Note 3
Sometimes the Fedora box seems to have some anomalies.
The error output from the command was:
/sbin/mount.vboxsf: mounting failed with the error: No such device
I haven’t finished investigating yet, but restarting the infrastructure solves it.
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant halt

==> podman: Attempting graceful shutdown of VM...
==> coredns: Attempting graceful shutdown of VM...

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant up

Bringing machine 'coredns' up with 'virtualbox' provider...
Bringing machine 'podman' up with 'virtualbox' provider...
[...]

Confirm Podman install is working

We will check that Podman is correctly installed by connecting to the machine created with Vagrant.

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ vagrant ssh podman


[vagrant@podman ~]$ podman --version

podman version 4.4.2

We can confirm that we are able to run a container in Podman’s different modes, starting with “rootless”.

We can check the container IDs to confirm that the two modes use separate container stores.

[vagrant@podman ~]$ podman run quay.io/podman/hello

Trying to pull quay.io/podman/hello:latest...
Getting image source signatures
Copying blob 2043a634439c done
Copying config f3ff8fb4ce done
Writing manifest to image destination
Storing signatures
!... Hello Podman World ...!

         .--"--.
       / -     - \
      / (O)   (O) \
   ~~~| -=(,Y,)=- |
    .---. /`  \   |~~
 ~/  o  o \~~~~.----. ~~
  | =(X)= |~  / (O (O) \
   ~~~~~~~  ~| =(Y_)=-  |
  ~~~~    ~~~|   U      |~~

Project:   https://github.com/containers/podman
Website:   https://podman.io
Documents: https://docs.podman.io
Twitter:   @Podman_io
[vagrant@podman ~]$ podman ps --all --format json | jq '.[] | .Id, .Image'

"17413b2b58e6358a7202ed2d5d216c7de0b916a14bd0494a1e052f17de98d2e5"
"quay.io/podman/hello:latest"

Then “rootful”.

[vagrant@podman ~]$ sudo podman run quay.io/podman/hello

Trying to pull quay.io/podman/hello:latest...
Getting image source signatures
Copying blob 2043a634439c done
Copying config f3ff8fb4ce done
Writing manifest to image destination
Storing signatures
!... Hello Podman World ...!

         .--"--.
       / -     - \
      / (O)   (O) \
   ~~~| -=(,Y,)=- |
    .---. /`  \   |~~
 ~/  o  o \~~~~.----. ~~
  | =(X)= |~  / (O (O) \
   ~~~~~~~  ~| =(Y_)=-  |
  ~~~~    ~~~|   U      |~~

Project:   https://github.com/containers/podman
Website:   https://podman.io
Documents: https://docs.podman.io
Twitter:   @Podman_io
[vagrant@podman ~]$ sudo podman ps --all --format json | jq '.[] | .Id, .Image'

"92f180fa12961d1b71ce42832c04bc46c57509fd0bc1ae5ca7145e9b8462c517"
"quay.io/podman/hello:latest"

This time, we will do the same thing but directly from our client machine. 

For this, we need to configure some environment variables for Podman Remote and launch the same image.

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ [Environment]::SetEnvironmentVariable("CONTAINER_HOST", "ssh://root@10.10.0.20:22/run/podman/podman.sock", "User")

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ $env:CONTAINER_HOST = [System.Environment]::GetEnvironmentVariable("CONTAINER_HOST", "User")

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ [Environment]::SetEnvironmentVariable("CONTAINER_SSHKEY", "C:\Users\tdesa\Repositories\thibault.desaules@devoteam.com\GitLab\dear-vagrant-please-build-my-podman-local-environment\ansible\podman\files\podman", "User")

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ $env:CONTAINER_SSHKEY = [System.Environment]::GetEnvironmentVariable("CONTAINER_SSHKEY", "User")

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman run quay.io/podman/hello

!... Hello Podman World ...!

         .--"--.
       / -     - \
      / (O)   (O) \
   ~~~| -=(,Y,)=- |
    .---. /`  \   |~~
 ~/  o  o \~~~~.----. ~~
  | =(X)= |~  / (O (O) \
   ~~~~~~~  ~| =(Y_)=-  |
  ~~~~    ~~~|   U      |~~

Project:   https://github.com/containers/podman
Website:   https://podman.io
Documents: https://docs.podman.io
Twitter:   @Podman_io
tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman ps --all --format json | ConvertFrom-Json | Foreach-Object { $_ | Select-Object -Property Id,Image }
Id                                                               Image
--                                                               -----
92f180fa12961d1b71ce42832c04bc46c57509fd0bc1ae5ca7145e9b8462c517 quay.io/podman/hello:latest
31ba227a4a44c7c3e30f326c8a8538cffb017a9224d1e45b410c16b872ea59d7 quay.io/podman/hello:latest

We do get two container IDs: the rootful container launched earlier on the VM and the one we just launched remotely (the connection goes through the rootful socket as root, so the rootless container doesn’t show up).

Let’s play with Podman Compose

What would Docker be without “docker-compose”? Podman has its own equivalent, “podman-compose”, so let’s start by installing it.

Administrator ~
❯❯ pip3 install podman-compose

WARNING: Ignoring invalid distribution ~ (C:\Python311\Lib\site-packages)
Collecting podman-compose
  Using cached podman_compose-1.0.3-py2.py3-none-any.whl (27 kB)
Requirement already satisfied: pyyaml in c:\users\tdesa\appdata\roaming\python\python311\site-packages (from podman-compose) (6.0)
Requirement already satisfied: python-dotenv in c:\users\tdesa\appdata\roaming\python\python311\site-packages (from podman-compose) (0.21.1)
WARNING: Ignoring invalid distribution ~ (C:\Python311\Lib\site-packages)
Installing collected packages: podman-compose
Successfully installed podman-compose-1.0.3

Let’s verify the installation and launch a small container.

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman-compose --version

['podman', '--version', '']
using podman version: 4.4.2
podman-composer version  1.0.3
podman --version
podman version 4.4.2
exit code: 0

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman-compose -f ".\docker-compose\docker-compose.yml" up

['podman', '--version', '']
using podman version: 4.4.2
** excluding:  set()
['podman', 'network', 'exists', 'docker-compose_default']
podman create --name=docker-compose_whoami_1 --label io.podman.compose.config-hash=123 --label io.podman.compose.project=docker-compose --label io.podman.compose.version=0.0.1 --label com.docker.compose.project=docker-compose --label com.docker.compose.project.working_dir=C:\Users\tdesa\Repositories\thibault.desaules@devoteam.com\GitLab\dear-vagrant-please-build-my-podman-local-environment\docker-compose --label com.docker.compose.project.config_files=.\docker-compose\docker-compose.yml --label com.docker.compose.container-number=1 --label com.docker.compose.service=whoami --net docker-compose_default --network-alias whoami -p 80:80/tcp --restart no docker.io/traefik/whoami
Error: creating container storage: the container name "docker-compose_whoami_1" is already in use by 00457988ec94588641390118b03a956b3b53bc7a37e561e95c1dd1f03f62dca6. You have to remove that container to be able to reuse that name: that name is already in use
exit code: 125
podman start -a docker-compose_whoami_1
2023/03/22 19:53:38 Starting up on port 80
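
The docker-compose.yml used here isn’t reproduced in the article. Judging from the output above (a single traefik/whoami service published on port 80), a minimal equivalent could look like the following sketch (illustrative only, not necessarily the repository’s exact file):

version: "3"
services:
  whoami:
    image: docker.io/traefik/whoami
    ports:
      - "80:80"

The “name is already in use” error above simply means the container already existed from a previous run; podman-compose then just starts the existing container, as the last lines of the output show.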

We can confirm that our container is responding.

tdesa ~
❯❯ Invoke-WebRequest -Uri "http://10.10.0.20:80"

StatusCode        : 200
StatusDescription : OK
Content           : Hostname: 00457988ec94
                    IP: 127.0.0.1
                    IP: ::1
                    IP: 10.89.0.3
                    IP: fe80::876:27ff:feb7:f962
                    RemoteAddr: 10.10.0.254:60986
                    GET / HTTP/1.1
                    Host: 10.10.0.20
                    User-Agent: Mozilla/5.0 (Windows NT; Windows NT ...
RawContent        : HTTP/1.1 200 OK
                    Content-Length: 272
                    Content-Type: text/plain; charset=utf-8
                    Date: Wed, 22 Mar 2023 19:54:49 GMT

                    Hostname: 00457988ec94
                    IP: 127.0.0.1
                    IP: ::1
                    IP: 10.89.0.3
                    IP: fe80::876:27ff:feb7...
Forms             : {}
Headers           : {[Content-Length, 272], [Content-Type, text/plain; charset=utf-8], [Date, Wed, 22 Mar 2023 19:54:49 GMT]}
Images            : {}
InputFields       : {}
Links             : {}
ParsedHtml        : mshtml.HTMLDocumentClass
RawContentLength  : 272

Podman without pods? No way!

Podman brings an interesting feature, the ability to manage pods (in the Kubernetes sense). You can check this article if you’re interested in reading more.

We’re going to play with a pod and a few NATS containers (message exchange middleware, a bit like RabbitMQ or other AMQP technologies).

We’ll start by creating our pod, specifying its name and the ports to be published, and checking its creation.

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman pod create `
  --name nats `
  --publish 4222:4222 `
  --publish 8222:8222 `
  --publish 4223:4223 `
  --publish 8223:8223 `
  --publish 4224:4224 `
  --publish 8224:8224

5b854969ceda509ac35f8f192321303738ae5ccff9d532be1368cf05091692a9

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman pod ps

POD ID        NAME        STATUS      CREATED        INFRA ID      # OF CONTAINERS
5b854969ceda  nats        Created     9 seconds ago  1ba3052db951  1

Then we’ll attach the three containers of our NATS cluster to it, each listening on a different port, since the containers of a pod share the same network namespace.

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman run `
  --detach `
  --interactive `
  --tty `
  --name nats-server-0 `
  --pod nats `
  docker.io/nats --port 4222 --cluster_name nats --cluster nats://0.0.0.0:6222 --http_port 8222

7d4eb151332ecdf9eef316bf9ce1217afa9065f54b96b5b3286ff8610ff0cccb

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman run `
  --detach `
  --interactive `
  --tty `
  --name nats-server-1 `
  --pod nats `
  docker.io/nats --port 4223 --cluster_name nats --cluster nats://0.0.0.0:6223 --routes=nats://127.0.0.1:6222 --http_port 8223

a453279165a6b06e06a2b4409e67fa9304fa98c29c919c71d0073022b2eed76e

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman run `
  --detach `
  --interactive `
  --tty `
  --name nats-server-2 `
  --pod nats `
  docker.io/nats --port 4224 --cluster_name nats --cluster nats://0.0.0.0:6224 --routes=nats://127.0.0.1:6222 --http_port 8224

c80930d569ae795e21081484ead201c76c3cd5341a7aafe9f8e549481986215a

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman pod ps

POD ID        NAME        STATUS      CREATED        INFRA ID      # OF CONTAINERS
5b854969ceda  nats        Running     3 minutes ago  1ba3052db951  4

Now let’s test our beautiful NATS cluster that we just deployed. 

For this, we’re going to run another container alongside our pod, one that ships with a NATS client.

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman run `
  --rm `
  --interactive `
  --tty `
  docker.io/natsio/nats-box

             _             _
 _ __   __ _| |_ ___      | |__   _____  __
| '_ \ / _` | __/ __|_____| '_ \ / _ \ \/ /
| | | | (_| | |_\__ \_____| |_) | (_) >  <
|_| |_|\__,_|\__|___/     |_.__/ \___/_/\_\

nats-box v0.13.5

d56a5e0745f0:~# nats --version
v0.0.35

For this test, we’re going to validate the publish-subscribe functionality, which lets us subscribe to a “subject” in order to receive the messages published to it.

Let’s start by subscribing to the “foo” subject through the first server, then publish messages to that same subject through each server of the cluster.

d56a5e0745f0:~# nats sub -s "nats://10.10.0.20:4222" foo &

20:32:46 Subscribing on foo

d56a5e0745f0:~# nats pub -s "nats://10.10.0.20:4222" foo "Hello World from server .20 with port 4222 to target first container of the pod"

20:34:08 Published 79 bytes to "foo"

[#2] Received on "foo"
Hello World from server .20 with port 4222 to target first container of the pod

d56a5e0745f0:~# nats pub -s "nats://10.10.0.20:4223" foo "Hello World from server .20 with port 4223 to target second container of the pod"

20:34:27 Published 80 bytes to "foo"

[#3] Received on "foo"
Hello World from server .20 with port 4223 to target second container of the pod

d56a5e0745f0:~# nats pub -s "nats://10.10.0.20:4224" foo "Hello World from server .20 with port 4224 to target third container of the pod"

20:34:44 Published 79 bytes to "foo"

[#4] Received on "foo"
Hello World from server .20 with port 4224 to target third container of the pod

To finish, we’ll see how to export the definition of our pod, destroy it, and then re-create it from the exported definition.

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman generate kube nats > .\pods\nats.yml
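
The generated nats.yml isn’t shown in the article. podman generate kube emits a Kubernetes Pod manifest, which for this pod should look roughly like the trimmed sketch below (illustrative: the exact labels, image references, and arguments will differ):

apiVersion: v1
kind: Pod
metadata:
  name: nats
spec:
  containers:
    - name: nats-server-0
      image: docker.io/library/nats:latest
      args:
        - "--port"
        - "4222"
        - "--cluster_name"
        - "nats"
        - "--cluster"
        - "nats://0.0.0.0:6222"
        - "--http_port"
        - "8222"
      ports:
        - containerPort: 4222
          hostPort: 4222
    # nats-server-1 and nats-server-2 follow the same pattern with their own ports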

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman pod stop nats

5b854969ceda509ac35f8f192321303738ae5ccff9d532be1368cf05091692a9

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman pod rm nats

5b854969ceda509ac35f8f192321303738ae5ccff9d532be1368cf05091692a9

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman pod ps

POD ID      NAME        STATUS      CREATED     INFRA ID      # OF CONTAINERS

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman play kube .\pods\nats.yml

Pod:
df59ef5956b8e18f53934a4164553035283a9ce22985e9ded168786b1aba9972
Containers:
6c9491317cc82dd00663ab339da1fbdcfff26f37dae6c0f775151582ab250917
f0cd529f7844e37c06084fe5bb531bfbd7b79ed918ded3e5685f72c066d61257
058bfdb7d99768bb007db3949ce4e5f4f815635b65a59a0d5154a377095f478a


tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ podman pod ps

POD ID        NAME        STATUS      CREATED        INFRA ID      # OF CONTAINERS
df59ef5956b8  nats        Running     8 seconds ago  be3b932805e9  4

Tadammmmmm!

Conclusion

We’ve come to the end of this article. More information is available in the repository files (especially on the Ansible side, so that you can customise the configuration).

So we’ve seen: how to deploy some infrastructure with Vagrant, how to use Ansible to customise our machines during the creation process, and finally, how to use Podman (podman, podman-compose, and pods) to replace Docker on our machine.

It was pretty cool, wasn’t it?

Optional: Adding our favourite DNS server, CoreDNS

Our infrastructure deployed via Vagrant also started a small DNS server (CoreDNS). We can try to use it.

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ nslookup france.devoteam.com 10.10.0.10

Server:  UnKnown
Address:  10.10.0.10

Non-authoritative answer:
Name:    france.devoteam.com
Address:  20.67.204.165

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ nslookup whoami.desaules.local 10.10.0.10

Server:  UnKnown
Address:  10.10.0.10

Name:    whoami.desaules.local
Address:  10.10.0.20

We just need to set the IP of our CoreDNS server as the DNS server on our VirtualBox host-only interface.

Administrator ~
❯❯ Set-DnsClientServerAddress -InterfaceIndex $((Get-NetAdapter | Where-Object { $_.macAddress -match "^0A-00-27.*$" }).ifIndex) -ServerAddresses ("10.10.0.10")

It’s magic.

tdesa dear-vagrant-[...]-my-podman-local-environment on main via ⍱ v2.3.4
❯❯ ping whoami.desaules.local

Pinging whoami.desaules.local [10.10.0.20] with 32 bytes of data:
Reply from 10.10.0.20: bytes=32 time<1ms TTL=64
Reply from 10.10.0.20: bytes=32 time<1ms TTL=64
Reply from 10.10.0.20: bytes=32 time<1ms TTL=64
Reply from 10.10.0.20: bytes=32 time<1ms TTL=64

Ping statistics for 10.10.0.20:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 0ms, Average = 0ms

We can configure the “desaules.local” zone in the “coredns” folder of the repository (which you can of course adapt).
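
The zone files themselves live in the repository’s “coredns” folder and aren’t reproduced here. As a rough idea, a Corefile that serves the “desaules.local” zone and forwards everything else upstream could look like this sketch (illustrative only, not the repository’s actual configuration):

desaules.local {
    hosts {
        10.10.0.20 whoami.desaules.local
        fallthrough
    }
}

. {
    forward . 1.1.1.1 8.8.8.8
    log
}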

Resources

Chocolatey

Installation: https://chocolatey.org/install

Windows Terminal: https://community.chocolatey.org/packages/microsoft-windows-terminal

VirtualBox: https://community.chocolatey.org/packages/virtualbox

Vagrant: https://community.chocolatey.org/packages/vagrant

Podman Remote

Podman Remote releases: https://github.com/containers/podman/releases/

VirtualBox

VBoxManage docs: https://docs.oracle.com/en/virtualization/virtualbox/6.0/user/user-preface.html

GitLab

Repository with source: https://gitlab.com/thibault.desaules/dear-vagrant-please-build-my-podman-local-environment

Vagrant

Official website: https://www.vagrantup.com/

Documentation: https://developer.hashicorp.com/vagrant/docs

Podman

Official website: https://podman.io/getting-started/

Podman socket activation: https://github.com/containers/podman/blob/main/docs/tutorials/socket_activation.md

Podman-compose: https://github.com/containers/podman-compose

CoreDNS

Official website: https://coredns.io/

NATS

Official website: https://nats.io/

Documentation: https://docs.nats.io/nats-concepts/what-is-nats
