Expanding the EC2 root file system

By Brian Fitzgerald

Introduction

Expanding the AWS EC2 Linux root file system size for Red Hat version 7.1 and up can be handled by a few simple AWS EC2 console or CLI steps. For Red Hat version 7.0, additional Linux command line steps are required.

Initial Conditions

We’ll start out with a small root file system size, 8G.

[ec2-user@ip-10-0-1-244 ~]$ df -H /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      8.6G  1.3G  7.4G  15% /
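Note that df -H reports sizes in SI units (powers of 1000), so an 8 GiB EBS volume shows up as 8.6G. A quick arithmetic check, in Python purely for illustration:

```python
# df -H uses SI units (GB = 10^9 bytes), while EBS volume sizes are
# specified in GiB (2^30 bytes). An 8 GiB volume therefore appears
# as roughly 8.6G in the `df -H` output above.
gib = 2**30                      # bytes in one GiB
volume_bytes = 8 * gib
volume_gb = volume_bytes / 1e9   # SI gigabytes, as df -H reports
print(round(volume_gb, 1))       # 8.6
```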

Review the instance and the volume in the AWS console:

ebs-id

If you want to use command line tools, note these facts:

Instance ID: i-0be13d6ba7d191ebe
EBS ID: vol-0aeecfd5a36a070a1

Procedure

To resize to 100G, for example, from the console, navigate to EC2 and select the EC2 instance. Then select the root block device.

inst.png

resizvol

or from the CLI, issue:

C:\>aws ec2 modify-volume --volume-id vol-0aeecfd5a36a070a1 --size 100
VOLUMEMODIFICATION      modifying       100     10      gp2     0       2019-06-17T22:42:37.000Z    300 100     gp2     vol-0aeecfd5a36a070a1

Optionally, check on the status:

C:\>aws ec2 describe-volumes-modifications --volume-id vol-0aeecfd5a36a070a1
VOLUMESMODIFICATIONS    optimizing      100     10      gp2     0       2019-06-17T22:42:37.000Z    300 100     gp2     vol-0aeecfd5a36a070a1

If your system is Amazon Linux, or Red Hat Linux version 7.1 and up, reboot the instance and you are done.

Fix 7.0 with parted

On some systems, Red Hat Linux 7.0, for example, resizing the volume is not enough. You must fix the partition table with parted and then adjust the root partition size. Start parted on your root device:

[root@ip-10-0-1-244 ~]# parted /dev/xvda
GNU Parted 3.1
Using /dev/xvda
Welcome to GNU Parted! Type 'help' to view a list of commands.

Print the partition table with “p”:

(parted) p

Parted will find problems and offer to fix them. Respond “f”:

Error: The backup GPT table is not at the end of the disk, as it should be.  This might mean that another operating
system believes the disk is smaller.  Fix, by moving the backup to the end (and removing the old backup)?
Fix/Ignore/Cancel? f
Warning: Not all of the space available to /dev/xvda appears to be used, you can fix the GPT to use all of the space
(an extra 188743680 blocks) or continue with the current setting?
Fix/Ignore? f
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvda: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: pmbr_boot

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  6445MB  6442MB  xfs

Next, quit parted and start fdisk:

(parted) q
[root@ip-10-0-1-244 ~]# fdisk /dev/xvda
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.
Welcome to fdisk (util-linux 2.23.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Print the partition table:

Command (m for help): p

Disk /dev/xvda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: C43F888F-F4D2-422F-9DE9-3755F19BB874
#         Start          End    Size  Type             Name
 1         2048         4095      1M  BIOS boot
 2         4096     12587007      6G  Microsoft basic

Delete partition 2 by entering “d”. On the next line, accept the default partition number.

Command (m for help): d
Partition number (1,2, default 2):
Partition 2 is deleted

Re-create partition 2 by entering “n”. Accept the defaults for partition number, first sector, and last sector.

Command (m for help): n
Partition number (2-128, default 2):
First sector (34-209715166, default 4096):
Last sector, +sectors or +size{K,M,G,T,P} (4096-209715166, default 209715166):
Created partition 2

Write out the partition table:

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table. The new table will be used at
the next reboot or after you run partprobe(8) or kpartx(8)
Syncing disks.
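As a sanity check on the fdisk numbers above: the disk is 209715200 sectors of 512 bytes (exactly 100 GiB), and the recreated partition 2 spans sectors 4096 through 209715166, which is nearly the whole disk. In Python, for illustration:

```python
# Verify the disk and partition sizes shown in the fdisk output.
sector_size = 512                  # bytes, from the fdisk output
total_sectors = 209715200
disk_bytes = total_sectors * sector_size
print(disk_bytes)                  # 107374182400
print(disk_bytes / 2**30)          # 100.0 GiB

# Partition 2 after recreation: first sector 4096, last sector 209715166.
part_sectors = 209715166 - 4096 + 1
part_gib = part_sectors * sector_size / 2**30
print(round(part_gib, 3))          # 99.998, just under the full 100 GiB
```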

Proceed to the next section to reboot.

Reboot

Reboot from the AWS EC2 console, the AWS CLI, or the Linux shell:

C:\>aws ec2 reboot-instances --instance-ids i-0be13d6ba7d191ebe

or

[root@ip-10-0-1-244 ~]# reboot

Log in and check:

[ec2-user@ip-10-0-1-244 ~]$ df -H /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      108G  1.4G  106G   2% /

That’s it!

Conclusion

The procedure for resizing the root file system depends on the operating system version. In Red Hat 7.1 and up, simply resize the EBS volume and reboot.

In Red Hat 7.0:

  • Resize the EBS volume.
  • Fix the partition table with parted.
  • Adjust the root partition size with fdisk.
  • Reboot.

Connecting across VPCs using Peering

By Brian Fitzgerald

Introduction

The requirement to connect applications across regions is ubiquitous. In Amazon Web Services (AWS), applications are deployed to a Virtual Private Cloud (VPC), but a VPC is specific to a single AWS region — to connect across regions, it is necessary to connect across VPCs. For speed and security, it is preferable to connect VPCs across Amazon’s internal networks, not across the public internet. We are going to establish our cross-VPC connection using peering. For this peering to succeed, planning is necessary to avoid overlapping IP address ranges. Peering across AWS accounts will also be demonstrated.

Virtual Private Clouds

An Amazon AWS Virtual Private Cloud (VPC) is an isolated network in a single region. A VPC covers all availability zones in the region and can have multiple subnets. A VPC covers a specific CIDR (Classless Inter-Domain Routing) IP address range, or “block”. In this section, we’re going to cover VPC IP address ranges, which is going to lead into the next section on VPC peering.

Networks

IP network configuration in AWS VPCs is quite flexible. The network number can reflect almost any legal IPv4 address range. IPv6 CIDR ranges are also available. A VPC CIDR block size can range from /16 netmask (65534 IP addresses) to /28 netmask (14 IP addresses).
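The host counts quoted above follow from subtracting the network and broadcast addresses from each block. A quick check with Python's standard ipaddress module, for illustration:

```python
import ipaddress

# Usable host addresses in a classic IPv4 block: total addresses
# minus the network and broadcast addresses.
def usable_hosts(cidr: str) -> int:
    return ipaddress.ip_network(cidr).num_addresses - 2

print(usable_hosts("10.0.0.0/16"))   # 65534
print(usable_hosts("10.0.0.0/28"))   # 14
```

Note that AWS itself reserves five addresses in every subnet (network address, VPC router, DNS, one reserved for future use, and broadcast), so the practically usable count per subnet is slightly lower than the classic figure.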

Amazon recommends that you specify a CIDR block from the private IPv4 address ranges as specified in RFC 1918:

  • 10.0.0.0 – 10.255.255.255 (10/8 prefix)
  • 172.16.0.0 – 172.31.255.255 (172.16/12 prefix)
  • 192.168.0.0 – 192.168.255.255 (192.168/16 prefix)

If you use an AWS tool to automatically create a VPC, you will find that the generated CIDR follows that guidance. Here are some examples of VPC CIDR blocks generated by AWS tools:

  • 10.0.0.0/16
  • 172.31.0.0/16

You may create a VPC with a CIDR block outside the RFC 1918 ranges, but most users will refrain from doing so on aesthetic grounds, or to avoid misunderstandings. You may not create an AWS VPC CIDR block beginning with 0. or 127.

The default limit is five VPCs per region. To get a higher limit, you have to open a support case and submit a limit increase request. To avoid needing a higher VPC limit, you might decide to create your VPCs as large as allowable, i.e. netmask /16, and to avoid inadvertently creating new ones. Some AWS tools only work by creating a new VPC. For example, if you use the Getting Started menu to set up an Elastic Container Service with load balancing, you will have no option but to create a new VPC. Other AWS tools offer to create a new VPC. For example, if you create a new EC2 or RDS instance in the AWS console, the menu offers creating a new VPC as an option. In conclusion, you can tightly manage your number of VPCs, or you can request a higher limit.

One way to simplify and standardize your administration is to create all your VPCs using RFC 1918 IP address ranges and netmask /16. If you do that, you can create networks among these ranges:

  • 10.0.0.0/16 – 10.255.0.0/16 (256 networks)
  • 172.16.0.0/16 – 172.31.0.0/16 (16 networks)
  • 192.168.0.0/16 (1 network)

for a total of up to 273 VPCs per region. For many applications, 273 VPCs per region is ample.
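The count of 273 can be verified by enumerating the /16 subnets of each RFC 1918 block, here with Python's ipaddress module as a quick check:

```python
import ipaddress

# Count how many /16 networks fit inside each RFC 1918 private range.
rfc1918 = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
counts = {
    block: len(list(ipaddress.ip_network(block).subnets(new_prefix=16)))
    for block in rfc1918
}
print(counts)                 # {'10.0.0.0/8': 256, '172.16.0.0/12': 16, '192.168.0.0/16': 1}
print(sum(counts.values()))   # 273
```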

In a new account, the default VPC in each region has CIDR block 172.31.0.0/16. As of this writing, a new account covers 17 regions. The upshot is that a new AWS account has 17 VPCs all with the same CIDR block. In the case of isolated VPCs, this is not a problem, but default VPCs cannot be connected by peering because the IP address ranges overlap.

In addition to creating new VPCs, you can increase the size of an existing VPC by adding additional CIDR blocks. However, you cannot mix across RFC 1918 IP address ranges. Specifically, you cannot combine “10.” and “172.” CIDR blocks in a single VPC.

In conclusion, if you consider all allowable ranges and netmasks, you can choose from over a half billion possible CIDR blocks. However, even if you restrict your choice to private networks and the largest allowable size, you can choose from among 273 different CIDR blocks.

Subnets

Subnetting VPCs is also quite flexible. You may specify a subnet mask ranging from /16 (65534 IP addresses) to /28 (14 IP addresses). By default, a VPC may have up to 200 subnets.

You cannot create a subnet larger than netmask /16. For example, if you compose a VPC from two contiguous netmask /16 networks, you cannot create a single netmask /15 subnet spanning both.
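Both points are easy to explore with Python's ipaddress module. This sketch carves a /16 VPC into /24 subnets, as done later in this article, and shows that a /15 cannot be a subnet of a /16:

```python
import ipaddress

vpc = ipaddress.ip_network("10.1.0.0/16")

# A /16 VPC can be carved into 256 possible /24 subnets, which is
# more than the default per-VPC limit of 200 subnets.
subnets = list(vpc.subnets(new_prefix=24))
print(len(subnets))    # 256
print(subnets[0])      # 10.1.0.0/24

# A /15 is larger than the /16 network, so it cannot be a subnet of it.
try:
    list(vpc.subnets(new_prefix=15))
except ValueError as err:
    print("rejected:", err)
```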

VPC summary

A new account covers multiple regions, each with a default VPC with CIDR block range 172.31.0.0/16. You can create additional VPCs. Some users may decide to stick with netmask /16 and RFC 1918 networks. The detailed explanations of VPC IP addresses in this section set the stage for the next section, which is VPC peering.

VPC Peering

A simple way to connect across VPCs is to establish VPC peering. VPC peering connects two VPCs to form a single network. Traffic is routed not across the Internet, but across a private AWS network. VPC peering is more secure and more reliable than using an internet gateway.

VPC peering requires that the VPC CIDR blocks do not overlap. Subnets are not considered. In other words, if two VPCs have overlapping CIDRs, you cannot establish VPC peering, even if no existing subnets overlap.

All default VPC CIDR blocks are 172.31.0.0/16. You cannot establish VPC peering between default VPCs because the IP address ranges overlap. You must create one or more new VPCs.
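The overlap rule can be sanity-checked ahead of time. A sketch with Python's standard ipaddress module, using the CIDR blocks from this article:

```python
import ipaddress

# Peering fails when CIDR blocks overlap; two default VPCs always do.
default_a = ipaddress.ip_network("172.31.0.0/16")
default_b = ipaddress.ip_network("172.31.0.0/16")
print(default_a.overlaps(default_b))     # True: peering would be rejected

# Distinct /16 blocks, as created below, do not overlap.
vpc_virginia = ipaddress.ip_network("10.1.0.0/16")
vpc_tokyo = ipaddress.ip_network("10.2.0.0/16")
print(vpc_virginia.overlaps(vpc_tokyo))  # False: peering is possible
```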

For this blog, we’re going to set up VPC peering across regions us-east-1, ap-northeast-1, and eu-west-2 (N. Virginia, Tokyo, and London). We’ll start by deleting the default VPC, namely 172.31.0.0/16, and creating these VPCs:

Region       Region ID       VPC ID                 CIDR block
N. Virginia  us-east-1       vpc-0ed2447f33a01d301  10.1.0.0/16
Tokyo        ap-northeast-1  vpc-07251b9829e270787  10.2.0.0/16
London       eu-west-2       vpc-0bf90b5507089c175  10.3.0.0/16

For example:

vpc

I have deleted the default VPCs for neatness: I have no need for them now. In each new VPC, create a subnet. Also, only for the sake of this blog, I’ll create an internet gateway in us-east-1 and add a route to the Internet via the gateway. Spin up an EC2, download the key pair, convert it to PuTTY keys, connect with PuTTY, and we’re in. Install nc:

sudo yum -y update
sudo yum -y install nc

In ap-northeast-1 and eu-west-2, spin up an EC2 in each. Save the SSH keys (*.pem) for later.

Region          Subnet       EC2 IP address  Public IP address
us-east-1       10.1.0.0/24  10.1.0.244      107.23.67.190
ap-northeast-1  10.2.0.0/24  10.2.0.241
eu-west-2       10.3.0.0/24  10.3.0.183

Attempt to connect from us-east-1 to ap-northeast-1 and eu-west-2, and the connections time out.

timeout

Now, we’ll set up VPC peering.

Setting up VPC peering

To set up VPC peering, send an invitation by following the Create Peering Connection dialog. For example, from us-east-1, invite ap-northeast-1.

invite

In the ap-northeast-1 region, accept the invitation.

accept

In the “Actions” menu, select “Accept request”. In the dialog, click “Yes, Accept”, and in the next dialog, click “Modify my route tables now”, or select “Route Tables” from the left navigation pane.

In the us-east-1 route table, add a route to 10.2.0.0/16 via the peered connection. In the ap-northeast-1 route table, add a route to 10.1.0.0/16 via the peered connection. You can skip ahead to the screenshots in the next subsection to get a preview of the final route table.

Retest the connection to ap-northeast-1. Success:

tcp22ok

Likewise:

  • In us-east-1, send a peering invitation to the eu-west-2 VPC.
  • In eu-west-2, accept the peering invitation.
  • In the us-east-1 route table, add a route to 10.3.0.0/16 via the peering connection.
  • In the eu-west-2 route table, add a route to 10.1.0.0/16 via the peering connection.

Using WinSCP, copy the *.pem files that you downloaded when you created the EC2s to the us-east-1 EC2. Change the file mode to 600. Now ssh succeeds:

from us-east-1:
ssh -i ap-northeast-1-key.pem ec2-user@10.2.0.241
ssh -i eu-west-2-key.pem ec2-user@10.3.0.183

Connecting Tokyo to London

Peering is not transitive, meaning that, so far, you cannot connect directly from ap-northeast-1 to eu-west-2 or vice versa. You may, however, set up peering directly between ap-northeast-1 and eu-west-2. Be sure to update the route tables. The final us-east-1 route table looks like this:

rout.us-east-1

The ap-northeast-1 route table is:

rout.ap-northeast-1

The eu-west-2 route table is:

rout.eu-west-2

Once routing is set up, you can connect between any two IP addresses in the three regions.

If you want to ssh from ap-northeast-1 to eu-west-2, copy eu-west-2-key.pem to ap-northeast-1 first.

from us-east-1:
cd .ssh/
scp -i ap-northeast-1-key.pem -p eu-west-2-key.pem  ec2-user@10.2.0.241:.ssh
ssh -i ap-northeast-1-key.pem ec2-user@10.2.0.241
from ap-northeast-1:
cd .ssh/
ssh -i eu-west-2-key.pem ec2-user@10.3.0.183

Again, note that only the us-east-1 EC2 instance is public. The ap-northeast-1 and eu-west-2 EC2 instances are private, and are accessible only via the us-east-1 EC2 instance.

This was an example of interconnecting three regions in the same account. The connections were accomplished using ssh (port 22). In the next section, we will connect across two separate AWS accounts via Oracle database link.

Connection across accounts

So far, we have set up VPC peering across regions in the same account. Now we are going to establish VPC peering across separate AWS accounts. Set up your accounts, and set up VPCs with non-overlapping CIDR blocks. For example:

Account number  Region     Region ID       VPC ID                 CIDR block
665575760545    Seoul      ap-northeast-2  vpc-04260ecd771d09cdb  10.5.0.0/16
128887077649    Singapore  ap-southeast-1  vpc-043e1448a4e98a416  10.6.0.0/16

In each VPC, set up at least two subnets in separate availability zones.

Account number  VPC ID                 Subnet       Availability Zone
665575760545    vpc-04260ecd771d09cdb  10.5.0.0/24  ap-northeast-2b (apne2-az2)
665575760545    vpc-04260ecd771d09cdb  10.5.1.0/24  ap-northeast-2a (apne2-az1)
128887077649    vpc-043e1448a4e98a416  10.6.0.0/24  ap-southeast-1c (apse1-az3)
128887077649    vpc-043e1448a4e98a416  10.6.1.0/24  ap-southeast-1a (apse1-az1)

Create databases

In the first account:

  • Set up an internet gateway and a route for the sake of this blog.
  • Enable DNS hostnames.
  • Create an Oracle Database RDS (internet facing for the sake of this blog).
  • Test from Oracle SQL Developer.

dbsuccess

In the second account, create a private Oracle Database RDS instance. Enable listener log exports. RDS summary:

Account number  DB ident       Endpoint
665575760545    seoul-ora      seoul-ora.c7oolvrrvu91.ap-northeast-2.rds.amazonaws.com
128887077649    singapore-ora  singapore-ora.cdhkgqcl8pkk.ap-southeast-1.rds.amazonaws.com

(… continued)

Account number  IP address    Port  DB name
665575760545    52.79.225.94  1521  ORCL
128887077649    10.6.1.194    1521  ORCL

Note that the seoul-ora IP address is public and the singapore-ora IP address is private.

From the first account, from seoul-ora, the create database link statement succeeds:

CREATE DATABASE LINK singapore_link 
CONNECT TO admin IDENTIFIED BY "sing..33"
USING 'singapore-ora.cdhkgqcl8pkk.ap-southeast-1.rds.amazonaws.com:1521/ORCL';

A query across the database link times out:

select host_name from v$instance@singapore_link;
ORA-12170: TNS:Connect timeout occurred

Next, we will set up VPC peering across accounts.

VPC Peering across accounts

From the first account, in Seoul, send the invitation:

invite.acct

From the second account, in Singapore, accept the invitation.

accept.acct

Click “Yes, Accept”, and in the next dialog, click “Modify my route tables now”, or select “Route Tables” from the left navigation pane. Add a route to destination 10.5.0.0/16 via peering connection pcx-09199f486b1e1a533.

From the first account, in Seoul, add a route to 10.6.0.0/16 via peering connection pcx-09199f486b1e1a533.

From the second account:

  • Select Singapore
  • Navigate to Services->RDS
  • Select singapore-ora
  • Identify the security group
  • Navigate to the security group
  • Add inbound rule:
    • TCP port: 1521
    • Source: 10.5.0.0/16

Here is a screenshot of the Singapore RDS security group inbound rules.

sing.sg

Retry the database link from Seoul:

select host_name from v$instance@singapore_link;
HOST_NAME
ip-172-21-2-91

Success. Peering across AWS accounts works.

Note that the query returns the Singapore RDS hostname, ip-172-21-2-91. The host is an EC2 instance that is not accessible from your AWS account.

In the second account, in Singapore, open CloudWatch, then Logs, then /aws/rds/instance/singapore-ora/listener, and observe the establish record.

12-JUN-2019 21:44:13 * (CONNECT_DATA=(SERVICE_NAME=ORCL)(CID=
 (PROGRAM=oracle)(HOST=ip-172-23-0-229)(USER=Brian Fitzgerald))) 
 * (ADDRESS=(PROTOCOL=tcp)(HOST=10.5.1.93)(PORT=64171)) 
 * establish * ORCL * 0

Notice that connection source IP address 10.5.1.93 is in Seoul VPC CIDR block 10.5.0.0/16.

Again, note that only the ap-northeast-2 (Seoul) RDS instance is public. The ap-southeast-1 (Singapore) RDS instance is private, and is accessible only via the ap-northeast-2 RDS instance.

Our result: a successful TCP connection across AWS accounts. In this case, we set up a database link across Oracle databases. The connection could as well have been a Microsoft SQL Server linked server, or a database client, such as ODBC or JDBC. VPC peering is not limited to database technology. A wide range of applications, tools, and services can be deployed across AWS VPCs, regions, or accounts by leveraging VPC peering. This blog has covered TCP over IPv4, but other transports, such as UDP, can be considered. IPv6 is supported in AWS, as well as IPv4.

Programming

All actions that were demonstrated from the AWS console can be accomplished programmatically.

Command Line Interfaces

The AWS command line interface (CLI) can be used to issue commands that perform the same actions that were demonstrated from the AWS console. For example, to drop and re-establish the N. Virginia to Tokyo peering:

C:\>aws ec2 delete-vpc-peering-connection --vpc-peering-connection-id pcx-0006343192557953b
True
C:\>aws ec2 create-vpc-peering-connection --peer-vpc-id vpc-07251b9829e270787 --vpc-id vpc-0ed2447f33a01d301 --peer-region ap-northeast-1
VPCPEERINGCONNECTION    2019-06-21T01:54:00.000Z        pcx-0d5aa3deb15773138
ACCEPTERVPCINFO 665575760545    ap-northeast-1  vpc-07251b9829e270787
REQUESTERVPCINFO        10.1.0.0/16     665575760545    us-east-1       vpc-0ed2447f33a01d301
CIDRBLOCKSET    10.1.0.0/16
PEERINGOPTIONS  False   False   False
STATUS  initiating-request      Initiating Request to 665575760545

C:\>aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-0d5aa3deb15773138 --region ap-northeast-1
VPCPEERINGCONNECTION    pcx-0d5aa3deb15773138
ACCEPTERVPCINFO 10.2.0.0/16     665575760545    ap-northeast-1  vpc-07251b9829e270787
CIDRBLOCKSET    10.2.0.0/16
PEERINGOPTIONS  False   False   False
REQUESTERVPCINFO        10.1.0.0/16     665575760545    us-east-1       vpc-0ed2447f33a01d301
CIDRBLOCKSET    10.1.0.0/16
PEERINGOPTIONS  False   False   False
STATUS  provisioning    Provisioning

All other actions demonstrated from the AWS console in this blog article can be run from the CLI. Examples:

aws ec2 create-vpc
aws ec2 create-route
aws ec2 create-subnet
aws ec2 authorize-security-group-ingress

to mention only a few.

Programming APIs

The commands can be scripted in several languages, including JavaScript, PowerShell, and Python. The Python library is boto3. EC2 client methods include these examples:

create_vpc()
create_vpc_peering_connection()
accept_vpc_peering_connection()
create_route()
create_subnet()


Technical summary

The key details needed to set up VPC peering are:

  • Non-overlapping IP address ranges across VPCs.
  • Sending and accepting the peering invitation.
  • Adding routes to the route tables.
  • Security group inbound rules that cover the remote IP address ranges.

In this blog article, we explained, demonstrated, or mentioned:

  • AWS VPC peering across three regions
  • Deleting the default VPC
  • Creating new VPCs
  • Use of RFC 1918 private IP address ranges
  • Use of /16 netmask for VPCs
  • Sending and accepting VPC peering invitations
  • Adding routes to the route table
  • Creation of subnets
  • Use of /24 subnet mask
  • Downloading ssh key pairs (pem)
  • Using PuTTYgen to convert ssh keys to PuTTY keys (ppk)
  • Connecting to EC2 via PuTTY or WinSCP
  • VPC peering across AWS accounts
  • Installing nc
  • Using nc in EC2 to test TCP connectivity
  • ssh across EC2 using ssh key pairs (pem)
  • Limiting RDS access via security groups
  • Creating a database link
  • Reviewing the RDS listener log

Conclusion

AWS VPC peering is a great way to connect applications across VPCs, regions, or accounts. VPC peering is faster, more reliable, and more secure than using the Internet. VPC peering can be implemented smoothly by avoiding overlapping IP address ranges. This blog covered ssh and database connections, but VPC peering applies to a wide range of networked application technology.

Docker on Windows 10 Home

By Brian Fitzgerald

Introduction

This article describes how you can configure your Windows 10 Home PC to run Docker. The approach is:

  • Upgrade to Windows 10 Pro.
  • Configure your PC for virtualization.
  • Make changes to your docker create and run steps.

Download Docker

  • Go to docker.com
  • Create your Docker ID
  • Log in
  • Download “Docker for Windows Installer.exe”
  • Run the executable and complete the software installation.

Buy the upgrade to Windows 10 Pro

  • Go to microsoft.com.
  • Create an account for yourself.
  • Add a valid payment method (credit card).
  • Press the Windows key.
  • In the search bar, enter “activation”.
  • In the search results, select Activation system settings.
  • Press the Buy button. A browser will take you to the Microsoft Store.
  • Buy the upgrade. The cost is $99.00.

Upgrade

  • Press Windows key again.
  • In the search bar, enter “activation”.
  • In the search results, select Activation system settings.
  • Press the Install button.

Enable Hyper-V

  • Press the Windows key.
  • In the search bar, enter “features”.
  • In the search results select Turn Windows features on or off.
  • Check Hyper-V


Enable Virtualization

  • Power cycle the box
  • During boot, press the ESC key to get to the Startup Menu


  • Press F10 for BIOS Setup
  • Navigate to the System Configuration Menu


  • Press the Down Arrow to select Virtualization Technology.
  • Press F5 or F6 to change the value to Enabled.
  • Press F10 to Save and Exit.

Start Docker Service

If prompted, start the Docker service.

Notes on creating a docker image

Make sure that the application inside your Docker image binds to host 0.0.0.0, not 127.0.0.1. For example, if you are using Flask, your Dockerfile contains:

ENTRYPOINT [ "flask" ]
CMD [ "run", "--host=0.0.0.0" ]
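The reason for binding to 0.0.0.0 is that Docker’s port mapping forwards traffic to the container’s network interface; a process listening only on 127.0.0.1 inside the container never receives it. The distinction can be illustrated with a plain socket sketch in Python, no Docker required:

```python
import socket

# A server bound to 0.0.0.0 listens on every interface, so it is
# reachable via the loopback address as well as external addresses.
# Binding to 127.0.0.1 instead would make it loopback-only, which is
# why a containerized app bound to 127.0.0.1 is unreachable through
# Docker's port mapping.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 0))          # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))  # succeeds: 0.0.0.0 covers loopback
conn, _ = server.accept()
print("connected on port", port)

for s in (client, conn, server):
    s.close()
```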

Example docker image creation:

docker build -t flask-tutorial .

Notes on running a docker image

If you are testing and want to be able to interrupt docker, use the -i and -t flags:

C:\Users\Brian Fitzgerald>docker run -p 5000:5000 -i -t flask-tutorial
* Serving Flask app "flaskr"
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
^C

Connect to the application at

http://127.0.0.1:5000/


Conclusion

With modifications, you can run Docker on your home PC.