If you’re anything like me, you use Docker for nearly everything. However, for most of my projects, I try to maximize efficiency but don’t feel the need to go full Kubernetes on the project. That’s where this simple application comes in.
I want a live feed of all my Docker containers since I will (often) forget to change the settings inside my docker-compose.yml and then try to do something like load a 2 GB file without thinking. Or, I do a little side project and forget to tidy up the container with a docker compose down.
I run some of my containers with memory limits enabled. When a container goes over its limit, the container runtime might kill the process, but not always, and I like to see what is pushing the memory that high. Since Redpanda and Deephaven are already part of my daily development stack (and should be part of yours!), I have this program running alongside them so I can fine-tune any containers I need.
To launch the latest release, you can clone the repository and run via:
git clone https://github.com/deephaven-examples/redpanda-docker-stats.git
cd redpanda-docker-stats
docker compose up -d
Or, you may download the release docker-compose.yml file if preferred:
mkdir redpanda-docker-stats
cd redpanda-docker-stats
curl https://raw.githubusercontent.com/deephaven-examples/redpanda-docker-stats/main/docker-compose.yml -o docker-compose.yml
docker compose up -d
This starts the containers needed for Redpanda and Deephaven.
To start listening to the Kafka topic docker-stats, navigate to http://localhost:10000/ide.
This container is set to run with our Application Mode. In the Panels menu you will see a table for docker-stats and a figure for memoryUsage.
To run this script, you need the confluent-kafka library installed on your local machine.
In your terminal, run:
pip3 install confluent-kafka
This can also be done in a virtual environment:
mkdir confluent-kafka; cd confluent-kafka
python3 -m venv confluent-kafka
cd confluent-kafka
source bin/activate
pip3 install confluent-kafka
See the Python venv documentation for more information.
Now, let’s look at the entire script. Run on your local machine, this will generate a Kafka stream to Redpanda that we can then read into Deephaven. We’ve provided the uninterrupted version for you to copy and paste, but we’ll explain the details below.
Full Python script
from confluent_kafka import Producer
import re
import json
import time
import subprocess

topic_name = 'docker-stats'

producer = Producer({
    'bootstrap.servers': 'localhost:9092',
})

# Convert a docker stats unit suffix into its byte multiplier.
def convert_unit(input_unit):
    if input_unit == 'GiB': return 1073741824
    if input_unit == 'MiB': return 1048576
    if input_unit == 'kiB': return 1024
    if input_unit == 'GB': return 1000000000
    if input_unit == 'MB': return 1000000
    if input_unit == 'kB': return 1000
    return 1

# Split a value such as "512MiB" into its number and unit, and return bytes.
def get_raw(value_with_unit_str):
    value_str = re.findall(r'\d*\.?\d+', value_with_unit_str)[0]
    unit_str = value_with_unit_str[len(value_str):]
    return int(float(value_str) * float(convert_unit(unit_str)))

# Poll docker stats forever and publish one JSON message per container.
while True:
    data = subprocess.check_output("docker stats --no-stream", shell=True).decode('utf8')
    container = {}
    lines = data.splitlines()
    for line in lines[1:]:
        args = line.split()
        if len(args) == 14:
            container = {
                "container": args[0],
                "name": args[1],
                "cpuPercent": re.findall(r'\d*\.?\d+', args[2])[0],
                "memoryUsage": get_raw(args[3]),
                "memoryLimit": get_raw(args[5]),
                "memoryPercent": re.findall(r'\d*\.?\d+', args[6])[0],
                "networkInput": get_raw(args[7]),
                "networkOutput": get_raw(args[9]),
                "blockInput": get_raw(args[10]),
                "blockOutput": get_raw(args[12]),
                "pids": re.findall(r'\d*\.?\d+', args[13])[0]
            }
            producer.produce(topic=topic_name, key=None, value=json.dumps(container))
            producer.flush()
    time.sleep(0.5)
We start off with all the needed imports. We are using the confluent_kafka library to write our data to Redpanda. The imports of re and json are used to format our data, while time allows us to control the speed of the streaming data, and subprocess gives us access to pull data from the terminal.
from confluent_kafka import Producer
import re
import json
import time
import subprocess
Kafka topics can have any name; fittingly, we name ours docker-stats.
topic_name = 'docker-stats'
We have Redpanda set up to listen on local port 9092, and we want to produce data to that port.
producer = Producer({
'bootstrap.servers': 'localhost:9092',
})
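As an optional sanity check that is not part of the original script, you can ask the producer for cluster metadata before entering the loop; if Redpanda isn't reachable on port 9092, this call should fail loudly instead of the script failing silently later. This is just a sketch using the confluent-kafka Producer.list_topics() call.

# Optional: verify the broker is reachable before streaming.
# list_topics() raises if no metadata arrives within the timeout.
metadata = producer.list_topics(timeout=5)
print("Connected; broker sees topics:", list(metadata.topics))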
To convert the docker stats format into just bytes, we have a helper function:
def convert_unit(input_unit):
    if input_unit == 'GiB': return 1073741824
    if input_unit == 'MiB': return 1048576
    if input_unit == 'kiB': return 1024
    if input_unit == 'GB': return 1000000000
    if input_unit == 'MB': return 1000000
    if input_unit == 'kB': return 1000
    return 1
In order to convert the units in the docker stats output, we separate the numeric value from the unit and call the conversion method above.
def get_raw(value_with_unit_str):
    value_str = re.findall(r'\d*\.?\d+', value_with_unit_str)[0]
    unit_str = value_with_unit_str[len(value_str):]
    return int(float(value_str) * float(convert_unit(unit_str)))
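For example, with a couple of made-up values:

# 512 MiB -> 512 * 1048576 bytes; 1.5 GB -> 1.5 * 1000000000 bytes
print(get_raw("512MiB"))   # 536870912
print(get_raw("1.5GB"))    # 1500000000
print(get_raw("85B"))      # 85 -- an unrecognized unit falls back to a factor of 1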
The goal of this application is to keep reporting container information until we decide to stop it. We want it to run forever, or until we quit, so the rest of the script lives inside a while loop that never ends.
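If you prefer the loop to exit cleanly on Ctrl-C instead of stopping with a traceback, one option is to wrap it in a try/except. This is a sketch of an optional variation, not part of the original script:

# Optional variation: the same endless polling loop, with a clean Ctrl-C exit.
try:
    while True:
        # ... collect stats and produce to Kafka, as shown below ...
        time.sleep(0.5)
except KeyboardInterrupt:
    # Deliver any buffered messages before exiting.
    producer.flush()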
This loop is also the place to fine-tune the application, for example by running other docker commands or changing how often we poll.
We use a subprocess in Python to run the command docker stats --no-stream in the terminal. shell=True runs the command through a shell, and check_output captures whatever it prints.
The subprocess returns bytes, so we apply decode('utf8') to turn the output back into a string.
We assign the resulting output to the value data, which holds the information on all the containers at that moment.
data = subprocess.check_output("docker stats --no-stream", shell=True).decode('utf8')
We want each container’s information to be a separate row of data, and when producing to a Kafka stream, each message needs to be its own JSON document. To achieve this, we build a dictionary for each container: we split the data into rows, then split each row into its arguments.
container = {}
lines = data.splitlines()
for line in lines[1:]:
    args = line.split()
    if len(args) == 14:
        container = {
            "container": args[0],
            "name": args[1],
            "cpuPercent": re.findall(r'\d*\.?\d+', args[2])[0],
            "memoryUsage": get_raw(args[3]),
            "memoryLimit": get_raw(args[5]),
            "memoryPercent": re.findall(r'\d*\.?\d+', args[6])[0],
            "networkInput": get_raw(args[7]),
            "networkOutput": get_raw(args[9]),
            "blockInput": get_raw(args[10]),
            "blockOutput": get_raw(args[12]),
            "pids": re.findall(r'\d*\.?\d+', args[13])[0]
        }
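To see why we check for exactly 14 arguments, consider a hypothetical docker stats row; the values below are made up purely for illustration.

# Hypothetical docker stats line, split on whitespace:
sample_line = "a1b2c3d4e5f6   redpanda   1.25%   512MiB / 4GiB   12.50%   1.2MB / 850kB   10.4MB / 0B   12"
args = sample_line.split()
# len(args) == 14: the standalone "/" separators become their own tokens at
# indexes 4, 8, and 11, which is why the dictionary above skips those positions.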
Next, we need to send this data to the Kafka stream. The Deephaven table is set up to read this exact JSON data; if we change the JSON dictionary, we also need to change our Deephaven Kafka consumer. Here, each row is sent to the producer we created above. We also wait half a second between polls, both to see how the containers change over time and to avoid producing so much data that it becomes hard to inspect visually.
producer.produce(topic=topic_name, key=None, value=json.dumps(container))
producer.flush()
time.sleep(0.5)
Finally, navigate back to the Deephaven IDE and see the docker stats stream in. With these custom-built statistics, you can watch how your containers change as you perform other operations.
The data/app.d folder contains the Deephaven scripts that load our content into the Panels menu. Edit these Python scripts to add more pre-loaded panels such as tables and plots.
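If you want to tweak the consumer side, a minimal sketch of a Deephaven Kafka consumer for this topic could look like the following. Treat it as an assumption-heavy sketch rather than the exact script shipped in data/app.d: the broker address redpanda:9092, the list-of-tuples json_spec signature, and the column types are guesses that may differ by Deephaven version, so compare it against the repository's script before relying on it.

from deephaven import dtypes as dht
from deephaven.stream.kafka import consumer as kc

# Sketch: read the docker-stats topic into an append-only table. The column
# names mirror the JSON keys produced by the Python script above; cpuPercent,
# memoryPercent, and pids are produced as strings, so they are typed as strings here.
docker_stats = kc.consume(
    {'bootstrap.servers': 'redpanda:9092'},  # assumed service name inside the compose network
    'docker-stats',
    key_spec=kc.KeyValueSpec.IGNORE,
    value_spec=kc.json_spec([
        ('container', dht.string),
        ('name', dht.string),
        ('cpuPercent', dht.string),
        ('memoryUsage', dht.long),
        ('memoryLimit', dht.long),
        ('memoryPercent', dht.string),
        ('networkInput', dht.long),
        ('networkOutput', dht.long),
        ('blockInput', dht.long),
        ('blockOutput', dht.long),
        ('pids', dht.string),
    ]),
    table_type=kc.TableType.append(),
)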
There are a lot of things to do with this data:
- Take the data and plot the cpuPercent against KafkaTime for a container (see the sketch after this list).
- Export to CSV or Parquet format to save the information long term.
- Compare or perform statistics with this live data and earlier historical data.
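As a sketch of the first two ideas, assuming a table named docker_stats like the one above, a container actually named redpanda, and a Kafka timestamp column called KafkaTime (check the real names in your table), something like this could be run in the Deephaven IDE:

from deephaven.plot.figure import Figure
from deephaven import parquet

# Plot CPU usage over time for one container. The container name "redpanda"
# and the "KafkaTime" column name are assumptions; adjust them to your setup.
redpanda_cpu = docker_stats.where("name == `redpanda`") \
    .update("CpuPct = Double.parseDouble(cpuPercent)")
cpu_figure = Figure().plot_xy(series_name="redpanda CPU %", t=redpanda_cpu, x="KafkaTime", y="CpuPct").show()

# Save the accumulated stats to Parquet for long-term storage.
parquet.write(docker_stats, "/data/docker_stats.parquet")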