JTK – Notes and Ramblings: Things we have found …

1/8/2026

Rebuilding Pi for Flights and rtl_433

Filed under: General,Home Automation,RTL-SDR — taing @ 12:42 am

The Raspberry Pi 3 running the flight tracker and rtl_433 for weather station and driveway alarm data was in need of an update. Past articles include Update to LaCrosse weather, Adding LaCrosse weather to garage-pi, Mighty-Mule and LaCrosse update, Mighty Mule driveway alarm, Updates and a new rtl-sdr,
More flight tracking adding Radarbox, Updates to flight tracking.

Updating the Raspberry Pi OS Bookworm 32-bit installation to Trixie proved more complex than expected. With Trixie, 64-bit is the expected choice for quite a few packages; even Openhab 5.x has removed 32-bit support, so not everything we wanted to install had a 32-bit Trixie option. After getting pretty well tangled, the best choice was to start over with a fresh Raspberry Pi OS 32-bit Bookworm Lite SD card. Using the Raspberry Pi Imager allows you to predefine hostname, localization, username/password, WiFi if needed, and enable ssh.

rtl_433 /weather station data / driveway alarm

First we will update everything, install the rtl-sdr and rtl_433 packages, and set things up to listen at 433.92 MHz for the LaCrosse weather station and the Mighty Mule driveway alarm:

sudo apt update
sudo apt upgrade

sudo apt install rtl-sdr rtl-433

At this point we need to update the udev rules to allow for the proper discovery of the rtl-sdr devices. The rtl-sdr file can be found at https://github.com/osmocom/rtl-sdr/raw/master/rtl-sdr.rules

curl https://raw.githubusercontent.com/osmocom/rtl-sdr/master/rtl-sdr.rules >rtl-sdr.rules
sudo cp rtl-sdr.rules /etc/udev/rules.d/rtl-sdr.rules
sudo udevadm control --reload-rules
sudo udevadm trigger

Next, the serial numbers of the rtl-sdr devices need to be unique. If not set previously, this is done by plugging the devices in one at a time and using rtl_eeprom.

#only plug in the rtl-sdr for 433 weather station/Mighty Mule
sudo rtl_eeprom -d 0 -s 433 # set the serial number for the rtl-sdr device
#only plug in the rtl-sdr for 1090 adsb
sudo rtl_eeprom -d 0 -s 1090
#only plug in the rtl-sdr for 978 UAT reception
sudo rtl_eeprom -d 0 -s 978

There is a sample rtl_433.conf file with the serial number set to 433, MQTT output in JSON format, and the Mighty Mule decoder included. Grab a copy into your home directory, rename it to rtl_433.conf, and give things a test.
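The relevant directives look roughly like this (a sketch, not the actual file; the broker address is a placeholder and the Mighty Mule entry is whatever decoder line was used in the earlier posts):

device     :433
frequency  433.92M
output     mqtt://your.mqtt.broker:1883,events=rtl_433[/model][/id]
# plus the decoder entry for the Mighty Mule driveway alarm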

rtl_433
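One quick way to confirm the broker is receiving the messages is to subscribe from another shell (this assumes the mosquitto-clients package is installed and the rtl_433 topic prefix shown above):

mosquitto_sub -h your.mqtt.broker -t 'rtl_433/#' -v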

If things appear to work and your MQTT broker is receiving messages, it's time to make this a service. First let's create the service definition file at /etc/systemd/system/rtl_433-mqtt.service:

[Unit]
Description=rtl_433 to MQTT publisher
After=network.target
[Service]
ExecStart=/usr/bin/rtl_433
Restart=always
RestartSec=5
[Install]
WantedBy=multi-user.target

Then we need to copy the config file into place, enable and start the service, and check its status. rtl_433 should pick up /etc/rtl_433/rtl_433.conf automatically, so the service file does not need a -c option.

sudo mkdir /etc/rtl_433
sudo cp rtl_433.conf /etc/rtl_433/
sudo systemctl enable rtl_433-mqtt.service
sudo systemctl start rtl_433-mqtt.service
sudo systemctl status rtl_433-mqtt.service
sudo journalctl -u rtl_433-mqtt.service # for additional logging display

Flight Tracking and Feeding

readsb

The setup for ADS-B and UAT listening and aggregation has quite a few steps. First up is readsb as an alternative to dump1090-fa or dump1090-mutability. The basics are:

sudo bash -c "$(wget -O - https://github.com/wiedehopf/adsb-scripts/raw/master/readsb-install.sh)"
sudo readsb-set-location 50.12344 10.23429 # set latitude and longitude
sudo nano /etc/default/readsb # edit config if needed
sudo systemctl restart readsb # restart the service after config edits

The config file will definitely need --device 1090 added to the RECEIVER_OPTIONS line. If you know the ppm deviation for your rtl-sdr you can add it there, too.

RECEIVER_OPTIONS="--device 1090 --device-type rtlsdr --gain auto --ppm 0"

tar1090

Next up is tar1090 to let us see what’s up there on a map. Once installed view your map at http://your_ip/tar1090.

sudo bash -c "$(wget -nv -O - https://github.com/wiedehopf/tar1090/raw/master/install.sh)"
sudo nano /etc/default/tar1090 # edit config file if needed
sudo systemctl restart tar1090 # restart the service after config edits
sudo apt install -y lighttpd # if connection refused when trying http://host_ip/tar1090

graphs1090

graphs1090 is a nice utility to add for displaying stats and graphs. The installation follows a similar pattern to the previous software. Once installed, view your graphs at http://your_ip/graphs1090.

sudo bash -c "$(curl -L -o - https://github.com/wiedehopf/graphs1090/raw/master/install.sh)"
sudo nano /etc/default/graphs1090 # edit the config as needed
sudo cp /usr/share/graphs1090/default-config /etc/default/graphs1090 # restore original settings

FlightAware

Now is the time to start adding the aggregators to share the ADS-B data with. FlightAware has simple instructions online. You'll add their repository and install piaware using apt. Once it's up and running, visit their website to claim the feed.

wget https://www.flightaware.com/adsb/piaware/files/packages/pool/piaware/f/flightaware-apt-repository/flightaware-apt-repository_1.2_all.deb
sudo dpkg -i flightaware-apt-repository_1.2_all.deb
sudo apt update
sudo apt install piaware

ADSBExchange

ADSBExchange is also pretty simple.

curl -L -o /tmp/axfeed.sh https://www.adsbexchange.com/feed.sh
sudo bash /tmp/axfeed.sh

Once it is installed you can check your feed at https://www.adsbexchange.com/myip/. They also have an optional stats package you can install.

curl -L -o /tmp/axstats.sh https://www.adsbexchange.com/stats.sh 
sudo bash /tmp/axstats.sh

FlightRadar24

FlightRadar24 has a simple one-liner installation.

wget -qO- https://fr24.com/install.sh | sudo bash -s

Since we are feeding multiple sites, we should follow their advice and disable MLAT in fr24feed.ini. FlightRadar24 feed stats can be viewed locally at http://your_ip:8754. You can also visit https://www.flightradar24.com/account/data-sharing.
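The relevant fr24feed.ini lines end up looking something like this (a sketch; leave the rest of the file as generated during signup):

mlat="no"
mlat-without-gps="no"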

Airnav / Radarbox

Airnav / Radarbox also has simple-to-follow instructions.

sudo bash -c "$(wget -O - http://apt.rb24.com/inst_rbfeeder.sh)"
sudo nano /etc/rbfeeder.ini # refer to file below
[client]
network_mode=true
log_file=/var/log/rbfeeder.log
[network]
mode=beast
external_port=30005
external_host=127.0.0.1

Once set up and connected for a few minutes, use sudo rbfeeder --showkey to reveal your key and then visit their site to claim your feed.

AirplanesLive

AirplanesLive has instructions on GitHub. Once installed you can check your status at https://airplanes.live/myfeed.

curl -L -o /tmp/feed.sh https://raw.githubusercontent.com/airplanes-live/feed/main/install.sh
sudo bash /tmp/feed.sh 
sudo pico /etc/default/airplanes # edit the configuration
sudo systemctl status airplanes-feed # display feeder service status
sudo systemctl status airplanes-mlat # display mlat service status

Planefinder

Planefinder also has instructions online for a variety of clients. This is the summary for the 32-bit Raspberry Pi 3. Find the appropriate Debian package on their page and copy the link, then retrieve the file with wget. The commands will look something like those below. Follow the link provided once the install completes: http://your_ip:30053.

wget http://client.planefinder.net/pfclient_x.x.x_armhf.deb
sudo dpkg -i pfclient_x.x.x_armhf.deb # update filename to reflect actual download

UAT / 978

We skipped adding the UAT 978 sections along the way. Now we need dump978-fa, which is best installed using the FlightAware instructions. Assuming you already have the FlightAware repository installed from the previous steps:

sudo apt install dump978-fa

You will want to update /etc/default/dump978-fa to include driver=rtlsdr,serial=978 on the RECEIVER_OPTIONS line. It may be necessary to update your /etc/piaware.conf file to include the lines below. Refer to the PiAware Advanced Configuration page for more info. Once installed you can view the UAT map at http://your_ip/skyaware978.

uat-receiver-host 127.0.0.1
uat-receiver-type sdr
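For reference, the edited dump978-fa line might end up something like this (a sketch; keep whatever other options are already present in your file):

RECEIVER_OPTIONS="--sdr driver=rtlsdr,serial=978"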

ADSBExchange has a simple config script for the UAT addition. Once added you can view your UAT map at http://your_ip/ax978.

sudo bash -c "$(wget -q -O - https://raw.githubusercontent.com/adsbxchange/adsbexchange-978/master/install.sh)"
sudo adsbexchange-978-set-location your_lat your_long

The FlightRadar24 add-your-data page has a UAT tab with instructions. Provided you are already sharing ADS-B 1090 data with them, it is as simple as sudo fr24feed-signup-uat.

RadarBox will need a couple of lines added to /etc/rbfeeder.ini. These are discussed in a couple of forum posts: MLAT and dump978 and RBFeeder on RPi config network connection to FlightFeeder 978 UAT.

[dump978]
dump978_enabled=true
dump978_port=30979

You may need to update /etc/collectd/collectd.conf so that graphs1090 will use and display the UAT data. In the module graphs1090 section add URL_978 "http://localhost/skyaware978". More information can be found in /etc/default/graphs1090.

Once it is all done these local links should work:

http://your_ip/tar1090
http://your_ip/dump1090
http://your_ip/graphs1090/
http://your_ip:8754/
http://your_ip:30053/map.html
http://your_ip/ax978
http://your_ip/skyaware978

These links to the aggregation sites should also work:

https://www.flightaware.com/adsb/stats/user/
https://www.adsbexchange.com/myip/
https://map.adsbexchange.com/mlat-map/
https://www.flightradar24.com/account/data-sharing
https://planefinder.net/account/receivers
https://www.airnavradar.com/stations/
https://airplanes.live/myfeed

12/26/2025

Yet another Pi Weather Radio update

Filed under: General — taing @ 6:01 pm

After an update to Raspberry Pi OS Trixie, and a great deal of hassle with sox not being willing to handle mp3 format in spite of libsox-fmt-mp3 being installed, the Weather Radio scripts have been updated again.

The start.sh script is unaltered from before:

cd weatherRadio
rm -f one two three; mkfifo one two three

./to-same.sh <one &
./to-lame.sh <two &
./to-udp.sh <three &
rtl_fm -f 162550000 -s 22050 -p 14 | tee one two three |multimon-ng -t raw -a EAS /dev/stdin

The to-same.sh script is also unaltered:

multimon-ng -t raw -a EAS /dev/stdin | python3 ~/tg-dsame/dsame.py --mqtt your-mqtt-broker --json output.json --call /home/pi/weatherRadio/playRadio.sh --command boo

The to-lame.sh script was updated to once again use lame instead of sox for the mp3 conversion. While ezstream can handle other formats, mp3 works well from lame.

lame --bitwidth 16 --signed -s 22050 --lowpass 3500 --abr 64 --scale 8 -r -m m - - |ezstream -c ezstream.conf

The to-udp.sh script was greatly simplified and sox was removed from it completely:

socat -u - udp4-sendto:localhost:5555

The playRadio.sh script was updated to handle playing a raw stream instead of mp3:

#!/bin/bash
nc -lu 5555 | play -t raw -r 22050 -b 16 -e signed -c 1 -v 7 - &
sleep 120
killall "play"

For completeness, the ezstream.conf file:

<?xml version="1.0" encoding="UTF-8"?>
<ezstream>

  <servers>
    <server>
      <protocol>http</protocol>
      <hostname>somemachine</hostname>
      <password>hackmeplease</password>
    </server>
  </servers>

  <streams>
    <stream>
      <mountpoint>/weather</mountpoint>
      <format>MP3</format>
      <stream_name>The Weather Radio</stream_name>
      <stream_url>yoururl</stream_url>
      <stream_genre>weather</stream_genre>
      <stream_description>NOAA Radio KHB59 (162.55MHz)</stream_description>
      <stream_quality>2.0</stream_quality>
      <stream_bitrate>32</stream_bitrate>
      <stream_samplerate>48000</stream_samplerate>
      <stream_channels>1</stream_channels>
    </stream>
  </streams>

  <intakes>
    <intake>
      <type>stdin</type>
      <stream_once>yes</stream_once>
    </intake>
  </intakes>
</ezstream>

Weatherflow Tempest MQTT

Filed under: General — taing @ 6:05 am

The Weatherflow Tempest unit has an API based on local broadcast UDP packets. There is also a websocket and a REST API.

This article will focus on using the UDP API with MQTT for use in Openhab. There are a number of resources online. There is a binding that started in 2020 but the discussion stops in 2022. There is a different binding from Bill Welliver that seems more complete.

Additionally there are resources for Home Assistant. https://github.com/briis/hass-weatherflow2mqtt has been deprecated and the repository set to read-only. It has been replaced with https://github.com/briis/weatherflow_forecast. The newer integration uses a combination of the Weatherflow APIs for a more complete setup and also works with Home Assistant discovery. There is also https://github.com/gualandd/WeatherFlow-Tempest-UDP using Node-RED.

All of the options mentioned above seem a bit too heavy for a simple sensor-to-MQTT path. Here is a bit of Python 3 code to listen on the UDP port and publish to an MQTT broker.

import socket
import paho.mqtt.publish as mqtt
import json

# Define the UDP port to listen on
UDP_IP = "0.0.0.0"  # Listen on all available interfaces
UDP_PORT = 50222
mqtt_host = "your.mqtt.broker"

# Create a UDP socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# Bind the socket to the specified address and port
sock.bind((UDP_IP, UDP_PORT))

print(f"Listening on {UDP_IP}:{UDP_PORT}...")

# Listen for incoming packets and publish the interesting ones to MQTT
while True:
    data, addr = sock.recvfrom(1024)  # Buffer size is 1024 bytes
    js = json.loads(data)
#    print(json.dumps(js, indent=4))
    match js['type']:
        case "obs_st":
            mqtt.single('Tempest/obs',json.dumps(js['obs']), hostname=mqtt_host)
            print(f"OBS: {js['obs']}")
        case "rapid_wind":
            mqtt.single('Tempest/wind',json.dumps(js['ob']), hostname=mqtt_host)
            print(f"Wind: {js['ob']}")

This code could easily be expanded to handle more than the obs_st and rapid_wind messages. For now it's enough. Be sure to update mqtt_host to point to your broker.
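For example, handling the lightning strike event would just be one more case in the match statement (evt_strike and its evt payload come from the Weatherflow UDP API; the topic name is only a suggestion):

        case "evt_strike":
            mqtt.single('Tempest/strike', json.dumps(js['evt']), hostname=mqtt_host)
            print(f"Strike: {js['evt']}")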

For sending the MQTT messages, the paho-mqtt library is used. The publish.single function can take additional parameters beyond those seen in the script (see the short example after the list):

  • topic: (Required, string) The topic to publish the message to.
  • payload: (Required, string, bytes, or None) The actual message content.
  • qos: (Optional, int, default 0) The Quality of Service level (0, 1, or 2).
  • retain: (Optional, bool, default False) If set to True, the broker will retain the message as the last known good value for the topic.
  • hostname: (Optional, string, default 'localhost') The IP address or domain name of the MQTT broker.
  • port: (Optional, int, default 1883) The network port of the broker.
  • client_id: (Optional, string) A unique identifier for the client.
  • auth: (Optional, dict) A dictionary containing username and password for authentication (e.g., {'username': "myuser", 'password': "mypass"}).
  • tls: (Optional, dict or None) A dictionary for configuring TLS/SSL secure connections. 
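For instance, publishing to a broker that requires credentials and retaining the last value would look roughly like this (a sketch; the username and password are placeholders):

    mqtt.single('Tempest/obs', json.dumps(js['obs']), hostname=mqtt_host,
                auth={'username': 'myuser', 'password': 'mypass'}, retain=True)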

A great intro to using MQTT in Python shows installing paho-mqtt using pip3. For a Raspbian Buster system, apt install python3-paho-mqtt is simpler. Alternatively a virtual environment could be created.

In Openhab, a Generic MQTT Thing pointing to your broker is ideal. For most of the sensor data your channels will have the MQTT state topic set to Tempest/obs. An Incoming Value Transformation using JSONPath will parse the individual readings from the JSON array: JSONPATH:$[0][7], for example, will extract Air Temperature. Refer to the Weatherflow UDP API for the array index of each value. The channels and linked items can be created with the appropriate units for proper conversions.
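In textual configuration the Thing might look roughly like this (a sketch; the broker Thing id, the channel ids, and the $[1] wind speed index are assumptions to adapt to your own setup):

Thing mqtt:topic:tempest "Tempest" (mqtt:broker:myBroker) {
    Channels:
        Type number : airTemperature "Air Temperature" [ stateTopic="Tempest/obs", transformationPattern="JSONPATH:$[0][7]" ]
        Type number : windSpeed "Wind Speed" [ stateTopic="Tempest/wind", transformationPattern="JSONPATH:$[1]" ]
}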

For Wind Direction a Scale Profile can be created on the Item. This allows for displaying wind direction with compass directions in addition to degrees. For example, E (82 °). The degrees unit is set in the Channel. Create direction.scale in /etc/openhab/transform/ with this content:

[0..11]=N
[12..33]=NNE
[34..56]=NE
[57..79]=ENE
[80..102]=E
[103..125]=ESE
[126..147]=SE
[148..170]=SSE
[171..191]=S
[192..214]=SSW
[215..237]=SW
[238..260]=WSW
[261..283]=W
[284..306]=WNW
[307..329]=NW
[330..352]=NNW
[353..360]=N
NaN=Non-numeric state
format=%label% (%value%)

12/24/2025

TrueNAS and Dropbox, again

Filed under: General — taing @ 1:00 am

After updating TrueNAS, let's try running the Dropbox client in a Docker container rather than installing it directly. Hopefully the container will survive upgrades better.

The janeczku/dropbox image is one of the most downloaded Dropbox client images for Docker. Unfortunately it hasn't seen an update in nine years and uses the 64-bit version 11.4.21 of dropboxd from 2016.

You must set the environment variable DBOX_SKIP_UPDATE to prevent it loading the newer version of Dropbox; the current version will not install in the container due to glibc version issues.

There are several things to set up in the TrueNAS app GUI to get things going. As mentioned in the container overview, you will want to set the two environment variables for user id and group id: DBOX_UID and DBOX_GID. You also need to configure storage; the simplest is a Host Path mapped to each of /dbox/Dropbox and /dbox/.dropbox.
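Outside of the GUI, the equivalent docker run would be roughly this (a sketch; the host paths and ids are examples):

docker run -d --name dropbox \
  -e DBOX_UID=1000 -e DBOX_GID=1000 -e DBOX_SKIP_UPDATE=true \
  -v /mnt/tank1/dropbox/Dropbox:/dbox/Dropbox \
  -v /mnt/tank1/dropbox/.dropbox:/dbox/.dropbox \
  janeczku/dropbox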

Once it starts, the log (View Logs) will report the URL to follow to authorize the link. This is typically in the form https://www.dropbox.com/cli_link_nonce?nonce=5b59dd0928bc08b75a736935bd7d37cd. Pasting the link into a browser and confirming the login is required.

There are a few places to get some status info. From the TrueNAS shell, docker commands will help you see a bit: docker ps -a will list the current containers, docker logs -f containername will tail the container logs, and docker top containername will list the current processes for the container.

From the container shell, the dropbox command can display additional information. dropbox status will give a brief status. dropbox filestatus will show the current sync status of the files in the current directory. Additional help on commands can be found using dropbox help or online.

Note: The container shell is accessed from the TrueNAS Apps GUI; select the container and look for Workloads and Containers. There should be three icons for each container: Shell, Volume Mounts and View Logs. In TrueNAS 25.04.2.6 there are bugs that prevent the View Logs function from working well in many cases.

Even after all of this the janeczku/dropbox container didn't seem to actually be syncing anything. It is unclear what all of the issues were, but the extreme age of the daemon is surely part of the problem.

There are several newer containers available. One is tiagovdaa/dropbox-docker, which is definitely descended from janeczku/dropbox. Future testing will show if it is more suitable. It is built on version 223.4.4909; the current version from dropbox.com as of December 2025 is 238.4.6075. At the very least, 238.4.6075 does auto-install correctly inside the tiagovdaa/dropbox-docker container. It should be noted the language for the container is Portuguese. There appears to be only one volume mount point for this container, dbox, which implies that .dropbox and Dropbox are both children of that mount point.

So for now, it’s back to the previous method of installing dropbox headless and using the Gist to run it as a service as discussed in an earlier post.

A few notes from the earlier post: the first command is su dropbox. This is critical. The .dropbox-dist/dropboxd command will create the Dropbox folder in the home directory of the current user. You will also get the prompt to copy/paste the displayed URL to authenticate to Dropbox as discussed above. When running as a service, the /etc/db/dropbox-cli commands will fail UNLESS you are running as the user you defined in dropbox-start.target. The python script checks for the dropbox pid file in ~/.dropbox.

For services, use systemctl status servicename or journalctl -u servicename for more info. Be aware that for the dropbox-start service created above, the dropbox-start.service process completes and exits, leaving behind a child running dropbox.

7/4/2025

Some Lua Benchmarks

Filed under: General — taing @ 12:32 am

Sometimes there is a faster way. First let's look at a simple append of one table to another:

For In loop (slowest) 100%
local t1 = {string.byte(string.rep('A',256),1,256)}
local t2
for x = 1, 1000 do
    t2 = {1,2,3,4,5,6,7,8,9,0}
    for _, v in pairs(t1) do
      table.insert(t2, v)
    end
end
Regular For Loop (slightly faster) 84.3%
local t1 = {string.byte(string.rep('A',256),1,256)}
local t2
for x = 1, 1000 do
    t2 = {1,2,3,4,5,6,7,8,9,0}
    for v = 1, #t1 do
      table.insert(t2, t1[v])
    end
end
log(#t1..' : '..#t2)
table.move (much faster) 24.5%
local t1 = {string.byte(string.rep('A',256),1,256)}
local t2
for x = 1, 1000 do
    t2 = {1,2,3,4,5,6,7,8,9,0}
    table.move(t1,1,#t1,#t2+1,t2)
end
log(#t1..' : '..#t2)
Table.unpack (fastest) 12.5%
local t1 = {string.byte(string.rep('A',256),1,256)}
local t2
for x = 1, 1000 do
    t2 = {1,2,3,4,5,6,7,8,9,0, table.unpack(t1)}
end
log(#t1..' : '..#t2)

Tests are based on ETC Mosaic v2.14.4 soft triggers running scripts:

Results:
for in: 1,213,051 microseconds
for:    1,022,291 microseconds
move:     296,931 microseconds
unpack:   151,784 microseconds

The next test is extracting a zero-terminated string from a buffer:

buffer = {
1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,
1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,
1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,
1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,
1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,0,
1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,
1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,
1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,
1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9,1,2,3,4,5,6,7,8,9,9}
String Concatenate (slowest) 100%
local str = ''
local i
for x = 1, 1000 do
    i = 1; 
    str = ''
    local len = #buffer
    while buffer[i] ~= 0 and i<=len do
        str = str .. string.char(buffer[i])
        i = i + 1
    end
end
Scan and copy (faster) 10%
local str = ''
local i
for x = 1, 1000 do
    i = 1; 
    local len = #buffer
    while buffer[i] ~= 0 and i<=len do
        i = i + 1
    end
    str = string.char(table.unpack(buffer,1,i))
end
table.unpack / string.unpack (fastest) 6.6%
local str
local i
for x = 1, 1000 do
    local sbuffer = string.char(table.unpack(buffer)) -- second copy
    str, i = string.unpack('z' , sbuffer)  -- final copy
end
Results:
string concatenate:          3,114,018 microseconds
scan and copy: 308,604 microseconds
table.unpack / string.unpack: 207,526 microseconds

The final test is float32 extraction

Native Lua code (slowest) 100%
buffer = {0x43, 0x68, 0x61, 0x6E}
local float 
for x = 1, 10000 do
    local i = 1   -- index of the first of the four float bytes in buffer
    local sign = 1
    local mantissa = (buffer[i+1] & 127)<<16 | buffer[i+2]<<8 | buffer[i+3]

    if buffer[i] > 127 then
        sign = -1
    end

    local exponent = (buffer[i] & 127)<<1 | (buffer[i+1]>>7)
    if exponent == 0 then
        float = 0.0
    else
        mantissa = (math.ldexp(mantissa, -23) + 1) * sign       -- ldexp is deprecated in Lua 5.3 -> replace ldexp(x, exp) with (x * 2.0^exp) 
        float = math.ldexp(mantissa, exponent - 127)
    end
end

Float code based on info from https://www.h-schmidt.net/FloatConverter/IEEE754.html and https://github.com/iryont/lua-struct

string.unpack() (fastest) 91%
buffer = {0x43, 0x68, 0x61, 0x6E}
local float 
for x = 1, 10000 do
    local b = string.char(table.unpack(buffer,1,4))
    float = string.unpack('>f', b)
end
Results:
Native Lua Code: 120,978 microseconds
string.unpack: 110,149 microseconds

9/13/2024

PiKVM Notes

Filed under: General — taing @ 3:43 pm

We started the journey with the Geekworm KVM-A3 kit for Raspberry Pi 4 from Amazon.com. Add a Raspberry Pi 4, power supply and SD card to get started.

Both the PiKVM instructions and the Geekworm instructions are handy to have. You will need to download the PiKVM image (PiKVM V3 HAT BOX Image, OLED/FAN preactivated). We created the SD card using RPi Imager.

For us the RTC and the OLED display were enabled out of the box. We did need to purchase a CR1220 3v Lithium button battery for the RTC.

You will want to get the clock set and syncing correctly. The steps below must be done as root with the filesystem set to rw; remember to set the filesystem back to ro when done. A combined sketch follows the list.

  • set the timezone
    • timedatectl set-timezone America/New_York
  • temporarily disable systemd-timesyncd
    • systemctl stop systemd-timesyncd
  • set the correct time
    • timedatectl set-time 'YYYY-MM-DD HH:MM:SS'
  • update /etc/systemd/timesyncd.conf
    • nano /etc/systemd/timesyncd.conf
      • NTP="10.101.1.101"
  • restart systemd-timesyncd
    • systemctl start systemd-timesyncd
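Put together, the sequence looks something like this (a sketch; the timestamp is an example, the NTP address is the one from above, and rw/ro are the PiKVM helpers for remounting the filesystem):

rw
timedatectl set-timezone America/New_York
systemctl stop systemd-timesyncd
timedatectl set-time '2024-09-13 15:40:00'
nano /etc/systemd/timesyncd.conf   # set NTP=10.101.1.101
systemctl start systemd-timesyncd
ro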

Setting the static address is described in the PiKVM FAQ. It involves editing /etc/systemd/network/eth0.network.

The device we wanted to control had DisplayPort out. We added a DisplayPort to HDMI & DisplayPort adapter to allow the on-site KVM to share the screen with the PiKVM. It should be noted that with the adapter we chose, the primary monitor is the HDMI output. In our case that requires a bit of fiddling after a reboot to get the two displays to mirror and not extend.

We also found that for our remote access www solution the display needed to be set to MJPEG/HTTP instead of the default H.264/WebRTC mode.

7/6/2024

Pi Weather Radio Updates

Filed under: General — taing @ 10:42 pm

After updating the Pi to Bookworm, things didn't seem to work correctly for the Pi SDR Weather Radio (pt 2, pt 3, pt 4). The problem was with the new version of ezstream; it turns out this is solvable with a quick search. Newer versions of ezstream (1.x) no longer support the .xml configuration format from version 0.x.

There is an included utility, ezstream-cfgmigrate, to solve the problem.

ezstream-cfgmigrate -0 ezstream.xml >ezstream.conf

will create a newly formatted file, ezstream.conf, to use.

I also realized I had never documented the current version of the scripts.

The first file is start.sh. This is typically left running, usually inside screen.

cd weatherRadio
rm -f one two three; mkfifo one two three

./to-same.sh <one &
./to-lame.sh <two &
./to-udp.sh <three &
rtl_fm -f 162550000 -s 22050 -p 14 | tee one two three |multimon-ng -t raw -a EAS /dev/stdin

Everything else happens in a subfolder. We first delete and recreate three named pipes (fifos), each processed by a separate script.

to-same.sh handles the actual SAME messages, MQTT and starting the speaker output:

multimon-ng -t raw -a EAS /dev/stdin | python3 ~/tg-dsame/dsame.py --mqtt jtk-openhab.home.arpa --json output.json --call /home/pi/weatherRadio/playRadio.sh --command boo

to-lame.sh (named before we switched to sox) handles the streaming via ezstream:

sox -t raw -r 22050 -b 16 -e signed -c 1 -v 7 - -r 22050 -t mp3 -c 1 -C 64 - sinc -3.5k |ezstream -c ezstream.conf

to-udp.sh creates the udp stream for playRadio.sh:

sox -t raw -r 22050 -b 16 -e signed -c 1 -v 7 - -r 22050 -t mp3 -c 1 -C 64 - sinc -3.5k|socat - udp4-sendto:localhost:5555

playRadio.sh pipes the udp stream from to-udp.sh to play using netcat and plays for two minutes to the attached speaker:

#!/bin/bash
nc -lu 5555 | play -t mp3 - &
sleep 120
killall "play"

12/23/2023

TrueNAS Scale / FTP / Cleanup

Filed under: General — taing @ 7:19 pm

After getting TrueNAS set up it was time to create an FTP user for the cameras to record to. First a local user is created. The user's home directory is where the files go and will probably also want to be a Dataset and configured as a Share. The user is configured with a shell of /usr/sbin/nologin.

The FTP service is then enabled. The service is configured to always chroot.

Now it is time to create a cron job to clean up these folders. The following command is set to run each night at midnight as the camera ftp user:

find /mnt/tank1/video/// -type f -mtime +30 -print0 | xargs -0 rm -f

This should find and delete all of the files in the folder older than 30 days.
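If the cameras create dated subfolders, a second pass along these lines can prune any directories left empty (a sketch; adjust the path to match the share):

find /mnt/tank1/video/ -mindepth 1 -type d -empty -delete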

There has been an issue with some of the Amcrest cameras and FTP, but for now ours seem to be uploading correctly.

12/1/2023

TrueNAS Scale and Dropbox

Filed under: General — taing @ 5:59 pm

Unfortunately, TrueNAS Scale doesn't natively support the Dropbox service. It does support Dropbox through Cloud Data Protection; however, that only provides one-way sync. The gist found at https://gist.github.com/kbumsik/b7cc243e297a3a66837151024049f43c provides an option. The gist will not survive a system update but can be re-enabled.

Make sure you create a regular user for dropbox and give them a home directory in your pool/home, for example user dropbox with a home directory of /mnt/tank1/home/dropbox. The gist will have you do the Dropbox headless Linux install directly from Dropbox, along with grabbing the handy dropbox.py script from Dropbox, before adding their service script. A summary of the commands from Dropbox and the gist:

su dropbox
wget https://www.dropbox.com/download?plat=lnx.x86_64
tar -xzf name_of_downloaded_file
.dropbox-dist/dropboxd
# this can take a long time especially on reinstallation

wget https://linux.dropbox.com/packages/dropbox.py
sudo mkdir /etc/db
sudo cp dropbox.py /etc/db/dropbox-cli
sudo chmod +x /etc/db/dropbox-cli

wget https://gist.githubusercontent.com/kbumsik/b7cc243e297a3a66837151024049f43c/raw/b339fbeee2ba1081723612bec5aacf92cc60e7c2/dropbox-start.target

# Edit the service unit for your needs - be sure user id is correct and adjust path for dropbox-cli
pico dropbox-start.target

# do not start the service unless you have stopped the manually started process
sudo cp dropbox-start.target /etc/systemd/system/dropbox-start.service
sudo systemctl enable --now dropbox-start.service
sudo systemctl status dropbox-start.service

I have saved dropbox.py and dropbox-start.target at truenas dropbox.zip in case the gist goes missing.

UPDATE: after a short while the process gave me an error that it couldn't monitor the entire folder structure and suggested I run the following command to resolve it:

echo fs.inotify.max_user_watches=100000 | sudo tee -a /etc/sysctl.conf; sudo sysctl -p

UPDATE #2: After updating to TrueNAS Scale Dragonfish-24.04.0, /usr is read-only. The commands above have been updated. You will need to edit dropbox-start.target with the proper path for dropbox-cli.

11/3/2023

Airthings Wave Plus

Filed under: General — taing @ 9:40 pm

There are some resources online for getting the Airthings Wave Plus to communicate with your Raspberry Pi and Openhab. The first article is from Airthings and is a bit out of date, but it is a very good starting point. The find_wave.py script they refer to is no longer on their site, but I found it at https://cdn2.hubspot.net/hubfs/4406702/Tech/find_wave.py or at https://github.com/Airthings/waveplus-reader. A newer Python Bluetooth library from the Airthings folks is also available.

This all takes us down the rabbit hole of Python 2.7's lack of current support. "sudo apt-get install python2" got the proper version of Python itself installed, but that still leaves some dependencies missing. I found a note on installing the no-longer-supported version of pip for Python 2.7 on Raspbian Bullseye.

sudo curl https://bootstrap.pypa.io/pip/2.7/get-pip.py --output get-pip.py
sudo python2 get-pip.py

However, I was still not able to get bluepy installed for Python 2.7.

So now we try to adapt the script for Python 3. Not too bad; we need parentheses for the print statements/functions. Once the syntax was correct, my first run of "sudo python3 find_wave.py" or "sudo blescan" rewarded me with the error message:

BTLEManagementError("Failed to execute management command '%s'" % (cmd), rsp)

"bluetoothctl power off" followed by "bluetoothctl power on" seems to resolve the issue. There is quite a bit of discussion of this error on GitHub.

Once we got find_wave.py working it was time to try read_waveplus.py. Once again this is written for Python 2.7 but a quick fixup of all of the print statements got it working for Python 3.

~/waveplus-reader $ sudo python3 read_waveplus.py 2930165120 5

Press ctrl+C to exit program

Device serial number: 2930165120
+--------------+--------------+--------------+--------------+--------------+--------------+--------------+
|     Humidity | Radon ST avg | Radon LT avg |  Temperature |     Pressure |    CO2 level |    VOC level |
+--------------+--------------+--------------+--------------+--------------+--------------+--------------+
|     34.0 %rH |     74 Bq/m3 |     74 Bq/m3 |   19.43 degC |   977.64 hPa |    546.0 ppm |     46.0 ppb |
|     34.0 %rH |     74 Bq/m3 |     74 Bq/m3 |   19.43 degC |   977.64 hPa |    546.0 ppm |     46.0 ppb |
|     34.0 %rH |     74 Bq/m3 |     74 Bq/m3 |   19.43 degC |   977.64 hPa |    546.0 ppm |     46.0 ppb |

There are lots of folks who have various versions of Airthings-to-MQTT and airthings-wave-plus-reader. There is even a nifty ESP32 gateway project. But the project that interested me most for the Pi also creates a mini dashboard and was recommended in the Openhab forums.

Different readers all seem to return the same data for the sensors (Temp, Humidity, Pressure, CO2, VOC and Radon). However, there are differences in the handling of the access control data (illuminance, ambient light, measurement periods and battery level). The access control packets seem to start with the sent command followed by, in Python struct pack format, <L12B6H or <L2BH2B9H. The battery level appears to be millivolts as a 16-bit unsigned word third from the end (element 17 or 13).
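A quick sketch of that decode in Python (the format string and the third-from-the-end position come from the notes above; the payload here is just a zero-filled placeholder of the right size):

import struct

payload = bytes(struct.calcsize('<L12B6H'))   # placeholder; normally the bytes following the command
values = struct.unpack('<L12B6H', payload)    # one of the two observed packet layouts
battery_mv = values[-3]                       # 16-bit unsigned word, third from the end
print(battery_mv)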

For now the Openhab Bluetooth binding extension is working. If issues develop the MQTT gateway will probably be the solution.

UPDATE (July 2024): after updating the Pi to Bookworm, bluepy was no longer installed. Currently it is not available as a Debian python3-xxx package, and the global Python install is considered an externally-managed-environment by pip3. This means sudo pip3 install bluepy will fail. There are a couple of choices to work around this. The recommended method is to create a virtual environment and install there. If I had the patience and time I would have chosen this path. Instead I chose the --break-system-packages route, which allows you to install into the global Python environment with pip3.

sudo pip3 install bluepy --break-system-packages

In my situation this was expedient.

Pi has no network

Filed under: General — taing @ 9:07 pm

On my Raspberry Pi running Bullseye, after running "sudo apt-get dist-upgrade", dhclient failed to start at boot. With a local login one could run "sudo dhclient" and all would be well.

Fortunately the good folks at stackexchange had seen and solved the problem. Edit /etc/systemd/system/dhcpcd.service.d/wait.conf to change “/usr/lib/dhcpcd5/dhcpcd” to “/usr/sbin/dhcpcd”.
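After the edit, the drop-in ends up looking roughly like this (a sketch; keep whatever flags your existing file already has):

[Service]
ExecStart=
ExecStart=/usr/sbin/dhcpcd -q -w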

10/30/2023

More upgrades – PHP & Webtrees

Filed under: General — taing @ 10:09 pm

In the process of the upgrades discussed in the previous post, Ubuntu 18.04 (Bionic Beaver) to 22.04 (Jammy Jellyfish), there were additional issues for Webtrees and PHP.

The current release of webtrees (2.1) requires PHP 7.4, 8.0, 8.1 or 8.2. In the process of upgrading, PHP 8.1 was installed but lacked the required modules to function properly. This can be solved by a simple apt-get command to install the required pieces:

sudo apt-get install php8.1-{cli,common,curl,zip,gd,mysql,xml,mbstring,imagick,intl}

There is a discussion of this at https://www.webtrees.net/index.php/forum/help-for-2-0/35292-update-php-error-solved and https://www.webtrees.net/index.php/forum/help-for-2-0/35369-upgrade-ubuntu-to-20-04-02-broke-webtrees.

It is also worth noting the apache2 commands to enable/disable Apache PHP modules:

sudo a2dismod php7.4
sudo a2enmod php8.1

There is a discussion of installing multiple versions of PHP at https://tecadmin.net/how-to-install-php-on-ubuntu-22-04/ and https://linux.how2shout.com/how-to-install-php-7-4-on-ubuntu-22-04-lts-jammy-linux/. The articles discuss a PPA for the versions not in the standard apt sources. Once multiple versions are installed you can use

sudo update-alternatives --config php

to change versions. You can easily confirm the default version using “php -v”.
