Tuesday, 9 April 2024

Proxmox mount host storage in Linux Container

On the host, run this in a terminal:

pct set <container_id_number> -mp0 /<host_dir>,mp=/<container_mount_point>
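For example, a hypothetical invocation (the container ID, host folder, and mount point here are made-up placeholders) sharing the host's /tank/media folder as /mnt/media inside container 101:

pct set 101 -mp0 /tank/media,mp=/mnt/media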

Tuesday, 2 April 2024

Windows 10 update KB5034441 fails with error 0x80070643 (how to ignore update)

In January 2024, Microsoft pushed out a broken update to the Windows Recovery Environment (WinRE) that got a couple things wrong:

1) If your WinRE partition didn't have enough space, the update would fail to install.

2) If you didn't have a WinRE partition at all, it would also try to install and fail.

Within a week or two, MS linked to info about how to resize a WinRE partition to address #1 above. As for #2, they suggested you just ignore the failing update.

My system was suffering from #2 and I was tired of seeing the failing attempt so I did a quick search for how to disable a specific update and found an article describing Microsoft's "Show or Hide Updates Tool".

 

Display UTF-8 characters in Putty or a Windows Terminal ssh session

Using Putty SSH on Windows 10 to connect to a Debian 12 system, I noticed that some UTF-8 characters were not rendering properly in the terminal. After checking the Putty settings (Window -> Translation -> Remote Character Set = UTF-8 and Window -> Appearance -> Font = Noto Sans), I was still seeing a filename containing the UTF-8 Middle Dot (·) character printed with some kind of encoding representation rather than the character itself. The same character rendered fine when viewed in Firefox.

I wondered if it might be a font issue so I tried installing the Noto series archive of Nerd Fonts and selecting NotoMono Nerd Font Mono. This didn't fix the display in Putty.

Next attempt was to try a different terminal. I installed Windows Terminal, which is supposed to use a default font with better UTF-8 support and fall back to other fonts as needed. On a whim I also tried just running ssh in it. I had forgotten that MS decided to include ssh in the Windows 10 April 2018 Update. Anyway, after using that terminal and the built-in ssh to connect to the server, I was still not seeing the middle dot character rendered properly.

Some more investigation revealed that it might be an issue with locale settings on the Debian side, so I ran dpkg-reconfigure locales and chose en_CA.UTF-8 UTF-8, en_GB.UTF-8 UTF-8, and en_US.UTF-8 UTF-8 as my locales to generate, followed by en_CA.UTF-8 as the "Default locale for the system environment". After that, reconnecting in Windows Terminal using `ssh` and in Putty both showed the middle dot character properly.
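For reference, a sketch of the Debian-side fix (the locale choices are the ones listed above; checking the result with locale afterwards is my own addition):

sudo dpkg-reconfigure locales
# generate en_CA.UTF-8 UTF-8, en_GB.UTF-8 UTF-8, and en_US.UTF-8 UTF-8,
# then pick en_CA.UTF-8 as the "Default locale for the system environment"
locale
# after reconnecting, LANG and the LC_* variables should report en_CA.UTF-8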

Tuesday, 19 March 2024

Backup a folder as a gzipped tar file over the network


tar zcvf - <my_backup_folder> | ssh <destination_host> "cat > /<path_on_destination>/my_backup.tgz"
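To pull that archive back later, the same idea works in reverse (a sketch, using the same placeholder paths):

ssh <destination_host> "cat /<path_on_destination>/my_backup.tgz" | tar zxvf -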

Using sudo with ssh agent forwarding

I recently had to ssh from my computer into another system, become root, and from there create a tgz file on a 3rd system. The authentication had to come from my computer.

From: https://serverfault.com/a/564462

sudo --preserve-env=SSH_AUTH_SOCK

will do the trick, at least when using it to become root.
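Putting it together, a sketch of the whole round trip (host names, folder, and path are placeholders; -A enables agent forwarding on the first hop and -s gives a root shell that keeps the forwarded agent socket):

ssh -A <middle_host>
# on the middle host, become root without losing the agent socket
sudo --preserve-env=SSH_AUTH_SOCK -s
# as root, ssh to the 3rd system still authenticates with the key on my computer
tar zcvf - <some_folder> | ssh <third_host> "cat > /<path_on_third_host>/my_backup.tgz"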

Wednesday, 13 March 2024

Proxmox GPU passthrough options for hardware accelerated transcoding

I recently migrated the disks hosting my Plex Media Server (PMS) libraries to a new system. The previous system ran SmartOS with a large ZFS zpool of mirrored pairs with the default name "zones".

SmartOS is a very clever Illumos-based operating system (a modern replacement for Solaris) designed to be a host for either hardware virtual machines (HVMs based on bhyve or KVM) or zones: native SmartOS containers that securely share the host Illumos kernel, or LX containers that emulate a Linux kernel on top of the shared host Illumos kernel.

I was running my instance of PMS in a SmartOS LX container created by Canonical and based on Ubuntu 16.04. The host zones/Media folder was made accessible to the zone. I was also running a number of vanilla SmartOS zones for minecraft servers and other things that I needed to spin up for testing bits and bobs now and then.

The hardware hosting it all was getting long in the tooth and was not ideal: no GPU for hardware transcoding of multimedia, no ECC RAM (considered very important when using ZFS), and no straightforward upgrade paths.

During the 2020-2023 pandemic time-frame I had a data rescue and digital archiving project at work that needed some careful decisions on file encoding and container formats. I needed to preserve uncompressed or lossless compressed 1st generation digital copies from the original analogue sources but also create smaller 2nd generation versions in a format that could retain high quality at much smaller file sizes while showing promise for being playable on a wide variety of devices long into the future. To this end, there was a strong desire for open source software, open coders and decoders (codecs), and standard container formats. Anyhoo... for video I settled on the very new but promising AV1 standard with faith that the hardware and software would catch up (with the existing well-supported H.264 as a fallback if I ran out of time.) As fortune would have it, AV1 encoding got a huge boost from Intel with their new and inexpensive ARC Alchemist graphics cards that have hardware AV1 encoding support. Intel also did a ton of work to open up and support drivers and libraries for Linux and the Open Source multi-media software communities were quick to add support for it.

Soo... It took a while, but I put together an AMD Ryzen CPU on a workstation motherboard using the X570 chipset, which allows me to use ECC Unbuffered (EUDIMM) RAM. I installed Manjaro Linux on the native hardware, and with the latest kernel, drivers, ffmpeg, and handbrake installed I was able to get all my AV1 encoding done at 20x video speed, which is amazing for AV1.
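For flavour, a hardware AV1 encode on the Arc card with ffmpeg looks roughly like this sketch; the file names, bitrate, and preset are made-up examples rather than the settings I actually used:

ffmpeg -i input.mkv -c:v av1_qsv -preset slower -b:v 4M -c:a copy output.mkv
# av1_qsv is the Intel Quick Sync Video AV1 encoder; audio is passed through unchanged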

In that time I had also started using Proxmox at home (a little CWWK Intel N6005 networking box) and work (5 node hyper-converged cluster with Ceph storage running on used Dell PowerEdge servers) so when the project was done I decided to replicate the system for a new home server as a replacement for my SmartOS box. A big win was that Proxmox has excellent support for ZFS and even includes an installer that lets you create a mirrored bootable root pool out of the box. I wish other Linux distributions would copy the Proxmox implementation. Ubuntu tried to do their own cool ZFS thing for a bit on their desktop installer but never brought it to their server installer and I'm not sure what is up with it now. Anyway, what Proxmox meant for me was mirrored NVMe root drives, great GUI for managing VMs, containers, storage, networking, etc. and support to import my SmartOS zones pool just by moving the disks over to the new chassis.

Hardware pass-through support for passing my Intel Arc A380 GPU to a KVM VM took some fiddling but worked. I wanted it for hardware accelerated transcoding in PMS.

But then I needed to make the zones/Media data available so I set up an LXC container based on a Debian Bookworm image to get access to that folder on the host and maybe serve it up as NFS or CIFS to the VM. But once I got the data available in the container, I thought I might as well see if I could give the container access to the GPU as well and just install PMS on it. This required undoing the HVM passthrough steps and doing some additional steps for LXC device permissions.
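I didn't write down the exact steps at the time, but the LXC device-permission side generally comes down to a few lines in the container's config on the Proxmox host (/etc/pve/lxc/<container_id>.conf). A common sketch, binding the host's /dev/dri devices into the container, looks something like:

# allow the DRI character devices (major number 226) inside the container
lxc.cgroup2.devices.allow: c 226:* rwm
# bind-mount the host's /dev/dri into the container
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir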

Update 2025-03-26: There was more to this that I intended to write up a year ago but then life got busy, I didn't get back to it, and now I forget what else I was going to put here so I'm just going to publish this as is. I'll post something new and refer back if/when I get back to this stuff.

Friday, 1 March 2024

Save existing GNU screen output buffer to a file

 I was working on a node proxy application and used screen to be able to detach my terminal while leaving it running and capturing the output of the application. When I came back, I wanted to save everything and tracked down this bit of wisdom in a comment on a stackoverflow answer:

Ctrl+A then : to get to command mode, then hardcopy -h <filename> (the -h option includes the scrollback history, not just the visible window)
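For example, to dump the window plus its scrollback to a made-up file name, press Ctrl+A, then :, and enter:

hardcopy -h /tmp/proxy-output.txt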

Monday, 26 February 2024

Disable kernel messages on the console

Recently, when confronted with what turned out to be a networking issue on my Proxmox Virtual Environment cluster, the only way I had to interact with the hosts was through the management console.

Of course, during a networking problem on something like a 5 node Proxmox plus Ceph cluster, each host gets pretty chatty to its logs. And much of that gets displayed on the console you are trying to use.

Proxmox being built on Debian Linux means that dmesg is the utility to control the "kernel ring buffer" (basically whatever the kernel has to say) and the default is to display it all on the console.

If you log in on the console and need to get stuff done without being bombarded by kernel messages, you can run `dmesg -D` to disable the printing of messages to the console. When you want them back, `dmesg -E` enables them again.
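The same two commands as a copy-and-paste pair:

dmesg -D    # stop kernel messages from printing to the console
dmesg -E    # turn them back on when you are done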

Thursday, 22 February 2024

Running a node.js server on a privileged port

 I needed to run a node.js server application with a listener on a privileged port (in this case port 80 because it was an HTTP proxy.)

Running a whole node application as root just so it can bind to a port lower than 1024 is overkill and turns any bug in node.js, imported modules, and your code into potential remote root security exploits. Not good.

Most server software designed to run on privileged ports (e.g. httpd) is designed to be initially run with root privileges to bind the port but will then drop those privileges once that is done and run as a dedicated system user with only the minimum set of permissions required to do its job.

Thanks to this serverfault answer, I learned about the authbind utility.

Install it (your Linux distro likely has a package available) and then touch, chown, and chmod the appropriate file in the `/etc/authbind/` folders. Read the manual page for all the options.

Sooo..... Assuming the desired app.js is running under non-privileged user "user" and you wish to bind to port 80:
sudo touch /etc/authbind/byport/80
sudo chown user:user /etc/authbind/byport/80
sudo chmod 500 /etc/authbind/byport/80

Then as "user", run your app like this:

authbind node app.js
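To confirm the app actually got the port (a quick check, assuming ss from iproute2 is installed):

ss -tlnp | grep ':80 '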
  

Friday, 16 February 2024

Reclaim unused space in thin VM disk

After slowly using the old method below for a dozen VMs, I started to wonder if there was a better way to just always write zeros when deleting files, or to have the OS circle back around later to zero unused space. I thought there might be some sort of kernel or file-system option in Linux. I didn't find one, but what I did find was fstrim. Rather than writing zeros, it tells the underlying storage to discard (trim) the unused blocks on mounted filesystems, which lets thin-provisioned storage reclaim the space, and it can find the filesystems by itself.

So... this new method is much easier and faster. Options vary by version of fstrim and Linux distro, but -a, --all and -v, --verbose seem well supported.

sudo fstrim --all --verbose
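If you only want to trim a single filesystem, name the mount point instead (the root filesystem here is just an example):

sudo fstrim -v /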

 

Old method:

To recover empty space in thin VM disks, it is often necessary to overwrite the empty space inside the VM with zeroes. The zerofree utility is a good way to do that for the common ext file systems. Once the free space is zeroed, VM hypervisor software will often either recognize and reclaim the space on the host volume, or provide a utility to reclaim the space. Check your hypervisor documentation. In the case of host volumes using the VMware filesystem (VMFS) version 6 and higher, the reclaiming of zeroed space happens automatically in the background. Older versions require running a utility.

The easiest way to zero the free space on common Linux filesystems is to boot a recent GParted Live CD .iso file. Your VM software should let you mount the .iso file as a CDROM for the VM to boot from. You may have to get into the VM BIOS to change the boot order.

Once GParted starts to boot, you should be able to choose all defaults and end up with a graphical interface that includes a running GParted and a desktop with a few other applications.

To write zeros to all the unclaimed space, see if you can spot the root device in the GParted GUI and then open the terminal application and run something like:

sudo zerofree -v <whatever_the_root_device_is>

For example:

sudo zerofree -v /dev/sda1

Note: If LVM is in use (you can tell from the GParted GUI that opens if you see an extended partition with an lvm2 pv filesystem), open terminal and run:

sudo lvdisplay

to find the root logical volume's device path before running zerofree. Your zerofree command might then look something like:

sudo zerofree -v /dev/myhost-vg/root
