Sunday, March 30, 2025

Rackmount hardware for the Raspberry PI4


Background.


After having had my Raspberry PIs in a box together with a lot of wiring spaghetti, it is nice to finally have some decent rackmount enclosures to tidy things up.

This will be the first of two enclosures for a total of 10 nodes. I have not yet decided whether they will all be in the same cluster or form two individual ones.

Hardware.

The nodes themselves are Raspberry PI4-B boards with 8 GB of memory. They each have 128 GB of internal storage via the onboard microSD card. I use SanDisk High Endurance cards, which are made for 24/7 operation and rated for 20,000 hours.

Between the nodes is a normal gigabit network switch with a 10-gigabit fiber backbone. For now I have more than ample ports in one switch.

 
The enclosure model is the Rackmount RM-PI-T1. It is a 1U enclosure with room to add a fan or a HAT for the PIs if necessary.
 
If your rack cabinet already has fans I doubt it will be necessary.

Software.

The nodes run normal PI-OS, a Debian-derived Linux OS specially tailored for the PI hardware. I use the Lite version, which is headless.

On top of this I run the K3S (Rancher) lightweight Kubernetes platform. Some of the choices Canonical has made lately make me doubt MicroK8s in a production environment, and full-on kubeadm (vanilla Kubernetes) is probably overkill for my needs.
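
Getting K3S onto the nodes is essentially a one-liner per node. A minimal sketch along the lines of the official K3S install instructions - the server name is a placeholder:

# On the first (server) node:
$> curl -sfL https://get.k3s.io | sh -

# On each agent node, using the token found in /var/lib/rancher/k3s/server/node-token on the server:
$> curl -sfL https://get.k3s.io | K3S_URL=https://<server-node>:6443 K3S_TOKEN=<token> sh -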

I use no kind of hardware emulation. This is what is known as "bare metal" in Kubernetes terms. You could add emulation as an extra layer of abstraction, but as it is here, the images need to be built for the ARM architecture - and not for AMD64, which most ready-made ones are.

Something that I suspect has surprised a lot of developers after they got a new MacBook :-P.
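
If you want to check up front whether a ready-made image supports ARM, the manifest will tell you. A sketch assuming you inspect and build with Docker (nginx is just an example image):

$> docker manifest inspect nginx | grep architecture
$> docker buildx build --platform linux/arm64 -t myimage:latest .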

You could add a hardware virtualization layer, but I think the performance impact would be severe.

Some of the nodes may convert to FreeBSD at a later stage, mostly as an experiment. They will then not be part of the Kubernetes cluster but rather their own kind, running the needed services via Jails on the physical host.

Images.

The Kubernetes images are built through Ansible. The build environment is hosted on GitHub and the images are built via GitHub Actions.

I plan to make a separate, more in-depth post about this and also publish the scripts and configs publicly on GitHub (once they fully do what I want).
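
I will save the details for that post, but the shape of it is a workflow that runs the Ansible playbook on every push. A minimal hypothetical sketch - the file names are placeholders and not my actual setup:

# .github/workflows/build-image.yml
name: build-image
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install ansible
      - run: ansible-playbook -i inventory.yml build-image.yml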

Backend server.

For this purpose I currently have an older and much more powerful HP DL585 FreeBSD server with storage, which attaches to the nodes for persistent storage. It also currently runs any databases needed.

On this server each logical node is hosted in a Jail, which gives it its own IP stack and its own area of the ZFS pool.
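
As a sketch, a jail definition along those lines in /etc/jail.conf can look like this - names and paths are placeholders, and VNET is what gives the jail its own IP stack:

# zfs create zpool/jails/mynode

mynode {
    host.hostname = "mynode.example.com";
    path = "/zpool/jails/mynode";           # its own area of the ZFS pool
    vnet;                                   # dedicated IP stack
    vnet.interface = "epair0b";             # host side bridged to the LAN (epair setup not shown)
    exec.start = "/bin/sh /etc/rc";
    exec.stop = "/bin/sh /etc/rc.shutdown";
    mount.devfs;
}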

The total ZFS pool size is currently 14 TB.

This is how the services that are now in the Kubernetes cluster were all hosted in my old setup, before the PI nodes.

Conclusion.

These small machines are remarkably capable and have a very low power consumption footprint. The new PI5 is arguably even more desirable with its built-in interface to real SSD storage.

However, the idea behind Kubernetes is to keep the nodes stateless, and that is negated if the individual nodes hold persistent data that is not mirrored to all of the nodes.
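
In my setup the persistent data therefore lives on the backend server described above, typically surfaced to Kubernetes as PersistentVolumes. One way to express such a volume is NFS-backed - a rough sketch with placeholder names, paths and sizes:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: backend-files
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: backend.example.com
    path: /zpool/exports/files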

If you have your own business and require flexible computing, I highly recommend the approach of using cheaper hardware like this: you will be able to size your system so that multiple nodes can fail without you even noticing, except for the alarm the system will give you.

It is then very easy to simply replace a faulty node with new hardware, add the needed disk image and let it rejoin the cluster. 
 
And do not be alarmed at the prospect of having your own on-site Kubernetes setup. If I can do it with off-the-shelf parts for less than 7,500 DKK, any company with a decent IT department can run something similar.
 
It is the most obvious way of complying with any GDPR legislation in the EU, and of avoiding any worries about how our US "ally" treats our data - worries you will have if you host with the hosting titans.

Tuesday, April 6, 2021

libvirt-dnsmasq running as part of libvirt


Open DNS server on external IP

Background

When you run libvirt / KVM on Ubuntu, it seems the framework starts up a second local DNS server beside systemd-resolved. Since only one server can occupy localhost port 53 (systemd-resolved has it), libvirt-dnsmasq binds to HOSTNAME port 53 instead - that is, whatever IP the DHCP server has assigned to your PC.

This IP is, as a matter of fact, accessible from all other sources on the network, and running an open DNS server on it is probably not a good idea.
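
You can confirm the exposure from any other machine on the network - the IP here is an example; use whatever your PC was assigned. If this returns an answer, you are running an open resolver:

# dig @192.168.1.23 example.com +short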

Solution

Check the virtual net status:

# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes

Check to see if libvirt-dnsmasq is running:

# lsof -i TCP:53

Remove the default virtual network for your virtual hosts:

# virsh net-destroy default

Prevent it from autostarting at all:

# virsh net-autostart --network default --disable

If you need it again, start it with:

# virsh net-start default

Result

libvirt-dnsmasq is no longer listening on port 53 / DNS on whatever IP your interface(s) have been given on your network.

# lsof -i TCP:53
COMMAND   PID            USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
systemd-r 635 systemd-resolve   13u  IPv4  25136      0t0  TCP localhost:domain (LISTEN)

# virsh net-list
 Name   State   Autostart   Persistent
----------------------------------------





Wednesday, January 17, 2018

Citrix on Linux


How to get the black address bar

1. Use Firefox. It works way better with the ICAClient than Chromium. I have not tested Chrome, but I suspect it is the same.

2. Setup of the local ICAClient

- Navigate to the root of the ICA installation. For me that is /opt/Citrix/ICAClient.
- Find the file All_Regions.ini 
- Under [Client Engine\GUI]
- Add ConnectionBar=* 
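
The relevant section of All_Regions.ini then ends up looking like this:

; All_Regions.ini - relevant section only
[Client Engine\GUI]
ConnectionBar=*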

Restart your ICAClient and log on again. The bar should now be visible at the top of the screen.

Saturday, September 2, 2017

FreeBSD 11 - vt console debacle


Vt console in FreeBSD

Introduction

For some reason server distributions of *nix now have to have a vt-based (graphical) console, so that you can get your console into larger resolutions.

Personally I fail to see the need for this at all. When you connect to your server it is via ssh 95% of the time - probably more. And the client PC from which you run ssh probably has an HD display today. Using the actual console is something you do when there are problems, usually in single-user mode. This does not have to be bling bling.

What about those of us who depend on the serial console? Or on text mode without a compositor? Well, it would seem FreeBSD is now going down the Ubuntu server road, and we have to do some fiddling in order to make our Integrated Lights-Out cards, serial consoles and whatnot work again.

At least for me, the iLO card on my HP server could no longer bridge to the console after an upgrade to FreeBSD 11. I was told the console was in an unsupported graphics mode, and after plugging a monitor into the server, it turned out to be in 640x480.

How to fix this.

Fortunately this is simple, requiring no custom kernel build or anything. In /boot/loader.conf put:

kern.vty=sc

And you are back to the old sc driver for the console. I can again get a console via ethernet, also in single-user mode via the iLO.
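
If you want to verify which console driver is active after the reboot, the setting is visible as a sysctl. It should report the sc driver we just configured:

# sysctl kern.vty
kern.vty: sc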

Conclusion

Back to normal ..

Tuesday, July 25, 2017

HP Integrated Lights-Out and OpenSSH


Problem

The sshd server implementation made by HP on iLO cards - especially older ones - can be notoriously difficult to use with newer versions of OpenSSH.

The main reason is algorithms rendered unsafe by old age, combined with a very strange implementation on HP's part.

Solution

The following configuration on the client side made my current version of OpenSSH play reasonably nice with iLO / SSH. There are some unexplained disconnects by the server that I cannot figure out.

In .ssh/config

Host <hostname>/<IP>
PasswordAuthentication yes
ChallengeResponseAuthentication no
GSSAPIAuthentication no
HostbasedAuthentication no
PubkeyAuthentication no
RSAAuthentication no
Compression no
ForwardAgent no
ForwardX11 no
Ciphers aes128-cbc,3des-cbc,blowfish-cbc,cast128-cbc,arcfour128
HostKeyAlgorithms ssh-rsa,ssh-dss
KexAlgorithms diffie-hellman-group1-sha1
MACs hmac-sha1
ServerAliveInterval 0
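
If you just want to test before touching your config file, the same can be expressed as ad hoc -o options on the command line - the user name is whatever is configured on your iLO:

ssh -o KexAlgorithms=diffie-hellman-group1-sha1 \
    -o HostKeyAlgorithms=ssh-rsa,ssh-dss \
    -o Ciphers=aes128-cbc,3des-cbc \
    <username>@<hostname>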

Result

You can now SSH to the iLO card and use the command line tool from HP. Issue a

</>hpiLO-> remcons

to get a real hardware console bridged over the network.


Friday, January 29, 2016

Automount with systemd




Introduction 


Like many others, I have used autofs for automagically mounting nfs / cifs shares from my local servers onto my clients for a while. You can see some of my experiments with this here.

However, systemd now has this as a standard feature, set up in a few small steps. This all requires an NFS server to be configured on your server before setting it up on the client.
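
On the server side that boils down to exporting the share. Assuming a Linux NFS server, an /etc/exports entry along these lines (hostname and options are examples), followed by a reload:

/server/files    my.client.com(rw,sync,no_subtree_check)

$> sudo exportfs -ra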

/etc/fstab content 


You need to add your server to your client's local /etc/fstab:

my.server.com:/server/files /my/files nfs noauto,x-systemd.automount,soft,rsize=16384,wsize=16384,timeo=14,intr 0 0 

You can activate the new entry by restarting the system or by using systemctl:

$> sudo systemctl daemon-reload 
$> sudo systemctl restart remote-fs.target 
$> sudo systemctl restart local-fs.target 
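
systemd generates an automount unit named after the mount path - here /my/files becomes my-files.automount - and you can check that it is active with:

$> systemctl status my-files.automount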


Conclusion 


Your shares should now automount when you cd into the folder. Here that would be /my/files on the client.

Friday, October 23, 2015

Lenovo T440P + Ubuntu = driver hell .. And how I fixed it.


New laptop and the nasty surprises

I have purchased the Lenovo T440P in order to run Ubuntu for my development environment. I had a lot of initial issues using this laptop with external displays. The laptop display is too small to use on its own for development, so this was kind of a show stopper.

My target environment was:

  1. Lenovo T440P patched to a sensible BIOS level. I think mine is 2.31 or 2.32.
  2. Ubuntu 14.04.3 LTS
  3. 3.19.0-30-generic #34~14.04.1-Ubuntu SMP Fri Oct 2 22:09:39 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
  4. Gnome-Shell / not Unity
  5. Nvidia proprietary drivers
  6. Some sort of utility to manage the new daisy-chaining of monitors on a single channel.

A lot of people have spent many, many hours getting this to work. This is what worked for me.

Installation of base system

  1. I started out with a system with no nonsense. That is the server release of Ubuntu. Install it, and you will get a system without any GUI at all.
  2. Install the package gnome-shell. This will give you the most basic gnome-shell install, shaved down to the bones. No Empathy, no Evolution. The downside is that you have to add many of the usual tools by hand.
  3. Install nvidia-331 or above. Anything above this worked for me.
  4. Replace GDM with LightDM. This is the crucial point. GDM did not work for me on this laptop; I think LightDM is currently more advanced in managing sessions with multiple monitors. This is done by first installing the package lightdm and then running dpkg-reconfigure lightdm, selecting LightDM over GDM (the commands are sketched after this list).
  5. See to it that the GRUB boot loader has no hocus pocus in it. You need a plain "quiet splash" with no modeset changes, unless you are fiddling around and know what you are doing.
  6. The above should also have installed the package nvidia-prime, so prime-select query should show you which GPU you are using.
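
For reference, the package steps above boil down to something like this (package names as of 14.04; later releases ship newer nvidia-* versions):

$> sudo apt-get install gnome-shell
$> sudo apt-get install nvidia-331 nvidia-prime lightdm
$> sudo dpkg-reconfigure lightdm
$> prime-select query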

Result

I am able to use both monitors with either the Intel or the NVIDIA driver. Of course the Intel card does not perform as well as the NVIDIA one. Some people have reported memory leaks / kernel crashes; I have not experienced this with the above setup. I am writing this from the lappie with an external monitor attached, while also using the monitor on the laptop itself.

It is really nice that the external monitors no longer tie hard to the discrete graphics card.

Switching is done with prime-select <query|nvidia|intel>.

For now I think I will use the Intel card, as it seems easily fast enough to power a developer workstation. In fact I struggle to see why Lenovo does not produce a variant of the T440P with ONLY the Intel card. It seems to be more than what an old codegrunt like me needs anyway.

Performance-wise, glxgears gives me around 60 fps with the Intel driver and 2000+ fps when using the NVIDIA card. This goes for both screens.

Outstanding issues

I also have the advanced dock. The above setup does not work for me at all with it yet. I am able to get a text console (mirrored) on both displays, but cannot get a GUI up and running on either screen when using the advanced dock. I have read that the same issue exists with some versions of MS Windows, so Lenovo has in fact made a patch that I have yet to flash onto the advanced dock.

https://support.lenovo.com/us/en/documents/ht081248

I think the next step is to flash the firmware of the dock and see what the outcome is. Right now I do not have a Windows boot device compatible with the dock, so I will probably have to create a boot partition for Windows on the lappie itself (which sucks) or replace the disk with another one and install on that.

What not to do ..

I have read that some have had success downgrading to the 1.14 version of the BIOS. If you have a later version of the T440P, I cannot emphasize enough how bad an idea that is.

The older BIOS is made for the older versions of the hardware, and even a minor version incompatibility with a device in your laptop could transform it into the most expensive paperweight you ever owned.

Only do this if the laptop originally shipped with the early version of the BIOS.

Update:

It seems the above is not a problem on Ubuntu 15.10. On that release, displays and dock work just as they are supposed to. I think this is mainly due to the major release upgrade of the kernel to the 4.* series.

For now the above fixes are still needed for the LTS product.



Update 2:

Distributor ID: Ubuntu
Description: Ubuntu 16.04 LTS
Release: 16.04
Codename: xenial

Everything works on this LTS out of the box. And it occurred to me that Canonical actually ships a Gnome-Shell-only dist now! Hell just froze over, apparently. So forget about the steps to get a clean gnome environment above - just fetch the Ubuntu GNOME distro.