OsmoDevCon 2024: "Introduction to XDP, eBPF and AF_XDP"

I've presented a talk Introduction to XDP, eBPF and AF_XDP as part of the OsmoDevCon 2024 conference on Open Source Mobile Communications.

This talk provides a generic introduction to a set of modern Linux kernel technologies:

  • eBPF (extended Berkeley Packet Filter) is a kind of virtual machine that runs sandboxed programs inside the Linux kernel.

  • XDP (eXpress Data Path) is a framework for eBPF that enables high-performance programmable packet processing in the Linux kernel.

  • AF_XDP is an address family that is optimized for high-performance packet processing. It allows in-kernel XDP eBPF programs to efficiently pass packets to userspace via memory-mapped ring buffers.

The talk provides a high-level overview. It is intended to cover some basics ahead of the later talks on bpftrace and eUPF.

You can find the video recording at https://media.ccc.de/v/osmodevcon2024-204-introduction-to-xdp-ebpf-and-afxdp

OsmoDevCon 2024: "Using bpftrace to analyze osmocom performance"

I've presented a talk Using bpftrace to analyze osmocom performance as part of the OsmoDevCon 2024 conference on Open Source Mobile Communications.

bpftrace is a utility that uses the Linux kernel tracing infrastructure (and eBPF) to provide tracing via mechanisms like uprobes, kprobes, tracepoints, etc.

bpftrace can help us to analyze the performance of [unmodified] Osmocom programs and quickly provide information like, for example:

  • Histogram of time spent in a specific system call

  • Histogram of any argument or return value of any system call
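As an illustration (my own example, not taken from the talk; the process name is just an assumption), a bpftrace one-liner producing a histogram of the time spent in the write() system call of a given process could look like this:

```shell
# histogram of write() syscall latency (in microseconds) of osmo-bsc;
# needs root and a kernel with BPF/tracepoint support
bpftrace -e '
tracepoint:syscalls:sys_enter_write /comm == "osmo-bsc"/ {
    @start[tid] = nsecs;
}
tracepoint:syscalls:sys_exit_write /@start[tid]/ {
    @usecs = hist((nsecs - @start[tid]) / 1000);
    delete(@start[tid]);
}'
```

On Ctrl-C, bpftrace prints the accumulated histogram map, all without modifying or restarting the traced program.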

You can find the video recording at https://media.ccc.de/v/osmodevcon2024-203-using-bpftrace-to-analyze-osmocom-performance

OsmoDevCon 2024: "Anatomy of the eSIM Profile"

I've presented a talk Anatomy of the eSIM Profile as part of the OsmoDevCon 2024 conference on Open Source Mobile Communications.

In the eSIM universe, eSIM profiles are the virtualised content of a classic USIM (possibly with ISIM, CSIM, applets, etc.).

Let's have a look what an eSIM profile is:

  • how is the data structured / organized?

  • what data can be represented in it?

  • how to handle features provided by eUICC, how can the eSIM profile mandate some of them?

  • how does personalization of eSIM profiles work?

There is also hands-on navigation through profiles, based on the pySim.esim.saip module.

You can find the video recording at https://media.ccc.de/v/osmodevcon2024-174-anatomy-of-the-esim-profile

OsmoDevCon 2024: "Detailed workings of OTA for SIM/USIM/eUICC"

I've presented a talk Detailed workings of OTA for SIM/USIM/eUICC as part of the OsmoDevCon 2024 conference on Open Source Mobile Communications.

Everyone knows that OTA (over the air) access to SIM cards has existed for decades, and that somehow authenticated APDUs can be sent via SMS.

But let's look at the OTA architecture in more detail:

  • OTA transport (SCP80) over SMS, USSD, CellBroadcast, CAT-TP, BIP

  • The new SCP81 transport (HTTPS via TLS-PSK)

  • how to address individual applications on the card via their TAR

  • common applications like RFM and RAM

  • custom applications on the card

  • OTA in the world of eUICCs

  • talking to the ECASD

  • talking to the ISD-R

  • talking to the ISD-P/MNO-SD or applications therein

You can find the video recording at https://media.ccc.de/v/osmodevcon2024-175-detailed-workings-of-ota-for-sim-usim-euicc

OsmoDevCon 2024: "GlobalPlatform in USIM and eUICC"

I've presented a talk GlobalPlatform in USIM and eUICC as part of the OsmoDevCon 2024 conference on Open Source Mobile Communications.

The GlobalPlatform Card Specification and its many amendments play a significant role in most real-world USIM/ISIM, and even more so in eUICC.

The talk will try to provide an overview of what GlobalPlatform does in the telecommunications context.

Topics include:

  • security domains

  • key loading

  • card and application life cycle

  • loading and installation of applications

  • Secure Channel Protocols SCP02, SCP03

You can find the video recording at https://media.ccc.de/v/osmodevcon2024-173-globalplatform-in-usim-and-euicc

OsmoDevCon 2024: "High-performance I/O using io_uring via osmo_io"

I've co-presented a talk (together with Andreas Eversberg) High-performance I/O using io_uring via osmo_io as part of the OsmoDevCon 2024 conference on Open Source Mobile Communications.

Traditional socket I/O via read/write/recvfrom/sendto/recvmsg/sendmsg and friends creates a very high system call load. A highly-loaded osmo-bsc spends most of its time in syscall entry and syscall exit.

io_uring is a modern Linux kernel mechanism to avoid this syscall overhead. We have introduced the osmo_io API to libosmocore as a generic abstraction for non-blocking/asynchronous I/O, with a back-end for our classic osmo_fd / poll approach as well as a new back-end for io_uring.

The talk will cover

  • a very basic io_uring introduction

  • a description of the osmo_io API

  • the difficulties porting from osmo_fd to osmo_io

  • status of porting various sub-systems over to osmo_io

You can find the video recording at https://media.ccc.de/v/osmodevcon2024-209-high-performance-i-o-using-iouring-via-osmoio

Gradual migration of IP address/port between servers

I'm a strong proponent of self-hosting all your services, if not on your own hardware then at least on dedicated rented hardware. For IT nerds of my generation, this has been the norm since the early 1990s: if you wanted to run your own webserver/mailserver/... back then, the only way was to self-host.

So over the last 30 years, I've always been running a fleet of machines, some of them my own colocated hardware, and during the past ~18 years also some rented dedicated "root servers". They run a plethora of services, either for my personal stuff (like this blog, or my personal email server), or for the IT services of the open source projects I'm involved in (like osmocom) or the company I co-founded and run (sysmocom).

Every few years there's the need to migrate to new hardware. Either due to power consumption/efficiency, or to increase performance, or to simply avoid aging hardware that may be dying soon.

When upgrading from one [hosted] server to another [hosted] server, there's always the question of how to manage the migration with minimal interruption to services. For very simple services like http/https, it can usually be done entirely within DNS: you reduce the TTL of the records, bring up the service on the new server (with a new IP), make the change in the DNS, and once the TTL of the DNS record has expired in all caches, everybody will access the new server/IP.

However, there are services where the IP address must be retained. SMTP is a prime example: given how spam filtering works, you certainly will not want to give up years, if not decades, of good reputation for your IP address. As a result, you will want to keep the IP address while doing the migration.

If it's a physical machine in colocation or your home, you can of course do all of that rather easily under your own control. You can synchronize the various steps: stopping the services on the old machine, rsync'ing the spool files over to the new one, and then migrating the IP address over to the new machine.

However, if it's a rented "root" server at a company like Hetzner or KVH, then you do not have full control over when exactly the IP address will be migrated over to the new server.

Also, if there are many different services on that same physical machine, running on a variety of different IPv4/IPv6 addresses and ports, it may be difficult to migrate all of them at once. It would be much more practical if individual services could be migrated step by step.

The poor man's approach would be to use port-forwarding / reverse-proxying. In this case, the client establishes a TCP connection to the old IP address on the old server, and a small port-forward proxy accepts that TCP connection, creates a second TCP connection to the new server, and bridges the two together. This approach only works for the most simplistic of services (like web servers), where

  • there are only inbound connections from remote clients (as outbound connections from the new server would originate from the new IP, not the old one), and

  • where the source IP of the client doesn't matter. To the new server, all connections appear to originate from a single source IP (the old server), as the real client addresses are masked.
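A minimal sketch of such a port-forward proxy in asyncio Python (hosts and ports here are placeholders; this only illustrates the bridging, not a production setup):

```python
import asyncio

async def pipe(reader, writer):
    # copy bytes in one direction until EOF, then forward the half-close;
    # a full close() here would kill the other direction too, since the
    # reader and writer of one connection share the same socket
    try:
        while data := await reader.read(4096):
            writer.write(data)
            await writer.drain()
        writer.write_eof()
    except ConnectionError:
        pass

async def bridge(client_reader, client_writer, new_host, new_port):
    # second TCP connection towards the new server; note that the new
    # server only ever sees the old server's address as source IP
    up_reader, up_writer = await asyncio.open_connection(new_host, new_port)
    try:
        await asyncio.gather(pipe(client_reader, up_writer),
                             pipe(up_reader, client_writer))
    finally:
        up_writer.close()
        client_writer.close()

async def start_proxy(listen_port, new_host, new_port):
    # listen on the old server (127.0.0.1 here for demonstration; in
    # practice this would bind to the old service IP on the old server)
    return await asyncio.start_server(
        lambda r, w: bridge(r, w, new_host, new_port),
        "127.0.0.1", listen_port)
```

Exactly as described above, the service on the new server only ever sees the proxy host as the source of all connections, which is why this approach is limited to the simplest services.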

For more sophisticated services (like e-mail/SMTP, again), this is not an option. The SMTP client IP address matters for whitelists/blacklists/relay rules, etc. And of course there are also plenty of outbound SMTP connections which need to originate from the old IP, not the new IP.

So in bed last night [don't ask], I was brainstorming whether fully transparent migration of individual TCP+UDP/IP (IPv4+IPv6) services could be achieved between an old and a new server. In theory it's rather simple, but in practice the IP stack is not really designed for this, and we're breaking a lot of the assumptions and principles of IP networking.

After some playing around earlier today, I was actually able to create a working setup!

It fulfills the following goals / exhibits the following properties:

  • old and new server run concurrently for any amount of time

  • individual IP:port tuples can be migrated from old to new server, as services are migrated step by step

  • fully transparent to any remote peer: Old IP:port of server visible to client

  • fully transparent to the local service: Real client IP:port of client visible to server

  • no constraints on whether or not the old and new IPs are in the same subnet, link-layer, data centre, ...

  • use only stock features of the Linux kernel, no custom software, kernel patches, ...

  • no requirement for controlling a router in front of either old or new server

General Idea

The general idea is to receive and classify incoming packets on the old server, and then selectively tunnel some of them via a GRE tunnel from the old machine to the new machine, where they are decapsulated and passed to local processes on the new server. Any packets generated by the service on the new server (responses to clients, or outbound connections to remote servers) will take the opposite route: they are encapsulated on the new server, passed through that GRE tunnel back to the old server, from where they are sent off to the internet.

That sounds simple in theory, but it poses a number of challenges:

  • packets destined for a local IP address of the old server need to be re-routed/forwarded, not delivered to local sockets. This is easily done with fwmark, multiple routing tables and an ip rule, similar to many other policy routing setups.
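A sketch of that first step on the old server, assuming nftables for the classification (the addresses, interface name, mark value and the SMTP port are all illustrative placeholders; the commands need root):

```shell
# GRE tunnel from the old server towards the new server
ip tunnel add gre-mig mode gre local 192.0.2.1 remote 198.51.100.1 ttl 64
ip link set gre-mig up
sysctl -w net.ipv4.ip_forward=1

# mark inbound packets of the migrated service (here: SMTP, TCP port 25)
# in the prerouting hook, i.e. before the routing decision is taken
nft add table inet mig
nft 'add chain inet mig pre { type filter hook prerouting priority mangle; }'
nft add rule inet mig pre tcp dport 25 meta mark set 0x2a

# policy routing: marked packets are forwarded into the tunnel
# instead of being delivered to local sockets
ip rule add fwmark 0x2a lookup 100
ip route add default dev gre-mig table 100
```

Because the mark is set before the routing decision, the fwmark rule diverts those packets into routing table 100, whose only route points into the GRE tunnel.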


RetroNetCall: "Datex-L, the German CSPDN"

I've presented about Datex-L, the German CSPDN (Circuit Switched Public Data Network) as part of the RetroNetCall talk series on retro-networking technology.

You can find the video recording at https://media.ccc.de/v/retronetcall-20230201-laforge-datex-l-cspdn

I've always been fascinated by "ancient" data communications - both the kind that I personally consciously witnessed from the late 1980s as well as the kind that I never experienced myself (like Datex-L).

I am not an expert in the subject by any means, as I was never involved in its design or implementation, nor did I ever use it. However, given that there's very little public information online about Datex-L and/or other CSPDNs, I thought I could improve the situation by presenting about it.

RetroNetCall: "ISDN B-Channel protocols"

I've presented about ISDN B-Channel protocols (X.75, V.120, V.110, T.70, ...) as part of the RetroNetCall talk series on retro-networking technology.

You can find the video recording at https://media.ccc.de/v/retronetcall-20221207-laforge-isdn-b-channel-protocols

Many people with some kind of telecom background are familiar with the D-channel (signalling) protocols of ISDN, and you can find many publications on that topic. Surprisingly, far fewer publications cover the B-channel protocols used for data transmission, like X.75, V.110, V.120, T.70, ...

Deployment of future community TDMoIP hub

I've mentioned some of my various retronetworking projects in some past blog posts. One of those projects is Osmocom Community TDM over IP (OCTOI). During the past 5 or so months, we have been using a number of GPS-synchronized open source icE1usb devices interconnected by a new, efficient but still transparent TDMoIP protocol in order to run a distributed TDM/PDH network. This network is currently only used to provide ISDN services to retronetworking enthusiasts, but other uses like frame relay have also been validated.

So far, the central hub of this OCTOI network has been operating in the basement of my home, behind a consumer-grade DOCSIS cable modem connection. Given that TDMoIP is relatively sensitive to packet loss, this has been sub-optimal.

Luckily some of my old friends at noris.net have agreed to host a new OCTOI hub free of charge in one of their ultra-reliable co-location data centres. I've already been hosting some other machines there for 20+ years, and noris.net is a good fit given that they were - in their early days as an ISP - the driving force in the early 90s behind one of the Linux kernel ISDN stacks called u-isdn. So after many decades, ISDN returns to them in a very different way.

Side note: In case you're curious, a reconstructed partial release history of the u-isdn code can be found on gitea.osmocom.org

But I digress. So today, there was the installation of this new OCTOI hub setup. It has been prepared for several weeks in advance, and the hub contains two circuit boards designed specifically for this use case. The most difficult challenge was the fact that this data centre has no existing GPS RF distribution, and the rack is ~100m of CAT5 cable (no fiber!) away from the roof. So we faced the challenge of passing the 1PPS (1 pulse per second) signal reliably through several steps of lightning/over-voltage protection into the icE1usb whose internal GPS-DO serves as a grandmaster clock for the TDM network.

The equipment deployed in this installation currently contains:

  • a rather beefy Supermicro 2U server with EPYC 7113P CPU and 4x PCIe, two of which are populated with Digium TE820 cards resulting in a total of 16 E1 ports

  • an icE1usb with RS422 interface board connected via 100m RS422 to an Ericsson GPS03 receiver. There are two layers of over-voltage protection on the RS422 (each with gas discharge tubes and TVS) and two stages of over-voltage protection in the coaxial cable between antenna and GPS receiver.

  • a Livingston Portmaster3 RAS server

  • a Cisco AS5400 RAS server

For more details, see this wiki page and this ticket

Now that the physical deployment has been made, the next steps will be to migrate all the TDMoIP links from the existing user base over to the new hub. We hope the reliability and performance will be much better than behind DOCSIS.

In any case, this new setup for sure has a lot of capacity to connect many more users to this network. At this point we can still only offer E1 PRI interfaces. I expect that at some point during the coming winter the project for remote TDMoIP BRI (S/T, S0-bus) connectivity will become available.


I'd like to thank everyone helping this effort, specifically:

  • Sylvain "tnt" Munaut for his work on the RS422 interface board (+ gateware/firmware)

  • noris.net for sponsoring the co-location

  • sysmocom for sponsoring the EPYC server hardware

Clock sync trouble with Digium cards and timing cables

If you have ever worked with Digium (now part of Sangoma) digital telephony interface cards such as the TE110/410/420/820 (single to octal E1/T1/J1 PRI cards), you will probably have seen that they always have a timing connector, where the timing information can be passed from one card to another.

In PDH/ISDN (or even SDH) networks, it is very important to have a synchronized clock across the network. If the clocks are drifting, there will be underruns or overruns, with associated phase jumps that are particularly dangerous when analog modem calls are transported.

In traditional ISDN use cases, the clock is always provided by the network operator, and any customer/user side equipment is expected to synchronize to that clock.

So this Digium timing cable is needed in applications where you have more PRI lines than a single card supports, but only a subset of your lines (spans) is connected to the public operator. The timing cable is supposed to ensure that the clock recovered on one port from the public operator is used as the transmit bit-clock on all of the other ports, no matter on which card.

Unfortunately this decades-old Digium timing cable approach seems to suffer from some problems.

clock drift between master and slave cards

Once any span of a slave card on the timing bus is fully aligned, the transmit bit clocks of all of its ports appear to be in sync/lock - yay - but unfortunately only at first glance.

When looking at it for more than a few seconds, one can see a slow, continuous drift of the slave bit clocks compared to the master :(

Some initial measurements show that the clock of the slave card of the timing cable is drifting at about 12.5 ppb (parts per billion) when compared against the master clock reference.
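To put 12.5 ppb into perspective, here is my own back-of-the-envelope arithmetic (not from any datasheet) at the 2.048 Mbit/s E1 line rate:

```python
E1_BITRATE = 2_048_000   # E1 line rate in bit/s
DRIFT_PPB = 12.5         # measured clock drift, parts per billion

slip_rate = E1_BITRATE * DRIFT_PPB * 1e-9   # excess bits per second
bits_per_day = slip_rate * 86400            # accumulated over one day
frame_slip_interval = 256 / slip_rate       # one E1 frame is 256 bits

print(f"{slip_rate:.4f} bit/s of slip, ~{bits_per_day:.0f} bits/day")
print(f"one full E1 frame of slip roughly every {frame_slip_interval:.0f} s")
```

So even this tiny drift accumulates to a couple of thousand bit slips per day, i.e. a full frame slip every few hours, which is exactly the kind of phase jump that breaks analog modem calls.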

This is rather disappointing, given that the whole point of a timing cable is to ensure you have one reference clock with all signals locked to it.

The work-around

If you are willing to sacrifice one port (span) of each card, you can work around that slow-clock-drift issue by connecting an external loopback cable. So the master card is configured to use the clock provided by the upstream provider. Its other ports (spans) will transmit at the exact recovered clock rate with no drift. You can use any of those ports to provide the clock reference to a port on the slave card using an external loopback cable.

In this setup, your slave card[s] will have perfect bit clock sync/lock.

It's just rather sad that you need to sacrifice ports to achieve proper clock sync - something that the timing connectors and cables claim to do, but in reality don't achieve, at least not in my setup with the most modern and high-end octal-port PCIe cards (TE820).

Progress on the ITU-T V5 access network front

Almost one year after my post regarding first steps towards a V5 implementation, some friends and I were finally able to visit Wobcom, a small German city carrier, and pick up a lot of decommissioned POTS/ISDN/PDH/SDH equipment, primarily V5 access networks.

This means that a number of retronetworking enthusiasts now have a chance to play with Siemens Fastlink, Nokia EKSOS and DeTeWe ALIAN access networks/multiplexers.

My primary interest is in the Nokia EKSOS, which looks like a rather easy, low-complexity target. As one of the first steps, I took PCB photographs of the various modules/cards in the shelf, took note of the main chip designations and started to search for the related data sheets.

The results can be found in the Osmocom retronetworking wiki, with https://osmocom.org/projects/retronetworking/wiki/Nokia_EKSOS being the main entry page, along with sub-pages about the individual modules/cards.

In short: Unsurprisingly, a lot of Infineon analog and digital ICs for the POTS and ISDN ports, as well as a number of Motorola M68k based QUICC32 microprocessors and several unknown ASICs.

So with V5 hardware at my disposal, I've slowly re-started my efforts to implement the LE (local exchange) side of the V5 protocol stack, with the goal of eventually being able to interface those V5 AN with the Osmocom Community TDM over IP network. Once that is in place, we should also be able to offer real ISDN Uk0 (BRI) and POTS lines at retrocomputing events or hacker camps in the coming years.

Retronetworking at VCFB 2022

I'm happy to announce active participation at the Vintage Computing Festival Berlin 2022 in two ways:

The exhibit will be similar to the exhibit at the retrocomputing village of the last CCC congress (36C3): A digital telephony network with ISDN BRI and POTS lines providing services to a number of laptops with Modems and ISDN terminal adapters.

We plan to demo the following things:

  • analog modem and ISDN dial-up into BBSs
      • text / ANSI interfaces via Telix, Telemate, Terminate
      • RIPterm graphical interfaces

  • analog modem and ISDN dial-up IP/internet

  • ISDN video telephony

The client computers will be contemporary 486/Pentium machines with DOS, Windows 3.11 and OS/2.

First steps towards an ITU-T V5.1 / V5.2 implementation

As some of you may know, I've been collecting "vintage" telecommunications equipment, ranging from analog modems and ISDN adapters to PBXs and even SDH equipment. The goal is to keep this equipment (and related software) alive for demonstration and practical exploration.

Some [incomplete] information can be found at https://osmocom.org/projects/retro-bbs/wiki/

Working with PBXs to simulate the PSTN (ISDN/POTS) network is fine to some extent, but it's of course not the real deal: you only get S0 buses and no actual Uk0 interfaces like the real ISDN lines of the late '80s and '90s, and you have problems with modems not liking the PBX dial tone, etc.

Hence, I've always wanted to get my hands on some more real-world central-office telephone network equipment, and I finally have a source for so-called V5.1/V5.2 access multiplexers. Those are like remote extension boxes for the central office switch (like EWSD or System 12). They aggregate/multiplex a number of analog or ISDN BRI subscriber lines into E1 lines, while not implementing any of the actual call control or ISDN signalling logic. All of that is provided by the actual telephone switch/exchange.

So in order to integrate such access multiplexers in my retronetworking setup, I will have to implement the LE (local exchange) side of the V5.1 and/or V5.2 protocols, as specified in ITU-T G.964 and G.965.

In the limited spare time I have next to my dayjob and various FOSS projects, progress will likely be slow. Nonetheless I started with an implementation now, and I already had a lot of fun learning about more details of those interfaces and their related protocols.

One of the unresolved questions is what kind of software to integrate with once the V5.x part is resolved.

  • lcr would probably be the most ISDN-native approach, but it is mostly unused and quite EOL.

  • Asterisk or FreeSWITCH would of course be obvious candidates, but they are all relatively alien to ISDN, and hence not very transparent once you start to do anything but voice calls (e.g. dialup ISDN data calls in various forms).

  • yate is another potential candidate. It already supports classic SS7 including ISUP, so it would be a good candidate to build an actual ISDN exchange with V5.2 access multiplexers on the customer-facing side (Q.921+Q.931 on it) and SS7/ISUP towards other exchanges.

For now I think yate would be the most promising approach. Time will tell.

The final goal would then be to have a setup [e.g. at a future CCC congress] where we would have SDH add/drop multiplexers in several halls, and V5.x access multiplexers attached to that, connecting analog and ISDN BRI lines from individual participants to a software-defined central exchange. Ideally actually multiple exchanges, so we can show the signaling on the V5.x side, the Q.921/Q.931 side and the SS7/ISUP between the exchanges.

Given that the next CCC congress is not before December 2022, there is a chance to actually implement this before then ;)

Emergency Warnings in the Mobile Network + Cell Broadcast (Part 2)

[this post was originally written in German, as it is targeted at the current German public discourse]

A few additions to my blog post from yesterday.

I use the generic term PWS instead of SMSCB, because strictly speaking SMSCB only exists in the 2G system, and is only one of the layers used there for emergency alerting.

On emergency warning apps

Of course dedicated national German disaster-protection apps are useful too! But they should at most be offered in addition, after basic alerting conforming to the relevant international (and EU) standards via Cell Broadcast / PWS has been implemented first. After all, nobody says: news broadcasts are no longer needed on the radio because we already have them on television. You want to transmit on all available channels, and first use those with the most universal reach and clear technical advantages, before additionally alerting via other channels.

What does PWS look like for me as a user?

There seem to be major misunderstandings about what this ultimately looks like on the phone. That is understandable, as you never see it around here, unless you happen to be in a lab/hobbyist network, e.g. at a CCC event, where the Osmocom project has at some point transmitted such messages.

The PWS (ETWS, CMAS, WEA, KPAS, EU-ALERT, ...) messages are received by the phone and then handled according to configuration and priority. For the USA, WEA mandates that alerts of a certain priority class (e.g. the Presidential Level Alert) are always forcibly displayed and always accompanied by a loud siren-like alarm tone. It is even explicitly forbidden for the user to be able to disable or mute these alerts in any way. So it does not matter whether the phone is currently set to silent, or whether it happens not to be right next to me.

On some devices, the warnings are even read out loud over the speaker by a text2speech engine after the alarm tone sounds. Whether that is a regulatory requirement of one of the national systems, I don't know - in any case I have already seen it in some cases when I transmitted such alerts in private lab networks using Osmocom software.

A few more technical details

  • PWS messages continue to be broadcast even when the cell has lost its network connectivity. So if, for example, the fibre link to the core network is already gone but power is still available, warnings previously delivered from the CBC (Cell Broadcast Centre) to the cell are transmitted autonomously by the cell for their configured validity period. This is another inherent technical advantage that can never be achieved with an app, because an app requires the complete mobile network with all its internal links and the core network, as well as the internet connection from the network operator to the app provider's server, to be fully operational.

  • PWS messages can, at least technically, also be received by phones that are not registered to any network, or that have no SIM inserted. Whether this is required by the standards, and/or whether the respective phone software implements it that way, I do not know and it would have to be verified. Technically it is plausible, similar to placing emergency calls, which is also technically possible in those cases.

On the costs

If - as in an ideal world - providing emergency alerting had been a requirement already at the time the radio spectrum licenses were awarded, all of this would simply have been supported quietly from the start. Nobody would have had to invest extra money, because this minimal technical requirement would already have been part of the operators' tenders for purchasing their equipment. Moreover, we already had Cell Broadcast in all three German networks in the past, i.e. the technology was once there [for entirely different reasons] but was cut away at some point to save costs.

Introducing it retroactively now of course means that nobody planned for it, and that every market participant wants to cash in. The vendors' reaction is roughly: "Oh, you now want more than you specified when you purchased? Fine, let us write you a quote."

Technically, all of this is trivial. I would estimate the complete development of all components for PWS in 2G/3G/4G/5G at a low one-time six-digit amount. And that is the one-time development investment, which is then spread across all devices/countries/networks. Compared to the billions invested in the development and procurement of mobile network technology, that is a joke.

Base station equipment from all relevant vendors naturally supports PWS out of the box. They do not build different equipment for Germany than what is deployed in the UK, NL, RO, US, ... The market is international; the same technology is available everywhere.

Because we are late now, this will of course be exploited from all sides. Every base station vendor will hold out their hand and say this now costs X EUR per cell per year in additional license fees. And the vendors of the central component, the CBC, will also hold out their hands, as is customary in the industry, with hefty annual license fees. And the consultants will all hold out their hands too, because there is once again something to integrate, to test, ... The CBC is not complex technology. If you had it developed once as open source and deployed it in all networks, you would get it practically for free. But that would require actually engaging with the technology, understanding what simple software this is, and taking different procurement routes than just calling your three existing suppliers, who then want to line their pockets.

The public discussion mentions 20-40 million EUR. Those are inflated demands by the market participants, nothing else. But even if you would rather throw the money out of the window than try open source alternatives, this order of magnitude is still vanishingly small compared to the other procurement and operating costs of a mobile network. Not to mention the follow-up costs in the area of search & rescue, personal injury, etc. that could be saved in disasters in the medium term.

Or, to look at it another way: if even economically much weaker Romania can afford this, then the Federal Republic of Germany will surely manage it as well.