Emergency warnings in the mobile network + Cell Broadcast (part 2)

[excuse this German-language post, this is targeted at the current German public discourse]

A few additions to my blog post from yesterday.

I use the generic term PWS instead of SMSCB, because strictly speaking SMSCB only exists in the 2G system, and is merely one layer that is used there for emergency alerting.

On emergency warning apps

Of course special, national German disaster protection apps are useful, too! But they should at most be offered in addition, after basic alerting compliant with the relevant international (and EU) standards has first been implemented via Cell Broadcast / PWS. After all, nobody says: we no longer need news programmes on the radio, because we already have them on television. You want to transmit on all available channels, and first use those with the most universal reach and clear technical advantages, before additionally alerting on other channels as well.

What does PWS look like for me as a user?

There seem to be major misunderstandings about what this ultimately looks like on the phone. That is understandable: around here you never get to see it, unless you happen to be in a lab/hobbyist network, e.g. at a CCC event where the Osmocom project has transmitted such messages.

The PWS (ETWS, CMAS, WEA, KPAS, EU-ALERT, ...) messages are received by the phone and then handled according to configuration and priority. For the USA, WEA prescribes that alerts of a certain priority class (e.g. the Presidential Level Alert) are always forcibly brought to the display and are always accompanied by a loud siren-like alert tone. It is even explicitly forbidden for the user to be able to disable or mute these alerts anywhere. So it does not matter whether the phone is currently set to silent, or whether it happens not to be right next to me.

On some devices, the warnings are even read out loud over the speaker by a text-to-speech engine after the alert tone sounds. Whether that is a regulatory requirement of one of the national systems, I don't know - in any case, I have seen it happen when I transmitted such alerts in private lab networks using Osmocom software.

A few more technical details

  • PWS messages continue to be broadcast even when the cell has lost its connection to the rest of the network. If, for example, the fibre link to the core network is already gone but power is still available, warnings previously delivered from the CBC (Cell Broadcast Centre) to the cell will continue to be transmitted autonomously by the cell for their configured validity period. This is another inherent technical advantage that an app can never achieve, because an app requires the complete mobile network with all its internal links and the core network, plus the Internet connection from the network operator to the app provider's server, to be working end to end.

  • PWS messages can, at least technically, also be received by phones that are not registered in any network at all, or that have no SIM inserted. Whether the standards require this, and/or whether the respective phone software implements it that way, I don't know; that would have to be verified. Technically it is the obvious thing to do, similar to placing emergency calls, which is also technically possible in those cases.

On the costs

If - as in an ideal world - providing emergency alerting had been a requirement at the time the radio spectrum licenses were awarded, all of this would simply and quietly have been supported from the start. Nobody would have had to invest extra money, because this minimal technical requirement would already have been part of the operators' tenders for purchasing their equipment. Moreover, we already had Cell Broadcast in all three German networks in the past, i.e. the technology was once in place [for entirely different reasons] but was cut away at some point to save costs.

Introducing it retroactively now of course means that nobody planned for it, and that every market participant wants to gold-plate the bill. The vendors' reaction is roughly: "Oh, you now want more than you specified back when you made the purchase? Fine, let us write you a quote."

Technically, all of this is trivial. I would estimate the complete development of all components for PWS in 2G/3G/4G/5G at a low one-off six-figure amount. And that is a one-time development investment, which is then spread across all devices/countries/networks. Compared to the billions invested in the development and procurement of mobile network technology, that is a joke.

Of course the equipment, such as base stations, of all relevant manufacturers supports PWS out of the box. They don't build different equipment for Germany than what is deployed in the UK, NL, RO, US, ... The market is international; the same technology is available everywhere.

Because Germany is late now, this will of course be exploited by all sides. Every base station manufacturer will hold out its hand and say that this now costs X EUR per cell per year in additional license fees. The vendors of the central component, the CBC, will likewise hold out their hands, as is customary in the industry, with hefty annual license fees. And the consultants will all hold out their hands too, because once again there is something to integrate, to test, ... The CBC is not complex technology. If you have it developed once as open source and deploy it in all networks, you get it practically for free. But that would require actually engaging with the technology, understanding what simple software this is about, and taking procurement paths other than just calling your three existing suppliers, who then want to line their pockets.

The public discussion mentions 20-40 million EUR. Those are inflated demands by the market participants, nothing else. But even if you would rather throw the money out of the window than try open source alternatives, this order of magnitude is still vanishingly small compared to the other acquisition and operating costs of a mobile network. Not to mention the follow-on costs for search and rescue, personal injuries, etc. that this would save in disasters in the medium term.

Or, to put it another way: if even the economically much weaker Romania can afford this, then the Federal Republic of Germany will surely manage it, too.

Emergency warnings in the mobile network + Cell Broadcast

[excuse this German-language post, this is targeted at the current German public discourse]

Several regions of Germany have suffered devastating floods, and the public is therefore once again discussing the good old question of the adequate means of alerting the population.

It is simply a gigantic tragedy how thoroughly German politics and administration have, for decades now on this issue, slept through all the relevant standards, and then keep attracting attention with technically wrong and completely uninformed public statements.

The topic was already publicly discussed last year, before the current floods, in the context of the so-called WarnTag (national warning day). Here, too, the public authorities offered exclusively false statements, e.g. that Cell Broadcast involves data protection problems. In fact, Cell Broadcast is the only technology in which no feedback from the individual subscriber occurs, where the network does not even know who received the message, or where that reception took place. Just like FM radio.

The fact is that all digital mobile radio standards since GSM/2G, i.e. since 1991, have included the ability to inform all users (of a given geographic region) efficiently, quickly and frugally via so-called broadcast messages. This technology, called Cell Broadcast (or SMSCB) in GSM/2G, differs fundamentally from all other forms of communication in the mobile network, such as calls and conventional SMS (officially SMS-PP). Calls, SMS and also mobile packet data (Internet) are always transmitted individually for each subscriber, on radio resources assigned to that subscriber. These resources are limited. In no mobile network in the world can all subscribers make phone calls at the same time, or receive SMS at the same time.

Cell Broadcast, by contrast, uses - as the name makes unmistakably clear - a broadcast, i.e. one-to-many transmission mechanism. A message is transmitted once, thus occupying only a single shared resource on the air interface, and is then received and decoded simultaneously by all devices within coverage. This is like FM radio or classic terrestrial television.

Cell Broadcast was already used by German network operators in the 1990s. Not for anything vital like emergency signalling, but for such banal things as the list of area codes to which a discounted "roaming local tariff" currently applied. Yes, Vodafone once had such a thing. And O2 for a long time transmitted (for unknown reasons) the GPS coordinates of the respective base station via Cell Broadcast.

In the following (now almost switched-off) mobile generation, 3G, Cell Broadcast was retained under the slightly different name Service Area Broadcast. After all, there are countries with - unlike Germany - functioning and competent regulation of the telecommunications market, and the long-standing legal requirements of such countries force the network operators, and also the operators' equipment vendors, to develop new mobile radio standards in such a way that the legal requirements for alerting the population in an emergency are met.

As part of this standardization, a number of countries within 3GPP (responsible for 2G, 3G, 4G, 5G) have standardized so-called Public Warning Systems (PWS). These include, for example, the Japanese ETWS (Earthquake and Tsunami Warning System), the Korean KPAS (Korean Public Alerting System), the US-American WEA (Wireless Emergency Alerts, formerly known as CMAS), and also EU-ALERT with its national implementations NL-ALERT (Netherlands), UK-ALERT (Great Britain) and RO-ALERT (Romania).

The numerous studies and investigations that led to the design of the above systems and of the international mobile standards prove once more what was obvious to every engineer beforehand anyway: fast alerting of all subscribers (of a region) can only be achieved via a broadcast mechanism. In Japan, the target was to deliver earthquake alerts to the entire affected population within less than 4 seconds. And that is possible with PWS!

The relevant PWS standards in 2G/3G/4G/5G offer plenty of useful functions:

  • Notification in specific geographic regions

  • Interoperable interfaces, so that network elements from different manufacturers can talk to each other

  • Configurable notification texts, not only in the primary national language, but also in several other languages, which are then displayed automatically according to the phone's language setting

  • Different severity levels of alerts

  • Delivery not only via broadcast, but also via unicast to every subscriber who is currently in a phone call and whose phone, for technical reasons, would not receive the broadcast during that time

  • Distinction between the repetition of a transmission with unchanged content and a transmission with changed content

So for many years there have been international standards for how all mobile radio technologies deployed today can be used for fast, efficient and low-overhead alerting of the population.

Numerous countries have been using these systems for a long time. The US-American WEA has, by its own account, been used more than 61,000 times since 2012 to warn people of severe weather or other disasters.

Even within the EU, the EU-ALERT system has been specified, which is largely identical to the American WEA and builds on the same techniques.

And then there are countries like Germany, which for just as many years have failed to use laws or regulations to

  1. obligate the network operators to operate these broadcast technologies in their networks

  2. obligate the network operators to provide standardized interfaces to authorities such as civil protection / disaster management agencies, so that these can send warnings on their own across all network operators

  3. obligate the device manufacturers, e.g. via regulations under the FTEG (the German law on radio equipment and telecommunications terminal equipment), to display the PWS messages

In the USA, the system supposedly far more devoted to the free market and capitalism, all of this is possible for the regulator, the FCC. In Germany, with its social market economy, it is apparently impossible to regulate the market accordingly. Such regulation only gets done in Germany for truly important topics, such as enforcing the provision of interfaces for telecommunications interception. For irrelevant topics like disaster management and alerting the population, there is obviously no need to regulate the market. If the network operators don't want to offer PWS, then that is simply God-given, and nothing can be done about it.

If anyone wants to take a closer technical look at SMSCB and PWS: in 2019, we in the Osmocom project created an open source implementation of the complete system, from the BTS via the BSC to the CBC, including the protocols in between such as CBSP. This was kindly funded by the Prototype Fund with EUR 35k. Yes, that is how cheaply the necessary technology can be developed, at least for a single mobile generation...

So a self-operated lab mobile network based on open source software achieves more in terms of standards-compliant emergency alerting than German politics, administration and network operators manage together.

In Germany, we have people who know these standards inside out, who have even contributed to them. We have developers who have implemented these standards. But we don't manage to actually put any of this to practical use ourselves - we prefer to leave that to other countries. We would rather let the entire siren-based disaster alerting infrastructure rot, impose no requirements on the network operators, and develop strange apps that users have to install separately, that by design do not scale, and that failed when tested (WarnTag).

What a stellar performance for Germany as a high-tech location.

36C3 Talks on SIM card technology / Mitel DECT

At 36C3 in December 2019 I had the pleasure of presenting twice: one full talk about SIM card technology from A to Z, and another talk, together with eventphone team members, about security issues in the Mitel SIP-DECT system.

The SIM card talk was surprisingly successful, both in terms of a full audience on-site and in terms of the number of viewers of the recordings on media.ccc.de. SIM cards are a rather niche topic in the wider IT industry, and my talk did not cover any vulnerabilities or the like. Also, there was nothing novel in the talk: SIM cards have been around for decades, and not much has changed (except maybe eSIM and TLS) in recent years.

In any case, I'm of course happy that it was well received. So far I've received lots of positive feedback.

As I've been working [more than] full time in cellular technology for almost 15 years now, it's sometimes hard to imagine what kind of topics people might be interested in. If you have a suggestion for a subject within my area of expertise that you'd like me to talk about, please don't hesitate to reach out.

The Mitel DECT talk also went quite well. I covered about 10 minutes of technical details regarding the reverse engineering of the firmware and the communication protocols of the device. Thanks again to Dieter Spaar for helping with that. He is and remains the best reverse engineer I have met, and it's always a privilege to collaborate on any project. It was of course also nice to see what kind of useful (and/or fun) things the eventphone team have built on top of the knowledge that was gained by protocol-level reverse engineering.

If you want more low-level technical detail than the 36C3 talk provides, I recommend my earlier talk at OsmoDevCon 2019 about Aastra/Mitel DECT base station dissection.

If only I had more time, I would love to work on improving the lack of Free / Open Source Software related to the DECT protocol family. There's the abandoned deDECTed.org, and the equally abandoned dect.osmocom.org project. The former only deals with the lowest levels of DECT (PHY/MAC). The latter is to a large extent implemented as part of an ancient version of the Linux kernel (I would say this should all run in userspace, like we run all of GSM/UMTS/LTE in userspace today).

If anyone wants to help out, I still think working on the DECT DLC and NWK dissectors for wireshark is the best way to start. It will create a tool that's important for anyone working with the DECT protocols, and it will be more or less a requirement for development and debugging should anyone ever go further in terms of implementing those protocols on either the PP or FP side. You can find my humble beginnings of the related dissectors in the laforge/dect branch of osmocom.org/wireshark.git.
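If the blank page is what keeps you from starting: the skeleton of a minimal wireshark dissector is quite small. Below is a rough sketch of the general structure (written from memory; check the wireshark developer documentation for the current API details before relying on it):

#include <epan/packet.h>

static int proto_dect_dlc = -1;

/* sketch: minimal dissector body that just claims the frame */
static int dissect_dect_dlc(tvbuff_t *tvb, packet_info *pinfo,
                            proto_tree *tree, void *data)
{
    (void) data;
    col_set_str(pinfo->cinfo, COL_PROTOCOL, "DECT-DLC");
    proto_tree_add_item(tree, proto_dect_dlc, tvb, 0, -1, ENC_NA);
    return tvb_captured_length(tvb);
}

void proto_register_dect_dlc(void)
{
    proto_dect_dlc = proto_register_protocol("DECT Data Link Control",
                                             "DECT DLC", "dect_dlc");
    register_dissector("dect_dlc", dissect_dect_dlc, proto_dect_dlc);
}

The real work is of course in adding header fields and decoding the actual DLC/NWK structures on top of such a skeleton.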

Retronetworking / BBS-Revival setup at #36C3

After many years of being involved in various projects at the annual Chaos Communication Congress (starting with the audio/video recording team at 15C3), I've finally also left the GSM team, i.e. the people who operate (Osmocom based) cellular networks at CCC events.

The CCC Camp in August 2019 was slightly different: instead of helping with an Osmocom based 2G/3G network, I decided to put up a nextepc-based LTE network and make it use the 2G/3G HLR (osmo-hlr) via a newly-written DIAMETER-to-GSUP proxy. After lots of hacking on that proxy and fixing various bugs in nextepc (see my laforge/cccamp2019 branch here), this worked rather well.

For 36C3 in December 2019 I had something different in mind: it was supposed to be the first actual demo of the retronetworking / bbs-revival setup I've been working on during the past months. This setup in turn is sort of a continuation of my talk at 34C3 two years ago: BBSs and early Internet access in the 1990s.

Rather than just talking about it, I wanted to be able to show people the real thing: actual client PCs running (mainly) DOS, dialling into BBSs over analog modems and phone lines as well as ISDN-TAs and ISDN lines, together with early Internet access using SLIP and PPP over the same dial-up lines.

The actual setup can be seen at the Dialup Network In A Box wiki page, together with the 36C3 specific wiki page.

Interestingly, most of the time went into two topics:

  1. A 1U rack-mount system with four E1 ports. I had lots of old Sangoma Quad-E1 cards in PCI form-factor available, but wanted to use a PC with a more modern/faster CPU than those old first-generation Atom boxes that still had actual PCI slots. Those new mainboards don't have PCI but PCIe. There are plenty of PCIe-to-PCI bridges and associated products on the market, which worked fine with virtually any PCI card I could find, but not with the Sangoma AFT PCI cards I wanted to use. Seconds to minutes after boot, the PCI-PCIe bridges would always forget their secondary bus number. I suspected excessive power consumption or glitches, but couldn't find anything wrong when looking at the power rails with a scope. Adding additional capacitors on every rail also didn't change anything. The !RESET line is also clean. It remains a mystery. I then finally decided to buy a new (expensive) DAHDI 4-port E1 PCIe card to move ahead. What a waste of money if you have tons of other E1 cards around.

  2. Various trouble with FreeSWITCH. All I wanted/needed was some simple emulation of a PSTN/ISDN switch, operating in NT mode towards both the Livingston Portmaster 3 RAS and the Auerswald PBX. I would have used lcr, but it supports neither DAHDI nor Sangoma, only mISDN - and there are no mISDN cards with four E1 ports :( So I decided to go for FreeSWITCH, knowing it has had a long history of ISDN/PRI/E1 support. However, it was a big disappointment. First, there were some segfaults due to a classic pointer deref before NULL-check (see the sketch after this list). Next, libpri and FreeSWITCH have different ideas of how channel (timeslot) numbers are structured, causing any call attempt to fail. Finally, FreeSWITCH decided to blindly overwrite any bearer capabilities IE with 'speech', even if an ISDN dialup call (unrestricted digital information) was being handled. The FreeSWITCH documentation contains tons of references to channel input/output variables related to that - but it turns out their libpri integration neither sets any of those, nor uses them on the outbound side.
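As an aside, the "pointer deref before NULL-check" pattern mentioned in item 2 above is a classic worth spelling out (a made-up illustration, not the actual FreeSWITCH code):

#include <string.h>

struct msg { char *text; };

/* illustration: the check comes too late to protect the dereference */
static int msg_len(const struct msg *m)
{
    size_t len = strlen(m->text);  /* crashes here if m is NULL */

    if (!m)                        /* dead check: never reached with m == NULL */
        return -1;
    return (int) len;
}

Static analyzers typically flag this pattern, since the check documents that m may be NULL while the dereference above it assumes it never is.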

Anyway, after a lot more time than expected, the setup was operational, and we could establish modem calls as well as ISDN dialup calls between the clients and the Portmaster3. The PM3 in turn was configured to forward the dialup sessions via telnet to a variety of BBSs around the internet. Some still (or again) exist on the public internet. Some others were explicitly (re)created by 36C3 participants for this very BBS-Revival setup.

My personal favorite was finding ACiD Underworld 2.0, one of the few BBSs out there today that support RIPscrip, a protocol used to render vector graphics, text and even mouse-clickable UI via a modem connection to a DOS/EGA client program called RIPterm. So we had one RIPterm installation on Novell DOS7 that was used just for dialling into ACiD Underworld 2.0.

Among other things we also tested interoperability between the 1980s CCC DIY acoustic coupler "Datenklo" and the Portmaster, and confirmed that Windows 2000 could establish multilink PPP not only over two B-channels (128 kbps) but also over 3 B-channels (192 kbps).

Running this setup for four days meant 36C3 was a quite different experience than many previous CCC congresses:

  • I was less stressed as I wasn't involved in operating a service that many people would want to use (GSM).

  • I got engaged with many more people with whom I would normally not have entered a conversation, as they were watching the exhibits/demos and we got to chat about the technology involved and the 'good old days'.

So all in all, despite the last minute FreeSWITCH-patching, it was a much more relaxing and rewarding experience for me.

Special thanks to

  • Sylvain "tnt" Munaut for spending a lot of time with me at the retronetworking assembly. Having an E1 interface around gave him a good opportunity to continue development on his ICE40-based bi-directional E1 wiretap. He also helped with setup and teardown.

  • miaoski and evanslify for reviving two of their old BBSs from Taiwan so we could use them at this event

The retronetworking setup is intended to operate at many other future events, whether CCC related, Vintage Computing or otherwise. It's relatively small and portable.

I'm very much looking forward to the next incarnations. Until then, I will hopefully have more software configured and operational, including a variety of local BBSs (running in VMs/containers), together with the respective networking (FTN, ZConnect, ...) and point software like CrossPoint.

If you are interested in helping out with this project: I'm very much looking for help. It doesn't matter if you're old and have had BBS experience back in the day, or if you're a younger person who wants to learn about communications history. Any help is appreciated. Please reach out to the bbs-revival@lists.osmocom.org mailing list, or directly to me via e-mail.

Software Freedom Podcast #3 about Free Software mobile phone communication

Recently I had the pleasure of being part of the 3rd incarnation of a new podcast series by the Free Software Foundation Europe: The Software Freedom Podcast.

In episode 3, Matthias and Bonnie of the FSFE are interviewing me about various high-level topics related to [the lack of] Free Software in cellular telephony, as well as some of the projects that I was involved in (Openmoko, Osmocom).

We've also touched on the current mainstream / political debate about Huawei and 5G networks, where my position can only be summarized as: it doesn't matter much in which country the related proprietary software is being developed. What we need is Free / Open Source software that can be publicly audited, and we need a method by which the operator can ensure that a given version of that FOSS software is actually executing on their equipment.

Thanks to the FSFE for covering such underdeveloped areas of Free Software, and for using their podcast to distribute related information and ideas.

Sometimes software development is a struggle

I'm currently working on the firmware for a new project, an 8-slot smart card reader. I will share more about the architecture and design ideas behind this project soon, but today I'll simply write about how hard it sometimes is to actually get software development done. Seemingly trivial things suddenly take ages. I guess everyone writing code knows this, but today I felt like I had to share this story.

Chapter 1 - Introduction

As I'm quite convinced of test-driven development these days, I don't want to simply write firmware code that can only execute on the target. Instead, I'm working on a USB CCID (USB Class for Smart Card readers) stack which is hardware-independent and can also run entirely in userspace on a Linux device with a USB gadget (device) controller. This way it's much easier to instrument, trace, introspect and test the code base, and tests with actual target board hardware are limited to those functions provided by the board.

So the current architecture for development of the CCID implementation looks like this:

  • Implement the USB CCID device using FunctionFS (I did this some months ago; developing it likewise took much more time than expected, and maybe I'll find time to expand on that)

  • Attach this USB gadget to a virtual USB bus + host controller using the Linux kernel dummy_hcd module

  • Talk to a dumb phoenix style serial SIM card reader attached to a USB UART, which is connected to an actual SIM card (or any smart card, for that matter)

By using a "stupid" UART based smart card reader, I am very close to the target environment on a Cortex-M microcontroller, where I also have to talk to a UART and hence implement all the beauty of ISO 7816-3. Hence, the test / mock / development environment is as close as possible to the target environment.

So I implemented the various bits and pieces and ended up at a point where I wanted to test. And I'm not getting any response from the UART / SIM card at all. I check all my code, add lots of debugging, play around with various RTS / DTR / ... handshake settings (which sometimes control power) - to no avail.

In the end, after many hours of trial and error, I inserted a different SIM card and finally got an ATR from the card. In more than 20 years of working with smart cards and SIM cards, this is the first time I've actually seen a SIM card die in front of me, with no response whatsoever from the card.

Chapter 2 - Linux is broken

Anyway, the next step was to get the T=0 protocol of ISO 7816-3 going. Since there is only one I/O line between SIM card and reader for both directions, the protocol is a half-duplex protocol. This is unlike "normal" RS232-style UART communication, where you have a separate Rx and Tx line.

On the hardware side, this is most often implemented by simply connecting both the Rx and Tx line of the UART to the SIM I/O pin. This in turn means that you're always getting an echo back for every byte you write.

One could discard such bytes, but I'm targeting a microcontroller that should serve eight cards in parallel, preferably at baud rates up to ~1 Mbps, so having to read and discard all those echoed bytes seems like a big waste of resources.

The obvious solution around that is to disable the receiver inside the UART before you start transmitting, and to re-enable it after you're done transmitting. This is typically easy, as most hardware UARTs provide register bits to enable the transmitter and the receiver independently.

But since I'm working in Linux userspace in my development environment: how do I approximate this kind of behavior? At least the older readers of this blog will remember something called the CREAD flag of termios. Clearing that flag disables the receiver. Back in the 1990s, I did tons of work with serial ports, and I remembered there was such a flag.
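For reference, this is roughly what the userspace equivalent looks like with the standard termios API (a minimal sketch, with error handling reduced to the essentials):

#include <termios.h>

/* sketch: enable/disable the receiver on an already-open tty fd */
static int uart_rx_enable(int fd, int enable)
{
    struct termios tio;

    if (tcgetattr(fd, &tio) < 0)
        return -1;
    if (enable)
        tio.c_cflag |= CREAD;
    else
        tio.c_cflag &= ~CREAD;  /* receiver off: incoming bytes should be discarded */
    return tcsetattr(fd, TCSANOW, &tio);
}

As the next paragraphs show, this compiles and reports success just fine - the problem lies elsewhere.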

So I implement my userspace UART backend and somehow it simply doesn't want to work. Again, of course, I assume I must be doing something wrong. I'm using strace, I'm single-stepping through code - to no avail.

In the end, it turns out that I've just found a bug in the Linux kernel, one that appears to have been there at least ever since the git history of linux-2.6.git started. Almost all USB serial device drivers do not implement CREAD, and there is no software fallback implemented in the core serial (or usb-serial) handling that would discard any received bytes inside the kernel if CREAD is cleared. Interestingly, the non-USB serial drivers for classic UARTs attached to local bus, PCI, ... seem to support it.

The problem would only be half as bad if the syscall to clear CREAD actually failed with an error. But no, it simply returns success, while bytes continue to be received from the UART/tty :/

So that's the second big surprise of this weekend...

Chapter 3 - Again a broken card?

So I settle for implementing the 'receive as many characters as you wrote' work-around. Once that is done, I continue to test the code. And what happens? My state machine (implemented using osmo-fsm, of course) for reading the ATR (code found here) somehow never wants to complete. The last byte of the ATR is always missing. How can that be?

Well, guess what: the second SIM card I used sends a broken, non-spec-compliant ATR, whose header indicates that 9 historical bytes are present, while in reality the card only sends 8 bytes.
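For context: the reader learns the expected number of historical bytes from the ATR itself, namely from the low nibble of the T0 byte (ISO 7816-3). A minimal sketch:

#include <stdint.h>

/* sketch: what a reader derives from the T0 byte of an ISO 7816-3 ATR */
static unsigned int atr_num_hist_bytes(uint8_t t0)
{
    return t0 & 0x0f;  /* low nibble K: number of historical bytes */
}

static unsigned int atr_y1(uint8_t t0)
{
    return t0 >> 4;    /* high nibble Y1: presence bits for TA1..TD1 */
}

So a card announcing K = 9 while sending only 8 historical bytes leaves a spec-conforming state machine waiting for a byte that never arrives.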

Of course every reader has a timeout at that point, but that timeout was not yet implemented in my code, and I also wasn't expecting to hit that timeout.

So after using yet another SIM card (now a sysmoUSIM-SJS1, not sure why I didn't even start with that one), it suddenly works.

After a weekend of detours, each of which I would not have assumed at all before, I finally have code that can obtain the ATR and exchange T=0 TPDUs with cards. Of course I could have had that very easily if I wanted (we do have code in pySim for this, e.g.) but not in the architecture that is as close as it gets to the firmware environment of the microcontroller of my target board.

Fernvale Kits - Lack of Interest - Discount

Back in December 2014 at 31C3, bunnie and xobs presented their exciting Fernvale project, showing how they reverse engineered parts of the MT6260 ARM SoC, which also happens to contain a Mediatek GSM baseband.

Thousands (at least hundreds) of people have seen that talk live. To date, 2506 people (or AIs?) have watched the recordings on youtube, 4859 more people on media.ccc.de.

Given that Fernvale was the closest you could get to having a hackable baseband processor / phone chip, I expected at least as much interest in this project as we received four years earlier with OsmocomBB.

As a result, in early 2015, sysmocom decided to order 50 units of Fernvale DVT2 evaluation kits from bunnie, and to offer them in the sysmocom webshop to ensure the wider community would be able to get the boards they need for research into widely available, inexpensive 2G baseband chips.

This decision was made purely for the perceived benefit of the community: Make an exciting project available for anyone. With that kind of complexity and component density, it's unlikely anyone would ever solder a board themselves. So somebody has to build some and make it available. The mark-up sysmocom put on top of bunnie's manufacturing cost was super minimal, only covering customs/import/shipping fees to Germany, as well as minimal overhead for packing/picking and accounting.

Now it's almost four years after bunnie + xobs' presentation, and of those 50 Fernvale boards, we still have 34 (!) units in stock. That means only 16 people on this planet ever had an interest in playing with what I thought at the time was one of the most exciting pieces of equipment around.

So we lost somewhere on the order of close to 3600 EUR in dead inventory, for something that never was supposed to be a business anyway. That sucks, but I still think it was worth it.

In order to minimize the losses, sysmocom has now discounted the boards and reduced the price from EUR 110 to EUR 58.82 (excluding VAT). I have very limited hope that this will increase the amount of interest in this project, but well, you've got to try :)

In case you're thinking "oh, let's wait some more time, until they hand them out for free", let me tell you: If money is the issue that prevents you from playing with a Fernvale, then please contact me with the details about what you'd want to do with it, and we can see about providing them for free or at substantially reduced cost.

In the worst case, it was ~ 3600 EUR we could have invested in implementing more Osmocom software, which is sad. But would I do it again if I saw a very exciting project? Definitely!

The lesson learned here is probably that even a technically very exciting project backed by world-renowned hackers like bunnie doesn't mean that anyone will actually ever do anything with it, unless they get everything handed to them on a silver platter, i.e. all the software/reversing work is already done for them by others. And that actually makes me much more sad than the loss of those ~ 3600 EUR in sysmocom's balance sheet.

I also feel even more sorry for bunnie + xobs. They've invested time, money and passion into a project that nobody really seemed to want to get involved and/or take further. ("nobody" is meant figuratively. I know there were/are some enthusiasts who did pick up. I'm talking about the big picture). My condolences to bunnie + xobs!

Wireshark dissector for 3GPP CBSP - traces wanted!

I recently was reading 3GPP TS 48.049, the specification for CBSP (the Cell Broadcast Service Protocol), which is the protocol between the BSC (Base Station Controller) and the CBC (Cell Broadcast Centre). It is how, according to the spec, the CBC instructs the BSCs to broadcast the various cell broadcast messages to their respective geographic scope.

While OsmoBTS and OsmoBSC do have support for SMSCB on the CBCH, there is no real interface in OsmoBSC yet by which an external application could instruct it to send cell broadcasts. The only existing interface is a VTY command, which is nice for testing and development, but hardly a scalable solution.

So I was reading up on the specs, discovered CBSP and thought one good way to get familiar with it is to write a wireshark dissector for it. You can find the result at https://code.wireshark.org/review/#/c/29745/

Now my main problem is that as usual there appear to be no open source implementations of this protocol, so I cannot generate any traces myself. More surprising is that it's not even possible to find any real-world CBSP traces out there. So I'm facing a chicken-and-egg problem. I can only test / verify my wireshark dissector if I find some traces.

So if you happen to have done any work on cell broadcast in 2G networks and have a CBSP trace around (or can generate one): please send it to me, thanks!

Alternatively, you can of course also use the patch linked above, build your own wireshark from scratch, test it and provide feedback. Thanks in either case!

Still alive, just not blogging

It's been months without any update to this blog, and I feel sad about that. Nothing particular has happened to me, everything is proceeding as usual.

At the Osmocom project we've been making great progress on a variety of fronts, including

  • 3GPP LCLS (Local Call, Local Switch)

  • Inter-BSC hand-over in osmo-bsc

  • Load-based hand-over in osmo-bsc

  • reintroducing SCCPlite compatibility to the new BSC code in osmo-bsc / libosmo-sigtran

  • finishing the first release of the SIMtrace2 firmware

  • extending test coverage on all fronts, particularly in our TTCN-3 test suites

  • tons of fixes to the osmo-bts measurement processing / reporting

  • higher precision time of arrival reporting in osmo-bts

  • migrating osmocom.org services to new, faster servers

At sysmocom, next to the Osmocom topics above, we've

  • made the sysmoQMOD remote SIM firmware much more robust and reliable

  • after months of delays, SIMtrace2 hardware kits are finally available again

  • created automatic testing of pySim-prog and sysmo-usim-util

  • extended our osmo-gsm-tester based automatic testing setup to include multi-TRX nanoBTS setups

In terms of other topics,

  • my wife and I went on a three-week motorbike tour across the Alps in July

  • I've done tons of servicing (brake piston fittings, brake tubes, fuel line, fixing rust/paint, replacing clutch cable, choke cable, transmission chain, replacing several rusted/worn-out needle bearings, and much more) on my 22-year-old BMW F650ST to prepare it for many more years to come. As some type-specific spare parts (mostly plastic parts) are becoming rarer, it was best to take care of replacements sooner rather than later

  • some servicing/repairs to my 19-year-old Audi A4 car (which passed the German mandatory inspection without any deficiency at the first attempt!)

  • some servicing of my Yamaha FZ6

  • repaired my Fairphone 2 by swapping the microphone module (mike was mute)

  • I've re-vamped a lot of the physical/hardware infrastructure for gnumonks.org and other sites I run, which was triggered by having to move racks

Re-launching openmoko USB Product ID and Ethernet OUI registry

Some time after Openmoko went out of business, they made their USB Vendor ID and IEEE OUI (Ethernet MAC address prefix) available to Open Source Hardware / FOSS projects.

After maintaining that for some years myself, I was unable to find time to continue the work, and some time ago I handed it over to two volunteers. However, as things go, those volunteers also stopped responding to PID / OUI requests, and we're now launching the third attempt at continuing this service.

As the openmoko.org wiki will soon be moved into an archive of static web pages only, we're also moving the list of allocated PID and OUIs into a git repository.

Since git.openmoko.org is also about to be decommissioned, the repository is now at https://github.com/openmoko/openmoko-usb-oui, next to all the archived openmoko.org repository mirrors.

This also means that in addition to sending an e-mail application for getting an allocation in those ranges, you can now send a pull-request via github.

Thanks to cuvoodoo for volunteering to maintain the Openmoko USB PID and IEEE OUI allocations from now on!

openmoko.org archive down due to datacenter issues

Unfortunately, since about 11:30 am CEST on May 24, openmoko.org has been down due to some power outage related issues at Hetzner, the hosting company at which openmoko.org has been hosted for more than a decade now.

The problem seems to have caused quite a lot of fall-out to many servers (Hetzner hosts some 200k machines; not sure how many were affected, though), and Hetzner is anything but verbose when it comes to actually explaining what the issue is.

All they have published is https://www.hetzner-status.de/en.html#8842 - which is rather tight-lipped about some power grid issues. But then, what do you have UPSs for if not for "a strong voltage reduction in the local power grid"?

The openmoko.org archive machine is running in Hetzner DC10, by the way. This is where they've had the largest number of tickets.

In any case, we'll have to wait for them to resolve their tickets. They appear to be working day and night on that.

I have a number of machines hosted at Hetzner, and I'm actually rather happy that none of the more important systems were affected that long. Some machines simply lost their uplink connectivity for some minutes, while some others were rebooted (power outage). The openmoko.org archive is the only machine that didn't automatically boot after the outage, maybe the power supply needs replacement.

In any case, I hope the service will be back up again soon.

btw: Guess who's been paying for hosting costs ever since Openmoko, Inc. shut down? Yes, yours truly. It was OK for something like 9 years, but the plan now is to recursively pull the dynamic content through some cache, which can then be made permanent. The resulting static archive can then be moved to some VM somewhere, without requiring a dedicated root server. That should reduce the costs to almost nothing.

OsmoCon 2018 CfP closes on 2018-05-30

One of the difficulties with OsmoCon2017 last year was that almost nobody submitted talks / discussions within the deadline, early enough to allow for proper planning.

This led to a situation where the sysmocom team had to come up with a schedule/agenda on their own. Then, well after the CfP deadline, people squeezed in talks, making the overall schedule too full.

It is up to you to avoid this situation again in 2018 at OsmoCon2018 by submitting your talk RIGHT NOW. We will be very strict regarding late submissions. So if you would like to shape the Agenda of OsmoCon 2018, this is your chance. Please use it.

We will have to create a schedule soon, as [almost] nobody will register for a conference unless the schedule is known. If there's not sufficient contribution in terms of CfP response from the wider community, don't complain later that 90% of the talks are from sysmocom team members and cover only cellular network infrastructure topics.

You have been warned. Please make your CfP submission in time at https://pretalx.sysmocom.de/osmocon2018/cfp before the CfP deadline on 2018-05-30 23:59 (Europe/Berlin)

Mailing List hosting for FOSS Projects

Recently I've encountered several occasions in which a FOSS project would have been interested in some reliable, independent mailing list hosting for their project communication.

I was surprised how difficult it was to find anyone running such a service.

From the user / FOSS project point of view, the criteria that I would have are:

  • operated by some respected entity that is unlikely to turn hostile, discontinue the service or go out of business altogether

  • free of any type of advertisements (we all know how annoying those are)

  • cares about privacy, i.e. doesn't sell the subscriber lists or non-public archives

  • use FOSS to run the service itself, such as GNU mailman, listserv, ezmlm, ...

  • an easy path to migrate away to another service (or self-hosting) as they grow or their requirements change. A simple mail forward to that new address for the related addresses is typically sufficient for that

If you think mailing lists serve no purpose these days anyway, and everyone is on github: please have a look at the many thousands of FOSS project mailing lists out there still in use. Not everyone wants to introduce a dependency on the whim of a proprietary software-as-a-service provider.

I never had this problem, as I always hosted my own mailman instance on lists.gnumonks.org anyway, and all the entities that I've been involved in (whether non-profit or businesses) had their own mailing list hosts. From franken.de in the 1990s to netfilter.org, openmoko.org and now osmocom.org, we all pride ourselves in self-hosting.

But then there are plenty of smaller projects that have neither the skills nor the funding available. So they go to yahoo groups or some other service that will then hold them hostage: without a way to switch their list archives from private to public, without downloadable archives, and without forwarding in case they want to move away :(

Of course the larger FOSS projects also have their own list servers, starting from vger.kernel.org to Linux distributions like Debian GNU/Linux. But what if your FOSS project is not specifically Linux related?

The sort-of obvious candidates that I found all don't really fit:

Now don't get me wrong, I'm of course not expecting that there are commercial entities operating free-of-charge list hosting services where you pay neither with money, nor with your data, nor by becoming a spam receiver.

But still, in the wider context of the Free Software community, I'm seriously surprised that none of the various not-for-profit / non-commercial foundations or associations is offering a public mailing list hosting service for FOSS projects.

One can of course always approach any of the entities above and ask for a mailing list, even though it's strictly speaking off-topic for them. But who will do that, if they have to ask uninvited for a favor?

I think there's something missing. I don't have the time to set up a related service, but I would certainly want to contribute in terms of funding in case any existing FOSS related legal entity wanted to expand. If you already have a legal entity, abuse contacts, a team of sysadmins, then it's only half the required effort.

OsmoDevCon 2018 retrospective

One week ago, the annual Osmocom developer meeting (OsmoDevCon 2018) concluded after four long and intense days with old and new friends (schedule can be seen here).

It was already the 7th incarnation of OsmoDevCon, and I have to say that it's really great to see the core Osmocom community come together every year, to share their work and experience with their fellow hackers.

Ever since the beginning we've had the tradition that we look beyond our own projects. In 2012, David Burgess was presenting on OpenBTS. In 2016, Ismael Gomez presented about srsUE + srsLTE, and this year we've had the pleasure of having Sukchan Kim coming all the way from Korea to talk to us about his nextepc project (a FOSS implementation of the Evolved Packet Core, the 4G core network).

What has also been a regular "entertainment" part in recent years are the field trip reports to various [former] satellite/SIGINT/... sites by Dimitri Stolnikov.

All in all, the event has become at least as much about the people as about the technology. It's a community of like-minded people who to some extent still work on joint projects, but often work independently and scratch their own itch - whether open source mobile comms related or not.

After some criticism last year, the so-called "unstructured" part of OsmoDevCon has received more time again this year, allowing for exchange among the participants irrespective of any formal / scheduled talk or discussion topic.

In 2018, with the help of c3voc, for the first time ever, we've recorded most of the presentations on video. The results are still in the process of being cut, but are starting to appear at https://media.ccc.de/c/osmodevcon2018.

If you want to join a future OsmoDevCon in person: Make sure you start contributing to any of the many Osmocom member projects now to become eligible. We need you!

Now the sad part is that it will take one entire year until we'll reconvene. May the Osmocom Developer community live long and prosper. I want to meet you guys for many more years at OsmoDevCon!

There is of course the user-oriented OsmoCon 2018 in October, but that's a much larger event with a different audience.

Nevertheless, I'm very much looking forward to that, too.

The OsmoCon 2018 Call for Participation is still running. Please consider submitting talks if you have anything open source mobile communications related to share!

osmo-fl2k - Using USB-VGA dongles as SDR transmitter

Yesterday, during OsmoDevCon 2018, Steve Markgraf released osmo-fl2k, a new Osmocom member project which enables the use of FL2000 USB-VGA adapters as ultra-low-cost SDR transmitters.

How does it work?

A major part of any VGA card has always been a rather fast DAC which generates the 8-bit analog values for (each) red, green and blue at the pixel clock. Given that fast DACs were very rare/expensive (and still are to some extent), the idea of (ab)using the VGA DAC to transmit radio has been followed by many earlier, mostly proof-of-concept projects, such as Tempest for Eliza in 2001.

However, with osmo-fl2k, for the first time it was possible to completely disable the horizontal and vertical blanking, resulting in a continuous stream of pixels (samples). Furthermore, as the supported devices have no frame buffer memory, the samples are streamed directly from host RAM.

As most USB-VGA adapters appear to have no low-pass filters on their DAC outputs, it is possible to use any of the harmonics to transmit signals at much higher frequencies than normally possible within the baseband of the (max) 157 mega-samples per second that can be achieved.
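To illustrate the underlying math (the values below are illustrative assumptions, not measurements of an actual FL2000 device): an unfiltered DAC running at sample rate fs emits images of a baseband signal f at n*fs ± f, and those images are what can be exploited here.

#include <stdio.h>

/* sketch: image frequencies of an unfiltered DAC output at n*fs +/- f */
int main(void)
{
    double fs = 130e6;  /* assumed DAC sample rate in Hz */
    double f  = 20e6;   /* assumed baseband signal frequency in Hz */

    for (int n = 1; n <= 3; n++)
        printf("image pair %d: %.0f MHz / %.0f MHz\n",
               n, (n * fs - f) / 1e6, (n * fs + f) / 1e6);
    return 0;
}

With these assumed values, the first image pair already lands at 110 and 150 MHz - well above the baseband itself.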

osmo-fl2k and rtl-sdr

Steve is the creator of the earlier, complementary rtl-sdr software, which since 2012 transforms USB DVB adapters into general-purpose SDR receivers.

Today, six years later, it is hard to think of where SDR would be without rtl-sdr. Reducing the entry cost of SDR receivers nearly down to zero has done a lot for democratization of SDR technology.

There is hence a big chance that his osmo-fl2k project will attain similar popularity. Having an SDR transmitter for as little as USD 5 is an amazing proposition.

free riders

Please keep in mind that Steve has done rtl-sdr just for fun, to scratch his own itch and for the "hack value". He chose to share his work with the wider public, in source code, under a free software license. He's a very humble person, he doesn't need to stand in the limelight.

Many other people since have built a business around rtl-sdr. They have grabbed domains with his project name, etc. They are now earning money based on what he has done and shared selflessly, without ever contributing back to the pioneering developers who brought this to all of us in the first place.

So, do we want to bet if history repeats itself? How long will it take for vendors showing up online advertising the USB VGA dongles as "SDR transmitter", possibly even with a surcharge? How long will it take for them to include Steve's software without giving proper attribution? How long until they will violate the GNU GPL by not providing the complete corresponding source code to derivative versions they create?

If you want to thank Steve for his amazing work

  • reach out to him personally

  • contribute to his work, e.g.

  • help to maintain it

  • package it for distributions

  • send patches (via osmocom-sdr mailing list)

  • register an osmocom.org account and update the wiki with more information

And last, but not least, carry on the spirit of "hack value" and democratization of software defined radio.

Thank you, Steve! After rtl-sdr and osmo-fl2k, it's hard to guess what will come next :)

udtrace - Unix domain socket tracing

When developing applications that exchange data over sockets, every so often you'd like to analyze exactly what kind of data is exchanged over the socket.

For TCP/UDP/SCTP/DCCP or other IP-based sockets, this is rather easy by means of libpcap and tools like tcpdump, tshark or wireshark. However, for unix domain sockets, unfortunately no such general capture/tracing infrastructure exists in the Linux kernel.

Interestingly, even after searching for quite a bit I couldn't find any existing tools for this. This is surprising, as unix domain sockets are used by a variety of programs, from sql servers to bind8 ndc all the way to the systemctl tool to manage systemd.

In the absence of any kernel support, the two technologies I can think of to implement this are systemtap and an LD_PRELOAD wrapper.

However, I couldn't find an example for using either of those two to get traces of unix domain socket communications.

Ok, so I get to write my own. My first idea hence was to implement something based on top of systemtap, the Linux kernel tracing framework. Unfortunately, systemtap was broken in Debian unstable (which I have used for decades) at the time, so I went back to the good old LD_PRELOAD shim library / wrapper approach.

The result is called udtrace and can be found at

git clone git://git.gnumonks.org/udtrace

or alternatively via its github mirror.

Below is a copy+paste of its README file. Let's hope this tool is useful to other developers, too:

udtrace - Unix Domain socket tracing

This is a LD_PRELOAD wrapper library which can be used to trace the data sent and/or received via unix domain sockets.

Unlike IP based communication that can be captured/traced with pcap programs like tcpdump or wireshark, there is no similar mechanism available for unix domain sockets.

This LD_PRELOAD library intercepts the C library function calls of dynamically linked programs. It will detect all file descriptors representing unix domain sockets and will then print traces of all data sent/received via the socket.
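The general shape of this interception technique (a simplified sketch, not the actual udtrace source) looks like this:

#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/un.h>

/* sketch: wrap connect(), note unix domain sockets, pass everything through */
static int (*real_connect)(int, const struct sockaddr *, socklen_t);

int connect(int fd, const struct sockaddr *addr, socklen_t addrlen)
{
    if (!real_connect)
        real_connect = dlsym(RTLD_NEXT, "connect");

    if (addr && addr->sa_family == AF_UNIX) {
        const struct sockaddr_un *sun = (const struct sockaddr_un *) addr;
        fprintf(stderr, ">>> UDTRACE: connect(%d, \"%s\")\n", fd, sun->sun_path);
        /* a real implementation would record fd in its tracking table here */
    }
    return real_connect(fd, addr, addrlen);
}

Compiled as a shared object (e.g. gcc -shared -fPIC -o libwrap.so wrap.c -ldl) and activated via LD_PRELOAD, this wrapper is called instead of the libc function; read, write, sendmsg etc. are wrapped in the same fashion.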

Usage

Simply build libudtrace.so using the make command, and then start your to-be-traced program with

LD_PRELOAD=libudtrace.so

e.g.

LD_PRELOAD=libudtrace.so systemctl status

which will produce output like this:

>>> UDTRACE: Unix Domain Socket Trace initialized (TITAN support DISABLED)
>>> UDTRACE: Adding FD 4
>>> UDTRACE: connect(4, "/run/dbus/system_bus_socket")
4 sendmsg W 00415554482045585445524e414c20
4 sendmsg W 3331333033303330
4 sendmsg W 0d0a4e45474f54494154455f554e49585f46440d0a424547494e0d0a
[...]

Output Format

Currently, udtrace will produce the following output:

When an FD for a unix domain socket is created:

>>> UDTRACE: Adding FD 8

When an FD for a unix domain socket is closed:

>>> UDTRACE: Removing FD 8

When an FD for a unix domain socket is bound or connected:

>>> UDTRACE: connect(9, "/tmp/mncc")

When data is read from the socket:

9 read R 00040000050000004403000008000000680000001c0300002c03000000000000

When data is written to the socket:

9 write W 00040000050000004403000008000000680000001c0300002c03000000000000

Where
  • 9 is the file descriptor on which the event happened

  • read/write is the name of the syscall, could e.g. also be sendmsg / readv / etc.

  • R|W is Read / Write (from the process point of view)

  • followed by a hex-dump of the raw data. Only data successfully written (or read) will be printed, not the entire buffer passed to the syscall. The rationale is to only print data that was actually sent to or received from the socket (see the sketch below).
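A sketch of how such a trace line can be produced (illustrative only, not the actual udtrace code):

#include <stdio.h>
#include <sys/types.h>

/* sketch: dump only the `ret` bytes the syscall actually transferred */
static void ud_dump(int fd, const char *syscall_name, char rw,
                    const unsigned char *buf, ssize_t ret)
{
    if (ret <= 0)
        return;
    printf("%d %s %c ", fd, syscall_name, rw);
    for (ssize_t i = 0; i < ret; i++)
        printf("%02x", buf[i]);
    putchar('\n');
}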

TITAN decoder support

Getting hex-dumps is nice and fine, but normally one wants to have a more detailed decode of the data that is being passed on the socket.

For TCP based protocols, there is wireshark. But most protocols on unix domain sockets don't follow inter-operable / public standards, so even if one was to pass the traces into wireshark somehow, there would be no decoder.

In the Osmocom project, we already had some type definitions and decoders for our protocols written in the TTCN-3 programming language, using Eclipse TITAN. In order to build those decoders for MNCC and PCUIF, please use

make ENABLE_TITAN=1

when building the code.

Please note that this introduces a run-time dependency to libttcn3-dynamic.so, which is (at least on Debian GNU/Linux) not installed in a default library search path, so you will have to use something like:

LD_LIBRARY_PATH=/usr/lib/titan LD_PRELOAD=libudtrace.so systemctl status

Report from the Geniatech vs. McHardy GPL violation court hearing

Today, I took some time off to attend the appeal hearing in a GPL infringement dispute between former netfilter colleague Patrick McHardy and Geniatech Europe.

I am not in any way legally involved in the lawsuit on either the plaintiff or the defendant side. However, as a fellow (former) Linux kernel developer myself, and a long-term Free Software community member who strongly believes in the copyleft model, I of course am very interested in this case.

History of the Case

This case is about GPL infringements in consumer electronics devices based on a GNU/Linux operating system, including the Linux kernel and, at least in some devices, netfilter/iptables. The specific devices in question are a series of satellite TV receivers built by Geniatech, a Shenzhen (China) based company, which is represented in Europe by the Germany-based Geniatech Europe GmbH.

The Geniatech Europe CEO has openly admitted (out of court) that they had some GPL incompliance in the past, and that there was failure on their part that needed to be fixed. However, he was not willing to accept an overly wide claim in the preliminary injunction against his company.

The history of the case is that at some point in July 2017, Patrick McHardy made a test purchase of a Geniatech Europe product, and found it infringing the GNU General Public License v2. Apparently no source code (and/or written offer) had been provided alongside the binary - a straight-forward violation of the license terms and hence a violation of copyright. The plaintiff then asked the regional court of Cologne to issue a preliminary injunction against the defendant, which was granted on September 8th, 2017.

In terms of legal procedure, in Germany, when a plaintiff applies for a preliminary injunction, it is immediately granted by the court after a brief review of the filing, without previously hearing the defendant in an oral hearing. If the defendant (like in this case) wishes to appeal the preliminary injunction, it files an appeal, which then results in an oral hearing. This is what happened, after which the district court of Cologne (Landgericht Koeln) on October 20, 2017 issued ruling 14 O 188/17, partially upholding the injunction.

All in all, nothing particularly unusual about this. There is no dispute about a copyright infringement having existed, and this generally grants any of the copyright holders the right to have the infringing party cease and desist from any further infringement.

However, this injunction has a very wide scope: it states that the defendant was to cease and desist from ever publishing, selling, or offering for download any version of Linux (unless compliant with the license). It furthermore asked the defendant to cease and desist

  • from putting hyperlinks on their website to any version of Linux

  • from asking users to download any version of Linux

unless the conditions of the GPL are met, particularly the clauses related to providing the complete and corresponding source code.

The appeals case at OLG Cologne

The defendant now escalated this to the next higher court, the higher regional court of Cologne (OLG Koeln), asking to withdraw the earlier ruling of the lower court, i.e. removing the injunction with its current scope.

The first very positive surprise at the hearing was the depth in which the OLG court had studied the subject matter of the dispute prior to the hearing. Of the many GPL related court cases that I have witnessed so far, it was by far the most precise analysis of how Linux kernel development works, and this despite the more than 1000 pages of filings that the parties had submitted to the court by this point.

Just to give you some examples:

  • the court understood that Linux was created by Linus Torvalds in 1991 and released under the GPL to facilitate open and collaborative development

  • the court recognized that there is no co-authorship / joint authorship (German: Miturheber) in the Linux kernel as a whole, as it was not a group of people planning+developing a given program together, but a program that was released by Linus Torvalds and has since been edited by more than 15,000 developers without any "grand joint plan", but rather in successive iterations. This situation constitutes "editing authorship" (German: Bearbeiterurheber).

  • the court further recognized that being listed as "head of the netfilter core team" or a "subsystem maintainer" doesn't necessarily mean that one is contributing copyrightable works. Reviewing thousands of patches doesn't mean you own copyright on them, drawing an analogy to an editorial office at a publisher.

  • the court understood there are plenty of Linux versions that may not even contain any of Patrick McHardy's code (such as older versions)

After about 35 minutes of the presiding judge explaining the court's understanding of the case (and how kernel development works), he went on to summarize the court's internal deliberation prior to the hearing.

In this summary, the presiding judge stated very clearly that they believe there is some merit to the arguments of the defendant, and that they would be inclined towards a ruling favorable to the defendant based on their current understanding of the case.

He cited the following main reasons:

  • The Linux kernel development model does not support the claim of Patrick McHardy having co-authored Linux. As such, he is only an editing author (Bearbeiterurheber), and not a co-author. Nevertheless, even an editing author has the right to ask for cease and desist, but only regarding those portions that he authored/edited, and not the entire Linux kernel.

  • The plaintiff did not sufficiently show what exactly his contributions were and how they themselves constituted copyrightable works.

  • The plaintiff did not substantiate what copyrightable contributions he has made outside of netfilter/iptables. His mere listing as general networking subsystem maintainer does not clarify what his copyrightable contributions were

  • The plaintiff being a member of the netfilter core team or even the head of the core team still doesn't support the claim of being a co-author, as netfilter had substantially existed since 1999, three years before Patrick's first contribution to netfilter, and five years before he joined the core team in 2004.

So all in all, it was clear that the court also thought the ruling on all of Linux was too far-reaching.

The court suggested that it might be better to have regular main proceedings, in which expert witnesses can be called and real evidence has to be provided, as opposed to the constraints of the preliminary procedure applied so far.

Some other details that were mentioned somewhere during the hearing:

  • Patrick McHardy apparently unilaterally terminated the license to his works in an e-mail dated 26th of July 2017 towards the defendant. According to the defendant (and general legal opinion, including my own position), this is in turn a violation of the GPLv2, as it only allowed the plaintiff to create and publish modified versions of Linux under the obligation that he license his works under GPLv2 to any third party, including the defendant. The defendant believes this is an abuse of his rights (German: Rechtsmissbraeuchlich).

  • sworn affidavits of senior kernel developer Greg Kroah-Hartman and current netfilter maintainer Pablo Neira were presented in support of some of the defendant's claims. The contents of those are unfortunately not public, nor are the contents of the sworn affidavits presented by the plaintiff.

  • The defendant has made substantiated claims in his filings that Patrick McHardy performed his enforcement activities not with the primary motivation of achieving license compliance, but as a method to generate monetary gain. Such claims include that McHardy has acted in more than 38 cases, in at least one of which he has requested a contractual penalty of 1.8 million EUR. The total amount of monies received as contractual penalties was quoted as over 2 million EUR to this point. Please note that those are claims made by the defendant, which were merely reproduced by the court. The court has not assessed their validity. However, the presiding judge explicitly stated that he had received a phone call about this case from a lawyer known to him personally, who confirmed that large contractual penalties are being paid in other related cases.

  • One argument by the plaintiff seems to center around being listed as a general kernel networking maintainer until 2017 (despite his latest patches being from 2015, and those were netfilter only)

Withdrawal by Patrick McHardy

At some point, the court hearing was temporarily suspended to provide the legal representation of the plaintiff with the opportunity to have a phone call with the plaintiff to decide if they would want to continue with their request to uphold the preliminary injunction. After a few minutes, the hearing was resumed, with the plaintiff withdrawing their request to uphold the injunction.

As a result, the injunction is now withdrawn, and the plaintiff has to bear all legal costs (court fees, lawyers' costs on both sides).

Personal Opinion

For me, this is all of course a difficult topic. With my history of being the first to enforce the GNU GPLv2 in (equally German) court, it is unsurprising that I am in favor of license enforcement being performed by copyright holders.

I believe individual developers who have contributed to the Linux kernel should have the right to enforce the license, if needed. It is important to have distributed copyright, and to avoid a situation where only one (possibly industry friendly) entity would be able to take [legal] action.

I'm not arguing for a "too soft" approach. It's almost 15 years since the first court cases on license violations on (embedded) Linux, and the fact that the problem still exists today clearly shows the industry is very far from having solved a seemingly rather simple problem.

On the other hand, such activities must always be oriented to compliance, and compliance only. Collecting huge amounts of contractual penalties is questionable. And if it was necessary to collect such huge amounts to motivate large corporations to be compliant, then this must be done in the open, with the community knowing about it, and the proceeds of such contractual penalties must be donated to free software related entities to prove that personal financial gain is not a motivation.

The rumors of Patrick performing GPL enforcement for personal financial gain have been around for years. It was initially very hard for me to believe. But as more and more about this became known, and as Patrick refused all contact requests from his former netfilter team-mates as well as the wider kernel community, it became hard to avoid drawing the related conclusions.

We do need enforcement, both out of court and in court. But we need it to happen out of the closet, with the community in the picture, and without financial gain to individuals. The "principles of community oriented enforcement" of the Software Freedom Conservancy as well as the more recent (but much less substantial) kernel enforcement statement represent the most sane and fair approach for how we as a community should deal with license violations.

So am I happy with the outcome? Not entirely. It's good that an over-reaching injunction was removed. But then, a lot of money and effort was wasted on this, without any verdict/ruling. It would have been IMHO better to have a court ruling published, in which the injunction is substantially reduced in scope (e.g. only about netfilter, or specific versions of the kernel, or specific products, not about placing hyperlinks, etc.). It would also have been useful to have some of the other arguments end up in a written ruling of a court, rather than more or less "evaporating" in the spoken word of the hearing today, without advancing legal precedent.

Lessons learned for the developer community

  • In the absence of detailed knowledge on computer programming, legal folks tend to look at "metadata" more, as this is what they can understand.

  • It matters who has which title and when. Should somebody not be an active maintainer, make sure he's not listed as such.

  • If somebody ceases to be a maintainer or developer of a project, remove him or her from the respective lists immediately, not just several years later.

  • Copyright statements do matter. Make sure you don't merge any patches adding copyright statements without being sure they are actually valid.

Lessons learned for the IT industry

  • There may be people doing GPL enforcement for not-so-noble motives

  • Defending yourself against claims in court can very well be worth it, as opposed to simply settling out of court (presumably for some money). The Telefonica case in 2016 has shown this, as has this current Geniatech case. The legal system can work, if you give it a chance.

  • Nevertheless, if you have violated the license, and one of the copyright holders makes a properly substantiated claim, you still will get injunctions granted against you (and rightfully so). This was just not done in this case (not properly substantiated, scope of injunction too wide/coarse).

Dear Patrick

For years, your former netfilter colleagues and friends wanted to have a conversation with you. You have not returned our invitation so far. Please do reach out to us. We won't bite, but we want to share our views with you, and show you what implications your actions have not only on Linux, but also particularly on the personal and professional lives of the very developers that you worked hand-in-hand with for a decade. It's your decision what you do with that information afterwards, but please do give us a chance to talk. We would greatly appreciate if you'd take up that invitation for such a conversation. Thanks.

34C3 and its Osmocom GSM/UMTS network

At the 34th annual Chaos Communication Congress, a team of Osmocom folks continued the many years old tradition of operating an experimental Osmocom based GSM network at the event. Though I originally started that tradition, I'm not involved in installation and/or operation of that network; all the credit goes to Lynxis, neels, tsaitgaist and the larger team of volunteers surrounding them. My involvement was only to answer the occasional technical question and to look at bugs that showed up in the software during operation, and if possible fix them on-site.

34C3 marks two significant changes in terms of its cellular network:

  • the new post-NITB Osmocom stack was used, with OsmoBSC, OsmoMSC and OsmoHLR

  • both a GSM/GPRS network (on 1800 MHz) and, for the first time, a UMTS network (in the 850 MHz band) were operated

The good news is: The team did great work building this network from scratch, in a new venue, and without relying on people that have significant experience in network operation. Definitely, the team was considerably larger and more distributed than at the time when I was still running that network.

The bad news is: There was a seemingly endless number of bugs that were discovered while operating this network. Some shortcomings were known before, but the extent and number of bugs uncovered all across the stack was quite devastating to me. Sure, at some point from day 2 onwards we had a network that provided [some level of] service, and as far as I've heard, some ~ 23k calls were switched over it. But that was after more than two days of debugging + bug fixing, and we still saw unexplained behavior and crashes later on.

This was such a big surprise, as we have put a lot of effort into testing over the last years. This starts from the osmo-gsm-tester software and continuously running test setup, and continues with the osmo-ttcn3-hacks integration tests that mainly I wrote during the last few months. Both we and some of our users have also (successfully!) performed interoperability testing with other vendors' implementations such as MSCs. And last, but not least, the individual Osmocom developers had been using the new post-NITB stack on their personal machines.

So what does this mean?

  • I'm sorry about the sub-standard state of the software and the resulting problems we've experienced in the 34C3 network. The extent of problems surprised me (and I presume everyone else involved).

  • I'm grateful that we've had the opportunity to discover all those bugs, thanks to the GSM team at 34C3, to Deutsche Telekom for donating 3 ARFCNs from their spectrum, and to the German regulatory authority Bundesnetzagentur for providing the experimental license in the 850 MHz spectrum.

  • We need to have even more focus on automatic testing than we had so far. None of the components should be without exhaustive test coverage on at least the most common transactions, including all their failure modes (such as timeouts, rejects, ...)

My preferred method of integration testing has been to use TTCN-3 and Eclipse TITAN to emulate all the interfaces surrounding a single one of the Osmocom programs (like OsmoBSC) and then test both valid and invalid transactions. For the BSC, this means emulating MS+BTS on Abis; emulating the MSC on A; emulating the MGW, as well as the CTRL and VTY interfaces.

I currently see the following areas in biggest need of integration testing:

  • OsmoHLR (which needs a GSUP implementation in TTCN-3, which I've created on the spot at 34C3) where we e.g. discovered that updates to the subscriber via VTY/CTRL would surprisingly not result in an InsertSubscriberData to VLR+SGSN

  • OsmoMSC, particularly when used with external MNCC handlers, which was so far blocked by the lack of an MNCC implementation in TTCN-3, which I've been working on both on-site and after returning home.

  • user plane testing for OsmoMGW and other components. We currently only test the control plane (MGCP), but not the actual user plane e.g. on the RTP side between the elements

  • UMTS related testing on OsmoHNBGW, OsmoMSC and OsmoSGSN. We currently have no automatic testing at all in these areas.

Even before 34C3 and the above-mentioned experiences, I concluded that for 2018 we will pursue a test-driven development approach for all new features added by the sysmocom team to the Osmocom code base. The experience with the many issues at 34C3 has just confirmed that approach. In parallel, we will have to improve test coverage on the existing code base, as outlined above. The biggest challenge will of course be to convince our paying customers of this approach, but I see very little alternative if we want to ensure production quality of our cellular stack.

So here we come: 2018, The year of testing.

Osmocom Review 2017

As 2017 has just concluded, let's have a look at the major events and improvements in the Osmocom Cellular Infrastructure projects (i.e. those projects dealing with building protocol stacks and network elements for mobile network infrastructure).

I've prepared a detailed year 2017 summary at the osmocom.org website, but let me write a bit about the most note-worthy topics here.

NITB Split

Once upon a time, we implemented everything needed to operate a GSM network inside a single process called OsmoNITB. Those days are now gone, and we have separate OsmoBSC, OsmoMSC, OsmoHLR, OsmoSTP processes, which use interfaces that are interoperable with non-Osmocom implementations (which is what some of our users require).

This change is certainly the most significant change in the close-to-10-year history of the project. However, we have tried to make it as non-intrusive as possible, by using default point codes and IP addresses which will make the individual processes magically talk to each other if installed on a single machine.

We've also released an OsmoNITB Migration Guide, as well as our usual set of user manuals, in order to help our users.

We'll continue to improve the user experience, to re-introduce some of the features lost in the split, such as the ability to attach names to the subscribers.

Testing

We have osmo-gsm-tester together with the two physical setups at the sysmocom office, which continuously run the latest Osmocom components and test an entire matrix of different BTSs, software configurations and modems. However, this tests at super low load, and it tests only signalling so far, not user plane yet. Hence, coverage is limited.

We also have unit tests as part of the 'make check' process, Jenkins based build verification before merging any patches, as well as integration tests for some of the network elements in TTCN-3. This is much more than we had until 2016, but still by far not enough, as we have just seen in the fall-out from the sub-optimal 34C3 event network.

OsmoCon

2017 also marks the year where we've for the first time organized a user-oriented event. It was a huge success, and we will for sure have another OsmoCon incarnation in 2018 (most likely in May or June). It will not be back-to-back with the developer conference OsmoDevCon this time.

SIGTRAN stack

We have a new SIGTRAN stack with SUA, M3UA and SCCP as well as OsmoSTP. This had been lacking for a long time.

OsmoGGSN

We have converted OpenGGSN into OsmoGGSN, a true member of the Osmocom family, thereby deprecating OpenGGSN, which we had earlier adopted and maintained.

On the Linux Kernel Enforcement Statement

I'm late with covering this here, but work overload is taking its toll on my ability to blog.

On October 16th, key Linux kernel developers have released and announced the Linux Kernel Community Enforcement Statement.

In its actual text, those key kernel developers cover

  • compliance with the reciprocal sharing obligations of GPLv2 is critical and mandatory

  • acknowledgement of the right to enforce

  • expression of interest to ensure that enforcement actions are conducted in a manner beneficial to the larger community

  • a method to provide reinstatement of rights after ceasing a license violation (see below)

  • that legal action is a last resort

  • that after resolving any non-compliance, the formerly incompliant user is welcome to the community

I wholeheartedly agree with those. This should be no surprise as I've been one of the initiators and signatories of the earlier statement of the netfilter project on GPL enforcement.

On the reinstatement of rights

The enforcement statement then specifically expresses the view of the signatories on the specific aspect of license termination. Particularly in the US, there is a strong opinion among legal scholars that if the rights under the GPLv2 are terminated due to non-compliance, the infringing entity needs an explicit reinstatement of rights from the copyright holder. The enforcement statement now basically states that the signatories believe the rights should automatically be reinstated if the license violation ceases within 30 days of being notified of the license violation.

To people like me living in the European (and particularly German) legal framework, this has very little to no implications. It has been the major legal position that any user, even an infringing one, can automatically obtain a new license as soon as he no longer violates it. He just (really or notionally) obtains a new copy of the source code, at which time he again gets a new license from the copyright holders, as long as he fulfills the license conditions.

So my personal opinion, as a non-lawyer active in GPL compliance, is that the reinstatement statement changes little to nothing in the jurisdiction that I operate in. It merely expresses the signatories' intent and interest in a similar approach in other jurisdictions.

SFLC sues SFC over trademark infringement

As the Software Freedom Conservancy (SFC) has publicly disclosed on their website, it appears that the Software Freedom Law Center (SFLC) has filed a trademark infringement lawsuit against SFC.

SFLC launched SFC in 2006, and has helped and endorsed SFC in the past.

This lawsuit is hard to believe. What has this community come to, if its various members - who all used to be respected equally - start filing lawsuits against each other?

It's of course not known what kind of negotiations might have happened out-of-court before an actual lawsuit has been filed. Nevertheless, one would have hoped that people are able to talk to each other, and that the mutual respect for working at different aspects and with possibly slightly different strategies would have resulted in a less confrontational approach to resolving any dispute.

To me, this story just looks like there can only be losers on all sides, by far not just limited to the two entities in question.

On lwn.net, some people, including high-ranking members of the FOSS community, have started to spread conspiracy theories as to whether there's any secret scheming behind the scenes, particularly from the Linux Foundation towards SFLC to cause trouble for the SFC and their possibly-not-overly-enjoyed-by-everyone enforcement activities.

I think this is complete rubbish. Neither have I ever had the impression that the LF is completely opposed to license enforcement to begin with, nor do I have remotely enough fantasy to see them engage in such malicious scheming.

What motivates SFLC and/or Eben to attack their former offspring is, however, inexplicable to the bystander. One hopes there is no connection to his departure from the FSF about one year ago, where he had served as general counsel for more than two decades.

Obtaining the local IP address of an unbound UDP socket

Sometimes one finds an interesting problem and is surprised that there isn't a multitude of blog posts, stackoverflow answers or the like about it.

A (I think) not so uncommon problem when working with datagram sockets is that you may want to know the local IP address that the OS/kernel chooses when sending a packet to a given destination.

With an unbound UDP socket, you basically send and receive packets to/from any number of peers from a single socket. When sending a packet to destination Y, you simply pass the destination address/port into the sendto() socket function, and the OS/kernel will figure out which of its local IP addresses will be used for reaching this particular destination.

If you're a dumb host with a single default router, then the answer to that question is simple. But in any reasonably non-trivial use case, your host will have a variety of physical and/or virtual network devices with any number of addresses on them.

Why would you want to know that address? Because maybe you need to encode it as part of a packet payload. In our current use case, that is OsmoMGW, implementing the IETF MGCP Media Gateway Control Protocol.

So what can you do? You can actually create a new "trial" socket, not bind it to any specific local address/port, but connect() it to the destination of your IP packets. Then you do a getsockname(), which will give you the local address/port the kernel has selected for this socket. And that's exactly the answer to your question. You can now close the "trial" socket and have learned which local IP address the kernel would use if you were to send a packet to that destination.
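As a minimal sketch of that trick (IPv4 only, with a made-up example destination and minimal error handling):

/* trial_socket.c - learn which local address the kernel would pick
 * for packets towards a given destination */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
	/* 192.0.2.1:4711 is a made-up example destination (TEST-NET-1) */
	struct sockaddr_in dest = { .sin_family = AF_INET,
				    .sin_port = htons(4711) };
	struct sockaddr_in local;
	socklen_t len = sizeof(local);
	char buf[INET_ADDRSTRLEN];

	inet_pton(AF_INET, "192.0.2.1", &dest.sin_addr);

	/* the "trial" socket: never used to actually send anything */
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	/* connect() on a UDP socket sends no packet; it only makes the
	 * kernel perform the routing decision for this destination */
	connect(fd, (const struct sockaddr *) &dest, sizeof(dest));

	/* getsockname() now reveals the local address the kernel chose */
	getsockname(fd, (struct sockaddr *) &local, &len);
	printf("local address: %s\n",
	       inet_ntop(AF_INET, &local.sin_addr, buf, sizeof(buf)));

	close(fd);	/* done; the trial socket can be discarded */
	return 0;
}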

At least on Linux, this works. While getsockname() is standard BSD sockets API, I'm not sure how portable it is to use it on a socket that has not been explicitly bound by a prior call to bind().

Invited keynote + TTCN-3 talk at netdevconf 2.2 in Seoul

It was a big surprise that I've recently been invited to give a keynote on netfilter history at netdevconf 2.2.

First of all, I wouldn't have expected netfilter to be that relevant next to all the other [core] networking topics at netdevconf. Secondly, I've not been doing any work on netfilter for about a decade now, so my memory is a bit rusty by now ;)

Speaking of Rusty: Timing wise there is apparently a nice coincidence that I'll be able to meet up with him in Berlin later this month, i.e. hopefully we can spend some time reminiscing about old times and see what kind of useful input he has for the keynote.

I'm also asking my former colleagues and successors in the netfilter project to share with me any noteworthy events or anecdotes, particularly also covering the time after my retirement from the core team. So if you have something that you believe shouldn't be missing from a keynote on netfilter project history: Please reach out to me by e-mail ASAP and let me know about it.

To try to fend off the elder[ly] statesmen image that goes along with being invited to give keynotes about the history of projects you were working on a long time ago, I also submitted an actual technical talk: TTCN-3 and Eclipse Titan for testing protocol stacks, in which I'll cover my recent journey into TTCN-3 and TITAN land, and how I think those tools can help us in the Linux [kernel] networking community to productively produce tests for the various protocols.

As usual for netdevconf, there are plenty of other exciting talks in the schedule.

I'm very much looking forward to both visiting Seoul again, as well as meeting lots of the excellent people involved in the Linux networking subsystems. See ya!

Ten years Openmoko Neo1973 release anniversary dinner

As I noted earlier this year, 2017 marks the tenth anniversary of shipping the first Openmoko phone, the Neo1973.

On this occasion, a number of the key people managed to gather for an anniversary dinner in Taipei. Thanks to everyone who could make it; it was very good to see them together again. Sadly, by far not everyone could attend. You have been missed!

The award for the most crazy attendee of the meeting goes out to my friend Milosch, who has actually flown from his home in the UK to Taiwan, only to meet up with old friends and attend the anniversary dinner.

You can see some pictures in Milosch's related tweet.

On Vacation

In case you're wondering about the lack of activity not only on this blog but also in git repositories, mailing lists and the like: I've been on vacation since September 13. It's my usual "one month in Taiwan" routine, during which I spend some time in Taipei, but also take several long motorbike tours around mostly rural Taiwan.

You can find the occasional snapshot in my twitter feed, such as the pictures here and there.