I've been experimenting a lot with ffmpeg and ffserver over the last couple of days. The fact that ffmpeg is so sparsely documented is a pity, but not exactly a problem for someone experienced with free software and C development (use the source, Luke).
However, the ffserver program seems to be horribly broken in a number of ways. Regardless of the configuration, it regularly segfaults, glibc complains about double frees, and valgrind and Electric Fence report numerous errors.
All the information you can find by browsing the mailing list archives is that it has apparently been broken for a number of years. Maybe I'll spend some time on it and fix it at least partially. So I spent about two days familiarizing myself with the source of libavformat, libavcodec, ffmpeg and ffserver. It's not exactly easy to understand, but I think I now have a good grasp of what's going on where.
Another fundamental shortcoming of ffmpeg seems to be that it cannot feed the output of one codec into multiple output files. Say I want to encode some MPEG-2 video and AC-3 audio, write the result to a .vob file, and at the same time send it as a transport stream over the network. The only way to achieve this right now is to encode the input data twice, which I cannot afford due to CPU limitations.
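For the record, much later ffmpeg releases grew a "tee" muxer that addresses exactly this use case: one encoding pass, several simultaneously written outputs. A sketch of what that looks like, assuming a recent ffmpeg build; the input file name and the destination address are placeholders:

```shell
# One encoding pass, two outputs: a local MPEG-PS (.vob) file and
# an MPEG transport stream sent over UDP. "input.avi", "out.vob"
# and 192.168.1.10:1234 are made-up names for illustration.
ffmpeg -i input.avi \
    -c:v mpeg2video -c:a ac3 \
    -f tee -map 0:v -map 0:a \
    "[f=vob]out.vob|[f=mpegts]udp://192.168.1.10:1234"
```

Each slave output in the quoted tee argument gets its own container format via the `f=` option, while the video and audio are encoded only once.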
So I was pondering something like streaming the output over multicast RTP and running something like rtpdump on the same machine to create the local file.
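That idea might look roughly like the following, assuming rtpdump from the rtptools package and ffmpeg's rtp_mpegts muxer (also a later addition); the multicast address, port and file names are placeholders:

```shell
# Sender: encode once, wrap the MPEG transport stream in RTP and
# send it to a multicast group (address/port are made up here).
ffmpeg -re -i input.avi -c:v mpeg2video -c:a ac3 \
    -f rtp_mpegts rtp://239.1.1.1:5004

# On the same machine, join the group and record the packets to a
# local file with rtpdump (from rtptools).
rtpdump -F dump -o capture.rtp 239.1.1.1/5004
```

Note that capture.rtp is an rtpdump-format packet trace, not a playable .vob; a post-processing step to strip the RTP framing would still be needed to recover the raw transport stream.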
In summary, I think it's a pity that there is good encoding software like ffmpeg, but that nobody has yet volunteered to fix the remaining issues needed to turn it into a good streaming and recording solution.