Apparently, when I 'bricked' my router, I did no more than install buggy firmware. How so many things worked together to convince me I had bricked an otherwise excellent piece of hardware is a fairly long story. I got the router last summer and put OpenWRT and X-WRT's web interface on it. For someone who had never used a Linux router before, it was amazing. Over the next few months I learned the truth about OpenWRT's firmware and all its nasty bugs. Chances are, things were misconfigured. Even so, I had problems where the router needed a manual reboot about once every 5 days. Over those 5 days, performance would degrade so much that I couldn't even SSH into the router anymore. Rebooting fixed that, although I never did figure out what caused it. I was too busy.

I switched the firmware from the 'stable' but aging WhiteRussian to the new and supposedly improved Kamikaze. I assumed it would have more up-to-date versions of the tools I was using, as WhiteRussian was out of date. I also did a bit of tinkering with the layout of my network. That last bit is the important part. WhiteRussian and other WRT54-specific distributions save their configuration information in NVRAM, so it survives reflashing. Kamikaze took a more general approach: it would only import settings from NVRAM once, and instead save everything to flat configuration files. So my router still held all the old settings from before I switched over.

When push came to shove, I was having problems configuring the firewall in Kamikaze to forward connections to different servers. Again, I might have been missing some crucial detail, but the WebIF interface is incomplete, and modifying text files did not do the trick. I began shopping around and came across Tomato, a modification of the original Linksys firmware with a very snazzy interface. I decided to give it a try. When I installed it, however, my computer failed to get an IP address from DHCP. When I gave it its old IP as a static IP, that didn't work either. I assumed the worst.

Thanks to a few pointers, I figured out how to get into the emergency TFTP mode and reflash the firmware. Realizing that a) I had successfully reflashed the firmware without needing any JTAG madness, and b) the problems continued with Tomato, I figured maybe the router was good and something inside Tomato was broken. On a whim, I tried the older IP address the router had used in the WhiteRussian era. Surprisingly it worked, well enough that I could get to the snazzy web interface. There I found out that Tomato had started with DHCP disabled!

I cannot stress enough how bad it is for a piece of wireless router software that prides itself on its interface to ship with DHCP disabled. It boggles my mind that someone forgot to enable the technology that lets you plug things in and go. Even more surprising is the level of quality in everything else in Tomato. The web interface manages to present pretty much every standard feature one would expect from a router, plus everything possible directly with the WRT54. There are so many settings there that I never knew existed, and certainly never would have found in OpenWRT. For example, you can apparently increase the transmission power of the wireless antenna up to 25 times the default. I am now able to get a wireless signal in my sun room on the other side of the house. With a bit of overclocking and extra powerful cooling, I'm sure I could make the connection reach all the way across the street to my neighbors. I think I might just have to blog sometime about some of the other fun things exposed by Tomato.

So what went wrong? A lack of community standards in configuration settings for WRT54 firmwares, pure negligence or stupidity on one person's part, and an overall panicked nature on the part of yours truly. Thanks a lot to everyone who had something helpful to say, and to the 15 or so internet sites I pieced everything together from.

Bricked Router

Dear Lazyweb,

Last night I managed to brick my router while reflashing its firmware, despite doing everything by the book. It was probably just one of those unlucky moments in life. It's an old 3rd revision Linksys WRT54G, complete with 2.4-kernel-only Broadcom chips.

My question to you, Lazyweb, is which cheap reflashable router should I get? This one had trouble getting internet to the whole house, so if I ever fix it, I'll probably set up a more distributed network as well. What I want in a new router is a more powerful CPU and possibly Wireless-N capabilities. I know I will have it for a few years, so it will definitely be worth paying a bit more. Are there any good Wireless-N routers that will let you put your own firmware on them? Preferably something that can handle a 2.6 kernel and doesn't need a proprietary Broadcom driver?

Fedora 9's Name

The following are some comments posted to Planet Fedora by Fabian Affolter. I found them hilarious, and since there is always a problem when translating things from English to other languages, I thought it would be good to have them translated into something more accessible. I've taken a few liberties translating this, since German doesn't map neatly onto English in direct meaning. With his permission:

Moving along, a new name for Fedora 9 was chosen. It's going to be Sulphur, or in German Schwefel. I can live with that.

Were we to continue forever with the Periodic Table of Elements, I would be very happy. From my point of view, it makes things clearer and more intelligent if the code name comes from a scientific field. There are always going to be great names in English, but when translated into other languages, they are hilarious. I would prefer not to have this problem again. Ubuntu, for example, picks names you might expect to hear in a playground. They belong, rather, in a children's book. I'm looking forward to more good laughs: Crummy Cat, Dirty Dog, the Oozing Ox, and the Drunken Deer. If I said at work that I was using the Effluvious Eel, my coworkers would roll around on the floor laughing.

[I had to use the thesaurus to find Effluvious :P]

Who the devil chose "Mayonnaise"? I'll give you jokers a few good hints about what happens when a Frenchman speaking German has no idea what that is.


To be honest, I think one of the hardest accents in German to understand comes from French speakers. The two languages are so different in some ways that, as a foreigner, it is that much harder to comprehend. While I certainly appreciate calling Fedora after a tasty sauce to go with roast beef, you have to admit that, without a large international marketing team, our name will be broadcast around the world as-is. Many big companies pay millions of dollars to have *different* names in different countries. It also cracks me up that "Langnese Eis", an ice cream company, has an Italian name in Germany, but a German name in Israel. (I think it's either "Straub's" or "Strauss's", although I can't remember offhand.)

Smolt is awesomerester

Tonight I'm happy to announce that the SELinux features are complete. We've been collecting SELinux data for a while, but apparently the reports never got updated. Tonight they have been. Whenever we get around to updating smolts.org, these new changes will be apparent.

Along with them come other new features, like superpowerful anonymity, a few database optimisations on the backend, better integration with FAS and Bodhi, and talking tigers. I knew today was a good day to wear my rocket ship underpants.

Yukon Ho!

Halp!, a Fud is eatin' my Fud

I promised a blog post about FudCon, and this is it. For starters, FudCon was awesome. Overall, I didn't get as many actual lines of code done as I would have liked, but it was very productive all the same. A big thank you goes to Max and all the others who made FudCon possible. It was great to finally meet some of the Fedora hackers I've gotten to know over the past eight months. I got the chance to sit down and talk to a bunch of different people and flesh out a bunch of different ideas, some more alcohol-skewed than others, over the weekend. Hopefully there'll be a lot of good coming out of them.

The Sessions

The first hacking session was great. Between everything else, I managed to get about 70% of the security features I wanted to stick into Smolt. Most of the details need to be fleshed out, but I can safely say, Smolt is very very anonymous. If you feel your identity has been compromised, just ask for a new one; it will be fully automated. Also, Lee took some of our queries to our MySQL backend, and optimized them, so I'll be integrating that work in over the weekend.

I also got the chance to listen in to the work that Paul Frields, our newest fearless leader, and the rest of the Fedora Board members are putting together. We're in good hands.


Michael Tiemann's talk was very interesting. The thing that struck me most is how companies are coming onto the Open Source bandwagon wholesale. They switch their entire stack to open source components, install RHEL on every box, and then ask "How can we be more open source?" To be honest, this sounds too much like a religion, and not enough like a solution. The skeptic in me wonders how much of "Open Source" is really understood by the execs running all these companies. Well, assuming they do get it (and the optimist in me hopes they do), we discussed how Fedora can help. Something that came up quite often is the infrastructure that Fedora already has. As a group, we coordinate a huge global development team, providing such sundry resources as file servers, application servers, development applications, IDEs, discussion forums, project management software, and even VOIP service. I'd like to coin the term "Salon Fedora", which is what I feel Fedora is going to turn into: a virtual salon where developers can come together and discuss ideas openly, with the chance to develop them and see them deployed into the wild. It is a link between the business hacking community and the hobbyist hacking community. (If you don't like the name, please feel free to think up a new one.)


The second talk I went to was the Pyjigdo talk given by Fedora Unity. Fedora Unity has been using Jigdo and other tools to develop their Respins of Fedora. I also went to their third talk about Revisor, and between the two of them, it was great to see how the entire Fedora Unity stack comes together software-wise. Also, thanks to Jon and Dutchie showing off how to write a plugin for Revisor, I have a couple of fun ideas for Wevisor, which I'll go into at the end.


The fourth talk was an interesting one. On one of the mailing lists, there has been heated discussion about our init system and how we are going to move forward in the future. Casey Dahlin has a no-nonsense approach to building a new init system, but I was happy for my first chance to see how Fedora developers resolve these conflicts in person. I suggested to Casey that he have someone record the discussion, because it seemed like it would be a great way to show the world that sometimes Fedora developers can do things amicably. Wishful thinking led to none other than Paul doing the recording, with everyone interested contributing what we needed to. Between people from the RHEL side of things, long-term Fedora people, and ex-Debian enthusiasts like myself, we were able to go over all the issues involved in switching to Upstart. It'll be interesting to see how one more Ubuntu tool integrates into Fedora, just like they use our tools, such as PulseAudio.


The fifth talk was just fun. I got to meet Seth Vidal and talk about Yum. I wish I could have seen something about writing plugins for Yum, but it was fun to talk about why Fedora doesn't do live upgrades the way that Debian does. Funny thing is, I've stopped missing that feature in some ways.


Finally, there was Luke Macken and Toshio's TurboGears session. Given the whole complexity of Smolt, it was funny to see Luke make things look easy again. Luke's got a great eye for making presentations, so if you missed it, I highly recommend you see the slides and notes. They are put together very well.


FudPub was... well... FudPub. First, we all went for various pork and other products with beer on the side at the Flying Saucer. I fortunately ducked out of the Karaoke, and hung around the bar next door having something like 10 shots of Jäger. Great way to end the day.

Hackfest Two

So, Jäger in the night: very cool. The next morning: not so cool. I don't know how it happened, but I woke up to find myself talking to Bobby Frank about Wevisor, and how to make it not suck. Right in the middle of a pretty mild hangover, I found myself talking to Jon and Dutchie about how we can go about integrating Wevisor into Revisor better. I missed the chance to work on Wevisor last summer, but now I have the opportunity to do something interesting there.

Holy Crap, I have a lot to do

So for the next couple of weeks, I'll be working on Smolt and Wevisor things. Hopefully I'll have that development done by then, and can somehow squeeze in working on Haskell. Following that, I'll be working on more Haskell and coordinating some tools with a few other devs, but I'll have more comments on that later. Finally, I'll be working on HAppS a little bit, getting it to work on Fedora, because we'll need something to showcase what Haskell can do.

I feel a little like I bit off more than I can chew. I'll have to try to take things one bite at a time....

KDE 4 and other thoughts

I gave KDE 4 a try today. It was very slick and smooth, with all the draw of something severely underdeveloped. If there's one thing I should do, it's give the KDE team credit for being brave enough to label it a 4.0 release. I understand they are getting a lot of flak for releasing something so unpolished, but they still have to release sometime.

The one item that struck me as odd, though, is how much reinvention of the wheel there is. The compositing effects added a level of depth and integration into the desktop that Compiz Fusion probably wishes it could have, and all of it is done entirely through KDE's own codebase. I had to ask myself: why didn't they just partner up with Compiz Fusion and make it work for them? In fact, why don't they just unify their codebase with Gnome, and then we'd have one really awesome Linux Desktop?

The reality of the matter is that unifying the code bases would make it harder to introduce changes. Theoretically, someone could fork any free software project they want when they want to try something experimental. But the more I thought about it, the more it seemed that starting new projects tends to lead to cleaner code, which is sometimes more competitive than the status quo. When I think about the number of tiling window managers available, and how many of them are going to disappear as XMonad becomes more developed, it becomes more obvious how this works out.

In other news, the FDA declared cloned meat safe for eating. I wonder if it's kosher.

FudPub: A Retrospective

I'll blog about FudCon later. When I've recovered, which I estimate to be sometime next millennium.

FudPub, on the other hand, I have this to say about. After a night of eating sausage and Bavarian potato salad, and drinking enough Jägermeister to kill a deer, the best cure is the hair of the dog that bit you: something sausagy. Alas, there was none to be had for this poor loup all day at the hackfest, as there was a hackfest going on. I got home tonight, after travelling all the way to Chicago just to get back to Pittsburgh, having had not a single bite to eat, not even of overpriced food from Starbucks, and asked my dad what there was to eat in the fridge. "Well, there's a couple pieces of knockwurst floating around."

It's good to be home.

(I'll be going straight back to classes after this, so it seriously will be a day or two before I can get all my thoughts together. I promise a nice lengthy report on FudCon soon.)

Smolt and FudCon

So far, FudCon has been awesome. It was very good to finally meet Mike, and go over some of the outstanding issues in Smolt. We finally have a good technical solution, so that we know Smolt is secure.

Traditionally, Smolt would store a UUID on the local system. This would be used as a key to the smolts.org database when sending updates or looking up host information. With this UUID, anyone could remove a profile from the database, and anyone who knew which UUID belonged to whom could find out what hardware you have. This was inherently insecure.

Over the past couple of months, I was trying to think of a way to lock down that access to the person who has control of the local client. Ideally, PolicyKit will define who is allowed access to the functions provided by Smolt, such as submitting and deleting a profile. At bare minimum, this would be root-only, with the UUID locked down and hidden via file permissions and SELinux. Ideally, this will all be controllable via Cobbler, Func, and anything else you might want to use. Let's say you want to share your profile. Just ask the server for a link. Every UUID comes with a snazzy new public UUID that you can share with anyone you want. Don't like your public UUID? Just ask for a new one. You will need access through PolicyKit to get this public UUID too. In the end, there is no way someone can just trace a UUID back to you, unless you want them to.
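To make the private/public UUID split concrete, here is a minimal sketch of the idea in Python. All the names here (ProfileStore and its methods) are hypothetical illustrations of the scheme described above, not Smolt's actual API:

```python
import uuid

class ProfileStore:
    """Hypothetical server-side mapping from private UUIDs to shareable public ones."""

    def __init__(self):
        self._public = {}  # private UUID -> public UUID

    def public_uuid(self, private_uuid):
        # Hand out a shareable public UUID; the private one never leaves this mapping.
        if private_uuid not in self._public:
            self._public[private_uuid] = str(uuid.uuid4())
        return self._public[private_uuid]

    def regenerate(self, private_uuid):
        # "Don't like your public UUID? Just ask for a new one."
        self._public[private_uuid] = str(uuid.uuid4())
        return self._public[private_uuid]

store = ProfileStore()
private = str(uuid.uuid4())   # the UUID stored on the local system
first = store.public_uuid(private)
second = store.regenerate(private)
# After regenerating, the old public link no longer traces back to the host.
```

The point of the design is that the public identifier is random and revocable, so sharing it reveals nothing about the private key.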

I'll blog more about FudCon later, I'm enjoying a nice look at TurboGears, courtesy of Luke.

Our New Telepathic Overlord

Taken from On the record with Jim Whitehurst, Red Hat's new CEO: 'I must have a mission'

What's it like stepping into Matthew's shoes? You inherit his board, his employees, his company. It's a bit like moving into a house that someone else built. How and when will you begin to put the Jim Whitehurst stamp on the company?

I joined Red Hat because it shares my values. I fundamentally believe that what the company is doing is right. People shouldn't expect a change in Red Hat's core philosophy, because it's a philosophy that I share. I will focus on operational excellence, but Red Hat is still Red Hat. That's not going to change.

[NOTE: I didn't have the chance to ask that last question, but he answered it while answering a separate question.]

--end snip--

Clearly this man is telepathic. The world could use more people like this.

Why I don't like C

This is in response to my last blog post. I commented there that I grew up with C as the lingua franca of computer programming. Despite this handicap, I prefer to avoid C and C-like languages, for a very good reason. Instead of just explaining why, I am going to quote Bruce Sterling:

As it happened, the problem itself--the problem per se--took this form. A piece of telco software had been written in C language, a standard language of the telco field. Within the C software was a long "do. . .while" construct. The "do. . .while" construct contained a "switch" statement. The "switch" statement contained an "if" clause. The "if" clause contained a "break." The "break" was SUPPOSED to "break" the "if clause." Instead, the "break" broke the "switch" statement.

Without even explaining what went wrong, it's easy to see the problem with a language full of ambiguities. (For the record: in C, "break" exits the nearest enclosing loop or switch statement; it never applies to an "if".) I can't deny the benefits one gains from C, and in this case, writing telco programs to handle some very high end real time equipment requires a language with those benefits.

I also can't argue that Haskell is free of these problems. Seeing the kind of complexity that using Monads brings, I can't imagine that it's impossible to write obfuscated code. (Actually, I would love to see some horribly obfuscated Haskell code; it would be an interesting learning experience.) As program size grows, though, it's important to reduce the error rate. People like to measure projects and code by Significant Lines of Code (SLOC). Assuming one bug for every thousand lines, a one million line banking project would plausibly have 1000 bugs. There are two approaches to managing and mitigating this problem. The first falls under 'Design Patterns'. The goal of this solution is to create patterns for putting together ideas, so that the chance of error per LOC goes down. After all, if a good design philosophy could bring the error rate down to one bug per ten thousand lines of code, the bug count would go down, and a manager could get a raise.

The other solution is abstraction, which itself comes in two forms, though both amount to the same thing. The first is the use of verified standard libraries. One example of this is the proliferation of web frameworks. All the data marshalling, network connections, session management, run time system access, and connectivity to other services can be abstracted cleanly away with libraries that are tested, packaged, and sometimes sold. In some cases there's only so much that can be done through a library, so many languages come up with convenient structures for handling more esoteric duties, such as concurrency, object oriented programming, aspect oriented programming, certain design patterns, and orthogonality between data structures and language features, all of which require clear support from the syntax. Ultimately, the duty to make it all come together falls on the shoulders of compiler writers, whether it's a linked-in library or assembler code generated for a standard language feature.

I want to propose a new measurement of code complexity: Logical Lines of Code (LLOC). Suppose some concurrent Erlang code, with automatic handling of multiple threads, that frobnicates foobars amounts to 70 SLOC. Considering that each line of code represents 20 lines of thread safe goodness, this program amounts to 1400 LLOC. A bug rate of 1 bug per 500 SLOC turns into a bug rate of 1 bug per 10,000 LLOC. Suddenly, someone is getting a very big raise. Similarly, each line of a thread safe data container library in Java might represent 10 lines of non thread safe container code that would have the end developer playing with his own locks and mutexes.

As a side point, Lispers have been claiming for years that they don't need to wait for language designers to make up new features just so they can use them. Through prudent use of defmacro, any new language feature can be designed on the spot. Without going into the sometimes-named 'autistic programmer' problem, this leads to a lot of fragmentation in design. LISP seems to have as many design patterns as Java, though they seem to focus a lot more on reducing the SLOC count. It also creates a funny problem where no two people's code can look alike. I couldn't even call this balkanization, but it makes LISP the interesting black sheep of the language evolution family tree.

Ultimately languages like Erlang, Haskell, and even F# aren't free of these nasty syntax issues that have bitten the best of the best programmers at least once. Every program is doomed to fail at something, but when trying to reduce the epicness of the fail, you don't want to have low level code hanging around your neck.

Why Haskell?

Over the past few days, I've been taking time off my regularly scheduled programming to learn a bit of Haskell. Anyone who knows me knows I have a tendency to ignore the things I'm supposed to do and go experiment with the really obscure things that interest me. I think my New Year's resolution this year is to stay more focused on the things at hand, but allow me to elaborate on why I think Haskell is such a great idea, and what I've learned from it these past few days. But first, let me digress.

In the beginning of 2005, I took my first stab at writing code in Python, for school. My school was pretty Java-centric, and despite only 5 months with that language, I was already seeing some of its problems. I was taking a discrete structures class and working on some 20 line program that needed to 'just work', and after learning SETL in high school, Python, with its funny list comprehensions, seemed like a good fit. I never got that program to work, and I ended up dropping the course, because the professor couldn't teach. (Considering that the entire contents of the course can be summarized in three sentences on Wikipedia, I resolved long ago that I probably didn't even need to take the course in the first place.)
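For anyone who hasn't met them, this is the SETL-flavored feature I mean. A minimal illustration (the set-builder notation in the comment is an approximation, not exact SETL syntax):

```python
# SETL-style set-builder notation, roughly { n*n : n in {1..10} | n even },
# translates almost directly into a Python list comprehension:
even_squares = [n * n for n in range(1, 11) if n % 2 == 0]
# even_squares is [4, 16, 36, 64, 100]
```

The filter, the generator, and the expression all sit on one line, which is exactly what makes them feel like mathematical notation rather than loop boilerplate.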

The biggest lesson I took away from that is what is involved in learning a language as an adult. I learned BASIC, LOGO, C, and C++ by the time I was 14, and while I'll never go near those languages (except LOGO) without adequate sanity protection, I have to remind myself that the learning process was very different then. In the summer of 2005, I was already writing webcrawlers and text parsing scripts in Python, and becoming quite familiar with the process. In 2006, I worked on a couple of J2EE projects, created some tools in Python using PyGTK, and picked up a lot of theory in the process. Writing in Python seems almost like second nature after 3 years of use, but I can't let myself forget: my first program never worked. That was the most valuable lesson I could have learned in Python.

In order to learn Haskell, I picked two projects to work on. The first was a couple of patches to bos's cabal-rpm, and the second a patch to xmonad-contrib. Let's face it, 2008 was nearly upon us; how many more 'Scheme interpreters' do we need to see written in under 48 hours [1] ;-). There were a few other projects one could do to learn Haskell, but I wanted to do something that seemed more practical to me. cabal-rpm was an obvious choice. If I am going to take learning Haskell seriously, the few days invested in making good RPMs out of Haskell packages would pay for themselves. Instead of fussing around with Cabal and Haskell's own library management, I could integrate it with the tools I had already invested time in learning, Yum and RPM. Despite my initial complaints about the pair, I've really learned a lot about them since then, and Yum especially has grown on me. (On that note, I'm trying to revive the Haskell SIG, so if anyone else is interested, please help.) The second patch was a bit more happenstance. As I posted earlier, I had been playing with xmonad, and I came up with a funny idea for inclusion. I don't know much about all the mathematical formalisms that go into a language like Haskell, but I use a huge number of them every time I turn on my computer, so I decided to just dig in and see how easy it would be to prototype my idea. I had to go through three different ideas before I found something that worked, and in doing so, I learned a lot about some of the more esoteric data types in Haskell.


The most exciting part of all this, though, is not that I learned a new language, but that my code actually works. Of course, I'm an older and wiser man than I was 3 years ago (sarcasm included), so I was expecting that I would eventually come to understand Haskell better and faster than Python. But every time I came up with a prototyped idea that Haskell didn't like, I had to remind myself that I was still doing it wrong, and I just had to come up with a better idea to make it all work.

The most rewarding part of it all was how, once everything compiled, it worked. In the Python world, I have this bad habit of throwing in liberal amounts of debugging statements to make sure that all the types are matched up. The ease with which one can just insert things into the language, jump into the interpreter, see the results live, and then continue developing certainly makes learning a language easier. But none of this could match the high level formalism that I was able to achieve in Haskell. In writing code, it felt like I was nearly writing pseudocode that described the problem I was trying to solve. If I got a compiler error, it wasn't because my code was buggy, but because my idea was buggy. All I had to do was formally describe the data types I wanted, and if the idea really did fit the code base, it would compile. Furthermore, by just glancing at a block of code, I could easily figure out what it did, and if it didn't do what I wanted it to do, that was obvious. Once the code compiled, there was little need to test it, because I already knew it worked. When it didn't compile, I would get some very clear messages from the compiler saying what didn't match up. (When I say clear, I mean the messages themselves were terse and a bit cryptic, but part of the learning process was learning what they meant as well.) Once I got a good feel for what the errors were saying, I was able to use them as a guide to fixing my code. It became a cycle of edit - compile - read - repeat until read = 0. Late at night, it felt almost like the compiler wanted my code to succeed, so it was giving me hints on how to fix my idea so it could actually make sense. To my Python senses, it was certainly a more enlightening experience than a pile of 'log.debug(obj.__class__, dir(obj))' statements.

So why should you learn Haskell? It wasn't as easy to get into the syntax the way it was with Python. I can't deny that Python is probably the easiest language to 'read'. For introducing people to programming, Python is still an excellent language. But if you're looking for a language where you can tell the computer exactly what you mean, and see it work right away, Haskell is a much better choice.