My goal was to create something that can handle a workflow to automate most of the steps in making RPMs for basic Cabal packages. Cabal is the package manager for Haskell, similar to Python Eggs or Ruby Gems. Since it's so well developed, 95% of the steps required to make an RPM can be automated down to a few brief statements. Yesterday, i spent the better part of the day hacking together a module in fedora-devshell that can be used to download Cabal packages from Hackage, the Haskell source package repository, compile them, and install them either to root or to home. The user can also do it step by step, and intervene where necessary.
If you would like to follow along, you can get the source below. (GPL v2+)
In this example, i'm assuming that all the source packages will go in ~/haskell, where i normally work. fedora-devshell does a bunch of things half baked, because i'm still working on them. The only modules and concepts you will need, though, are Package and Cabal. Package refers to any generic Linux/Unix package, and Cabal refers specifically to a Cabal package. Essentially, ~/haskell will have a bunch of Packages, and each Package will have one or more Cabal packages, although only one active package at a time. And that's it for theoretical stuff. Just remember, a Package can have a different name than the Cabal package it contains.
To take a random tarball, and make a regular Package out of it, run:
~/haskell $ ports.py cabal xmonad-test copy_in /some/location/xmonad.darcs.tar.bz2
This will create a directory ~/haskell/xmonad-test, which will have the tarball and two expanded trees of it: one modifiable, and one a pristine copy of the original. In the future, fedora-devshell will make it easy to modify the source, compile and test it, and then make diff files to use in RPMs or to send upstream.
To compile the Cabal package, you need the following steps:
~/haskell $ ports.py cabal xmonad-test configure
~/haskell $ ports.py cabal xmonad-test build
~/haskell $ ports.py cabal xmonad-test install
This will install it to $HOME.
You can also use the shortcut:
~/haskell $ ports.py cabal xmonad-test install_source
If you want to do it all in one, with a tarball, you can just do this:
~/haskell $ ports.py cabal xmonad-test install_sourceball /some/location/xmonad.darcs.tar.bz2
Let's say you want to get some version from Hackage:
~/haskell $ ports.py cabal xmonad-test get_from_hackage xmonad 0.8
Or, if you want to download the latest:
~/haskell $ ports.py cabal xmonad-test get_latest xmonad
You could install from hackage:
~/haskell $ ports.py cabal xmonad-test install_from_hackage xmonad 0.8
Or even just:
~/haskell $ ports.py cabal xmonad-test install_latest xmonad
Essentially, it's yet another package installer for Cabal. If you're a regular Haskell user wondering why you need this, then it's probably not for you, because there are other tools that can do this better. If you're a packager, though, it's a sneak preview of one of many tools that will be integrated together in fedora-devshell to make your life a lot easier.
I started with Debian when i was in high school and i was amazed with the sheer quantity of things i could do with it. Since then, i experimented with Knoppix, Knoppix STD, Gentoo, SuSE, Ubuntu, Crux, and Mandrake before it was Mandriva. I used Fedora 6 as my first RH derivative when i started working for Red Hat, and eventually went 100% RH in house.
2) What is your preferred $your_distribution version?
In Debian land, it is definitely Debian Potato. Woody was also nice. In Fedora, Fedora 10 beats the pants off pretty much anything done before.
3) Write a short story (more like an anecdote) about your past distribution.
I used to piss Seth Vidal off a lot complaining about Yum, and how much it used to suck. It wasn't pretty. One day i met him in Raleigh, asked him two questions, and got the most logical answers ever. Suddenly it made sense, and yum stopped sucking. I don't think i've argued with Seth since. The moral of the story is to get to know your Open Source developers; the relationship is so much more rewarding.
Because Yiddish is so similar to German in grammar and vocabulary, it's quite easy to misunderstand Yiddish terms. The meaning of a word can vary widely based on some Jewish joke, sometimes with roots in mysticism or ancient Hebrew vocabulary. Often, words were also chosen as an insult to the Christian governments that gave Jews second class citizen status. So sure, you can look up Christmas in your handy copy of Weinreich (who doesn't have one?), but you won't really understand the translation at all. It's this kind of double entendre that makes the translation completely worthless.
The thought of non-Jews going around wishing each other a 'freyleche nitl' has me cracking up now. :D
The other prediction was that GPL enforcement actions would continue, and perhaps grow. The recent FSF lawsuit against Cisco makes it clear that the GPL enforcers are serious about what they are doing. Your editor cannot help but wonder, though, whether the increasingly litigious actions by the Software Freedom Law Center might not eventually lead to a serious backlash within the community. We are about freedom, not punitive damages. Enforcement of the GPL is necessary if we expect our licenses to be taken seriously, but overly zealous - or greedy - litigation could encourage those who say that use of free software exposes companies to an unacceptable level of risk.
Simply put, i fail to see how the FSF is being 'greedy'. The FSF has been trying since 2003 to bring Cisco into compliance, and the standard procedure has failed. More importantly, i don't think the GPL is necessarily a burden on companies either.
For instance, consider the EULAs provided with other industry software. The Windows Vista EULA is onerous enough on its own, and that's before even considering building a one-off product based on Vista. Between Oracle and SAP, with their multiple levels of licensing, it's a wonder companies don't get sued off the face of the earth just for using a free trial. Legalese has a very broad effect on the industry, and on the way companies and other organisations operate. Schools and daycares, for example, tend to be so liability conscious that they have very strict rules about what may and may not be done there. In the proprietary world, the burden has definitely been pushed from the software provider to the software consumer in many cases. Compared to all this, GPL compliance is very simple.
Compliance can be even easier. Basic compliance roughly requires that all components covered by the GPL be released with both binary code and source code. GPLv3 compliance requires a few more steps that are a critical part of the design process. For example, the requirement not to use DRM or other methods to lock the user out of running the software on a target platform is a fairly high level engineering decision. GPL compliance also requires being aware of which components use GPL code. How easy is it to be in compliance, then? Pretty easy. All you need to do is have a link that says 'source code over here', and you're done.
But what if projects under the GPL could be developed out in the open? I mean, if the Linux kernel were hosted somewhere anyone could see it before it's even released, the source code would lead the binary code. If there were a website to host and house open source projects, it would be even simpler. It's a shame we don't have such tools and websites available. ;)
The open source methodology is to release early and release often. In Fedora-land, we have a number of tools, including Fedora Hosted, that are not only open for any takers, but can also be deployed free of charge, because they themselves are open source projects. Any company that wants to stay in compliance needs to understand that in some ways, open source software is easier than closed source.
Open Source development is pretty close to Anarchism. Still, we rely on the courts and government to protect Open Source. If we were to lose that support, what would the Open Source ecosystem look like?
Before i begin, let me redefine Anarchism away from the bad taste it leaves in your mouth: a purely chaotic society where anyone would kill his parents if it meant a few bucks. It's really an insult to the decency of mankind to presume anyone would act in such a way. When i refer to anarchism, i refer to a self regulating, self ruling society where the individual decides which rules are important.
I was watching an interview with Eric S. Raymond where the interviewer asked him the million dollar question: "Is Open Source Communism?". His response was extreme disgust, and his argument against it went to the very nature of Communism. Communism forces the individual to share and participate in a single monoculture society, where if you chose not to be a member, you were thrown in the Gulag, shot in the back of the head, and even buried in an unmarked grave. The question was raised around the 'viral' aspect of the GPL, in how it forces the redistributor to retain that license on all modified code. But let's face it, very few people actually want to force people to use the GPL and nothing but the GPL.
Let's take this completely the other direction. Economically, Capitalism is considered the polar opposite* of Communism. The idea behind Red Hat is that Open Source makes perfect business sense, because it's been proven to encourage faster economic development than the traditional methods that preceded Open Source development. Capitalism is certainly akin to Anarchism, in that both encourage a certain free growth, unimpeded by other limitations. For example, in our society, most capitalistic economies are limited by government regulation, but are otherwise completely subject to consumer demand.** Capitalism, especially as Open Source moves into it more, relies on a set of organically grown collective agreements between the different corporations. Still, it relies on a level of government regulation and intervention to support and maintain these agreements. For example, corporations rely more than ever on the court systems to enforce trademarks worldwide, because without an overarching court, any individual could use a trademark freely with little consequence. Open Source, though, moves corporations into a space where they no longer compete with each other directly, but actually support each other. This is fairly close to an anarchistic economy.
Although Anarchists vary widely on what a post-revolution anarchistic society would look like, there is a consensus that individuals who recognize the value in serving other people's needs will provide for those needs selflessly. Issues such as protective militias, police forces, social welfare, care of the sick, production of food, etc. would all be handled by volunteers. There would be no governmental safety net to fall back on, and a truly independent individual will realize this. Likewise, if the 'community' feels an individual is taking but not giving back, that person's say and respect in the 'society' will automatically diminish. Furthermore, the society could choose to find a way to address the issue, or leave it alone. The development of Open Source, initially as a purely volunteer movement, and its shift to a cooperative corporate and volunteer culture, really parallels the ideals of many anarchistic thinkers. Individuals and corporations alike have donated countless man-hours, server power, storage and hosting space, and money to do everything the Open Source ecosystem needed.
Open Source succeeded because of its licensing model. I would like to say a set of 'strong' licenses, but no one had really challenged them in court until very recently. Presumably, either they are weak and we don't know it yet***, or they are so strong that lawyers are afraid to touch them with a ten foot pole. The licenses are one of the key factors that supported the anarchistic model in a non anarchistic society. The GPL in particular, because of its 'virility' (sorry, couldn't resist the half pun), has pushed this anarchistic ideal more than anything else. Its license/contract rules force the consumer of GPL licensed software to participate in the anarchism. The BSD license, on the other hand, has a key fundamental difference: its rules allow the consumer to release changes under proprietary terms, which lets consumers retain the non anarchistic methods of our society. You could call the GPL non anarchistic, in that it forces the rules on the consumer, but this is simply not true either. The consumer is still fundamentally choosing the software out of free will.
In an anarchistic society, there would be no overarching court system that must be obeyed. If a consumer wanted to rerelease GPL software under another license, the copyright holder would have some alternatives, but none of them include getting the courts to physically stop this person. The copyright holder could convince the offender to cease and desist under a mutual agreement, or he could ask some friends to force him, creating ill will. Anarchists argue, of course, that this situation is still fundamentally better, because it encourages people to think more about direct and peaceful confrontation. But here's my question to you, dear Lazyweb reader. This is a bit of a thinking exercise. How would the GPL work in a truly anarchistic society? Without a court to actually enforce the GPL, how would we, the open source developers, convince corporations that Open Source is the way to go? What do we do when a corporation takes an objectivist point of view, argues that the financial gains outweigh the damages of disrespect, and violates what is essentially now just a Social Contract? What methods do we have to encourage the continued development and momentum of Open Source and Free Software, even when people have a right to do otherwise?
It hit me while writing this that it doesn't just have applications in a doom and gloom scenario where some government collapses in our current economic crisis. It could have some very real and practical application even if the sky doesn't fall. Let's take another thought exercise. Let's say a bunch of high powered and expensive lawyers for Cisco manage to get the GPL overturned in court. What if the government decides not to enforce the GPL, and we lose the ability to enforce our own beloved anarchism? How do we continue the ideals of Free Software and pretty much bluff our way into winning?
* Don't take my word for it, i'm just an academic hack.
** Yes, this oversimplifies things immensely.
*** And let's hope to $deity_or_other that we never get to this point.
If you're interested, you can find me as 'loupgaroublond' on freenode, in several fedora channels, or email me at ynemoy
food is good
good is food
and i am fed
So let your inner geekpoet flow.
In any case, now that finals are over, i'm finally getting around to installing Fedora 10 on my main box, and i'll probably roll my server over soon too. Traditionally, i've used a DVD to do my installations, but it seems like i might change that in the future. I burned a DVD so i could do the installation, but when i ran the media test, it failed. Going back to my Fedora 9 installation and checking the SHA1SUM showed that the original file was not corrupt. I'm not sure why the gnome tools for burning a DVD might have failed, but i'm not really inclined to figure it out. Instead, i'm going to just copy a run of the mill LiveCD to my USB key, and i'll do the installation from there. One big advantage to using the USB key is that it takes a lot less time to copy the ISO over.
Up until recently, the WPLUG meetings and events were held on the CMU campus. For a number of reasons, they were forced to find a new location. Without going into details, the problem is that CMU is not set up to host community related activities that are at best tangential to what CMU teaches. The level of bureaucracy was too much, and thus they needed a new home. For the next two months, events are going to be held at the Wilkinsburg School Community Center, a location much better suited to "Community". One theme of conversation was how the LUG could help support the Center by improving the infrastructure, or at least providing wireless internet for the whole building.
It seems there's a very quiet Fedora community in the area that has so far gone unnoticed. I was asked by a couple of people about what we (Fedora and Red Hat) can do to help support and build up the community. On the same thread, one person 'complained' about giving out Ubuntu media when he doesn't use or support Ubuntu. In the next few months, we will definitely be trying to mobilize the community here, and perhaps train a couple of people to become Ambassadors. In the same vein, there are a few people who want to better support Fedora at install fests. This means a nice NAS system running cobbler that we can use to image machines with Fedora at parties and meetups.
I talked with the guys who are interested in doing BarCamps and other meetups like this in the area. I also brought up the idea of starting up an OpenStreetMap user group. I think we're going to have to plan something once a month in order to gain momentum. There are a number of groups we can lobby in the area to start participating in both events. It should be interesting to see what we can do with this.
It snowed, and none too soon. I managed to grow out a full beard leading up to this, and it's been keeping me nicely warm :). Also, it seems i can make quite good Glögg, a Swedish variant of mulled wine. This lethal combination of wine, port, brandy, spices and sugar will keep you warm when trekking home up and down the Pittsburgh hills late at night when it's -7 degrees (Celsius, aka 19.4 Fahrenheit) outside.
Is there a decent library (preferably Python) that can manipulate strings that are encoded in Unicode for IPA? This would mean a library that can intelligently recognize that [ʈʰ], [ɭʷ], and [d̪ʰ] are one character each, even though they are formed out of multiple components. I don't really feel like cross referencing all these words tonight, and i know a computer could do my homework much faster.
Bonus points if anyone can tell me in which language those three phonemes occur. :)
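For what it's worth, here's a rough sketch of the logic i have in mind, using only the stdlib unicodedata module (the function name is my own invention, not from any real library):

```python
import unicodedata

def split_ipa(s):
    """Group an IPA string into phoneme-sized chunks by attaching
    combining marks (category Mn, e.g. the dental diacritic) and
    modifier letters (category Lm, e.g. aspiration or labialization)
    to the preceding base letter."""
    chunks = []
    for ch in s:
        cat = unicodedata.category(ch)
        if chunks and cat in ("Mn", "Lm"):
            # attach this component to the previous base character
            chunks[-1] += ch
        else:
            chunks.append(ch)
    return chunks
```

With this, [ʈʰ], [ɭʷ], and [d̪ʰ] each come back as a single chunk, though a real library would also need to handle tie bars, affricates like [t͡ʃ], and stress marks.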
Mapping itself is pretty simple. Yesterday was rainy, so i spent the first half of the day going through the different programs and testing how well they worked in Fedora. Once that was taken care of, we had a pretty good idea of what worked and what didn't. Overall, JOSM is an excellent program to work with, though it could use a few bug fixes and some polish. With some of the plugins, it's very easy to get working right away, produce quality information on the maps, and most importantly, file bugs and clue other people in on how and why you did things the way you did. I think map making might be one of the easiest ways to contribute to Open Source without knowing how to program or knowing a lot about computers.
Over the past two days, Richard and i talked about a bunch of good ideas, some of which i hope to bring to reality this week. He's been looking for a good tool to get new mappers set up with the most minimal setup possible. Since all the packages we need have been made into RPMs, it should be relatively trivial to put a kickstart and a liveUSB together to do the job.
We would like to gather interested people for a Geo SIG inside Fedora. If any of you readers out there love mapping and love Fedora, we would love to hear from you. If we're going to start giving away Fedora at mapping parties, then it will be good to have a SIG and expert Fedora users around to help out.
OpenStreetMap and its sponsors are people who 'get' Open Source and Community. To that end, i would like to help Richard spread his message as much as we like to spread ours. One of the challenges he faces when planning mapping parties is that he knows very little about the area he is going to. Out of purely practical considerations, he needs help to make sure he isn't putting a party in an unsafe neighborhood. These parties are about having fun, not getting new mappers mugged or worse just for coming out to have an enjoyable afternoon. I invited Richard to join our Ambassadors mailing list to ask the Ambassadors for help. I posted a quick email there a few days ago, so before you flame or bikeshed me, remember that you had your chance to do that on that thread. If you have the ability and the wish to do so, please help him out with any practical information you can give him, as an Ambassador, about areas you know.
On that same note, Richard mentioned another issue he's been having, similar to one we have been having: how do you measure community involvement? Suffice it to say, we may see our second user of EKG in the near future :).
We also talked about ways to open up more of a mapping community in Pittsburgh. Fortunately, a BarCamp has been announced for next weekend here in Pittsburgh. I'm hoping to do a demo of mapping in Pittsburgh, and hopefully starting something new. It should be fun to see how that goes.
Last but not least, no event would be complete without one of my pipe dream ideas. Apparently, using a one-off schematic design, it would be reasonable to mass produce USB GPS devices that feed directly into gpsd for 12 USD per piece in lots of 10,000 or more. I'm sure for an extra 50 cents, we could get a green plastic cover. If you're thinking OLPC, you got it. What would it take to outfit OLPCs with GPS devices and an activity designed around mapping and geocaching? With large communities of kids being given new technology, they could all use OSM to map things around each other, perhaps collaborate on physical presence events, geocache, map their town for other people to see, and even share shortcuts through spaces they know. Since OSM is a free and open wiki, the possibilities are really endless. I would love to know if this could be done.
That's it for now. And now back to the regularly scheduled school program of dooooooooom.
OpenStreetMap mapping party report coming soon....
That is, i voted for all the positions, but Nobody seems to be a good choice for the presidency.
And now i'm tweaking out on the free Starbucks. :)
I should blog about the OpenStreetMaps after party, and how we're going to have one in Pittsburgh in two weeks.
I should have gone to school today, to find out what happened to my suddenly canceled Yiddish class, due to an ongoing fight between the PhD student, the University and the ADL.
Instead i'm in bed, sick. Damn Autumn.
Fortunately most devices have *some* hardware-only recovery mode, no matter how difficult to access, that can be used to reflash everything from scratch. In the Sansa's case, the c200 and e200 series use a combination of bootloader and ROM to run. Factory wiping the device is a matter of forcing it to load a bootloader directly from USB, and then using the emergency safe mode of the standard bootloader to load a new ROM from scratch. It takes an open source ecosystem for some motivated hacker to come along and write a tool that otherwise only Sandisk has, and only in the factory. A few more clicks later, and now the device is running Rockbox dual booted with the original firmware.
I used this guide: http://www.rockbox.org/twiki/bin/view/Main/SansaE200Unbrick
This got me thinking. One aspect of open source that wins it many followers is that 'it just works, because everything is just a generic case'. For example, when connecting to the Cisco crapware PEAP authentication system on my university's wireless network, NetworkManager just sees it as another network, shows me a standard dialog to get the user name and password, and then does the right thing. Likewise for the crapware Cisco VPN i have to connect to daily. There is no need to use special software to access remote network shares, whether they are SMB, NFS, or SSHFS, and whether i am using nautilus, dolphin, or the command line with FUSE. Everything behaves like a general purpose system, and components on a Linux system get along just fine.
Hardware corporations that make consumer hardware aren't going to start writing debricking software for Linux (or even OS X) just because Linux is out there, though. Much like the beginnings of the Linux kernel, there are certainly many hobbyists coming up with novel ways to take control of our own paid-for hardware using every hack possible. The only catch is that executing many of these fixes requires running a compiler, entering things on a command line, and tinkering with various firmwares until it works. What would it take to make debricking 'just work'? Could we convince Gnome and KDE to detect a bricked USB device when it's attached to the computer? Can we convince them to pass it on to a standardized recovery program with optional plugins for all sorts of devices? Would this convince enough of those clever hackers to come together and start writing plugins for their own devices? Would larger companies jump on the bandwagon once we advertise that we can support their hardware better on Linux?
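Just to make the idea concrete, here's a purely hypothetical sketch of what the plugin side of such a standardized recovery program could look like. Every class name and USB ID below is made up for illustration:

```python
class RecoveryPlugin:
    """Hypothetical base class for a per-device debricking plugin."""
    usb_ids = ()  # (vendor_id, product_id) pairs this plugin can handle

    def matches(self, vendor_id, product_id):
        return (vendor_id, product_id) in self.usb_ids

    def recover(self, device_path):
        raise NotImplementedError("each device plugin implements this")


class ExamplePlayerPlugin(RecoveryPlugin):
    usb_ids = ((0x1234, 0x5678),)  # placeholder IDs, not a real device

    def recover(self, device_path):
        # a real plugin would push a bootloader over USB here,
        # then flash a known-good firmware image
        return "reflashed %s" % device_path


def find_plugin(plugins, vendor_id, product_id):
    """What the desktop could do when a bricked device shows up:
    walk the registered plugins and hand the device to the first match."""
    for plugin in plugins:
        if plugin.matches(vendor_id, product_id):
            return plugin
    return None
```

The hard part, of course, is not the dispatch but each device's actual recovery protocol, which is exactly where those clever hackers come in.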
Even if we can't, i'm certainly indebted to all those clever hackers out there who at least keep trying. Today i've saved a 30 USD device from the garbage bin.
Setup of the table couldn't have gone smoother. Kudos to the Fedora Event Kit for making it so easy. The hardest part was finding somewhere we could get electricity, and as it turns out, our booth ended up right at the entrance to the hall we had available. I love it when Fedora is the first thing that people see at a conference or show. Once we had our location picked, getting started was just a matter of getting our posters up on the wall and the two OLPCs set up and running, alongside Jon's laptop running Fedora 10 Snap 2.
The original plan was to get 100 units of media to hand out with everyone's welcome packet. When i got my welcome packet, i was worried that Anteil, a local company, had upstaged us there, with a complete portfolio, marketing materials, and Anteil branded USB sticks. To my complete surprise, one of Anteil's employees is a big Fedora fan, and put persistent Fedora on all those sticks ahead of time! Our media was useless, thanks to someone else's generosity. I want to thank everyone at Anteil for supporting us so much; we really value it.
I got the chance to see a few presentations. They mainly went along the lines of 'look, here's something cool I can do with FOSS in the workplace.' There were presentations on using puppet to manage email systems, using PExpect to manage over 10,000 units of Cisco hardware, a few cool things one guy liked about git, and so on. I think the highlight was when Chris Moats wiped the first 10MB of a CentOS box that was acting as the wireless router for the room we were in. As he gave his presentation, cobbler and puppet reinstalled and reconfigured the server in the background in 15 minutes, without a single bit of effort on his part.
Although i didn't plan on speaking, Jon Stanley and i held a BOF session on the OLPC. We had yet more problems, as usual, getting mesh networking to work; the laptops are incredibly finicky at shows, unfortunately. We started off with some Q&A on the laptops, while I made sure they circulated the room and let people get a chance to play with them. Once again, the chance i had to visit the OLPC lab in Boston last summer as part of my Red Hat internship proved invaluable. It's so much easier to talk about the XOs, despite not being a developer, when i actually know what i'm talking about.
One of the guys there brought up an interesting issue though. He was considering buying his niece an XO last spring during the G1G1 program. Unfortunately, his niece is exposed regularly to 'those other established Redmond and Bay Area operating systems'. If he got this laptop for his niece, how would she be able to participate in the real value of the OLPC? Most of that value lies in there being a mesh network of kids all with laptops available. In a world of beige and brushed aluminum boxen, though, she'd be pretty isolated with her green and white system. How are we going to get the incoming computer users integrated into the world as a whole, once we give them the XO laptops? The conversation moved quickly to integrating people using any Linux system in a commercially dominant world.
I noticed very little talk about distros. I got a few grumbles about how much Spacewalk/Satellite sucks, but except in some practical conversations, there was absolutely no discussion of Fedora vs. LTS, nor Fedora vs. Ubuntu vs. SuSE vs. Flavor Of The Month. Overall, i did see a few Fedora laptops, although there were quite a few Apple laptops there too. Pennsylvania, though, is a state where people live with some very widely conflicting views, politically, socially, and religiously. It's good to know that we all know how to get along with each other when we do have some common goals.
One fact I had to stress to a few people is this. The Fedora Infrastructure stack is as Open Source as it gets. Launchpad and Github are not open source the way we are open source. The goal of Fedora Infrastructure is to have a stack of applications that anyone can redeploy on their own. While it's a conversation for another day, webapps present a huge threat to Open Source. One thing that Fedora Ambassadors might want to consider talking about more is this: Fedora is 100% open source, including our Web Applications, Infrastructure, and Know-How. What other distribution or open source supporter can claim the same?
One of the companies here, Anteil, really gets Open Source. Jim Capp, the CEO, told me that he is very open minded about which distros are used inside the company, and they really try to focus on supporting whatever the customer asks for, not just what the technicians like. Even so, he said many of the customers ask for Red Hat over everything else. If anyone doubts the value that Red Hat brings to Linux, it's certainly not Anteil or their customers.
Jim and i also talked about plans to get Fedora and CentOS into some of the university laboratories in Harrisburg, and that the faculty there seems fairly open to the idea for now. He's really interested in being able to focus on the philanthropic side of Open Source, and not just the day to day business side. It's a great feeling to know that Open Source is so well represented in Central PA.
Finally, I was wondering what was so special about CPOSC that made David Nalley tell me last summer, "Yaakov, we have to get some Fedora Ambassadors to CPOSC". Right now Linux doesn't hold a very dominant presence in PA at all, other than local companies that provide and service it. Looking at the coming recession, service industries always fare better than production industries, and Linux and Open Source are positioned to succeed quite nicely. PA is known for its service industry, and so far Pittsburgh and Pennsylvania have done better than most with the looming recession. Central PA is a great area to get involved with Open Source on a grassroots level. Everyone I met there understood the value of Open Source, and knew why they were there. It's been a pleasure getting a chance to meet everyone.
As for CPOSC, as I've said before, it's this Sunday, in Harrisburg, Pennsylvania. Saturday night, we're going to have a small social near the conference hall for local members of the LUG and anyone coming from out of town. We're meeting at Gilligan's on Eisenhower Road at 8pm; if you want to join us, just look for the guys with the usual Linux paraphernalia.
Hopefully, i'll be more relaxed by tomorrow night after recovering from my midterms.
The Central PA Open Source Conference (CPOSC) is going to be in Harrisburg, PA (that's in the US) this coming Sunday. CPOSC is open to all ends of the Open Source spectrum, from applications to low-level hardware and a whole smattering of IT. Fedora will be sponsoring the event this year, and we'll be hosting a small table for all things Fedora as well.
We're already fully staffed, everything is running on schedule (so far), and we have volunteers for manning our table. Still, we love seeing the faces of Fedora users. If you want to come help out, show up wearing a Fedora T-shirt or your trusty fedora.
Todd M Zullinger mentioned that he would like to hold a relatively informal keysigning. If you want to participate, bring a copy of your PGP key fingerprint and some form of photo ID.
There will be a get-together of out-of-town guests and any locals with some free time on Saturday, the night before. We'll be meeting at a bar/restaurant near the conference hall; more details are to come.
I look forward to seeing you there!
I'm trying to reduce some Python code with a few ugly 'global' objects to something a bit more modular. Sometimes code is clearer than words:
>>> def foo():
...     print foo.a
...
>>> foo.a = 1
>>> foo()
1
>>> foo.a = 2
>>> foo()
2
In other words, I have some value that needs to persist between function calls, but only one function needs it. I can store it on the function object itself and have it available.
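A closure gets the same effect without touching module-level globals at all; here's a quick sketch of the idea (the names make_foo and set_a are just made up for illustration):

```python
# Closure-based alternative: the state lives in the enclosing scope
# instead of on the function object or in a global.
def make_foo(initial):
    state = {'a': initial}  # mutable cell shared by the inner functions

    def foo():
        return state['a']

    def set_a(value):
        state['a'] = value

    return foo, set_a

foo, set_a = make_foo(1)
assert foo() == 1
set_a(2)
assert foo() == 2
```

Either way, the value is private to the code that actually needs it.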
The only unfortunate tragedy is that code like this still fails epically:
>>> 0.15 * 6
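To make the epic failure concrete: 0.15 has no exact binary representation, so the product lands a hair under 0.9. A sketch of the problem, with the standard-library decimal module as one common workaround:

```python
from decimal import Decimal

# Binary floating point cannot store 0.15 exactly, so the product
# comes out just below 0.9 rather than exactly on it.
product = 0.15 * 6
assert product != 0.9

# Decimal does the arithmetic in base 10, so the same sum is exact.
exact = Decimal('0.15') * 6
assert exact == Decimal('0.9')
```

The usual advice applies: never compare floats for equality; compare against a tolerance, or use Decimal when you need exact decimal arithmetic.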
This is very common when developing web applications. For example, a web app that sends out mail notifications may need to use your private SMTP server, and you may not feel like telling the whole world where to find it inside your private network.
Solution: Just don't commit that file to the source repository of course :)
Better Solution: People are clumsy and sometimes forget to ignore it. Furthermore, changes to the config file upstream will make practically every new patch conflict with your own working tree. This becomes hell to manage, unless you are truly awesome. Or you use Git. Git is truly awesome.
By using these sane development practices, you'll see other benefits as well. Git makes some very advanced tricks easy, and will make development easier for everyone on your team.
One thing I recommend for Git newbies is to use a graphical browser just to view all the changes at each step of the way. I like Qgit personally. This exercise is done through the command line, but for some people it also helps to be able to visualize what is going on.
For starters, this requires a bit of preconfiguration. When coding on FAS, I have two main branches I work with, 'master' and 'loupz'. 'master..loupz' (Git-ese for all the patches that are in loupz but not in master) contains one single patch. This patch fixes the configuration files to use my own environment. It contains a few passwords and some information about my local network topology that I just don't want to share. It looks like this:
yankee@koan ~/Projekten/FAS2 $ git log master..loupz
Author: Yaakov M. Nemoy
Date: Tue Jul 22 15:23:16 2008 +0200
My modded fas.cfg
This commit should never be seen in public. If it does, yell at me.
Once this is set up, it's time to start hacking. In putting together this example, I forgot to 'pre-setup' my hacking session, so I will use git-stash to skip over the 'edit' steps.
yankee@koan ~/Projekten/FAS2 $ git stash
Saved working directory and index state "WIP on loupz: 048419f... My modded fas.cfg"
(To restore them type "git stash apply")
HEAD is now at 048419f My modded fas.cfg
The first thing to do is to create a new branch for all the future development. Since we were hacking on fasclient today, I named it accordingly. This new branch has to branch from loupz, and not master, because loupz has the working configuration.
yankee@koan ~/Projekten/FAS2 $ git checkout -b fasclient loupz
Switched to a new branch "fasclient"
Then I do my hacking, or in this case, just run git-stash apply.
yankee@koan ~/Projekten/FAS2 $ git stash apply
# On branch fasclient
# Changed but not updated:
# (use "git add <file>..." to update what will be committed)
# modified: fas/model/fasmodel.py
# modified: fas/user.py
no changes added to commit (use "git add" and/or "git commit -a")
Then it's time to add the code from the working tree to the index, the staging area, so we can commit it to the repository. I like git-add --interactive, because it gives me an extra level of verification as to exactly what is being committed.
yankee@koan ~/Projekten/FAS2 $ git add --interactive
staged unstaged path
1: unchanged +11/-0 fas/model/fasmodel.py
2: unchanged +19/-0 fas/user.py
*** Commands ***
1: [s]tatus 2: [u]pdate 3: [r]evert 4: [a]dd untracked
5: [p]atch 6: [d]iff 7: [q]uit 8: [h]elp
What now> 5
The rest of this part is like watching sausage being made ;) . Next we commit our changes.
yankee@koan ~/Projekten/FAS2 $ git commit
Created commit 53dc510: An alternative way to get the data to fasclient as efficiently as possible.
2 files changed, 29 insertions(+), 0 deletions(-)
If you're following along in Qgit, you'll see that the commit history has become a bit forked between stash, loupz, fasclient, etc... Since we don't need the stuff in the stash anymore, let's just clear it.
yankee@koan ~/Projekten/FAS2 $ git stash clear
So now there should be a string of patches on top of loupz that have been tested with the private configuration file. We know the code works. If we were to call git-push now, the troublesome patch would be included in the push. Here's the fun part: we're going to use the awesomeness of Git to rewrite the commit history. This is called rebasing. In this instance, we're going to rebase loupz..fasclient, namely all the patches that are in the fasclient branch but not in loupz, onto master. The command is simple:
yankee@koan ~/Projekten/FAS2 $ git rebase --onto master loupz fasclient
Already on "fasclient"
First, rewinding head to replay your work on top of it...
HEAD is now at 6c583ab Adding some legal voodoo and mumbo jumbo.
Applying An alternative way to get the data to fasclient as efficiently as possible.
warning: squelched 2 whitespace errors
warning: 7 lines add whitespace errors.
To double-check this, look at git-log:
yankee@koan ~/Projekten/FAS2 $ git log
Author: Yaakov M. Nemoy
Date: Sun Sep 28 22:40:25 2008 -0400
An alternative way to get the data to fasclient as efficiently as possible.
Ricky and I are trying to figure this out, this commit is one option.
Author: Yaakov M. Nemoy
Date: Sun Sep 14 20:22:30 2008 -0400
Adding some legal voodoo and mumbo jumbo.
Author: Yaakov M. Nemoy
Date: Sun Sep 14 20:17:45 2008 -0400
Adds help and some documentation to the user
Graphically, you should see that there are now two diverging branches, fasclient and loupz. We now need to merge fasclient into master. This is simple.
yankee@koan ~/Projekten/FAS2 $ git checkout master
Switched to branch "master"
yankee@koan ~/Projekten/FAS2 $ git merge fasclient
fas/model/fasmodel.py | 10 ++++++++++
fas/user.py | 19 +++++++++++++++++++
2 files changed, 29 insertions(+), 0 deletions(-)
yankee@koan ~/Projekten/FAS2 $ git branch -d fasclient
Deleted branch fasclient.
Note: we also deleted fasclient because it's no longer needed.
Time to push upstream.
yankee@koan ~/Projekten/FAS2 $ git push
! [rejected] master -> master (non-fast forward)
error: failed to push some refs to 'ssh://email@example.com/git/fas.git'
It turns out that our local repository was not completely up to date. This is not a big deal, because rebase can help us solve this problem as well. We only need a simpler incantation of git-rebase. We simply want to take all the patches that are in our local 'master' branch but not in the upstream origin/master branch, and tack them on at the end.
yankee@koan ~/Projekten/FAS2 $ git rebase origin/master
Current branch master is up to date.
Oops, we need to fetch first ;)
yankee@koan ~/Projekten/FAS2 $ git fetch
remote: Counting objects: 61, done.
remote: Compressing objects: 100% (46/46), done.
remote: Total 46 (delta 31), reused 0 (delta 0)
Unpacking objects: 100% (46/46), done.
6c583ab..cc20015 master -> origin/master
yankee@koan ~/Projekten/FAS2 $ git rebase origin/master
First, rewinding head to replay your work on top of it...
HEAD is now at cc20015 Initial fas_client method.
Applying An alternative way to get the data to fasclient as efficiently as possible.
warning: squelched 2 whitespace errors
warning: 7 lines add whitespace errors.
Finally, we can just push all these changes upstream.
yankee@koan ~/Projekten/FAS2 $ git push
Counting objects: 11, done.
Compressing objects: 100% (6/6), done.
Writing objects: 100% (6/6), 1.19 KiB, done.
Total 6 (delta 4), reused 0 (delta 0)
cc20015..3df1588 master -> master
If you view this graphically, you will see that the branch with the private configuration is horribly out of date. We need to rebase one more time. But this time, we need to check out the branch first.
yankee@koan ~/Projekten/FAS2 $ git checkout loupz
Switched to branch "loupz"
yankee@koan ~/Projekten/FAS2 $ git rebase master
First, rewinding head to replay your work on top of it...
HEAD is now at 3df1588 An alternative way to get the data to fasclient as efficiently as possible.
Applying My modded fas.cfg
And that does it!
This method provides three benefits. First, it makes it very easy to maintain a private configuration without conflicting with other developers on your team. Secondly, it makes it very easy to ensure that there are no 'floating bits' left uncommitted in your working tree; the only things not handled by Git are the bits you are actively working on, and you can keep each branch separate and organised. Thirdly, once you have to merge with upstream, you can do so cleanly, without a nagging 'merge' commit. This will help you maintain a clean and professional-looking development log.
To make the last point very clear, find two Git repositories, one with 'merge' commits and one without, and view them in Qgit. You will notice the repo with 'merge' commits forks frequently any time two developers work simultaneously, and it can sometimes be hard to follow. In contrast, in the other repo, development goes in a single linear line. It only forks when a developer or team has to work on a radically different feature, so it is much clearer why there are forks.
Problem: the user would like to run a 'foo' server in conjunction with Pulseaudio in either Gnome or KDE without too much 'tinkering'. Tinkering means changing no files outside of /home except with the aid of yum or some otherwise simple and reproducible command. Furthermore, these changes should work reliably across setups.
1) In this case, the 'foo' server is Jack. There are a number of programs I would like to use that work with Jack instead of Pulse. Jack is essentially lower level than Pulse, so this solution is geared towards things that work underneath Pulse, thus tools that have to be running before Pulse is. Since Jack is the de facto audio daemon for Freeeee and a favourite of the developers, Fedora users might want to become more familiar with it.
2) To the best of my knowledge, Gnome starts Pulseaudio by calling up the legacy ESD. Since Pulse wraps ESD nicely, it starts up instead using a systemwide default configuration. KDE on the other hand has a directory for systemwide and user specific start up scripts to be run. Pulse is included as a systemwide script. If the user is a member of pulse-rt, then it automatically uses realtime mode settings. Otherwise it resorts to defaults. We're assuming you want realtime settings.
# usermod -a -G pulse-rt <username>
Nicely enough, we shouldn't need to do anything more for jack to run in realtime mode as well.
3) Although we could hack the KDE systemwide directories to start jackd as well, or just change the user-specific ones, that provides no guarantee that Jack is started before Pulse. Furthermore, this is not a possible solution for Gnome.
When the user logs in from gdm or kdm, the display manager searches for .desktop files that specify all the possible desktop environments available to the user. These .desktop files link to programs such as 'gnome-session' or 'startkde', which start up the comprehensive desktop environment. The user can also specify their own script or program to be run on startup of the X server. This script is stored on a per-user basis at ~/.xsession. Although this is considered a 'legacy' method, it is quite commonly used in more minimalist setups. To enable this 'legacy' feature, run:
# yum install xorg-x11-xinit-session
Then create your ~/.xsession script.
$ vi ~/.xsession
and fill it with the following example
jackd --realtime -d alsa &
gnome-session
and finally make it executable
$ chmod +x ~/.xsession
Notice that in the .xsession file, the command for jackd has an ampersand '&' after it. This means the program is run 'in the background'. gnome-session does not have the ampersand; when gnome-session finishes, the X server dies. If you replace it with any other program, remember: once that program dies, the X server dies with it. (Try it with xterm or firefox, and see what happens when you close it.)
You can of course experiment further with .xsession and put any programs you want to start, regardless of which DE you use.
4) Finally, Pulse must be configured to work with Jack. I used a config provided by jebba, one of the Freeeee developers. It goes in ~/.pulse/default.pa .
$ vi ~/.pulse/default.pa
Then fill it with this
load-module module-jack-sink channels=2 channel_map=front-left,front-right
#load-module module-jack-source channels=2 channel_map=front-left,front-right
## The following is not mandatory
## The following can conflict with this config.
#load-module module-alsa-source device=hw:1,0
#load-module module-oss device="/dev/dsp" sink_name=output source_name=input
#load-module module-oss-mmap device="/dev/dsp" sink_name=output source_name=input
### Load several protocols
# no workie
### Network access (may be configured with paprefs, so leave this commented
### here if you plan to use paprefs)
load-module module-null-sink sink_name=rtp format=s16be channels=2 rate=44100 description="martin RTP Multicast Sink"
load-module module-rtp-send source=rtp.monitor
### Publish connection data in the X11 root window
5) Finally, log out of your Gnome or KDE session to come back to your login screen. Under 'session type' you should see a new option for 'Custom' next to KDE and Gnome. Log in using that session type. Remember, Gnome or KDE will load up depending on which you have put in .xsession. If you don't see that option, you may need to restart your computer in order to restart your display manager. (If you're savvy, you can just run 'init 3' and then 'init 5' from the command line as root.)
6) One of the nice things about this configuration is that if either Pulse or Jack dies, for any reason at all, you only need to open a terminal and run the following commands.
$ killall pulseaudio ; killall jackd
$ jackd --realtime -d alsa &
$ pulseaudio &
Since they both now have sane default configurations, there is very little muck that has to be typed in at the command line to restart them.
To put it plainly and simply, Governor Sarah Palin's email accounts were hacked, and I'm assuming 99% of all Americans over 18 know about this. For international readers: the email accounts of Governor Sarah Palin, vice presidential candidate for the Republican party, were hacked. The hacker, an amateur, posted both the results and the attack vector to 4chan, an internet forum, which led to the quick discovery of his identity. It turns out he is the son of a Democratic state legislator, although since nothing has gone to court yet, no one knows what the penalties will be.
I direct you to this soundbite from the McCain campaign:
"This is a shocking invasion of the governor's privacy and a violation of law."
Indeed, one reason such an attack is illegal is to protect people from the use of private information, like the cell phone numbers retrieved from the email accounts, to harass her and her family. Most importantly though, this highlights the dangers of having private information leaked to anyone.
Which brings me to my next point. Sarah Palin should be impeached and removed from all political office. I direct you to the information leaked about the attack.
If you have a look through those emails, some are personal or friendly, but in between there are numerous emails regarding State business. Anyone who works at Red Hat and is forced to use the bloatware called collaboration software, instead of being allowed to use a more convenient GMail account, should automatically understand why this is a severe problem.
Yahoo email is neither private nor secure. The government can do all the handwaving magic it sees fit, but nothing will change the fact that Yahoo is an international company that presents a very high-value target to hackers. Thus Yahoo is 100% insecure. Furthermore, Yahoo has full access to the emails it hosts and transmits. Therefore there is no privacy, and any sensitive government issue could be revealed to the whole world. When you consider the information a VP is privy to, let alone a governor, the potential for misuse is scary. If you want to talk about radicals destroying a large nation for fun or profit, this is the way to go. If you want to discuss shady deals between politicians being revealed, carelessness like this only makes it easier. Each state has an IT infrastructure, as does the US Government, secured by some of the best security systems in the world. Using a Yahoo account instead is downright foolish.
Furthermore, privacy is also there to protect individuals from other malicious individuals. It's tragic, but a matter of ironic justice that Sarah Palin's family is suffering from the exact issues that are brought up when discussing the importance of privacy and data protection.
Yahoo mail is also run outside of government systems. If you think the data retention policies of the White House have been bad in the past 8 years, just wait till Sarah Palin comes to power: they won't even exist! Data retention is the law for many reasons, not least to track the doings of public officers and employees of public corporations. These rules are there to protect the American people and anyone doing business with Americans. While discussion of data retention policies is reasonable, the bottom line is that without them, our individual rights are doomed. By using Yahoo mail, Sarah Palin has given the middle finger to the systems that keep politicians on the straight and narrow. As a politician who 'cleaned up' Alaska, she should understand this better than most.
My take on this is that Palin didn't do it out of any malicious intent. I can imagine she found the state-provided system slow, complicated and onerous, so she decided to use an 'easier' solution. Given the magnitude of the discovery, in most places I've worked, anyone who screwed up this badly would probably be fired. There is plenty of room here to make the 'ignorant/unintelligent soccer mom' argument, but she is still an elected politician, and even a soccer mom should be smarter than that...
Despite this, it's clear that Sarah Palin knows very little about technology, the Internet, and most importantly the laws surrounding them. Whatever reason she had for not using a state-provided email address, it shows she thinks some need of hers trumps the years of experience IT professionals have with privacy and security. On a personal note, it's insulting and disgusting. On a broader note, Sarah Palin is not fit to be a politician in the 21st century, let alone Vice President. Finally, it's clear she's already doing something very wrong in Alaska. She needs to be impeached.
This is FreeJ in all its mighty glory. One nice thing about it is that the upstream developers have asked me to become a maintainer, just to keep the spec file upstream. The FreeJ developers certainly 'get' open source :).
To rip off the description from the homepage:
FreeJ is a vision mixer: an instrument for realtime video manipulation used in the fields of dance theatre, veejaying, medical visualisation and TV.
FreeJ lets you interact with multiple layers of video, filtered by effect chains and then mixed together. Controllers can be scripted for keyboard, midi and joysticks, to manipulate images, movies, live cameras, particle generators, text scrollers, flash animations and more.
All the resulting video mix can be shown on multiple and remote screens, encoded into a movie and streamed live to the internet.
FreeJ can be found at its homepage, at: http://freej.org/
The RPMs are available on my fedorahosted space. Since the RPMs depend on 'non-free' software, though they do not ship any non-free bits themselves, I have created a new non-free repo available at:
I would like to put this package up for review. To those maintainers responsible: would I be better off submitting a review to livna or to rpmfusion?
The SRPMs and SPEC files are there too, so feel free to rebuild them for other versions of Fedora or other RPM-based distros. If you send me the packages, I'll throw them up in that space too.
This included the much discussed, whopping 1GB of free storage that of course no one could ever sanely fill, no matter how insanely Google gave out free space. Then suddenly it was 2GB, and then one day Google added a counter to the GMail login page, madly ticking up the amount of available space.
Today, I noticed the following message at the bottom of my GMail account:
You are currently using 1094 MB (15%) of your 7142 MB.
I finally did the impossible!
(Now for breakfast at Milliways.)
In the 10 or so hours a week I have available for Fedora and Red Hat business, it seems I spend about five of them wrestling with Zimbra to get through mailing list traffic. I've noticed a bunch of personal emails have gotten lost in the stream. It's obvious I need a better tool to work with. I'm sure many of you have a personal favorite email reader you would just love to rave about. I'm open to something new, provided it can do the following.
I need something that can filter and process email *fast*. Whether it has some magic code that enables it to do IMAP at lightning speed, or maintains a local cache and pushes updates to IMAP in the background, I don't care, so long as it is fast.
I need the ability to give an email more than one label at a time. This is important for two reasons. One, I label things with Todo and Priority flags based on filters. Two, conversations can cross mailing lists, and I want a single thread to carry both labels.
Finally, I need the ability to view a discussion by thread, and not just as a series of single emails. Zimbra is really bad at this, and Evolution is only marginally better. Unfortunately, the Zimbra web client and the Zimbra connector for Evolution are both recipes for disaster.
Any scriptability in Python is definitely worth some serious extra credit.
Let's look at what Google has done. Probably the most original thing in Chrome is that there are tabs where the title bar used to be. The person who can script them and make them accessible through libwnck or some similar desktop-environment API will certainly get a lot of kudos from me. Really, the concept does not belong to Google; I believe the credit should go to Opera for first integrating the browser widgets with the tab. Another revolutionary concept is binding processes to tabs or sites, or other process-splitting criteria. Again, the credit goes elsewhere, to Microsoft, but the thoroughness with which Chrome applies it is pretty remarkable. In fact, many of the 'unique features' in Chrome have already been implemented elsewhere in some fashion.
Chrome isn't an OS though. If it were, where is the desktop environment? Where is the kernel to run it all? Where is the hardware support? Where are the millions of man-hours that go into releasing a top-notch operating system? Too busy writing browsers, apparently. Chrome is no more an OS than .Net or Python are. Chrome is a virtual machine / application container. Way back in 1995, there was a question of how to run code both on a client machine and on a more powerful server. Way back in 1985 the same question existed; there are rather large sections of Tanenbaum's Modern Operating Systems that go into RPC and function stubs. For those of you with long memories, you should remember how Java was supposed to revolutionize the web by making it more 'desktop application-like'. As a developer of web applications, I always struggle with the concept that my code runs like a weird application in the server space, but is just a bunch of documents in the client space. I would really like to see more tools that unify the design a bit, and many of those kinds of tools are coming of age.
Google has been at the forefront of these design paradigms, with Google Gears as its showpiece. Having little personal experience with Google Gears itself, I can't really comment on how it works with Chrome. But when I look at how cleverly Google has wrapped an essentially document-oriented display framework, with a sophisticated programming language on top, into a virtual machine system that can interact with the user's desktop in one smooth paradigm, I can't call Chrome anything but a Virtual Machine.
I realize the OS will definitely be bigger, without a doubt. There are also a few concerns, such as using different bootloaders for the various architectures, and then autodetecting the right kernel, and such. Is there a demand for such a thing?
My brother also mentioned he would like to be able to start up the OS image once in Wintendo, or some other OS, in cases where he can't restart the computer, or the computer can't boot up from USB. What would it take to include some virtualization platform in the livecd-to-disc tool that could work in Wintendo so all the user needs to do is run some program and get a popup window with their ever so useful persistent Fedora LiveUSB?
I install the GTK version, poke my nose around a bit, tinker with a few features, and 10 minutes later I've spit out a 40MB MKV encoded in H.264 and MP3. The quality is great, and I needed to know only a few basics about video encoding. I'll have to pose the granny test to my family later, but for a power user, the program is excellent.
There probably is no real reason why it took so long to be released. Up until 0.7, xmonad was on roughly a 1-2 month release cycle, with a lot of rapid and rabid development in between. The changes in xmonad 0.8 since 0.7 are minor, and fall chiefly into the refinement category. It seems that xmonad development has become relatively stable. One of the reasons to push the release was that there was so little development on the core parts to begin with. So the decision was made, and the release went really smoothly.
This does not mean that xmonad is starting to die out. There are two things that make xmonad radically different from many other window managers. First of all, xmonad is based on some fundamental mathematical principles and data structures common to the functional programming world. The original goal, to produce a functional window manager (no pun intended, although feel free to interpret one in ;) ), has essentially taken a year to develop from start to finish. It also uses some highly testable development methods, including properties checked with QuickCheck, the Haskell property-based testing library, that greatly increase stability. The stable, if slow, development of xmonad means that after a year of public usage, all the boilerplate code around the window manager has been refined and tested by a large number of users into some elegant, minimal code, and by minimal, I mean 1045-significant-lines-of-code minimal.
The second important component is the contrib library, which is really quite essential to development. It is another ~9000 SLOC, with a number of components that make using xmonad far easier and more flexible. In true functional style, it is just a collection of user-submitted functions that individual users have found useful and shared with others. For example, it includes fancy advanced layouts, layout combinators, alternate syntaxes for window management and key commands, and assorted tools for outputting important information to dzen or other tools and for interacting with other parts of the system. Not a single line of code is actually run unless the user uses it.
The config file itself is actually just a Haskell program cleverly disguised as a config file. In this latest release, there is an add-on in contrib that will even convert other config file formats into a Haskell file, compile it, and run it, all in the background. The only components included in the runtime are the ones being used. Of course, as a user, you never need to worry about compiling your own xmonad window manager; part of that 1000 lines of code is the boilerplate to automatically look for your ~/.xmonad/xmonad.hs, compile it, and load it up without much trouble. You can even make changes, reload your WM, and that same boilerplate will just pass your session on to the newest version.
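As a sketch of what that looks like in practice, a minimal ~/.xmonad/xmonad.hs (assuming the xmonad library is installed; the particular field values here are just examples) can be as short as:

```haskell
import XMonad

-- The whole "config file" is a Haskell program: it imports the
-- window manager as a library and starts it with a record of
-- settings. Any field left out keeps its default.
main :: IO ()
main = xmonad defaultConfig
    { terminal = "xterm"   -- program bound to the launch-terminal key
    , modMask  = mod4Mask  -- use the Windows key as the modifier
    }
```

xmonad's boilerplate recompiles this automatically when you restart the window manager, which is exactly the mechanism described above.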
I've built some packages, which are available at my personal repo at:
If you are already using my packages and repo, you should automatically get updates through yum. These packages are currently for Fedora 9, i386 only, but if anyone submits builds for other archs/platforms, I'll be glad to stick them in the repo.
On a side note, I'm having some trouble signing my packages. Does anyone know how to make sure rpmsign uses the right GPG key to sign packages?
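For what it's worth, rpmsign picks its key via the %_gpg_name RPM macro, so a ~/.rpmmacros along these lines usually does it (the key ID below is just a placeholder for your own):

```
%_signature gpg
%_gpg_name 0xDEADBEEF
```

With that in place, rpm --resign package.rpm should prompt for the passphrase of the named key.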
I'll admit the process went a lot slower than I would have liked, but it was an incredible learning process. In working on the guidelines, I learnt a lot about how to get involved in Fedora, and how to work with the myriad of communities, committees, boards and groups involved. I got to meet all sorts of people in the Fedora world, online and offline. I probably annoyed some people, being a crazy upstart kid with all sorts of crazy ideas, but it seems to have turned out alright in the end. I also learnt many random technical facts about how GHC puts code together, and about RPM, in a display of shocking intimacy that would probably make even my most 'sexually progressive' friends blush. (And believe me, they make everyone else blush.) For someone who 'just wanted to learn how to make RPMs', it was a bit of an intense introduction.
If you're curious, I have several bugs that track the progress of things. I still have a bit of a confusing mess to sort out about handling libraries, but I think I'm a bit tired from all the heat. Anyone familiar with summers in Northern Europe can well understand the problems I'm having in 32 degree weather (Centigrade). (That's 90 degrees Fahrenheit.)
Without further ado:
macros for ghc and rpm: https://bugzilla.redhat.com/show_bug.cgi?id=460304
My question to Fedorians at large is this. Have we ever sent anything official to the Chaos Computing Congress in Berlin? Do we want to?
(So I lied, it's two questions.)
Today I realized that SPEC files can loosely be translated into 'bacon files' in Dutch.
If you want to know more, check out this Wikipedia article (in Dutch, so if you don't know enough about bacon, you'll have to scroll down and find the links for other languages).
This morning, after getting some coffee and sitting down to go through mail and a daily list of blogs I futilely try to keep up with, I noticed I was starting to have a bit of a problem with fruit flies. One google query and 15 minutes later, I have the ultimate fruit fly trap set up. Now to convince the poor fruit flies they want to swim around in a soup of orange juice and liquor.
When I was finished, I sat back down at my computer to find my coffee was cold. I think I really need to learn how to stay on task better...
An American in Paris
As is our wont, Fedora | Paris ought to organise an exceptional meeting upon welcoming a distinguished member of the community in the City of Light.
This time, it is no less an international superstar, since Max Spevack will join us on August the 2nd.
Of course, scandalmongers will say that he is coming to see the Fedora-fr society first and foremost, that he is coming to discuss with the French community about its future direction, about the events it organises etc.
If the ex-Fedora Project Leader, today in charge of the EMEA community for Red Hat, turns into a Parisian for a weekend, it's simply because he has heard about our little "jamborees" and wishes to verify the legend according to which Fedora | Paris is unflinchingly The place to be.
On this occasion, and in order to digest the never-ending official meetings which will be held during the day, we will gather in front of Le vin qui danse at 7 PM, so that we introduce to our guest a very different gastronomy than the one he may know in his own country.
On this occasion, and in order to divert the original purpose of his visit, the French-speaking ambassadors MrTom, llaumgui, pingou, TiTaX and armelk
will join us and will try to distract this great man from partying, and discuss such fiddling subjects as the society's budget or the events to come...
If, like us, you want to fight against these party poopers, please report in the comments on our blog, or come bring us your support on the IRC channel:
#fedora-paris on Freenode. We are counting on you.
~ Pikachu_2014 and bochecha (thanks to milady for the translation)
Fedora at CPOSC 2008
So if you're interested in tracking what we're doing, this would be the place to check.
I'll be posting more information to my blog too once I find out more details about the event. If you want to know more about CPOSC, visit here
I have a funny suspicion, but I must warn you, this is just a suspicion and has no bearing on reality. While it seems there is a lot of goodwill and love surrounding the Ubuntu community, for their ability to make a Linux-based desktop OS that is supposedly easy to use, there have been a number of random issues in the developer community that have caused numerous controversies. Ubuntu even opened up shop with a controversy over the idea of stealing away good talent from the Debian developer community. I have the sneaky suspicion that there's a lot more to this story than just some fight with developers over packaging formats. It really seems too trivial for a large company like Intel to do such a thing. Rather, in the negotiation process with Ubuntu, something else went horribly wrong. Who knows what it was.
Personally, I'm really looking forward to the next press release. It should be really interesting.
FAS2 has really been our good showcase of "How to do TurboGears Right". The team of people who have put it together have really done an excellent job, and hacking on it is a real joy. The deployment and documentation I've set up should provide a good reference on how to turn an already-deployed project into something Migrate-able.
The biggest drive for doing it today was that we had a number of tickets open against FAS2, which require numerous small changes to the DB. One change that seemed simple enough to develop is the ability for a group administrator to specify the rules for joining the group in a text field. This was requested by the Art team.
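At the database level, a change like that boils down to adding a text column and letting group administrators fill it in. Here is a minimal sketch against an in-memory SQLite database; the table and column names are my own invention, and the real FAS2 schema and migration tooling will differ:

```python
import sqlite3

# Hypothetical sketch: a migration that adds a free-text "rules for
# joining" column to a groups table. Names are invented for illustration.
def upgrade(conn):
    conn.execute("ALTER TABLE groups ADD COLUMN joining_rules TEXT DEFAULT ''")

# Demonstrate against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE groups (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO groups (name) VALUES ('art')")

upgrade(conn)

# A group administrator can now set the joining rules as plain text.
conn.execute(
    "UPDATE groups SET joining_rules = ? WHERE name = 'art'",
    ("Link to two pieces of your artwork when applying.",),
)
rules = conn.execute(
    "SELECT joining_rules FROM groups WHERE name = 'art'"
).fetchone()[0]
print(rules)
```

The nice property of a change like this is that existing rows just pick up the empty default, so nothing else in the application has to change until the UI starts showing the field.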
So, in the vein of a million monkeys on a million typewriters, hopefully a good triaging solution will come forth (and it has :) )
In the main tent, there were many people coming by to give presentations. Unfortunately most of them were in Dutch, and I had a bit of difficulty following along. Even so, I got to learn about a wide variety of topics, including a lecture about FreeJ, an open source VJ and video-arts processing program, and Open.Amsterdam, an open standards and open source activist organization for Noord Holland, among others.
I spent most of my time in the Media Village, otherwise known as HomeFree. I have to give the Media guys credit for coming up with the most interesting tent. Most of the tents were large military field tents, generally this green-brown colour. Off to one side was HomeFree, a brightly-lit flimsy affair that somehow held up. Every night there was practically a techno party, as everyone spent most of the time showing off what they could do with open source software and tools.
One of the goals we had in the Media Village was to take a spare P4 box someone had and turn it into the best Input Output box we could make. A couple of guys brought Wii controllers, cameras, midi devices, and we spent most of the time trying to get it all integrated into FreeJ. Fedora was definitely the first choice of OS, but we had some compiling issues with FreeJ, which I spent some time working on, so we had to switch to Debian, after trying Dynebolic.
Just to show you how well Fedora and Red Hat was received, let me show you a few pictures.
Hmm... Suddenly everything in my Third Eye has become clear. I can see the Truth!
One successful Fedora installation.
A Fedora developer in his natural habitat - the middle of nowhere.
More photos can be found here
If you live in the Netherlands or Germany, we're doing a mini Eth0 after party in Wageningen next week, from Thursday to Saturday. Come one, come all, bring a tent and sleeping bag.
To get here, I took the most Dutch route possible: by bike. Honestly, how often do you get the chance to go for a random bike ride in the middle of nowhere, where you're 10 feet below sea level? All I can say is that the extra costs of taking a bike on the train were well worth the price. Even if I did get lost twice and go 5 km out of the way. (Getting directions is easier here than in Gelderland; their Dutch is easier to understand for some reason.)
I think it's really important for people to understand a little bit about how open source in Europe works compared to the US. For many people here it isn't just a development model or a way of guaranteeing some level of code security, but just a matter of life and reality. Many people here, at this event, are pretty involved not only in messing around with fun electronic toys, but also in administrating some very complex networks and systems deployments. Being able to apply a certain level of code freedom to playing with complex servers scales equally well to being able to create new tools for audio and video production. In other words, all the cool parties here use open source.
When working with Free Media geeks, having libraries of open media for use in productions is just as important. It's very common to want to use movies out of pop culture or out of alternative culture (cue obvious cut to a scene from Yellow Submarine for 750 milliseconds). The sooner most common media, even off-Hollywood films, are under licenses like the Creative Commons, the sooner artists will be able to legally and freely use this media for their performances as well. Open Source and Open Media aren't just philosophical discussions, but really affect the things that people here do.
If you want to see pictures, you can check out mine on Facebook here:
Or on Flickr:
If you're looking to comment, you can find them here:
Remember, this is MediaWiki, so there is a Talk page.
I've also added xmobar to the fray of packages I've put up in my previous blog post.
Test them out!
MYTH: PackageKit is just one level of abstraction on top of package management that serves no purpose other than to make things more complex.
REALITY: As stated in said article, Linspire has also created such a layer. There is a huge difference between PK and CNR, though. CNR is really more of a user-facing shopping center for packages, traditionally wrapped on top of Apt. I haven't looked at it at all in the past year, so I am not fully qualified to comment, but it seems to maintain that overall approach from what I've seen.
PackageKit integrates tightly with the GNOME and KDE desktops, using some cross-platform tools to make managing a system easier. Namely, PackageKit integrates with ConsoleKit and PolicyKit to let administrators fine-tune users' control over package installation. It is a far better solution than giving users access to sudo.
Neither of these layers is redundant. They provide a level of service well beyond what a basic package manager can provide. Furthermore, they provide it in a clear cross-desktop and cross-distribution fashion. The need for these two tools is clear.
MYTH: There are too many package managers, and the landscape is threatening, and unhealthy for the Linux future.
REALITY: Package managers are developed all the time, because every user needs something different. To the desktop user all package management looks the same, but under the hood it is not. For the enterprise user, package management is a crucial feature. For the enterprise desktop, being able to mix both of these use cases is the optimal solution. Let's examine each one. In this case, we're concerned only with dependency resolution, because the rest is mainly a usability layer on top.
The first reality is that not all package managers do dependency resolution the same way, because that is the key point of a package manager. For example, there are a number of dependency cases that only a handful of package managers can solve correctly, such as the libBCD use case. The last time I checked, both apt and yum fail to solve it, but zypper has no problem with it. Even if the user doesn't care, having bad dependency resolution on the desktop can make life difficult for new users as well as advanced power users. openSUSE had the drive to innovate package management a little bit, and it has paid off with quality dependency resolution.
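To make the point concrete, here is a toy illustration (package names and version constraints are entirely invented, and real solvers are far more sophisticated): a resolver that greedily locks in the newest version of everything can paint itself into a corner, while one that backtracks finds the consistent solution.

```python
# Toy repository: package name -> version -> list of (dep name, allowed versions).
# libfoo 2 pulls in libbaz 2, but libbar 1 needs libbaz 1 -- so the only
# consistent solution uses the *older* libfoo 1.
REPO = {
    "app":    {1: [("libfoo", {1, 2}), ("libbar", {1})]},
    "libfoo": {1: [], 2: [("libbaz", {2})]},
    "libbar": {1: [("libbaz", {1})]},
    "libbaz": {1: [], 2: []},
}

def resolve(goals, chosen=None):
    """Backtracking resolver: returns {name: version} or None if unsatisfiable."""
    chosen = dict(chosen or {})
    if not goals:
        return chosen
    (name, allowed), rest = goals[0], goals[1:]
    if name in chosen:
        # Already decided: fine if it fits the constraint, dead end otherwise.
        return resolve(rest, chosen) if chosen[name] in allowed else None
    # Prefer newest first, but fall back to older versions on conflict --
    # a greedy resolver would stop after the first (newest) choice and fail.
    for version in sorted(allowed & set(REPO[name]), reverse=True):
        trial = dict(chosen)
        trial[name] = version
        result = resolve(REPO[name][version] + rest, trial)
        if result is not None:
            return result
    return None

solution = resolve([("app", {1})])
print(solution)
```

The solver first tries libfoo 2, hits the libbaz conflict, backtracks, and settles on libfoo 1. A resolver without that backtracking step simply reports a conflict, which is exactly the kind of difference users experience between package managers.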
Other distributions, such as Gentoo and Crux, take two totally different approaches. Admittedly, neither of them is intended for the average desktop user, but the approach to dependency resolution on each of those distros is entirely its own.
In the enterprise, a package manager is not just a tool for delivering software once to a system, but a tool for creating bits of a system out of a variety of components. Deploying server applications, such as Java applications, can require an incredibly complicated and thorny dependency resolution process. It can include a number of criteria, such as the preferred way of deploying Java packages (there are about 15 different ways to do it), deploying custom kernels based on hardware, intended workload, configuration of the day, or one-shot actions such as updating the BIOS or other firmware. It could include other external matters, such as deploying language files for a certain region because a laptop has been transferred from one department to another as part of an enterprise reorganization. This is overkill for the desktop user, and it is critical that there be more than one package manager in the Linux landscape for this reason alone.
(One of the values of yum is its plugin system. An enterprise administrator can develop all these plugins using the same package manager that normally fits on the desktop.)
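For what that looks like in practice: a yum plugin is just a Python module dropped into the plugin directory that declares an API version and defines hook functions. The sketch below keeps the policy logic in a pure helper so it stands alone; the `exclude_hook` entry point and the `conduit` object are part of yum's plugin API, while the blocked package prefixes are invented for illustration.

```python
# Sketch of the shape of a yum plugin. A real plugin lives in
# /usr/lib/yum-plugins/ and yum imports it; nothing here imports yum,
# so the policy helper can be exercised on its own.

requires_api_version = '2.3'  # yum checks this before loading the plugin


def should_exclude(name, blocked_prefixes):
    """Pure policy helper: decide whether a package name should be filtered."""
    return any(name.startswith(prefix) for prefix in blocked_prefixes)


def exclude_hook(conduit):
    # In a real plugin, yum calls this hook and supplies `conduit`;
    # getPackages() and delPackage() are part of its plugin API.
    for pkg in conduit.getPackages():
        if should_exclude(pkg.name, ("kernel",)):
            conduit.delPackage(pkg)


print(should_exclude("kernel-devel", ("kernel",)))
```

An administrator can put arbitrarily elaborate site policy in `should_exclude` (hardware checks, department lists, and so on) without touching the desktop experience at all.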
Having PackageKit sit on top of this complex dependency resolution process is good for the enterprise desktop. Certain 'power users' in a company can be given permission to install extra packages from trusted company sources as needed. Even updates can be handled by the user. To an administrator, this situation is ideal, because he gets two guarantees: 1) the user is getting the right packages, and the system will likely not break down; 2) the user is empowered to do their job and is a satisfied coworker.
Thus, the conclusion so far is that PackageKit and multiple package managers are a clear necessity to the Linux Community.
MYTH: There are too many packaging formats, some are too archaic. Why can't we just settle on one format for writing packages?
REALITY: The LSB Packaging API uses XML. For that reason alone, there will be at least two packaging formats. Not all of us like XML. Nor is XML the ideal format for delivering package information to every system. For the reasons I stated above, different formats are needed for all the different use cases. A Crux pkgbuild or a Gentoo ebuild is a packaging format fine tuned to the systems they run on. Forcing them to use some other method to package their programs wouldn't make any sense at all.
In the article, the author writes:
Hell, if you really think scripting is the best way to install
software in 2008, why not go back to installing everything by
compiling all software from source code? That works even better
and it will work on any Linux distribution. It’s tedious and
requires every Linux user to also be at least enough of a
developer to be able to deal with gcc, make, and library
compatibility issues, but hey it would work.
This seems to imply that there is something old-fashioned about using shell scripts. This couldn't be further from the truth. In creating nearly all packages, there is always some amount of shell scripting to be done. Many packages require some form of post-install and pre-uninstall scripting. The most universal method for doing this on any distribution is a script. The best part, though, is that it works entirely behind the scenes. I never have to interact with RPM running any script, as the process is completely hidden from the end user. Putting a layer in C on top of this only makes the life of a package maintainer harder.
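For instance, the scriptlets attached to a typical shared-library package are nothing more exotic than this standard RPM spec pattern (the package itself is immaterial):

```spec
%post
# Refresh the runtime linker cache after the library is installed.
/sbin/ldconfig

%postun
# And again after it is removed.
/sbin/ldconfig
```

Two lines of shell, run silently by RPM at the right moments. The user never sees them, and no C API would make them simpler to write.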
MYTH: We should have some system of universal packages. The user should be able to get a package from just about anywhere, and it should run automatically on their Linux system.
REALITY: I am not going to go into the MSI installer debate that has been argued to death. Let's address some of the real issues that face the Linux desktop.
Every distribution uses slightly different compiler settings. There are a million and one reasons for doing so, but in effect, no package, neither RPM nor DEB, nor anything else is guaranteed to run on every Linux system. Having some universal package API will not solve that problem, and will only draw users away from the real target they should be focusing on.
Third-party software that hasn't been developed with your distro in mind presents a serious problem for everyone. Chances are, it will break your system. Having a common API will not make the problem easier, because there are always a number of other layers to worry about. For example, if the program doesn't ship with an SELinux policy, it will break for all confined users. Using something like the SuSE Build Service to get the latest Amarok for Fedora would be pointless. Your distro may use an alternate file naming scheme, such as GoboLinux. RPMs would have a hard time working in such a system, let alone some random API.
The reality of the situation is that the differences between distros give people a chance to innovate more with how a Linux system works. Trying to funnel everything down through some common layer at the bottom really reduces the chances for big innovation to happen. Steven comments that layers like PackageKit and CNR are band-aids on top of a failed process. This is not how I would describe it. They are tools to make the confusing landscape easier for the end user, while letting distributions maintain their competitive advantage. I highly doubt that such a common API will be effective, let alone accepted easily by the community.
But please, prove me wrong.