HOW I FELL IN LOVE WITH CALENDAR.TXT
by Ploum on 2025-09-03
https://ploum.net/2025-09-03-calendar-txt.html
The more I learn about Unix tools, the more I realise that we keep
reinventing Rube Goldberg versions of the wheel every day, when Unix
tools are, often, already elegant enough.
Months ago, I discovered calendar.txt. A simple text file with all your
dates, so simple and stupid that I wondered 1) why I didn’t think of it
myself and 2) how it could possibly be useful.
Calendar.txt (terokarvinen.com)
https://terokarvinen.com/2021/calendar-txt/
I downloaded the file and tried it. Without thinking much about it, I
realised that I could add the following line to my offpunk startup:
> !grep `date -I` calendar.txt --color
And, just like that, I suddenly have the important things for my day
every time I start Offpunk. In my "do_the_internet.sh", I added the
following:
> grep `date -I` calendar.txt --color -A 7
Which allows me to have an overview of the next seven days.
But what about editing? This is the alias I added to my shell to
automatically edit today’s date:
> alias calendar='vim +/`date -I` ~/inbox/calendar.txt'
It feels so easy, so elegant, so simple. All those aliases came
naturally, without having to spend more than a few seconds in the man
page of "date". No need to fiddle with a heavy web interface. I can grep
through my calendar. I can edit it with vim. I can share it, save it and
synchronise it without changing anything else, without creating any
account. Looking for a date, even far in the future, is as simple as
typing "/YEAR-MONTH-DAY" in vim.
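All of this works because calendar.txt is just a plain list of days, one
line per day, each starting with the ISO date. As a rough sketch (not
the official generator from terokarvinen.com; GNU date is assumed), such
a skeleton can be produced in one loop:

```shell
# Sketch only: pre-generate one line per day of 2025, each starting
# with the ISO date and weekday, so that grep and "vim +/DATE" can
# jump to any day. Assumes GNU date; LC_ALL=C keeps English weekdays.
export LC_ALL=C
for i in $(seq 0 364); do
    date -d "2025-01-01 + $i days" '+%Y-%m-%d %a'
done > calendar.txt
```

Events then simply live on the day’s line, e.g. "2025-09-03 Wed 10:00
dentist", which is exactly what the grep and vim commands above pick up.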
Recurring events
================
The icing on the cake became apparent when I received my teaching
schedule for the next semester. I had to add a recurring event every
Tuesday minus some special cases where the university is closed.
Not a big deal. I do it each year, fiddling with the web interface of my
calendar to find the good options to make the event recurrent then
removing those special cases without accidentally removing the whole
series.
It takes at most 10 minutes, 15 if I miss something. Ten minutes of my
life that I hate, forced to use a mouse and click on menus which are
changing every 6 months because, you know, "yeah, redesign".
But, with my calendar.txt, it takes exactly 15 seconds.
> /Tue
To find the first Tuesday.
> i
To enter insert mode and type the course and classroom numbers, press
Escape, then
> n.n.n.n.n.n.nn.n.n.n.nn.n.n.n.n.
(In vim, "n" jumps to the next Tuesday and "." repeats the edit; a
double "n" skips a week where the university is closed.)
I’m far from being a Vim expert but this occurred naturally, without
really thinking about the tool. I was only focused on the date being
correct. It was quick and pleasant.
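For the curious, the same list of Tuesdays could also be generated with
GNU date instead of vim, then pasted into calendar.txt. The start date,
course name and room below are made up for illustration:

```shell
# Hypothetical example: print 14 consecutive Tuesdays with an invented
# course and room string, ready to paste into calendar.txt.
# Assumes GNU date; LC_ALL=C keeps English weekday names.
export LC_ALL=C
for week in $(seq 0 13); do
    date -d "2025-09-16 + $week weeks" '+%Y-%m-%d %a INFO-101 room B12'
done > tuesdays.txt
cat tuesdays.txt
```

The weeks the university is closed can then simply be deleted from the
pasted block, which is the same "minus special cases" edit as in vim.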
Shared events and collaboration
===============================
I read my email in Neomutt. When I’m invited to an event, I must open a
browser to access the email through my webmail and click the "Yes"
button in order to have it added to my calendar. Events I didn’t respond
to still show up in my calendar, even if I don’t want them. It took me
some settings-digging not to display the events I had refused. Which is
kinda dumb, but so are the majority of our tools these days.
With calendar.txt, I manually enter the details from the invitation,
which is not perfect but takes less time than opening a browser, logging
into a webmail and clicking a button while waiting, at each step, for
countless JavaScript libraries to load.
Invitations are rare enough that I don’t mind entering the details by
hand. But I’m thinking about doing a small bash script that would read
an ICS file and add it to calendar.txt. It looks quite easy to do.
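Such a script could start as the sketch below. Everything in it is a
made-up example: the parsing assumes a single-event UTC invitation and
ignores time zones, line folding and recurrence, which a real importer
would need to handle. GNU sed is assumed.

```shell
# Sketch of the ICS-to-calendar.txt idea; not a full ICS parser.
# A made-up sample invitation (normally saved from the email):
cat > invite.ics <<'EOF'
BEGIN:VEVENT
DTSTART:20250903T100000Z
SUMMARY:Dentist appointment
END:VEVENT
EOF
# A calendar.txt with the matching day line:
printf '2025-09-03 Wed\n' > calendar.txt

# Extract the start stamp and the summary, stripping possible CR endings
start=$(grep -m1 '^DTSTART' invite.ics | sed 's/.*://' | tr -d '\r')
summary=$(grep -m1 '^SUMMARY' invite.ics | sed 's/^SUMMARY[^:]*://' | tr -d '\r')

# 20250903T100000Z -> "2025-09-03" and "10:00"
day=$(echo "$start" | sed 's/\(....\)\(..\)\(..\).*/\1-\2-\3/')
hour=$(echo "$start" | sed 's/.*T\(..\)\(..\).*/\1:\2/')

# Append the event to the matching day line in calendar.txt (GNU sed -i)
sed -i "/^$day/ s|\$| $hour $summary|" calendar.txt
cat calendar.txt
```

The last sed line is the whole trick: because every day already has its
own line, "adding an event" is just appending text to the right line.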
I also thought about doing the reverse: a small script that would
create an ICS and send it by email to any address added to an event. But
it would be hard to track down which events were already sent and which
ones are new. Let’s stick to the web interface when I need to create a
shared event.
Calendar.txt should remain simple and for my personal use. The point of
Unix tools is to allow you to create the tools you need for yourself,
not create a startup with a shiny name/logo that will attract investors
hoping to make billions in a couple of years by enshitifying the life of
captive users.
And when you work with a team, you are stuck anyway with the worst
possible tool that satisfies the need of the dumbest member of the team.
Usually the manager.
With Unix tools, each solution is personal and different from the
others.
Another source of inspiration: Journaling with neovim/coreutils and
dateutils (tangled.sh)
https://tangled.sh/@oppi.li/journal
Simplifying calendaring
=======================
Another unexpected advantage of the system is that you don’t need to
guess the end date of events anymore. All I need to know is that I have
a meeting at 10 and a lunch at 12. I don’t need to estimate the duration
of the meeting, which is, anyway, usually only a rough guess and not an
important piece of information. Yet you can’t create an event in a
modern calendar without giving a precise end.
Calendar.txt is simple, calendar.txt is good.
I can add events without thinking about it, without calendaring being a
chore. Sandra explains how she realised that using an online calendar
was a chore when she started to use a paper agenda.
When calendar was a verb (idiomdrottning.org)
gemini://idiomdrottning.org/calendaring
Going back to a paper calendar is probably something I will end up doing
but, in the meantime, calendar.txt is a breeze.
Trusting my calendar
====================
Most importantly, I now trust my calendar.
I’ve been burned by this before: I had entered my whole journey to a
foreign country in my online calendar only to discover, upon landing,
that the calendar had decided to be "smart" and shift all the events
because I was not in the same time zone anymore. Since then, I write the
time of an event in the title of the event, even if it looks redundant.
This also helps with events being moved by accident while scrolling on a
smartphone or in a browser. Which is rare but has happened enough to
make me anxious.
I had the realisation that I don’t trust any calendar application
because, for events with a very precise time (like a train), I always
fall back on checking the confirmation email or PDFs.
It’s not the case anymore with calendar.txt. I trust the file. I trust
the tool.
There are not many tools you can trust.
Mobile calendar.txt
===================
I don’t need notifications about events on my smartphone. If a
notification tells me about an event I forgot, it is too late anyway.
And if my phone is on silent, like always, the notification is useless.
We killed notifications with too many notifications, something I
addressed here:
Une vie sans notifications (ploum.net)
https://ploum.net/2025-09-02-mudita.html
I do want to consult/edit my calendar on my phone. Getting the file on
my phone is as easy as synchronising it with my computer through any
means. It’s a simple txt file.
Using it is another story.
Looking at my phone, I realise how far we have fallen: Android doesn’t
allow me to do a simple shortcut to that calendar.txt file which would
open on the current day. There’s probably a way but I can’t think of
one. Probably because I don’t understand that system. After all, I’m not
supposed to even try understanding it.
Android is not Unix. Android, like other proprietary operating systems,
is a cage you need to fight against if you don’t want to surrender your
choices, your data, your soul. Unix is freedom: hard to conquer but
impossible to let go of once you have tasted it.
20 YEARS OF LINUX ON THE DESKTOP (PART 4)
by Ploum on 2025-07-23
https://ploum.net/2025-07-23-linux_desktop4.html
> Previously in "20 years of Linux on the Desktop": After contributing to
the launch of Ubuntu as the "perfect Linux desktop", Ploum realises that
Ubuntu is drifting away from both Debian and GNOME. In the meantime,
mobile computing threatens to make the desktop irrelevant.
20 years of Linux on the Desktop (part 1)
https://ploum.net/2024-10-20-20years-linux-desktop-part1.html
20 years of Linux on the Desktop (part 2)
https://ploum.net/2024-12-16-linux_desktop2.html
20 years of Linux on the Desktop (part 3)
https://ploum.net/2025-03-08-linux_desktop3.html
The big desktop schism
======================
The fragmentation of the Ubuntu/GNOME communities became all too
apparent when, in 2010, Mark Shuttleworth announced during the Ubuntu
Summit that Ubuntu would drop GNOME in favour of its own in-house and
secretly developed desktop: Unity.
I was in the audience. I remember shaking my head in disbelief while
Mark was talking on stage, just a few metres from me.
Working at the time in the automotive industry, I had heard rumours that
Canonical was secretly talking with BMW to put Ubuntu in their cars and
that there was a need for a new touchscreen interface in Ubuntu. Mark
hoped to make an interface that would be the same on computers and
touchscreens. Hence the name: "Unity". It made sense but I was not
happy.
The GNOME community was, at the time, in great agitation about its
future. Some thought that GNOME was getting boring, that there was no
clear sense of direction beyond minor improvements. In 2006, the German
Linux company SUSE had signed a patent agreement with Microsoft covering
patents related to many Windows 95 concepts like the taskbar, the tray
and the start menu. SUSE was the biggest contributor to KDE and the
agreement covered the project. But Red Hat and GNOME refused to sign
that agreement, meaning that Microsoft suing the GNOME project was now
plausible.
Everyone seems to forget why GNOME and GNOME 3 and Unity happened (liam-
on-linux.dreamwidth.org)
https://liam-on-linux.dreamwidth.org/85359.html
How Microsoft shattered Gnome's unity with Windows 95
(www.theregister.com)
https://www.theregister.com/2013/06/03/thank_microsoft_for_linux_desktop_fa…
An experiment in an alternative desktop breaking with all the Windows 95
concepts was done in JavaScript: GNOME Shell.
A JavaScript desktop? Seriously? Yeah, it was cool for screenshots but
it was slow and barely usable. It was an experiment, nothing else. But
there’s a rule in the software world: nobody ever ends an experiment. An
experiment will always grow until it becomes too big to cancel and
becomes its own project.
Providing the GNOME desktop to millions of users, Mark Shuttleworth was
rightly concerned about the future of GNOME. Instead of trying to fix
GNOME, he decided to abandon it. That was the end of Ubuntu as
Debian+GNOME.
What concerned me was that Ubuntu was using more and more closed
products. Products that were either proprietary, developed behind closed
doors or, at the very least, were totally controlled by Canonical
people.
In 2006, I had submitted a Summer of Code project to build a GTK
interface to Ubuntu’s new bug tracker: Launchpad. Launchpad was an in-
house project which looked like it was based on the Python CMS Plone and
I had some experience with it. During that summer, I realised that
Launchpad was, in fact, proprietary and had no API. To my surprise,
there was no way I could get the source code of Launchpad. Naively, I
had thought that everything Ubuntu was doing would be free software.
Asking the dev team, I was promised Launchpad would become free "later".
I could not understand why Canonical people were not building it in the
open.
I still managed to build "Conseil" by doing web scraping but it broke
with every single change done internally by the Launchpad team.
As a side note, the name "Conseil" was inspired by the book "20,000
Leagues Under the Sea" by Jules Verne, a book I had downloaded from
Project Gutenberg and that I was reading on my Nokia 770. The device
was my first e-reader and I’ve read dozens of public domain books on it.
This was made possible thanks to the power of open source: FBReader, a
very good epub reading software, had been easily ported to the N770 and
was easily installable.
I tried to maintain Conseil for a few months before giving up. It was my
first realisation that Canonical was not 100% open source. Even
technically free software was developed behind closed doors or, at the
very least, with tight control over the community. This included
Launchpad, Bzr, Upstart, Unity and, later, Mir. The worst offender would
later be Snap.
To Mark Shuttleworth’s credit, it should be noted that, most of the
time, they were really trying to fix core issues with the Linux
ecosystem. In retrospect, it looks easy to see those moves as "bad".
But, in reality, Canonical had a strong vision and keeping control was
easier than doing everything in the open. Bzr was launched before git
existed (by a few days). Upstart was created before systemd. Those
decisions made sense at the time.
Half an hour with Mark 'SABDFL' Shuttleworth • The Register
(www.theregister.com)
https://www.theregister.com/2024/11/11/mark_shuttleworth_ubuntu_interview/
Even the move to Unity would later prove to be very strategic as, in
2012, GNOME would suddenly depend on systemd, which was explicitly
developed as a competitor to Upstart. Ubuntu would concede defeat in
2015 by replacing Upstart with systemd and in 2018 by reinstating GNOME
as the default desktop. But none of that was a given in 2010.
systemd, 10 years later: a historical and technical retrospective
(blog.darknedgy.net)
https://blog.darknedgy.net/technology/2020/05/02/0/
What if Ubuntu were right, a discussion with Jonathan Riddel about KDE
(ploum.net)
https://ploum.net/what-if-ubuntu-were-right/index.html
But even giving them the benefit of the doubt, Canonical would sometimes
cross huge red lines, like the time Unity came bundled with Amazon
advertisements, tracking you on your own desktop. This was, of course,
not well received.
The end of Maemo: when incompetence is not enough, be malevolent
================================================================
At the same time, in the nascent mobile world, Nokia was not the only
one suffering from the growing Apple/Google duopoly. Microsoft was going
nowhere with its own mobile operating system, Windows CE, and was
running around like a headless chicken. The director of the "Business
division" of Microsoft, a guy named Stephen Elop, signed a contract with
Nokia to develop some Microsoft Office features on Symbian. This looked
like an anecdotal side business until, a few months after that contract,
in September 2010, Elop left Microsoft to become… CEO of Nokia.
This was important news to me because, at 2010’s GUADEC (GNOME’s annual
conference) in The Hague, I had met a small tribe of free software
hackers called Lanedo. After a few nice conversations, I was excited to
be offered a position in the team.
In my mind at the time, I would work on GNOME technologies full-time
while being less and less active in the Ubuntu world! I had chosen my
side: I would be a GNOME guy.
I was myself more and more invested in GNOME, selling GNOME t-shirts at
FOSDEM and developing "Getting Things GNOME!", a software that would
later become quite popular.
First release of GTG in 2009
https://ploum.net/206-getting-things-gnome-01-just-5-minutes-more/index.html
Joining Lanedo without managing to land a job at Canonical (despite
several tries) was the confirmation that my love affair with Ubuntu had
to end.
The quest for the best non-Ubuntu distribution
https://ploum.net/best-gnome3-distribution/index.html
In 2010, Lanedo’s biggest customer was, by far, Nokia. I had been hired
to work on Maemo (or maybe Meego? This was unclear). We were not
thrilled to see an ex-Microsoft executive take the reins of Nokia.
As we feared, one of Elop’s first actions as CEO of Nokia was to kill
Maemo in the infamous "burning platform" memo. Elop was a Microsoft man
and hated anything that looked like free software. In fact, like a good
manager, he hated everything technical. Everything was the fault of the
developers, who were not "bringing their innovation to the market fast
enough". Sadly, nobody highlighted the paradox that "bringing to the
market" had never been the job of the developers. Elop’s impact on the
Nokia company was huge and nearly immediate: the stock went into free
fall.
One Nokia developer posted on Twitter: "Developers are blamed because
they did what management asked them to do". But, sometimes, management
even undid the work of the developers.
The Meego team at Nokia was planning a party for the release of their
first mass-produced phone, the N8. While popping champagne during the
public announcement of the N8 release, the whole team learned that the
phone had eventually been shipped with… Symbian. Nobody had informed the
team. Elop had been CEO for less than a week and Nokia was in total
chaos.
But Stephen Elop is your typical "successful CEO". "Successful" like in
inheriting one of the biggest and most successful mobile phone makers
and, in a couple of years, turning it into ashes. You can’t invent such
"success".
> During Elop's tenure, Nokia's stock price dropped 62%, their mobile
phone market share was halved, their smartphone market share fell from
33% to 3%, and the company suffered a cumulative €4.9 billion loss
(source: Stephen Elop on Wikipedia)
https://en.wikipedia.org/wiki/Stephen_Elop
It should be noted that, against all odds, the Meego-powered Nokia N9,
which succeeded the N8, was a success and gave true hope of Meego
competing with Android/iOS. The N9 was considered a "flagship" and it
showed. At Lanedo, we had discussed having the company buy an N9 for
each employee so we could "eat our own dog food" (something which was
done at Collabora). But Elop’s announcement was clearly understood as
the killing of Meego/Maemo and Symbian to leave room for… Windows
Phone!
The Nokia N9 was available in multiple colours (picture by Bytearray
render on Wikimedia)
https://ploum.net/files/nokia_n9.jpg
Well, Elop promised that, despite moving to Windows Phone, Nokia would
release one Meego phone every year. I don’t remember if anyone bought
that lie. We could not really believe that all those years of work would
be killed just when the success of the N9 proved that we did it right.
But that was it. The N9 was the first and the last of its kind.
Ironically, the very first Windows Phone, the Lumia 800, would basically
be the N9 with Windows Phone replacing Meego. And it would receive worse
reviews than the N9.
At that moment, one question was on everybody’s lips: was Stephen Elop
simply a bad CEO or was he destroying Nokia on purpose? Was it typical
management incompetence or malevolence? Or both?
The answer came when Microsoft, Elop’s previous employer, bought Nokia
for a fraction of the price it would have paid had Elop not been CEO.
It’s hard to argue that this was not premeditated: Elop managed to
discredit and kill every software-related project Nokia had ever done.
That way, Nokia could be sold as a pure hardware maker to Microsoft,
without being encumbered by a software culture that was too distant from
Microsoft’s. And Elop went back to his old employer a richer man,
receiving a huge bonus for having tanked a company. But remember, dear
MBA students, he’s a "very successful manager"; you should aspire to
become like him.
The ways of capitalism are inscrutable.
As foolish as it sounds, this is what the situation was: the biggest
historical phone maker in the world merged with the biggest historical
software maker. Vic Gundotra, head of the Google+ social network,
posted: "Two turkeys don’t make an eagle." But one thing was clear:
Microsoft was entering the mobile computing market because everything
else was suddenly irrelevant.
All business eyes were turned towards mobile computing where,
ironically, Debian+GNOME had been a precursor.
Just when it looked like Ubuntu had managed to make Linux relevant on
the desktop, nobody cared about the desktop anymore. How could Mark
Shuttleworth make Ubuntu relevant in that new world?
(to be continued)
> Subscribe by email or by rss to get the next episodes of "20 years of
Linux on the Desktop".
>
> I’m currently turning this story into a book. I’m looking for an agent
or a publisher interested in working with me on this book and on an
English translation of "Bikepunk", my new post-apocalyptic cyclist
typewritten novel which sold out in three weeks in France and Belgium.
REDUCING THE DIGITAL CLUTTER OF CHATS
by Ploum on 2025-05-23
https://ploum.net/2025-05-23-chats-digital-clutter.html
I hate modern chats. They presuppose we are always online, always
available to chat. They force us to see and think about them each time
we get our eyes on one of our devices. Unlike mailboxes, they are never
empty. We can’t even easily search through old messages (unlike the chat
providers themselves, which use the logs to learn more about us). Chats
are the epitome of the business idiot: they make you always busy but
prevent you from thinking and achieving anything.
It is quite astonishing to realise that modern chat systems use 100 or
1000 times more resources (in size and computing power) than 30 years
ago, that they are less convenient (no custom client, no search) and
that they work against us (centralisation, surveillance, ads). But, yay,
custom emojis!
Do not get me wrong: chats are useful! When you need an immediate
interaction or a quick on-the-go message, chats are the best.
I needed to keep being able to chat while keeping the digital clutter to
a minimum and preserving my own sanity. That’s how I came up with the
following rules.
Rule 1: One chat to rule them all
=================================
One of the biggest problems of centralised chats is that you must be on
many of them. I decided to make Signal my main chat and to remove
others.
Signal was, for me, a good compromise: respecting my privacy, being open
source and free of ads, while still having enough traction that I could
convince others to join it.
Yes, Signal is centralised and has drawbacks like relying on some Google
layers (which I worked around by using Molly-FOSS). I simply do not see
XMPP, Matrix or SimpleX becoming popular enough in the short term. Wire
and Threema had no advantages over Signal. I could not morally justify
using Whatsapp nor Telegram.
In 2022, as I decided to use Signal as my main chat, I deleted all
accounts but Signal and Whatsapp and disabled every notification from
Whatsapp, forcing myself to open it once a week to see if I had missed
something important. People who really wanted to reach me quickly
understood that it was better to use Signal. This worked so well that I
forgot to open Whatsapp for a whole month which was enough for Whatsapp
to decide that my account was not active anymore.
Le suicide de mon compte WhatsApp (ploum.net)
https://ploum.net/le-suicide-de-mon-compte-whatsapp/index.html
Not having Whatsapp is probably the best thing that has happened to me
regarding chats. Suddenly, I was out of dozens or hundreds of group
chats. Yes, I missed lots of stuff. But, most importantly, I stopped
fearing missing them. Seriously, I never missed having Whatsapp. Not
once. Thanks Meta for removing my account!
While travelling in Europe, it is now standard for taxis and hotels to
chat with you using Whatsapp. Not for me anymore. Guess what? It works
just fine. In fact, I suspect it works even better, because people are
forced to either do what we agreed during our call or to call me, which
requires more energy and planning.
Rule 2: Mute, mute, mute!
=========================
Now that Signal is becoming more popular, some group chats are migrating
to it. But I’ve learned my lesson: I’m muting them. This allows me to
see the messages only when I really want to look at them. Don’t hesitate
to mute vocal group chats and people with whom you don’t need day-to-day
interaction.
I’m also leaving group chats which are not essential. Whatsapp deletion
told me that nearly no group chat is truly essential.
Many times, people have sent me emails about what was said in a group
chat because they knew I was not there. Had I been in that group, I
would probably have missed the messages, but nobody would have cared.
If you really want to get in touch with me, send me an email!
Rule 3: No read receipts nor typing indicators
==============================================
I was busy, walking in the street with my phone in hand for directions.
A notification popped up with an important message. It was important but
not urgent. I could not deal with the message at that moment. I wanted
to take my time. One part of my brain told me not to open the message
because, if I did, the sender would see a "read receipt". He would see
that I had read the message but would not receive any answer.
For him, that would probably translate into "he doesn’t care". I
consciously avoided opening Signal until I was back home and could deal
with the message.
That’s when I realised how invasive the "read receipt" was. I disabled
it and never regretted that move. I read messages in my own time and
reply when I want to. Nobody needs to know if I’ve seen a message. It is
wrong in every aspect.
Signal preferences showing read receipts and typing indicator disabled
https://ploum.net/files/signal_receipts.jpg
Rule 4: Temporary discussions only
==================================
The artist Bruno Leyval, who did the awesome cover of my novel Bikepunk,
is obsessed with deletion and disappearance. He set our Signal chat so
that every message is deleted after a day. At first, I didn’t see the
point.
Until I understood that this was not only about privacy; it was also
about decluttering our minds, our memories.
Since then, I’ve set every chat in Signal to delete messages after one
week.
Signal preferences showing disappearing messages set to one week
https://ploum.net/files/signal_disappearing.jpg
This might seem like nothing but it changes everything. Suddenly, chats
are not a long history of clutter. Suddenly, you see chats as transient
and save the things you want to keep. Remember that you can’t search in
chats? This means chats are transient anyway. With most chats, your
history is not saved and could be lost by simply dropping your phone on
the floor. Something important should be kept from a chat? Save it! But
it should probably have been an email.
Embracing the transient nature of chats, and making it explicit, greatly
reduces the clutter.
Conclusion
==========
I know that most of you will say "That’s nice, Ploum, but I can’t do
that because everybody is on XXX", where XXX is most often Whatsapp in
my own circles. But this is wrong: you believe everybody is on XXX
because you yourself use XXX as your main chat. When surveying my
students this year, I discovered that nearly half of them were not on
Whatsapp. Not for some hard reason, but because they never saw the need
for it. In fact, they were all spread over Messenger, Instagram, Snap,
Whatsapp, Telegram and Discord. And they all believed that "everybody is
where I am".
In the end, the only real choice to make is between being able to get
immediately in touch with a lot of people or having room for your mental
space. I choose the latter, you might prefer the former. That’s fine!
I still don’t like chat. I’m well aware that the centralised nature of
Signal makes it a short-term solution. But I’m not looking for the best
sustainable chat. I just want fewer chats in my life.
If you want to get in touch, send me an email!
GOODBYE OFFPUNK, WELCOME XKCDPUNK!
by Ploum on 2025-04-01
https://ploum.net/2025-04-01-xkcdpunk.html
For the last three years, I’ve been working on Offpunk, a command-line
gemini and web browser.
Offpunk.net
https://offpunk.net
While my initial goal was to browse the Geminisphere offline, the
mission has slowly morphed into cleaning and unenshitifying the modern
web, offering users a minimalistic way of browsing any website with
interesting content.
Rendering the Web with Pictures in Your Terminal (ploum.net)
https://ploum.net/2022-03-24-ansi_html.html
Focusing on essentials
======================
From the start, it was clear that Offpunk would focus on essentials. If
a website needs JavaScript to be read, it is considered non-essential.
It worked surprisingly well. In fact, on multiple occasions, I’ve
discovered that some websites work better in Offpunk than in Firefox. I
can comfortably read their content in the former, not in the latter.
By default, Offpunk blocks domains deemed nonessential or too
enshitified, like twitter, X, facebook, linkedin and tiktok (those are
configurable, of course; defaults are in offblocklist.py).
Cleaning websites, blocking worst offenders. That’s good. But it is only
a start.
It’s time to go further, to really cut out all the crap from the web.
And, honestly, besides XKCD comics, everything is crap on the modern
web.
> As an online technical discussion grows longer, the probability of a
comparison with an existing XKCD comic approaches 1.
> – XKCD’s law
XKCD’s law (ploum.net)
https://ploum.net/xkcds-law/index.html
If we know that we will end our discussion with an XKCD comic, why not
cut all the fluff? Why don’t we go straight to the conclusion in a true
minimalistic fashion?
Introducing XKCDpunk
====================
That’s why I’m proud to announce that, starting with today’s release,
Offpunk 2.7 will now be known as XKCDpunk 1.0.
Xkcdpunk.net
https://xkcdpunk.net
XKCDpunk includes a new essential command, "xkcd", which, as you
guessed, takes an integer as a parameter and displays the relevant XKCD
comic in your terminal, while caching it so you can browse it offline.
Screenshot of XKCDpunk showing comic 626
https://ploum.net/files/xkcdpunk1.png
Of course, this is only an early release. I need to clean a lot of code
to remove everything not related to accessing xkcd.com. Every non-xkcd
related domain will be added to offblocklist.py.
I also need to clean every occurrence of "Offpunk" to change the name.
All of offpunk.net needs to be migrated to xkcdpunk.net. Rome was not
built in a day.
Don’t hesitate to install the "offpunk" package, as it will still be
called that in most distributions.
offpunk package versions - Repology (repology.org)
https://repology.org/project/offpunk/versions
And report bugs on the xkcdpunk mailing list.
xkcdpunk-users on lists.sr.ht
https://lists.sr.ht/~lioploum/offpunk-users
Goodbye Offpunk, welcome XKCDpunk!
THE CANDID NAIVETY OF GEEKS
by Ploum on 2025-03-28
https://ploum.net/2025-03-28-geeks-naivety.html
I mean, come on!
================
Amazon recently announced that, from now on, everything you say to Alexa
will be sent to their server.
Pluralistic: Amazon annihilates Alexa privacy settings, turns on
continuous, nonconsensual audio uploading (15 Mar 2025)
(pluralistic.net)
https://pluralistic.net/2025/03/15/altering-the-deal/
What surprised me the most with this announcement is how it was met with
surprise and harsh reactions. People felt betrayed.
I mean, come on!
Did you really think that Amazon was not listening to you before that?
Did you really buy an Alexa trusting Amazon to "protect your privacy"?
Recently, I came across a comment on Hacker News where the poster
defended Apple as protecting privacy of its users because "They market
their product as protecting our privacy".
I mean, once again, come on!
Did you really think that "marketing" is telling the truth? Are you a
freshly disembarked Thermian? (In case you missed it, this is a Galaxy
Quest reference.)
The whole point of marketing is to lie, lie and lie again.
What is the purpose of that gadget?
===================================
The whole point of the whole Amazon Alexa tech stack is to send
information to Amazon. That’s the main goal of the thing. The fact that
it is sometimes useful to you is a direct consequence of the thing
sending information to Amazon. Just like Facebook linking you with
friends is a consequence of you giving your information to Meta.
Usefulness is only a byproduct of privacy invasion.
Having a fine-grained setting enabling "do not send all information to
Amazon please" is, at best, wishful thinking. We had the same in the
browser ("do-not-track"). It didn’t work.
I’ve always been convinced that the tech geeks who bought an Amazon
Alexa perfectly knew what they were doing. One of my friends has a
Google Home and justifies it with "Google already knows everything about
our family through our phones, so I’m trading only a bit more of our
privacy for convenience". I don’t agree with him but, at the very least,
it’s a logical opinion.
We all know that what can be done with a tool will be done eventually.
And you should prepare for it. On a side note, I also postulate that
Amazon removed that setting because they were already gathering too much
data for its existence to be defensible in case of a future complaint or
investigation: "How did you manage to get those data while your product
says it will not send data?".
But, once again, any tech person knows that pushing a button in an
interface is not a proof of anything in the underlying software.
Please stop being naive about Apple
===================================
That’s also the point with Apple: Apple is such a big company that the
right hand has no idea what the left hand is doing. Some privacy
people are working at Apple and doing a good job. But their work is
continuously diluted by the interests of quick and cheap production,
marketing, releases, new features and gathering data for advertising
purposes. Apple is not a privacy company and has never been one: it is
an opportunistic company which advertises privacy when it feels it
could help sell more iPhones. But deep inside, they absolutely don’t
care and they will absolutely trade the (very little) privacy they
offer if it means selling more.
Sometimes, geek naivety is embarrassingly stupid. Like "brand loyalty".
Marketing lies to you. As a rule of thumb, the bigger the company, the
bigger the lie. In tech, there’s no way for a big company not to lie
because marketers have no real understanding of what they are selling.
Do you really think that the people who chose to advertise "privacy" at
Apple have any strong knowledge about "privacy"? That they could simply
give you a definition of "privacy"?
I know that intelligent people go through great intellectual
contortions to justify buying the latest overpriced spying shiny
coloured screen with an apple logo. It looks like most humans actively
seek to have their freedom restricted. Seirdy calls it "the
domestication of users".
WhatsApp and the domestication of users (seirdy.one)
https://seirdy.one/posts/2021/01/27/whatsapp-and-the-domestication-of-users/
And that’s why I see Apple as a cult: most tech people cannot be
reasoned with about it.
The Cost of Being Convinced (ploum.net)
https://ploum.net/the-cost-of-being-convinced/index.html
You can’t find a technical solution to a lie
============================================
Bill Cole, a contributor to SpamAssassin, recently posted on Mastodon
that the whole DNS stack designed to protect against spammers was not
working:
> spammers are more consistent at making SPF, DKIM, and DMARC correct
than are legitimate senders.
🆘Bill Cole 🇺🇦: "@jwz@mastodon.social The stats we collect for the…"
(toad.social)
https://toad.social/@grumpybozo/114213600922816869
It is, once again, a naive approach to spam. The whole stack was
designed with the mindset "bad spammers will try to hide themselves".
But what is happening in your inbox, really?
Most spam is not "black hat spam". It is what I call "white-collar
spam": perfectly legitimate companies sending you emails from legitimate
addresses. You slept in a hotel during a business trip? Now you will
receive weekly emails about that hotel for the rest of your life. And it
is the same for any shop, any outlet, anything you have done. Your inbox
is filled with "white-collar" junk. And they know this perfectly well.
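This is exactly what makes SPF and friends powerless here. A minimal sketch (in Python, with a made-up record string; real records are fetched from DNS, which is out of scope) of what an SPF record actually asserts: it only lists which servers may send mail for a domain, and says nothing about whether that mail is wanted. A white-collar spammer sending from its own legitimate domain passes such a check trivially.

```python
def parse_spf(record: str) -> dict:
    """Split an SPF TXT record into its mechanisms and its final policy."""
    parts = record.split()
    if parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    # Mechanisms name the hosts allowed to send for the domain.
    mechanisms = [p for p in parts[1:] if not p.endswith("all")]
    # The trailing "all" qualifier says what to do with everyone else.
    policy = next((p for p in parts[1:] if p.endswith("all")), "?all")
    return {"mechanisms": mechanisms, "default": policy}

# Hypothetical record for an imaginary domain:
record = "v=spf1 ip4:203.0.113.0/24 include:_spf.example.com -all"
print(parse_spf(record))
```

Nothing in that record, nor in DKIM or DMARC, encodes consent: the stack authenticates the sender, it does not judge them.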
In Europe, we have a rule, the GDPR (RGPD in French), which forbids
businesses to keep your data without your express consent. For several
months, I experimented with sending a legal threat in reply to every
single piece of white-collar spam I received. Guess what: they always
replied that it was a mistake, that I was now removed, that it should
not have happened, that I had checked the box (which was false, but how
could I prove it?) or even, on one occasion, that they had restored a
backup containing my email from before I unsubscribed (I had
unsubscribed from that one 10 years earlier, which makes it very
unlikely).
In short, they lied. All of them. All of them are spammers and they lie
pretending that "they thought you were interested".
In one notable case, they told me that they had erased all my data
while, still having their cookie on my laptop, I could see and use my
account. Thirty days later, I was still connected and I figured out that
they had simply changed my user id from "ploum" to "deleted_ploum" in
the database, while telling me straight to my face that they had no
information about me in their database.
Corporations are lying. You must treat every corporate word as a
straight lie until proved otherwise.
But Ploum, if all marketing is a lie, why trust Signal?
=======================================================
If marketing can’t be trusted, why do I use Signal and Protonmail?
First of all, Signal is open source. And, yes, I’ve read some of the
source code for features I was interested in. I’ve also read through
some very deep audits of Signal’s source code.
Reviewing the Cryptography Used by Signal (soatok.blog)
https://soatok.blog/2025/02/18/reviewing-the-cryptography-used-by-signal/
I’m also trusting the people behind Signal. I’m trusting the people who
recommend Signal. I’m trusting the way Signal is built.
But most importantly, Signal’s sole reason to exist is to protect the
privacy of its users. It’s not even a corporation and, yes, this is
important.
Yes, they could lie in their marketing. Like Telegram did (and still
does, AFAIK). But this would undermine their sole reason to exist.
I’m not saying that Signal is perfect: I’m saying I trust that they
themselves believe what they announce. For now.
What about Protonmail?
======================
For the same reasons, Protonmail can, to some extent, be trusted.
Technically, they can access most of the emails of their customers
(because those emails arrive unencrypted at PM’s servers). But I trust
Protonmail not to sell any data because, if there were any doubt that
they did, the whole business would crumble. They have a strong
commercial incentive to do everything they can to protect my data. I pay
them for that. It’s not a "checkbox" they could remove, it’s their whole
raison d’être.
This is also why I pay for Kagi as my search engine: their business
incentive is to provide me with the best search results, with less slop
and less advertising. As soon as they start doing some kind of
advertising, I will stop paying them and they know it. The same goes if
Kagi becomes too AI-centric for my taste, like it did for Lori:
Why I Lost Faith in Kagi (d-shoot.net)
https://d-shoot.net/kagi.html
I don’t blindly trust companies. Paying them is not a commitment to obey
them, au contraire. Every relation with a commercial entity is, by
essence, temporary. I pay for a service with strings attached. If the
service degrades, if my conditions are not respected, I stop paying. If
I’m not convinced they can be trusted, I stop paying them. I know I can
pay and still be the product. If I have any doubt, I don’t pay. I try to
find an alternative and migrate to it. Email being critical to me, I
always have two accounts with two different trustworthy providers and an
easy migration path (which boils down to changing my DNS config).
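For the record, that "changing my DNS config" can be as small as editing the MX records of one’s domain. A hypothetical zone-file fragment (domain and host names are made up for illustration):

```zone
; mail for example.org currently goes to provider A:
example.org.  3600  IN  MX  10 mail.provider-a.example.
; migrating to provider B is swapping in their servers instead:
; example.org.  3600  IN  MX  10 mx1.provider-b.example.
; example.org.  3600  IN  MX  20 mx2.provider-b.example.
```

Once the records propagate, new mail flows to the other provider; nothing on the sender’s side has to change.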
Fighting the Androidification
=============================
Cory Doctorow speaks a lot about enshittification, where users are more
and more exploited. But one key component of a good enshittification is
what I call "Androidification".
Androidification is not about degrading the user experience. It’s about
closing doors, removing special use cases, being less and less
transparent. It’s about taking open source software and frog-boiling it
into a fully closed proprietary state while killing all the competition
in the process.
Android was, at first, an Open Source project. With each release, it
became more closed, more proprietary. As I explain in my "20 years of
Linux on the Desktop" essay, I believe it has always been part of the
plan. Besides the Linux kernel, Google was always wary not to include
any GPL or LGPL licensed library in Android.
20 years of Linux on the Desktop (part 3) (ploum.net)
https://ploum.net/2025-03-08-linux_desktop3.html
It took them 15 years but they finally managed to kill the Android Open
Source Project:
Google will develop the Android OS fully in private, here's why
(www.androidauthority.com)
https://www.androidauthority.com/google-android-development-aosp-3538503/
This is why I’m deeply concerned by Canonical’s motivation for switching
Ubuntu’s coreutils to an MIT-licensed version.
Ubuntu 25.10 plans to swap GNU coreutils for Rust (go.theregister.com)
https://go.theregister.com/feed/www.theregister.com/2025/03/19/ubuntu_2510_…
This is why I’m deeply concerned that Protonmail quietly removed the
issue tracker from its Protonmail Bridge Github page (making the
development completely opaque for what is an essential tool for
technical Protonmail users).
I mean, commons!
================
This whole naivety is also why I’m deeply concerned by very intelligent
tech people not understanding what "copyleft" is, why it is different
from "open source" and why they should care.
We need more of Richard Stallman, not less (ploum.net)
https://ploum.net/2023-06-19-more-rms.html
Corporations are not your friends. They never were. They lie. The only
possible relationship with them is an opportunistic one. And if you want
to build commons that they cannot steal, you need strong copyleft.
On Open Source and the Sustainability of the Commons (ploum.net)
https://ploum.net/2024-07-01-opensource_sustainability.html
But firstly, my fellow geeks, you need to lose your candid naivety.
I mean, come on, let’s build the commons!
20 YEARS OF LINUX ON THE DESKTOP (PART 3)
by Ploum on 2025-03-08
https://ploum.net/2025-03-08-linux_desktop3.html
> Previously in "20 years of Linux on the Deskop": After contributing to
the launch of Ubuntu as the "perfect Linux desktop", Ploum realises that
Ubuntu is drifting away from both Debian and GNOME. But something else
is about to shake the world…
20 years of Linux on the Desktop (part 1)
https://ploum.net/2024-10-20-20years-linux-desktop-part1.html
20 years of Linux on the Desktop (part 2)
https://ploum.net/2024-12-16-linux_desktop2.html
The new mobile paradigm
=======================
While I was focused on Ubuntu as a desktop solution, another
GNOME+Debian product had appeared and was shaking the small free
software world: Maemo.
It will come as a shock to the youngest but this was a time without
smartphones (yes, we had electricity and, no, the dinosaurs were already
extinct, please keep playing Pokémon instead of interrupting me). Mobile
phones were still quite new and did exactly two things: calls and
SMSes. In fact, they were sold as calling machines, and the SMS frenzy,
which was just a technical hack around the GSM protocol, took everybody
by surprise, including operators. Were people really using awkward
cramped keyboards to send each other floods of small messages?
Small pocket computers with tiny keyboards started to appear. They were
using proprietary operating systems like WinCE or Symbian and browsing a
mobile version of the web, called "WAP", that required specific WAP
sites and that nobody used. The Blackberry was so proprietary that it
had its own proprietary network. It was particularly popular amongst
business people who wanted to look serious. Obama was famously addicted
to his Blackberry, to the point that the firm had to create a secure
proprietary network just for him once he took office in the White House.
But like the others, Blackberries were very limited, with very limited
software. Nothing like a laptop computer.
N770, the precursor
===================
In 2005, Nokia very quietly launched the N770 as an experiment. Unlike
its competitors, it had no keyboard but a wide screen that could be used
with a stylus. Inside ran a Debian system with an interface based on
GNOME: Maemo.
The N770, browsing Wikipedia
https://ploum.net/files/old/Nokia770-fi-wiki-600x450.jpg
Instead of doing all the development in-house, Nokia was toying with
free software. Most of the software work was done by small European
companies created by free software hackers between 2004 and 2005. Those
companies, often created specifically to work with Nokia, were only a
handful of people each and had very narrow expertise. Fluendo was
working on the media framework GStreamer. Imendio was working on the
GTK user interface layer. Collabora was focusing on messaging software.
Etc.
Far from the hegemony of American giant monopolists, the N770 was a
mostly European attempt at innovating through a collaborative network of
smaller and creative actors, everything led by the giant Nokia.
During FOSDEM 2005, GNOME developer Vincent Untz lent me an N770
prototype for two days. The first night was a dream come true: I was
lying in bed, chatting on IRC and reading forums. Once the N770 was
publicly released, I immediately bought my own. While standing in line
at the bakery one Sunday morning, I discovered that there was an
unprotected wifi. I used it to post a message on the Linuxfr website
telling my fellow geeks that I was waiting for my croissants and could
still chat with them thanks to free software.
These days, chatting while waiting in a queue has been normalised to the
point that you notice someone not doing it. But, in 2005, this was brand
new. So new that it started a running meme about "Ploum’s baker" on
Linuxfr. Twenty years later, some people I meet for the first time still
greet me with "say hello to your baker" when they learn who I am. For
the record, the baker, already an old woman at the time of the original
post, retired a couple of years later and the whole building was
demolished to make way for a motorbike shop.
This anecdote highlights a huge flaw of the N770: without wifi, it was a
dead weight. When I showed it to people, they didn’t understand what it
was; they asked why I would carry it if I could not make calls with it.
Not being able to use the Internet without wifi was a huge miss but, to
be fair, 3G didn’t exist yet. Another flaw was that installing new
software was far from user-friendly. Being based on Debian, Maemo
offered a Synaptic-like interface where you had to select your software
from a very long list of .deb packages, including the technical
libraries.
Also, it was slow and prone to crash but that could be solved.
Having played with the N770 in my bed and having seen the reactions of
people around me when I used it, I knew that the N770 could become a
worldwide hit. It was literally the future. There were only two things
that Nokia needed to solve: make it a phone and make it easy to install
new software. Also, if it could crash less, that would be perfect.
The Nokia (un)management guide to failure
=========================================
But development seemed to stall. It would take more than two years for
Nokia to successively release two successors to the N770: the N800 and
the N810. But, besides some better performance, none of the core issues
were addressed. None of those were phones. None of those offered easy
installation of software. None were widely released. In fact, it was all
so low-profile that you could only buy them through the Nokia website in
some specific countries. The items were not in traditional shops or
catalogues. When I asked my employer to get an N810, the purchasing
department was unable to find a reference: it didn’t exist for them.
Tired of multiple days of discussion with the purchasing administration,
my boss gave me his own credit card, asked me to purchase it on the
Nokia website and filed a "miscellaneous material expense" to be
reimbursed. The thing was simply not available to businesses. It was as
if Nokia wanted Maemo to fail at all costs.
While the N800 and N810 were released, a new device appeared on the
market: the Apple iPhone.
I said that the problem with the N770 was that you had to carry a phone
with it. Steve Jobs had come to the same conclusion with the iPod.
People had to carry an iPod and a phone. So he added the phone to the
iPod. It should be highlighted that the success of the iPhone took
everyone by surprise, including Steve Jobs himself. The original iPhone
was envisioned as an iPod and nothing else. There was no app, no app
store, no customisation (Steve Jobs was against it). It was nevertheless
a hit because you could make calls, listen to music, and Apple spent a
fortune on marketing to advertise it worldwide. The marketing frenzy was
crazy. Multiple people who knew I was "good with computers" asked me if
I could unlock the iPhone they had bought in the USA and which was not
working in Europe (I could not). They had spent a fortune on a device
that was not working. Those who had one were showing it to everyone.
With the iPhone, you had music listening and a phone on one single
device. In theory, you could also browse the web. Of course, there was
no 3G so browsing the web was mostly done through wifi, like the N770.
But, at the time, websites were designed with wide screens in mind and
Flash was all the rage. The iPhone did not support Flash and its screen
was vertical, which made web browsing a lot worse than on the N770. And,
unlike the N770, you could not install any application.
The iPhone 1 was far from the revolution Apple wants us to believe it
was. It was just very good marketing. In retrospect, the N770 could have
been a huge success had Nokia done any marketing at all. They did none.
Another Linux on your mobile
============================
In 2008, Google launched its first phone which still had a physical
keyboard. Instead of developing the software from scratch, Google used a
Linux system initially developed as an embedded solution for cameras:
Android. At the same time, Apple came to the realisation I had in 2005
that installing software was a key feature. The App Store was born.
Phone, web browsing and custom applications, all on one device. Since
2005, people who had tried the N770 knew this was the answer. They
simply did not expect it from Apple nor Google.
When Android was first released, I thought it was what Maemo should have
been. Because of the Linux kernel, I assumed it would be a "free"
operating system. I made a deep comparison with Maemo, diving into some
parts of the source code, and was surprised by some choices. Why Java?
And why would Android avoid GStreamer in its multimedia stack? Technical
explanations around that choice were not convincing. Years later, I
would understand that this was not a technical choice: besides the Linux
kernel itself, Google would explicitly avoid every GPL and LGPL licensed
code. Android was only "free software" by accident. Gradually, the
Android Open Source Project (AOSP) would be reduced to a mere skeleton
while Android itself became more and more restricted and proprietary.
In reaction to the iPhone and to Android, Nokia launched the N900 at the
end of 2009. At last, the N900 was a phone. It even included an app
store called, for unknown marketing reasons, the "OVI store". The phone
was good. The software was good, with the exception of the infamous OVI
store (which was bad, had a bad name, a non-existent software offering
and, worst of all, conflicted with deb packages).
The N900 would probably have taken the world by storm if released 3
years earlier. It would have been a success and a huge competitor to the
iPhone if released 18 months before. But it was too late: the world was
settling into an Apple/Google duopoly. A duopoly that could have been
slightly shaken by the N900 if Nokia had done at least some marketing.
It should be noted that the N900 had a physical keyboard. But, at that
point, nobody really cared.
When failing is not enough, dig deeper
======================================
At least, there was the Maemo platform. Four years of work. Something
could be done with that. That’s why, in 2010, Nokia decided to… launch
Meego, a new Linux platform which replaced the Debian infrastructure
with RPMs and the GNOME infrastructure with Qt.
No, really.
Even if it was, theoretically, the continuation of Maemo (Maemo 6,
codenamed Harmattan, was released as Meego 1), it felt like starting
everything from scratch with a Fedora+KDE system. Instead of a strong
leadership, Meego was a medley of the Linux Foundation, Intel, AMD and
Nokia. Design by committee with red tape everywhere. From the outside,
it looked like Nokia had outsourced its own management incompetence and
administrative hubris. The N9 phone would be released in 2011 without a
keyboard but with Meego.
keyboard but with Meego.
History would repeat itself two years later when people working on Meego
(without Nokia) would replace it with Tizen. Yet another committee.
From being three years ahead of the competition in 2005 thanks to Free
Software, Nokia managed to end up two years behind in 2010 thanks to
incredibly bad management and to choosing to hide its products instead
of advertising them.
I’ve no inside knowledge of what Nokia was at this time but my
experience in the industry allows me to perfectly imagine the hundreds
of meetings that probably happened at that time.
When business decisions look like very bad management from the outside,
it is often because they are. In the whole of Europe at the time,
technical expertise was seen as the realm of those who were not gifted
enough to become managers. As a young engineer, I thought that managers
at the higher levels were pretentious and incompetent idiots. After
climbing the ladder and becoming a manager myself, years later, I got
confirmation that I had even underestimated the sheer stupidity of
management. Not only were most managers idiots, they were also proud of
their incompetence and, as this story would demonstrate, they sometimes
needed to become deeply dishonest to succeed.
It looks like Nokia never really trusted its own Maemo initiative
because no manager really understood what it was. To add insult to
injury, the company bought Symbian OS in 2008, an operating system which
was already outdated and highly limited at that time. Nobody could
figure out why they spent cash on that and why Symbian suddenly became
an internal competitor to Maemo (Symbian was running on way cheaper
devices).
The emotional roller coaster
============================
In 2006, I was certain that free software would take over the world. It
was just a matter of time. Debian and GNOME would soon be on most
desktops thanks to Ubuntu and on most mobile devices thanks to Maemo.
There was no way for Microsoft to compete against such power. My wildest
dreams were coming true.
Five years later, the outlook was way darker. Apple was taking the lead
by being even more proprietary and closed than Microsoft. Google seemed
like the good guys but could we trust them? Even Ubuntu was drifting
away from its own Debian and GNOME roots. The communities I loved so
much were now fragmented.
Where would I go next?
(to be continued)
> Subscribe by email or by rss to get the next episodes of "20 years of
Linux on the Desktop".
>
> I’m currently turning this story into a book. I’m looking for an agent
or a publisher interested in working with me on this book and on an
English translation of "Bikepunk", my new post-apocalyptic cyclist
typewritten novel which sold out in three weeks in France and Belgium.
THE ENGAGEMENT REHAB
by Ploum on 2025-02-27
https://ploum.net/2025-02-27-engagement-rehab.html
I’ve written extensively, in French, about my quest to break my
"connection addiction" by doing what I called "disconnections". At
first, it was only three months without major news media and social
networks. Then I tried to do one full year during which I would only
connect once a day.
This proved too ambitious and failed around May, when the amount of
stuff that required me to be online (banking, travel booking, online
meetings, …) became too high.
À la recherche de la déconnexion parfaite (ploum.net)
https://ploum.net/2025-02-11-deconnexion_parfaite.html
But I’m not giving up. I started 2025 by buying a new office chair and
pledging to never be connected in that chair. I disabled wifi in the
BIOS of my laptop. To be online, I now need to use my laptop at my
standing desk, which has an RJ-45 cable.
This means I can be connected whenever I want but I physically feel the
connection, as I have to stand up. There’s now a clear physical
difference between "being online" and "being in my offline bubble".
This doesn’t mean that I’m as super productive as I dreamed. Instead of
working on my current book project, I do lots of work on Offpunk and I
draft blog posts like this one. Not great but, at least, I feel I’ve
accomplished something at the end of the day.
Hush is addicted to YouTube and reflects on spending 28 days without it.
Like me, they found themselves not that much more productive but, at the
very least, not feeling like shit at the end of the day.
Reflection on Four Weeks without YouTube (hush)
gemini://tilde.town/~hush/gemlog/2025-02-26.gmi
I’ve read that post because being truly disconnected forces me to read
more of what is in my Offpunk. My RSS feeds, my toread list and many
gemlogs. This is basically how I start every day:
Ploum’s workflow with Offpunk
gemini://offpunk.net/workflow_ploum.gmi
I’ve discovered that between 20 and 25% of what I read from online
sources is from Gemini. It appears that I like "content" on Gemini.
Historically, people were complaining that there was no content on
Gemini, that most posts were about the protocol itself.
There Is No Content on Gemini (ploum.net)
https://ploum.net/2022-10-05-there-is-no-content-on-gemini.html
Then there was a frenzy of posts about why social media were bad. Those
are now subtly being replaced by some kind of self-reflection about our
own habits, our own addictions. Like this one about addiction to
analytics:
analytics are risky business (drmollytov.flounder.online)
gemini://drmollytov.flounder.online/gemlog/2025-02-27.gmi
That’s when it struck me: we are all addicted to engagement. On both
sides. We like being engaged. We like seeing engagement on our own
content. Gemini is an engagement rehab!
While reading Gemini posts, I feel that I’m not alone being addicted to
engagement, suffering from it and trying to find a solution.
And when people in the real world start, out of the blue, asking my
opinion about Elon Musk’s latest declaration, it reminds me that
engagement addiction is not an individual problem but a societal one.
Anyway, welcome to Gemini, welcome to rehab! I’m Ploum and I’m addicted
to engagement.
MY COLLEAGUE JULIUS
by Ploum on 2024-12-23
https://ploum.net/2024-12-23-julius-en.html
Traduction en français
https://ploum.net/2024-12-23-julius-fr.html
Do you know Julius? You certainly know who I’m talking about!
I met Julius at university. A measured, friendly young man. He always
wore a smile on his face. What struck me about Julius, aside from his
always perfectly ironed clothes, was his ability to listen. He never
interrupted me. He accepted gratefully when he was wrong. He answered
questions without hesitation.
He attended all the classes and often asked for our notes to "compare
with his own" as he said. Then came the infamous computer project. As a
team of students, we had to code a fairly complex system software using
the C language. Julius took part in all our meetings but I don’t
remember witnessing him write a single line of code. In the end, I think
he did the report formatting. Which, to his credit, was very well done.
Because of his charisma and elegance, Julius was the obvious choice to
give the final presentation.
He was so self-confident during the presentation that the professors
didn’t immediately notice the problem. He had started talking about the
C virtual machine used in our project. He even showed a slide with an
unknown logo and several random screenshots which had nothing to do with
anything known in computing.
For those who don’t know about computing, C is a compiled language. It
doesn’t need a virtual machine. Talking about a C virtual machine is
like talking about the carburettor of an electric vehicle. It doesn’t
make sense.
I stood up, interrupted Julius and improvised by saying it was just a
joke. “Of course!” said Julius, looking at me with a big smile. The jury
was perplexed. But I saved the day.
Throughout our studies, I’ve heard several professors discuss the
“Julius case.” Some thought he was very good. Others said he was lacking
a fundamental understanding. Despite failing some classes, he ended up
graduating with me.
After that, our paths diverged for several years.
I had been working for nearly a decade at a large company where I had
significant responsibilities. One day, my boss announced that recruiters
had found a rare gem for our team. An extraordinary resume, he told me.
From the perfect cut of his suit, I recognised Julius before seeing his
face.
Julius! My old classmate!
If I had aged, he had matured. Still charismatic and self-assured. He
now sported a slightly graying three-day beard that gave him an air of
wise authority. He genuinely seemed happy to see me.
We talked about the past and about our respective careers. Unlike me,
Julius had never stayed very long in the same company. He usually left
after a year, sometimes less. His resume was impressive: he had gained
various experiences, touched on all areas of computing. Each time, he
moved up in skills and salary. I would later discover that, while we
held similar positions, he had been hired at twice my salary. He also
got bonuses I didn’t even know existed.
But I wasn’t aware of this aspect when we started working together. At
first, I tried to train him on our projects and internal processes. I
assigned him tasks on which he would ask me questions. Many questions,
not always very relevant ones. With his characteristic calm and his
signature smile.
He took initiatives. Wrote code or documentation. He had answers to all
the questions we could ask, regardless of the field. Sometimes it was
very good, often mediocre or, in some cases, complete nonsense. It took
us some time to understand that each of Julius’s contributions needed to
be completely reviewed and corrected by another team member. If it was
not our field of expertise, it had to be checked externally. We quickly
adopted an unwritten rule: no document from Julius should leave the team
before being proofread by two of us.
But Julius excelled in formatting, presentation, and meeting management.
Regularly, my boss would come up to me and say, “We’re really lucky to
have this Julius! What talent! What a contribution to the team!”
I tried, without success, to explain that Julius understood nothing of
what we were doing. That we had reached the point where we sent him to
useless meetings to get rid of him for a few hours. But even that
strategy had its limits.
It took us a week of crisis management meetings to calm down a customer
disappointed by an update of our software. We had to explain that, even
though Julius had promised an interface simplified to a single button
that would do exactly what the client wanted, there had been a
misunderstanding: short of developing a machine that read minds, it was
impossible to meet his complex needs with just one button.
We decided to act when I heard Julius claim to a customer, panicked at
the idea of being "hacked", that, for security reasons, our servers
connected to the Internet had no IP address. We had to forbid him from
meeting a client alone.
For those who don’t know about computing, the "I" in IP address stands
for Internet. The very definition of the Internet is the network of
interconnected computers that have an IP address.
Being on the Internet without an IP address is like claiming to be
reachable by phone without having a phone number.
The team was reorganised so that one of us was always responsible for
keeping Julius occupied. I never wanted to speak ill of him because he
was my friend. An exasperated programmer had no such restraint and
exposed the problem to my boss. Who responded by accusing her of
jealousy, as he was very satisfied with Julius’s work. She was
reprimanded and resigned shortly after.
Fortunately, Julius announced that he was leaving because he had
received an offer he couldn’t refuse. He brought cakes to celebrate his
last day with us. My boss and the entire human resources department were
genuinely sad to see him go.
I said goodbye to Julius and never saw him again. On his LinkedIn
account, which is very active and receives hundreds of comments, the
year he spent with us became an incredible experience. He hasn’t
exaggerated anything. Everything is true. But his way with words
and a kind of poorly concealed modesty give the impression that he
really contributed a lot to the team. He later became the deputy CEO
then interim CEO of a startup that had just been acquired by a
multinational. An economic newspaper wrote an article about him. After
that episode, he joined the team of a secretary of state. A meteoric
career!
On my side, I tried to forget Julius. But, recently, my boss came to me
with a huge smile. He had met the salesperson from a company that had
amazed him with its products. Artificial intelligence software that
would, I quote, boost our productivity!
I now have an artificial intelligence software that helps me code.
Another that helps me search for information. A third one that
summarises and writes my emails. I am not allowed to disable them.
At every moment, every second, I feel surrounded by Julius. By dozens of
Juliuses.
I have to work in a mist of Juliuses. Every click on my computer, every
notification on my phone seems to come from Julius. My life is hell
paved with Juliuses.
My boss came to see me. He told me that the team’s productivity was
dangerously declining. That we should use artificial intelligence more
effectively. That we risked being overtaken by competitors who, without
a doubt, were using the very latest artificial intelligence. That he had
hired a consultant to install a new time and productivity management
artificial intelligence.
I started to cry. “Another Julius!” I sobbed.
My boss sighed. He patted my shoulder and said, “I understand. I miss
Julius too. He would certainly have helped us get through this difficult
time.”
Picture by Max Gruber/Better Images of AI
https://betterimagesofai.org/images?artist=MaxGruber&title=Clickworker3d-pr…
20 YEARS OF LINUX ON THE DESKTOP (PART 2)
by Ploum on 2024-12-16
https://ploum.net/2024-12-16-linux_desktop2.html
> Previously in "20 years of Linux on the Deskop" : Looking to make the
perfect desktop with GNOME and Debian, a young Ploum finds himself
joining a stealth project called "no-name-yet". The project is later
published under the name "Ubuntu".
20 years of Linux on the Desktop (part 1)
https://ploum.net/2024-10-20-20years-linux-desktop-part1.html
Flooded with Ubuntu CD-ROMs
===========================
The first official Ubuntu release was 4.10. At that time, I happened to
be the president of my University LUG: LouvainLiNux. LouvainLiNux was
founded a few years before by Fabien Pinckaers, Anthony Lesuisse and
Benjamin Henrion as an informal group of friends. After they graduated
and left university, Fabien handed me all the archives and all the
information and told me to continue the work while he was running his
company, which would, much later, become Odoo. With my friend Bertrand
Rousseau, we decided to make Louvain-Li-Nux a formal and enduring
organisation known as "KAP" (Kot-à-Projet). Frédéric Minne designed the
logo by putting the student hat ("calotte") of Fabien on a penguin
clipart.
Louvain-Li-Nux
https://www.louvainlinux.org/
In 2005 and 2006, we worked really hard to organise multiple install
parties and conferences. We were also offering resources and support. At
a time when broadband Internet was not yet common, the best resource
to install GNU/Linux was an installation CD-ROM.
Thanks to Mark Shuttleworth’s money, Ubuntu was doing something
unprecedented: sending free CD-ROMs of Ubuntu to anyone requesting them.
Best of all: the box contained two CD-ROMs. A live image and an
installation CD. Exactly how I dreamed it (I’m not sure if the free CD-
ROMs started with 4.10, 5.04 or even 5.10).
I managed to get Louvain-Li-Nux recognised as an official Ubuntu
distributor and we started to receive boxes full of hundreds of CD-ROMs
with small cardboard dispensers. We had entire crates of Ubuntu CD-ROMs.
It was the easiest to install. It was the one I knew best and I had
converted Bertrand (before Fabien taught me about Debian, Bertrand tried
to convert me to Mandrake, which he was using himself. He nevertheless
spent the whole night with me when I installed Debian for the first
time, not managing to configure the network because the chipset of my
ethernet card was not the same as the one listed on the box of said
card. At the time, you had to manually choose which module to load. It
was another era, kids these days don’t know what they are missing).
With Louvain-Li-Nux, we literally distributed hundreds of CD-ROMs. I
myself installed Ubuntu on dozens of computers. It was not always easy
as the market was pivoting from desktop computers to laptops. Laptops
were starting to be affordable and powerful enough. But laptops came
with exotic hardware, wifi, Bluetooth, power management, sleep,
hibernate, strange keyboard keys and lots of very complex stuff that you
don’t need to handle on a desktop computer with an RJ-45 port.
Sound was a hard problem. I remember spending hours on a laptop before
realising there was a hardware switch. To play multiple sounds at the
same time, you needed to launch a daemon called ESD. Our frustration
with ESD would lead Bertrand and me to trap Lennart Poettering in a cave
in Brussels and spend the whole night drinking beers with him while
swearing we would wear "we love Lennart" t-shirts during FOSDEM to
support his new Polypaudio project, which was heavily criticised
at the time. Spoiler: we never did the t-shirt thing but Polypaudio was
renamed PulseAudio and succeeded without our support.
Besides offering beers to developers, I reported all the bugs I
experienced and worked hard with Ubuntu developers. If I remember
correctly, I would, at some point, even become the head of the "bug
triaging team" (if such a position ever existed. It might be that
someone called me like that to flatter my ego). Selected as a student
for the Google Summer of Code, I created a Python client for Launchpad
called "Conseil". Launchpad had just replaced Bugzilla but, as I found
out after starting Conseil, it was not open source and had no API. I
learned web scraping and was forced to update Conseil each time
something changed on Launchpad's side.
The most important point about Bugzilla and Launchpad was the famous bug
#1. Bug #1, reported by sabdfl himself, was about breaking Microsoft
monopoly. It could only be closed once it was considered that any
computer user could freely choose which operating system to use on a
newly bought computer.
The very first book about Ubuntu
================================
Meanwhile, I was contacted by a French publisher who stumbled upon my
newly created blog that I mainly used to profess my love of Ubuntu and
Free Software. Yes, the very blog you are currently reading.
That French publisher had contracted two authors to write a book about
Ubuntu and wanted my feedback about the manuscript. I didn’t really like
what I read and said it bluntly. Agreeing with me, the editor asked me
to write a new book, using the existing material if I wanted. But the
two other authors would remain credited and the title could not be
changed. I naively agreed and did the work, immersing myself even more
in Ubuntu.
The result was « Ubuntu, une distribution facile à installer », the very
first book about Ubuntu. I hated the title. But, as I have always
dreamed of becoming a published author, I was proud of my first book.
And it had a foreword by Mark Shuttleworth himself.
I updated and rewrote a lot of it in 2006, changing its name to "Ubuntu
Efficace". A later version was published in 2009 as "Ubuntu Efficace,
3ème édition". During those years, I was wearing Ubuntu t-shirts. In my
room, I had a collection of CD-ROMs with each Ubuntu version (I would
later throw them, something I still regret). I bootstrapped "Ubuntu-
belgium" at FOSDEM. I had ploum(a)ubuntu.com as my primary email on my
business card and used it to look for jobs, hoping to set the tone. You
could say that I was an Ubuntu fanatic.
The very first Ubuntu-be meeting. I took the picture and gimped a quick
logo.
https://ploum.net/files/old/ubuntube_fosdem.jpg
Ironically, I was never paid by Canonical and never landed a job there.
The only money I received for that work was from my books or from Google
through the Summer of Code (remember: Google was still seen as a good
guy). I would later work for Lanedo and be paid to contribute to GNOME
and LibreOffice. But never to contribute to Ubuntu nor Debian.
In the Ubuntu and GNOME community with Jeff Waugh
=================================================
Something which was quite new to me was that Ubuntu had a "community
manager". At the time, it was not the title of someone posting on
Twitter (which didn’t exist). It was someone tasked with putting the
community together, with being the public face of the project.
Jeff Waugh is the first Ubuntu community manager I remember and I was
blown away by his charisma. Jeff came from the GNOME project and one of
his pet issues was making computers easier to use. He started a trend that
would, much later, give birth to the infamous GNOME 3 design.
You have to remember that the very first fully integrated desktop on
Linux was KDE. And KDE had a very important problem: it was relying on
the Qt toolkit which, at the time, was under a non-free license. You
could not use Qt in a commercial product without paying Trolltech, the
author of Qt.
GNOME was born as an attempt by Miguel de Icaza and Federico Mena to
create a KDE-like desktop using the free toolkit created for the Gimp
image editor: Gtk.
This is why I liked to make the joke that the G in GNOME stands for Gtk,
that the G in Gtk stands for Gimp, that the G in Gimp stands for GNU and
that the G in GNU stands for GNU. This is not accurate as the G in GNOME
stands for GNU but this makes the joke funnier. We, free software geeks,
like to have fun.
Like its KDE counterpart, GNOME 1 was full of knobs and whistles.
Everything could be customised to the pixel and to the millisecond.
Jeff Waugh often made fun of it by showing the preferences boxes and
asking the audience who wanted to customise a menu animation to the
millisecond. GNOME 1 was less polished than KDE and heavier than very
simple window managers like Fvwm95 or Fvwm2 (my WM of choice before I
started my quest for the perfect desktop).
Screenshot from my FVWM2 config which is still featured on fvwm.org, 21
years later
https://ploum.net/files/fvwm.jpg
With GNOME 2, GNOME introduced its own paradigm and philosophy: GNOME
would be different from KDE by being less customisable but more
intuitive. GNOME 2 opened a new niche in the Linux world: a fully
integrated desktop for those who don’t want to tweak it.
KDE was for those wanting to customise everything. The most popular
distributions featured KDE: Mandrake, Red Hat, Suse. The RPM world.
There was no real GNOME centric distribution. And there was no desktop
distribution based on Debian. As Debian was focused on freedom, there
was no KDE in Debian.
Which explains why GNOME + Debian made a lot of sense in my mind.
As Jeff Waugh had been the GNOME release manager for GNOME 2 and was
director of the GNOME board, having him as the first Ubuntu community
manager set the tone: Ubuntu would be very close to GNOME. And it is
exactly what happened. There was a huge overlap between GNOME and Ubuntu
enthusiasts. As GNOME 2 would thrive and get better with each release,
Ubuntu would follow.
But some people were not happy. While some Debian developers had been
hired by Canonical to make Ubuntu, some others feared that Ubuntu was a
kind of Debian fork that would weaken Debian. Similarly, Red Hat had
been investing a lot of time and money in GNOME. I’ve never understood
why, as Qt was released under the GPL in 2000, making KDE free, but Red
Hat wanted to offer both KDE and GNOME. It went as far as tweaking both
of them so they would look perfectly identical when used on Red Hat
Linux. Red Hat employees were the biggest pool of contributors to GNOME.
There was a strong feeling in the atmosphere that Ubuntu was
piggybacking on the work of Debian and Red Hat.
I didn’t really agree as I thought that Ubuntu was doing a lot of
thankless polishing and marketing work. I liked the Ubuntu community and
was really impressed by Jeff Waugh. Thanks to him, I entered the GNOME
community and started to pay attention to user experience. He was
inspiring and full of energy.
Drinking a beer with Jeff Waugh and lots of hackers at FOSDEM. I’m the
one with the red sweater.
https://ploum.net/files/old/fosdem_jdub.jpg
Benjamin Mako Hill
==================
What I didn’t realise at the time was that Jeff Waugh’s energy was not
in infinite supply. Mostly burned out by his dedication, he had to step
down and was replaced by Benjamin Mako Hill. That’s, at least, how I
remember it. A quick look at Wikipedia told me that Jeff Waugh and
Benjamin Mako Hill were, in fact, working in parallel and that Jeff
Waugh was not the community manager but an evangelist. It looks like
I’ve been wrong all those years. But I choose to stay true to my own
experience as I don’t want to write a definitive and exhaustive history.
Benjamin Mako Hill was not a GNOME guy. He was a Debian and FSF guy. He
was focused on the philosophical aspects of free software. His
intellectual influence would prove to have a long-lasting effect on my
own work. I remember fondly that he introduced the concept of "anti-
features" to describe the fact that developers are sometimes working to
do something against their own users. They spend energy to make the
product worse. Examples include advertisement in apps or limited-version
software. But it is not limited to software: Benjamin Mako Hill took the
example of benches designed so you can’t sleep on them, to prevent
homeless people from taking a nap. It is obviously more work to design a
bench that prevents napping. The whole anti-feature concept would be
extended and popularised twenty years later by Cory Doctorow under the
term "enshittification".
Benjamin Mako Hill introduced a code of conduct in the Ubuntu community
and made the community very aware of the freedom and philosophical
aspects. While I never met him, I admired and still admire Benjamin. I
felt that, with him at the helm, the community would always stay true to
its ethical values. Bug #1 was the leading beacon: offering choice to
users, breaking monopolies.
Jono Bacon
==========
But the one who would have the greatest influence on the Ubuntu
community is probably Jono Bacon who replaced Benjamin Mako Hill. Unlike
Jeff Waugh and Benjamin Mako Hill, Jono Bacon had no Debian nor GNOME
background. As far as I remember, he was mostly unknown in those
communities. But he was committed to communities in general and had
great taste in music. I’m forever grateful to him for introducing me to
Airbourne.
With what felt like an immediate effect, though it probably took months
or years, the community mood switched from engineering/geek discussions
to a cheerful, all-inclusive atmosphere.
It may look great on the surface but I hated it. The GNOME, Debian and
early Ubuntu communities were shared-interest communities. You joined
the community because you liked the project. The communities were
focused on making the project better.
With Jono Bacon, the opposite became true. The community was great and
people joined the project because they liked the community, the sense of
belonging. Ubuntu felt each day more like a church. The project was seen
as less important than the people. Some aspects would not be discussed
openly so as not to hurt the community.
I felt every day less and less at home in the Ubuntu community.
Decisions about the project were taken behind closed doors by Canonical
employees and the community transformed from contributors to unpaid
cheerleaders. The project to which I had contributed so much drifted
every day further away from Debian, from freedom, from openness and from its
technical roots.
But people were happy because Jono Bacon was such a good entertainer.
Something was about to break…
(to be continued)
> Subscribe by email or by rss to get the next episodes of "20 years of
Linux on the Desktop".
>
> I’m currently turning this story into a book. I’m looking for an agent
or a publisher interested in working with me on this book and on an English
translation of "Bikepunk", my new post-apocalyptic-cyclist typewritten
novel which sold out in three weeks in France and Belgium.
20 YEARS OF LINUX ON THE DESKTOP (PART 1)
by Ploum on 2024-10-20
https://ploum.net/2024-10-20-20years-linux-desktop-part1.html
Twenty years ago, I had an epiphany: Linux was ready for the desktop.
(*audience laughs*)
I had been one of those teenagers invited everywhere to "fix" the
computer. Neighbours, friends, family. Yes, that kind of nerdy teenager.
You probably know what I mean. But I was tired of installing cracked
antivirus and cleaning infested Microsoft Windows computers, their RAM
full of malware, their CPU slowing to a crawl, with their little power
LED begging me to alleviate their suffering.
Tired of being an unpaid Microsoft support technician, I offered people
a choice: I would install Linux on their computer, with my full support,
or they would never talk to me about their computer again.
To my surprise, some accepted the Linux offer.
I started to always have two CD-ROMs with me: the latest Knoppix and
Debian Woody. I would first launch Knoppix, the first Linux live CD,
make a demonstration to my future victims and, more importantly, save
the autogenerated XFree86 config file. I would then install Debian. When
X would fail to start after installation, which was a given, I would
copy the X config file from Knoppix, install GNOME 2 and OpenOffice from
the testing repository and start working on what needed to be done. Like
installing and launching ESD by default to allow multiple sounds.
Configuring the network which was, most of the time, a USB ADSL modem
requiring some proprietary firmware that I would have downloaded
beforehand.
I would also create some shell scripts for common operations: connect to
Internet, mount the USB camera, etc. I put those scripts on the GNOME
desktop so people could simply click to launch them. In some cases, I
would make a quick Zenity interface.
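The original scripts are long gone, but a minimal sketch of that kind of clickable wrapper might have looked like this (my reconstruction, not the original code; the network check and the messages are hypothetical):

```shell
#!/bin/sh
# Sketch of a double-clickable desktop wrapper: run a common operation,
# then report the result graphically so non-technical users get feedback.

notify() {
    # Show a Zenity dialog when available; fall back to a console message.
    if command -v zenity >/dev/null 2>&1; then
        zenity --info --text "$1"
    else
        echo "$1"
    fi
}

# Hypothetical operation: check whether the Internet connection is up.
if ping -c 1 -W 2 example.com >/dev/null 2>&1; then
    MSG="Internet connection is working"
else
    MSG="No Internet connection"
fi
notify "$MSG"
```

Dropped on the GNOME desktop and marked executable, a file like this turned a multi-step command-line ritual into a single click.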
It was a lot of work upfront but, after that, it worked. People were
using their computer for months or years without messing up or breaking
anything. Most of the complaints I got were about running some Windows
software (like the CD-ROMs found in cereal boxes).
With GNOME 2.0, I felt that Linux was ready for the desktop. It was just
really hard to install. And that could be fixed.
The Perfect Desktop
===================
I had my own public wiki (called the FriWiki) with a few regular
contributors. On it, I wrote a long text called: "Debian+GNOME=The
Perfect Desktop?". It explained my observations and all the problems
that needed to be fixed.
As I wanted to improve the situation, I described how the installation
should autodetect everything, like Knoppix did. I also suggested having
a LiveCD and an installation CD that mirrored each other exactly,
so you could test then install. In a perfect world, you could
install directly from the LiveCD but I didn’t know if it was technically
possible. The installation should also offer standard partitioning
schemes, and autodetect and preserve the Windows partition so people would
not be afraid of messing up their system.
Installation was not everything. I suggested that the user created
during installation should automatically have root rights. I had learned
that two passwords were really too high a bar for most if not all
people. I don’t remember anyone understanding the "root" concept.
Systems with multiple users were too complex to teach, so I ended up,
in every case, with a single whole-family account. The fact that this single
account was prevented from doing some stuff on the computer was
baffling. Especially for trivial things such as mounting a CD-ROM or a
USB key.
Speaking of root: installing software could be made much more friendly. I
imagined a synaptic-like interface which would only display main
applications (not all packages) with screenshots, descriptions and
reviews. I did some mockups and noted that the hardest parts would
probably be the selection and the translation. I’ve lost those mockups
but, as I remember them, they were incredibly close to what app stores
would become years later.
I, of course, insisted on installing ESD by default to multiplex sound,
to include all the multimedia codecs, libdvdcss and all possible firmware
(called "drivers" at the time) in case hardware is added later.
I had pages and pages of detailed analysis of every single aspect I
wanted to be improved to make Linux ready for the desktop.
With 2.0, GNOME switched to time-based releases: every six months, a new
GNOME desktop would be released, no matter what. I thought it would be a
nice idea to have the underlying OS released just after, to be kept in
sync. But every six months was probably a bit too much work and people I
knew would not upgrade as often anyway. So I advocated for a yearly
release where the version number would be the full year. This would
greatly help people to understand what version they were running. Like
in "I’m running Linux Desktop 2003".
UserLinux
=========
When you have a good idea, it’s probably because this idea is already in
the zeitgeist of the time. I don’t believe in "ownership" or even
"stealing" when it’s about ideas. Bruce Perens himself was thinking
about the subject. He decided to launch UserLinux, an initiative that
had the goal of doing exactly what I had in mind.
I immediately joined the project and started to be a very vocal member,
always referring to my "Perfect Desktop" essay. I wanted UserLinux to
succeed. If Bruce Perens was behind it, it could not fail, right?
A mockup of the UserLinux desktop (GNOME 2.0 with a custom theme)
https://ploum.net/files/userlinux.jpg
Unfortunately, most UserLinux people were, like me, talking a lot but
not doing much. The only active member was an artist who designed a logo
and started to create multiple GNOME themes. It was great. And lots of
discussions about how to make the theme even better ensued.
This is how I learned about "bike shedding".
UserLinux was the ultimate bikeshedded project. To my knowledge, no code
was ever written. In fact, it was not even clear what code should be
written at all. After launching the initial idea, Bruce Perens was
mostly discussing with us, everybody waiting for someone to "do
something".
No-name-yet
===========
At the start of 2004, I was contacted by Sébastien Bacher, a Debian
developer who told me that he had read my "Perfect Desktop" essay months
ago and forwarded it to someone who had very similar ideas. And lots of
money. So much money that they were already secretly working on it and,
now that it was starting to take shape, they were interested in my
feedback about the very alpha version.
I, of course, agreed and was excited. This is how I joined a mysterious
project called "no-name-yet" with a nonameyet.com website and an IRC
channel. While discussing the project and learning about it in great
depth, my greatest fear was that it would become a fork of Debian. I
felt strongly that one should not fork Debian lightly. Instead, it
should rather be a few packages and metapackages that would sit on top of
Debian. Multiple people in the team assured me that the goal was to
cooperate with Debian, not to fork it.
At one point, I strongly argued with someone on IRC whose nick was
"sabdfl". Someone else asked me in private if I knew who it was. I
didn’t.
That’s how I learned that the project was funded by Mark Shuttleworth
himself.
Dreaming of being an astronaut, I was a huge fan of Mark Shuttleworth.
The guy was an astronaut but was also a free software supporter. I knew
him from the days when he tried to offer bounties to improve free
software like Thunderbird. Without much success. But I was surprised to
learn that Mark had also been a Debian developer.
This guy was my hero (and still kinda is). He represented all of my
dreams: an astronaut, a Debian developer and a billionaire (in my order
of importance). Years later, I would meet him once in the hall of an
Ubuntu Summit. He was typing on his laptop, looked at me and I could not
say anything other than "Hello". And that was it. But I’m proud to say
that his hackergotchi on planet.ubuntu.com is still the one I designed
for him as a way to celebrate his space flight.
sabdfl’s hackergotchi, designed by yours truly
https://ploum.net/files/sabdfl.png
During the spring or early summer of 2004, I received a link to the very
first alpha version of no-name-yet. Which, suddenly, had a real name.
And I liked it: Ubuntu. I installed Ubuntu on a partition to test.
Quickly, I found myself using it daily, forgetting about my Debian
partition. It was brown. Very brown at first. A bit later, it even got
naked people on the login screen (and I defended sabdfl for this
controversial decision). Instead of studying for my exams, I started to
do lengthy reports about what could be improved, about bugs I found,
etc.
The first Ubuntu login picture with three half-naked people looking
toward the sky. In some alpha versions they wore even fewer clothes.
https://ploum.net/files/old/login.png
This makes me one of the very few people on earth who started to use
Ubuntu with 4.04 (it was not named like that, of course).
Wanting to promote Ubuntu and inspired by Tristan Nitot, I decided to
start a blog.
A blog which, coincidentally, was started the very same day the first
public Ubuntu was released, exactly twenty years ago.
A blog you are reading right now.
And this was just the beginning…
(to be continued)
> Subscribe by email or by rss to get the next episodes of "20 years of
Linux on the Desktop".
>
> I’m currently turning this story into a book. I’m looking for an agent
or a publisher interested in working with me on this book and on an English
translation of "Bikepunk", my new post-apocalyptic-cyclist typewritten
novel.