
May 03, 2024

Colin Watson

Playing with rich

One of the things I do as a side project for Freexian is to work on various bits of business automation: accounting tools, programs to help contributors report their hours, invoicing, that kind of thing. While it’s not quite my usual beat, this makes quite a good side project as the tools involved are mostly rather sensible and easy to deal with (Python, git, ledger, that sort of thing) and it’s the kind of thing where I can dip into it for a day or so a week and feel like I’m making useful contributions. The logic can be quite complex, but there’s very little friction in the tools themselves.

A recent case where I did run into some friction in the tools was with some commands that need to present small amounts of tabular data on the terminal, using OSC 8 hyperlinks if the terminal supports them: think customer-related information with some links to issues. One of my colleagues had previously done this using a hack on top of texttable, which was perfectly fine as far as it went. However, now I wanted to be able to add multiple links in a single table cell in some cases, and that was really going to stretch the limits of that approach: working out the width of the displayed text in the cell was going to take an annoying amount of bookkeeping.

I started looking around to see whether any other approaches might be easier, without too much effort (remember that “a day or so a week” bit above). ansiwrap looked somewhat promising, but it isn’t currently packaged in Debian, and it would have still left me with the problem of figuring out how to integrate it into texttable, which looked like it would be quite complicated. Then I remembered that I’d heard good things about rich, and thought I’d take a look.

rich turned out to be exactly what I wanted. Instead of something like this based on the texttable hack above:

import shutil
from pyxian.texttable import UrlTable

termsize = shutil.get_terminal_size((80, 25))
table = UrlTable(max_width=termsize.columns)
table.set_deco(UrlTable.HEADER)
table.set_cols_align(["l"])
table.set_cols_dtype(["u"])
table.add_row(["Issue"])
table.add_row([(issue_url, f"#{issue_id}")])
print(table.draw())

… now I can do this instead:

import rich
from rich import box
from rich.table import Table

table = Table(box=box.SIMPLE)
table.add_column("Issue")
table.add_row(f"[link={issue_url}]#{issue_id}[/link]")
rich.print(table)

While this is a little shorter, the real bonus is that I can now just put multiple [link] tags in a single string, and it all just works. No ceremony. In fact, once the relevant bits of code passed type-checking (since the real code is a bit more complex than the samples above), it worked first time. It’s a pleasure to work with a library like that.

It looks like I’ve only barely scratched the surface of rich, but I expect I’ll reach for it more often now.

03 May, 2024 03:09PM by Colin Watson

May 02, 2024

Paul Wise

FLOSS Activities April 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Administration

  • Debian IRC: updated #debian-dpl access list for new DPL
  • Debian wiki: unblock IP addresses, approve accounts

Communication

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

02 May, 2024 08:16AM

Bits from Debian

Bits from the DPL

Hi,

Keeping my promise for monthly bits, here's a quick snapshot of my first ten days as DPL.

Special thanks to Jonathan for an insightful introduction that left little room for questions. It covered my first tasks, like expense approval and CTTE member appointments, thoroughly. Although I made a visible oversight by forgetting to exclude Simon McVittie, whose term has ended, from the list, I'm committed to learning from this mistake. In future I'll prioritize thorough proofreading to ensure accuracy.

Part of my "work" was learning which channels I need to subscribe to; adjusting my .procmailrc and .muttrc accordingly took some time.

Recently I had my first press interview. I had to answer a couple of prepared questions for Business IT News. It seems journalists are always on the lookout for unique angles. When asked if humility is a new trait for DPLs, my response was a resounding "No." In my experience, humility is a common quality among the DPLs I've encountered, including Jonathan.

One of my top priorities is reaching out to all our dedicated and appointed teams, including those managing critical infrastructure. I've begun with the CTTE, Salsa Admins and Debian Snapshot. Everything appears to be in order with the CTTE team. I'm waiting for responses from Salsa and Snapshot, which is fine given the recent contact.

It was pointed out to me that lintian is in an unfortunate state, as Axel Beckert confirmed on the lintian maintainers list. It turns out that bug #1069745 in magics-python would not have gone undetected for so long if lintian bug #677078 had been fixed. It seems obvious to me that lintian needs more work to fulfill its role as a reliable policy checker and maintain our high level of packaging quality.

In any case, thanks a lot to Axel, who is doing his best, but it seems urgent to me to find some more person-power for this task. Any volunteers to lend a helping hand in the lintian maintainers team?

On 2024-04-30 I gave my first talk, "Bits from greenhorn DPL", online at MiniDebConf Brasil in Belo Horizonte. The Q&A afterwards stirred up some flavours of the question "What can Debian Brasil do better?" My answer was always along the lines of: given your great activity in now organising your fifth MiniDebConf, you are doing pretty well, and I have no additional hints for the moment.

Kind regards Andreas.

02 May, 2024 01:00AM by Andreas Tille

May 01, 2024

Dirk Eddelbuettel

RcppInt64 0.0.5 on CRAN: Minor Maintenance

The new-ish package RcppInt64 (announced last fall in this post, with three small updates following) arrived on CRAN yesterday as release 0.0.5. RcppInt64 collects some of the previous conversions between 64-bit integer values in R and C++, and regroups them in a single package. It offers two interfaces: a more standard as<>() converter from R values along with its companion wrap() to return to R, as well as the more dedicated functions ‘from’ and ‘to’.

This release addresses a new nag from CRAN, who no longer want us to use the ‘non-API’ header function SET_S4_OBJECT, so a small change was made.

The brief NEWS entry follows:

Changes in version 0.0.5 (2024-04-30)

  • Minor refactoring of internal code to not rely on SET_S4_OBJECT.

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

01 May, 2024 11:44PM

Bits from Debian

Debian welcomes the 2024 GSOC contributors/students

GSoC logo

We are very excited to announce that Debian has selected seven contributors to work under mentorship on a variety of projects with us during the Google Summer of Code.

Here is the list of projects, students, and details of the tasks to be performed.


Project: Android SDK Tools in Debian

  • Student: anuragxone

Deliverables of the project: Make the entire Android toolchain, Android Target Platform Framework, and SDK tools available in the Debian archives.


Project: Benchmarking Parallel Performance of Numerical MPI Packages

  • Student: Nikolaos

Deliverables of the project: Deliver an automated method for Debian maintainers to test selected numerical Debian packages for their parallel performance in clusters, in particular to catch performance regressions from updates, and to verify expected performance gains, as predicted by Amdahl’s and Gustafson’s laws, from increased cluster resources.


Project: Debian MobCom

  • Student: Nathan D

Deliverables of the project: Update the outdated mobile packages and recreate aged packages due to new dependencies. Bring in more mobile communication tools by adding about 5 new packages.


Project: Improve support of the Rust coreutils in Debian

  • Student: Sreehari Prasad TM

Deliverables of the project: Make uutils behave more like GNU’s coreutils by improving compatibility with the GNU coreutils test suite.


Project: Improve support of the Rust findutils in Debian

  • Student: hanbings

Deliverables of the project: A safer and more performant implementation of the GNU suite's xargs, find, locate and updatedb tools in Rust.


Project: Expanding ROCm support within Debian and derivatives

  • Student: xuantengh

Deliverables of the project: Building, packaging, and uploading missing ROCm software into Debian repositories, starting with simple tools and progressing to high-level applications like PyTorch, with the final deliverables comprising a series of ROCm packages meeting community quality assurance standards.


Project: procps: Development of System Monitoring, Statistics and Information Tools in Rust

  • Student: Krysztal Huang

Deliverables of the project: Improve the usability of the entire Rust-based implementation of the procps utility on Linux.


Congratulations and welcome to all the contributors!

The Google Summer of Code program is possible in Debian thanks to the efforts of Debian Developers and Debian Contributors who dedicate part of their free time to mentoring contributors and to outreach tasks.

Join us and help extend Debian! You can follow the contributors' weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or reach out to the individual projects' team mailing lists.

01 May, 2024 09:56PM by Nilesh Patra

Guido Günther

Free Software Activities April 2024

A short status update of what happened on my side last month. Maintenance and code review continue to be the top time sinks (in a positive way).

01 May, 2024 11:41AM

Colin Watson

Free software activity in April 2024

My Debian contributions this month were all sponsored by Freexian.

  • I’m trying to get back into bugs.debian.org administration, so I spent some time catching up on my owner@bugs.debian.org mailbox and answering a number of support requests there.
  • I fixed a regression I’d introduced last year where groff’s PDF output had invalid date headers, both upstream and in Debian.
  • I released man-db 2.12.1.
  • openssh:
    • I did a little more testing of Luca Boccassi’s modifications to upstream’s inline systemd notification patch.
    • I did an extensive review of some of the choices in Debian’s OpenSSH packaging, in light of last month’s xz-utils backdoor.
    • I fixed a build failure on ppc64el, forwarded upstream.
    • I proposed reducing shared library linkage in tcp-wrappers; its maintainer accepted this by disabling NIS support.
    • I applied a suggestion to improve ordering of systemd services in relation to nss-user-lookup.target.
  • I updated putty to 0.81.
  • Python team:
  • I did some inconclusive investigation of flaky tests in gcr4. More work is needed there.
  • I proposed a patch for a build failure in gyoto, both upstream and in Debian.

You can support my work directly via Liberapay.

01 May, 2024 11:34AM by Colin Watson

Bits from Debian

Infomaniak First Platinum Sponsor of DebConf24

infomaniaklogo

We are pleased to announce that Infomaniak has committed to sponsor DebConf24 as a Platinum Sponsor.

Infomaniak is an independent cloud service provider recognised throughout Europe for its commitment to privacy, the local economy and the environment. Recording growth of 18% in 2023, the company is developing a suite of online collaborative tools and cloud hosting, streaming, marketing and events solutions.

Infomaniak uses exclusively renewable energy, builds its own data centers and develops its solutions in Switzerland at the heart of Europe, without relocating. The company powers the website of the Belgian radio and TV service (RTBF) and provides streaming for more than 3,000 TV and radio stations in Europe.

With this commitment as Platinum Sponsor, Infomaniak is contributing to the Debian annual Developers' conference, directly supporting the progress of Debian and Free Software. Infomaniak helps strengthen the community that collaborates on Debian projects from all around the world throughout the year.

Thank you very much, Infomaniak, for your support of DebConf24!

Become a sponsor too!

DebConf24 will take place from 28th July to 4th August 2024 in Busan, South Korea, and will be preceded by DebCamp, from 21st to 27th July 2024.

DebConf24 is accepting sponsors! Interested companies and organizations should contact the DebConf team through sponsors@debconf.org, or visit the DebConf24 website at https://debconf24.debconf.org/sponsors/become-a-sponsor/.

01 May, 2024 10:08AM by Sahil Dhiman

Russ Allbery

Review: To Each This World

Review: To Each This World, by Julie E. Czerneda

Publisher: DAW
Copyright: November 2022
ISBN: 0-7564-1543-8
Format: Kindle
Pages: 676

To Each This World is a standalone science fiction novel.

Henry m'Yama t'Nowak is the Arbiter of New Earth. This is somewhat akin to a president, but only in very specific ways. Henry's job is to deal with the Kmet.

New Earth was settled by slower-than-light colony ship from old Earth, our Earth. It is, so far as they know, the last of humanity in the universe. Origin Earth fell silent hundreds of years previous, before the colonists even landed. New Earth is now a carefully and thoughtfully managed world where humans survived, thrived, and at one point sent out six slower-than-light colony ships of its own. All were feared lost after a rushed launch due to a solar storm.

As this story opens, a probe from one of those ships arrives.

This is cause for rejoicing, but there are two small problems. The first is that the culture of New Earth has changed drastically since the days when they launched the Halcyon colony ships. New Earth is now part of the Duality, a new alliance with aliens painstakingly negotiated after their portal appeared in orbit. The Kmet were peaceful, eager to form an alliance and offer new technology, although they struggled with concepts such as individuality and insisted on interacting only with the Arbiter. Their technological gifts and the apparent loss of the Halcyon colony ships refocused New Earth on safety and caution. This unexpected message is a somewhat tricky political problem, a reminder of the path not taken.

The other small problem is that the reaction of the Kmet to this message is... dramatic.

This book has several problems, but the most serious is that it is simply too long. If you have read any other Czerneda novels, you know that she tends towards sprawling world-building, but usually there are enough twists and turns in the plot to keep the story moving while the protagonists slowly puzzle out the scientific mysteries. To Each This World is not sufficiently twisty for 676 pages. I think you could have cut half the novel without losing any major plot points.

The interesting parts of this book, to me, were figuring out what's going on with the Kmet, some of the political tensions within the New Earth government, and understanding what Henry and Pilot Killian's story had to do with the apparently-unrelated but intriguing interludes following Beth Seeker in a strange place called Doublet. All that stuff is in here, but it's alongside a whole lot of Henry wrestling with lifeboat ethics in situations where he thinks he needs to lie to and manipulate people for their own good. We also get several extended tours of societies that, while vaguely interesting in a science fiction world-building way, have essentially nothing to do with the plot.

We also get a whole lot of Henry's eagerly helpful AI polymorph Flip. I wanted to like this character, and I occasionally managed, but I felt like there was a constant mismatch between, in hindsight, how Czerneda meant for me to see Flip and what I thought she was signaling while I was reading. I wanted Flip to either be a fascinatingly weird companion or to be directly relevant to the plot, but instead there were hundreds of pages of unnerving creepiness mixed with obsequiousness and emotional neediness, all of which I think I read more into than Czerneda had intended. The overall experience was more exhausting than fun.

The core of the plot is solid, and if you like SF novels built around world-building and scientific mysteries, there's a lot here to enjoy. I think Czerneda's Species Imperative series (starting with Survival) is a better execution of some of the same ideas, but I liked that series a lot and was willing to read another take on it. Czerneda is one of the SF writers who takes biology seriously and is willing to write very alien aliens, and that leads to a few satisfying twists. Also, Beth Seeker is a great character (I wish we'd seen more of her), and Killian, while a bit generic, is a serviceable protagonist when Czerneda needs someone to go poke things with a stick.

Henry... I'm not sure what I think of Henry, and your enjoyment of this book may depend on how much you click with him.

Henry is a diplomat and an extrovert. His greatest joy and talent is talking to people, navigating political situations, and negotiating. Science fiction is full of protagonists who should be this character, but they rarely are this character, probably because a lot of writers are introverts. I think Czerneda deserves real credit for making her charismatic politician sufficiently accurate that his thought processes occasionally felt alien. For me, Henry was easiest to appreciate when Killian was the viewpoint protagonist and I could look at him through someone else's eyes, but Henry's viewpoint mostly worked as well. There's a lot of competence porn enjoyment in watching him do his thing.

The problem for me is that I thought several of his actions were unforgivably unethical, but no one in the book who matters seems to agree. I can see why he reached those unethical decisions, but they were profound violations of consent. He directly lies to people because he thinks telling the truth would be too risky and not get them to do what he wants them to do, and Czerneda sets up the story to imply that he might be right.

This is not necessarily a bad choice in a novel, but the author has to do some work to bring me along, and Czerneda didn't do enough of that work. I kept wanting there to be some twist or sting or complication that forced Henry to come to terms with what he was doing, but it never happens. He has to pick between two moral principles that I consider rather finely balanced, if not tilted in the opposite direction that he does, and he treats one principle as inviolable and the other as mostly unimportant. The plans he makes on that basis work fine, and those on the other side of that decision are never heard from again. It left a bad taste in my mouth, particularly given how much of the book is built around Henry making tough, tricky decisions under pressure.

I don't know about this book. I have a lot of mixed feelings. Parts of it I quite enjoyed. Parts of it I mostly enjoyed but wish were much less dragged out. Parts of it frustrated or bored me. It's one of those books where the more I thought about it after reading it, the more the parts I disliked annoyed me.

If you like Czerneda's style of world-building and biology, and if you have more tolerance for Henry's decisions than I did, you may well like this, but read Species Imperative first. I should probably also warn that there is a lot of magical technology in this book that blatantly violates some core principles of physics. I have a high tolerance for that sort of thing, but if you don't, you're going to be grumbling.

Rating: 6 out of 10

01 May, 2024 03:39AM

Matthew Palmer

The Mediocre Programmer's Guide to Rust

Me: “Hi everyone, my name’s Matt, and I’m a mediocre programmer.”

Everyone: “Hi, Matt.”

Facilitator: “Are you an alcoholic, Matt?”

Me: “No, not since I stopped reading Twitter.”

Facilitator: “Then I think you’re in the wrong room.”

Yep, that’s my little secret – I’m a mediocre programmer. The definition of the word “hacker” I most closely align with is “someone who makes furniture with an axe”. I write simple, straightforward code because trying to understand complexity makes my head hurt.

Which is why I’ve always avoided the more “academic” languages, like OCaml, Haskell, Clojure, and so on. I know they’re good languages – people far smarter than me are building amazing things with them – but by the time I hear the word “endofunctor”, I’ve lost all focus (and most of my will to live). My preferred languages are the ones that come with less intellectual overhead, like C, PHP, Python, and Ruby.

So it’s interesting that I’ve embraced Rust with significant vigour. It’s by far the most “complicated” language that I feel at least vaguely comfortable with using “in anger”. Part of that is that I’ve managed to assemble a set of principles that allow me to almost completely avoid arguing with Rust’s dreaded borrow checker, lifetimes, and all the rest of the dark, scary corners of the language. It’s also, I think, that Rust helps me to write better software, and I can feel it helping me (almost) all of the time.

In the spirit of helping my fellow mediocre programmers to embrace Rust, I present the principles I’ve assembled so far.

Neither a Borrower Nor a Lender Be

If you know anything about Rust, you probably know about the dreaded “borrow checker”. It’s the thing that makes sure you don’t have two pieces of code trying to modify the same data at the same time, or using a value when it’s no longer valid.

While Rust’s borrowing semantics allow excellent performance without compromising safety, for us mediocre programmers it gets very complicated, very quickly. So, the moment the compiler wants to start talking about “explicit lifetimes”, I shut it up by just using “owned” values instead.

It’s not that I never borrow anything; I have some situations that I know are “borrow-safe” for the mediocre programmer (I’ll cover those later). But any time I’m not sure how things will pan out, I’ll go straight for an owned value.

For example, if I need to store some text in a struct or enum, it’s going straight into a String. I’m not going to start thinking about lifetimes and &'a str; I’ll leave that for smarter people. Similarly, if I need a list of things, it’s a Vec<T> every time – no &'b [T] in my structs, thank you very much.
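
For illustration, here’s a minimal sketch of that style (the Customer struct and its fields are invented for the example):

struct Customer {
    name: String,            // owned text, not &'a str
    issue_urls: Vec<String>, // an owned list, not &'b [T]
}

fn main() {
    let c = Customer {
        name: "ACME Corp".to_string(),
        issue_urls: vec!["https://example.org/issues/1".to_string()],
    };
    // No lifetime annotations anywhere, and the borrow checker stays quiet.
    println!("{} has {} open issue(s)", c.name, c.issue_urls.len());
}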

Attack of the Clones

Following on from the above, I’ve come to not be afraid of .clone(). I scatter them around my code like seeds in a field. Life’s too short to spend time trying to figure out who’s borrowing what from whom, if I can just give everyone their own thing.

There are warnings in the Rust book (and everywhere else) about how a clone can be “expensive”. While it’s true that, yes, making clones of data structures consumes CPU cycles and memory, it very rarely matters. CPU cycles are (usually) plentiful and RAM (usually) relatively cheap. Mediocre programmer mental effort is expensive, and not to be spent on premature optimisation. Also, if you’re coming from most any other modern language, Rust is already giving you so much more performance that you’re probably ending up ahead of the game, even if you .clone() everything in sight.

If, by some miracle, something I write gets so popular that the “expense” of all those spurious clones becomes a problem, it might make sense to pay someone much smarter than I to figure out how to make the program a zero-copy masterpiece of efficient code. Until then… clone early and clone often, I say!
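
To make that concrete, here’s a hypothetical sketch of the “give everyone their own thing” style; both functions take ownership, and the .clone() keeps the compiler happy:

fn store(_names: Vec<String>) { /* takes ownership */ }
fn display(_names: Vec<String>) { /* also takes ownership */ }

fn main() {
    let names = vec!["Alice".to_string(), "Bob".to_string()];
    // Without this .clone(), the call to store() would move `names`
    // and the call to display() wouldn't compile.
    store(names.clone());
    display(names);
}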

Derive Macros are Powerful Magicks

If you start .clone()ing everywhere, pretty quickly you’ll be hit with this error:


error[E0599]: no method named `clone` found for struct `Foo` in the current scope

This is because not everything can be cloned, and so if you want your thing to be cloned, you need to implement the method yourself. Well… sort of.

One of the things that I find absolutely outstanding about Rust is the “derive macro”. These allow you to put a little marker on a struct or enum, and the compiler will write a bunch of code for you! Clone is one of the available so-called “derivable traits”, so you add #[derive(Clone)] to your structs, and poof! you can .clone() to your heart’s content.

But there are other things that are commonly useful, and so I’ve got a set of traits that basically all of my data structures derive:


#[derive(Clone, Debug, Default)]
struct Foo {
    // ...
}

Every time I write a struct or enum definition, that line #[derive(Clone, Debug, Default)] goes at the top.

The Debug trait allows you to print a “debug” representation of the data structure, either with the dbg!() macro, or via the {:?} format in the format!() macro (and anywhere else that takes a format string). Being able to ask “what exactly is that?” comes in handy so often that not having a Debug implementation is like programming with one arm tied behind your Aeron.
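
A quick sketch of both in action (Foo here is just a stand-in):

#[derive(Clone, Debug, Default)]
struct Foo {
    id: u32,
    desc: String,
}

fn main() {
    let foo = Foo { id: 7, desc: "a thing".to_string() };
    dbg!(&foo);                        // prints file, line, and the Debug representation
    println!("what is it? {:?}", foo); // Foo { id: 7, desc: "a thing" }
}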

Meanwhile, the Default trait lets you create an “empty” instance of your data structure, with all of the fields set to their own default values. This only works if all the fields themselves implement Default, but a lot of standard types do, so it’s rare that you’ll define a structure that can’t have an auto-derived Default. Enums are easily handled too: you just mark one variant as the default:


#[derive(Clone, Debug, Default)]
enum Bar {
    Something(String),
    SomethingElse(i32),
    #[default]   // <== mischief managed
    Nothing,
}
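
And then Bar::default() (or Default::default() in a generic context) hands back the marked variant:

fn main() {
    let b = Bar::default();
    assert!(matches!(b, Bar::Nothing));
    println!("{:?}", b); // Nothing
}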

Borrowing is OK, Sometimes

While I previously said that I like and usually use owned values, there are a few situations where I know I can borrow without angering the borrow checker gods, and so I’m comfortable doing it.

The first is when I need to pass a value into a function that only needs to take a little look at the value to decide what to do. For example, if I want to know whether any values in a Vec<u32> are even, I could pass in a Vec, like this:


fn main() {
    let numbers = vec![0u32, 1, 2, 3, 4, 5];

    if has_evens(numbers) {
        println!("EVENS!");
    }
}

fn has_evens(numbers: Vec<u32>) -> bool {
    numbers.iter().any(|n| n % 2 == 0)
}

However, this gets ugly if I’m going to use numbers later, like this:


fn main() {
    let numbers = vec![0u32, 1, 2, 3, 4, 5];

    if has_evens(numbers) {
        println!("EVENS!");
    }

    // Compiler complains about "value borrowed here after move"
    println!("Sum: {}", numbers.iter().sum::<u32>());
}

fn has_evens(numbers: Vec<u32>) -> bool {
    numbers.iter().any(|n| n % 2 == 0)
}

Helpfully, the compiler will suggest I use my old standby, .clone(), to fix this problem. But I know that the borrow checker won’t have a problem with lending that Vec<u32> into has_evens() as a borrowed slice, &[u32], like this:


fn main() {
    let numbers = vec![0u32, 1, 2, 3, 4, 5];

    if has_evens(&numbers) {
        println!("EVENS!");
    }
}

fn has_evens(numbers: &[u32]) -> bool {
    numbers.iter().any(|n| n % 2 == 0)
}

The general rule I’ve got is that if I can take advantage of lifetime elision (a fancy term meaning “the compiler can figure it out”), I’m probably OK. In less fancy terms, as long as the compiler doesn’t tell me to put 'a anywhere, I’m in the green. On the other hand, the moment the compiler starts using the words “explicit lifetime”, I nope the heck out of there and start cloning everything in sight.

Another example of using lifetime elision is when I’m returning the value of a field from a struct or enum. In that case, I can usually get away with returning a borrowed value, knowing that the caller will probably just be taking a peek at that value, and throwing it away before the struct itself goes out of scope. For example:


struct Foo {
    id: u32,
    desc: String,
}

impl Foo {
    fn description(&self) -> &str {
        &self.desc
    }
}

Returning a reference from a function is practically always a mortal sin for mediocre programmers, but returning one from a struct method is often OK. In the rare case that the caller does want the reference I return to live longer, they can always turn it into an owned value themselves by calling .to_owned().
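
Continuing the Foo example above, upgrading the borrow looks like this:

fn main() {
    let foo = Foo { id: 1, desc: "a thing".to_string() };
    // .to_owned() copies the &str into a String the caller owns.
    let kept: String = foo.description().to_owned();
    drop(foo); // `foo` can go away; `kept` lives on
    println!("{}", kept);
}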

Avoid the String Tangle

Rust has a couple of different types for representing strings – String and &str being the ones you see most often. There are good reasons for this; however, it complicates method signatures when you just want to take some sort of “bunch of text” and don’t care so much about the messy details.

For example, let’s say we have a function that wants to see if the length of a string is even. Following the logic that we’re just taking a peek at the value passed in, our function might take a string reference, &str, like this:


fn is_even_length(s: &str) -> bool {
    s.len() % 2 == 0
}

That seems to work fine, until someone wants to check a formatted string:


fn main() {
    // The compiler complains about "expected `&str`, found `String`"
    if is_even_length(format!("my string is {}", std::env::args().next().unwrap())) {
        println!("Even length string");
    }
}

Since format! returns an owned string, String, rather than a string reference, &str, we’ve got a problem. Of course, it’s straightforward to turn the String from format!() into a &str (just prefix it with an &). But as mediocre programmers, we can’t be expected to remember which sort of string all our functions take and add & wherever it’s needed, and having to fix everything when the compiler complains is tedious.

The converse can also happen: a method that wants an owned String, and we’ve got a &str (say, because we’re passing in a string literal, like "Hello, world!"). In this case, we need to use one of the plethora of available “turn this into a String” mechanisms (.to_string(), .to_owned(), String::from(), and probably a few others I’ve forgotten), on the value before we pass it in, which gets ugly real fast.
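
For instance, all of these hand the same owned String to a function that insists on one:

fn takes_owned(s: String) {
    println!("{}", s);
}

fn main() {
    // Three of the many ways to turn a &str into a String.
    takes_owned("Hello, world!".to_string());
    takes_owned("Hello, world!".to_owned());
    takes_owned(String::from("Hello, world!"));
}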

For these reasons, I never take a String or an &str as an argument. Instead, I use the Power of Traits to let callers pass in anything that is, or can be turned into, a string. Let us have some examples.

First off, if I would normally use &str as the type, I instead use impl AsRef<str>:


fn is_even_length(s: impl AsRef<str>) -> bool {
    s.as_ref().len() % 2 == 0
}

Note that I had to throw in an extra as_ref() call in there, but now I can call this with either a String or a &str and get an answer.
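
Both call sites now compile against the same function:

fn is_even_length(s: impl AsRef<str>) -> bool {
    s.as_ref().len() % 2 == 0
}

fn main() {
    assert!(!is_even_length("odd"));                  // a &str literal
    assert!(is_even_length(format!("my {}", "str"))); // an owned String
}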

Now, if I want to be given a String (presumably because I plan on taking ownership of the value, say because I’m creating a new instance of a struct with it), I use impl Into<String> as my type:


struct Foo {
    id: u32,
    desc: String,
}

impl Foo {
    fn new(id: u32, desc: impl Into<String>) -> Self {
        Self { id, desc: desc.into() }
    }
}

We have to call .into() on our desc argument, which makes the struct building a bit uglier, but I’d argue that’s a small price to pay for being able to call both Foo::new(1, "this is a thing") and Foo::new(2, format!("This is a thing named {name}")) without caring what sort of string is involved.

Always Have an Error Enum

Rust’s error handling mechanism (Results… everywhere), along with the quality-of-life sugar surrounding it (like the short-circuit operator, ?), is a delightfully ergonomic approach to error handling. To make life easy for mediocre programmers, I recommend starting every project with an Error enum that derives thiserror::Error, and using that in every method and function that returns a Result.

How you structure your Error type from there is less cut-and-dried, but typically I’ll create a separate enum variant for each type of error that needs a different description. With thiserror, it’s easy to then attach those descriptions:


#[derive(Clone, Debug, thiserror::Error)]
enum Error {
    #[error("{0} caught fire")]
    Combustion(String),
    #[error("{0} exploded")]
    Explosion(String),
}

I also implement functions to create each error variant, because that allows me to do the Into<String> trick, and can sometimes come in handy when creating errors from other places with .map_err() (more on that later). For example, the impl for the above Error would probably be:


impl Error {
    fn combustion(desc: impl Into<String>) -> Self {
        Self::Combustion(desc.into())
    }

    fn explosion(desc: impl Into<String>) -> Self {
        Self::Explosion(desc.into())
    }
}

It’s a tedious bit of boilerplate, and you can use the thiserror-ext crate’s thiserror_ext::Construct derive macro to do the hard work for you, if you like. It, too, knows all about the Into<String> trick.

Banish map_err (well, mostly)

The newer mediocre programmer, who is just dipping their toe in the water of Rust, might write file handling code that looks like this:


fn read_u32_from_file(name: impl AsRef<str>) -> Result<u32, Error> {
    let mut f = File::open(name.as_ref())
        .map_err(|e| Error::FileOpenError(name.as_ref().to_string(), e))?;

    let mut buf = vec![0u8; 30];
    f.read(&mut buf)
        .map_err(|e| Error::ReadError(e))?;

    String::from_utf8(buf)
        .map_err(|e| Error::EncodingError(e))?
        .parse::<u32>()
        .map_err(|e| Error::ParseError(e))
}

This works great (or it probably does, I haven’t actually tested it), but there are a lot of .map_err() calls in there. They take up over half the function, in fact. With the power of the From trait and the magic of the ? operator, we can make this a lot tidier.

First off, assume we’ve written boilerplate error creation functions (or used thiserror_ext::Construct to do it for us). That allows us to simplify the file handling portion of the function a bit:


fn read_u32_from_file(name: impl AsRef<str>) -> Result<u32, Error> {
    let mut f = File::open(name.as_ref())
        // We've dropped the `.to_string()` out of here...
        .map_err(|e| Error::file_open_error(name.as_ref(), e))?;

    let mut buf = vec![0u8; 30];
    f.read(&mut buf)
        // ... and the explicit parameter passing out of here
        .map_err(Error::read_error)?;

    // ...

If that latter .map_err() call looks weird, without the |e| and such, it’s passing a function-as-closure, which just saves a few characters of typing. Just because we’re mediocre doesn’t mean we’re not also lazy.
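
The two forms are interchangeable whenever the function takes exactly the argument the closure would receive, as this little demonstration shows:

fn double(n: &i32) -> i32 {
    n * 2
}

fn main() {
    let v = vec![1, 2, 3];
    // Closure form and function-as-closure form produce the same result.
    let a: Vec<i32> = v.iter().map(|n| double(n)).collect();
    let b: Vec<i32> = v.iter().map(double).collect();
    assert_eq!(a, b);
}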

Next, if we implement the From trait for the other two errors, we can make the string-handling lines significantly cleaner. First, the trait impl:


impl From<std::string::FromUtf8Error> for Error {
    fn from(e: std::string::FromUtf8Error) -> Self {
        Self::EncodingError(e)
    }
}

impl From<std::num::ParseIntError> for Error {
    fn from(e: std::num::ParseIntError) -> Self {
        Self::ParseError(e)
    }
}

(Again, this is boilerplate that can be autogenerated, this time by adding a #[from] tag to the variants you want a From impl on, and thiserror will take care of it for you.)

In any event, no matter how you get the From impls, once you have them, the string-handling code becomes practically error-handling-free:


    Ok(
        String::from_utf8(buf)?
            .parse::<u32>()?
    )

The ? operator will automatically convert the error from the types returned from each method into the return error type, using From. The only tiny downside to this is that the ? at the end strips the Result, and so we’ve got to wrap the returned value in Ok() to turn it back into a Result for returning. But I think that’s a small price to pay for the removal of those .map_err() calls.

In many cases, my coding process involves just putting a ? after every call that returns a Result, and adding a new Error variant whenever the compiler complains about not being able to convert some new error type. It’s practically zero effort – outstanding outcome for the mediocre programmer.
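
Putting the pieces together, a sketch of the finished function might look like the following – untested, in the spirit of the original, with made-up error messages, and with the #[from] variants doing the silent conversions:

use std::fs::File;
use std::io::Read;

#[derive(Debug, thiserror::Error)]
enum Error {
    #[error("couldn't open {0}")]
    FileOpenError(String, #[source] std::io::Error),
    #[error("read failed")]
    ReadError(#[source] std::io::Error),
    #[error("not valid UTF-8")]
    EncodingError(#[from] std::string::FromUtf8Error),
    #[error("not a number")]
    ParseError(#[from] std::num::ParseIntError),
}

impl Error {
    fn file_open_error(name: impl Into<String>, e: std::io::Error) -> Self {
        Self::FileOpenError(name.into(), e)
    }

    fn read_error(e: std::io::Error) -> Self {
        Self::ReadError(e)
    }
}

fn read_u32_from_file(name: impl AsRef<str>) -> Result<u32, Error> {
    let mut f = File::open(name.as_ref())
        .map_err(|e| Error::file_open_error(name.as_ref(), e))?;

    let mut buf = vec![0u8; 30];
    f.read(&mut buf).map_err(Error::read_error)?;

    // The two `?`s convert errors via the #[from] impls; trimming the
    // unread NUL padding before parsing is a small nod to realism.
    Ok(String::from_utf8(buf)?
        .trim_matches('\0')
        .trim()
        .parse::<u32>()?)
}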

Just Because You’re Mediocre, Doesn’t Mean You Can’t Get Better

To finish off, I’d like to point out that mediocrity doesn’t imply shoddy work, nor does it mean that you shouldn’t keep learning and improving your craft. One book that I’ve recently found extremely helpful is Effective Rust, by David Drysdale. The author has very kindly put it up to read online, but buying a (paper or ebook) copy would no doubt be appreciated.

The thing about this book, for me, is that it is very readable, even by us mediocre programmers. The sections are written in a way that really “clicked” with me. Some aspects of Rust that I’d had trouble understanding for a long time – such as lifetimes and the borrow checker, and particularly lifetime elision – actually made sense after I’d read the appropriate sections.

Finally, a Quick Beg

I’m currently subsisting on the kindness of strangers, so if you found something useful (or entertaining) in this post, why not buy me a refreshing beverage? It helps to know that people like what I’m doing, and helps keep me from having to sell my soul to a private equity firm.

01 May, 2024 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

April 30, 2024

Russell Coker

April 28, 2024

USB PSUs

I just bought a new USB PSU from AliExpress [1]. I got this to reduce the clutter in my bedroom: I charge my laptop, PineTime, and a few phones at the same time, and a single PSU with lots of ports makes that easier. Also I bought a couple of really short USB-C cables, as it’s been proven by both real life tests and mathematical modelling that shorter cables get tangled less. This power supply is based on Gallium Nitride (GaN) [2] technology, which makes it efficient and cool.

One thing I only learned about after that purchase is the new USB PPS standard (see the USB Wikipedia page for details [3]). The PPS (Programmable Power Supply) standard allows (quoting Wikipedia) “a voltage range of 3.3 to 21 V in 20 mV steps, and a current specified in 50 mA steps, to facilitate constant-voltage and constant-current charging”. What this means in practice (when phones support it, which for me will probably be 2029 or something) is that the phone could receive power exactly matching the voltage needed for the battery and not have to do any voltage conversion inside the phone. Phones are designed to stop charging at a certain temperature; this probably doesn’t concern people in places like Northern Europe, but in Australia it can be an issue. Removing the heat dissipation from inefficiencies in voltage conversion circuitry means the phone will be cooler when charging and can charge at a higher rate.

There is a “Certified USB Fast Charger” logo for chargers which do this, but it seems that at the moment they just include “PPS” in the feature list. So I highly recommend that GaN and PPS be on your feature list for your next USB PSU, but failing that the 240W PSU I bought for $36 was a good deal.

28 April, 2024 10:02PM by etbe

Galaxy Note 9 Droidian

Droidian Support for Note 9

Droidian only supported the version of this phone with the Exynos chipset. The GSM Arena specs page for the Note 9 shows that it’s the SM-N960F part number [1]. In Australia all Note 9 phones should have the Exynos but it doesn’t hurt to ask for the part number before buying.

The status of the Note9 in Droidian went from fully supported to totally unsupported in the time I was working on this blog post. Such a rapid change is disappointing; it would be good if they at least kept the old data online. It would also be good if they didn’t require a hash character in the URL for each phone, which breaks the archive.org mirroring.

Installing Droidian

Firstly, Power+VolumeDown will reboot the phone in some situations where the Power button on its own won’t. The Note 9 hardware keys are:

  • Power – Right side
  • Volume up/down – long button top of the left side
  • Bixby – key for Samsung assistant that’s below the volume on the left

The Droidian install document for the Galaxy Note 9 (now deleted) is a bit confusing and unclear. Here is the install process that worked for me.

  1. The doc says to start by installing “Android 10 (Q) stock firmware”, but apparently a version of Android 10 that’s already on the phone will do for that.
  2. Download the recovery.img file and the “Droidian’s image” files from the Droidian page and extract the “Droidian’s image” zip.
  3. Connect your phone to your workstation by USB, preferably USB 3 because it will take a few minutes to transfer the image at USB 2 speed. Install the Debian package adb on the workstation.
  4. To “Unlock the bootloader” you can apparently use a PC and the Samsung software but the unlock option in the Android settings gives the same result without proprietary software, here’s how to do it:
    1. Connect the phone to Wifi. Then in settings go to “Software update”, then click on “Download and install”. Refuse to install if it offers you a new version (the unlock menu item will never appear unless you do this, so you can’t unlock without Internet access).
    2. In settings go to “About phone”, then “Software information”, then tap on “Build number” repeatedly until “Developer mode” is enabled.
    3. In settings go to the new menu “Developer options” then turn on the “OEM unlocking” option, this does a factory reset of the phone.
  5. To flash the recovery.img you apparently use Odin on Windows. I used the heimdall-flash package on Debian. On your Linux workstation run the commands:
    adb reboot download
    heimdall flash --RECOVERY recovery.img

    Then press VOLUME-UP+BIXBY+POWER as soon as it reboots to get into the recovery image. If you don’t do it soon enough it will do a default Android boot which will wipe the recovery.img you installed and also do a factory reset which will disable “Developer mode” and you will need to go back to step 4.

  6. If the above step works correctly you will have a RECOVERY menu where the main menu has options “Reboot system now”, “Apply update”, “Factory reset”, and “Advanced” in a large font. If you failed to install recovery.img then you would get a similar menu but with a tiny font which is the Samsung recovery image which won’t work so reboot and try again.
  7. When at the main recovery menu select “Advanced” and then “Enter fastboot”. Note that this doesn’t run a different program or do anything obviously different, it just gives a menu – that’s OK, we want it at this menu.
  8. Run “./flash_all.sh” on your workstation.
  9. Then it should boot Droidian! This may take a bit of time.

First Tests

Battery

The battery and its charge and discharge rates are very important to me, it’s what made the PinePhonePro and Librem5 unusable as daily driver phones.

After running for about 100 minutes, about 40 of which were spent playing with various settings, the phone was at 89% battery. The output of “upower -d” isn’t very accurate as it reported power use ranging from 0W to 25W! But this does suggest that the phone might last for 400 minutes of real use that’s not CPU intensive, such as reading email, document editing, and web browsing. I don’t think that 6.5 hours of doing such things non-stop without access to a power supply or portable battery is something I’m ever going to do. Samsung when advertising the phone claimed 17 hours of video playback, which I don’t think I’m ever going to get – or want.

After running for 11 hours it was at 58% battery. Then after just over 21 hours of running it had 13% battery. Generally I don’t trust the upower output much, but the fact that it ran for over 21 hours shows that its battery life is much better than the PinePhonePro and the Librem5. During those 21 hours I had a ssh session open with the client set to send ssh keep-alive messages every minute, so it had to remain active. There is an option to suspend on Droidian but they recommend you don’t use it. There is no need for the “caffeine mode” that you have on Mobian. For comparison, my previous tests suggested that when doing nothing a PinePhonePro might last for 30 hours on battery while the Librem5 might only last 10 hours [2]. This test with Droidian was done with the phone within my reach for much of that time and subject to my desire to fiddle with new technology – so it wasn’t just sleeping all the time.

When charging from the USB port on my PC it went from 13% to 27% charge in half an hour, and after just over an hour it claimed to be at 33%. It ended up taking just over 7 hours to fully charge from empty, which is not great but not too bad for a PC USB port. This is the same USB port that my Librem5 couldn’t charge from. Also the discharge:charge ratio of 21:7 is better than I could get from the PinePhonePro with Caffeine mode enabled.

rndis0

The rndis0 interface used for IP over USB doesn’t work. Droidian bug #36 [3].

Other Hardware

The phone I bought for testing is the model with 6G of RAM and 128G of storage, has a minor screen crack and significant screen burn-in. It’s a good test system for $109. The screen burn-in is very obvious when running the default Android setup but when running the default Droidian GNOME setup set to the Dark theme (which is a significant power saving with an AMOLED screen) I can’t see it at all. Buying a cheap phone with screen burn-in is something I recommend.

The stylus doesn’t work; this isn’t listed on the Droidian web page. I’m not sure if I tested the stylus when the phone was running Android, but I think I did.

D State Processes

I get a kernel panic early in the startup for unknown reasons and some D state kernel threads which may or may not be related to that. Droidian bug #37 [4].

Second Phone

The Phone

I ordered a second Note9 on ebay, it had been advertised at $240 for a month and the seller accepted my offer of $200. With postage that’s $215 for a Note9 in decent condition with 8G of RAM and 512G of storage. But Droidian dropped support for the Note9 before I got to install it. At the moment I’m not sure what I’ll do with this, maybe I’ll keep it on Android.

I also bought four phone cases for $16. I got spares because of the high price of postage relative to the case cost and the fact that they may be difficult to get in a few years.

The Tests

For the next phone my plan was to do more tests on Android before upgrading it to Debian. Here are the ones I can think of now, please suggest any others I should do.

  • Log output of “ps auxf” equivalent.
  • Make notes on what they are doing with SE Linux.
  • Test the stylus.
  • Test USB networking to my workstation and my laptop.
  • Make a copy of the dmesg output. Also look for D state processes and other signs of problems.

Droidian and Security

When I tell technical people about Droidian a common reaction is “great, you can get a cheap powerful phone and have better security than Android”. This is wrong in several ways. Firstly, Android has quite decent security. Android runs most things in containers and uses SE Linux. Droidian has the Debian approach for most software (i.e. it all runs under the same UID without any special protections) and the developers have no plans to use SE Linux. I’ve previously blogged about options for Sandboxing for Debian phone use; my blog post is NOT a solution to the problem but an analysis of the different potential ways of going about solving it [5].

The next issue is that Droidian has no way to update the kernel and the installation instructions often advise downgrading Android (running a less secure kernel) before the installation. The Android Generic Kernel Image project [6] addresses this by allowing a separation between drivers supplied by the hardware vendor and the kernel image supplied by Google. This also permits running the hardware vendor’s drivers with a GKI kernel released by Google after the hardware vendor dropped security support. But this only applies to Android 11 and later, so Android 10 devices (like the Note 9 image for Droidian) miss out on this.

28 April, 2024 11:40AM by etbe

Kitty and Mpv

6 months ago I switched to Kitty for terminal emulation [1]. So far there’s only been one thing that I couldn’t effectively do with Kitty that I did with Konsole in the past: watching a music video in 1/4 of the screen while using the rest for terminals. I could set up multiple Kitty windows taking up the rest of the screen, but I wanted to keep using a single Kitty with multiple terminals and just have mpv go over one of them. Kitty supports its own graphics protocol, so “mpv --vo=kitty” works, but it took 6x the CPU power in my tests, which isn’t good for a laptop.

For X11 there’s a --ontop option for mpv that does what you expect, but that doesn’t work on Wayland. Not working is mostly Wayland’s fault, as there is a long tail of less commonly used graphical operations that work in X11 but aren’t yet implemented in Wayland. I have filed a Debian bug report about this; the mpv man page should note that --ontop is only going to work on X11 on Linux.

I have discovered a solution to that: in the KDE settings there’s a “Window Rules” section. I created an entry with “Window class” exactly matching “mpv” and then added a rule “Keep above other windows” set to “force” and “yes”.

After that I can just resize mpv to occlude just one terminal and keep using the rest. Also one noteworthy thing with this is that it makes mpv go on top of the KDE taskbar, which can be a feature.

28 April, 2024 05:38AM by etbe

April 27, 2024

Dirk Eddelbuettel

qlcal 0.0.11 on CRAN: Calendar Updates

The eleventh release of the qlcal package arrived at CRAN today.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This release synchronizes qlcal with the QuantLib release 1.34 and contains more updates to 2024 calendars.

Changes in version 0.0.11 (2024-04-27)

  • Synchronized with QuantLib 1.34

  • Calendar updates for Brazil, India, Singapore, South Africa, Thailand, United States

  • Minor continuous integration update

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

27 April, 2024 09:58PM

April 26, 2024

RcppSpdlog 0.0.17 on CRAN: New Upstream

Version 0.0.17 of RcppSpdlog arrived on CRAN overnight and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.14.0 of spdlog, which was released yesterday.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.17 (2024-04-25)

  • Minor continuous integration update

  • Upgraded to upstream release spdlog 1.14.0

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 April, 2024 09:16PM

Steinar H. Gunderson

Continued life with bcachefs

This post was supposed to be called “death with bcachefs”, but it sounded a bit too dramatic. :-) Evidently bcachefs-tools in Debian is finally getting an update (although in experimental), so that's good. Meanwhile, one of my multi-device filesystems died a horrible death, and since I had backups, I didn't ask for its fix to be prioritized—fsck still is unable to repair it and I don't use bcachefs on that machine anymore. But the other one still lives fairly happily.

Hanging around #bcachefs on IRC tells me that indeed, this thing is still quite experimental. Some of the killer features (like proper compression) don't perform as well as they should yet. Large rewrites are still happening. People are still reporting quite weird bugs that are being triaged and mostly fixed (although if you can't reproduce them, you're pretty much hosed). But it's a fun ride. Again: Have backups. They saved me. :-)

26 April, 2024 08:05PM

Robert McQueen

Update from the GNOME board

It’s been around 6 months since the GNOME Foundation was joined by our new Executive Director, Holly Million, and the board and I wanted to update members on the Foundation’s current status and some exciting upcoming changes.

Finances

As you may be aware, the GNOME Foundation has operated at a deficit (nonprofit speak for a loss – i.e. spending more than we’ve been raising each year) for over three years, essentially running the Foundation on reserves from some substantial donations received 4-5 years ago. The Foundation has a reserves policy which specifies a minimum amount of money we have to keep in our accounts. This is so that if there is a significant interruption to our usual income, we can preserve our core operations while we work on new funding sources. We’ve now “hit the buffers” of this reserves policy, meaning the Board can’t approve any more deficit budgets – to keep spending at the same level we must increase our income.

One of the board’s top priorities in hiring Holly was therefore her experience in communications and fundraising, and building broader and more diverse support for our mission and work. Her goals since joining – as well as building her familiarity with the community and project – have been to set up better financial controls and reporting, develop a strategic plan, and start fundraising. You may have noticed the Foundation being more cautious with spending this year, because Holly prepared a break-even budget for the Board to approve in October, so that we can steady the ship while we prepare and launch our new fundraising initiatives.

Strategy & Fundraising

The biggest prerequisite for fundraising is a clear strategy – we need to explain what we’re doing and why it’s important, and use that to convince people to support our plans. I’m very pleased to report that Holly has been working hard on this and meeting with many stakeholders across the community, and has prepared a detailed and insightful five year strategic plan. The plan defines the areas where the Foundation will prioritise, develop and fund initiatives to support and grow the GNOME project and community. The board has approved a draft version of this plan, and over the coming weeks Holly and the Foundation team will be sharing this plan and running a consultation process to gather feedback from GNOME Foundation and community members.

In parallel, Holly has been working on a fundraising plan to stabilise the Foundation, growing our revenue and ability to deliver on these plans. We will be launching a variety of fundraising activities over the coming months, including a development fund for people to directly support GNOME development, working with professional grant writers and managers to apply for government and private foundation funding opportunities, and building better communications to explain the importance of our work to corporate and individual donors.

Board Development

Another observation that Holly had since joining was that we had, by general nonprofit standards, a very small board of just 7 directors. While we do have some committees which have (very much appreciated!) volunteers from outside the board, our officers are usually appointed from within the board, and many board members end up serving on multiple committees and wearing several hats. It also means the number of perspectives on the board is limited and less representative of the diverse contributors and users that make up the GNOME community.

Holly has been working with the board and the governance committee to reduce how much we ask from individual board members, and improve representation from the community within the Foundation’s governance. Firstly, the board has decided to increase its size from 7 to 9 members, effective from the upcoming elections this May & June, allowing more voices to be heard within the board discussions. After that, we’re going to be working on opening up the board to more participants, creating non-voting officer seats whose holders will represent certain regions or interests from across the community and take part in committees and board meetings. These new non-voting roles are likely to be appointed with some kind of application process, and we’ll share details about these roles and how to be considered for them as we refine our plans over the coming year.

Elections

We’re really excited to develop and share these plans and increase the ways that people can get involved in shaping the Foundation’s strategy and how we raise and spend money to support and grow the GNOME community. This brings me to my final point, which is that we’re in the run up to the annual board elections, which take place just before GUADEC. Because of the expansion of the board, and four directors coming to the end of their terms, we’ll be electing 6 seats this election. It’s really important to Holly and the board that we use this opportunity to bring some new voices to the table, leading by example in growing and better representing our community.

Allan wrote in the past about what the board does and what’s expected from directors. As you can see, we’re working hard on reducing what we ask from each individual board member by increasing the number of directors, and by bringing additional members into committees and non-voting roles. If you’re interested in seeing more diverse backgrounds and perspectives represented on the board, I would strongly encourage you to consider standing for election and to reach out to a board member to discuss their experience.

Thanks for reading! Until next time.

Best Wishes,
Rob
President, GNOME Foundation

Update 2024-04-27: It was suggested in the Discourse thread that I clarify the interaction between the break-even budget and the 1M EUR committed by the STF project. This money is received in the form of a contract for services rather than a grant to the Foundation, and must be spent on the development areas agreed during the planning and application process. It’s included within this year’s budget (October 23 – September 24) and is all expected to be spent during this fiscal year, so it doesn’t have an impact on the Foundation’s reserves position. The Foundation retains a small % fee to support its costs in connection with the project, including the new requirement to have our accounts externally audited at the end of the financial year. We are putting this money towards recruitment of an administrative assistant to improve financial and other operational support for the Foundation and community, including the STF project and future development initiatives.

(also posted to GNOME Discourse, please head there if you have any questions or comments)

26 April, 2024 10:39AM by ramcq

Russell Coker

Humane AI Pin

I wrote a blog post The Shape of Computers [1] exploring ideas of how computers might evolve and how we can use them. One of the devices I mentioned was the Humane AI Pin, which has just been the recipient of one of the biggest roast reviews I’ve ever seen [2], good work Marques Brownlee! As an aside I was once given a product to review which didn’t work nearly as well as I think it should have worked so I sent an email to the developers saying “sorry this product failed to work well so I can’t say anything good about it” and didn’t publish a review.

One of the first things that caught my attention in the review is the note that the AI Pin doesn’t connect to your phone. I think that everything should be able to connect to everything else as a usability feature. For security we don’t want everything connected all the time, and it’s quite reasonable to turn off various connections when appropriate; the Librem5 is an example of how this can be done with hardware switches to disable Wifi etc. But to just not have connectivity is bad.

The next noteworthy thing is the external battery which also acts as a magnetic attachment from inside your shirt. So I guess it’s using wireless charging through your shirt. A magnetically attached external battery would be a great feature for a phone: you could quickly swap a discharged battery for a fresh one and keep using it. When I tried to make the PinePhonePro my daily driver [3] I gave up, and charging was one of the main reasons. One thing I learned from my experiment with the PinePhonePro is that the ratio of charge time to discharge time is sometimes more important than battery life, and being able to quickly swap batteries without rebooting is a way of solving that. The reviewer of the AI Pin complains later in the video about battery life, which seems to be partly due to wireless charging from the detachable battery and partly due to being physically small. It seems the “phablet” form factor is the smallest viable personal computer at this time.

The review glosses over what could be regarded as the two worst issues of the device. It does everything via the cloud (where “the cloud” means “a computer owned by someone I probably shouldn’t trust”) and it records everything. Strange that it’s not getting the hate the Google Glass got.

The user interface based on laser projection of menus on the palm of your hand is an interesting concept. I’d rather have a Bluetooth attached tablet or something for operations that can’t be conveniently done with voice. The reviewer harshly criticises the laser projection interface later in the video; maybe the technology isn’t yet adequate to implement this properly.

The first criticism of the device in the “review” part of the video is of the time taken to answer questions, especially when Internet connectivity is poor. In his demonstration, the question “who designed the Washington Monument” took 8 seconds before an answer started. I asked the Alpaca LLM the same question running on 4 cores of an E5-2696v3 and it took 10 seconds to start answering, then printed the words at about speaking speed. So if we had a free software based AI device for this purpose it shouldn’t be difficult to get local LLM computation with less delay than the Humane device by simply providing more compute power than 4 cores of an E5-2696v3.

How does a 32 core 1.05GHz Mali G72 from 2017 (as used in the Galaxy Note 9) compare to 4 cores of a 2.3GHz Intel CPU from 2015? Passmark says that Intel CPU can do 48GFlop with all 18 cores, so 4 cores can presumably do about 10GFlop, which seems less than the claimed 20-32GFlop of the Mali G72. It seems that with the right software even older Android phones could give adequate performance for a local LLM.

The Alpaca model I’m testing with takes 4.2G of RAM to run, which is usable in a Note 9 with 8G of RAM or a Pixel 8 Pro with 12G. A Pixel 8 Pro could have 4.2G of RAM reserved for an LLM and still have as much RAM for other purposes as my main laptop had a few months ago. I consider the speed of Alpaca on my workstation to be acceptable but not great. If we can get FOSS phones running an LLM at that speed then I think it would be great for a first version – we can always rely on newer and faster hardware becoming available.
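If you want to reproduce that kind of timing test, something like the following llama.cpp invocation will do it. This is illustrative rather than a record of my exact setup, and the model path is a placeholder for whatever quantised model you have locally:

# pin inference to 4 threads to approximate the 4-core test; the
# time until the first word appears is the number to compare with
# the Pin's 8 seconds
./main -m models/alpaca-7b-q4_0.gguf -t 4 \
  -p "Who designed the Washington Monument?"

The same code base also builds under Termux, so the phone-CPU side of the comparison above is testable on real hardware.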

Marques notes that some of the problems are likely due to a desire to make it a separate powerful product in the future, and that if they gave it phone connectivity at the start they would have to remove that later on. I think that the real problem is that the profit motive is incompatible with good design here. They want a product that’s stand-alone and justifies the purchase price plus subscription, and that means not making it a “phone accessory”, while the best thing for the user would be to allow it to talk to a phone, a PC, a car, and anything else the user wants. He compares it to the Apple Vision Pro, which has the same issue of trying to be a stand-alone computer but not being properly capable of it.

One of the benefits that Marques cites for the AI Pin is the ability to capture voice notes. Dictaphones have been around for over 100 years and very few people have bought them, not even in the 80s when they became cheap. While almost everyone can occasionally benefit from being able to make a note of an idea when it’s not convenient to write it down, there are few people who need it enough to carry a separate device, not even if that device is tiny. But a phone, as a general purpose computing device with a microphone, can easily be adapted to such things. One possibility would be to program a phone to start a voice note when the volume up and down buttons are pressed at the same time, or when some other condition is met (a rough sketch of the recording side follows the list below). Another possibility is to have a phone hotkey function that varies by what you are doing, e.g. if bushwalking have the hotkey take a photo, or if on a flight have it take a voice note. On the Mobile Apps page on the Debian wiki I created a section for categories of apps that I think we need [4]. In that section I added the following list:

  1. Voice input for dictation
  2. Voice assistant like Google/Apple
  3. Voice output
  4. Full operation for visually impaired people
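As for how cheap the voice-note idea is to prototype, here is a rough sketch using the Termux and Termux:API apps on Android; the choice of tools and the script itself are my illustration, not anything from the video:

#!/data/data/com.termux/files/usr/bin/sh
# record a timestamped voice note, capped at 5 minutes so an
# accidental trigger can't fill the storage
mkdir -p "$HOME/notes"
termux-microphone-record -l 300 \
  -f "$HOME/notes/$(date +%Y%m%d-%H%M%S).m4a"

Binding a script like that to a hardware button via a button-remapper app would give the dictaphone function without any extra hardware.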

One thing I really like about the AI Pin is that it has the potential to become a really good computing and personal assistant device for visually impaired people, funded by people with full vision who want to legally control a computer while driving etc. I have some concerns about using the AI Pin while driving (something Marques stated he aims to do), but if it replaces the use of regular phones while driving it will make things less bad.

Marques concludes his video by warning against buying a product based on the promise of what it can be in future. I bought the Librem5 on exactly that promise; the difference is that I have the source and the ability to help make the promise come true. My aim is to spend thousands of dollars on test hardware and thousands of hours of development time to help make FOSS phones a product that most people can use at low price with little effort.

Another interesting review of the pin is by Mrwhosetheboss [5]. One of his examples is asking the pin for advice about a chair, where, without him knowing, the pin had selected a different chair in the room. He compares this to using Google’s apps on a phone and seeing which item the app has selected. He also said that he doesn’t want to make an order based on speech alone; he wants to review a page of information about it. I suspect that the design of the pin had too much input from people accustomed to asking a corporate travel office to find them a flight and not enough from people who look through the details of the results of flight booking services trying to save an extra $20. Some people might say “if you need to save $20 on a flight then a $24/month subscription computing service isn’t for you”; I reject that argument. I can afford lots of computing services because I try to get the best deal on every moderately expensive thing I pay for. Another point that Mrwhosetheboss makes regards private SMS: you probably wouldn’t want to speak out an SMS to your SO while waiting for a train. He makes it clear that changing between phone and pin while sharing resources (i.e. not having a separate phone number and separate data store) is a desired feature.

The most insightful point Mrwhosetheboss made was when he suggested that if the pin had come out before the smartphone then things might have all gone differently, but now anything that’s developed has to be based around the expectations of phone use. This is something we need to keep in mind when developing FOSS software: there are lots of different ways that things could be done, but we need to meet the expectations of users if we want our software to be used by many people.

I previously wrote a blog post titled Considering Convergence [6] about the possible ways of using a phone as a laptop. While I still believe what I wrote there I’m now considering the possibility of ease of movement of work in progress as a way of addressing some of the same issues. I’ve written a blog post about Convergence vs Transferrence [7].

26 April, 2024 08:30AM by etbe

April 25, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RQuantLib 0.4.22 on CRAN: Maintenance

A new minor release 0.4.22 of RQuantLib arrived at CRAN earlier today, and has been uploaded to Debian.

QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for more than twenty years (!!) as it was one of the first packages I uploaded there.

This release of RQuantLib updates to QuantLib version 1.34, which was released just yesterday, and deprecates use of an access point / type for price/yield conversion for bonds. Two other minor changes had been made earlier.

Changes in RQuantLib version 0.4.22 (2024-04-25)

  • Small code cleanup removing duplicate R code

  • Small improvements to C++ compilation flags

  • Robustify internal version comparison to accommodate RC releases

  • Adjustments to two C++ files for QuantLib 1.34

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

25 April, 2024 09:25PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Sorting out backup internet #3: failover

With local recursive DNS and a 5G modem in place the next thing was to work on some sort of automatic failover when the primary FTTP connection failed. My wife works from home too and I sometimes travel, so I wanted to make sure things didn’t require me to be around to kick them into switching the link in use.

First, let’s talk about what I didn’t do. One choice to try and ensure as seamless a failover as possible would be to get a VM somewhere out there. I’d then run Wireguard tunnels over both the FTTP + 5G links to the VM, and run some sort of routing protocol (RIP, OSPF?) over the links. Set preferences such that the FTTP is preferred, NAT v4 to the VM IP, and choose somewhere that gave me a v6 range I could just use directly.
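For concreteness, one of the two tunnels in that design might have looked something like the following wg-quick configuration; this is purely illustrative, and every name, address and key below is a placeholder:

# /etc/wireguard/wg-fttp.conf - the FTTP-side tunnel; a second file
# would pin its endpoint out the 5G interface
[Interface]
PrivateKey = <router-private-key>
Address = 10.99.0.2/30

[Peer]
PublicKey = <vm-public-key>
Endpoint = vm.example.org:51820
AllowedIPs = 0.0.0.0/0, ::/0
PersistentKeepalive = 25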

That design has the advantage that I’m actively checking link quality to the outside world, rather than just to the next hop. It also means, if the failover detection is fast enough, that existing sessions stay up rather than needing to be re-established.

The downsides are increased complexity, adding another point of potential failure (the VM + provider), the impact on connection quality (even with a decent endpoint it’s an extra hop and latency), and finally the increased cost involved.

I can cope with having to reconnect my SSH sessions in the event of a failure, and I’d rather be sure I can make full use of the FTTP connection, so I didn’t go this route. I chose to rely on local link failure detection to provide the signal for failover, and a set of policy routing on top of that to make things a bit more seamless.

Local link failure turns out to be fairly easy. My FTTP is a PPPoE configuration, so in /etc/ppp/peers/aquiss I have:

lcp-echo-interval 1
lcp-echo-failure 5
lcp-echo-adaptive

Which gives me failover in ~ 5s (five consecutive missed one-second LCP echoes) if the link goes down.

I’m operating the 5G modem in “bridge” rather than “router” mode, which means I get the actual IP from the 5G network via DHCP. The DHCP lease the modem hands out is under a minute, and in the event of a network failure it only hands out a 192.168.254.x IP to talk to its web interface. As the 5G modem is the last resort path I choose not to do anything special with this, but the information is at least there if I need it.

To allow both interfaces to be up and the FTTP to be preferred I’m simply using route metrics. For the PPP configuration that’s:

defaultroute-metric 100

and for the 5G modem I have:

iface sfp.31 inet dhcp
    metric 1000
    vlan-raw-device sfp

There’s a wrinkle in that pppd will not replace an existing default route, so I’ve created /etc/ppp/ip-up.d/default-route to ensure it’s added:

#!/bin/bash

[ "$PPP_IFACE" = "pppoe-wan" ] || exit 0

# Ensure we add a default route; pppd will not do so if we have
# a lower pref route out the 5G modem
ip route add default dev pppoe-wan metric 100 || true

Additionally, in /etc/dhcp/dhclient.conf I’ve disabled asking for any server details (DNS, NTP, etc) - I have internal setups for the servers I want, and don’t want to be trying to select things over the 5G link by default.
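The relevant bit of dhclient.conf ends up looking something like this (a minimal sketch; the exact option list is a matter of taste):

# only ask the 5G network for addressing details, not DNS/NTP/etc
request subnet-mask, broadcast-address, routers;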

However, what I do want is to be able to access the 5G modem web interface and explicitly route some traffic out that link (e.g. so I can add it to my smokeping tests). For that I need some source based routing.

First step, add a 5g table to /etc/iproute2/rt_tables:

16  5g

Then I ended up with the following in /etc/dhcp/dhclient-exit-hooks.d/modem-interface-route, which is more complex than I’d like but seems to do what I want:

#!/bin/sh

case "$reason" in
    BOUND|RENEW|REBIND|REBOOT)
        # Check if we've actually changed IP address
        if [ -z "$old_ip_address" ] ||
           [ "$old_ip_address" != "$new_ip_address" ] ||
           [ "$reason" = "BOUND" ] || [ "$reason" = "REBOOT" ]; then
            if [ ! -z "$old_ip_address" ]; then
                ip rule del from $old_ip_address lookup 5g
            fi
            ip rule add from $new_ip_address lookup 5g

            ip route add default dev sfp.31 table 5g || true
            ip route add 192.168.254.1 dev sfp.31 2>/dev/null || true
        fi
    ;;

    EXPIRE)
        if [ ! -z "$old_ip_address" ]; then
            ip rule del from $old_ip_address lookup 5g
        fi
    ;;

    *)
    ;;
esac

What does all that aim to do? We want to ensure traffic directed to the 5G WAN address goes out the 5G modem, so I can SSH into it even when the main link is up. So we add a rule directing traffic from that IP to hit the 5g routing table, and a default route in that table which uses the 5G link. There’s no configuration for the FTTP connection in that table, so if the 5G link is down the traffic gets dropped, which is what we want. We also configure 192.168.254.1 to go out the link to the modem, as that’s where the web interface lives.
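A quick sanity check that the rules and routes are in place (commands only; the output depends on the DHCP-assigned address):

ip rule show
ip route show table 5g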

I also have a curl callout (curl --interface sfp.31 … to ensure it goes out the 5G link) after the routes are configured to set dynamic DNS with Mythic Beasts, which helps with knowing where to connect back to. I seem to see IP address changes on the 5G link every couple of days at least.

Additionally, I have an entry in the interfaces configuration carving out the top set of the netblock my smokeping server is in:

    up ip rule add from 192.0.2.224/27 lookup 5g

My smokeping /etc/smokeping/config.d/Probes file then looks like:

*** Probes ***

+ FPing

binary = /usr/bin/fping

++ FPingNormal

++ FPing5G

sourceaddress = 192.0.2.225

+ FPing6

binary = /usr/bin/fping

which allows me to use probe = FPing5G for targets to test them over the 5G link.
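In the Targets section, a host tested over the 5G link then looks something like this (the names and host are placeholders):

++ Example5G

menu = Example via 5G
title = example.org over the 5G link
probe = FPing5G
host = example.org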

That mostly covers the functionality I want for a backup link. There’s one piece that isn’t quite solved, however: IPv6, which can wait for another post.

25 April, 2024 05:38PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Biosphere

I've been enjoying Biosphere as the soundtrack to my recent "concentrated work" spells.

Knives by Biosphere

I remember seeing their name on playlists of yester-year: axioms, bluemars1, and (still a going concern) soma.fm's drone zone.


  1. Bluemars lives on, at echoes of bluemars

25 April, 2024 03:15PM

Russ Allbery

Review: Nation

Review: Nation, by Terry Pratchett

Publisher: Harper
Copyright: 2008
Printing: 2009
ISBN: 0-06-143303-9
Format: Trade paperback
Pages: 369

Nation is a stand-alone young adult fantasy novel. It was published in the gap between Discworld novels Making Money and Unseen Academicals.

Nation starts with a plague. The Russian influenza has ravaged Britain, including the royal family. The next in line to the throne is off on a remote island and must be retrieved and crowned as soon as possible, or an obscure provision in Magna Carta will cause no end of trouble. The Cutty Wren is sent on this mission, carrying the Gentlemen of Last Resort.

Then comes the tsunami.

In the midst of fire raining from the sky and a wave like no one has ever seen, Captain Roberts tied himself to the wheel of the Sweet Judy and steered it as best he could, straight into an island. The sole survivor of the shipwreck: one Ermintrude Fanshaw, daughter of the governor of some British island possessions. Oh, and a parrot.

Mau was on the Boys' Island when the tsunami came, going through his rite of passage into manhood. He was to return to the Nation the next morning and receive his tattoos and his adult soul. He survived in a canoe. No one else in the Nation did.

Terry Pratchett considered Nation to be his best book. It is not his best book, at least in my opinion; it's firmly below the top tier of Discworld novels, let alone Night Watch. It is, however, an interesting and enjoyable book that tackles gods and religion with a sledgehammer rather than a knife.

It's also very, very dark and utterly depressing at the start, despite a few glimmers of Pratchett's humor. Mau is the main protagonist at first, and the book opens with everyone he cares about dying. This is the place where I thought Pratchett diverged the most from his Discworld style: in Discworld, I think most of that would have been off-screen, but here we follow Mau through the realization, the devastation, the disassociation, the burials at sea, the thoughts of suicide, and the complete upheaval of everything he thought he was or was about to become. I found the start of this book difficult to get through. The immediate transition into potentially tragic misunderstandings between Mau and Daphne (as Ermintrude names herself once there is no one to tell her not to) didn't help.

As I got farther into the book, though, I warmed to it. The best parts early on are Daphne's baffled but scientific attempts to understand Mau's culture and her place in it. More survivors arrive, and they start to assemble a community, anchored in large part by Mau's stubborn determination to do what's right even though he's lost all of his moorings. That community eventually re-establishes contact with the rest of the world and the opening plot about the British monarchy, but not before Daphne has been changed profoundly by being part of it.

I think Pratchett worked hard at keeping Mau's culture at the center of the story. It's notable that the community that reforms over the course of the book essentially follows the patterns of Mau's lost Nation and incorporates Daphne into it, rather than (as is so often the case) the other way around. The plot itself is fiercely anti-colonial in a way that mostly worked. Still, though, it's a quasi-Pacific-island culture written by a white British man, and I had some qualms.

Pratchett quite rightfully makes it clear in the afterword that this is an alternate world and Mau's culture is not a real Pacific island culture. However, that also means that its starkly gender-essentialist nature was a free choice, rather than one based on some specific culture, and I found that choice somewhat off-putting. The religious rituals are all gendered, the dwelling places are gendered, and one's entire life course in Mau's world seems based on binary classification as a man or a woman. Based on Pratchett's other books, I assume this was more an unfortunate default than a deliberate choice, but it's still a choice he could have avoided.

The end of this book wrestles directly with the relative worth of Mau's culture versus that of the British. I liked most of this, but the twists that Pratchett adds to avoid the colonialist results we saw in our world stumble partly into the trap of making Mau's culture valuable by British standards. (I'm being a bit vague here to avoid spoilers.) I think it is very hard to base this book on a different set of priorities and still bring the largely UK, US, and western European audience along, so I don't blame Pratchett for failing to do it, but I'm a bit sad that the world still revolved around a British axis.

This felt quite similar to Discworld to me in its overall sensibilities, but with the roles of moral philosophy and humor reversed. Discworld novels usually start with some larger-than-life characters and an absurd plot, and then the moral philosophy sneaks up behind you when you're not looking and hits you over the head. Nation starts with the moral philosophy: Mau wrestles with his gods and the problem of evil in a way that reminded me of Job, except with a far different pantheon and rather less tolerance for divine excuses on the part of the protagonist. It's the humor, instead, that sneaks up on you and makes you laugh when the plot is a bit too much. But the mix arrives at much the same place: the absurd hand-in-hand with the profound, and all seen from an angle that makes it a bit easier to understand.

I'm not sure I would recommend Nation as a good place to start with Pratchett. I felt like I benefited from having read a lot of Discworld to build up my willingness to trust where Pratchett was going. But it has the quality of writing of late Discworld without the (arguable) need to read 25 books to understand all of the backstory. Regardless, recommended, and you'll never hear Twinkle Twinkle Little Star in quite the same way again.

Rating: 8 out of 10

25 April, 2024 04:18AM

April 24, 2024

Nazi.Compare

Daniel Pocock elected on ANZAC Day and anniversary of Easter Rising (FSFE Fellowship)

24 April is the anniversary of the Easter Rising. That is the day that Irish republicans bravely rose up against foreign control. The Irish leaders were captured and killed.

25 April is ANZAC Day. It is the anniversary of various battles, most notable of which is Gallipoli.

The Gallipoli landings, like the Easter Rising, are notable for a significant loss of life.

ANZAC Day has evolved to commemorate not only those who lost their lives at Gallipoli but all the sacrifices made by those who serve in uniform for Australia and New Zealand.

When the FSFE Fellowship voted for the Irish-Australian Daniel Pocock in 2017, voting finished at 11:05 UTC on 24 April 2017. That is the 101st anniversary of the Easter Rising.

FSFE announced Pocock's victory the following day, 25 April 2017, ANZAC Day, as we can see in the screenshot below.

FSFE headquarters are in Berlin, Germany and the majority of members appear to be Germans. After the community voted for an Irish-Australian, the management removed the elections from the FSFE constitution. There hasn't been another election since.

Simply mentioning the ANZACs is akin to standing on the shoulders of giants. Nonetheless, Pocock successfully discovered a lot of evidence of bad faith, corruption and deception by the FSFE management.

The Nazi.Compare site was created to continue the story.

Daniel Pocock, FSFE, ANZAC Day

Read more of Daniel Pocock's blogs about visits to ANZAC sites around the world.

24 April, 2024 09:00PM

April 22, 2024

hackergotchi for Bits from Debian

Bits from Debian

Debian Project Leader Election 2024, Andreas Tille elected.

The voting period for the Debian Project Leader election has ended. Please join us in congratulating Andreas Tille as the new Debian Project Leader.

The new term for the project leader started on 2024-04-21.

369 of 1,010 Debian Developers voted using the Condorcet method.

More information about the results of the voting is available on the Debian Project Leader Elections 2024 page.

Many thanks to all of our Developers for voting.

22 April, 2024 12:00PM by Donald Norwood

Vincent Fourmond

QSoas version 3.3 is out

Version 3.3 brings in new features, including reverse Laplace transforms and fits, pH fits, commands for picking points from a dataset, averaging points with the same X value, or performing singular value decomposition.

In addition to these new features, many previous commands were improved, like the addition of a bandcut filter in FFT filtering, better handling of the loading of files produced by QSoas itself, and a button to interrupt the processing of scripts.

There are a lot of other new features, improvements and so on, look for the full list there.

About QSoas


QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is released under the GNU General Public License. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. The current version is 3.3. You can download its source code, or precompiled versions for MacOS and Windows, for free there. Alternatively, you can clone from the GitHub repository.

22 April, 2024 10:50AM by Vincent Fourmond (noreply@blogger.com)

Russ Allbery

Review: The Stars, Like Dust

Review: The Stars, Like Dust, by Isaac Asimov

Series: Galactic Empire #2
Publisher: Fawcett Crest
Copyright: 1950, 1951
Printing: June 1972
Format: Mass market
Pages: 192

The Stars, Like Dust is usually listed as the first book in Asimov's lesser-known Galactic Empire Trilogy since it takes place before Pebble in the Sky. Pebble in the Sky was published first, though, so I count it as the second book. It is very early science fiction with a few mystery overtones.

Buying books produces about 5% of the pleasure of reading them while taking much less than 5% of the time. There was a time in my life when I thoroughly enjoyed methodically working through a used book store, list in hand, tracking down cheap copies to fill in holes in series. This means that I own a lot of books that I thought at some point that I would want to read but never got around to, often because, at the time, I was feeling completionist about some series or piece of world-building. From time to time, I get the urge to try to read some of them.

Sometimes this is a poor use of my time.

The Galactic Empire series is from Asimov's first science fiction period, after the Foundation series but contemporaneous with their collection into novels. They're set long, long before Foundation, but after humans have inhabited numerous star systems and Earth has become something of a backwater. That process is just starting in The Stars, Like Dust: Earth is still somewhere where an upper-class son might be sent for an education, but it has been devastated by nuclear wars and is well on its way to becoming an inward-looking relic on the edge of galactic society.

Biron Farrill is the son of the Lord Rancher of Widemos, a wealthy noble whose world is one of those conquered by the Tyranni. In many other SF novels, the Tyranni would be an alien race; here, it's a hierarchical and authoritarian human civilization. The book opens with Biron discovering a radiation bomb planted in his dorm room. Shortly after, he learns that his father had been arrested. One of his fellow students claims to be on Biron's side against the Tyranni and gives him false papers to travel to Rhodia, a wealthy world run by a Tyranni sycophant.

Like most books of this era, The Stars, Like Dust is a short novel full of plot twists. Unlike some of its contemporaries, it's not devoid of characterization, but I might have liked it better if it were. Biron behaves like an obnoxious teenager when he's not being an arrogant ass. There is a female character who does a few plot-relevant things and at no point is sexually assaulted, so I'll give Asimov that much, but the gender stereotypes are ironclad and there is an entire subplot focused on what I can only describe as seduction via petty jealousy.

The writing... well, let me quote a typical passage:

There was no way of telling when the threshold would be reached. Perhaps not for hours, and perhaps the next moment. Biron remained standing helplessly, flashlight held loosely in his damp hands. Half an hour before, the visiphone had awakened him, and he had been at peace then. Now he knew he was going to die.

Biron didn't want to die, but he was penned in hopelessly, and there was no place to hide.

Needless to say, Biron doesn't die. Even if your tolerance for pulp melodrama is high, 192 small-print pages of this sort of thing is wearying.

Like a lot of Asimov plots, The Stars, Like Dust has some of the shape of a mystery novel. Biron, with the aid of some newfound companions on Rhodia, learns of a secret rebellion against the Tyranni and attempts to track down its base to join them. There are false leads, disguised identities, clues that are difficult to interpret, and similar classic mystery trappings, all covered with a patina of early 1950s imaginary science. To me, it felt constructed and artificial in ways that made the strings Asimov was pulling obvious. I don't know if someone who likes mystery construction would feel differently about it.

The worst part of the plot thankfully doesn't come up much. We learn early in the story that Biron was on Earth to search for a long-lost document believed to be vital to defeating the Tyranni. The nature of that document is revealed on the final page, so I won't spoil it, but if you try to think of the stupidest possible document someone could have built this plot around, I suspect you will only need one guess. (In Asimov's defense, he blamed Galaxy editor H.L. Gold for persuading him to include this plot, and disavowed it a few years later.)

The Stars, Like Dust is one of the worst books I have ever read. The characters are overwrought, the politics are slapdash and build on broad stereotypes, the romantic subplot is dire and plays out mainly via Biron egregiously manipulating his petulant love interest, and the writing is annoying. Sometimes pulp fiction makes up for those common flaws through larger-than-life feats of daring, sweeping visions of future societies, and ever-escalating stakes. There is little to none of that here. Asimov instead provides tedious political maneuvering among a class of elitist bankers and land owners who consider themselves natural leaders. The only places where the power structures of this future government make sense are where Asimov blatantly steals them from either the Roman Empire or the Doge of Venice.

The one thing this book has going for it — the thing, apart from bloody-minded completionism, that kept me reading — is that the technology is hilariously weird in that way that only 1940s and 1950s science fiction can be. The characters have access to communication via some sort of interstellar telepathy (messages coded to a specific person's "brain waves") and can travel between stars through hyperspace jumps, but each jump is manually calculated by referring to the pilot's (paper!) volumes of the Standard Galactic Ephemeris. Communication between ships (via "etheric radio") requires manually aiming a radio beam at the area in space where one thinks the other ship is. It's an unintentionally entertaining combination of technology that now looks absurdly primitive and science that is so advanced and hand-waved that it's obviously made up.

I also have to give Asimov some points for using spherical coordinates. It's a small thing, but the coordinate systems in most SF novels and TV shows are obviously not fit for purpose.

I spent about a month and a half of this year barely reading, and while some of that is because I finally tackled a few projects I'd been putting off for years, a lot of it was because of this book. It was only 192 pages, and I'm still curious about the glue between Asimov's Foundation and Robot series, both of which I devoured as a teenager. But every time I picked it up to finally finish it and start another book, I made it about ten pages and then couldn't take any more. Learn from my error: don't try this at home, or at least give up if the same thing starts happening to you.

Followed by The Currents of Space.

Rating: 2 out of 10

22 April, 2024 02:22AM

April 20, 2024

Debian Day

UDRP Legitimate interests: EU whistleblower directive, workplace health & safety concerns

There is an effort underway to censor this web site and discredit those who created it by using the WIPO UDRP.

The accusation of bad faith from the trademark holder is a huge insult. I'd rather be accused of whistleblowing than be wrongly accused of bad faith.

Then again, how can any of us be accused of whistleblowing when the Debian Social Contract says We won't hide problems?

In 2019, the European Union adopted the Directive (EU) 2019/1937 of the European Parliament and of the Council of 23 October 2019 on the protection of persons who report breaches of Union law.

The Debian Social Contract includes the clause:

3. We will not hide problems

This clause appears completely compatible with the EU Whistleblowing Directive. In fact, this clause appears to encourage and authorize whistleblowing.

The fact that a Debian Developer wrote a combined resignation/suicide note on the night before Debian Day makes it hard to ignore the possibility that his death is related to something in the Debian environment.

Public Health England quantifies three suicides in a community as the minimum threshold to declare a suicide cluster.

20 April, 2024 01:00PM

Bad faith: suicide, stigma and tarnishing

WIPO panelists are asked to consider whether the content of the web sites tarnishes a trademark. There is clearly a lot of stigma around suicides. It is inevitable that some tarnishing may occur when suicide is mentioned.

Nonetheless, the panel needs to consider whether tarnishing is the lesser evil.

The factual revelations of a Debian / open source suicide cluster do run the risk of tarnishing the Debian trademark, I am not going to dispute that.

Yet the panel can not automatically conclude that tarnishing is done in bad faith. If the reason for publishing evidence related to a suicide cluster is in the interest of public health and preventing more suicides then it looks like tarnishing but it is NOT bad faith.

Given the nature of suicide, there is simply no way to publish these public health concerns without the counter-accusations of tarnishing.

In responding to the previous case of UDRP harassment by IBM Red Hat, in relation to the domain name WeMakeFedora.org, I made reference to the Holmes and Rahe Stress Scale. The loss of a family member is one of the highest events on the scale, between 63 and 100 points. Significant attacks on the business, career or professional reputation are rated between 39 and 47. It is suggested that when these events and scores combine, for example, through bad luck or persistent harassment, overall scores over 300 are highly likely to have an impact on health. In other words, there is a higher risk of illness, accident and suicide for people subjected to stress of this level.

The complainant is clearly aware of these arguments from the prior WeMakeFedora.org case, so their decision to embark upon a copy-cat case and deliberately submit documents referring to 2018, the specific time when I lost two family members, appears to be a reckless and deliberate attempt to knowingly impose more pain on my family and me. Therefore, it is clearly a violation of UDRP rule 15(e), harassment and bad faith by those who initiated the procedure.

20 April, 2024 01:00PM

Bad faith: how many Debian Developers really committed suicide?

We can look at the list of Debian Developers and filter the list by "Current status".

There are 972 Debian Developers listed with the status "Debian Developer, uploading". These are people considered to be active.

When people resign/retire, their status is changed to "Emeritus". There are 449 Debian Developers with status Emeritus.

The status "Removed" is distinct from the status "Emeritus". There are 272 Debian Developers with status "Removed". This list includes people who have died, it includes victims who have been disappeared and it includes people who have failed to respond to any attempts to communicate. Some of these people may have been participating under a fake identity. Some of them may have become fed up with the politics and walked away without saying goodbye. Some of these 272 people that we can't account for may have joined the suicide cluster.

Public health statistics tell us that only one in four suicide victims leave a note. In a large enough group of people, for every one person who leaves notes like Frans Pop, it seems like a reasonable hypothesis that there are three more who we don't know about because they didn't leave a note. Ian Murdock left notes on social media. Richard Rothwell left notes.

With or without a note, in this list of 272 people "Removed", there will be many more we don't know about because their families would never think to tell us.

How many of these 272 vanished after some secret humiliation on the debian-private mailing list?

How many bloggers have committed suicide after WIPO denounced them with accusations of bad faith?

20 April, 2024 01:00PM

Understanding Debian Community

Bad faith: Debian Community domain used for harassment after WIPO seizure

When submitting a UDRP case to WIPO, complainants are asked to sign a declaration stating that they are not using the UDRP for harassment.

Here is what was submitted to WIPO for the Debian Community censorship case:

Jonathan Cohen, Debian, harassment, UDRP, WIPO

Here is a similar declaration submitted in the latest WIPO UDRP censorship case:

Alessio Canova, Debian, harassment, UDRP, WIPO

In the 2022 case, the WIPO panel did not publish the names of any volunteers or make any accusations against volunteers. Yet as soon as the Debianists had seized the domain, it was used to publish attack pages directed at a single volunteer.

The victim has exercised his legal rights under the GDPR and asked for the public attacks to be removed. The trademark holder in their high arrogance has decided they are above the law of the GDPR.

Therefore, in hindsight, we can see that the 2022 case really was intended for harassment. They simply didn't use the domain name for anything else.

The Wayback Machine has captured snapshots of the page:

Snapshot of Debian.Community after seizure in 2022.

Last snapshot of Debian.Community was in November 2023, a few months before Alessio Canova signed the declaration for the new UDRP case on 5 March 2024.

The volunteer resigned from mentoring at a time when he lost two family members. The trademark holder has made a gross violation of the privacy of this man's family.

We can only wonder, why did the trademark holder have to change their lawyer after the 2022 case? Is it because they tricked Jonathan Cohen into signing that declaration and then after seizing the domain, they reneged on the declaration Cohen had signed for them and used the domain for harassment anyway?

Lawyers don't like to be used like that. Is it possible that Cohen has declined to work for these puppet masters again?

20 April, 2024 12:00PM

Nazi.Compare

20 April: Hitler's Birthday, Debian Project Leader Election Results

Adolf Hitler was born on 20 April 1889 in Austria. Today would be the Fuhrer's 135th birthday.

In 1939, shortly after Hitler annexed Austria, the Nazi command in Berlin had a big celebration for the 50th birthday of Adolf Hitler. It was such a big occasion that it has its own Wikipedia entry.

One of the quotes in Wikipedia comes from British historian Ian Kershaw:

an astonishing extravaganza of the Führer cult. The lavish outpourings of adulation and sycophancy surpassed those of any previous Führer Birthdays

For the first time ever, the Debian Project Leader election has finished just after 2am (Germany, Central European Summer Time) on the birthday of Hitler and the winning candidate is Andreas Tille from Germany.

Hitler's time of birth was 18:30, much later in the day.

Tille appears to be the first German to win this position in Debian.

We don't want to jinx Tille's first day on the job so we went to look at how each of the candidates voted in the 2021 lynching of Dr Richard Stallman.

This blog previously explored the question of whether Dr Stallman, who is an atheist, would be subject to anti-semitism during the Holocaust years because of his Jewish ancestry. We concluded that RMS would have definitely been on Hitler's list of targets.

Here we trim the voting tally sheet to show how Andreas Tille and Sruthi Chandran voted on the question of lynching Dr Stallman:

       Tally Sheet for the votes cast. 
 
   The format is:
       "V: vote 	Login	Name"
 The vote block represents the ranking given to each of the 
 candidates by the voter. 
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

     Option 1--------->: Call for the FSF board removal, as in rms-open-letter.github.io
   /  Option 2-------->: Call for Stallman's resignation from all FSF bodies
   |/  Option 3------->: Discourage collaboration with the FSF while Stallman is in a leading position
   ||/  Option 4------>: Call on the FSF to further its governance processes
   |||/  Option 5----->: Support Stallman's reinstatement, as in rms-support-letter.github.io
   ||||/  Option 6---->: Denounce the witch-hunt against RMS and the FSF
   |||||/  Option 7--->: Debian will not issue a public statement on this issue
   ||||||/  Option 8-->: Further Discussion
   |||||||/
V: 88888817	          tille	Andreas Tille
V: 21338885	           srud	Sruthi Chandran

We can see that Tille voted for option 7: he did not want Debian's name used in the attacks on Dr Stallman. However, he did not want Debian to denounce the witch hunt either. This is scary. A lot of Germans were willing to stand back and do nothing while Dr Stallman's Jewish ancestors were being dragged off to concentration camps.

The only thing necessary for the triumph of evil is that good men do nothing.

On the other hand, Sruthi Chandran appears to be far closer to the anti-semitic spirit. She put her first and second vote preferences next to the options that involved defaming and banishing Dr Stallman.

Will the new DPL be willing to stop the current vendettas against a volunteer and his family? Or will Tille continue using resources for stalking a volunteer in the same way that Nazis stalked the Jews?

Adolf Hitler famously died by suicide, a lot like the founder of Debian, Ian Murdock, who was born in Konstanz, Germany.

Will Tille address the questions of the Debian suicide cluster or will he waste more money on legal fees to try and cover it up?

20 April, 2024 09:00AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - March 2024

Time really flies when you are really busy having fun! Our Montréal Debian User Group met on Sunday March 31st and I only just found the time to write our report :)

This time around, 9 of us met at EfficiOS's offices1 to chat, hang out and work on Debian and other stuff!

Here is what we did:

pollo:

  • did some clerical work for the DebConf videoteam
  • tried to book a plane ticket for DC24
  • triaged #1067620 (dependency problem with whipper)
  • closed #1067121 (flaky test in supysonic)
  • closed #1065514 (qpdfview crossbuilding)

tvaz:

tassia:

  • planned & brainstormed for the upcoming Debian usability tests
  • mentored a student/new contributor (justin)
  • babysat a future contributor!
  • closed #1067649
  • learnt about fabre.debian.net & element.debian.social (thanks, pollo!)

viashimo:

  • uploaded puppet-strings 4.1.2-1 to unstable
  • updated various services in personal infra
  • cleaned vagrant-hostmanager and worked on packaging the new upstream release (1.8.10)
  • extended GPG key expiry
  • looked at options for a new backup machine

lavamind:

  • updated puppetdb to 8.4.1

justin:

  • opened #1068152 after a misfortune with #1068151
  • created new contributor accounts (salsa & wiki)

Pictures

Here are pictures of the event. Well, one picture (thanks Tassia!) of the event itself and another one of the crisp Italian lager I drank at the bar after the event :)

People at the event working around a long table A glass of beer illuminated by sunlight


  1. Maintainers, amongst other things, of the great LTTng

20 April, 2024 12:08AM by Louis-Philippe Véronneau

April 19, 2024

Nazi.Compare

September 11: Axel Beckert (ETH Zurich) attacks American freedoms

Under Jonathan Carter, Debian has spent over $120,000 attacking a volunteer. What type of lawyering does this money buy?

The date September 11, also referred to as 9-11, is well known as the anniversary of the tragic attacks that Al Qaeda made against targets in the United States of America.

Shortly after the anniversary of the attacks in September 2010, Der Spiegel published an article about Operation Pastorius, Hitler's plans that included the use of either missiles or kamikaze pilots to destroy the towers of New York City.

Many free software products and free software organizations have been founded in the United States and have been founded on promises of freedom that resonate with the American philosophy.

For example, the real FSF was founded by Dr Richard Stallman in Boston. Dr Stallman is widely known for making the distinction between free as in speech as opposed to free as in beer.

Various observers have noted that these values, inspired by the First Amendment and Bill of Rights, are closely intertwined with the philosophy of software freedom.

In Coding Freedom (E. Gabriella Coleman, Princeton University Press), the author explores many of the synergies between freedom philosophies in licenses, in technology and in speech. Interestingly, Coleman anticipates the vendettas being practiced through the UDRP today:

Because a commitment to free speech and intellectual property is housed under the same roof—the US Constitution—the potential for conflict has long existed. For most of their legal existence, however, conflict was noticeably absent, largely because the scope of both free speech and intellectual property law were more contained than they are today. It was only during the course of the twentieth-century that the First Amendment and intellectual property took on the unprecedented symbolic and legal meanings they now command in the United States as well as many other nations.

while noting the intersection of Debian with the DeCSS affair and other milestones in the evolution of the Internet:

Much of the coherence emerged through reasoned political debate. Cleverness—or prankstership—played a pivotal role as well. Prodromou, a Debian developer and editor of one of the first Internet zines, Pigdog, circulated a decoy program that hijacked the name DeCSS, even though it performed an entirely different operation from Johansen’s DeCSS.

Decoding Liberation: The Promise of Free and Open Source Software, (Samir Chopra, Scott D. Dexter) makes more observations on the relationship between the First Amendment and Software Freedom:

In the following year, Bruce Perens reframed this definition as the Debian Social Contract (Debian Project 2004), emphasizing the rights of, and programmers’ responsibilities to, the community of users.

The Fedora Foundations, advanced by Red Hat, now a subsidiary of IBM, brought together developers under a similar promise:

Freedom: We are dedicated to free software and content. Advancing software and content freedom is a central community goal, which we accomplish through the software and content we promote.

Many of us have contributed decades of work under these terms and conditions, the promise of an American style of freedom.

Yet this is under attack and one of the most dramatic attacks in the history of free software was launched on September 11, 2022, when a group of fascist Germans and Swiss banded together to demand state violence against volunteers discussing the toxic culture in Debian.

Axel Beckert, September 11, Debian Social Contract

The September 11 attacks were notable for the impact on the emergency services, especially the firemen. One of the volunteers being attacked started doing voluntary work with the Wireless Institute Civil Emergency Network (WICEN) when he was fourteen years old.

How would you feel if little Germans like Axel Beckert at ETH Zurich were plotting against you and your family on the anniversary of the most notorious terrorist attacks in living memory?

The September 11 attacks involved a huge and immediate loss of life. In Debian, we have seen the evidence of a suicide cluster slowly coming out of the shadows. One of the volunteers has died, in a possible suicide, on the very same day the latest victim went to the church to get married.

Axel Beckert, ETH Zurich, Debian, perjury

How much of the $120,000 Debian legal budget paid for this abhorrent attack on American principles and freedoms that underpin the world of free software? Who pocketed that money?

19 April, 2024 08:00PM

Debian Day

Bad faith: attacking a volunteer at a time of grief, disrespect for the sanctity of human life

In the latest UDRP and legal vendettas from Debian trademark holders, they make a big fuss about keeping the suicides private but they show no respect for the privacy of my own family. It is hypocrisy.

The complainant admits they began attacking my family and me in 2018 (evidence: screenshot of doxing messages from Chris Lamb further below). This was a time when I lost two family members and it was a disturbing period for my family and me for a range of reasons.

I told fellow collaborators that I couldn't fully commit to some of my voluntary responsibilities at that time. (email to Molly de Blanc)

At the same time, one of the issues causing controversy is the appearance of a Debian suicide cluster or an open source software suicide cluster. The attempt to minimize attention on individual suicides also has the effect of minimizing discussion about whether the combined body of deaths form a cluster. Public health authorities define three or more suicides as a cluster. The public health authorities advise that clusters need special attention to avoid the risk of further deaths.

Moreover, given the way that the Debian deaths intersect with my own family life, including the unexplained death of Adrian von Bidder on the day of our wedding, a possible suicide, the grief and toxicity associated with these phenomena have inevitably become intertwined.

This phenomenon should be examined from an independent perspective, with a focus on the issues, not on trying to misdirect attention towards a volunteer who expressed concerns about it. Forcing an individual volunteer to write about such phenomena under the threat that WIPO will denounce me is abhorrent.

Given that we already have this unexplained Debian death on the very day of our wedding, which is a huge scar, how can they possibly be imposing more scars upon my life with the continued burden of public harassment on the Debian web site and through WIPO? It is too much and it has been going on for too long.

Therefore, the bad faith is entirely on the part of those bullies forcing the matter before WIPO.

19 April, 2024 01:00PM

April 18, 2024

hackergotchi for Jonathan McDowell

Jonathan McDowell

Sorting out backup internet #2: 5G modem

Having setup recursive DNS it was time to actually sort out a backup internet connection. I live in a Virgin Media area, but I still haven’t forgiven them for my terrible Virgin experiences when moving here. Plus it involves a bigger contractual commitment. There are no altnets locally (though I’m watching youfibre who have already rolled out in a few Belfast exchanges), so I decided to go for a 5G modem. That gives some flexibility, and is a bit easier to get up and running.

I started by purchasing a ZTE MC7010. This had the advantage of being reasonably cheap off eBay, not having any wifi functionality I would just have to disable (I’m going to plug it into the same router the FTTP connection terminates on), being outdoor mountable should I decide to go that way, and, finally, being powered via PoE.

For now this device sits on the window sill in my study, which is at the top of the house. I printed a table stand for it which mostly does the job (though not as well with a normal, rather than flat, network cable). The router lives downstairs, so I’ve extended a dedicated VLAN through the study switch, down to the core switch and out to the router. The PoE study switch can only do GigE, not 2.5Gb/s, but at present that’s far from the limiting factor on the speed of the connection.

The device is 3 branded, and, as it happens, I’ve ended up with a 3 SIM in it. Up until recently my personal phone was with them, but they’ve kicked me off Go Roam, so I’ve moved. Going with 3 for the backup connection provides some slight extra measure of resiliency; we now have devices on all 4 major UK networks in the house. The SIM is a preloaded data only SIM good for a year; I don’t expect to use all of the data allowance, but I didn’t want to have to worry about unexpected excess charges.

Performance turns out to be disappointing; I end up locking the device to 4G, as the 5G signal is marginal - leaving it enabled results in constant switching between 4G + 5G and significant extra latency. The smokeping graph below shows a brief period where I removed the 4G lock and allowed 5G:

Smokeping 4G vs 5G graph

(There’s a handy zte.js script to allow doing this from the device web interface.)

I get about 10Mb/s sustained downloads out of it. EE/Vodafone did not lead to significantly better results, so for now I’m accepting it is what it is. I tried relocating the device to another part of the house (a little tricky while still providing switch-based PoE, but I have an injector), without much improvement. Equally, pinning the 4G connection to certain bands provided a short-term improvement (I got up to 40-50Mb/s sustained), but not reliably so.

speedtest.net results

This is disappointing, but if it turns out to be a problem I can look at mounting it externally. I also assume as 5G is gradually rolled out further things will naturally improve, but that might be wishful thinking on my part.

Rather than wait until my main link had a problem, I decided to try a day working over the 5G connection. I spend a lot of my time either in browser-based apps or accessing remote systems via SSH, so I’m reasonably sensitive to a jittery or otherwise flaky connection. I picked a day that I did not have any meetings planned, but as it happened I ended up with an ad hoc video call arranged. I’m pleased to say that it all worked just fine; definitely noticeable as slower than the FTTP connection (to be expected), but all workable and even the video call was fine (at least from my end). Looking at the traffic graph shows the expected ~ 10Mb/s peak (actually a little higher, and looking at the FTTP stats for previous days not out of keeping with what we see there), and you can just about see the ~ 3Mb/s symmetric use by the video call at 2pm:

4G traffic during the work day

The test run also helped iron out the fact that the content filter was still enabled on the SIM, but that was easily resolved.

Up next, vaguely automatic failover.

18 April, 2024 05:21PM

Russ Allbery

Review: Unseen Academicals

Review: Unseen Academicals, by Terry Pratchett

Series: Discworld #37
Publisher: Harper
Copyright: October 2009
Printing: November 2014
ISBN: 0-06-233500-6
Format: Mass market
Pages: 517

Unseen Academicals is the 37th Discworld novel and includes many of the long-standing Ankh-Morpork cast, but mostly as supporting characters. The main characters are a new (and delightful) bunch with their own concerns. You arguably could start reading here if you really wanted to, although you would risk spoiling several previous books (most notably Thud!) and will miss some references that depend on familiarity with the cast.

The Unseen University is, like most institutions of its sort, funded by an endowment that allows the wizards to focus on the pure life of the mind (or the stomach). Much to their dismay, they have just discovered that an endowment that amounts to most of their food budget requires that they field a football team.

Glenda runs the night kitchen at the Unseen University. Given the deep and abiding love that wizards have for food, there is both a main kitchen and a night kitchen. The main kitchen is more prestigious, but the night kitchen is responsible for making pies, something that Glenda is quietly but exceptionally good at.

Juliet is Glenda's new employee. She is exceptionally beautiful, not very bright, and a working class girl of the Ankh-Morpork streets down to her bones. Trevor Likely is a candle dribbler, responsible for assisting the Candle Knave in refreshing the endless university candles and ensuring that their wax is properly dribbled, although he pushes most of that work off onto the infallibly polite and oddly intelligent Mr. Nutt.

Glenda, Trev, and Juliet are the sort of people who populate the great city of Ankh-Morpork. While the people everyone has heard of have political crises, adventures, and book plots, they keep institutions like the Unseen University running. They read romance novels, go to the football games, and nurse long-standing rivalries. They do not expect the high mucky-mucks to enter their world, let alone mess with their game.

I approached Unseen Academicals with trepidation because I normally don't get along as well with the Discworld wizard books. I need not have worried; Pratchett realized that the wizards would work better as supporting characters and instead turns the main plot (or at least most of it; more on that later) over to the servants. This was a brilliant decision. The setup of this book is some of the best of Discworld up to this point.

Trev is a streetwise rogue with an uncanny knack for kicking around a can that he developed after being forbidden to play football by his dear old mum. He falls for Juliet even though their families support different football teams, so you may think that a Romeo and Juliet spoof is coming. There are a few gestures towards one, but Pratchett deftly avoids the pitfalls and predictability and instead makes Juliet one of the best characters in the book by playing directly against type. She is one of the characters that Pratchett is so astonishingly good at, the ones that are so thoroughly themselves that they transcend the stories they're put into.

The heart of this book, though, is Glenda.

Glenda enjoyed her job. She didn't have a career; they were for people who could not hold down jobs.

She is the kind of person who knows where she fits in the world and likes what she does and is happy to stay there until she decides something isn't right, and then she changes the world through the power of common sense morality, righteous indignation, and sheer stubborn persistence. Discworld is full of complex and subtle characters fencing with each other, but there are few things I have enjoyed more than Glenda being a determinedly good person. Vetinari of course recognizes and respects (and uses) that inner core immediately.

Unfortunately, as great as the setup and characters are, Unseen Academicals falls apart a bit at the end. I was eagerly reading the story, wondering what Pratchett was going to weave out of the stories of these individuals, and then it partly turned into yet another wizard book. Pratchett pulled another of his deus ex machina tricks for the climax in a way that I found unsatisfying and contrary to the tone of the rest of the story, and while the characters do get reasonable endings, it lacked the oomph I was hoping for. Rincewind is as determinedly one-note as ever, the wizards do all the standard wizard things, and the plot just isn't that interesting.

I liked Mr. Nutt a great deal in the first part of the book, and I wish he could have kept that edge of enigmatic competence and unflappableness. Pratchett wanted to tell a different story that involved more angst and self-doubt, and while I appreciate that story, I found it less engaging and a bit more melodramatic than I was hoping for. Mr. Nutt's reactions in the last half of the book were believable and fit his background, but that was part of the problem: he slotted back into an archetype that I thought Pratchett was going to twist and upend.

Mr. Nutt does, at least, get a fantastic closing line, and as usual there are a lot of great asides and quotes along the way, including possibly the sharpest and most biting Vetinari speech of the entire series.

The Patrician took a sip of his beer. "I have told this to few people, gentlemen, and I suspect never will again, but one day when I was a young boy on holiday in Uberwald I was walking along the bank of a stream when I saw a mother otter with her cubs. A very endearing sight, I'm sure you will agree, and even as I watched, the mother otter dived into the water and came up with a plump salmon, which she subdued and dragged on to a half-submerged log. As she ate it, while of course it was still alive, the body split and I remember to this day the sweet pinkness of its roes as they spilled out, much to the delight of the baby otters who scrambled over themselves to feed on the delicacy. One of nature's wonders, gentlemen: mother and children dining on mother and children. And that's when I first learned about evil. It is built into the very nature of the universe. Every world spins in pain. If there is any kind of supreme being, I told myself, it is up to all of us to become his moral superior."

My dissatisfaction with the ending prevents Unseen Academicals from rising to the level of Night Watch, and it's a bit more uneven than the best books of the series. Still, though, this is great stuff; recommended to anyone who is reading the series.

Followed in publication order by I Shall Wear Midnight.

Rating: 8 out of 10

18 April, 2024 02:37AM

April 17, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 0.12.8.2.1 on CRAN: Micro Fix

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1135 other packages on CRAN, downloaded 33.7 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 579 times according to Google Scholar.

Yesterday’s release accommodates reticulate by suspending a single test that now ‘croaks’, creating a reverse-dependency issue for that package. No other changes were made.

The set of changes since the last CRAN release follows.

Changes in RcppArmadillo version 0.12.8.2.1 (2024-04-15)

  • One-char bug fix release commenting out one test that upsets reticulate when accessing a scipy sparse matrix

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 April, 2024 02:00AM

April 16, 2024

Understanding Debian Community

Who is a real Debian Developer?

Personally, I resigned from some of my activities mentoring in Google Summer of Code at a time when I lost two family members.

Most people only showed sympathy and respect for my family at that time. Colleagues in the Debian world started sending me insults, telling me that I am not a real Debian Developer. It is no surprise that there is a suicide cluster in this group (Debian suicide cluster meets criteria from Public Health England).

Instead of apologizing to my family, they have paid vast sums of money to lawyers to repeat these insults over and over again. (Evidence: over $120,000 of Debian money wasted on using lawyers to harass my family and me after the loss of two family members).

Therefore, it is important to look at who really is a Debian Developer.

Origins of the term Debian Developer

Looking at the very first archived copy of an email from the debian-project mailing list in 1994, we find that Debian co-authors were using the term Debian Developer four years before there was a trademark. That is four years before the Debian Project constitution. The term Debian Developer is completely valid for somebody who has done significant creative work over many decades. In plain English, the term Debian Developer can mean three things: somebody who possesses the skill of creating Debian software; somebody who has an authorship interest in the Debian software; and, lastly, somebody who is a member of the clique. Copyright law does not require somebody to be a member of the clique. I never joined the Debian Project Unincorporated Association; I have always used the term Debian Developer first and foremost to describe myself as an author with moral rights in the creative work.

Legitimate interest: a very long history of voluntary contribution

Some of us started doing Debian as a hobby alongside other hobbies such as amateur radio. One of the early Debian Project Leaders, Bruce Perens, also notably came to Debian for amateur radio purposes.

I passed the amateur radio exam in 1993, when I was 14 years of age. My first years of voluntary activities in amateur radio and free software were during a time when I was legally a child. I didn't receive any payment for some of those activities. I offered my time on the basis that I was gaining skills and helping real communities.

Around the same time, while I was still legally a child, I came to appreciate the fact that there are some adults who exploit talented and precocious youngsters by trying to direct the work that is being undertaken and failing to disclose or share financial benefits.

I believe my first engagement with Debian was in 1997 and the first proof I can find of my engagement with Debian is an email from 23 February 1998 about package creation.

The Debian Project constitution was originally published on 10 September 1998, some time later.

The trademark was only registered later, on 21 December 1999.

Looking at the Scientologie.org UDRP verdict (WIPO UDRP case D2000-0410), the panelists gave some weight to those possessing a copyright interest that predates the registration of a trademark, or a copyright interest arising from a situation that intersects with the history of the trademark.

The spirit of the Scientologie.org UDRP verdict can be extended in good faith to questions like who can use the term Debian Developer.

Legitimate interests: the promise of recognition

The misfits behind the WIPO insults do not pay the rest of us anything for our collaboration in creating the Debian software.

They told us that the only thing we get in return for our creations is the recognition.

Using the term Debian Developer is interchangeable with recognition for our skills and recognition of our status as voluntary, un-paid joint authors who are not compensated in any manner other than recognition.

They are now using the debian.org web site and the trademark to give people negative recognition. This is like bouncing a cheque.

In the circumstances, it seems entirely appropriate for me to follow through on the promise of recognizing people. The misfits have provided a list of the domains along with the dates that each domain name was registered. On the list, the name debian.plus is the first name registered. debian.plus was registered for the purpose of delivering on the promise of positive recognition to the authors and our work.

Evidence: my blog Modern Slavery & Debian Open Source lists many of the promises of recognition in lieu of payment for our work.

Debian promises recognition. I take the following quote from the latest Debian lawsuit, where they admit using the promise of recognition to lure people into working for free:

64. ... un des avantages importants de travailler pour la communauté Debian est la valeur de sa réputation dans le domaine, à la fois professionellement et dans la communauté. ...

(In English: "64. ... one of the important advantages of working for the Debian community is the value of its reputation in the field, both professionally and within the community. ...")

The promise of recognition is repeated again here in the Debian wiki.

The motivations of the authors also are varied, but the coin that they get paid in is often recognition, acclaim in the peer group, or experience that can be traded in in the work place

The same thing appears in the page about Debian Membership:

Debian has several types of association and membership for those who do wish to be recognised, or have rights within the project.

For people promoting Debian, there is a template for giving a talk. It includes the comments:

you are recognized for your contributions ... Did you ever have a boss who takes credit for your work? Not in Debian.

In short, there is a big emphasis on working for recognition instead of a salary. They gave us the promise of recognition and that gives rise to a legitimate interest in using the trademark in domain names for web sites about our work.

Moreover, it means once we gain the status of Debian Developer in the sense of being a joint author, as the term has been used since at least 1994, they can't bounce the cheque and extinguish our copyright / recognition / status as these things are interchangeable.

Bad faith: not every co-author wants to be a member of something too

In a number of jurisdictions, we have seen people establishing associations, some of them legally incorporated, some of them unincorporated, where they now use the term Debian Developer interchangeably with the status of a member rather than the status of an author.

The insistence that authorship rights can be dumbed down to a relation of membership is an example of gaslighting, as explained elsewhere.

Over the years, people have regularly protested against this practice of conflating authorship and membership.

In 2005, some Debian Developers in the UK created the Debian UK Society. They published a proposed constitution / articles of incorporation suggesting that every Debian Developer in the UK would become a member of the Society unless they opted out.

Some authors felt this was a forced membership, similar to forced membership of a trade union.

Here is a blog post by MJ Ray objecting to the change in status conflating joint authorship with rights of membership.

The Debian UK Society (DUS) asserted automatic membership of debian developers (much like that sometimes suggested for SPI and rejected every time) and some of its members insulted and lied about me instead of fixing that bug. Credit to them for fixing it eventually.

The matter was discussed at length on the debian-project mailing list.

That's not interesting, though. I don't care about DUS except:

  1. I want no connection with it right now; including
  2. I want it not to hold my personal details (especially not the inaccurate personal details it currently uses).

[ ... SNIP ... ]

Opt-out membership associations seem a very shady practice - can anyone clearly opt-out without DUS recording personal data?

and again on the debian-uk mailing list.

Steve McIntyre: Membership of the society consists of the set of registered Debian developers resident in the UK, bar those who have deliberately opted out.

Why would you force authors to downgrade their rights from their status under copyright law to a lower status as described in the Debian UK Society constitution?

Under copyright law, joint authors can't expel each other

Under the constitutions of these associations, they purport that authorship and membership can be simultaneously extinguished on the whims of the leader of the day.

Some of us never joined any of these associations yet they claim, in bad faith, that they have the power to "expel" us.

The status of Debian Developer is independent of membership status

Nonetheless, when we examine the words from Steve McIntyre above, we can see that the status of being a Debian Developer (co-author or joint author) is something distinct from being a member.

The distinction is therefore clear to those who created those peripheral associations around the copyrighted work.

Who has a copyright interest in the Debian GNU/Linux software?

The question of copyright in the Debian GNU/Linux software is examined in much more detail in the DebianGNULinux.org blog about the subject.

Those having a copyright interest are therefore joint authors entitled to recognition as Debian Developers.

16 April, 2024 11:00AM

April 15, 2024

Free Software Fellowship

Halász Dávid & IBM Red Hat, OSCAL, Albania dating

This page has been refreshed after we incorrectly identified the developer in these photos.

Debian Community News has raised the issue of dating and romance with women from Albania and Kosovo.

OSCAL 2024 is back. We don't know when. Check here for more details.

Debian Community News has also published evidence that many of the women who received travel bursaries are not publishing code.

It is obvious that many of the women in red t-shirts are being paid to attend the Albanian conferences and pretend to be volunteers. They come and sit in the workshops to make the room look full. They ask questions and then at the end of the day they all disappear. There is no follow up on any of the topics presented. As soon as they receive their cash payment at the end of the day, all these women vanish.

If this is so obvious, why do the leaders of free software organizations continue to give their developers travel funds to go to events in Albania?

If it sounds like a rort, if it looks like a rort and if it smells like a rort (or pussy) then it probably is a rort.

Many free software organizations are registered as tax-privileged charities, for example, using the US 501(c)(3) status, which can lead to serious punishments if the funds pay for holidays and dating.

Looking through photos published by Andis Rado, we found pictures of the Principal Software Engineer Halász Dávid.

Halász Dávid, Blerta, OSCAL, dating, Tirana, Albania Halász Dávid, OSCAL, dating, Tirana, Albania

Mr Dávid looks comfortable, a lot like the photos of Justin Flory (UNICEF, Red Hat) with a pimp at his feet:

This is a photo of Joerg Jaspert from the Debian Account Managers team. He is one of the people who leaked information about abuse in Albania and caused a great deal of inconvenience for everybody else who volunteered their weekend to go there.

At the time that Joerg Jaspert started making these privacy violations, he was on the school council at Dalbergschule in Fulda, Germany. Local magazines published a photo of him in a Debian t-shirt with other parents Claudia Beck and Ina Riechert:

Claudia Beck, Jörg Jaspert, Ina Riechert

15 April, 2024 06:00PM

Andreas Rönnquist

Status update for Allegro packaging in Debian

I have mailed a Debian bug on allegro4.4 describing my reasoning regarding the allegro libraries – in short, allegro4.4 is pretty much dead upstream, and my interest was basically to keep alex4 (which is cool) in Debian, but since it migrated to non-free, my interest in allegro4.4 has waned. So, if anybody would still like to see allegro4.4 in Debian, please step up now and help out. Since it is dead upstream, my reasoning is that it is better to remove it from Debian if no maintainer who wants to help steps up.

Previously Tobias Hansen helped out, but it is now 8 (!) years since his last upload of either package. (Please don’t interpret this as judgement; I am very happy for the help he has provided and all the work he has done on the packages.)

Allegro5 is another matter – it is still active upstream, and I have kept it up to date in Debian. While I have held back the latest upload a short while because of the time_t transition, it will come sooner or later – there I am also waiting on a final decision on this bug from upstream. Other than that, allegro5 is in a very good state, and I will keep maintaining it as long as I can. But help would of course be appreciated on allegro5 too.

15 April, 2024 04:10PM by gusnan

Understanding Debian Community

Bad faith: no communication before opening WIPO UDRP case

The misfits did not make any attempt to contact me and propose a solution to the conflict. They unilaterally opened a dispute through the UDRP.

Moreover, I had published a blog telling people that I would consider giving some of the domains away to people who have similar rights derived from joint authorship.

There have been many opportunities for them to communicate with me like a human being. They talk about Debian being a "family" but they pack together like gang rapists to pick off developers one at a time and attack us.

They are bypassing any normal human communication because they want to cause the maximum amount of stress. They want WIPO to publish the name of my family in a negative context more than they want any of those domains.

In such circumstances, they prove they are committing the act of harassment under UDRP rule 15(e).

15 April, 2024 01:00PM

April 13, 2024

Simon Josefsson

Reproducible and minimal source-only tarballs

With the release of Libntlm version 1.8 the release tarball can be reproduced on several distributions. We also publish a signed minimal source-only tarball, produced by git-archive, which is the same format used by Savannah, Codeberg, GitLab, GitHub and others. Reproducibility of both tarballs is tested continuously for regressions on GitLab through a CI/CD pipeline. If that wasn’t enough to excite you, the Debian packages of Libntlm are now built from the reproducible minimal source-only tarball. The resulting binaries are reproducible on several architectures.

What does that even mean? Why should you care? How you can do the same for your project? What are the open issues? Read on, dear reader…

This article describes my practical experiments with reproducible release artifacts, following up on my earlier thoughts that led to discussion on Fosstodon and a patch by Janneke Nieuwenhuizen to make Guix tarballs reproducible, which inspired me to do some practical work.

Let’s look at how a maintainer releases some software, and how a user can reproduce the released artifacts from the source code. Libntlm provides a shared library written in C and uses GNU Make, GNU Autoconf, GNU Automake, GNU Libtool and gnulib for build management, but these ideas should apply to most projects and build systems. The following illustrates the steps a maintainer would take to prepare a release:

git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make distcheck
gpg -b libntlm-1.8.tar.gz

The generated files libntlm-1.8.tar.gz and libntlm-1.8.tar.gz.sig are published, and users download and use them. This is how the GNU project has been doing releases since the late 1980s. That is a testament to how successful this pattern has been! These tarballs contain source code and some generated files, typically shell scripts generated by autoconf, makefile templates generated by automake, and documentation in formats like Info, HTML, or PDF. Rarely do they contain binary object code, but historically that happened.

The XZUtils incident illustrates that tarballs with files that are not included in the git archive offer an opportunity to disguise malicious backdoors. I blogged earlier about how to mitigate this risk by using signed minimal source-only tarballs.

The risk of hiding malware is not the only motivation to publish signed minimal source-only tarballs. With pre-generated content in tarballs, there is a risk that GNU/Linux distributions such as Trisquel, Guix, Debian/Ubuntu or Fedora ship generated files coming from the tarball into the binary *.deb or *.rpm package file. Typically the person packaging the upstream project never realized that some installed artifacts were not re-built through a typical autoreconf -fi && ./configure && make install sequence, and never wrote the code to rebuild everything. This can also happen if the build rules are written but are buggy, shipping the old artifact. When a security problem is found, this can lead to time-consuming situations, as it may be that patching the relevant source code and rebuilding the package is not sufficient: the vulnerable generated object from the tarball would be shipped into the binary package instead of a rebuilt artifact. For architecture-specific binaries this rarely happens, since object code is usually not included in tarballs — although for 10+ years I shipped the binary Java JAR file in the GNU Libidn release tarball, until I stopped shipping it. For interpreted languages and especially for generated content such as HTML, PDF, and shell scripts this happens more than you would like.

Publishing minimal source-only tarballs enables easier auditing of a project’s code, avoiding the need to read through all generated files looking for malicious content. I have taken care to generate the source-only minimal tarball using git-archive. This is the same format that GitLab, GitHub etc offer for the automated download links on git tags. The minimal source-only tarballs can thus serve as a way to audit GitLab and GitHub download material! Consider if/when hosting sites like GitLab or GitHub have a security incident that causes generated tarballs to include a backdoor that is not present in the git repository. If people rely on the tag download artifact without verifying the maintainer PGP signature using GnuPG, this can lead to backdoor scenarios similar to the one we had for XZUtils, but originating with the hosting provider instead of the release manager. This is even more concerning, since such an attack can be mounted against some selected IP address that you want to target and not against everyone, thereby making it harder to discover.
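
For a downloader, checking that an artifact really came from the maintainer is one command per file – assuming the maintainer’s public key is already in your GnuPG keyring:

gpg --verify libntlm-1.8-src.tar.gz.sig libntlm-1.8-src.tar.gz
gpg --verify libntlm-1.8.tar.gz.sig libntlm-1.8.tar.gz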

With all that discussion and rationale out of the way, let’s return to the release process. I have added another step here:

make srcdist
gpg -b libntlm-1.8-src.tar.gz

Now the release is ready. I publish these four files in Libntlm’s Savannah Download area, but they can be uploaded to a GitLab/GitHub release area as well. These are the SHA256 checksums I got after building the tarballs on my Trisquel 11 aramo laptop:

91de864224913b9493c7a6cec2890e6eded3610d34c3d983132823de348ec2ca  libntlm-1.8-src.tar.gz
ce6569a47a21173ba69c990965f73eb82d9a093eb871f935ab64ee13df47fda1  libntlm-1.8.tar.gz

So how can you reproduce my artifacts? Here is how to reproduce them in an Ubuntu 22.04 container:

podman run -it --rm ubuntu:22.04
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
./bootstrap
./configure
make dist srcdist
sha256sum libntlm-*.tar.gz

You should see the exact same SHA256 checksum values. Hooray!

This works because Trisquel 11 and Ubuntu 22.04 use the same versions of git, autoconf, automake, and libtool. These tools do not guarantee the same output content for all versions, similar to how GNU GCC does not generate the same binary output for all versions. So there is still some delicate version pairing needed.

Ideally, the artifacts should be possible to reproduce from the release artifacts themselves, and not only directly from git. It is possible to reproduce the full tarball in an AlmaLinux 8 container – replace almalinux:8 with rockylinux:8 if you prefer RockyLinux:

podman run -it --rm almalinux:8
dnf update -y
dnf install -y make wget gcc
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8.tar.gz
tar xfa libntlm-1.8.tar.gz
cd libntlm-1.8
./configure
make dist
sha256sum libntlm-1.8.tar.gz

The source-only minimal tarball can be regenerated on Debian 11:

podman run -it --rm debian:11
apt-get update
apt-get install -y --no-install-recommends make git ca-certificates
git clone https://gitlab.com/gsasl/libntlm.git
cd libntlm
git checkout v1.8
make -f cfg.mk srcdist
sha256sum libntlm-1.8-src.tar.gz 

As the magnum opus or chef-d’œuvre, let’s recreate the full tarball directly from the minimal source-only tarball on Trisquel 11 – replace docker.io/kpengboy/trisquel:11.0 with ubuntu:22.04 if you prefer.

podman run -it --rm docker.io/kpengboy/trisquel:11.0
apt-get update
apt-get install -y --no-install-recommends autoconf automake libtool make wget git ca-certificates
wget https://download.savannah.nongnu.org/releases/libntlm/libntlm-1.8-src.tar.gz
tar xfa libntlm-1.8-src.tar.gz
cd libntlm-v1.8
./bootstrap
./configure
make dist
sha256sum libntlm-1.8.tar.gz

Yay! You should now have great confidence that the release artifacts correspond to what’s in version control and also to what the maintainer intended to release. Your remaining job is to audit the source code for vulnerabilities, including the source code of the dependencies used in the build. You no longer have to worry about auditing the release artifacts.

I find it somewhat amusing that the build infrastructure for Libntlm is now in a significantly better place than the code itself. Libntlm is written in old C style with plenty of string manipulation and uses broken cryptographic algorithms such as MD4 and single-DES. Remember folks: solving supply chain security issues has no bearing on what kind of code you eventually run. A clean gun can still shoot you in the foot.

Side note on naming: GitLab exports tarballs with pathnames libntlm-v1.8/ (i.e., PROJECT-TAG/) and I’ve adopted the same pathnames, which means my libntlm-1.8-src.tar.gz tarballs are bit-by-bit identical to GitLab’s exports and you can verify this with tools like diffoscope. GitLab names the tarball libntlm-v1.8.tar.gz (i.e., PROJECT-TAG.ARCHIVE) which I find too similar to the libntlm-1.8.tar.gz that we also publish. GitHub uses the same git archive style, but unfortunately they have logic that removes the ‘v’ in the pathname so you will get a tarball with pathname libntlm-1.8/ instead of the libntlm-v1.8/ that GitLab and I use. The content of the tarball is bit-by-bit identical, but the pathname and archive differ. Codeberg (running Forgejo) uses another approach: the tarball is called libntlm-v1.8.tar.gz (after the tag) just like GitLab, but the pathname inside the archive is libntlm/, otherwise the produced archive is bit-by-bit identical including timestamps. Savannah’s CGIT interface uses archive name libntlm-1.8.tar.gz with pathname libntlm-1.8/, but otherwise file content is identical. Savannah’s GitWeb interface provides snapshot links that are named after the git commit (e.g., libntlm-a812c2ca.tar.gz with libntlm-a812c2ca/) and I cannot find any tag-based download links at all. Overall, we are so close to getting the SHA256 checksums to match, but fail on the pathname within the archive. I’ve chosen to be compatible with GitLab regarding the content of tarballs but not on archive naming. From a simplicity point of view, it would be nice if everyone used PROJECT-TAG.ARCHIVE for the archive filename and PROJECT-TAG/ for the pathname within the archive. This aspect will probably need more discussion.

Side note on git archive output: It seems different versions of git archive produce different results for the same repository. The version of git in Debian 11, Trisquel 11 and Ubuntu 22.04 behaves the same. The version of git in Debian 12, AlmaLinux/RockyLinux 8/9, Alpine, ArchLinux, macOS homebrew, and upcoming Ubuntu 24.04 behaves in another way. Hopefully this will not change that often, but this would invalidate reproducibility of these tarballs in the future, forcing you to use an old git release to reproduce the source-only tarball. Alas, GitLab and most other sites appear to be using modern git so the download tarballs from them would not match my tarballs – even though the content would.

Side note on ChangeLog: ChangeLog files were traditionally manually curated files with version history for a package. In recent years, several projects moved to dynamically generating them from git history (using tools like git2cl or gitlog-to-changelog). This has consequences for the reproducibility of tarballs: you need to have the entire git history available! The gitlog-to-changelog tool also produces different output depending on the time zone of the person using it, which arguably is a simple bug that can be fixed. However, this entire approach is incompatible with rebuilding the full tarball from the minimal source-only tarball. It seems Libntlm’s ChangeLog file died on the surgery table here.

So how would a distribution build these minimal source-only tarballs? I happen to help on the libntlm package in Debian. It has historically used the generated tarballs as the source code to build from. This means that code coming from gnulib is vendored in the tarball. When a security problem is discovered in gnulib code, the security team needs to patch all packages that include that vendored code and rebuild them, instead of merely patching the gnulib package and rebuilding all packages that rely on that particular code. To change this, the Debian libntlm package needs to Build-Depend on Debian’s gnulib package, as sketched below. But there was one problem: similar to most projects that use gnulib, Libntlm depends on a particular git commit of gnulib, and Debian only ships one commit. There is no coordination about which commit to use. I have adopted gnulib in Debian, and added a git bundle to the *_all.deb binary package so that projects that rely on gnulib can pick whatever commit they need. This allows a no-network GNULIB_URL and GNULIB_REVISION approach when running Libntlm’s ./bootstrap with the Debian gnulib package installed. Otherwise libntlm would pick up whatever latest version of gnulib Debian happened to have in the gnulib package, which is not what the Libntlm maintainer intended to be used, and can lead to all sorts of version mismatches (and consequently security problems) over time. Libntlm in Debian is developed and tested on Salsa and there is continuous integration testing of it as well, thanks to the Salsa CI team.
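
The packaging change itself is small; an illustrative debian/control line (not the exact packaging diff) looks like:

Build-Depends: debhelper-compat (= 13), git, gnulib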

Side note on git bundles: unfortunately there appears to be no reproducible way to export a git repository into one or more files. So one unfortunate consequence of all this work is that the gnulib *.orig.tar.gz tarball in Debian is not reproducible any more. I have tried to get Git bundles to be reproducible but I never got it to work — see my notes in gnulib’s debian/README.source on this aspect. Of course, source tarball reproducibility has nothing to do with binary reproducibility of gnulib in Debian itself, fortunately.

One open question is how to deal with the increased build dependencies that are triggered by this approach. Some people are surprised by this, but I don’t see how to get around it: if you depend on source code for tools in another package to build your package, it is a bad idea to hide that dependency. We’ve done it for a long time through vendored code in non-minimal tarballs. Libntlm isn’t the most critical project from a bootstrapping perspective, so adding git and gnulib as Build-Depends to it will probably be fine. However, consider if this pattern were used for other packages that use gnulib such as coreutils, gzip, tar, bison etc (all are using gnulib): then they would all Build-Depend on git and gnulib. Cross-building those packages for a new architecture will therefore require git on that architecture first, which gets circular quickly. The dependency on gnulib is real so I don’t see that going away, and gnulib is an Architecture:all package. However, the dependency on git is merely a consequence of how the Debian gnulib package chose to make all gnulib git commits available to projects: through a git bundle. There are other ways to do this that don’t require the git tool to extract the necessary files, but none that I found practical — ideas welcome!

Finally some brief notes on how this was implemented. Enabling bootstrappable source-only minimal tarballs via gnulib’s ./bootstrap is achieved by using the GNULIB_REVISION mechanism, locking down the gnulib commit used. I have always disliked git submodules because they add extra steps and have complicated interactions with CI/CD. The reason I gave up on git submodules now is that the particular commit to use is not recorded in the git archive output when git submodules are used. So the particular gnulib commit has to be mentioned explicitly in some source code that goes into the git archive tarball. Colin Watson added the GNULIB_REVISION approach to ./bootstrap back in 2018, and now it no longer made sense to continue to use a gnulib git submodule. One alternative is to use ./bootstrap with --gnulib-srcdir or --gnulib-refdir if there is some practical problem with pointing GNULIB_URL towards a git bundle and setting GNULIB_REVISION in bootstrap.conf.
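
Concretely, the pin amounts to two lines in bootstrap.conf – the bundle path and the commit hash below are placeholders showing the shape, not Libntlm’s actual values:

GNULIB_URL=file:///usr/share/gnulib/gnulib.bundle
GNULIB_REVISION=0123456789abcdef0123456789abcdef01234567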

The srcdist make rule is simple:

git archive --prefix=libntlm-v1.8/ -o libntlm-v1.8.tar.gz HEAD

Making the make dist generated tarball reproducible can be more complicated; however, for Libntlm it was sufficient to make sure the modification times of all files were set deterministically to the timestamp of the last commit in the git repository. Interestingly, there seem to be a couple of different ways to accomplish this: Guix doesn’t support minimal source-only tarballs but relies on a .tarball-timestamp file inside the tarball. Paul Eggert explained what TZDB is using some time ago. The approach I’m using now is fairly similar to the one I suggested over a year ago. If there are problems because all files in the tarball now use the same modification time, there is a solution by Bruno Haible that could be implemented.
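
A sketch of the general idea, using plain GNU tar flags rather than Libntlm’s actual makefile rule: normalize file ordering and ownership, pin every mtime to the last commit, and tell gzip not to embed a name or timestamp:

TIMESTAMP=$(git log -1 --format=%cI)
tar --sort=name --owner=0 --group=0 --numeric-owner \
    --mtime="$TIMESTAMP" -cf - libntlm-1.8/ \
  | gzip -9n > libntlm-1.8.tar.gz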

Side note on git tags: Some people may wonder why not verify a signed git tag instead of verifying a signed tarball of the git archive. Currently most git repositories uses SHA-1 for git commit identities, but SHA-1 is not a secure hash function. While current SHA-1 attacks can be detected and mitigated, there are fundamental doubts that a git SHA-1 commit identity uniquely refers to the same content that was intended. Verifying a git tag will never offer the same assurance, since a git tag can be moved or re-signed at any time. Verifying a git commit is better but then we need to trust SHA-1. Migrating git to SHA-256 would resolve this aspect, but most hosting sites such as GitLab and GitHub does not support this yet. There are other advantages to using signed tarballs instead of signed git commits or git tags as well, e.g., tar.gz can be a deterministically reproducible persistent stable offline storage format but .git sub-directory trees or git bundles do not offer this property.

Doing continuous testing of all this is critical to make sure things don’t regress. Libntlm’s pipeline definition now produces the generated libntlm-*.tar.gz tarballs and a checksum as a build artifact. Then I added the 000-reproducability job which compares the checksums and fails on mismatches. You can read its delicate output in the job for the v1.8 release. Right now we insist that builds on Trisquel 11 match Ubuntu 22.04, that PureOS 10 builds match Debian 11 builds, that AlmaLinux 8 builds match RockyLinux 8 builds, and that AlmaLinux 9 builds match RockyLinux 9 builds. As you can see in the pipeline job output, not all platforms lead to the same tarballs, but hopefully this state can be improved over time. There is also partial reproducibility, where the full tarball is reproducible across two distributions but not the minimal tarball, or vice versa.

If this way of working plays out well, I hope to implement it in other projects too.

What do you think? Happy Hacking!

13 April, 2024 04:44PM by simon

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

Domo Arigato, Mr. debugfs

Years ago, at what I think I remember was DebConf 15, I hacked for a while on debhelper to write build-ids to debian binary control files, so that the build-id (more specifically, the ELF note .note.gnu.build-id) wound up in the Debian apt archive metadata. I’ve always thought this was super cool, and seeing as how Michael Stapelberg blogged some great pointers around the ecosystem, including the fancy new debuginfod service, and the find-dbgsym-packages helper, which uses these same headers, I don’t think I’m the only one.

At work I’ve been using a lot of rust, specifically, async rust using tokio. To try and work on my style, and to dig deeper into the how and why of the decisions made in these frameworks, I’ve decided to hack up a project that I’ve wanted to do ever since 2015 – write a debug filesystem. Let’s get to it.

Back to the Future

Time to admit something. I really love Plan 9. It’s just so good. So many ideas from Plan 9 are just so prescient, and everything just feels right. Not just right like, feels good – like, correct. The bit that I’ve always liked the most is 9p, the network protocol for serving a filesystem over a network. This leads to all sorts of fun programs, like the Plan 9 ftp client being a 9p server – you mount the ftp server and access files like any other files. It’s kinda like if fuse were more fully a part of how the operating system worked, but fuse is all running client-side. With 9p there’s a single client, and different servers that you can connect to, which may be backed by a hard drive, remote resources over something like SFTP, FTP, HTTP or even purely synthetic.

The interesting (maybe sad?) part here is that 9p wound up outliving Plan 9 in terms of adoption – 9p is in all sorts of places folks don’t usually expect. For instance, the Windows Subsystem for Linux uses the 9p protocol to share files between Windows and Linux. ChromeOS uses it to share files with Crostini, and qemu uses 9p (virtio-9p) to share files between guest and host. If you’re noticing a pattern here, you’d be right; for some reason 9p is the go-to protocol to exchange files between hypervisor and guest. Why? I have no idea, except maybe because it’s well designed, simple to implement, and makes it a lot easier to validate the data being shared and enforce security boundaries. Simplicity has its value.

As a result, there’s a lot of lingering 9p support kicking around. Turns out Linux can even handle mounting 9p filesystems out of the box. This means that I can deploy a filesystem to my LAN or my localhost by running a process on top of a computer that needs nothing special, and mount it over the network on an unmodified machine – unlike fuse, where you’d need client-specific software to run in order to mount the directory. For instance, let’s mount a 9p filesystem running on my localhost machine, serving requests on 127.0.0.1:564 (tcp) that goes by the name “mountpointname” to /mnt.

$ mount -t 9p \
-o trans=tcp,port=564,version=9p2000.u,aname=mountpointname \
127.0.0.1 \
/mnt

Linux will mount away, and attach to the filesystem as the root user, and by default, attach to that mountpoint again for each local user that attempts to use it. Nifty, right? I think so. The server is able to keep track of per-user access and authorization along with the host OS.

WHEREIN I STYX WITH IT

Since I wanted to push myself a bit more with rust and tokio specifically, I opted to implement the whole stack myself, without third party libraries on the critical path where I could avoid it. The 9p protocol (sometimes called Styx, the original name for it) is incredibly simple. It’s a series of client-to-server requests, each of which receives a server-to-client response. These are, respectively, “T” messages, which transmit a request to the server, and which trigger an “R” message in response (Reply messages). These messages are TLV payloads with a very straightforward structure – so straightforward, in fact, that I was able to implement a working server off nothing more than a handful of man pages.
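
To make “straightforward” concrete, here’s a rough sketch (not arigato’s actual code) of the fixed seven-byte header that every 9p message starts with; the body that follows is type-specific:

struct Header {
    size: u32, // total message length in bytes, header included (little-endian)
    ty: u8,    // message type; e.g. 100 is Tversion, 101 is Rversion
    tag: u16,  // client-chosen id used to match each R reply to its T request
}

fn parse_header(buf: [u8; 7]) -> Header {
    Header {
        size: u32::from_le_bytes([buf[0], buf[1], buf[2], buf[3]]),
        ty: buf[4],
        tag: u16::from_le_bytes([buf[5], buf[6]]),
    }
}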

Later on, after the basics worked, I found a more complete spec page that contains more information about the unix-specific variant that I opted to use (9P2000.u rather than 9P2000), due to the level of Linux-specific support for the 9P2000.u variant over the 9P2000 protocol.

MR ROBOTO

The backend stack over at zoo is rust and tokio running i/o for an HTTP and WebRTC server. I figured I’d pick something fairly similar to write my filesystem with, since 9P can be implemented on basically anything with I/O. That means tokio tcp server bits, which construct and use a 9p server, which has an idiomatic Rusty API that partially abstracts the raw R and T messages, but not so much as to cause issues with hiding implementation possibilities. At each abstraction level, there’s an escape hatch – allowing someone to implement any of the layers if required. I called this framework arigato which can be found over on docs.rs and crates.io.

/// Simplified version of the arigato File trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.File.html
trait File {
    /// OpenFile is the type returned by this File via an Open call.
    type OpenFile: OpenFile;

    /// Return the 9p Qid for this file. A file is the same if the Qid is
    /// the same. A Qid contains information about the mode of the file,
    /// version of the file, and a unique 64 bit identifier.
    fn qid(&self) -> Qid;

    /// Construct the 9p Stat struct with metadata about a file.
    async fn stat(&self) -> FileResult<Stat>;

    /// Attempt to update the file metadata.
    async fn wstat(&mut self, s: &Stat) -> FileResult<()>;

    /// Traverse the filesystem tree.
    async fn walk(&self, path: &[&str]) -> FileResult<(Option<Self>, Vec<Self>)>;

    /// Request that a file's reference be removed from the file tree.
    async fn unlink(&mut self) -> FileResult<()>;

    /// Create a file at a specific location in the file tree.
    async fn create(
        &mut self,
        name: &str,
        perm: u16,
        ty: FileType,
        mode: OpenMode,
        extension: &str,
    ) -> FileResult<Self>;

    /// Open the File, returning a handle to the open file, which handles
    /// file i/o. This is split into a second type since it is genuinely
    /// unrelated -- and the fact that a file is Open or Closed can be
    /// handled by the `arigato` server for us.
    async fn open(&mut self, mode: OpenMode) -> FileResult<Self::OpenFile>;
}

/// Simplified version of the arigato OpenFile trait; this isn't actually
/// the same trait; there's some small cosmetic differences. The
/// actual trait can be found at:
///
/// https://docs.rs/arigato/latest/arigato/server/trait.OpenFile.html
trait OpenFile {
    /// iounit to report for this file. The iounit reported is used for Read
    /// or Write operations to signal, if non-zero, the maximum size that is
    /// guaranteed to be transferred atomically.
    fn iounit(&self) -> u32;

    /// Read some number of bytes up to `buf.len()` from the provided
    /// `offset` of the underlying file. The number of bytes read is
    /// returned.
    async fn read_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;

    /// Write some number of bytes up to `buf.len()` at the provided
    /// `offset` of the underlying file. The number of bytes written
    /// is returned.
    async fn write_at(
        &mut self,
        buf: &mut [u8],
        offset: u64,
    ) -> FileResult<u32>;
}

Thanks, decade ago paultag!

Let’s do it! Let’s use arigato to implement a 9p filesystem we’ll call debugfs that will serve all the debug files shipped according to the Packages metadata from the apt archive. We’ll fetch the Packages file and construct a filesystem based on the reported Build-Id entries. For those who don’t know much about how an apt repo works, here’s the 2-second crash course on what we’re doing. The first is to fetch the Packages file, which is specific to a binary architecture (such as amd64, arm64 or riscv64). That architecture is specific to a component (such as main, contrib or non-free). That component is specific to a suite, such as stable, unstable or any of its aliases (bullseye, bookworm, etc). Let’s take a look at the Packages.xz file for the unstable-debug suite, main component, for all amd64 binaries.

$ curl \
https://deb.debian.org/debian-debug/dists/unstable-debug/main/binary-amd64/Packages.xz \
| unxz

This will return the Debian-style rfc2822-like headers, which is an export of the metadata contained inside each .deb file which apt (or other tools that can use the apt repo format) use to fetch information about debs. Let’s take a look at the debug headers for the netlabel-tools package in unstable – which is a package named netlabel-tools-dbgsym in unstable-debug.

Package: netlabel-tools-dbgsym
Source: netlabel-tools (0.30.0-1)
Version: 0.30.0-1+b1
Installed-Size: 79
Maintainer: Paul Tagliamonte <paultag@debian.org>
Architecture: amd64
Depends: netlabel-tools (= 0.30.0-1+b1)
Description: debug symbols for netlabel-tools
Auto-Built-Package: debug-symbols
Build-Ids: e59f81f6573dadd5d95a6e4474d9388ab2777e2a
Description-md5: a0e587a0cf730c88a4010f78562e6db7
Section: debug
Priority: optional
Filename: pool/main/n/netlabel-tools/netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
Size: 62776
SHA256: 0e9bdb087617f0350995a84fb9aa84541bc4df45c6cd717f2157aa83711d0c60

So here, we can parse the package headers in the Packages.xz file, and store, for each Build-Id, the Filename where we can fetch the .deb from. Each .deb contains a number of files – but we’re only really interested in the files inside the .deb located at or under /usr/lib/debug/.build-id/, which you can find in debugfs under rfc822.rs. It’s crude, and very single-purpose, but I’m feeling a bit lazy.
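
Something like the following rough sketch (this is not the actual rfc822.rs, and it ignores niceties like continuation lines) is all it takes to build that map:

use std::collections::HashMap;

// Map every Build-Id advertised in a Packages file to the pool
// Filename of the .deb that carries the corresponding .debug file.
fn build_id_index(packages: &str) -> HashMap<String, String> {
    let mut index = HashMap::new();
    for stanza in packages.split("\n\n") {
        let mut build_ids: Vec<String> = Vec::new();
        let mut filename: Option<String> = None;
        for line in stanza.lines() {
            if let Some(v) = line.strip_prefix("Build-Ids: ") {
                build_ids.extend(v.split_whitespace().map(String::from));
            } else if let Some(v) = line.strip_prefix("Filename: ") {
                filename = Some(v.to_string());
            }
        }
        if let Some(f) = filename {
            for id in build_ids {
                index.insert(id, f.clone());
            }
        }
    }
    index
}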

Who needs dpkg?!

For folks who haven’t seen it yet, a .deb file is a special type of .ar file that contains (usually) three files inside – debian-binary, control.tar.xz and data.tar.xz. The core of an .ar file is a fixed-size (60 byte) entry header, followed by the specified number of bytes of data.

[8 byte .ar file magic]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
[60 byte entry header]
[N bytes of data]
...

First up was to implement a basic ar parser in ar.rs. Before we get into using it to parse a deb, as a quick diversion, let’s break apart a .deb file by hand – something that is a bit of a rite of passage (or at least it used to be? I’m getting old) during the Debian nm (new member) process, to take a look at where exactly the .debug file lives inside the .deb file.

$ ar x netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ ls
control.tar.xz debian-binary
data.tar.xz netlabel-tools-dbgsym_0.30.0-1+b1_amd64.deb
$ tar --list -f data.tar.xz | grep '.debug$'
./usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
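
Back in ar.rs land, each of those 60-byte entry headers decodes into a handful of ASCII fields; a rough sketch (field layout per the classic Unix ar format, not the actual ar.rs) looks like:

struct ArEntry {
    name: String, // bytes 0..16, ASCII, space padded
    mtime: u64,   // bytes 16..28, ASCII decimal
    uid: u32,     // bytes 28..34, ASCII decimal
    gid: u32,     // bytes 34..40, ASCII decimal
    mode: u32,    // bytes 40..48, ASCII octal
    size: u64,    // bytes 48..58, ASCII decimal; that many data bytes follow
}

// Bytes 58..60 hold the two-byte magic terminator "`\n".
fn parse_entry(hdr: &[u8; 60]) -> Option<ArEntry> {
    let field = |range: std::ops::Range<usize>| {
        std::str::from_utf8(&hdr[range])
            .ok()
            .map(|s| s.trim_end().to_string())
    };
    Some(ArEntry {
        name: field(0..16)?,
        mtime: field(16..28)?.parse().ok()?,
        uid: field(28..34)?.parse().ok()?,
        gid: field(34..40)?.parse().ok()?,
        mode: u32::from_str_radix(&field(40..48)?, 8).ok()?,
        size: field(48..58)?.parse().ok()?,
    })
}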

Since we know quite a bit about the structure of a .deb file, and I had to implement support from scratch anyway, I opted to implement a (very!) basic debfile parser using HTTP Range requests. HTTP Range requests, if supported by the server (denoted by an accept-ranges: bytes HTTP header in response to an HTTP HEAD request to that file), mean that we can add a header such as range: bytes=8-68 to specifically request that the returned GET body be the byte range provided (in the above case, the bytes starting from byte offset 8 until byte offset 68). This means we can fetch just the ar file entry headers from the .deb file until we get to the file inside the .deb we are interested in (in our case, the data.tar.xz file) – at which point we can request the body of that file with a final range request. I wound up writing a struct to handle a read_at-style API surface in hrange.rs, which we can pair with ar.rs above and start to find our data in the .deb remotely without downloading and unpacking the .deb at all.
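
As a hedged sketch of what a single ranged read looks like (this isn’t the real hrange.rs; it assumes the reqwest crate and a server that honors Range):

// Fetch bytes [start, end] (inclusive) of a remote file with an HTTP
// Range request; callers are expected to have checked accept-ranges.
async fn fetch_range(url: &str, start: u64, end: u64) -> reqwest::Result<Vec<u8>> {
    let body = reqwest::Client::new()
        .get(url)
        .header("Range", format!("bytes={}-{}", start, end))
        .send()
        .await?
        .error_for_status()?
        .bytes()
        .await?;
    Ok(body.to_vec())
}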

After we have the body of the data.tar.xz coming back through the HTTP response, we get to pipe it through an xz decompressor (this kinda sucked in Rust, since a tokio AsyncRead is not the same as an http Body response is not the same as std::io::Read, is not the same as an async (or sync) Iterator is not the same as what the xz2 crate expects; leading me to read blocks of data to a buffer and stuff them through the decoder by looping over the buffer for each lzma2 packet in a loop), and tarfile parser (similarly troublesome). From there we get to iterate over all entries in the tarfile, stopping when we reach our file of interest. Since we can’t seek, but gdb needs to, we’ll pull it out of the stream into a Cursor<Vec<u8>> in-memory and pass a handle to it back to the user.
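
Condensed down, the scan over data.tar.xz looks roughly like this – a loose sketch using the xz2 and tar crates, assuming the whole body has already been buffered (the real code streams it):

use std::io::Read;

// Walk a data.tar.xz body and pull out the first entry whose path ends
// with `want` (e.g. a .build-id/xx/yyyy.debug path), fully in memory.
fn find_debug_file(data_tar_xz: &[u8], want: &str) -> std::io::Result<Option<Vec<u8>>> {
    let decoder = xz2::read::XzDecoder::new(data_tar_xz);
    let mut archive = tar::Archive::new(decoder);
    for entry in archive.entries()? {
        let mut entry = entry?;
        if entry.path()?.to_string_lossy().ends_with(want) {
            let mut buf = Vec::new();
            entry.read_to_end(&mut buf)?;
            return Ok(Some(buf));
        }
    }
    Ok(None)
}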

From here on out it’s a matter of gluing together a File traited struct in debugfs, and serving the filesystem over TCP using arigato. Done deal!

A quick diversion about compression

I was originally hoping to avoid transferring the whole tar file over the network (and therefore also reading the whole debug file into ram, which objectively sucks), but quickly hit issues figuring out a way to seek around an xz file. What’s interesting is xz has a great primitive to solve this specific problem (specifically, use a block size that allows you to seek to the block just before your desired seek position, discarding at most block size - 1 bytes), but data.tar.xz files generated by dpkg appear to have a single mega-huge block for the whole file. I don’t know why I would have expected any different, in retrospect. That means that this now devolves into the base case of “How do I seek around an lzma2 compressed data stream”; which is a lot more complex of a question.

Thankfully, notoriously brilliant tianon was nice enough to introduce me to Jon Johnson who did something super similar – adapted a technique to seek inside a compressed gzip file, which lets his service oci.dag.dev seek through Docker container images super fast based on some prior work such as soci-snapshotter, gztool, and zran.c. He also pulled this party trick off for apk based distros over at apk.dag.dev, which seems apropos. Jon was nice enough to publish a lot of his work on this specifically in a central place under the name “targz” on his GitHub, which has been a ton of fun to read through.

The gist is that, by dumping the decompressor’s state (window of previous bytes, in-memory data derived from the last N-1 bytes) at specific “checkpoints” along with the compressed data stream offset in bytes and decompressed offset in bytes, one can seek to that checkpoint in the compressed stream and pick up where you left off – creating a similar “block” mechanism against the wishes of gzip. It means you’d need to do an O(n) run over the file, but every request after that will be sped up according to the number of checkpoints you’ve taken.
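
Python's zlib exposes just enough to sketch the idea as a toy (decompressobj.copy() snapshots the decompressor state; a real implementation would serialize these checkpoints to disk rather than keep live objects around):

import zlib

CHUNK = 64 * 1024

def build_checkpoints(compressed: bytes, every: int = 4):
    # One O(n) pass over the stream, snapshotting the decompressor
    # state and both offsets every few input chunks.
    d = zlib.decompressobj(wbits=31)  # wbits=31: expect gzip framing
    checkpoints = []  # (compressed offset, decompressed offset, state)
    produced = 0
    for i in range(0, len(compressed), CHUNK):
        if (i // CHUNK) % every == 0:
            checkpoints.append((i, produced, d.copy()))
        produced += len(d.decompress(compressed[i:i + CHUNK]))
    return checkpoints

def read_at(compressed, checkpoints, target: int, n: int) -> bytes:
    # Resume from the closest checkpoint at or before `target`.
    off, base, state = max(
        (c for c in checkpoints if c[1] <= target), key=lambda c: c[1])
    d = state.copy()  # keep the checkpoint itself reusable
    out = bytearray()
    while len(out) < (target - base) + n and off < len(compressed):
        out += d.decompress(compressed[off:off + CHUNK])
        off += CHUNK
    return bytes(out[target - base:target - base + n])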

Given the complexity of xz and lzma2, I don’t think this is possible for me at the moment – especially given most of the files I’ll be requesting will never be read from again – particularly when I can “just” cache the debug header by Build-Id. I want to implement this (because I’m generally curious and Jon has a way of getting someone excited about compression schemes, which is not a sentence I thought I’d ever say out loud), but for now I’m going to move on without this optimization. Such a shame, since it kills a lot of the work that went into seeking around the .deb file in the first place, given the debian-binary and control.tar.xz members are so small.

The Good

First, the good news right? It works! That’s pretty cool. I’m positive my younger self would be amused and happy to see this working; as is current day paultag. Let’s take debugfs out for a spin! First, we need to mount the filesystem. It even works on an entirely unmodified, stock Debian box on my LAN, which is huge. Let’s take it for a spin:

$ mount \
-t 9p \
-o trans=tcp,version=9p2000.u,aname=unstable-debug \
192.168.0.2 \
/usr/lib/debug/.build-id/

And, let’s prove to ourselves that this actually mounted before we go trying to use it:

$ mount | grep build-id
192.168.0.2 on /usr/lib/debug/.build-id type 9p (rw,relatime,aname=unstable-debug,access=user,trans=tcp,version=9p2000.u,port=564)

Slick. We’ve got an open connection to the server, where our host will keep a connection alive as root, attached to the filesystem provided in aname. Let’s take a look at it.

$ ls /usr/lib/debug/.build-id/
00 0d 1a 27 34 41 4e 5b 68 75 82 8e 9b a8 b5 c2 ce db e7 f3
01 0e 1b 28 35 42 4f 5c 69 76 83 8f 9c a9 b6 c3 cf dc e7 f4
02 0f 1c 29 36 43 50 5d 6a 77 84 90 9d aa b7 c4 d0 dd e8 f5
03 10 1d 2a 37 44 51 5e 6b 78 85 91 9e ab b8 c5 d1 de e9 f6
04 11 1e 2b 38 45 52 5f 6c 79 86 92 9f ac b9 c6 d2 df ea f7
05 12 1f 2c 39 46 53 60 6d 7a 87 93 a0 ad ba c7 d3 e0 eb f8
06 13 20 2d 3a 47 54 61 6e 7b 88 94 a1 ae bb c8 d4 e1 ec f9
07 14 21 2e 3b 48 55 62 6f 7c 89 95 a2 af bc c9 d5 e2 ed fa
08 15 22 2f 3c 49 56 63 70 7d 8a 96 a3 b0 bd ca d6 e3 ee fb
09 16 23 30 3d 4a 57 64 71 7e 8b 97 a4 b1 be cb d7 e4 ef fc
0a 17 24 31 3e 4b 58 65 72 7f 8c 98 a5 b2 bf cc d8 e4 f0 fd
0b 18 25 32 3f 4c 59 66 73 80 8d 99 a6 b3 c0 cd d9 e5 f1 fe
0c 19 26 33 40 4d 5a 67 74 81 8e 9a a7 b4 c1 ce da e6 f2 ff

Outstanding. Let’s try using gdb to debug a binary that was provided by the Debian archive, and see if it’ll load the ELF by build-id from the right .deb in the unstable-debug suite:

$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)

Yes! Yes it will!

$ file /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
/usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, interpreter *empty*, BuildID[sha1]=e59f81f6573dadd5d95a6e4474d9388ab2777e2a, for GNU/Linux 3.2.0, with debug_info, not stripped

The Bad

Linux’s support for 9p is mainline, which is great, but it’s not robust. Network issues or server restarts will wedge the mountpoint (Linux can’t reconnect when the TCP connection breaks), and things that work fine on local filesystems get translated in a way that causes a lot of network chatter – for instance, just due to the way the syscalls are translated, doing an ls will result in a stat call for each file in the directory, even though Linux had just received a stat entry for every file while it was resolving the directory names. On top of that, Linux will serialize all I/O with the server, so there are no concurrent requests for file information, writes, or reads pending at the same time to the server; and read and write throughput will degrade as latency increases due to increasing round-trip time, even though there are offsets included in the read and write calls. It works well enough, but is frustrating to run up against, since there’s not a lot you can do server-side to help with this beyond implementing the 9P2000.L variant (which maybe is worth it).

The Ugly

Unfortunately, we don’t know the file size(s) until we’ve actually opened the underlying tar file and found the correct member, so for most files, we don’t know the real size to report when getting a stat. We can’t parse the tarfiles for every stat call, since that’d make ls even slower (bummer). The only hiccup is that when I report a filesize of zero, gdb throws a bit of a fit; let’s try with a size of 0 to start:

$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 0 Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
warning: Discarding section .note.gnu.build-id which has a section size (24) larger than the file size [in module /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug]
[...]

This obviously won’t work, since gdb will throw away all our hard work because of stat’s output, and loading the real size of the underlying file for every stat won’t work either. That only leaves us with hardcoding a file size and hoping nothing else breaks significantly as a result. Let’s try it again:

$ ls -lah /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
-r--r--r-- 1 root root 954M Dec 31 1969 /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug
$ gdb -q /usr/sbin/netlabelctl
Reading symbols from /usr/sbin/netlabelctl...
Reading symbols from /usr/lib/debug/.build-id/e5/9f81f6573dadd5d95a6e4474d9388ab2777e2a.debug...
(gdb)

Much better. I mean, terrible but better. Better for now, anyway.

Kilroy was here

Do I think this is a particularly good idea? I mean; kinda. I’m probably going to make some fun 9p arigato-based filesystems for use around my LAN, but I don’t think I’ll be moving to use debugfs until I can figure out how to make the connection more resilient to changing networks and server restarts, and fix the I/O performance. I think it was a useful exercise and is a pretty great hack, but I don’t think this’ll be shipping anywhere anytime soon.

Along with me publishing this post, I’ve pushed up all my repos; so you should be able to play along at home! There’s a lot more work to be done on arigato, but it does handshake and successfully export a working 9P2000.u filesystem. Check it out on my GitHub at arigato, debugfs and also on crates.io and docs.rs.

At least I can say I was here and I got it working after all these years.

13 April, 2024 01:27PM

April 12, 2024

NOKUBI Takatsugu

mailman3-web error when upgrading to bookworm

I tried to upgrade a bullseye machine to bookworm, and got the following error:

File "/usr/lib/python3/dist-packages/django/contrib/auth/mixins.py", line 5, in <module>
from django.contrib.auth.views import redirect_to_login
File "/usr/lib/python3/dist-packages/django/contrib/auth/views.py", line 20, in <module>
from django.utils.http import (
ImportError: cannot import name 'url_has_allowed_host_and_scheme' from 'django.utils.http' (/usr/lib/python3/dist-packages/django/utils/http.py)

During handling of the above exception, another exception occurred:

It is similar to #1000810, but it is already closed.

My solution is:

  • apt remove mailman3-web
    • keep db and config files (do not purge)
  • apt autoremove
    • remove django related packages
  • apt install mailman3-web mailman3-full

I tried to send a followup to the bug report, but it returned '550 Unknown or archived bug' …

12 April, 2024 01:34PM by knok

April 11, 2024

hackergotchi for Jonathan McDowell

Jonathan McDowell

Sorting out backup internet #1: recursive DNS

I work from home these days, and my nearest office is over 100 miles away, 3 hours door to door if I travel by train (and, to be honest, probably not a lot faster given rush hour traffic if I drive). So I’m reliant on a functional internet connection in order to be able to work. I’m lucky to have access to Openreach FTTP, provided by Aquiss, but I worry about what happens if there’s a cable cut somewhere or some other long lasting problem. Worst case I could tether to my work phone, or try to find some local coworking space to use while things get sorted, but I felt like arranging a backup option was a wise move.

Step 1 turned out to be sorting out recursive DNS. It’s been many moons since I had to deal with running DNS in a production setting, and I’ve mostly done my best to avoid doing it at home too. dnsmasq has done a decent job at providing for my needs over the years, covering DHCP, DNS (+ tftp for my test device network). However I just let it forward to my ISP’s nameservers, which means if that link goes down it’ll no longer be able to resolve anything outside the house.

One option would have been to point to a different recursive DNS server (Cloudflare’s 1.1.1.1 or Google’s Public DNS being the common choices), but I’ve no desire to share my lookup information with them. As another approach I could have done some sort of failover of resolv.conf when the primary network went down, but then I would have had to get into moving files around based on networking status, and that felt a bit clunky.

So I decided to finally setup a proper local recursive DNS server, which is something I’ve kinda meant to do for a while but never had sufficient reason to look into. Last time I did this I did it with BIND 9 but there are more options these days, and I decided to go with unbound, which is primarily focused on recursive DNS.

One extra wrinkle, pointed out by Lars, is that having dynamic name information from DHCP hosts is exceptionally convenient. I’ve kept dnsmasq as the local DHCP server, so I wanted to be able to forward local queries there.

I’m doing all of this on my RB5009, running Debian. Installing unbound was a simple matter of apt install unbound. I needed 2 pieces of configuration over the default, one to enable recursive serving for the house networks, and one to enable forwarding of queries for the local domain to dnsmasq. I originally had specified the wildcard address for listening, but this caused problems with the fact my router has many interfaces and would sometimes respond from a different address than the request had come in on.

/etc/unbound/unbound.conf.d/network-resolver.conf
server:
  interface: 192.0.2.1
  interface: 2001:db8:f00d::1
  access-control: 192.0.2.0/24 allow
  access-control: 2001:db8:f00d::/56 allow


/etc/unbound/unbound.conf.d/local-to-dnsmasq.conf
server:
  domain-insecure: "example.org"
  do-not-query-localhost: no

forward-zone:
  name: "example.org"
  forward-addr: 127.0.0.1@5353


I then had to configure dnsmasq to not listen on port 53 (so unbound could), respond to requests on the loopback interface (I have dnsmasq restricted to only explicitly listed interfaces), and to hand out unbound as the appropriate nameserver in DHCP requests - once dnsmasq is not listening on port 53 it no longer does this by default.

/etc/dnsmasq.d/behind-unbound
interface=lo
port=5353
dhcp-option=option6:dns-server,[2001:db8:f00d::1]
dhcp-option=option:dns-server,192.0.2.1


With these minor changes in place I now have local recursive DNS being handled by unbound, without losing dynamic local DNS for DHCP hosts. As an added bonus I now get 10/10 on Test IPv6 - previously I was getting dinged on the ability for my DNS server to resolve purely IPv6 reachable addresses.
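
To convince myself that both halves work (recursion for the outside world, forwarding for local names), a quick sketch with the third-party dnspython module does the trick; the hostname in the second query is a placeholder:

import dns.resolver  # third-party: python3-dnspython

r = dns.resolver.Resolver(configure=False)
r.nameservers = ["192.0.2.1"]  # unbound, per the config above

# An external name: unbound should recurse for this itself.
print(r.resolve("debian.org", "A")[0])

# A local DHCP host (placeholder name): unbound should forward
# this query to dnsmasq on 127.0.0.1:5353.
print(r.resolve("somehost.example.org", "A")[0])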

Next step, actually sorting out a backup link.

11 April, 2024 05:41PM

Reproducible Builds

Reproducible Builds in March 2024

Welcome to the March 2024 report from the Reproducible Builds project! In our reports, we attempt to outline what we have been up to over the past month, as well as mentioning some of the important things happening more generally in software supply-chain security. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Arch Linux minimal container userland now 100% reproducible
  2. Validating Debian’s build infrastructure after the XZ backdoor
  3. Making Fedora Linux (more) reproducible
  4. Increasing Trust in the Open Source Supply Chain with Reproducible Builds and Functional Package Management
  5. Software and source code identification with GNU Guix and reproducible builds
  6. Two new Rust-based tools for post-processing determinism
  7. Distribution work
  8. Mailing list highlights
  9. Website updates
  10. Delta chat clients now reproducible
  11. diffoscope updates
  12. Upstream patches
  13. Reproducibility testing framework

Arch Linux minimal container userland now 100% reproducible

In remarkable news, Reproducible Builds developer kpcyrd reported that the Arch Linux “minimal container userland” is now 100% reproducible after work by developers dvzrv and Foxboron on the one remaining package. This represents a “real world”, widely-used Linux distribution being reproducible.

Their post, which kpcyrd suffixed with the question “now what?”, continues on to outline some potential next steps, including validating whether the container image itself could be reproduced bit-for-bit. The post, which was itself a followup for an Arch Linux update earlier in the month, generated a significant number of replies.


Validating Debian’s build infrastructure after the XZ backdoor

From our mailing list this month, Vagrant Cascadian wrote about being asked about trying to perform concrete reproducibility checks for recent Debian security updates, in an attempt to gain some confidence about Debian’s build infrastructure given that they performed builds in environments running the high-profile XZ vulnerability.

Vagrant reports (with some caveats):

So far, I have not found any reproducibility issues; everything I tested I was able to get to build bit-for-bit identical with what is in the Debian archive.

That is to say, reproducibility testing permitted Vagrant and Debian to claim with some confidence that builds performed when this vulnerable version of XZ was installed were not interfered with.


Making Fedora Linux (more) reproducible

In March, Davide Cavalca gave a talk at the 2024 Southern California Linux Expo (aka SCALE 21x) about the ongoing effort to make the Fedora Linux distribution reproducible.

Documented in more detail on Fedora’s website, the talk touched on topics such as the specifics of implementing reproducible builds in Fedora, the challenges encountered, the current status and what’s coming next. (YouTube video)


Increasing Trust in the Open Source Supply Chain with Reproducible Builds and Functional Package Management

Julien Malka published a brief but interesting paper in the HAL open archive on Increasing Trust in the Open Source Supply Chain with Reproducible Builds and Functional Package Management:

Functional package managers (FPMs) and reproducible builds (R-B) are technologies and methodologies that are conceptually very different from the traditional software deployment model, and that have promising properties for software supply chain security. This thesis aims to evaluate the impact of FPMs and R-B on the security of the software supply chain and propose improvements to the FPM model to further improve trust in the open source supply chain. PDF

Julien’s paper poses a number of research questions on how the model of distributions such as GNU Guix and NixOS can “be leveraged to further improve the safety of the software supply chain”, etc.


Software and source code identification with GNU Guix and reproducible builds

In a long line of commendably detailed blog posts, Ludovic Courtès, Maxim Cournoyer, Jan Nieuwenhuizen and Simon Tournier have together published two interesting posts on the GNU Guix blog this month. In early March, they wrote about software and source code identification and how that might be performed using Guix, rhetorically posing the questions: “What does it take to ‘identify software’? How can we tell what software is running on a machine to determine, for example, what security vulnerabilities might affect it?”

Later in the month, Ludovic Courtès wrote a solo post describing adventures on the quest for long-term reproducible deployment. Ludovic’s post touches on GNU Guix’s aim to support “time travel”, the ability to reliably (and reproducibly) revert to an earlier point in time, employing the iconic image of Harold Lloyd hanging off the clock in Safety Last! (1925) to poetically illustrate both the slapstick nature of current modern technology and the gymnastics required to navigate hazards of our own making.


Two new Rust-based tools for post-processing determinism

Zbigniew Jędrzejewski-Szmek announced add-determinism, a work-in-progress reimplementation of the Reproducible Builds project’s own strip-nondeterminism tool in the Rust programming language, intended to be used as a post-processor in RPM-based distributions such as Fedora.

In addition, Yossi Kreinin published a blog post titled “refix: fast, debuggable, reproducible builds” that describes a tool that post-processes binaries in such a way that they are still debuggable with gdb, etc. Yossi’s post details the motivation and techniques behind the (fast) performance of the tool.


Distribution work

In Debian this month, since the testing framework no longer varies the build path, James Addison performed a bulk downgrade of the bug severity for issues filed with a level of normal to a new level of wishlist. In addition, 28 reviews of Debian packages were added, 38 were updated and 23 were removed this month adding to ever-growing knowledge about identified issues. As part of this effort, a number of issue types were updated, including Chris Lamb adding a new ocaml_include_directories toolchain issue [] and James Addison adding a new filesystem_order_in_java_jar_manifest_mf_include_resource issue [] and updating the random_uuid_in_notebooks_generated_by_nbsphinx to reference a relevant discussion thread [].

In addition, Roland Clobus posted his 24th status update of reproducible Debian ISO images. Roland highlights that the images for Debian unstable often cannot be generated due to changes in that distribution related to the 64-bit time_t transition.

Lastly, Bernhard M. Wiedemann posted another monthly update for his reproducibility work in openSUSE.


Mailing list highlights

Elsewhere on our mailing list this month:


Website updates

A number of improvements were made to our website this month, including:

  • Pol Dellaiera noticed the frequent need to correctly cite the website itself in academic work. To facilitate easier citation across multiple formats, Pol contributed a Citation File Format (CFF) file. As a result, an export in BibTeX format is now available in the Academic Publications section. Pol encourages community contributions to further refine the CITATION.cff file. Pol also added a substantial new section to the “buy in” page documenting the role of Software Bills of Materials (SBOMs) and ephemeral development environments. [][]

  • Bernhard M. Wiedemann added a new “commandments” page to the documentation [][] and fixed some incorrect YAML elsewhere on the site [].

  • Chris Lamb added three recent academic papers to the publications page of the website. []

  • Mattia Rizzolo and Holger Levsen collaborated to add Infomaniak as a sponsor of amd64 virtual machines. [][][]

  • Roland Clobus updated the “stable outputs” page, dropping version numbers from Python documentation pages [] and noting that Python’s set data structure is also affected by the PYTHONHASHSEED functionality. []


Delta chat clients now reproducible

Delta Chat, an open source messaging application that can work over email, announced this month that the Rust-based core library underlying the Delta Chat application is now reproducible.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made a number of changes such as uploading versions 259, 260 and 261 to Debian and made the following additional changes:

  • New features:

    • Add support for the zipdetails tool from the Perl distribution. Thanks to Fay Stegerman and Larry Doolittle et al. for the pointer and thread about this tool. []
  • Bug fixes:

    • Don’t identify Redis database dumps as GNU R database files based simply on their filename. []
    • Add a missing call to File.recognizes so we actually perform the filename check for GNU R data files. []
    • Don’t crash if we encounter an .rdb file without an equivalent .rdx file. (#1066991)
    • Correctly check for 7z being available—and not lz4—when testing 7z. []
    • Prevent a traceback when comparing a contentful .pyc file with an empty one. []
  • Testsuite improvements:

    • Fix .epub tests after supporting the new zipdetails tool. []
    • Don’t use parenthesis within test “skipping…” messages, as PyTest adds its own parenthesis. []
    • Factor out Python version checking in test_zip.py. []
    • Skip some Zip-related tests under Python 3.10.14, as a potential regression may have been backported to the 3.10.x series. []
    • Actually test 7z support in the test_7z set of tests, not the lz4 functionality. (Closes: reproducible-builds/diffoscope#359). []

In addition, Fay Stegerman updated diffoscope’s monkey patch for supporting the unusual Mozilla ZIP file format after Python’s zipfile module changed to detect potentially insecure overlapping entries within .zip files. (#362)

Chris Lamb also updated the trydiffoscope command line client, dropping a build-dependency on the deprecated python3-distutils package to fix Debian bug #1065988 [], taking a moment to also refresh the packaging to the latest Debian standards []. Finally, Vagrant Cascadian submitted an update for diffoscope version 260 in GNU Guix. []


Upstream patches

This month, we wrote a large number of patches, including:

Bernhard M. Wiedemann used reproducibility-tooling to detect and fix packages that added changes in their %check section, thus failing when built with the --no-checks option. Only half of all openSUSE packages were tested so far, but a large number of bugs were filed, including ones against caddy, exiv2, gnome-disk-utility, grisbi, gsl, itinerary, kosmindoormap, libQuotient, med-tools, plasma6-disks, pspp, python-pypuppetdb, python-urlextract, rsync, vagrant-libvirt and xsimd.

Similarly, Jean-Pierre De Jesus DIAZ employed reproducible builds techniques in order to test a proposed refactor of the ath9k-htc-firmware package. As the change produced bit-for-bit identical binaries to the previously shipped pre-built binaries:

I don’t have the hardware to test this firmware, but the build produces the same hashes for the firmware so it’s safe to say that the firmware should keep working.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility.

In March, an enormous number of changes were made by Holger Levsen:

  • Debian-related changes:

    • Sleep less after a so-called “404” package state has occurred. []
    • Schedule package builds more often. [][]
    • Regenerate all our HTML indexes every hour, but only every 12h for the released suites. []
    • Create and update unstable and experimental base systems on armhf again. [][]
    • Don’t reschedule so many “depwait” packages due to the current size of the i386 architecture queue. []
    • Redefine our scheduling thresholds and amounts. []
    • Schedule untested packages with a higher priority, otherwise slow architectures cannot keep up with the experimental distribution growing. []
    • Only create the stats_buildinfo.png graph once per day. [][]
    • Reproducible Debian dashboard: refactoring, update several more static stats only every 12h. []
    • Document how to use systemctl with new systemd-based services. []
    • Temporarily disable armhf and i386 continuous integration tests in order to get some stability back. []
    • Use the deb.debian.org CDN everywhere. []
    • Remove the rsyslog logging facility on bookworm systems. []
    • Add zst to the list of packages which are false-positive diskspace issues. []
    • Detect failures to bootstrap Debian base systems. []
  • Arch Linux-related changes:

    • Temporarily disable builds because the pacman package manager is broken. [][]
    • Split reproducible_html_live_status and split the scheduling timing. [][][]
    • Improve handling when database is locked. [][]
  • Misc changes:

    • Show failed services that require manual cleanup. [][]
    • Integrate two new Infomaniak nodes. [][][][]
    • Improve IRC notifications for artifacts. []
    • Run diffoscope in different systemd slices. []
    • Run the node health check more often, as it can now repair some issues. [][]
    • Also include the string Bot in the userAgent for Git. (Re: #929013). []
    • Document increased tmpfs size on our OUSL nodes. []
    • Disable memory account for the reproducible_build service. [][]
    • Allow 10 times as many open files for the Jenkins service. []
    • Set OOMPolicy=continue and OOMScoreAdjust=-1000 for both the Jenkins and the reproducible_build service. []

Mattia Rizzolo also made the following changes:

  • Debian-related changes:

    • Define a systemd slice to group all relevant services. [][]
    • Add a bunch of quotes in scripts to assuage the shellcheck tool. []
    • Add stats on how many packages have been built today so far. []
    • Instruct systemd-run to handle diffoscope’s exit codes specially. []
    • Prefer the pgrep tool over grepping the output of ps. []
    • Re-enable a couple of i386 and armhf architecture builders. [][]
    • Fix some stylistic issues flagged by the Python flake8 tool. []
    • Cease scheduling Debian unstable and experimental on the armhf architecture due to the time_t transition. []
    • Start a few more i386 & armhf workers. [][][]
    • Temporarily skip pbuilder updates in the unstable distribution, but only on the armhf architecture. []
  • Other changes:

    • Perform some large-scale refactoring on how the systemd service operates. [][]
    • Move the list of workers into a separate file so it’s accessible to a number of scripts. []
    • Refactor the powercycle_x86_nodes.py script to use the new IONOS API and its new Python bindings. []
    • Also fix nph-logwatch after the worker changes. []
    • Do not install the stunnel tool anymore, it shouldn’t be needed by anything anymore. []
    • Move temporary directories related to Arch Linux into a single directory for clarity. []
    • Update the arm64 architecture host keys. []
    • Use a common Postfix configuration. []

The following changes were also made by:

  • Jan-Benedict Glaw:

    • Initial work to clean up a messy NetBSD-related script. [][]
  • Roland Clobus:

    • Show the installer log if the installer fails to build. []
    • Avoid the minus character (i.e. -) in a variable in order to allow for tags in openQA. []
    • Update the schedule of Debian live image builds. []
  • Vagrant Cascadian:

    • Maintenance on the virt* nodes is completed so bring them back online. []
    • Use the fully qualified domain name in configuration. []

Node maintenance was also performed by Holger Levsen, Mattia Rizzolo [][] and Vagrant Cascadian [][][][].



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

11 April, 2024 04:49PM

hackergotchi for Wouter Verhelst

Wouter Verhelst

OpenSC and the Belgian eID

Getting the Belgian eID to work on Linux systems should be fairly easy, although some people do struggle with it.

For that reason, there is a lot of third-party documentation out there in the form of blog posts, wiki pages, and other kinds of things. Unfortunately, some of this documentation is simply wrong. It was written by people who played around with things until they kind of worked, so sometimes you get a situation where something that used to work in the past (but wasn't really necessary) has now stopped working, yet it's still repeated in a number of locations as though it were the gospel.

And then people follow these instructions and now things don't work anymore.

One of these revolves around OpenSC.

OpenSC is an open source smartcard library that has support for a pretty large number of smartcards, amongst which the Belgian eID. It provides a PKCS#11 module as well as a number of supporting tools.

For those not in the know, PKCS#11 is a standardized C API for offloading cryptographic operations. It is an API that can be used when talking to a hardware cryptographic module, in order to make that module perform some actions, and it is especially popular in the open source world, with support in NSS, amongst others. NSS is written and maintained by Mozilla, and is a low-level cryptographic library that is used by Firefox (on all platforms it supports) as well as by Google Chrome and other browsers based on it (but only on Linux, and as I understand it, only for talking to smartcards; their BoringSSL library is used for other things).

The official eID software that we ship through eid.belgium.be, also known as "BeID", provides a PKCS#11 module for the Belgian eID, as well as a number of support tools to make interacting with the card easier, such as the "eID viewer", which provides the ability to read data from the card, and validate their signatures. While the very first public version of this eID PKCS#11 module was originally based on OpenSC, it has since been reimplemented as a PKCS#11 module in its own right, with no lineage to OpenSC whatsoever anymore.

About five years ago, the Belgian eID card was renewed. At the time, a new physical appearance was the most obvious difference with the old card, but there were also some technical, on-chip, differences that are not so apparent. The most important one here, although it is not the only one, is the fact that newer eID cards now use NIST P-384 elliptic-curve private keys, rather than the RSA-based ones that were used in the past. This change required some changes to any PKCS#11 module that supports the eID; both the BeID one, as well as the OpenSC card-belpic driver that is written in support of the Belgian eID.

Obviously, the required changes were implemented for the BeID module; however, the OpenSC card-belpic driver was not updated. While I did do some preliminary work on the required changes, I was unable to get it to work, and eventually other things took up my time so I never finished the implementation. If someone would like to finish the work that I started, the preliminary patch that I wrote could be a good start -- but like I said, it doesn't yet work. Also, you'll probably be interested in the official documentation of the eID card.

Unfortunately, in the mean time someone added the Applet 1.8 ATR to the card-belpic.c file, without also implementing the changes required for the driver to actually support the newer eID cards. The result of this is that if you have OpenSC installed in NSS for either Firefox or any Chromium-based browser, and it gets picked up before the BeID PKCS#11 module, then NSS will stop looking and pass all crypto operations to the OpenSC PKCS#11 module rather than to the official eID PKCS#11 module, and things will not work at all, causing a lot of confusion.

I have therefore taken the following two steps:

  1. The official eID packages now conflict with the OpenSC PKCS#11 module. Specifically only the PKCS#11 module, not the rest of OpenSC, so you can theoretically still use its tools. This means that once we release this new version of the eID software, when you do an upgrade and you have OpenSC installed, it will remove the PKCS#11 module and anything that depends on it. This is normal and expected.
  2. I have filed a pull request against OpenSC that removes the Applet 1.8 ATR from the driver, so that OpenSC will stop claiming that it supports the 1.8 applet.

When the pull request is accepted, we will update the official eID software to make the conflict versioned, so that as soon as it works again you will again be able to install the OpenSC and BeID packages at the same time.

In the mean time, if you have the OpenSC PKCS#11 module installed on your system, and your eID authentication does not work, try removing it.

11 April, 2024 09:33AM

April 09, 2024

Ian Jackson

Why we’ve voted No to CfD for Derril Water solar farm

[personal profile] ceb and I are members of the Derril Water Solar Park cooperative.

We were recently invited to vote on whether the coop should bid for a Contract for Difference, in a government green electricity auction.

We’ve voted No.

“Green electricity” from your mainstream supplier is a lie

For a while [personal profile] ceb and I have wanted to contribute directly to green energy provision. This isn’t really possible in the mainstream consumer electricity market.

Mainstream electricity suppliers’ “100% green energy” tariffs are pure greenwashing. In a capitalist boondoggle, they basically “divvy up” the electricity so that customers on the (typically more expensive) “green” tariff “get” the green electricity, and the other customers “get” whatever is left. (Of course the electricity is actually all mixed up by the National Grid.) There are fewer people signed up for these tariffs than there is green power generated, so this basically means signing up for a “green” tariff has no effect whatsoever, other than giving evil people more money.

Ripple

About a year ago we heard about Ripple. The structure is a little complicated, but the basic upshot is:

Ripple promote and manage renewable energy schemes. The schemes themselves are each an individual company; the company is largely owned by a co-operative. The co-op is owned by consumers of electricity in the UK. To stop the co-operative being a purely financial investment scheme, share ownership is limited according to your electricity usage. The electricity is sold on the open market, and the profits are used to offset members’ electricity bills. (One gotcha from all of this is that for this to work your electricity billing provider has to be signed up with Ripple, but ours, Octopus, is.)

It seemed to us that this was a way for us to directly cause (and pay for!) the actual generation of green electricity.

So, we bought shares in one these co-operatives: we are co-owners of the Derril Water Solar Farm. We signed up for the maximum: funding generating capacity corresponding to 120% of our current electricity usage. We paid a little over £5000 for our shares.

Contracts for Difference

The UK has a renewable energy subsidy scheme, which goes by the name of Contracts for Difference. The idea is that a renewable energy generation company bids in advance, saying that they’ll sell their electricity at Y price, for the duration of the contract (15 years in the current round). The lowest bids win. All the electricity from the participating infrastructure is sold on the open market, but if the market price is low the government makes up the difference, and if the price is high, the government takes the winnings.
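
As a worked example with made-up numbers: suppose the contract is struck at £50/MWh. If the market price falls to £40/MWh, the government tops up the missing £10/MWh; if it rises to £70/MWh, the generator pays the £20/MWh excess back. Either way, the generator effectively receives £50/MWh for the life of the contract.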

This is supposedly good for giving a stable investment environment, since the price the developer is going to get now doesn’t depend on the electricity market over the next 15 years. The CfD system is supposed to encourage development, so you can only apply before you’ve commissioned your generation infrastructure.

Ripple and CfD

Ripple recently invited us to agree that the Derril Water co-operative should bid in the current round of CfDs.

If this goes ahead, and we are one of the auction’s winners, the result would be that, instead of selling our electricity at the market price, we’ll sell it at the fixed CfD price.

This would mean that our return on our investment (which show up as savings on our electricity bills) would be decoupled from market electricity prices, and be much more predictable.

They can’t tell us the price they’d want to bid at, and future electricity prices are rather hard to predict, but it’s clear from the accompanying projections that they think we’d be better off on average with a CfD.

The documentation is very full of financial projections and graphs; other factors aren’t really discussed in any detail.

The rules of the co-op didn’t require them to hold a vote, but very sensibly, for such a fundamental change in the model, they decided to treat it roughly the same way as for a rules change: they’re hoping to get 75% Yes votes.

Voting No

The reason we’re in this co-op at all is because we want to directly fund renewable electricity.

Participating in the CfD auction would involve us competing with capitalist energy companies for government subsidies. Subsidies which are supposed to encourage the provision of green electricity.

It seems to us that participating in this auction would remove most of the difference between what we hoped to do by investing in Derril Water, and just participating in the normal consumer electricity market.

In particular, if we do win in the auction, that’s probably directly removing the funding and investment support model for other, market-investor-funded, projects.

In other words, our buying into Derril Water ceases to be an additional green energy project, changing (in its minor way) the UK’s electricity mix. It becomes a financial transaction much more tenuously connected (if connected at all) to helping mitigate the climate emergency.

So our conclusion was that we must vote against.




09 April, 2024 09:38PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Think outside the box • Welcome Eclipse!

Now that we are back from our six month period in Argentina, we decided to adopt a kitten, to bring more diversity into our lives. Perhaps this little girl will teach us to think outside the box!

Yesterday we witnessed a solar eclipse — Mexico City was not in the totality range (we reached ~80%), but it was a great experience to go with the kids. A couple dozen thousand people gathered for a massive picnic in las islas, the main area inside our university campus.

Afterwards, we went briefly back home, then crossed the city to fetch the little kitten. Of course, the kids were unanimous: Her name is Eclipse.

09 April, 2024 04:38PM

hackergotchi for Matthew Palmer

Matthew Palmer

How I Tripped Over the Debian Weak Keys Vulnerability

Those of you who haven’t been in IT for far, far too long might not know that next month will be the 16th(!) anniversary of the disclosure of what was, at the time, a fairly earth-shattering revelation: that for about 18 months, the Debian OpenSSL package was generating entirely predictable private keys.

The recent xz-stential threat (thanks to @nixCraft for making me aware of that one), has got me thinking about my own serendipitous interaction with a major vulnerability. Given that the statute of limitations has (probably) run out, I thought I’d share it as a tale of how “huh, that’s weird” can be a powerful threat-hunting tool – but only if you’ve got the time to keep pulling at the thread.

Prelude to an Adventure

Our story begins back in March 2008. I was working at Engine Yard (EY), a now largely-forgotten Rails-focused hosting company, which pioneered several advances in Rails application deployment. Probably EY’s greatest claim to lasting fame is that they helped launch a little code hosting platform you might have heard of, by providing them free infrastructure when they were little more than a glimmer in the Internet’s eye.

I am, of course, talking about everyone’s favourite Microsoft product: GitHub.

Since GitHub was in the right place, at the right time, with a compelling product offering, they quickly started to gain traction, and grow their userbase. With growth comes challenges, amongst them the one we’re focusing on today: SSH login times. Then, as now, GitHub provided SSH access to the git repos they hosted, by SSHing to git@github.com with publickey authentication. They were using the standard way that everyone manages SSH keys: the ~/.ssh/authorized_keys file, and that became a problem as the number of keys started to grow.

The way that SSH uses this file is that, when a user connects and asks for publickey authentication, SSH opens the ~/.ssh/authorized_keys file and scans all of the keys listed in it, looking for a key which matches the key that the user presented. This linear search is normally not a huge problem, because nobody in their right mind puts more than a few keys in their ~/.ssh/authorized_keys, right?

2008-era GitHub giving monkey puppet side-eye to the idea that nobody stores many keys in an authorized_keys file

Of course, as a popular, rapidly-growing service, GitHub was gaining users at a fair clip, to the point that the one big file that stored all the SSH keys was starting to visibly impact SSH login times. This problem was also not going to get any better by itself. Something Had To Be Done.

EY management was keen on making sure GitHub ran well, and so despite it not really being a hosting problem, they were willing to help fix this problem. For some reason, the late, great, Ezra Zygmuntowitz pointed GitHub in my direction, and let me take the time to really get into the problem with the GitHub team. After examining a variety of different possible solutions, we came to the conclusion that the least-worst option was to patch OpenSSH to lookup keys in a MySQL database, indexed on the key fingerprint.

We didn’t take this decision on a whim – it wasn’t a case of “yeah, sure, let’s just hack around with OpenSSH, what could possibly go wrong?”. We knew it was potentially catastrophic if things went sideways, so you can imagine how much worse the other options available were. Ensuring that this wouldn’t compromise security was a lot of the effort that went into the change. In the end, though, we rolled it out in early April, and lo! SSH logins were fast, and we were pretty sure we wouldn’t have to worry about this problem for a long time to come.
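
The essence of the change, sketched here in Python rather than the actual C patch to OpenSSH (modern SHA256-style fingerprints shown; the real patch looked keys up in a MySQL table rather than an in-memory dict):

import base64
import hashlib

def fingerprint(pubkey_line: str) -> str:
    # "ssh-ed25519 AAAA... comment" -> OpenSSH-style SHA256 fingerprint
    blob = base64.b64decode(pubkey_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# All registered keys, indexed once (placeholder data):
keys_by_fingerprint = {}  # fingerprint -> (user, key line)

def register(user: str, pubkey_line: str) -> None:
    keys_by_fingerprint[fingerprint(pubkey_line)] = (user, pubkey_line)

def lookup(presented_line: str):
    # O(1) indexed lookup instead of a linear scan of authorized_keys
    return keys_by_fingerprint.get(fingerprint(presented_line))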

Normally, you’d think “patching OpenSSH to make mass SSH logins super fast” would be a good story on its own. But no, this is just the opening scene.

Chekov’s Gun Makes its Appearance

Fast forward a little under a month, to the first few days of May 2008. I get a message from one of the GitHub team, saying that somehow users were able to access other users’ repos over SSH. Naturally, as we’d recently rolled out the OpenSSH patch, which touched this very thing, the code I’d written was suspect number one, so I was called in to help.

The lineup scene from the movie The Usual Suspects
They're called The Usual Suspects for a reason, but sometimes, it really is Keyser Söze

Eventually, after more than a little debugging, we discovered that, somehow, there were two users with keys that had the same key fingerprint. This absolutely shouldn’t happen – it’s a bit like winning the lottery twice in a row [1] – unless the users had somehow shared their keys with each other, of course. Still, it was worth investigating, just in case it was a web application bug, so the GitHub team reached out to the users impacted, to try and figure out what was going on.

The users professed no knowledge of each other, neither admitted to publicising their key, and couldn’t offer any explanation as to how the other person could possibly have gotten their key.

Then things went from “weird” to “what the…?”. Because another pair of users showed up, sharing a key fingerprint – but it was a different shared key fingerprint. The odds now have gone from “winning the lottery multiple times in a row” to as close to “this literally cannot happen” as makes no difference.

Milhouse from The Simpsons says that We're Through The Looking Glass Here, People

Once we were really, really confident that the OpenSSH patch wasn’t the cause of the problem, my involvement in the problem basically ended. I wasn’t a GitHub employee, and EY had plenty of other customers who needed my help, so I wasn’t able to stay deeply involved in the on-going investigation of The Mystery of the Duplicate Keys.

However, the GitHub team did keep talking to the users involved, and managed to determine the only apparent common factor was that all the users claimed to be using Debian or Ubuntu systems, which was where their SSH keys would have been generated.

That was as far as the investigation had really gotten, when along came May 13, 2008.

Chekov’s Gun Goes Off

With the publication of DSA-1571-1, everything suddenly became clear. Through a well-meaning but ultimately disastrous cleanup of OpenSSL’s randomness generation code, the Debian maintainer had inadvertently reduced the number of possible keys that could be generated by a given user from “bazillions” to a little over 32,000. With so many people signing up to GitHub – some of them no doubt following best practice and freshly generating a separate key – it’s unsurprising that some collisions occurred.

You can imagine the sense of “oooooooh, so that’s what’s going on!” that rippled out once the issue was understood. I was mostly glad that we had conclusive evidence that my OpenSSH patch wasn’t at fault, little knowing how much more contact I was to have with Debian weak keys in the future, running a huge store of known-compromised keys and using them to find misbehaving Certificate Authorities, amongst other things.

Lessons Learned

While I’ve not found a description of exactly when and how Luciano Bello discovered the vulnerability that became CVE-2008-0166, I presume he first came across it some time before it was disclosed – likely before GitHub tripped over it. The stable Debian release that included the vulnerable code had been released a year earlier, so there was plenty of time for Luciano to have discovered key collisions and go “hmm, I wonder what’s going on here?”, then keep digging until the solution presented itself.

The thought “hmm, that’s odd”, followed by intense investigation, leading to the discovery of a major flaw is also what ultimately brought down the recent XZ backdoor. The critical part of that sequence is the ability to do that intense investigation, though.

When I reflect on my brush with the Debian weak keys vulnerability, what sticks out to me is the fact that I didn’t do the deep investigation. I wonder if Luciano hadn’t found it, how long it might have been before it was found. The GitHub team would have continued investigating, presumably, and perhaps they (or I) would have eventually dug deep enough to find it. But we were all super busy – myself, working support tickets at EY, and GitHub feverishly building features and fighting the fires in their rapidly-growing service.

As it was, Luciano was able to take the time to dig in and find out what was happening, but just like the XZ backdoor, I feel like we, as an industry, got a bit lucky that someone with the skills, time, and energy was on hand at the right time to make a huge difference.

It’s a luxury to be able to take the time to really dig into a problem, and it’s a luxury that most of us rarely have. Perhaps an understated takeaway is that somehow we all need to wrestle back some time to follow our hunches and really dig into the things that make us go “hmm…”.

Support My Hunches

If you’d like to help me be able to do intense investigations of mysterious software phenomena, you can shout me a refreshing beverage on ko-fi.


  1. the odds are actually probably more like winning the lottery about twenty times in a row. The numbers involved are staggeringly huge, so it’s easiest to just approximate it as “really, really unlikely”. 

09 April, 2024 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

April 08, 2024

Bastian Blank

Python dataclasses for Deb822 format

Python includes some helpful support for classes that are designed to just hold some data and not much more: Data Classes. They use plain Python type annotations to specify which fields you can have, plus some further information for every field. This then generates some useful methods for you, like __init__ and __repr__, and more on request. And given that those type definitions are available to other code, a lot more can be done.

There exist several separate packages that build on data classes. For example, you can have data validation from JSON with dacite.

But Debian likes a pretty strange format usually called Deb822, which is in fact derived from the RFC 822 format of e-mail messages. Those files contain messages in a well-known format.

So I'd like to introduce some Deb822 format support for Python Data Classes. For now the code resides in the Debian Cloud tool.

Usage

Setup

It uses the standard data classes support and several helper functions. You also need to enable support for postponed evaluation of annotations.

from __future__ import annotations
from dataclasses import dataclass
from dataclasses_deb822 import read_deb822, field_deb822
from typing import Optional

Class definition start

Data classes are just normal classes, just with a decorator.

@dataclass
class Package:

Field definitions

You need to specify the exact key to be used for this field.

    package: str = field_deb822('Package')
    version: str = field_deb822('Version')
    arch: str = field_deb822('Architecture')

Default values are also supported.

    multi_arch: Optional[str] = field_deb822(
        'Multi-Arch',
        default=None,
    )

Reading files

for p in read_deb822(Package, sys.stdin, ignore_unknown=True):
    print(p)

Full example

from __future__ import annotations
from dataclasses import dataclass
from debian_cloud_images.utils.dataclasses_deb822 import read_deb822, field_deb822
from typing import Optional
import sys


@dataclass
class Package:
    package: str = field_deb822('Package')
    version: str = field_deb822('Version')
    arch: str = field_deb822('Architecture')
    multi_arch: Optional[str] = field_deb822(
        'Multi-Arch',
        default=None,
    )


for p in read_deb822(Package, sys.stdin, ignore_unknown=True):
    print(p)
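
For illustration, and assuming read_deb822 accepts any text file object (as the sys.stdin usage suggests), feeding the class a stanza by hand would look like this:

import io

stanza = io.StringIO("""\
Package: netlabel-tools
Version: 0.30.0-1+b1
Architecture: amd64
""")

for p in read_deb822(Package, stanza, ignore_unknown=True):
    print(p)

# Expected output (the generated dataclass __repr__):
# Package(package='netlabel-tools', version='0.30.0-1+b1',
#         arch='amd64', multi_arch=None)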

Known limitations

08 April, 2024 05:00PM by Bastian Blank

April 07, 2024

Thorsten Alteholz

My Debian Activities in March 2024

FTP master

This month I accepted 147 and rejected 12 packages. The overall number of packages that got accepted was 151.

If you file an RM bug, please do check whether there are reverse dependencies as well and file RM bugs for them. It is annoying and time-consuming when I have to do the moreinfo dance.

Debian LTS

This was my hundred-seventeenth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

During my allocated time I uploaded:

  • [DLA 3770-1] libnet-cidr-lite-perl security update for one CVE to fix IP parsing and ACLs based on the result
  • [#1067544] Bullseye PU bug for libmicrohttpd
  • Unfortunately the XZ incident happened at the end of the month and I intentionally delayed other uploads: they will appear as DLA-3781-1 and DLA-3784-1 in April

I also continued to work on qtbase-opensource-src and last but not least did a week of FD.

Debian ELTS

This month was the sixty-eighth ELTS month. During my allocated time I uploaded:

  • [ELA-1062-1] libnet-cidr-lite-perl security update for one CVE to improve parsing of IP addresses in Jessie and Stretch
  • Due to XZ I also delayed the uploads here. They will appear as ELA-1069-1 and ELA-1070-1 in April

I also continued on an update for qtbase-opensource-src in Stretch (and LTS and other releases as well) and did a week of FD.

Debian Printing

This month I uploaded new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

This month I uploaded a new upstream or bugfix version of:

misc

This month I uploaded new upstream or bugfix versions of:

07 April, 2024 11:56AM by alteholz

April 06, 2024

John Goerzen

Facebook is Censoring Stories about Climate Change and Illegal Raid in Marion, Kansas

It is, sadly, not entirely surprising that Facebook is censoring articles critical of Meta.

The Kansas Reflector published an article about Meta censoring environmental articles about climate change — deeming them “too controversial”.

Facebook then censored the article about Facebook censorship, and then after an independent site published a copy of the climate change article, Facebook censored it too.

The CNN story says Facebook apologized and said it was a mistake and was fixing it.

Color me skeptical, because today I saw this:

Yes, that’s right: today, April 6, I get a notification that they removed a post from August 12. The notification was dated April 4, but only showed up for me today.

I wonder why my post from August 12 was fine for nearly 8 months, and then all of a sudden, when the same website runs an article critical of Facebook, my 8-month-old post is a problem. Hmm.

Riiiiiight. Cybersecurity.

This isn’t even the first time they’ve done this to me.

On September 11, 2021, they removed my post about the social network Mastodon (click that link for screenshot). A post that, incidentally, had been made 10 months prior to being removed.

While they ultimately reversed themselves, I subsequently wrote Facebook’s Blocking Decisions Are Deliberate — Including Their Censorship of Mastodon.

That this same pattern has played out a second time — again with something that is a very slight challenge to Facebook — seems to validate my conclusion. Facebook lets all sorts of hateful garbage infest their site, but anything about climate change — or their own censorship — gets removed, and this pattern persists for years.

There’s a reason I prefer Mastodon these days. You can find me there as @jgoerzen@floss.social.

So. I’ve written this blog post. And then I’m going to post it to Facebook. Let’s see if they try to censor me for a third time. Bring it, Facebook.

06 April, 2024 02:00PM by John Goerzen

hackergotchi for Mirco Bauer

Mirco Bauer

Secure USB boot with Debian

Foreword

The moment you leave your laptop, say in a hotel room, you can no longer trust your system as it could have been modified while you were away, which is also known as the "evil maid" attack. Think you are safe because you have an encrypted disk? Well, if the boot partition is on the laptop itself, it can be manipulated and you will not notice, because the boot partition can't be encrypted. The BIOS needs to access the MBR and boot loader, and those load the Linux kernel, all unencrypted. There have been some reports lately that Linux cryptsetup is insecure because you can spawn a root shell by hitting the enter key for 70 seconds. This is not the real threat to your system. If someone has physical access to your hardware, they can get a root shell in less than a second by passing init=/bin/bash as a parameter to the Linux kernel in the boot loader, regardless of whether cryptsetup is used or not. The attacker can also use other ways, like booting a live system from CD/USB etc. The actual insecurity here is the unencrypted boot partition, not some script that gets executed from it. So how to prevent this physical access attack vector? Just keep reading this guide.

This guide explains how to install Debian securely on your laptop using an external USB boot disk, such as a standard USB memory stick. The disk inside the laptop should not contain your /boot partition, since that is an easy target for manipulation. An attacker could, for example, change the boot scripts inside the initrd image to capture the passphrase of your encrypted volume. With a USB boot partition, you can unplug the USB stick after the operating system has booted. Best practice here is to keep the USB stick with your bunch of keys. That way you will disconnect the USB stick soon after the boot has finished and can put it back into your pocket.

Secure Hardware Assumptions

We have to assume here that the hardware you are using to download and verify the install media is safe to use. The same applies to the hardware where you are doing the fresh Debian install. That is, the hardware does not contain any malware in the form of code in EFI or other manipulation attempts that could influence the behavior of the operating system we are going to install.

Download Debian Install ISO

Feel free to use any Debian mirror and install flavor. For this guide I am using the download mirror in Germany and the DVD install flavor.

wget http://ftp.de.debian.org/debian-cd/current/amd64/iso-dvd/debian-8.6.0-amd64-DVD-1.iso

Verify hashsum of ISO file

To know whether the ISO file was downloaded without modification, we have to check the hashsum of the file. The hashsum file can be found in the same directory as the ISO file on the download mirror. With hashsums, if a single bit of the file differs, the resulting SHA512 sum will be completely different.

Obtain the hashsum file using:

wget http://ftp.de.debian.org/debian-cd/current/amd64/iso-dvd/SHA512SUMS

Calculate a local hashsum from the downloaded ISO file:

sha512sum debian-8.6.0-amd64-DVD-1.iso

Now you need to compare the hashsum with the one in the SHA512SUMS file. Since the SHA512SUMS file contains the hashsums of all files that are in the same directory, you need to find the right one first. grep can do this for you:

grep debian-8.6.0-amd64-DVD-1.iso SHA512SUMS

Both commands executed after each other should show following output:

$ sha512sum debian-8.6.0-amd64-DVD-1.iso
c3883edfc95e3b09152d46ce29a032eed1de71531549aee86bb98dab1528088a16f0b4d628aee8ac6cc420364e208d3d5e19d0dea3576f53b904c18e8f604d8c  debian-8.6.0-amd64-DVD-1.iso
$ grep debian-8.6.0-amd64-DVD-1.iso SHA512SUMS
c3883edfc95e3b09152d46ce29a032eed1de71531549aee86bb98dab1528088a16f0b4d628aee8ac6cc420364e208d3d5e19d0dea3576f53b904c18e8f604d8c  debian-8.6.0-amd64-DVD-1.iso

As you can see, the hashsum found in the SHA512SUMS file matches the locally generated hashsum from the sha512sum command.

At this point we are not finished yet. These two matching hashsums only mean that whatever was on the download server matches what we have received and stored locally on disk. The ISO file and the SHA512SUMS file could still both be modified versions!

And this is where GPG signatures come in, covered in the next section.

Download GPG Signature File

GPG signature files usually have the .sign file name extension but could also be named .asc. Download the signature file using wget:

wget http://ftp.de.debian.org/debian-cd/current/amd64/iso-dvd/SHA512SUMS.sign

Obtain GPG Key of Signer

Letting gpg verify the signature will fail at this point as we don't have the public key of the signer:

$ gpg --verify SHA512SUMS.sign
gpg: assuming signed data in 'SHA512SUMS'
gpg: Signature made Mon 19 Sep 2016 12:23:47 AM HKT
gpg:                using RSA key DA87E80D6294BE9B
gpg: Can't check signature: No public key

Downloading a key is trivial with gpg, but more importantly we need to verify that this key (DA87E80D6294BE9B) is trustworthy, as it could also be a key of the infamous man-in-the-middle.

Here you can find the GPG fingerprints of the official signing keys used by Debian. The ending of the "Key fingerprint" line should match the key id we found in the signature file from above.

gpg:                using RSA key DA87E80D6294BE9B

Key fingerprint = DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B

DA87E80D6294BE9B matches Key fingerprint = DF9B 9C49 EAA9 2984 3258 9D76 DA87 E80D 6294 BE9B

To download and import this key run:

$ gpg --keyserver keyring.debian.org --recv-keys DA87E80D6294BE9B
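
Once the key is imported, you can also have gpg print its full fingerprint locally and compare it character by character with the fingerprint published by Debian:

$ gpg --fingerprint DA87E80D6294BE9B

The printed fingerprint must be DF9B 9C49 EAA9 2984 3258 9D76 DA87 E80D 6294 BE9B; anything else means you imported the wrong key.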

Verify GPG Signature of Hashsum File

OK, we are almost there. Now we can run the command which checks that the hashsum file we have was not modified by anyone and matches what Debian has generated and signed.

$ gpg --verify SHA512SUMS.sign
gpg: assuming signed data in 'SHA512SUMS'
gpg: Signature made Mon 19 Sep 2016 12:23:47 AM HKT
gpg:                using RSA key DA87E80D6294BE9B
gpg: checking the trustdb
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Good signature from "Debian CD signing key <debian-cd@lists.debian.org>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B

The important line in this output is the "Good signature from ..." one. It still shows a warning since we never certified (signed) that Debian key. This can be ignored at this point though.
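
If you want to silence that warning for future checks, you can optionally certify the key yourself with a local, non-exportable signature after verifying its fingerprint:

$ gpg --lsign-key DA87E80D6294BE9B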

Write ISO Image to Install Media

With a verified pristine ISO file we can finally start the install by writing it to a USB stick or blank DVD. Use your favorite tool to write the ISO to your install media and boot from it. I have used dd and a USB stick attached as /dev/sdb.

dd if=debian-8.6.0-amd64-DVD-1.iso of=/dev/sdb bs=1M oflag=sync
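
Optionally, you can verify the write by reading the data back from the stick and comparing it against the ISO file. A minimal sketch, assuming GNU stat and cmp are available; no output means the copy matches:

# compare only the first <ISO size> bytes of the stick against the ISO (run as root)
cmp -n "$(stat -c%s debian-8.6.0-amd64-DVD-1.iso)" debian-8.6.0-amd64-DVD-1.iso /dev/sdb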

Install Debian on Crypted Volume with USB boot partition

I am not explaining each step of the Debian install here. The Debian handbook is a good resource for covering each install step.

Follow the steps until the installer wants to partition your disk.

There you need to select the "Guided, use entire disk and set up encrypted LVM" option. After that, select the built-in disk of your laptop, which usually is sda, but double-check this before you go ahead, as its data will be overwritten! The 137 GB disk in this case is the built-in disk and the 8 GB one is the USB stick.

It makes no difference at this point if you select "All files in one partition" or "Separate /home partition". The USB boot partition can be selected at a later step.

Confirm that you want to overwrite your built-in disk shown as sda. This will take a while, as the installer writes random data to the disk to ensure no unencrypted data is left on it from previous installations, for example.

Now you need to enter the passphrase that will be used to protect the private key of the encrypted volume. Choose something long enough, like a sentence, and don't forget the passphrase, or you will no longer be able to access your data! Don't save the passphrase on any computer, smartphone or password manager. If you want to make a backup of your passphrase, use a ballpoint pen and paper and store the paper backup in a secure location.

The installer will show you a summary of the partitioning as shown above, but we need to make a change for the USB boot disk. At the moment it wants to put /boot on sda, which is the built-in disk, while our USB stick is sdb. Select /boot and hit enter, then select "Delete this partition".

After /boot has been deleted, we can create /boot on the USB stick shown as sdb. Select sdb and hit enter. It will ask if you want to create an empty partition table. Confirm that question with yes.

The partition summary shows sdb with no partitions on it. Select FREE SPACE and select "Create a new partition". Confirm the suggested partition size. Confirm the partition type to be "Primary".

It is time to tell the installer to use this new partition on the USB stick (sdb1) as the /boot partition. Select "Mount point: /home" and in the next dialog select "/boot - static files of the boot loader" as shown below:

Confirm the changes by selecting "Done setting up the partition".

The final partitioning should look now like the following screenshot:

If the partition summary looks good, go ahead with the installation by selecting "Finish partitioning and write changes to disk".

When the installer asks if it should force EFI, then select no, as EFI is not going to protect you.

Finish the installation as usual, select your preferred desktop environment etc.

GRUB Boot Loader

Confirm the dialog that wants to install GRUB to the master boot record. Here it is important to install it to the USB stick and not your built-in SATA/SSD disk! So select sdb (the USB stick) in the next dialog.

First Boot from USB

Once everything is installed, you can boot from your USB stick. As a simple test, you can unplug the USB stick; the boot should then fail with a "no operating system found" or similar error message from the BIOS. If it doesn't boot even though the USB stick is connected, then most likely your BIOS is not configured to boot from USB media. A blank screen with nothing happening also usually means the BIOS can't find a boot device. You need to change the boot settings in your BIOS. As the steps are very different for each BIOS, I can't provide a detailed step-by-step list here.

Usually you can enter the BIOS using F1, F2 or F12 after powering on your computer. In the BIOS there is a menu to configure the boot order. In that list, USB disk/storage should be in the first position. After you have made the changes, save and exit the BIOS. Now it will boot from your USB stick first, GRUB will show up, and the boot process will proceed until it asks for your passphrase to unlock the encrypted volume.

Unmount /boot partition after Boot

If you boot your laptop from the USB stick, you will want to remove the stick after it has finished booting. This prevents an attacker from making modifications to your USB stick. To avoid data loss, we should not simply unplug the USB stick, but unmount /boot first and then unplug the stick. The good news is that we can automate this unmounting, so you just need to unplug the stick after the laptop has booted to your login screen.

Just add this line to your /etc/rc.local file:

umount /boot
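
For reference, a complete minimal /etc/rc.local could look like the sketch below. Two assumptions on my part: the file must be executable, and your init system must still run rc.local at the end of the boot (on systemd this is handled by the rc-local compatibility unit):

#!/bin/sh -e
#
# rc.local - executed at the end of the boot process
#
# unmount /boot so the USB stick can be unplugged without data loss
umount /boot

exit 0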

After a boot you can verify that /boot was automatically unmounted for you by running:

mount | grep /boot

If that command produces no output, then /boot is not mounted and you can safely unplug the USB stick.

Final Words

From time to time you will of course need to upgrade your Linux kernel, which lives on the /boot partition. This can still be done the regular way using apt-get upgrade, except that you need to mount /boot before the upgrade and unmount it again afterwards.
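
As a rough sketch, assuming /boot is listed in /etc/fstab as set up by the installer, a kernel upgrade then looks like this:

# plug the USB stick back in first
mount /boot
apt-get update
apt-get upgrade
# the new kernel and initrd now live on the stick again
umount /boot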

Enjoy your secured laptop. Now you can leave it in a hotel room without the possibility of someone obtaining your passphrase by putting a keylogger in your boot partition. All the attacker will see is a fully encrypted hard disk. If they try to mess with your encrypted disk, you will notice, as the decryption will fail.

Disclaimer: there are still other attack vectors possible, but they are much harder to do. Your hardware or BIOS can still be modified. But not by holding down the enter key for 70 seconds or by booting a live system.

06 April, 2024 06:26AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Trying to explain analogue clock.

Trying to explain an analogue clock. It's hard to explain. I tried adding some things for affordance, and it is still not enough: it's not obvious which hand is the hour and which hand is the minute.

06 April, 2024 03:37AM by Junichi Uekawa

April 05, 2024

Paul Wise

FLOSS Activities March 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Administration

  • Debian wiki: approve accounts

Communication

  • Respond to queries from Debian users and contributors on the mailing lists and IRC

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

05 April, 2024 11:28PM

hackergotchi for Bits from Debian

Bits from Debian

apt install dpl-candidate: Sruthi Chandran

The Debian Project Developers will shortly vote for a new Debian Project Leader known as the DPL.

The DPL is the official representative of The Debian Project, tasked with managing the overall project, its vision, direction, and finances.

The DPL is also responsible for the selection of Delegates, defining areas of responsibility within the project, the coordination of Developers, and making decisions required for the project.

Our outgoing and present DPL Jonathan Carter served 4 terms, from 2020 through 2024. Jonathan shared his last Bits from the DPL post to Debian recently and his hopes for the future of Debian.

Recently, we sat down with the two current candidates for the DPL position, asking questions to find out who they really are, in a series of interviews about their platforms, visions for Debian, lives, and even their favorite text editors. The interviews were conducted by disaster2life (Yashraj Moghe) and are made available as transcriptions of the video and audio recordings:

  • Andreas Tille [Interview]
  • Sruthi Chandran [this document]

Voting for the position starts on April 6, 2024.

Editors' note: This is our official return to Debian interviews, readers should stay tuned for more upcoming interviews with Developers and other important figures in Debian as part of our "Meet your Debian Developer" series. We used the following tools and services: Turboscribe.ai for the transcription from the audio and video files, IRC: Oftc.net for communication, Jitsi meet for interviews, and Open Broadcaster Software (OBS) for editing and video. While we encountered many technical difficulties in the return to this process, we are still able and proud to present the transcripts of the interviews edited only in a few areas for readability.

2024 Debian Project Leader Candidate: Sruthi Chandran

Sruthi's interview

Hi Sruthi, so for the first question, who are you and could you tell us a little bit about yourself?

[Sruthi]:

Whenever I am answering the question of who I am, I usually say, like, I am a librarian turned free software enthusiast and a Debian Developer. So I had no technical background; I was introduced to free software through my husband, then I learned Debian packaging, and eventually I became a Debian Developer. So I always give my example to people who say "I am not technically inclined, I don't have a technical background, so I can't contribute to free software."

So yeah, that's what I refer to myself.

For the next question, could you tell me what do you do in Debian, and could you mention your story up until here today?

[Sruthi]:

Okay, so let me start from my initial days in Debian. I started contributing to Debian, my first contribution was a Tibetan font. We went to a Tibetan place and they were saying they didn't have a font in Linux.

So that's how I started contributing. Then I moved on to Ruby packages, then I have some JavaScript and Go packages, all dependencies of GitLab. So I was involved with maintaining GitLab for some time, now I'm not very active there.

But yeah, so GitLab was the main package I was contributing to since I contributed since 2016 to maybe like 2020 or something. Later I have come [over to] packaging. Now I am part of some of the teams, delegated teams, like community team and outreach team, as well as the Debconf committee. And the biggest, I think, my activity in Debian, I would say is organizing Debconf 2023. So it was a great experience and yeah, so that's my story in Debian.

So what are three key terms about you and your candidacy?

[Sruthi]:

Okay, let me first think about it. For candidacy, I can start with diversity is one point I started expressing from the first time I contested for DPL. But to be honest, that's the main point I want to bring.

[Yashraj]:

So for diversity, if you could break down your thoughts on diversity and make them, [about] your three points including diversity.

[Sruthi]:

So in addition to, eventually when starting it was just diversity. Now I have like a bit more ideas, like community, like I want to be a leader for the Debian community. More than, I don't know, maybe people may not agree, but I would say I want to be a leader of Debian community rather than a Debian operating system.

I connect to community more and third point I would say.

The term of a DPL lasts for a year. So what would you try to do during that term that you can't do from your position now?

[Sruthi]:

Okay. So I, like, I am very happy with the structure of Debian and how things work in Debian. Like you can do almost a lot of things, like almost all things without being a DPL.

Whatever change you want to bring about or whatever you want to do, you can do without being a DPL. Anyone, like every DD has the same rights. Only things I feel [the] DPL has hold on are mainly the budget or the funding part, which like, that's where they do the decision making part.

And then comes like, and one advantage of DPL driving some idea is that somehow people tend to listen to that with more, like, tend to give more attention to what DPL is saying rather than a normal DD. So I wanted to, like, I have answered some of the questions on how to, how I plan to do the financial budgeting part, how I want to handle, like, and the other thing is using the extra attention that I get as a DPL, I would like to obviously start with the diversity aspect in Debian. And yeah, like, I, what I want to do is not, like, be a leader and say, like, take Debian to one direction where I want to go, but I would rather take suggestions and inputs from the whole community and go about with that.

So yes, that's what I would say.

And taking a less serious question now, what is your preferred text editor?

[Sruthi]:

Vim.

[Yashraj]:

Vim, wholeheartedly team Vim?

[Sruthi]:

Yes.

[Yashraj]:

Great. Well, this was made in Vim, all the text for this.

[Sruthi]:

So, like, since you mentioned extra data, I'll give my example, like, it's just a fun note, when I started contributing to Debian, as I mentioned, I didn't have any knowledge about free software, like Debian, and I was not used to even using Linux. So, and I didn't have experience with these text editors. So, when I started contributing, I used to do the editing part using gedit.

So, that's how I started. Eventually, I moved to Nano, and once I reached Vim, I didn't move on.

Team Vim. Next question. What, what do you think is the importance of the Debian project in the world today? And where would you like to see it in 10 years, like 10 years into the future?

[Sruthi]:

Okay. So, Debian, as we all know, is referred to as the universal operating system without, like, it is said for a reason. We have hundreds and hundreds of operating systems, like Linux, distributions based on Debian.

So, I believe Debian, like even now, Debian has good influence on the, at least on the Linux or Linux ecosystem. So, what we implement in Debian has, like, is going to affect quite a lot of, like, a very good percentage of people using Linux. So, yes.

So, I think Debian is one of the leading Linux distributions. And I think in 10 years, we should be able to reach a position, like, where we are not, like, even now, like, even these many years after having Linux, we face a lot of problems in newer and newer hardware coming up and installing on them is a big problem. Like, firmwares and all those things are getting more and more complicated.

Like, it should be getting simpler, but it's getting more and more complicated. So, I, one thing I would imagine, like, I don't know if we will ever reach there, but I would imagine that eventually with the Debian, we should be able to have some, at least a few of the hardware developers or hardware producers have Debian pre-installed and those kind of things. Like, not, like, become, I'm not saying it's all, it's also available right now.

What I'm saying is that it becomes prominent enough to be opted as, like, default distro.

What part of Debian has made you stay? And what part of the project has kept you going all through these years?

[Sruthi]:

Okay. So, I started to contribute in 2016, and I was part of the team doing GitLab packaging, and we did have a lot of training workshops and those kind of things within India. And I was, like, I had interacted with some of the Indian DDs, but I never got, like, even through chat or mail.

I didn't have a lot of interaction with the rest of the world, DDs. And the 2019 Debconf changed my whole perspective about Debian. Before that, I wasn't, like, even, I was interested in free software.

I was doing the technical stuff and all. But after DebConf, my whole idea has been, like, my focus changed to the community. Debian community is a very welcoming, very interesting community to be with.

And so, I believe that, like, the 2019 DebConf was a turning point for me. From 2019, my focus has been on how to support the community; I moved to the community part of Debian from there. Then in 2020 I became part of the community team, and, like, I started being part of other teams.

So, these, I would say, the Debian community is the one, like, aspect of Debian that keeps me whole, keeps me held on to the Debian ecosystem as a whole.

Continuing to speak about Debian, what do you think, what is the first thing that comes to your mind when you think of Debian, like, the word, the community, what's the first thing?

[Sruthi]:

I think I may sound like a broken record or something.

[Yashraj]:

No, no.

[Sruthi]:

Again, I would say the Debian community, like, it's the people who makes Debian, that makes Debian special.

Like, apart from that, if I say, I would say I'm very, like, one part of Debian that makes me very happy is the, how the governing system of Debian works, the Debian constitution and all those things, like, it's a very unique thing for Debian. And, and it's like, when people say you can't work without a proper, like, establishment or even somebody deciding everything for you, it's difficult. When people say, like, we have been, Debian has been proving it for quite a long time now, that it's possible.

So, so that's one thing I believe, like, that's one unique point. And I am very proud about that.

What areas do you think Debian is failing in, and how can that be improved?

[Sruthi]:

So, I think where Debian is failing now is getting new people into Debian. Like, I don't remember, like, exactly the answer. But I remember hearing someone mention, like, the average age of a Debian Developer is, like, above 40 or 45 or something, like, exact age, I don't remember.

But it's like, Debian is getting old. Like, the people in Debian are getting old and we are not getting enough of new people into Debian. And that's very important to have people, like, new people coming up.

Otherwise, eventually, like, after a few years, nobody, like, we won't have enough people to take the project forward. So, yeah, I believe that is where we need to work on. We are doing some efforts, like, being part of GSOC or outreachy and having maybe other events, like, local events. Like, we used to have a lot of Debian packaging workshops in India. And those kind of, I think, in Brazil and all, they all have, like, local communities are doing. But we are not very successful in retaining the people who maybe come and try out things.

But we are not very good at retaining the people, like, retaining people who come. So, we need to work on those things. Right now, I don't have a solid answer for that.

But one thing, like, I was thinking about is, like, having a Debian specific outreach project, wherein the focus will be about the Debian, like, starting will be more on, like, usually what happens in GSOC and outreach is that people come, have the, do the contributions, and they go back. Like, they don't have that connection with the Debian, like, Debian community or Debian project. So, what I envision with these, the Debian outreach, the Debian specific outreach is that we have some part of the internship, like, even before starting the internship, we have some sessions and, like, with the people in Debian having, like, getting them introduced to the Debian philosophy and Debian community and Debian, how Debian works.

And those things, we focus on that. And then we move on to the technical internship parts. So, I believe this could do some good in having, like, when you have people you can connect to, you tend to stay back in a project mode.

When you feel something more than, like, right now, we have so many technical stuff to do, like, the choice for a college student is endless. So, if they want, if they stay back for something, like, maybe for Debian, I would say, we need to have them connected to the Debian project before we go into technical parts. Like, technical parts, like, there are other things as well, where they can go and do the technical part, but, like, they can come here, like, yeah.

So, that's what I was saying. Focused outreach projects is one thing. That's just one.

That's not enough. We need more of, like, more ideas to have more new people come up. And I'm very happy with, like, the DebConf thing. We tend to get more and more people from the places where we have a DebConf. Brazil is an example. After the Debconf, they have quite a good improvement on Debian contributors.

And I think in India also, it did give a good result. Like, we have more people contributing and staying back and those things. So, yeah.

So, these were the things I would say, like, we can do to improve.

For the final question, what field in free software do you, what field in free software generally do you think requires the most work to be put into it? What do you think is Debian's part in that field?

[Sruthi]:

Okay. Like, right now, what comes to my mind is the free software licenses parts. Like, we have a lot of free software licenses, and there are non-free software licenses.

But currently, I feel free software is having a big problem in enforcing these licenses. Like, there are, there may be big corporations or like some people who take up the whole, the code and may not follow the whole, for example, the GPL licenses. Like, we don't know how much of those, how much of the free softwares are used in the bigger things.

Yeah, I agree. There are a lot of corporations who are afraid to touch free software. But there would be good amount of free software, free work that converts into property, things violating the free software licenses and those things.

And we do not have the kind of like, we have SFLC, SFC, etc. But still, we do not have the ability to go behind and trace and implement the licenses. So, enforce those licenses and bring people who are violating the licenses forward and those kind of things is challenging because one thing is it takes time, like, and most importantly, money is required for the legal stuff.

And not always people who like people who make small software, or maybe big, but they may not have the kind of time and money to have these things enforced. So, that's a big challenge free software is facing, especially in our current scenario. I feel we are having those, like, we need to find ways how we can get it sorted.

I don't have an answer right now what to do. But this is a challenge I felt like and Debian's part in that. Yeah, as I said, I don't have a solution for that.

But the Debian, so DFSG and Debian sticking on to the free software licenses is a good support, I think.

So, that was the final question. Do you have anything else you want to mention for anyone watching this?

[Sruthi]:

Not really, like, I am happy, like, I think I was able to answer the questions. And yeah, I would say who is watching. I won't say like, I'm the best DPL candidate, you can't have a better one or something.

I stand for a reason. And if you believe in that, or the Debian community and Debian diversity, and those kinds of things, if you believe it, I hope you would be interested, like, you would want to vote for me. That's it.

Like, I'm not, I'll make it very clear. I'm not doing a technical leadership part here. So, those, I can't convince people who want technical leadership to vote for me.

But I would say people who connect with me, I hope they vote for me.

05 April, 2024 06:36PM by Yashraj Moghe with The Debian Publicity Team

April 04, 2024

John Goerzen

The xz Issue Isn’t About Open Source

You’ve probably heard of the recent backdoor in xz. There have been a lot of takes on this, most of them boiling down to some version of:

The problem here is with Open Source Software.

I want to say not only is that view so myopic that it pushes towards the incorrect, but also it blinds us to more serious problems.

Now, I don’t pretend that there are no problems in the FLOSS community. There have been various pieces written about what this issue says about the FLOSS community (usually without actionable solutions). I’m not here to say those pieces are wrong. Just that there’s a bigger picture.

So with this xz issue, it may well be a state actor (aka “spy”) that added this malicious code to xz. We also know that proprietary software and systems can be vulnerable. For instance, a Twitter whistleblower revealed that Twitter employed Indian and Chinese spies, some knowingly. A recent report pointed to security lapses at Microsoft, including “preventable” lapses in security. According to the Wikipedia article on the SolarWinds attack, it was facilitated by various kinds of carelessness, including passwords being posted to Github and weak default passwords. They directly distributed malware-infested updates, encouraged customers to disable anti-malware tools when installing SolarWinds products, and so forth.

It would be naive indeed to assume that there aren’t black hat actors among the legions of programmers employed by companies that outsource work to low-cost countries — some of which have challenges with bribery.

So, given all this, we can’t really say the problem is Open Source. Maybe it’s more broad:

The problem here is with software.

Maybe that inches us closer, but is it really accurate? We have all heard of Boeing’s recent issues, which seem to have some element of root causes in corporate carelessness, cost-cutting, and outsourcing. That sounds rather similar to the SolarWinds issue, doesn’t it?

Well then, the problem is capitalism.

Maybe it has a role to play, but isn't it a little too easy to just say “capitalism” and throw up our hands helplessly, just as some do with FLOSS at the start of this article? After all, capitalism also brought us plenty of products of very high quality over the years. We can point to successful, non-careless products (I own some of them; for instance, my Framework laptop), so we clearly haven't reached the root cause yet.

And besides, what would you replace it with? All the major alternatives that have been tried have even stronger downsides. Maybe you replace it with “better regulated capitalism”, but that’s still capitalism.

Then the problem must be with consumers.

As this argument would go, it’s consumers’ buying patterns that drive problems. Buyers — individual and corporate — seek flashy features and low cost, prizing those over quality and security.

No doubt this is true in a lot of cases. Maybe greed or status-conscious societies foster it: Temu promises people to “shop like a billionaire”, and unloads on them cheap junk, which “all but guarantees that shipments from Temu containing products made with forced labor are entering the United States on a regular basis“.

But consumers are also people, and some fraction of them are quite capable of writing fantastic software, and in fact, do so.

So what we need is some way to seize control. Some way to do what is right, despite the pressures of consumers or corporations.

Ah yes, dear reader, you have been slogging through all these paragraphs and now realize I have been leading you to this:

Then the solution is Open Source.

Indeed. Faults and all, FLOSS is the most successful movement I know where people are bringing us back to the commons: working and volunteering for the common good, unleashing a thousand creative variants on a theme, iterating in every direction imaginable. We have FLOSS being vital parts of everything from $30 Raspberry Pis to space missions. It is bringing education and communication to impoverished parts of the world. It lets everyone write and release software. And, unlike the SolarWinds and Twitter issues, it exposes both clever solutions and security flaws to the world.

If an authentication process in Windows got slower, we would all shrug and mutter “Microsoft” under our breath. Because, really, what else can we do? We have no agency with Windows.

If an authentication process in Linux gets slower, anybody that’s interested — anybody at all — can dive in and ask “why” and trace it down to root causes.

Some look at this and say “FLOSS is responsible for this mess.” I look at it and say, “this would be so much worse if it wasn’t FLOSS” — and experience backs me up on this.

FLOSS doesn’t prevent security issues itself.

What it does do is give capabilities to us all. The ability to investigate. Ability to fix. Yes, even the ability to break — and its cousin, the power to learn.

And, most rewarding, the ability to contribute.

04 April, 2024 10:07PM by John Goerzen

April 03, 2024

hackergotchi for Guido Günther

Guido Günther

Free Software Activities March 2024

A short status update of what happened on my side last month. I spent quite a bit of time reviewing new code (thanks!) as well as on maintenance to keep things going, but we also have some improvements:

Phosh

Phoc

phosh-mobile-settings

phosh-osk-stub

gmobile

Livi

squeekboard

GNOME calls

Libsoup

If you want to support my work see donations.

03 April, 2024 10:12AM