
Mise en Scene

I recently spent a couple days moving from one type of universally unique identifier to a different one, for an in-progress database-backed web-app. The initial work didn't take very long, but debugging the serialization and deserialization of the new IDs took another day and a half, and in the end, the alleged mystery of why it wasn't working was a red herring due to my own stupidity. So come with me on an exciting voyage of discovery, and once again, learn from my folly!

Keys, primarily

Most large distributed programs that people interact with daily via HTTP are, in essence, a fancy facade for some kind of database. Facebook? That's a database. Gmail? That's a database.


wikipedia? that's a database.

In most databases, each entry ("row") has a field that acts as a primary key, used to uniquely identify that row inside the table it's in. Since primary keys have to be unique only within their own table, a simple integer that's automatically incremented every time you add a new record will do, and many databases, if you create a table without specifying a primary key, will implicitly use a mechanism like that. You may also recognize these sorts of IDs as "serial numbers".

This is often totally fine! If you only ever have one copy of the database, and never have to worry about inserting rows from a different instance of the database, then you can just use those simple values and move on your merry way.

However, if you ever think you might want to have multiple instances of your database running, and want to make sure they're eventually consistent with each other, then you might want to use a fancier identifier for your primary keys, to avoid collisions between them.


A popular type of fancy key is the v4 UUID. These are 128-bit random numbers1, and when turned into a string, usually look something like 1c20104f-e04f-409e-9ad3-94455e5f4fea; this is called the "hyphenated" form, for fairly obvious reasons. Although sometimes they're stored in a DB in that form directly, that uses 36 bytes to store 16 bytes' worth of data, which is more than twice as many bytes as necessary. And if you're a programmer, this sort of conspicuous waste is unconscionable.

You can cut that to 32 bytes by just dropping the dashes, but then that's still twice as many bytes as the actual data requires. If you never have to actually display the ID inside the database, then the simplest thing to do is just store it as a blob of 16 bytes2. Finally, optimal representation and efficiency!
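The byte math is easy to sanity-check in plain Rust; here's a standalone sketch using the example ID from above (no UUID library needed, and the helper names are my own):

```rust
// strip the dashes from a hyphenated UUID string
fn strip_hyphens(s: &str) -> String {
    s.chars().filter(|c| *c != '-').collect()
}

// decode a hex string into raw bytes: two hex digits per byte
fn hex_to_bytes(hex: &str) -> Vec<u8> {
    (0..hex.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&hex[i..i + 2], 16).unwrap())
        .collect()
}

fn main() {
    let hyphenated = "1c20104f-e04f-409e-9ad3-94455e5f4fea";
    let hex = strip_hyphens(hyphenated);
    let raw = hex_to_bytes(&hex);
    // 36 bytes hyphenated, 32 without the dashes, 16 raw
    println!("{} vs {} vs {}", hyphenated.len(), hex.len(), raw.len());
}
```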


And at first, that's what I did. The external library I'm using to interface with my database automatically writes UUIDs as a sequence of sixteen bytes, if you specify the type in the database3 as "blob", which I did.

But then I saw a blog post where the following tidbit was mentioned:

We prefer using an Universally Unique Lexicographically Sortable Identifier (ULID) for these idempotency keys instead of a random version 4 UUID. ULIDs contain a 48-bit timestamp followed by 80 bits of random data. The timestamp allows ULIDs to be sorted, which works much better with the b-tree data structure databases use for indexing. In one high-throughput system at Shopify we’ve seen a 50 percent decrease in INSERT statement duration by switching from UUIDv4 to ULID for idempotency keys.

Whoa, that sounds great! But this youtube video tempered my expectations a bit, by describing the implementation-dependent reasons for that dramatic improvement. Still, switching from UUIDs to ULIDs couldn't hurt4, right? Plus, by encoding the time of creation (at least to the nearest millisecond), I could remove a "created at" field from every table that used them as primary keys. Which, in my case, would be all of them, and I'm worried less about the speed of inserts than I am about keeping total on-disk size down anyway.
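For reference, here's the ULID bit layout from that quote sketched in plain Rust. These are hypothetical helper names for illustration; the real ulid crate provides all of this:

```rust
// a ULID is one u128: a 48-bit millisecond timestamp followed by 80 random bits
fn make_ulid(timestamp_ms: u64, random: u128) -> u128 {
    ((timestamp_ms as u128) << 80) | (random & ((1u128 << 80) - 1))
}

// the creation time can be recovered from the top 48 bits,
// which is what lets you drop a separate "created at" column
fn timestamp_ms(ulid: u128) -> u64 {
    (ulid >> 80) as u64
}

fn main() {
    let ts = 1_700_000_000_000u64; // some millisecond Unix time
    let id = make_ulid(ts, 0xDEAD_BEEF);
    assert_eq!(timestamp_ms(id), ts);
    // an ID minted even one millisecond later always compares greater,
    // no matter what the random bits are
    assert!(make_ulid(ts + 1, 0) > id);
    println!("ok");
}
```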

I was actually already familiar with the idea of using time-based sortable IDs, from KSUIDs. It's an attractive concept to me, and I'd considered using them from the get-go, but discarded that for two main reasons:

  • they're FOUR WHOLE BYTES!!! larger than UUIDs
  • I'd have to manually implement serialization/deserialization, since SQLx doesn't have native support for them

In reality, neither of those are real show-stoppers; 20 vs. 16 bytes is probably not that significant, and I'd have to do the manual serialization stuff for anything besides a less-than-8-bytes number or a normal UUID. Still, four bytes is four bytes, and all other things being equal, I'd rather go for the trimmer, 64-bit-aligned value.

Finally, I'd recently finished with adding some ability to actually interact with data in a meaningful way, and to add new records to the database, which meant that it was now or never for standardizing on a type for the primary keys. I was ready to do this thing.

Serial problems

"Deserilization" is the act of converting a static, non-native representation of some kind of datatype into a dynamic, native computer programming object, so that you can do the right computer programming stuff to it. It can be as simple as when a program reads in a string of digit characters and parses it into a real number, but of course the ceiling on complexity is limitless.

In my case, it was about getting those sixteen bytes out of the database and turning them into ULIDs. Technically, I could have let Rust handle that for me by automatically deriving that functionality. There were a couple snags with that course, though:

  • the default serialized representation of a ULID in the library I was using to provide them is as 26-character strings, and I wanted to use only 16 bytes in the database
  • you could tell it to serialize as a 128-bit number, but that merely kicked the problem one step down the road since SQLite can only handle up to 64-bit numbers, as previously discussed, so I'd still have to manually do something for them

This meant going all-in on fully custom serialization and deserialization, something I'd never done before, but how hard could it be? (spoiler: actually not that hard!)

Great coders steal

Something I appreciate about the Rust programming language is that because of the way the compiler works5, the full source code almost always has to be available to you, the end-user coder. The culture around it is also very biased toward open source, and so all the extremely useful libraries are just sitting there, ready to be studied and copied. So the first thing I did was take a look at how SQLx handled UUIDs:

impl Type<Sqlite> for Uuid {
    fn type_info() -> SqliteTypeInfo {
        SqliteTypeInfo(DataType::Blob)
    }

    fn compatible(ty: &SqliteTypeInfo) -> bool {
        matches!(ty.0, DataType::Blob | DataType::Text)
    }
}

impl<'q> Encode<'q, Sqlite> for Uuid {
    fn encode_by_ref(&self, args: &mut Vec<SqliteArgumentValue<'q>>) -> IsNull {
        args.push(SqliteArgumentValue::Blob(Cow::Owned(self.as_bytes().to_vec())));
        IsNull::No
    }
}

impl Decode<'_, Sqlite> for Uuid {
    fn decode(value: SqliteValueRef<'_>) -> Result<Self, BoxDynError> {
        // construct a Uuid from the returned bytes
        Uuid::from_slice(value.blob()).map_err(Into::into)
    }
}
There's not a ton going on there, as you can see. To "encode" it just gets the bytes out of the UUID, and to "decode" it just gets the bytes out of the database. I couldn't use that exactly as done by the SQLx authors, as they were using datatypes that were private to their crate, but it was close enough; here's mine:

impl sqlx::Type<sqlx::Sqlite> for DbId {
    fn type_info() -> <sqlx::Sqlite as sqlx::Database>::TypeInfo {
        <&[u8] as sqlx::Type<sqlx::Sqlite>>::type_info()
    }
}

impl<'q> Encode<'q, Sqlite> for DbId {
    fn encode_by_ref(&self, args: &mut Vec<SqliteArgumentValue<'q>>) -> IsNull {
        // bytes() returns the ID as a sixteen-byte array
        args.push(SqliteArgumentValue::Blob(Cow::Owned(self.bytes().to_vec())));
        IsNull::No
    }
}

impl Decode<'_, Sqlite> for DbId {
    fn decode(value: SqliteValueRef<'_>) -> Result<Self, sqlx::error::BoxDynError> {
        let bytes = <&[u8] as Decode<Sqlite>>::decode(value)?;
        let bytes: [u8; 16] = bytes.try_into().unwrap_or_default();
        // rebuild the DbId from the raw bytes, via a From<u128> impl on the
        // wrapper; note the byte-order choice here, it becomes important later
        Ok(u128::from_ne_bytes(bytes).into())
    }
}

(In order to implement the required methods from SQLx, I had to wrap the ULID in a new, custom type, which I called DbId, to comply with the orphan rules.)
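To illustrate that newtype arrangement, here's a sketch with a stand-in Ulid type (the real one comes from the ulid crate) and a hypothetical From<u128> impl:

```rust
// stand-in for the external ULID type; the real one lives in the ulid crate
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
struct Ulid(u128);

// The newtype wrapper. Both Ulid and the SQLx traits are foreign to my
// crate, so the orphan rules forbid `impl sqlx::Type<Sqlite> for Ulid`
// directly; wrapping Ulid in a local type makes those impls legal.
struct DbId(Ulid);

// a convenience for constructing a DbId from the raw 128-bit value
impl From<u128> for DbId {
    fn from(n: u128) -> Self {
        DbId(Ulid(n))
    }
}

fn main() {
    let id: DbId = 42u128.into();
    assert_eq!(id.0, Ulid(42));
    println!("ok");
}
```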

That's only half the story, though. If all I had to worry about was getting data in and out of the database, that would be fine, but because I'm building a web app, I need to be able to include my new ID type in messages sent over a network or as part of a web page, and for that, it needed to implement some functionality from a different library, called Serde. My original implementation for deserializing looked like this:

struct DbIdVisitor;

impl<'de> Visitor<'de> for DbIdVisitor {
    type Value = DbId;

    // make a DbId from a slice of bytes
    fn visit_bytes<E>(self, v: &[u8]) -> Result<Self::Value, E>
    where
        E: serde::de::Error,
    { ... }

    // make a DbId from a Vec of bytes
    fn visit_byte_buf<E>(self, v: Vec<u8>) -> Result<Self::Value, E>
    where
        E: serde::de::Error,
    { ... }

    // you get the picture
    fn visit_string() ...
    fn visit_u128() ...
    fn visit_i128() ...
}
In my mind, the only important pieces were the visit_bytes() and visit_byte_buf() methods, which worked basically the same as the decode() function for SQLx. I mean, as far as I could tell, the only time something would be encountering a serialized DbId would be in the form of raw bytes from the database; no one else would be trying to serialize one as something else that I didn't anticipate, right?



A puzzling failure

As soon as my code compiled, I ran my tests. Everything passed... except for one, that tested logging in.

This was very strange. All the other tests were passing, and basically every operation requires getting one of these IDs into or out of the database. But at this point, it was late, and I set it down until the next day.

When in doubt, change many things at once

The next day I sat back down to get to work, and in the course of examining what was going on, I realized that I'd missed something crucial: these things were supposed to be sortable. But the way I was inserting them meant that they weren't, because of endianness.

More like shmexicographic, amirite

"ULID" stands for "Universally Unique Lexicographically Sortable Identifier"6. "Lexicographic order" basically means, "like alphabetical, but for anything with a defined total order". Numbers have a defined total order; bigger numbers always go after smaller.

But sometimes numbers get sorted out of order, if they're not treated as numbers. Like say you had a directory with twelve files in it, called "1.txt" up through "12.txt". If you were to ask to see them listed out in lexicographic order, it would go like:

$ ls
10.txt  11.txt  12.txt  1.txt  2.txt  3.txt  4.txt  5.txt  6.txt  7.txt  8.txt  9.txt

This is because '1' sorts "less than" '2', so every name starting with '1' comes before "2.txt" (and under the locale-aware collation ls uses by default, the '.' is effectively skipped during comparison, which is why "10.txt" even lands before "1.txt"). The solution, as all data-entering people know, is to pad the number with leading '0's:

$ ls
01.txt  02.txt  03.txt  04.txt  05.txt  06.txt  07.txt  08.txt  09.txt  10.txt  11.txt  12.txt

Now the names are lexicographically sorted in the right numerical order7.
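Rust's built-in string sort is plain byte-wise comparison, with none of ls's locale collation games, but it shows the same padding effect. A small sketch:

```rust
// generate "1.txt".."12.txt" (or zero-padded "01.txt".."12.txt") and sort them
fn sorted_names(pad: bool) -> Vec<String> {
    let mut names: Vec<String> = (1..=12)
        .map(|n| if pad { format!("{n:02}.txt") } else { format!("{n}.txt") })
        .collect();
    names.sort(); // byte-wise lexicographic order
    names
}

fn main() {
    // unpadded: "10.txt".."12.txt" all sort before "2.txt"
    let unpadded = sorted_names(false);
    assert_eq!(unpadded[4], "2.txt");

    // zero-padded: lexicographic order now matches numeric order
    let padded = sorted_names(true);
    assert_eq!(padded.first().unwrap(), "01.txt");
    assert_eq!(padded.last().unwrap(), "12.txt");
    println!("ok");
}
```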

So, now that we're all expert lexicographicographers, we understand that our IDs are just supposed to naturally sort themselves in the correct order, based on when they were created; IDs created later should sort after IDs created earlier.

The implementation for my ULIDs only guaranteed this property for the string form of them, but I was not storing them in string form. Fundamentally, a ULID is a simple 128-bit primitive integer, capable of holding values between 0 and 340,282,366,920,938,463,463,374,607,431,768,211,455.

But there's a problem: I was storing the ID in the database as a sequence of 16 bytes, and I was asking for those bytes in "native endian" order, which on my machine meant "little endian". If you're not familiar with endianness, there are two varieties: big and little. "Big" makes the most sense to a lot of people: in a number like "512", the left-most digit is the "big end", meaning it's the most significant digit. This is the same as what westerners think of as "normal" numbers. In "512", the most significant digit is 5, which corresponds to 500; add the next-most-significant digit, 1, corresponding to 10; then add the least significant digit, 2, which is just 2, and you get the full number 512.

If we put the least significant digit first, we'd write 512 as "215" and 521 as "125"; the order when written out would be reversed. A lexicographic sort would then put "125" (that is, 521) before "215" (512), which is backwards.

Little-endianness is like that. If a multibyte numeric value is on a little-endian system, the least-significant bytes will come first, and a lexicographic sorting of those bytes would be non-numeric.
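You can watch this happen with Rust's built-in byte-order conversions:

```rust
fn main() {
    let small: u128 = 1;
    let big: u128 = 256;

    // big-endian byte strings sort in the same order as the numbers…
    assert!(small.to_be_bytes() < big.to_be_bytes());

    // …but little-endian byte strings can sort backwards: 1 is [1, 0, …]
    // while 256 is [0, 1, …], so 256 lexicographically precedes 1
    assert!(small.to_le_bytes() > big.to_le_bytes());
    println!("ok");
}
```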

The solution, though, is simple: just write them out in big-endian order! This was literally a one-line change in the code, to switch from to_ne_bytes() ("ne" for "native endian") to to_be_bytes(). I confirmed that the bytes written into the database were in the correct lexicographic order:

sqlite> select hex(id), username from users order by id asc;

Note the first six characters are the same, for these two users created some time apart8.

Boom. "Sorted".

The actual problem

Except that the logins were still broken; it wasn't just the test. What was even stranger is that with advanced debugging techniques9, I confirmed that the login was working. By which I mean, when the user submitted a login request, the function that handled the request was:

  • correctly confirming password match
  • retrieving the user from the database

The second thing was required for the first. It was even creating a session in the session table:

sqlite> select * from async_sessions;

I noticed that the ID was present in the session entry, but as what looked like an array of decimal values. The less not-astute among you may have noticed that the session table seemed to be using JSON to store information. This wasn't my code, but it was easy enough to find the culprit:

    pub fn insert(&mut self, key: &str, value: impl Serialize) -> Result<(), serde_json::Error> {
        self.insert_raw(key, serde_json::to_string(&value)?);
        Ok(())
    }

This was in the external library I was using to provide cookie-based sessions for my web app, and was transitively invoked when I called the login() method in my own code. Someone else was serializing my IDs, in a way I hadn't anticipated!

The way that Serde decides what code to call is based on its data model. And wouldn't you know it, the following words are right there, hiding in plain sight, as they had been all along:

When deserializing a data structure from some format, the Deserialize implementation for the data structure is responsible for mapping the data structure into the Serde data model by passing to the Deserializer a Visitor implementation that can receive the various types of the data model...


  • seq
    • A variably sized heterogeneous sequence of values, for example Vec<T> or HashSet<T>. ...


The flexibility around mapping into the Serde data model is profound and powerful. When implementing Serialize and Deserialize, be aware of the broader context of your type that may make the most instinctive mapping not the best choice.

Well, when you put it that way, I can't help but understand: I needed to implement a visit_seq() method in my deserialization code.
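The serialized form sitting in the session table was a JSON array of decimal byte values. Here's a stdlib-only sketch of what visit_seq() has to do with that shape; it's not the real method (which receives elements through Serde's SeqAccess rather than parsing text), just the same idea:

```rust
// take a JSON-ish array of byte values, e.g. "[1, 35, 69, …]",
// and rebuild the 128-bit ID one element at a time
fn from_seq(json_array: &str) -> u128 {
    let mut bytes = [0u8; 16];
    for (slot, elem) in bytes
        .iter_mut()
        .zip(json_array.trim_matches(|c| c == '[' || c == ']').split(','))
    {
        *slot = elem.trim().parse().unwrap();
    }
    u128::from_be_bytes(bytes)
}

fn main() {
    let id: u128 = 0x0123_4567_89ab_cdef_0123_4567_89ab_cdef;
    // serde_json renders a sequence of bytes as a plain JSON array of
    // numbers, which is exactly what showed up in the session entry
    let rendered = format!("{:?}", id.to_be_bytes());
    assert_eq!(from_seq(&rendered), id);
    println!("ok");
}
```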

fine, fine, I see the light


You can see that here if you'd like, but I'll actually come back to it in a second. The important part was that my logins were working again; time to party!

Wait, why isn't it broken?

I'd just spent the day banging my head against this problem, and so when everything worked again, I committed and pushed the change and signed off. But something was still bothering me, and the next day, I dove back into it.

All my serialization code was calling a method called bytes(), which simply called another method that would return an array of 16 bytes, in big-endian order, so it could go into the database and be sortable, as discussed.

But all10 my deserialization code was constructing the IDs as though the bytes were little-endian. Which led me to ask:

what the fuck?

Like, everything was working. Why did I need to construct from a different byte order? I felt like I was losing my mind, so I reached out to the Recurse Center community and presented my case.

Basically, I showed that the bytes were written correctly, resident in the DB in big-endian form, but then seemed to come out "backwards" and "had to be" reconstructed with the native-endian constructor, from_ne_bytes() (little-endian on my machine).

What had actually happened is that as long as there was agreement about what order to use for reconstructing the ID from the bytes, it didn't matter if it was big or little-endian, it just had to be the same on both the SQLx side and on the Serde side. This is also irrespective of the order they were written out in, but again, the two sides must agree on the convention used. Inside the Serde method, I had added some debug printing of the bytes it was getting, and they were in little-endian order. What I had not realized is that that was because they were first passing through the SQLx method which reversed them.
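That agreement property is easy to sketch with the stdlib byte-order functions:

```rust
fn main() {
    let id: u128 = 0x0102_0304_0506_0708_090a_0b0c_0d0e_0f10;

    // as long as writer and reader agree, either convention round-trips…
    assert_eq!(u128::from_be_bytes(id.to_be_bytes()), id);
    assert_eq!(u128::from_le_bytes(id.to_le_bytes()), id);

    // …but mismatched conventions hand back the byte-swapped number
    assert_eq!(u128::from_le_bytes(id.to_be_bytes()), id.swap_bytes());
    println!("ok");
}
```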

Mmmmm, delicious, delicious red herring.

Two people were especially helpful, Julia Evans and Nicole Tietz-Sokolskaya; Julia grabbed a copy of my database file and poked it with Python, and could not replicate the behavior I was seeing, and Nicole did the same but with a little Rust program she wrote. Huge thanks to both of them (but not just them) for the extended rubber ducking! And apologies for the initial gas-lighting; Julia was quite patient and diplomatic when pushing back against "the bytes are coming out of the db backwards".

Lessons learned

Welp, here we are, the end of the line; I hope this has been informative, or barring that, at least entertaining. Or the other way around, I'm not that fussy!

Obviously, the biggest mistake was to futz with being clever about endianness before understanding why the login code was now failing. Had I gotten it working correctly first, I would have been able to figure out the requirement for agreement on convention between the two different serialization systems much sooner, and I would not have wasted my own and others' time on a misunderstanding.

On the other hand, it's hard to see these things on the first try, especially when you're on your own, and are on your first fumbling steps in a new domain or ecosystem; for me, that was getting into the nitty-gritty with Serde, and for that matter, dealing directly with serialization-specific issues. Collaboration is a great technique for navigating these situations, and I definitely need to focus a bit more on enabling that11.

In the course of debugging this issue, I tried to get more insight via testing, and though that helped a little, it was not nearly enough; the problem was that I misunderstood how something worked, not that I had mistakenly implemented something I was comfortable with. Tests aren't a substitute for understanding!

And of course, I'm now much more confident and comfortable with Serde; reading the Serde code for other things, like UUIDs, is no longer an exercise in eye-glaze-control. Maybe this has helped you with that too?


Technically, most v4 UUIDs have only 122 random bits, as six out of 128 are reserved for version information.


Some databases have direct support for 128-bit primitive values (numbers). The database I'm using, SQLite, only supports up to 64-bit primitive values, but it does support arbitrary-length sequences of bytes called "blobs".


I'm using SQLite for reasons that I plan to dive into in a different post, but "blob" is specific to it. In general, you'll probably want to take advantage of implementation-specific features of whatever database you're using, which means that your table definitions won't be fully portable to a different database. This is fine and good, actually!


You may wonder: have I benchmarked this system with UUIDs vs. ULIDs? Ha ha, you must have never met a programmer before! No, of course not. But, that's coming in a follow-up.


If the code you're using has generics in it, then the compiler needs to generate specialized versions of that generic code based on how you use it; this is called "monomorphization", and it requires the original generic source to work. That's also true in C++, which is why most templated code is header-only, but Rust doesn't have header files.


I guess the extra 'U' and 'S' are invisible.


Is this confusing? Yes, 100%, it is not just you. Don't get discouraged.


The 6 most-significant bytes make up the timestamp in a ULID, which in the hex dump form pasted there would be the first twelve characters, since each byte is two hex digits.


"adding dbg!() statements in the code"


Upon further review, I discovered that the only methods constructing with little-endian order were the SQLx decode() method and the Serde visit_seq() method, which were also the only ones being called at all. The visit_bytes() and visit_byte_buf() methods, which I had thought were so important, were correctly treating the bytes as big-endian, but were simply never used. I fixed that in the next commit.


I've described my current practices as "solo-yolo", which has its plusses and minuses, as you may imagine.
