parker higgins dot net

Public (sub-)domains

The tremendous influx of traffic to Mastodon got me thinking that it might finally be time to set up my own instance, and how-to posts from Jacob and Simon have only increased that interest. But as a little branding exercise, and especially if I want to offer accounts to a few close friends, surely I could do something a little more fun than just my first and last name.

Many Mastodon instances are on subdomains, and since the early days weirder new-style TLDs have been de rigueur. (The flagship has always been at a .social!) So I set out to find three-word phrases where the third word is a four-or-more-letter top-level domain, using Moby Dick as my first source text.

The results were great! The script I wrote originally output all possible options, which I then spot-checked for availability, but I’ve since updated it to do a quick whois check to see whether each domain is already registered. (whois support is a little spotty for some of the weirder TLDs, so many results are inconclusive, but I was surprised at some of the good ones available.) As of right now, here are some possible instances available for registration:

  • certain.fragmentary.parts
  • famous.whaling.house
  • moreover.unhesitatingly.expert
  • however.temporary.fail
  • almost.microscopic.network
  • should.nominally.live
  • another.whaling.voyage
  • surprising.terrible.events

Wouldn’t those all be great places to call your home in the fediverse?

Normally I would wonder to myself whether this kind of thought experiment is cool, but this time I feel like I’ve got external validation in the form of the reaction to this thread on Mastodon, which has also been great. Somebody even bought the saddest.city domain on the strength of the strangest.saddest.city find.

People responded with some cool possible instance names from The Great Gatsby, Frankenstein, White Noise, the King James Bible and more. Really fun.

The little Python script that finds these uses NLTK to tokenize big text files first into sentences and then, within sentences, into words. Then it checks to see if there are three long-ish words in a row where the third one is on a list of TLDs. Since posting that script on Mastodon yesterday, I’ve updated it with the built-in whois check as well.
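For the curious, here’s a minimal sketch of that pipeline, assuming a plain-text source file and a local list of TLDs. The file names, length thresholds, and whois heuristics below are my own placeholders, not the exact script:

    # Minimal sketch of the three-word-domain finder described above.
    # Assumes a plain-text source (e.g. Moby Dick from Project Gutenberg) and
    # a TLD list such as https://data.iana.org/TLD/tlds-alpha-by-domain.txt.
    import subprocess
    import nltk

    nltk.download("punkt", quiet=True)  # sentence/word tokenizer models

    text = open("moby-dick.txt").read()
    tlds = {line.strip().lower() for line in open("tlds.txt")
            if line.strip() and not line.startswith("#")}

    candidates = set()
    for sentence in nltk.sent_tokenize(text):
        words = [w.lower() for w in nltk.word_tokenize(sentence) if w.isalpha()]
        # Three longish words in a row, where the third is a real TLD.
        for a, b, c in zip(words, words[1:], words[2:]):
            if len(a) >= 4 and len(b) >= 4 and len(c) >= 4 and c in tlds:
                candidates.add(f"{a}.{b}.{c}")

    for name in sorted(candidates):
        domain = name.split(".", 1)[1]  # the registrable part, e.g. "whaling.house"
        # Rough availability check via the system whois client; output formats
        # vary by registry, so treat this as a hint rather than an answer.
        out = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
        status = "maybe available" if "no match" in out.lower() else "unclear/taken"
        print(f"{name}  ({status})")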

As of now, I’m still tooting from a boring old (well-run!) general purpose instance, though who knows… with almost.microscopic.network available, maybe I will move soon.

New Rossword puzzle

Sorry I Haven’t Posted for a while!

I do of course intend to return to the blog, yadda yadda, lots of updates to share. One quick thing that merits an update today is that I’ve co-constructed a crossword puzzle with Ross Trudeau over at Rossword Puzzles. Go check it out.

Ross is a mentor to many in the crossword world, and I’m lucky to have weaseled my way into that cohort through being friends with him. We also (and this should be the subject of another blog post! The inspiration is returning, see!) host a regular Twitch stream called Cursewords Live, which involves a lot of solving puzzles in my cursewords software.

Ross and I previously co-constructed a Universal Sunday puzzle (available as a PDF) published on January 10 this year.

1923 zine and website launch

Towards the beginning of this year I ran a Kickstarter campaign for a monthly zine of archival material from the year 1923.

That year has some copyright and public domain significance. For twenty years following the 1998 Copyright Term Extension Act, 1922 was the most recent publication year for which a given work could be categorically determined to be in the public domain. That made “1922 and earlier” a common category for digitized databases of older works. On January 1, that border advanced for the first time in two decades, and the zine is something of a celebration of that fact.

Anyway.

The zines sold out quickly—I only intended to offer a hundred subscriptions, which were snapped up in about two hours—and so I added a “digital subscription” option. That meant I would have to figure out how to distribute the zines digitally too, but I was probably going to want to do that anyhow.

I sent out and posted (for backers only) the first two issues together after the campaign closed in the beginning of February. This week I sent out and posted the third issue, and so I’ve removed the “paywall” for the first two.

Which means: you can now check out the January and February issues of 1923 at 1923.press.

This has been a big endeavor for me, and I’ve had to learn a lot about a tiny area of publishing and fulfillment.

In the coming days I’m going to publish some notes about my zine-creating process, which I think is sort of idiosyncratic but may also be instructional. I’ve written a few scripts and one-liners to ease the process, and I’d love if those ended up being helpful to other zine makers!

Introducing: cursewords, a crossword puzzle solving interface for the terminal

I’m releasing new software today for solving crossword puzzles in the terminal. cursewords is a small Python program to open, navigate, and solve puzzles stored as .puz files. If you’re a Mac or Linux user, you can install it today by running pip3 install --user cursewords in your terminal, and then use the cursewords command to open a .puz file on your computer.

In case you’re not a crossword nerd: the .puz format was developed for popular solving software called AcrossLite, and it remains the most popular format for transmitting crosswords online, from independent creators all the way up to the New York Times.

[Screenshot: cursewords in action on my terminal]

In fact, many independent puzzle creators only distribute their puzzles as digital files. For example, I subscribe to a handful of excellent puzzle outlets—American Values Club, The Inkubator, Crossword Nation, Fireball Crosswords—that don’t offer an online solver or an app like the Times does. As a Linux user, I didn’t have a lot of options to open them: AcrossLite isn’t compatible, Web-based solutions have their limitations, and beyond that, I wanted to be able to introduce fun features like a “downs only” mode that hides the across clues. (You can try it: running cursewords with the --downs-only flag activates this very challenging mode.)

But also, I liked the challenge of writing my own software as a way of thinking more about how crosswords are built and how we hold them in our head for navigation. I also love the retro-computing aesthetic that comes with terminal applications, and—while I think this is the first ever terminal crossword client—that it’s mostly based on tech that has remained unchanged for decades. In that way I’ve likened it to efforts to, say, imagine what a car built with first-century technology would look like: it’s not necessarily the most useful or the best, but it’s instructive (and in my case, actually works)!

As the name may suggest, cursewords relies on a famed programming library called curses that helps to build text-based user interfaces. It is also heavily indebted to a curses wrapper called blessed, and a library called puz that reads and writes .puz files.
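As a toy illustration of those building blocks (not how cursewords itself is structured), here’s a small sketch that reads a .puz file with puz and prints the empty grid with a bit of blessed styling:

    # Toy sketch: open a .puz file and print its empty grid.
    # Uses the same libraries cursewords builds on, but none of its structure.
    import sys
    import puz
    from blessed import Terminal

    term = Terminal()
    puzzle = puz.read(sys.argv[1])          # path to a .puz file

    print(term.bold(puzzle.title), "by", puzzle.author)
    for row in range(puzzle.height):
        line = []
        for col in range(puzzle.width):
            square = puzzle.fill[row * puzzle.width + col]
            if square == ".":               # "." marks a black square
                line.append(term.reverse("  "))
            else:
                line.append("_ ")           # an unfilled letter cell
        print("".join(line))

    numbering = puzzle.clue_numbering()     # across/down clue lists
    print(len(numbering.across), "across clues,", len(numbering.down), "down clues")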

If you’re interested in cursewords, please give it a try and let me know how it works for you. I’ve been deep in this problem for over a month now, and I’m excited to talk about it more publicly.

Shutting down @LinkArchiver, the Twitter link backup bot

After a little over a year of service, @LinkArchiver, the Twitter bot that automatically made Internet Archive backups of the links you tweeted, has archived its last link. In that time it archived somewhere around 7.2 million links total from about 9,000 users.1 The last link it archived was this LA Times story about Verizon throttling California firefighters, tweeted on Thursday morning.

LinkArchiver stopped working this week when Twitter turned off the User Stream API it relied on. Under the hood, LinkArchiver just watched its own home timeline, which let it use Twitter’s built-in follow mechanism as its user list. Since that API change, it can’t pull down a “stream” of its timeline, and it would have to be redesigned to continue working.

Even as this project is shutting down, I consider it a pretty major success. I am very grateful to Jacob Hoffman-Andrews for pitching me the underlying idea. Writing the code (and seeing it get an enthusiastic reception) was a great way to kick off my time at Recurse Center last summer. I’m also grateful to Ben Cotton who gave it a nice write-up at opensource.com when it launched.

I’ve had a few people ask me about archiving and backup options now that this is no longer available. I’m considering doing something similar for Mastodon, or for plain RSS feeds, but I also don’t want to downplay the fact that the Internet Archive does a very good job of running the Wayback Machine crawler on its own, so the main value I can add is a personal layer on top of it. In any future work on things like LinkArchiver, I’d want to keep that in mind.

There’s probably also a way to redesign the bot around Twitter’s existing APIs. Instead of receiving a stream of new-tweet events pushed from Twitter, it could request new tweets at regular intervals, using an API that’s still operational. If somebody wants to write that, they’re welcome to, but given the way Twitter is, I’m not eager to do so.
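For anyone who does want to pick it up, here’s a rough sketch of what that polling approach could look like with the tweepy library and the Wayback Machine’s public save endpoint. The credentials and polling interval are placeholders, and this assumes the Twitter API as it stood at the time:

    # Rough sketch of the polling approach — not LinkArchiver itself.
    # Assumes Twitter API v1.1 credentials and the tweepy library; archiving
    # goes through the Wayback Machine's public "save" endpoint.
    import time
    import requests
    import tweepy

    auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")   # placeholders
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")
    api = tweepy.API(auth)

    since_id = None
    while True:
        # Only fetch tweets newer than the last one we've already seen.
        for tweet in api.home_timeline(since_id=since_id, count=200,
                                       tweet_mode="extended"):
            since_id = max(since_id or 0, tweet.id)
            for url in tweet.entities.get("urls", []):
                # Ask the Wayback Machine to snapshot each linked page.
                requests.get("https://web.archive.org/save/" + url["expanded_url"],
                             timeout=60)
        time.sleep(120)   # poll every couple of minutes instead of streaming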

  1. “Quote tweets” are treated like links to tweets, and constituted about a third of the total links. Something like 4.8 million links backed up were at domains other than Twitter.