
What I Do When I Can’t Sleep


šŸ”— a linked post to every.to » — originally shared here on

Listening to him name exactly what he liked about the wine turned up the hairs on the back of my neck. Suddenly, my own experience of the wine felt deeper and more satisfying. I wanted to know: How was he able to identify these flavors?

He told me the secret was to blindly smell things and try to label them. You see, the part of your brain that’s responsible for smells is naturally mute. It’s called the olfactory bulb, and it’s an ancient fist of neurons just behind your eyes. It has only indirect backroad connections to the parts of the brain that control language, like Broca’s area. So, even though you might have an easy time knowing whether or not you like a scent, it’s not easy to label that scent with a word. It’ll feel like groping through a dark closet for something you know is there, but can’t quite find.

I did not know this, and now I will be nerding out on ways to train my taste buds.

But this article isn’t just about developing your sense of taste. It’s also about defining your style.

Much as the author describes articulating, for the first time in his life, what he likes as a writer, I am starting to do the same in several areas of my own life.

Case in point: I have a huge collection of quotes that you can see on the main page of this site.

I just fed all of them through ChatGPT with this prompt:

I’ve attached a series of quotes which are meaningful to me. First off, tell me the consistent themes you recognize across all of the quotes. Then, tell me what this says about the things that inspire me.

Here is the result:

Across all the quotes, several consistent themes emerge:

  1. Growth and Self-improvement: Many quotes focus on the importance of personal growth, overcoming challenges, and continuous improvement.
  2. Perseverance and Resilience: There's a strong theme of enduring hardship, staying strong in the face of adversity, and the importance of perseverance.
  3. Wisdom and Perspective: The quotes often reflect on acquiring wisdom, gaining perspective, and understanding life's deeper meanings and truths.
  4. Happiness and Contentment: There's an emphasis on finding happiness, being content with what one has, and the pursuit of joy in life.
  5. Community and Relationships: Several quotes highlight the value of community, the importance of relationships, and the strength that comes from vulnerability and connection with others.

These themes suggest that you are inspired by ideas related to personal growth, resilience in the face of adversity, the pursuit of wisdom and understanding, the importance of happiness and contentment, and the value of community and relationships. It paints a picture of someone who values both internal development and external connections, seeking both personal fulfillment and meaningful interactions with others.

:extremely Marge Simpson voice: ā€œWell, duh.ā€

(For real though, pretty awesome that we have a tool that is exceptionally good at summarizing large amounts of data and picking out patterns throughout the dataset.)
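If you want to run the same exercise over your own pile of quotes, here is a minimal sketch of how it could look as a script. It assumes the OpenAI Python SDK, a plain-text quotes.txt with one quote per line, and an OPENAI_API_KEY in your environment; the model name is just a placeholder for whatever capable chat model you have access to.

    # Sketch: ask an LLM to find the themes running through a pile of quotes.
    # Assumes the OpenAI Python SDK ("pip install openai"), a quotes.txt file
    # with one quote per line, and OPENAI_API_KEY set in your environment.
    from openai import OpenAI

    client = OpenAI()

    with open("quotes.txt", encoding="utf-8") as f:
        quotes = [line.strip() for line in f if line.strip()]

    prompt = (
        "I've attached a series of quotes which are meaningful to me. "
        "First off, tell me the consistent themes you recognize across all of the quotes. "
        "Then, tell me what this says about the things that inspire me.\n\n"
        + "\n".join(f"- {q}" for q in quotes)
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )

    print(response.choices[0].message.content)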

Continue to the full article


THE 88Ɨ31 ARCHIVE


šŸ”— a linked post to hellnet.work » — originally shared here on

This site contains 29257 unique* 88x31 buttons that I scraped from the GeoCities archives compiled by the incredible ARCHIVE TEAM before GeoCities' demise in late 2009.

I shouldn’t go through all ~30,000 images to find the ones I made for Tim’s World or That’s Unpossible, right?

…right?

Continue to the full article


The Robot Report #1 — Reveries


šŸ”— a linked post to randsinrepose.com » — originally shared here on

Whenever I talk about a knowledge win via robots on the socials or with humans, someone snarks, ā€œWell, how do you know it’s true? How do you know the robot isn’t hallucinating?ā€ Before I explain my process, I want to point out that I don’t believe humans are snarking because they want to know the actual answer; I think they are scared. They are worried about AI taking over the world or folks losing their job, and while these are valid worries, it’s not the robot’s responsibility to tell the truth; it’s your job to understand what is and isn’t true.

You’re being changed by the things you see and read for your entire life, and hopefully, you’ve developed a filter through which this information passes. Sometimes, it passes through without incident, but other times, it’s stopped, and you wonder, ā€œIs this true?ā€

Knowing when to question truth is fundamental to being a human. Unfortunately, we’ve spent the last forty years building networks of information that have made it pretty easy to generate and broadcast lies at scale. When you combine the internet with the fact that many humans just want their hopes and fears amplified, you can understand why the real problem isn’t robots doing it better; it’s the humans getting worse.

I’m working on an extended side quest, and in the past few hours of pairing with ChatGPT, I’ve found myself constantly second-guessing a large portion of the decisions and code that the AI produced.

This article pairs well with this one I read today about a possible social exploit that relies on frequently hallucinated package names.

Simon Willison writes:

Bar Lanyado noticed that LLMs frequently hallucinate the names of packages that don’t exist in their answers to coding questions, which can be exploited as a supply chain attack.

He gathered 2,500 questions across Python, Node.js, Go, .NET and Ruby and ran them through a number of different LLMs, taking notes of any hallucinated packages and if any of those hallucinations were repeated.

One repeat example was ā€œpip install huggingface-cliā€ (the correct package is ā€œhuggingface[cli]ā€). Bar then published a harmless package under that name in January, and observed 30,000 downloads of that package in the three months that followed.

I’ll be honest: during my side quest here, I’ve 100% blindly run npm install on packages without double-checking the official documentation.
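Existence in a registry isn’t proof of anything (the whole point of the attack is that someone registers the hallucinated name), but glancing at a package’s registry metadata before running the install is a cheap habit. Here’s a rough sketch of what that could look like, assuming Python with the requests library and the public PyPI and npm registry APIs:

    # Sketch: peek at a package's registry metadata before blindly installing it.
    # Assumes the "requests" library; queries the public PyPI and npm registries.
    import sys
    import requests

    def pypi_summary(name: str) -> str:
        r = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
        if r.status_code != 200:
            return "not found on PyPI"
        info = r.json()["info"]
        return f"PyPI: {info['name']} {info['version']} - {info.get('summary') or 'no summary'}"

    def npm_summary(name: str) -> str:
        r = requests.get(f"https://registry.npmjs.org/{name}", timeout=10)
        if r.status_code != 200:
            return "not found on npm"
        data = r.json()
        created = data.get("time", {}).get("created", "unknown date")
        return f"npm: {data['name']} (first published {created})"

    if __name__ == "__main__":
        pkg = sys.argv[1]  # e.g. python check_package.py huggingface-cli
        print(pypi_summary(pkg))
        print(npm_summary(pkg))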

These large language models truly are mirrors to our minds, showing all sides of our personalities from our most fit to our most lazy.

Continue to the full article


Why Houses Don’t Look Like Houses Anymore


šŸ”— a linked post to thenation.com » — originally shared here on

I’ve owned my own home for close to five years now, and I’m slowly coming around to the idea of making major changes to it in order to make it feel like it is mine.

During the pandemic, we poured a patio in the front of our house and spent nearly every day sitting on it.

In fact, that patio led to the formation of several enduring relationships with my neighbors.

I find it tough to shake the renter’s mindset, where I can’t do anything to affect the ā€œresale valueā€ of my home because… well, maybe the next owner won’t buy it because of the deep purple walls in the basement.

But the more I lean into tweaking what we have, the more comfortable, productive, and happy I feel. I’m incredibly grateful to have property which I can modify however I see fit to improve the way of life for my family and myself.

This article also made me reflect on how toxic it can be to covet other people’s homes:

We should always remember that the purpose of a home is for living and that decoration, for many, is a form of self-expression. Media literacy, which has improved with regard to beauty and fashion content, lags when it comes to architecture and interior design. Changing that begins with realizing that most homes don’t actually look like hotel lobbies or real estate listings. They, rather joyfully, look like homes—dust bunnies and all.

Continue to the full article


The ā€˜Enshittification’ of TikTok


šŸ”— a linked post to wired.com » — originally shared here on

Here is how platforms die: First, they are good to their users; then they abuse their users to make things better for their business customers; finally, they abuse those business customers to claw back all the value for themselves. Then, they die.

I call this enshittification, and it is a seemingly inevitable consequence arising from the combination of the ease of changing how a platform allocates value, combined with the nature of a "two-sided market," where a platform sits between buyers and sellers, holding each hostage to the other, raking off an ever-larger share of the value that passes between them.

If you’ve spent much time in the same tech bubbles as me this past year, you’ve probably come across this article already.

At a bare minimum, I’m sure you’ve seen the phrase ā€œenshittification.ā€

Once you understand the concept, you do start to see the pattern unfold around you constantly.1

While there are countless examples of this natural platform decay within our virtual world, what about the physical world?

Is enshittification simply human nature, an inescapable fate for any collaborative endeavor above a certain size?

And if enshittification is not inevitable, what are the forces that lead to it, and how can we combat them when building our own communities?


  1. Case in point: the CondĆ© Nast-owned WIRED website on which this article was published. I’m using a Shortcut on my iPad to post this article, and while sitting idle at the top of the post, I've seen three levels of pop-ups appear that cover the article content. I haven’t even scrolled the page yet!

Continue to the full article


101 things I would tell my self from 10 years ago


šŸ”— a linked post to approachwithalacrity.com » — originally shared here on

I’m a sucker for this style of post. This one in particular is jam-packed with so many great pieces of advice that I had to read it three times before sharing it.

Here’s the very first item on her list. If it speaks to you, take ten minutes and thoughtfully consider the other 100 items.

  1. You are overly obedient. You not only do what people tell you to do, but find it hard to imagine any world other than the one they present to you. Spend more time thinking about what you want, in isolation from the pressures of the world. (Keep this in mind while you read the rest of this very prescriptive document.)

Continue to the full article


The Internet Needs to Change


šŸ”— a linked post to youtube.com » — originally shared here on

I hate the internet.

...that's a lie. I love it, but I hate the algorithms.

That's also a lie... I love the algorithms.

I watched this video on the plane ride back from Nickelodeon Resort yesterday, and I have to say, it got me.

Hank's assessment of how the algorithms deployed by social networks come up short in actually giving us what we want is spot on.

It's why I love how many friends are spinning up their own newsletters. And this new newsletter was a no-brainer instasubscribe.

Ever since my buddy Paul gifted me a premium subscription to Garbage Day, I've been a voracious newsletter subscriber. They do a great job of filling the void that Google Reader left in my life.1

This website has been my way of curating the internet, sharing things I've found that interest me, but maybe I should start a newsletter myself and do things in both places.

Should I tell my impostor syndrome to shove it and start my own newsletter, y'all?


  1. I do need to find a way to get them out of my inbox, though. I really should move all my subscriptions into Feedbin so they show up in my RSS reader app. 


Juan L. Otaiza - System of a Down - Relaxing Piano Version


šŸ”— a linked post to youtube.com » — originally shared here on

The algorithms1 blessed me with this video last week, and I find myself coming back to it when I'm doing deep work.

I'm also enjoying his Rammstein version, and I'm eagerly looking forward to checking out the Linkin Park and Avenged Sevenfold ones soon.

If I could play the piano, this is absolutely the kind of stuff I would want to play.


  1. Speaking of algorithms, you should watch Hank Green's latest video that I just wrote about. 


All about the money


šŸ”— a linked post to builders.genagorlin.com » — originally shared here on

One can of course relate to money in pathological ways. For those whose standard of value resides not in a conception of their fully-lived life (a la the builder), but in the impressions or judgments of others (be it God or society or their parents or some other ā€œdrill sergeantā€), money means whatever it purportedly means to those others—status, in some circles, or wicked materialism in others, or in still others, ā€œprivilegeā€ to be forgiven with obligate philanthropy.

By contrast, a builder’s relationship to money is not mediated by any of these external intermediaries. She understands that money is a medium of value exchange, and what she values is set by the life she wants to build and the world she wants to live in.

There are also simpler pathologies, such as when fear or insecurity drives founders to pursue short-term monetary gains over the longer-term health and durability of their business. But such financial anxieties can be diagnosed and remedied by re-orienting toward the overarching goal of building one’s best life, which presumably includes a healthy and durable version of one’s business (or whatever one is building) as part of it.

Quite a useful way to reframe money and its importance to a well-considered life.

Continue to the full article


Claude and ChatGPT for ad-hoc sidequests


šŸ”— a linked post to simonwillison.net » — originally shared here on

I’m an unabashed fan of Simon Willison’s blog. Some of his posts admittedly go over my head, but I needed to share this post because it gets across the point I have been trying to articulate myself about AI and how I use it.

In the post, Simon talks about wanting to get a polygon object created that represents the boundary of Adirondack Park, the largest park in the contiguous United States (it occupies a fifth of New York State!).

That part in and of itself is nerdy and a fun read, but this section here made my neck hurt from nodding aggressively in agreement:

Isn’t this a bit trivial? Yes it is, and that’s the point. This was a five minute sidequest. Writing about it here took ten times longer than the exercise itself.

I take on LLM-assisted sidequests like this one dozens of times a week. Many of them are substantially larger and more useful. They are having a very material impact on my work: I can get more done and solve much more interesting problems, because I’m not wasting valuable cycles figuring out ogr2ogr invocations or mucking around with polygon libraries.

Not to mention that I find working this way fun! It feels like science fiction every time I do it. Our AI-assisted future is here right now and I’m still finding it weird, fascinating and deeply entertaining.
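(As an aside, if the ā€œogr2ogr invocationsā€ and ā€œpolygon librariesā€ Simon mentions are unfamiliar, here’s a hedged sketch of what that kind of sidequest boils down to in Python. It assumes the boundary has already been converted to GeoJSON with GDAL’s ogr2ogr and that shapely is installed; the file name and coordinates are illustrative only.)

    # Sketch: point-in-polygon check against a park boundary.
    # Assumes the boundary was converted to GeoJSON beforehand, e.g. with GDAL:
    #   ogr2ogr -f GeoJSON adirondack.geojson park_boundary.shp
    # and that the "shapely" library is installed. File name and coordinates
    # are illustrative only.
    import json

    from shapely.geometry import Point, shape
    from shapely.ops import unary_union

    with open("adirondack.geojson", encoding="utf-8") as f:
        geojson = json.load(f)

    # Merge all features into a single (multi)polygon.
    boundary = unary_union([shape(feat["geometry"]) for feat in geojson["features"]])

    # Roughly Lake Placid, NY (longitude, latitude) -- should fall inside the park.
    print(boundary.contains(Point(-73.98, 44.28)))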

Frequent readers of this blog know that a big part of the work I’ve been doing since being laid off is in reflecting on what brings me joy and happiness.

Work over the last twelve years of my life represented a small portion of something that used to bring me a ton of joy (building websites and apps). But somewhere along the way, building websites stopped being enjoyable for me.

I used to love learning new frameworks, expanding the arsenal of tools in my toolbox to solve an ever-expanding set of problems. But spending my free time developing a new skill with a new tool began to feel like I was working but not getting paid.

And that notion really doesn’t sit well with me. I still love figuring out how computers work. It’s just nice to do so without the added pressure of building something to make someone else happy.

Which brings me to the ā€œside questā€ concept Simon describes in this post, which is something I find myself doing nearly every day with ChatGPT.

When I was going through my album artwork on Plex, my first instinct was to go to ChatGPT and have it help me parse through Plex’s internal thumbnail database to build me a view which shows all the artwork on a single webpage.

It took me maybe 10 minutes of iterating with ChatGPT, and now I know more about the inner workings of Plex’s media caching database than I ever would have before.
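For a sense of the shape of that ten-minute result, here’s a hedged sketch: walk a folder of cached artwork and spit out a single HTML page of thumbnails. The directory path is a placeholder (Plex buries artwork inside its metadata bundles, and the exact location varies by platform), so treat this as illustrative rather than the actual script ChatGPT and I ended up with.

    # Sketch: build a one-page gallery from a folder of cached artwork images.
    # ARTWORK_DIR is a placeholder -- point it at wherever your Plex install
    # keeps its cached thumbnails; the exact path varies by platform.
    import html
    from pathlib import Path

    ARTWORK_DIR = Path("/path/to/plex/artwork/cache")  # placeholder
    OUTPUT = Path("artwork.html")

    images = sorted(
        p for p in ARTWORK_DIR.rglob("*")
        if p.suffix.lower() in {".jpg", ".jpeg", ".png", ".webp"}
    )

    tags = "\n".join(
        f'<img src="{html.escape(p.as_uri())}" alt="{html.escape(p.name)}" width="200">'
        for p in images
    )

    OUTPUT.write_text(f"<!doctype html>\n<title>Plex artwork</title>\n{tags}\n", encoding="utf-8")
    print(f"Wrote {len(images)} images to {OUTPUT}")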

Before ChatGPT, I would’ve had to spend several hours poring over open source code or out-of-date documentation. In other words: I would’ve given up after the first Google search.

It feels like another application of Moravec’s paradox. As Garry Kasparov observed with chess bots, the winning approach here seems to be one where LLMs and humans work in tandem.

Simon ends his post with this:

One of the greatest misconceptions concerning LLMs is the idea that they are easy to use. They really aren’t: getting great results out of them requires a great deal of experience and hard-fought intuition, combined with deep domain knowledge of the problem you are applying them to. I use these things every day. They help me take on much more interesting and ambitious problems than I could otherwise. I would miss them terribly if they were no longer available to me.

I could not agree more.

I find it hard to explain to people how to use LLMs without sitting down with them for more than an hour and walking through a bunch of examples of how they work.

These tools are insanely cool and insanely powerful when you bring your own knowledge to them.

On their own, they simply parrot back what they believe to be the most statistically correct response to whatever prompt was provided.

I haven’t been able to come up with a good analogy for that sentiment yet; the closest I’ve got is ā€œit’s like a really good personal assistantā€, which feels like the same analogy the tech industry always uses to market any new tool.

You wouldn’t just send a personal assistant off to go do your job for you. A great assistant is there to compile data, to make suggestions, to be a sounding board, but at the end of the day, you are the one accountable for the final output.

If you copy and paste ChatGPT’s responses into a court brief and it contains made up cases, that’s on you.

If you deploy code that contains glaring vulnerabilities, that’s on you.

Maybe I shouldn’t be lamenting that I lost my joy of learning new things about computers, because I sure have been filled with joy learning how to best use LLMs these past couple years.

Continue to the full article