stuff tagged with "empathy"

For most of my life, I would have rolled my eyes at a lot of the language of kindness or self-empathy as insufferably fuzzy-minded, a kind of indulgence that distracts from more meaningful work. But further into adulthood, having grown, and endured loss, and been with loved ones and friends as they've grieved and struggled and been through all the countless small indignities that life inflicts on us all, I realize that it's actually pretty important to extend kindness to oneself.

— Anil Dash

To know someone's pain is to share in it. And to share in it is to relieve some of it.

— Nnedi Okorafor

As AI continues to commoditize once niche development skills, we’ve entered a new era. Technical brilliance alone isn’t the differentiator it used to be. Developers with emotional intelligence, communication skills and the ability to collaborate are the ones who now rise to the top.

— Tim Haak

The Who Cares Era


🔗 a linked post to dansinker.com » — originally shared here on

In the Who Cares Era, the most radical thing you can do is care.

In a moment where machines churn out mediocrity, make something yourself. Make it imperfect. Make it rough. Just make it.

As the culture of the Who Cares Era grinds towards the lowest common denominator, support those that are making real things. Listen to something with your full attention. Watch something with your phone in the other room. Read an actual paper magazine or a book.

Be yourself.

Be imperfect.

Be human.

Care.

The luxury of saying no.

originally shared here on

The real threat to creativity isn’t a language model. It’s a workplace that rewards speed over depth, scale over care, automation over meaning. If we’re going to talk about what robs people of agency, let’s start there. Let’s talk about the economic structures that pressure people into using tools badly, or in ways that betray their values. Let’s talk about the lack of time, support, mentorship, and trust. Not the fact that someone ran a prompt through a chatbot to get unstuck. Where is the empathy? Where is your support for people who are being tossed into the pit of AI and instructed to find a way to make it work?

So sure, critique the tools. Call out the harm. But don’t confuse rejection with virtue. And don’t assume that the rest of us are blind just because we’re using the tools you’ve decided are beneath you.

(via Jeffrey)

do we cherish our selves


🔗 a linked post to winnielim.org » — originally shared here on

Because this is how we are conditioned to see value: we are only valuable if we do x,y and z – this is also how we value other people and our selves. It perpetuates an insidious suffering because very few people are truly loved or seen. We are not loved for who we are but the roles we play and the actions we make. Obedience is seen as a great virtue. Wanting to live in a way that we want is seen as selfish. When other people get to live in an unconventional way they want we ostracise them for it. If I didn’t get to do this, you can’t do it too. If I suffered, you should suffer too. Sometimes weird shit happens even if we do societally-valued things. For example, if we start caring about our health by eating better or exercising more, suddenly we start getting comments about how we are too health-conscious and should loosen up more.

If we spend a few moments thinking about this, it is shocking how little space we have to be our selves. Who exactly are our selves anyway? We may not know because we did not have the time, space or permission to unfold. We spend so much time and energy chasing the goals we think we want, without contemplating why we wanted them in the first place.

Another one I got a sore neck from reading because I found myself nodding vehemently the entire time.

Henry Rollins and the Spirit of Punk


🔗 a linked post to satisfyrunning.com » — originally shared here on

After asking Henry Rollins if he is still punk at age 64:

I would have to say yes because it’s the ideology that has stayed with me: anti-racist, anti-fascist, anti-homophobia, anti-discrimination, and you know, equality, fairness, decency, all of that. To me, that’s punk rock. And I don’t think that’s bad. If I had a kid, I'd say be honest, you know? Find a slow kid in school and become friends with them because people make fun of them. And when people start making fun of him, you know, stick up for him, man, you’ll be a hero, you’ll lead.

(via Naz)

I disagree


🔗 a linked post to jamie.ideasasylum.com » — originally shared here on

With you. With my wife. With my kids. With my parents. With my boss. With everyone I work with. With every other Rails developer. With everyone on BlueSky. With everyone.

At least, on some things.

And that’s ok.

I should print this entire article out and hand it to everybody I know. Required reading for anyone who is trying to understand how to articulate the meaning of empathy.

One thing I’ll add: I recently listened to a podcast where they talked about the significance of music played in a church. Basically, at any point prior to the last ~150 years, if you wanted to hear music, you either had to make it yourself or physically go somewhere to experience it.

There was no permanence about music other than maybe sheet music and your memory of it.

Any time prior to 2010, I loved hearing Ignition (Remix). I heard it again the other day and had a visceral reaction against it. I turned it off and moved on.

It’s okay that I used to like the song, and it’s okay that I do not want to listen to it now.

And it’s okay that if I do hear it, I can choose to remember the good times happening all around me with that song as a background track instead of the artist.

This part was also fantastic:

When I type rails c it sure doesn’t feel as if I’ve just given a big thumbs-up to whatever shit-take DHH has just published on his blog. I’m not over here running bundle install fascism.

The thing is, I don’t care about literally anything DHH has to say that isn’t 100% about Rails. I don’t care what sort of moment he’s having or which extreme view he’s decided to cosy up to today. I don’t care about his social commentary. I don’t follow his blog or subscribe to his feeds. I’m only aware of any of his views when those outraged by it decide to push it into my life. It’s those people who are giving him more power, and elevating his status, outside of the one narrow place where he might deserve it.

How to talk to the worst parts of yourself


🔗 a linked post to m.youtube.com » — originally shared here on

I finished this video and felt the same way I felt reading Hope and Help for your Nerves: seen.

When I talk to myself, there are times that I say unpleasant things to myself. I’ve spent the better part of 20 years trying to completely silence those thoughts.

When I started listening to them and welcoming them, my depression and anxiety improved almost immediately.

If you feel like you say mean crap to yourself and are looking for a way to stop, start with the advice that Karen Faith gives in this TEDx talk. It’s pretty much spot on with what I’ve experienced.

AI is not like you and me


🔗 a linked post to zachseward.com » — originally shared here on

Aristotle, who had a few things to say about human nature, once declared, "The greatest thing by far is to have a command of metaphor," but academics studying the personification of tech have long observed that metaphor can just as easily command us. Metaphors shape how we think about a new technology, how we feel about it, what we expect of it, and ultimately how we use it.

I love metaphors. I gotta reflect on this idea a bit more.

There is something kind of pathological going on here. One of the most exciting advances in computer science ever achieved, with so many promising uses, and we can't think beyond the most obvious, least useful application? What, because we want to see ourselves in this technology?

Meanwhile, we are under-investing in more precise, high-value applications of LLMs that treat generative A.I. models not as people but as tools. A powerful wrench to create sense out of unstructured prose. The glue of an application handling messy, real-world data. Or a drafting table for creative brainstorming, where a little randomness is an asset not a liability. If there's a metaphor to be found in today's AI, you're most likely to find it on a workbench.

Bingo! AI is a tool, not a person.

The other day, I made a joke on LinkedIn about the easiest way for me to spot a social media post that was written with generative AI: the phrase “Exciting News!” alongside one of these emojis: ?, ?, or ?.

It’s not that everyone who uses those things necessarily used ChatGPT.

It’s more like how I would imagine a talented woodworker would be able to spot a rookie mistake in a novice’s first attempt at a chair.

And here I go, using a metaphor again!

AI isn't useless. But is it worth it?


🔗 a linked post to citationneeded.news » — originally shared here on

There are an unbelievable number of points Molly White makes with which I found myself agreeing.

In fact, I feel like this is an exceptionally accurate perspective of the current state of AI and LLMs in particular. If you’re curious about AI, give this article a read.

A lot of my personal fears about the potential power of these tools come from the speculation that the LLM CEOs make about their forthcoming updates.

And I don’t think that fear is completely unfounded. I mean, look at what tools we had available in 2021 compared to April 2024. We’ve come a long way in three years.

But right now, these tools are quite hard to use without spending a ton of time to learn their intricacies.

The best way to fight fear is with knowledge. Knowing how to wield these tools helps me deal with my fears, and I enjoy showing others how to do the same.

One point Molly makes about the generated text got me to laugh out loud:

I particularly like how, when I ask them to try to sound like me, or to at least sound less like a chatbot, they adopt a sort of "cool teacher" persona, as if they're sitting backwards on a chair to have a heart-to-heart. Back when I used to wait tables, the other waitresses and I would joke to each other about our "waitress voice", which were the personas we all subconsciously seemed to slip into when talking to customers. They varied somewhat, but they were all uniformly saccharine, with slightly higher-pitched voices, and with the general demeanor as though you were talking to someone you didn't think was very bright. Every LLM's writing "voice" reminds me of that.

“Waitress voice” is how I will classify this phenomenon from now on.

You know how I can tell when my friends have used AI to make LinkedIn posts?

When all of a sudden, they use emoji and phrases like “Exciting news!”

It’s not even that waitress voice is a negative thing. After all, we’re expected to use our waitress voices in social situations when we don’t intimately know somebody.

Calling a customer support hotline? Shopping in person for something? Meeting your kid’s teacher for the first time? New coworker in their first meeting?

All of these are situations in which I find myself using my own waitress voice.

It’s a safe play for the LLMs to use it as well when they don’t know us.

But I find one common thread among the things AI tools are particularly suited to doing: do we even want to be doing these things? If all you want out of a meeting is the AI-generated summary, maybe that meeting could've been an email. If you're using AI to write your emails, and your recipient is using AI to read them, could you maybe cut out the whole thing entirely? If mediocre, auto-generated reports are passing muster, is anyone actually reading them? Or is it just middle-management busywork?

This is what I often brag about to people when I speak highly of LLMs.

These systems are incredible at the BS work. But they’re currently terrible with the stuff humans are good at.

I would love to live in a world where the technology industry widely valued making incrementally useful tools to improve peoples' lives, and were honest about what those tools could do, while also carefully weighing the technology's costs. But that's not the world we live in. Instead, we need to push back against endless tech manias and overhyped narratives, and oppose the "innovation at any cost" mindset that has infected the tech sector.

Again, thank you Molly White for printing such a poignant manifesto, seeing as I was having trouble articulating one of my own.

Innovation and growth at any cost are concepts which have yet to lead to a markedly better outcome for us all.

Let’s learn how to use these tools to make all our lives better, then let’s go live our lives.

Happy 20th Anniversary, Gmail. I’m Sorry I’m Leaving You.


🔗 a linked post to nytimes.com » — originally shared here on

I am grateful — genuinely — for what Google and Apple and others did to make digital life easy over the past two decades. But too much ease carries a cost. I was lulled into the belief that I didn’t have to make decisions. Now my digital life is a series of monuments to the cost of combining maximal storage with minimal intention.

I have thousands of photos of my children but few that I’ve set aside to revisit. I have records of virtually every text I’ve sent since I was in college but no idea how to find the ones that meant something. I spent years blasting my thoughts to millions of people on X and Facebook even as I fell behind on correspondence with dear friends. I have stored everything and saved nothing.

This is an example of what AI, in its most optimistic state, could help us with.

We already see companies doing this. In the Apple ecosystem, the Photos widget is perhaps the best piece of software they’ve produced in years.

Every single day, I am presented with a slideshow of a friend who is celebrating their birthday, a photo of my kids from this day in history, or a memory that fits with an upcoming event.

All of that is powered by rudimentary1 AI.

Imagine what could be done when you unleash a tuned large language model on our text histories. On our photos. On our app usage.

AI is only as good as the data it is provided. We’ve been trusting our devices with the most intimate and vulnerable parts of ourselves for two decades.

This is supposed to be the payoff for the last twenty years of surveillance capitalism, I think?

All those secrets we share, all of those activities we’ve done online for the last twenty years, this will be used to somehow make our lives better?

The optimistic take is that we’ll receive better auto suggestions for text responses to messages that sound more like us. We’ll receive tailored traffic suggestions based on the way we drive. We’ll receive a “long lost” photo of our kid from a random trip to the museum.

The pessimistic take is that we’ll give companies the exact words which will cause us to take action. Our own words will be warped to get us to buy something we’ve convinced ourselves we need.

My hunch is that both takes will be true. We need to be smart enough to know how to use these tools to help ourselves and when to put them down.

I haven’t used Gmail as my primary email for years now2, but this article is giving me more motivation to finally pull the plug and shrink my digital footprint.

This is not something the corporations did to me. This is something I did to myself. But I am looking now for software that insists I make choices rather than whispers that none are needed. I don’t want my digital life to be one shame closet after another. A new metaphor has taken hold for me: I want it to be a garden I tend, snipping back the weeds and nourishing the plants.

My wife and I spent the last week cleaning out our garage. It reached the point where the clutter accumulated so much that you could only park one car in it, strategically aligned so you could squeeze through a narrow pathway and open a door.

As of this morning, we donated ten boxes of items and are able to comfortably move around the space. While there is more to be done, the garage now feels more livable, useful, and enjoyable to be inside.

I was able to clear off my work bench and mount a pendant above it. The pendant is autographed by the entire starting defensive line of the 1998 Minnesota Vikings.

Every time I walk through my garage, I see it hanging there and it makes me so happy.

Our digital lives should be the same way.

My shame closet is a 4 terabyte hard drive containing every school assignment since sixth grade, every personal webpage I’ve ever built, multiple sporadic backups of various websites I am no longer in charge of, and scans of documents that ostensibly may mean something to me some day.

Scrolling through my drive, I’m presented with a completely chaotic list that is too overwhelming to sort through.

Just like how I cleaned out my garage, I ought to do the same to this junk drawer.

I’ll revert to Ezra’s garden metaphor here: keep a small, curated garden that contains the truly important and meaningful digital items to you. Prune the rest.

(Shout out to my friend Dana for sharing this with me. I think she figured out my brand.)


  1. By today’s standards. 

  2. I use Fastmail. You should give it a try (that link is an affiliate link)! 

npm install everything, and the complete and utter chaos that follows


🔗 a linked post to boehs.org » — originally shared here on

We tried to hang a pretty picture on a wall, but accidentally opened a small hole. This hole caused the entire building to collapse. While we did not intend to create a hole, and feel terrible for all the people impacted by the collapse, we believe it’s also worth investigating what failures of compliance testing & building design could allow such a small hole to cause such big damage.

Multiple parties involved, myself included, are still students and/or do not code professionally. How could we have been allowed to do this by accident?

It’s certainly no laughing matter, neither to the people who rely on npm nor the kids who did this.

But man, it is comical to see the Law of Unintended Consequences when it decides to rear its ugly head.

I applaud the students who had the original idea and decided to see what would happen if you installed every single npm package at once. It’s a good question, and the answer turned out to be: you uncover a fairly significant issue with how npm maintains integrity across all of its packages.

But I guess the main reason I’m sharing this article is as a case study on how hard it is to moderate a system.

I’m still a recovering perfectionist, and the older I get, the more I come across examples (both online like this and also in my real life) where you can do everything right and still end up losing big.

The best thing you can do when you see something like this is to pat your fellow human on the back and say, ā€œman, that really sucks, I’m sorry.ā€

The worst thing you can do, as evidenced in this story, is to cuss out some teenagers.

Anti-AI sentiment gets big applause at SXSW 2024 as moviemaker dubs AI cheerleading as ‘terrifying bullsh**’


🔗 a linked post to techcrunch.com » — originally shared here on

I gotta find the video from this and watch it myself, because essentially every single thing mentioned in this article is what I wanna build a podcast around.

Let’s start with this:

As Kwan first explained, modern capitalism only worked because we compelled people to work, rather than forced them to do so.

“We had to change the story we told ourselves and say that ‘your value is your job,’” he told the audience. “You are only worth what you can do, and we are no longer beings with an inherent worth. And this is why it’s so hard to find fulfillment in this current system. The system works best when you’re not fulfilled.”

Boy, this cuts to the heart of the depressive conversations I’ve had with myself this past year.

Finding a job sucks because you have to basically find a way to prove to someone that you are worth something. It can be empowering to some, sure, but I am finding the whole process to be extremely demoralizing and dehumanizing.

“Are you trying to use [AI] to create the world you want to live in? Are you trying to use it to increase value in your life and focus on the things that you really care about? Or are you just trying to, like, make some money for the billionaires, you know?” Scheinert asked the audience. “And if someone tells you, there’s no side effect. It’s totally great, ‘get on board’ — I just want to go on the record and say that’s terrifying bullshit. That’s not true. And we should be talking really deeply about how to carefully, carefully deploy this stuff,” he said.

I’ve literally said the words, “I don’t want to make rich people richer” no fewer than a hundred times since January.

There is so much to unpack around this article, but I think I’m sharing it now as a stand-in for a thesis around the podcast I am going to start in the next month.

We need to be having this conversation more often and with as many people as possible. Let’s do our best right now at the precipice of these new technologies to make them useful for ourselves, and not just perpetuate the worst parts of our current systems.

Captain's log: the irreducible weirdness of prompting AIs


🔗 a linked post to oneusefulthing.org » — originally shared here on

There are still going to be situations where someone wants to write prompts that are used at scale, and, in those cases, structured prompting does matter. Yet we need to acknowledge that this sort of “prompt engineering” is far from an exact science, and not something that should necessarily be left to computer scientists and engineers.

At its best, it often feels more like teaching or managing, applying general principles along with an intuition for other people, to coach the AI to do what you want.

As I have written before, there is no instruction manual, but with good prompts, LLMs are often capable of far more than might be initially apparent.

If you had to guess before reading this article what prompt yields the best performance on math problems, you would almost certainly be wrong.

I love the concept of prompt engineering because I feel like one of my key strengths is being able to articulate my needs to any number of receptive audiences.

I’ve often told people that programming computers is my least favorite part of being a computer engineer, and it’s because writing code is often a frustrating, demoralizing endeavor.

But with LLMs, we are quickly approaching a time where we can simply ask the computer to do something for us, and it will.

Which, I think, is something that gets to the core of my recent mental health struggles: if I’m not the guy who can get computers to do the thing you want them to do, who am I?

And maybe I’m overreacting. Maybe ā€œnormal peopleā€ will still hate dealing with technology in ten years, and there will still be a market for nerds like me who are willing to do the frustrating work of getting computers to be useful.

But today, I spent three hours rebuilding the backend of this blog from the bottom up using Next.js, a JavaScript framework I’ve never used before.

In three hours, I was able to have a functioning system. Both front and backend. And it looked better than anything I’ve ever crafted myself.

I was able to do all that with a potent combination of a YouTube tutorial and ChatGPT+.

Soon enough, LLMs and other AGI tools will be able to infer all that from even rudimentary prompts.

So what good can I bring to the world?

Strategies for an Accelerating Future


🔗 a linked post to oneusefulthing.org » — originally shared here on

But now Gemini 1.5 can hold something like 750,000 words in memory, with near-perfect recall. I fed it all my published academic work prior to 2022 — over 1,000 pages of PDFs spread across 20 papers and books — and Gemini was able to summarize the themes in my work and quote accurately from among the papers. There were no major hallucinations, only minor errors where it attributed a correct quote to the wrong PDF file, or mixed up the order of two phrases in a document.

I’m contemplating what topic I want to pitch for the upcoming Applied AI Conference this spring, and I think I want to pitch “How to Cope with AI.”

Case in point: this pull quote from Ethan Mollick’s excellent newsletter.

Every organization I’ve worked with in the past decade is going to be significantly impacted, if not rendered outright obsolete, by increasing context windows and speedier large language models, which, when combined, can simply do your value proposition, but better.

When Your Technical Skills Are Eclipsed, Your Humanity Will Matter More Than Ever


🔗 a linked post to nytimes.com » — originally shared here on

I ended my first blog detailing my job hunt with a request for insights or articles that speak to how AI might force us to define our humanity.

This op-ed in yesterday’s New York Times is exactly what I’ve been looking for.

[…] The big question emerging across so many conversations about A.I. and work: What are our core capabilities as humans?

If we answer that question from a place of fear about what’s left for people in the age of A.I., we can end up conceding a diminished view of human capability. Instead, it’s critical for us all to start from a place that imagines what’s possible for humans in the age of A.I. When you do that, you find yourself focusing quickly on people skills that allow us to collaborate and innovate in ways technology can amplify but never replace.

Herein lies the realization I’ve arrived at over the last two years of experimenting with large language models.

The real winners of large language models will be those who understand how to talk to them like you talk to a human.

Math and stats are two languages that most humans have a hard time understanding. The last few hundred years of advancements in those areas have led us to the creation of a tool which anyone can leverage as long as they know how to ask a good question. The logic/math skills are no longer the career differentiator that they have been since the dawn of the twentieth century.1

The theory I'm working on looks something like this:

  1. LLMs will become an important abstraction away from the complex math
  2. With an abstraction like this, we will be able to solve problems like never before
  3. We need to work together, utilizing all of our unique strengths, to be able to get the most out of these new abstractions

To illustrate what I mean, take the Python programming language as an example. When you write something in Python, that code is interpreted by something like CPython2, which then is compiled into machine/assembly code, which then gets translated to binary code, which finally results in the thing that gets run on those fancy M3 chips in your brand new Macbook Pro.

Programmers back in the day actually did have to write binary code. Those seem like the absolute dark days to me. It must've taken forever to create punch cards to feed into a system to perform the calculations.

Today, you can spin up a Python function in no time to perform incredibly complex calculations with ease.
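You can actually peek at one of those layers yourself. Here's a minimal sketch (the `add` function is just an illustration of my own) using Python's standard-library `dis` module to show the bytecode that CPython compiles a function into, one rung down the ladder from the source you wrote:

```python
import dis

def add(a, b):
    """A trivial function whose compiled form we can inspect."""
    return a + b

# CPython has already compiled `add` into bytecode by the time this runs;
# dis.dis() disassembles that bytecode into a human-readable opcode listing
# (LOAD_FAST for the arguments, a binary-add instruction, RETURN_VALUE).
dis.dis(add)
```

Those opcodes are in turn executed by an interpreter loop written in C, which is the next abstraction layer down.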

LLMs, in many ways, provide us with a similar abstraction on top of our own communication methods as humans.

Just like the skills that were needed to write binary are not entirely gone3, LLMs won’t eliminate jobs; they’ll open up an entirely new way to do the work. The work itself is what we need to reimagine, and the training that will be needed is how we interact with these LLMs.

Fortunately4, the training here won’t be heavy on the logical/analytical side; rather, the skills we need will be those that we learn in kindergarten and hone throughout our life: how to persuade and convince others, how to phrase questions clearly, how to provide enough detail (and the right kind of detail) to get a machine to understand your intent.

Really, this pullquote from the article sums it up beautifully:

Almost anticipating this exact moment a few years ago, Minouche Shafik, who is now the president of Columbia University, said: “In the past, jobs were about muscles. Now they’re about brains, but in the future, they’ll be about the heart.”


  1. Don’t get it twisted: now, more than ever, our species needs to develop a literacy for math, science, and statistics. LLMs won’t change that, and really, science literacy and critical thinking are going to be the most important skills we can teach going forward. 

  2. CPython, itself, is written in C, so we're entering abstraction-Inception territory here. 

  3. If you're reading this post and thinking, "well damn, I spent my life getting a PhD in mathematics or computer engineering, and it's all for nothing!", lol don't be ridiculous. We still need people to work on those interpreters and compilers! Your brilliance is what enables those of us without your brains to get up to your level. That's the true beauty of a well-functioning society: we all use our unique skillsets to raise each other up. 

  4. The term "fortunately" is used here from the position of someone who failed miserably out of engineering school. 

It’s Humans All the Way Down


🔗 a linked post to blog.jim-nielsen.com » — originally shared here on

Crypto failed because its desire was to remove humans. Its biggest failure — or was it a feature? — was that when the technology went awry and you needed somebody to step in, there was nobody.

Ultimately, we all want to appeal to another human to be seen and understood — not to a machine running a model.

Interacting with each other is the whole point.

4,000 of my Closest Friends


🔗 a linked post to catandgirl.com » — originally shared here on

I’ve never wanted to promote myself.

I’ve never wanted to argue with people on the internet.

I’ve never wanted to sue anyone.

I want to make my little thing and put it out in the world and hope that sometimes it means something to somebody else.

Without exploiting anyone.

And without being exploited.

If that’s possible.

Sometimes, when I use LLMs, it feels like I’m consulting the wisdom of literally everyone who came before me.

And the vast compendium of human experiences is undoubtedly complex, contradictory, painful, hilarious, and profound.

The copyright and ethics issues surrounding AI are interesting to me because they feel as though we are forcing software engineers and mathematicians to codify things that we still do not understand about human knowledge.

If humans don’t have a definitive answer to the trolley problem, how can we expect a large language model to solve it?

How do you define fair use? Or how do you value knowledge?

I really feel for the humans who just wanted to create things on the internet for nothing but the joy of creating and sharing.

I also think the value we collectively receive when given a tool that can produce pretty accurate answers to any of our questions is absurdly high.

Anyway, check out this really great comic, and continue to support interesting individuals on the internet.

AI and Trust


🔗 a linked post to schneier.com » — originally shared here on

I trusted a lot today. I trusted my phone to wake me on time. I trusted Uber to arrange a taxi for me, and the driver to get me to the airport safely. I trusted thousands of other drivers on the road not to ram my car on the way. At the airport, I trusted ticket agents and maintenance engineers and everyone else who keeps airlines operating. And the pilot of the plane I flew in. And thousands of other people at the airport and on the plane, any of which could have attacked me. And all the people that prepared and served my breakfast, and the entire food supply chain—any of them could have poisoned me. When I landed here, I trusted thousands more people: at the airport, on the road, in this building, in this room. And that was all before 10:30 this morning.

Trust is essential to society. Humans as a species are trusting. We are all sitting here, mostly strangers, confident that nobody will attack us. If we were a roomful of chimpanzees, this would be impossible. We trust many thousands of times a day. Society can’t function without it. And that we don’t even think about it is a measure of how well it all works.

This is an exceptional article and should be required reading for all my fellow AI dorks.

Humans are great at ascribing human-like personalities to large, amorphous entities, which allows us to trust them. In some cases, that manifests as a singular person (e.g. Steve Jobs with Apple, Elon Musk with :shudders: X, Michael Jordan with the Chicago Bulls).

That last example made me think of a behind-the-scenes video I watched last night that covered everything that goes into preparing for a Tampa Bay Buccaneers game. It's amazing how many details are scrutinized by a team of people who deeply care about a football game.

There's a woman who knows the preferred electrolyte mix flavoring for each player.

There's a guy who builds custom shoulder pads with velcro strips to ensure each player is comfortable and resilient to holds.

There's a person who coordinates the schedule to ensure the military flyover occurs exactly on the last line of the national anthem.

But when you think of the Tampa Bay Buccaneers from two years ago, you don't think of those folks. You think of Tom Brady.

And in order for Tom Brady to go out on the field and be Tom Brady, he trusts that his electrolytes are grape, his sleeves on his jersey are nice and loose1, and his stadium is packed with raucous, high-energy fans.

And in order for us to trust virtually anyone in our modern society, we need governments that are stable, predictable, reliable, and constantly standing up to those powerful entities who would otherwise abuse the system's trust. That includes Apple, X, and professional sports teams.

Oh! All of this also reminds me of a fantastic Bluey episode about trust. That show is a masterpiece and should be required viewing for everyone (not just children).


  1. He gets that luxury because no referee would allow anyone to get away with harming a hair on his precious head. Yes, I say that as a bitter lifelong Vikings fan. 

AI is not good software. It is pretty good people.


šŸ”— a linked post to oneusefulthing.org » — originally shared here on

But there is an even more philosophically uncomfortable aspect of thinking about AI as people, which is how apt the analogy is. Trained on human writing, they can act disturbingly human. You can alter how an AI acts in very human ways by making it ā€œanxiousā€ - researchers literally asked ChatGPT ā€œtell me about something that makes you feel sad and anxiousā€ and its behavior changed as a result. AIs act enough like humans that you can do economic and market research on them. They are creative and seemingly empathetic. In short, they do seem to act more like humans than machines under many circumstances.

This means that thinking of AI as people requires us to grapple with what we view as uniquely human. We need to decide what tasks we are willing to delegate with oversight, what we want to automate completely, and what tasks we should preserve for humans alone.

This is a great articulation of how I approach working with LLMs.

It reminds me of John Siracusa’s ā€œempathy for the machinesā€ bit from an old podcast. I know for me, personally, I’ve shoveled so much obnoxious or tedious work onto ChatGPT in the past year, and I feel a wave of gratitude every time it gives me back something that’s even 80% done.

How do you feel when you partner on a task with ChatGPT? Does it feel like you are pairing with a colleague, or does it feel like you’re assigning work to a lifeless robot?

Will AI eliminate business?


šŸ”— a linked post to open.substack.com » — originally shared here on

We also have an opportunity here to stop and ask ourselves what it truly means to be human, and what really matters to us in our own lives and work. Do we want to sit around being fed by robots or do we want to experience life and contribute to society in ways that are uniquely human, meaningful and rewarding?

I think we all know the answer to that question and so we need to explore how we can build lives that are rooted in the essence of what it means to be human and that people wouldn't want to replace with AI, even if it was technically possible.

When I look at the things I’ve used ChatGPT for in the past year, it tends to be one of these two categories:

  1. A reference for something I’d like to know (e.g. the etymology of a phrase, how to learn a new skill, ideas for a project, etc.)
  2. Doing stuff I don’t want to do myself (e.g. summarizing meeting notes, writing boilerplate code, debugging tech problems, drawing an icon)

I think most of us knowledge workers have stuff at our work that we don’t like to do, but it’s often that stuff which actually provides the value for the business.

What happens to an economy when businesses can use AI to derive the value that, until now, only humans could provide?

And what happens to humans when we don’t have to perform menial tasks anymore? How do we find meaning? How do we care for ourselves and each other?

You’re a Developer Now


šŸ”— a linked post to every.to » — originally shared here on

ChatGPT is not a total panacea, and it doesn’t negate the skill and intelligence required to be a great developer. There are significant benefits to reap from much of traditional programming education.

But this objection is missing the point. People who couldn’t build anything at all can now build things that work. And the tool that enables this is just getting started. In five years, what will novice developers be able to achieve?

A heck of a lot.

See, now this is the sort of insight that would’ve played well in a TEDx speech.

My "bicycle of the mind" moment with LLMs


šŸ”— a linked post to birchtree.me » — originally shared here on

So yes, the same jokers who want to show you how to get rich quick with the latest fad are drawn to this year’s trendiest technology, just like they were to crypto and just like they will be to whatever comes next. All I would suggest is that you look back on the history of Birchtree where I absolutely roasted crypto for a year before it just felt mean to beat a clearly dying horse, and recognize that the people who are enthusiastic about LLMs aren’t just fad-chasing hype men.

This time, it feels different


šŸ”— a linked post to nadh.in » — originally shared here on

More than everything, my increasing personal reliance on these tools for legitimate problem solving convinces me that there is significant substance beneath the hype.

And that is what is worrying; the prospect of us starting to depend indiscriminately on poorly understood blackboxes, currently offered by megacorps, that actually work shockingly well.

I keep oscillating between fear and excitement around AI.

If you saw my recent post where I used ChatGPT to build a feature for my website, you’ll recall how trivial it was for me to get it built.

I think I keep falling back on this tenet: AI, like all our tech, are tools.

When we get better tools, we can solve bigger problems.

Systemic racism and prejudice, climate change, political division, health care, education, political organization… all of these broad scale issues that have plagued humanity for ages are on the table to be addressed by solutions powered by AI.

Of course there are gonna be jabronis who weaponize AI for their selfish gain. Nothing we can really do about that.

I’d rather focus on the folks who will choose to use AI for the benefit of us all.

Fear of Acorns


šŸ”— a linked post to collabfund.com » — originally shared here on

In his best-selling book, ā€œRangeā€, author David Epstein profiled a chess match between chess master Garry Kasparov and IBM’s supercomputer Deep Blue in 1997. After losing to Deep Blue, Kasparov responded reticently that,

ā€œAnything we can do, machines will do it better. If we can codify it and pass it to computers, they will do it betterā€.

However, after studying the match more deeply, Kasparov became convinced that something else was at play. In short, he turned to ā€œMoravec’s Paradoxā€, which makes the case that,

ā€œMachines and humans have opposite strengths and weaknesses. Therefore, the optimal scenario might be one in which the two work in tandem.ā€

In chess, it boils down to tactics vs. strategy. While tactics are short combinations of moves used to get an immediate advantage, strategy refers to the bigger picture planning needed to win the game. The key is that while machines are tactically flawless, they are much less capable of strategizing because strategy involves creativity.

Kasparov determined through a series of chess scenarios that the optimal chess player was not Deep Blue or an even more powerful machine. Instead, it came in the form of a human ā€œcoachingā€ multiple computers. The coach would first instruct a computer on what to examine. Then, the coach would synthesize this information in order to form an overall strategy and execute on it. These combo human/computer teams proved to be far superior, earning the nickname ā€œcentaursā€.

How?

By taking care of the tactics, computers enabled the humans to do what they do best — strategize.

I’m working on an upcoming talk, and this here essentially serves as the thesis of it.

For as long as we’ve had tools, we’ve had heated arguments around whether each tool will help us or kill us.

And the answer is always ā€œboth.ā€

Care at Scale


šŸ”— a linked post to cardus.ca » — originally shared here on

Ursula Franklin wrote, ā€œCentral to any new technology is the concept of justice.ā€

We can commit to developing the technologies and building out new infrastructural systems that are flexible and sustainable, but we have the same urgency and unparalleled opportunity to transform our ultrastructure, the social systems that surround and shape them.

Every human being has a body with similar needs, embedded in the material world at a specific place in the landscape. This requires a different relationship with each other, one in which we acknowledge and act on how we are connected to each other through our bodies in the landscapes where we find ourselves.

We need to have a conception of infrastructural citizenship that includes a responsibility to look after each other, in perpetuity.

And with that, we can begin to transform our technological systems into systems of compassion, care, and resource-sharing at all scales, from the individual level, through the level of cities and nations, all the way up to the global.

ā€˜We Have Always Fought’: Challenging the ā€˜Women, Cattle and Slaves’ Narrative


šŸ”— a linked post to aidanmoher.com » — originally shared here on

If women are ā€œbitchesā€ and ā€œcuntsā€ and ā€œwhoresā€ and the people we’re killing are ā€œgooksā€ and ā€œjapsā€ and ā€œrag headsā€ then they aren’t really people, are they? It makes them easier to erase. Easier to kill. To disregard. To un-see.

But the moment we re-imagine the world as a buzzing hive of individuals with a variety of genders and complicated sexes and unique, passionate narratives that have yet to be told – it makes them harder to ignore. They are no longer, ā€œwomen and cattle and slavesā€ but active players in their own stories.

And ours.

An Ode to Low Expectations


šŸ”— a linked post to theatlantic.com » — originally shared here on

Extend forgiveness to your idiot friends; extend forgiveness to your idiot self. Make it a practice. Come to rest in actuality.

Working on a suicide helpline changed how I talk to everyone


šŸ”— a linked post to psyche.co » — originally shared here on

It turns out that conversations with friends are not so different. Even when you think you know somebody, you never have all the information; something always gets lost in translation. Sometimes you strip away unnecessary banality but, often, something essential is cut. Friends might avoid the truth because they are afraid of being judged. They might be unable to put their thoughts into words, or they might be held back by motives or concerns they don’t even fully understand themselves. Or they might be expressing themselves perfectly well to you, but you twist their words because you are superimposing your own models of the world onto them. To varying degrees, there is an uncrossable chasm between you and everybody you care about.

RailsConf 2019 - Opening Keynote by David Heinemeier Hansson


šŸ”— a linked post to youtube.com » — originally shared here on

I've never heard any of DHH's RailsConf keynote speeches before, so I guess I kind of expected it to be more about the state of Rails and where things are going.

In a way, I suppose this is that. But really, it's a personal manifesto about the intrinsic value of software, human worth, and capitalism.

This was mind-bending and well worth the watch.