
eternal woodstock


šŸ”— a linked post to bnet.substack.com » — originally shared here on

As people keep trying to make Twitter 2 happen, we are now in a period that I'm calling Eternal Woodstock — every few weeks, users flock en masse to new platforms, rolling around in the mud, getting high on Like-dopamine, hoping they can keep the transgressive, off-kilter meme magic going just a little longer, even though social-media culture has already been fully hollowed out and commercialized.

I haven’t signed up for any of the new Twitter clones. I do have a Mastodon account that I created back before Twitter got terrible, but aside from a futile one-week attempt to get into it, it too has sat dormant.

Maybe this is just part of progressing through life, progressing through society and culture.

It’s something I’ve noticed now that I have kids: as a kid, you are extremely tuned into social status. Everyone else listens to the ZOMBIES 3 soundtrack? Now you have to be into it. Your little brother likes it now? Now you have to be too good for it.

But for that brief moment, you feel like you’re ahead of the game. You’re a tastemaker.

The times when I’ve genuinely been the happiest in my life have been when I’ve done something just for myself. Whether it made those around me impressed, weirded out, or indifferent was of zero consequence to me.

The short list of things I can think of that fit that bill: this blog (which has existed in some shape since I was in sixth grade), making clips for television production class, learning something new, 90s/00s pro wrestling, running, and playing the guitar.

It’s only when I start to look around at others that I start to get depressed.

And maybe that’s a key insight into why I feel like I feel right now. I don’t have a job at the moment. At my age, your social status is determined by things like the vacations you go on, the home you have, and the title you hold.

But really, none of that stuff matters. What matters is the stuff that brings you joy.

It just so happens that those things, in fact, do bring me joy. The vacations I’ve gone on in the past 12 months have been the happiest I’ve been in ages. I spent all morning deep cleaning several rooms in my house, and it feels incredible.1 Building software and solving problems for people is what makes me happy, not being a director of this or a chief whatever.

I guess what I’m trying to say is: I should stop feeling guilty about not posting a whole lot on social media.

My home is this website. People can come here if they wanna hang out.

Sure, I’ll poke my head up and see what’s going on with others around me on occasion, but I don’t need to feel compelled to chase the feelings that come alongside taste-making.

Those feelings are lightning in a bottle, and chasing them ultimately leads me to my deepest forms of depression.


  1. Even though I know the kids are gonna mess it up in roughly 4 minutes, that’s okay. It’s their house, too.  

Continue to the full article


JavaScript Bloat in 2024


šŸ”— a linked post to tonsky.me » — originally shared here on

It’s not just about download sizes. I welcome high-speed internet as much as the next guy. But code — JavaScript — is something that your browser has to parse, keep in memory, execute. It’s not free. And these people talk about performance and battery life...

Call me old-fashioned, but I firmly believe content should outweigh code size. If you are writing a blog post for 10K characters, you don’t need 1000Ɨ more JavaScript to render it.

I’ll be honest: I’m a bad modern front-end dev.

I only have a limited amount of experience with frameworks like Vue and React.

But this blog post gets to the reason why: massive JavaScript framework bloat is often not necessary.

As you can see in this post, many of these incredibly basic sites that display text (like Medium and Substack) still require 4 MB of JavaScript! That’s insane!
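If you’re curious how heavy a given page is, here’s a quick sketch you can run yourself. It uses the standard Resource Timing API; note that transferSize reads as 0 for cross-origin scripts unless the server opts in with a Timing-Allow-Origin header, so treat the result as a lower bound (and drop the type cast if you paste it into the console as plain JavaScript).

    // Tally the JavaScript transferred by the current page.
    // transferSize is 0 for cross-origin scripts without a
    // Timing-Allow-Origin header, so this is a lower bound.
    const resources = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
    const jsBytes = resources
      .filter((r) => r.initiatorType === "script" || r.name.endsWith(".js"))
      .reduce((sum, r) => sum + (r.transferSize || 0), 0);
    console.log(`JavaScript transferred: ${(jsBytes / 1024 / 1024).toFixed(2)} MB`);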

It’s like the old axiom goes: use the right tool for the job.

And maybe think twice before slapping a thousand marketing pixels on your landing page. šŸ˜…

Continue to the full article


Spoiler Alert: It's All a Hallucination


šŸ”— a linked post to community.aws » — originally shared here on

LLMs treat words as referents, while humans understand words as referential. When a machine ā€œthinksā€ of an apple (such as it does), it literally thinks of the word apple, and all of its verbal associations. When humans consider an apple, we may think of apples in literature, paintings, or movies (don’t trust the witch, Snow White!) — but we also recall sense-memories, emotional associations, tastes and opinions, and plenty of experiences with actual apples.

So when we write about apples, of course humans will produce different content than an LLM.

Another way of thinking about this problem is as one of translation: while humans largely derive language from the reality we inhabit (when we discover a new plant or animal, for instance, we first name it), LLMs derive their reality from our language. Just as a translation of a translation begins to lose meaning in literature, or a recording of a recording begins to lose fidelity, LLMs’ summaries of a reality they’ve never perceived will likely never truly resonate with anyone who’s experienced that reality.

And so we return to the idea of hallucination: content generated by LLMs that is inaccurate or even nonsensical. The idea that such errors are somehow lapses in performance is on a superficial level true. But it gestures toward a larger truth we must understand if we are to understand the large language model itself — that until we solve its perception problem, everything it produces is hallucinatory, an expression of a reality it cannot itself apprehend.

This is a helpful way to frame some of the fears I’m feeling around AI.

By the way, this came from a new newsletter called VectorVerse that my pal Jenna Pederson launched recently with David Priest. You should give it a read and consider subscribing if you’re into these sorts of AI topics!

Continue to the full article



Beat anxiety with the most addictive experience on Earth


šŸ”— a linked post to youtube.com » — originally shared here on

Really straightforward advice here:

  1. Write down ten things that you’re grateful for, and write each one three times. (This points the brain toward good things that have already happened, so we take in less of the negative)
  2. Practice mindfulness for 11 minutes a day. (This is proven to calm down your nervous system and make you less emotionally reactive)
  3. Exercise 20-40 minutes until the voice in your head gets quiet and your lungs open up. (This releases nitric oxide and also resets the nervous system)

I think I can incorporate the gratefulness piece into my journaling habit I’ve developed.

I have never been able to get a mindfulness practice to stick, but hey, maybe that’s something I can try to start tomorrow.

Exercise has been, admittedly, hit or miss these past several months. I do enjoy Apple Fitness workouts, but I miss the runner’s high I used to get with running. I need another goal-based exercise activity to keep myself on track.

But I digress: all of these serve as catalysts to get you into a state of flow, which, as mentioned in this video, is one of the greatest experiences you can ever feel.


Strategies for an Accelerating Future


šŸ”— a linked post to oneusefulthing.org » — originally shared here on

But now Gemini 1.5 can hold something like 750,000 words in memory, with near-perfect recall. I fed it all my published academic work prior to 2022 — over 1,000 pages of PDFs spread across 20 papers and books — and Gemini was able to summarize the themes in my work and quote accurately from among the papers. There were no major hallucinations, only minor errors where it attributed a correct quote to the wrong PDF file, or mixed up the order of two phrases in a document.

I’m contemplating what topic I want to pitch for the upcoming Applied AI Conference this spring, and I think I want to pitch ā€œHow to Cope with AI.ā€

Case in point: this pull quote from Ethan Mollick’s excellent newsletter.

Every organization I’ve worked with in the past decade is going to be significantly impacted, if not rendered outright obsolete, by both increasing context windows and speedier large language models which, when combined, can just flat out do your value proposition, but better.

Continue to the full article


The U.S. Census Is Wrong on Purpose


šŸ”— a linked post to ironicsans.beehiiv.com » — originally shared here on

According to the just-published 2020 U.S. Census data, Monowi now had 2 residents, doubling its population.

This came as a surprise to Elsie, who told a local newspaper, ā€œThen someone’s been hiding from me, and there’s nowhere to live but my house.ā€

It turns out that nobody new had actually moved to Monowi without Elsie realizing. And the census bureau didn’t make a mistake. They intentionally changed the census data, adding one resident.

Today, I learned about the concept of differential privacy.
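The core trick, as I understand it: instead of publishing a true count, you publish the count plus carefully calibrated random noise, so no one can tell from the published figure whether any particular person is in the data. Here’s a minimal sketch of the textbook Laplace mechanism; the Census Bureau’s actual 2020 algorithm is far more elaborate, so take this as an illustration of the idea, not their method.

    // Minimal sketch of the Laplace mechanism, the textbook form of
    // differential privacy (not the Census Bureau's actual algorithm).
    function laplaceNoise(scale: number): number {
      // Inverse-CDF sampling of a Laplace(0, scale) random variable.
      const u = Math.random() - 0.5;
      return -scale * Math.sign(u) * Math.log(1 - 2 * Math.abs(u));
    }

    function privateCount(trueCount: number, epsilon: number): number {
      // A count changes by at most 1 when one person is added or
      // removed (sensitivity = 1), so the noise scale is 1 / epsilon.
      return Math.round(trueCount + laplaceNoise(1 / epsilon));
    }

    // Monowi's true population is 1; the published figure could be 2.
    console.log(privateCount(1, 0.5));

The smaller the epsilon, the noisier (and more private) the published number, which is how a town of one ends up with a phantom second resident.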

Continue to the full article


The 25 Percent Rule for Tackling Technical Debt


šŸ”— a linked post to shopify.engineering » — originally shared here on

Addressing technical debt is rarely about making time for large fixes. It’s about setting strong examples for improving code in our daily work. It’s about celebrating the ability to refactor code to make it easier to work with.

I really like the approach the author takes in categorizing the various types of technical debt one might come across when building software.

The part that I found most enlightening was about yearly debt:

Yearly Debt is the kind where after lots of conversations, someone concludes a rewrite is the only solution. Sometimes a rewrite may be the only solution. Sometimes you may have a Ship of Theseus problem on your hands where you need to slowly and methodically replace parts until the system is the same but different.

Sometimes, though, this isn’t really debt. It’s possible that your dilemma is the result of growth or changing markets. In that respect, calling it debt does a disservice to our success, and distracts from solving the problem of growth.

Brilliant. The ā€œdebtā€ metaphor is apt because not all debt is created equal.

If your town grows into a city, you eventually need to take out debt to build out new infrastructure. You might need to add a few lanes to the main bridge that passes through town. You might need to add more parks or theatres or schools to attract more people.

This incurs debt, for sure, but the payoff comes down the road when you now have an attractive city with amenities that help keep the city vibrant and growing.

The same applies to building software. Sometimes, the algorithm that got you here won’t work for the new customer you want to attract. Sometimes, the frameworks you used to build your mobile app are no longer able to support the hot new feature you want to add.

When framed like that, you no longer call these projects ā€œdebtā€ā€¦ you call them investments.

Investments are different from the sort of debt you incur from re-landscaping your back yard for the third time in four years.

Continue to the full article


Why "Random Access Memories" is a Masterpiece


šŸ”— a linked post to youtube.com » — originally shared here on

This album essentially served as the soundtrack of the early days of the Jed Mahonis Group.

Whenever we needed a day to be heads down, this album would be turned on repeat.

Whenever there was a late night push and we needed the extra motivation to get through it, this album was on repeat.

I came across this video describing the inner turmoil Daft Punk was feeling while making this album, and I couldn’t help but feel the similarities to my present-day situation.

I have long considered this album to be in my top 5 favorites of all time, but this YouTube video made me understand and appreciate it a whole lot more. I should see if there are similar videos for my other favorite albums.

File this video under ā€œreasons I love the internet.ā€


Representation Engineering Mistral-7B an Acid Trip


šŸ”— a linked post to vgel.me » — originally shared here on

In October 2023, a group of authors from the Center for AI Safety, among others, published Representation Engineering: A Top-Down Approach to AI Transparency. That paper looks at a few methods of doing what they call "Representation Engineering": calculating a "control vector" that can be read from or added to model activations during inference to interpret or control the model's behavior, without prompt engineering or finetuning.

Being Responsible AI Safety and INterpretability researchers (RAISINs), they mostly focused on things like "reading off whether a model is power-seeking" and "adding a happiness vector can make the model act so giddy that it forgets pipe bombs are bad."

But there was a lot they didn't look into outside of the safety stuff. How do control vectors compare to plain old prompt engineering? What happens if you make a control vector for "high on acid"? Or "lazy" and "hardworking"? Or "extremely self-aware"? And has the author of this blog post published a PyPI package so you can very easily make your own control vectors in less than sixty seconds? (Yes, I did!)

It’s been a few posts since I got nerdy, but this was a fascinating read and I couldn’t help but share it here (hat tip to the excellent Simon Willison for the initial share!)

The article explores how to compute ā€œcontrol vectorsā€ from a model’s internal activations and add them back in during inference, steering the model’s behavior without prompt engineering or finetuning.

You can use this technique to build a more resilient model that is less prone to jailbreaking and produces more reliable output from a prompt.
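To make the idea concrete, here’s a toy sketch of what a control vector is. This is my own illustration using a simple mean-difference heuristic, not the repeng package’s API or the paper’s exact method: the vector is a direction in activation space, and steering adds a scaled copy of it to a layer’s hidden state during inference.

    // Toy illustration (not the repeng API): derive a crude control
    // vector as the difference between mean hidden states recorded
    // under contrasting prompts (e.g., "happy" vs. "sad").
    function meanVector(activations: number[][]): number[] {
      const dim = activations[0].length;
      const mean = new Array(dim).fill(0);
      for (const act of activations) {
        for (let i = 0; i < dim; i++) mean[i] += act[i] / activations.length;
      }
      return mean;
    }

    function controlVector(posActs: number[][], negActs: number[][]): number[] {
      const pos = meanVector(posActs);
      const neg = meanVector(negActs);
      return pos.map((p, i) => p - neg[i]);
    }

    // "Steering" adds a scaled copy of the vector to a hidden state:
    // positive strength pushes toward the "happy" direction, negative
    // strength toward "sad". No prompt engineering, no finetuning.
    function steer(hidden: number[], vec: number[], strength: number): number[] {
      return hidden.map((h, i) => h + strength * vec[i]);
    }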

Seems like something I should play with myself!

Continue to the full article