
Finding Fulfillment


🔗 a linked post to longform.asmartbear.com » — originally shared here on

It is possible to be empowered to work how you want (Autonomy), to be leveraging your skills and expertise (Mastery), and to be proud of your role in a cause (Purpose / Why), and yet still dislike every day of your existence. More than contentment (ikigai), you need Joy.

Not only is this possible, it is common. There’s the classic example of the startup founder who wakes up six years into the journey, realizing she’s been surreptitiously brought to a boil, burned out, dreading each day, drinking too much “to turn my brain off so I can sleep” but actually because she’s deeply unhappy.

What I enjoyed about this article was the Venn diagram showing you need to find something at the intersection of joy, skill, and need. If you only intersect two of the three, you will fall into a specific trap.

For instance, if you have joy and need but not skill, you fall into "indulgent failure". Or, if you want the recipe for classic burnout, take skill and need but leave out joy.

Continue to the full article


Wind the clock


🔗 a linked post to citationneeded.news » — originally shared here on

Many of us have looked back on historic events where people have bravely stood up against powerful adversaries and wondered, “what would I have done?” Now is your chance to find out. It did not just start with this election; it has been that time for a long time. If you’re just realizing it now, get your ass in gear. Make yourself proud.

Continue to the full article


Please publish and share more


🔗 a linked post to micro.webology.dev » — originally shared here on

Friends, I encourage you to publish more, indirectly meaning you should write more and then share it.

It’d be best to publish your work in some evergreen space where you control the domain and URL. Then publish on masto-sky-formerly-known-as-linked-don and any place you share and comment on.

You don’t have to change the world with every post. You might publish a quick thought or two that helps encourage someone else to try something new, listen to a new song, or binge-watch a new series.

It’s a real gift to see my friends post stuff online. Go post more!

Continue to the full article


Apple Intelligence message summarization is delightfully unhinged

originally shared here on

I got a message from my group chat with my boys. I looked at the Apple Intelligence-generated summary and it said:

(3) Flying too close to the sun, experiencing AI chaos.

I think this is my current favorite implementation of AI because it makes the messaging experience completely unpredictable.

Like, what could that summary actually be about?

What series of three messages could that unravel to?

Apple Intelligence, like most generative AI tools, works really well when the text is predictable. Business cases are perfect for these summarizations, because business talk is relatively predictable (what with its “action items” and “agendas” and whatnot).

A group chat filled with inside jokes is not gonna make sense to an AI unless it’s been trained to do so.

Which has led to one of the best messaging experiences I’ve experienced in decades: trying to guess from the AI-generated summary what the individual texts will actually say.

Some examples:

(3) Tired and wants candy before 8:45am, stuck on a song.

(3) Item unavailable due to legal holding period for used goods.

(9) Kirk on 8th, guest room set up, Sam may forgive Pat, Aldi groceries ordered.


October 2024 Observations

originally shared here on

  • It's amazing how fast my mental health torpedoes when I get a terrible night of sleep.

  • One parenting tip that's helped me cope with big emotions: reframe the situation from "you versus me" to "us versus the problem." It's not "why did you clog the toilet and let poop water overflow over the edge," it's "how can we make it so our toilet doesn't get clogged with an entire roll of toilet paper anymore?" Ask me how I came up with that specific scenario!

  • Focus remains a challenge for me. I would love nothing more than to be able to set a schedule and stick to it, but when I go to sit down and honor the schedule, my body does everything in its power to stop me in my tracks. I can't tell why... maybe there's something more wrong with me, maybe I'm not disciplined enough. Maybe it's something else.

  • Much of my 2024 experience involved adding a new entry to the list of questions that cycle in my inner monologue: "are these feelings just a part of the human experience, or is there a better way to process and cope with them?"

  • There's a quote by Yohji Yamamoto that goes, "Start copying what you love. Copy, copy, copy, copy. And at the end of the copy, you will find yourself." I wrote that down nearly two decades ago, and it's only in the last few months that I've started to understand what it means.

  • My inability to manage tasks is what likely led to me getting sick going into my anniversary trip to New York. Everything is a choice, and sometimes, you gotta be okay with the consequences of the choices you make. I decided to spend an entire afternoon shopping and playing pull tabs at our old neighborhood bar with my wife instead of building graphics for a show I worked on. Then I had to stay up until 11pm building those graphics. Was it worth it? ...absolutely.

  • If you ever want to see a masterclass in problem solving, go sit in the booth during a live television broadcast.

  • Of all the terrifying places on earth, the one which still frightens me the most is sleeping in an unfamiliar bed.

  • I'd like to further explore the intersection of fear and confidence.

  • I spent a few days in New York, and it was fascinating to see the role that selfishness plays in that culture. In the midwest, cooperativeness is a necessity... if you were a dick to your neighbor in the summer, he might not wanna lend you firewood when you're freezing to death in the winter. In New York, everyone's selfishness stands in as a proxy for respect. People are curt not out of hostility, but as if to say "I won't take up any more of your time than I need to."

  • I've known my wife for nearly 14 years now, and it took all this time to feel like I understand her. And now that I do, I love her even more, and I'm so lucky to have been married to her for a decade.

  • I watched the entire "Mr. McMahon" docu-series on Netflix in a couple days (thanks Covid lol), and there was a moment in there where Shawn Michaels was talking about the pushback they were receiving from parents in the late 90s. His philosophy at the time was "if you don't like it, be a parent and ban your kids from watching it." Now that he has kids, he's realizing that you can't exactly do that. We can't shelter our kids from the realities of our society. There's so much good and so much bad that we are exposed to in our lives, and it's our job as parents not to shelter our kids from it, but help them learn how to navigate it.

  • That being said: I loved the attitude era. I loved the campy stories of irreverent punks beating up their bosses, sticking up for themselves, meting out their own brand of vigilante justice. It is (and was) also super messed up. It can be both of those things.

  • In the past, starting something new meant I should make huge, sweeping changes to my entire life. New job? That must also mean new exercise routine, new meal habits, and new hobbies. 36-year-old Tim realizes that I can only bite off so much, and it would be more sustainable to focus on doing well at my new job, and then taking on new challenges once I am settled in.

  • I like to think that if the famous writers throughout history had the same tech as us, they'd have their own RSS feeds and publish their own thoughts frequently on their blogs.

  • There was a moment last week where I was grilling wings and watching my wife try to get our new moped running, my son argue about being outside (it was gorgeous out and I made him get off of Minecraft to enjoy it lol), and my daughter raise hell with the neighbor kids. I was listening to a new album, and reflecting on how much fun I had at work learning new stuff all week. That's when it dawned on me: "I've made it."

  • I don't think my parents and teachers growing up were wrong to focus on teaching us skills we need to survive in this world. I just wish they'd also have taught us how to enjoy things, too.

  • Dreamworks is more than capable of serving as stiff competition to the Disney empire. The Wild Robot was really good! I wish there were more studios cranking out enjoyable, emotionally-charged stories catered toward a family audience in animated form.

  • RuPaul often says, "if you can't love yourself, how in the hell are you gonna love someone else?" I find it difficult to love myself. All the techniques I've used to address my debilitating impostor syndrome involve some variant of tough love, and believe it or not: that never really helped me much. What's working for me currently is talking to myself the way I talk to my kids. Be positive. Focus on what you can change. Be humble and admit when you need help. And be there for others when they need you, too.

  • I've struggled most of my life with feeling art. I look at a painting and can only see it at a purely technical level, as if knowing why an artist used a specific brand of acrylic paint explains the motivation behind the work. I've typically been more fascinated with how people do things rather than what message they're trying to convey. All this to say: I watched Jumanji again for the first time in years last week. I've seen that movie at least two dozen times, and I was legitimately spooked by it. Mid-20s Tim would watch that movie and think "I wonder how they pulled off that stampede shot inside the house?" Early-30s Tim would watch that movie and think, "were people in the 60s so into themselves that they didn't notice a child wandering into an active construction site and retrieving a treasure chest that was there in plain sight?" This time, I just felt myself as each of the characters. How it would feel to lose my parents in a car accident. How it would feel as a busy aunt who suddenly has to deal with two children. How it would feel to be a hunter whose only motivation is to murder the person who rolled the dice.

  • I was raised to understand that love is showing someone how to avoid mistakes. As I reflect on that, I'd amend that belief to say that love is helping someone learn from their own mistakes and being there for them with firm support when they do screw up.

  • Alexi Pappas once said, "Whenever you’re chasing a big dream, you’re supposed to feel good a third of the time, okay a third of the time, and crappy or not great a third of the time, and if you feel roughly in those ratios, it means you are in fact chasing a dream." I've been slowly working my way back into running shape, and I can confirm that I feel that way in those ratios.

  • Running at 5:30a means I get to wander through my neighborhood and see everyone’s festive and spooky Halloween decorations instead of everyone’s political signs.

  • One of the hardest aspects of being a software engineer is that the implementation details of your job change all the time. Did you know that in Ruby, if you pass keyword arguments using variables with the same names as the keywords the method expects (like a_method(property_1: property_1, foo: foo)), you can shorten it to a_method(property_1:, foo:)? I learned that this week!

  • If art is finding a way to express what is rattling around in your head to others, then maybe writing code is actually my artistic expression.

  • When it comes to empathy, I've never struggled with the "getting into someone's mind" part. What I've struggled with is accepting that the other person's point of view is valid. And I'm still working on that.
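A minimal sketch of that Ruby keyword-argument shorthand from the list above (it requires Ruby 3.1 or newer; the method and variable names here are made up purely for illustration):

```ruby
# A hypothetical method that expects keyword arguments.
def describe_user(name:, age:)
  "#{name} is #{age}"
end

name = "Tim"
age = 36

# Long form: repeat each name as both key and value.
long = describe_user(name: name, age: age)

# Ruby 3.1+ shorthand: omit the value when the local variable
# shares the keyword's name.
short = describe_user(name:, age:)

puts long == short # => true
```

Both calls are equivalent; the shorthand just saves you from typing each name twice.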


Down With The System: A Memoir (of sorts)


🔗 a linked post to amzn.to » — originally shared here on

System of a Down holds a very special place in my heart.

I was in seventh grade when Toxicity was released. I remember sitting in church on Good Friday a few months later and hearing the story of Jesus' execution on the cross. When my pastor, who was reading from the scriptures, got to the part where Jesus shouts, "Father, why have you forsaken me?", my sister and I looked at each other and shared a knowing realization: "oh man, that's from the bible?"

I've been drawn to System mostly because of the instrumentals. Lyrics have not traditionally captured my attention when listening to music.

It took me a few years to discover that all the members of the band were Armenian-Americans. Until reading this book, I didn't give Armenia much thought. The last time I recall giving much consideration to the Middle East in general was in tenth grade world history class. I couldn't have picked out Armenia on a map if you had asked me.

Serj Tankian (the lead singer of System) recently released his memoir, and the title adeptly appends "of sorts" to that noun.

Yes, there are plenty of great stories in this book about Serj's experience with System of a Down, but I'd argue more than 25% of the book serves as a history lesson about Armenia for ignorant Westerners like me.

Even though I'm not much of a lyrics guy, it's hard to miss the humanitarian messages when they're shouted at you by Serj.

Like in "P.L.U.C.K.", from their debut self-titled album1:

Revolution, the only solution,
The armed response of an entire nation,
Revolution, the only solution,
We've taken all your shit, now it's time for restitution.

Or "Cigaro" from Mezmerize2:

We're the regulators that de-regulate
We're the animators that de-animate
We're the propagators of all genocide
Burning through the world's resources
Then we turn and hide

Reading this book made so many of these songs come to life in a new way for me, especially reading of the horrible atrocities committed by the Turkish government. Serj really opens up about some deep, painful generational trauma that explains his drive for justice.

I also loved his reflection on what System means to him today. The closing chapter of the book talks about the 2023 Sad, Sick World show in Las Vegas. He went into the show feeling like System was nothing more than a cover band at this point, but came out of it feeling joy.3 I sure hope I can see them perform live one day.

If you're a System fan like me, I could not recommend this book any more highly. If it weren't for the fact that it's currently 6:15am, I would be blasting them in my house right now.


  1. P.L.U.C.K. is an acronym for "Politically Lying, Unholy, Cowardly Killers," which sort of tells you how they feel about the Turkish government. 

  2. I have a hard time selecting my favorite System album because they all honestly hold a special place in my heart. But with Mezmerize coming out my senior year of high school and "Radio/Video" becoming the theme song to many of my favorite memories of that time, I would be hard pressed to not stick with that one as my favorite. 

  3. Sad, Sick World was put on by the same group that did When We Were Young. During WWWY, I couldn't help but wonder if the artists felt the same joy we did. I'm pleased to read that they did. 

Continue to the full article


The Verge Endorses Kamala Harris


🔗 a linked post to theverge.com » — originally shared here on

Collective action problem is the term political scientists use to describe any situation where a large group of people would do better for themselves if they worked together, but it’s easier for everyone to pursue their own interests. The essential work of every government is making laws that balance the tradeoffs between shared benefits and acceptable restrictions on individual or corporate freedoms to solve this dilemma, and the reason people hate the government is that not being able to do whatever you want all the time is a huge bummer. Speed limits help make our neighborhoods safer, but they also mean you aren’t supposed to put the hammer down and peel out at every stoplight, which isn’t any fun at all.

Every Verge reader is intimately familiar with collective action problems because they’re everywhere in tech. We cover them all the time: making everything charge via USB-C was a collective action problem that took European regulation to finally resolve, just as getting EV makers to adopt the NACS charging standard took regulatory effort from the Biden administration. Content moderation on social networks is a collective action problem; so are the regular fights over encryption. The single greatest webcomic in tech history describes a collective action problem.

The problem is that getting people to set aside their own selfishness and work together is generally impossible even if the benefits are obvious, a political reality so universal it’s a famous Tumblr meme. 

In general, I don’t like to discuss politics on here. I figure if you’re reading my blog, you probably have a vague idea of what my political beliefs are.

But this endorsement of Kamala Harris isn’t just an endorsement of her and her politics. In fact, there is hardly any mention of her in here.

In fact, this endorsement is an endorsement for the concept of democracy.

The key part about Kamala is toward the end, which sums up why I’m gonna vote for her:

In many ways, the ecstatic reaction to Harris is simply a reflection of the fact that she is so clearly trying. She is trying to govern America the way it’s designed to be governed, with consensus and conversation and effort. With data and accountability, ideas and persuasion. Legislatures and courts are not deterministic systems with predictable outputs based on a set of inputs — you have to guide the process of lawmaking all the way to the outcomes, over and over again, each time, and Harris seems not only aware of that reality but energized by it. More than anything, that is the change a Harris administration will bring to a country exhausted by decades of fights about whether government can or should do anything at all.

People love to say “the government is broken”, but often fail to ask any follow-up questions. You know, like "why is it broken" and "how can we fix it?"

When I see something broken, my first instinct is to figure out how it got broken in the first place. "Broken", by definition, implies there is a state of "functioning." If we want to "fix" it, we need to agree on what "functioning" means.1

If we agree that our country is broken, then we need to agree on a vision for what a functioning country is.

When building software, there are plenty of excuses we could make as to why our system is broken. A junior engineer might blame the users. They're dumb, they're using it wrong, they don't understand the elegance of the solution we've built for them.

As you get more senior, you start to realize just how reductive and silly those arguments are. We can't control our users, and we will likely never understand them. But we can perform user testing and spend time with our customers. We learn how they actually use the software. We dig to uncover other problems they have so we can adjust our software to meet those needs.

I think what bothers me about our current political climate is that we are quick to jump to these reductive ideas which are proven to be ineffective. We have to work together and keep trying new things.

We're better than this. We all need each other, often more than we are willing to admit.

It’s a lesson I’m trying to impart to my kids. They constantly fight with each other, their feelings pouring out of them like a fire hydrant when they don’t get what they want.

I get it. It’s like The Rolling Stones said: “you can’t always get what you want, but if you try, you’ll find you’ll get what you need.”

We need America. We need to come together and curb our natural tendency toward hostility against anything that is different.

But even if you’re apolitical, I encourage you to read this excellent essay. It makes me proud to be an American at a time when it feels dangerous to be proud.


  1. This is probably why I enjoy software engineering: there is almost always a clear definition of "functioning" and logical reason why a system is "broken", and as a result, there is almost always a logical solution to keep the system working for as many people as possible.  

Continue to the full article


Fantastic Builders and Where to Find Them


🔗 a linked post to builders.genagorlin.com » — originally shared here on

A need to “prove oneself” to internalized authority figures leads to things like climbing conventional status ladders, or staying in an unhappy marriage, or piling up as much money as possible to preserve the appearance of having “made it”.

What motivated Esther to do things like take a receptionist job at a film company, pick up her life and move to San Francisco, and risk her savings on her startup was something far more personal and idiosyncratic: a conception of the interests she wanted to explore, the people she wanted to meet, the products she wanted to create, the life she envisioned and wanted to build for herself—and, yes, the proof that she really could count on herself to do it.

This is super inspiring on so many levels.

It seems like life becomes a little more palatable once you figure out who you are and start leaning into that.

Continue to the full article


Every map of China is wrong. And this is intentional



🔗 a linked post to medium.com » — originally shared here on

I work in a climate tech startup, and although I don’t directly manipulate geospatial data in my role (at least not at the moment), I’m very interested in this aspect of our work. I came across a seemingly innocuous message on Slack about how we had less information for a particular carbon offset project because of the China GPS shift problem.

This naturally piqued my interest — I didn’t know Chinese geospatial data would be any different from the rest of the world. Hadn’t this been one of the few areas where we all agreed about the right way to do things?

The more I delved into this topic, the more interesting stuff I found, and the more it made sense from a Chinese perspective.

I know I’ve historically railed on the frustrations engineers face when dealing with time zones, so tickle me surprised when I saw Naz link to this post giving me a new esoteric cause to rail on: GPS shift!

Continue to the full article


Demystifying Artificial Intelligence in Nonprofits - Webinar Recap

originally shared here on

Demystifying AI for Nonprofits - Practical Use Cases, Ethical Concerns, and How to Get Started

I recently gave a talk about artificial intelligence that was specifically catered to those in the nonprofit world. Here's a recap of the talk using Simon Willison's annotated talk format.


Introduction - AI is a tool for everybody.

I firmly believe that AI is a tool for everyone.

I’ve been immersed in technology ever since I built my first website at eight years old. For the last three decades, I've eagerly followed every major technological breakthrough, examining each under the lens of "okay, so what's useful with this one?"

This recent breakthrough in AI technology, in particular, gives me the same level of excitement that I got when I built my first website or jailbroke my iPhone for the first time.

There is so much potential with AI, and the best part is that you don't need to know everything about AI in order to get value from it—just a bit of training on how to integrate these tools into your life.

Think about your car: unless you're a gear head, you probably don't know the first thing about how pistons work within an engine, and yet you don't need to know that in order to drive it efficiently. You do, however, need to take classes to learn how to operate it properly and safely.

The same goes for these new artificial intelligence tools. And here's some good news: like all of your ancestors before you, you can totally figure out how this new tool works with just a little guidance.

My hope is that this talk serves as the first step in your training process for learning about AI. You should leave here with a basic understanding of how these tools are designed to work, as well as some ideas for how to incorporate them into your life.


What is AI? - Artificial Intelligence is a field of science studying how to get computers to reason, learn, and act like humans.

So, what is artificial intelligence?

Artificial intelligence is a field of science focused on getting computers to act, think, and reason like humans.

Human intelligence, unlike other forms we see in nature, excels at pattern recognition and decision-making—two complex skills that AI aims to replicate.

A graph showing a select sampling of the various offshoots within artificial intelligence (e.g. machine learning, natural language processing, computer vision, etc.)

A common misconception about artificial intelligence is that it's one thing. While there are some who are working on artificial general intelligence (like HAL-9000), most researchers in the AI space aren't working on building an all-purpose form of intelligence. Instead, they focus on digitizing specific areas of intelligence.

For instance, natural language processing helps computers understand not just words but the meaning behind them, while computer vision enables machines to recognize and process visual information.

Each of these offshoots serves unique functions.

A helpful analogy is to think of AI as a toolkit, like walking into a hardware store and asking for a hammer.

The clerk will likely ask which kind because there are various types—sledgehammers, jackhammers, ball-peen hammers, etc.

AI is similar; you need to know what problem you’re solving in order to choose the right tool.

Recently, advancements in AI have led to generative AI models, like ChatGPT and Google’s Gemini, which can create new content. But to understand where generative AI fits, let’s discuss some foundational AI concepts.

Artificial intelligence is the parent circle, which contains all the disciplines we use to teach computers how to do "human things".

Artificial intelligence, as we discussed earlier, is a broad field focused on teaching computers to perform human-like tasks.

Within artificial intelligence, we can use machine learning to get a computer to teach itself without humans explicitly programming them.

Within the broad field of artificial intelligence, there's machine learning, where we teach computers to learn without direct human programming.

Within machine learning, deep learning enables machines to build representations of how complex things work in real life.

A subset of machine learning is deep learning, which allows computers to create complex digital representations of real-world objects.

Within deep learning, generative AI creates new content based on patterns it learned through training.

After reaching this level, we enter generative AI, where computers use learned representations to generate new content based on recognized patterns.


Machine learning relies on labelled data (e.g. this is a picture of a traffic light and this is *not* a picture of a traffic light).

To explain machine learning, imagine teaching a computer to recognize a traffic light.

You’d feed it thousands of labeled pictures, both traffic lights and things that aren't traffic lights, and train it to tell the two apart.

After undergoing thousands (or even millions) of tests, the computer program can predict with increasing accuracy, for example, “Yes, this is a traffic light,” or “No, this is not a traffic light.”

You have to decide up front what you want to call a "traffic light." Do hand drawn pictures of traffic lights count? How about in some countries where they don't use traffic lights but rather people directing traffic? How about traffic lights intended for bicycle traffic rather than cars?

You want to make sure during its training that you give it data relevant to the task you want it to perform.

For example, edge cases arise:

  • Do you want your model to say that a hand-drawn traffic light counts as a traffic light?
  • Some countries don't use traffic lights, but rather use humans to direct traffic... do those count?
  • Newer traffic lights are geared toward specific modes of transportation, like bicycles. Are those traffic lights?

As you make these decisions and label your data accordingly, the training process leads to a model capable of identifying traffic lights based on patterns it’s learned.

You are helping with that labeling process every time you do a Captcha.

(By the way: every time you fill out a Captcha online, you are helping Google to train its models to recognize various elements it may encounter on the road. Thanks for the free labor, everyone!)

Deep learning takes machine learning a step further by identifying more complex elements within its training data and making even more nuanced predictions.

Machine learning is cool and has a ton of practical use cases, but what if we wanted to have the computer understand something more complex, like the color of the traffic light?

Neural networks are the form of AI that lets us pass in an image and get back more detailed information about it without humans expressly programming it to do so.

Deep learning takes machine learning a step further, using neural networks to analyze data in stages, like a detective reconstructing a crime scene. At each stage, the network gathers specific details—colors, shapes, textures—and then combines these details into a fuller, more nuanced picture, like a detective piecing together a mystery from small clues.

With our traffic light example, each layer in our neural network focuses on specific aspects of the image, such as color, shapes, or textures, to interpret complex visuals, like recognizing whether a traffic light is red, yellow, or green.

Deep learning helps computers identify the color of a traffic light in any condition (daytime/night time, rain/clear, etc.)

This depth is essential, especially in dynamic environments like self-driving cars, where traffic lights look different depending on the time of day, weather conditions, or lighting.

With enough examples, deep learning models can accurately identify traffic lights in all these conditions, forming the backbone of many AI applications, including autonomous vehicles and medical imaging.

All Machine Learning is just prediction!

The big takeaway about machine learning and deep learning is that they're primarily tools for making well-optimized predictions based on patterns in past data. They use advanced probability and optimization to make 'best-guess' predictions—calculations that may seem insightful but are based purely on mathematical patterns, not true understanding.

None of this stuff is actually "alive" or "conscious" (as best we can tell... more on that in the "black box problem" section below).

All it is doing is saying, "based on what I've learned while training on the data you gave me, I am making a prediction that this image contains a traffic light," or, "this image contains a green traffic light."

Generative Al systems predict what word is most likely to come next in a sentence

Now, let’s take it further.

What if instead of guessing what is inside an image, we can take these models and have them predict what word comes next in a sentence?

That's what generative AI is doing!

Large Language Models (LLMs) are trained on tons of text to predict what word will most likely appear next.

By training a neural network on vast amounts of text—like public domain books, Reddit comments, and YouTube transcripts—the model becomes exceptionally skilled at predicting the next word in a sentence, mimicking human-like responses across countless topics.

And that's what a large language model does!

If you give a prompt to one of these systems, it will use all the patterns it recognized in training and spit out a very convincing answer to your prompt.
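To make "predicting the next word" concrete, here's a deliberately tiny sketch. This is not how real LLMs work internally (they use deep neural networks trained on trillions of words); it just counts which word follows which in a three-sentence corpus, but it captures the core idea of "most likely next word":

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (real LLMs train on trillions of words)
corpus = (
    "i am going to the store to pick up a gallon of milk . "
    "i am going to the hardware store to pick up a gallon of paint . "
    "she went to the store to buy a gallon of milk ."
).split()

# Count how often each word follows each other word (a bigram model)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("of"))  # "milk": seen twice after "of", vs "paint" once
```

A real LLM does something far more sophisticated with far more context, but the spirit is the same: the answer is whatever the training data makes most likely.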

There are lots of ways to predict content... you can do this with text, images, and even audio!

And even more impressive: you can run these models across all kinds of mediums.

Because under the hood, all generative AI tools (ALL of them!) are just running statistical predictions to guess the most likely thing to happen next.

If you want a model that can predict what word would come next in a sentence, you'd use ChatGPT or Gemini or Claude.

Images? Midjourney, DALL-E, etc.

Music? Suno.

Let’s pretend to be an LLM together!

At this point, I imagine you are either thinking I'm talking about witchcraft, magic, or complete gibberish... and I suppose at some level, each of those is possible.

But stick with me here while I drive home this point about how these prediction systems work by having the audience here be my collective large language model.

So I'll give you a prompt, and I want you to fill in the blank:

I am going to the store to pick up a gallon of ______?

If I ask you "I am going to the store to pick up a gallon of ______", what would you likely fill that in with?

(In this case, the live audience of this webinar universally said "milk", but I've also heard people say "ice cream", and I can definitively say that those are my kind of people.)

There's one small problem though: I actually didn't get the answer I was looking for. 😬

So I'm gonna give you a different prompt and see if I can get the answer I was looking for.

I am going to the hardware store to pick up a gallon of ______?

"I'm going to the hardware store to pick up a gallon of ______".

(In this case, the live audience universally said "paint", which was the word I was originally looking for.)

When you read the sentence for my first prompt and see "store", you subconsciously tap into your previous experiences with the word. If you grew up in Minnesota like me, you associate the word "store" with concepts like "grocery store", "Target", or "Walmart."

In that context, you are gonna be thinking about what they sell by the gallon in those places. Again, that's likely milk or ice cream.

In my second prompt, your brain is airlifted out of Target and dropped into a Menards or Home Depot. In this new context, you aren't thinking about milk anymore. You're thinking about paint, oil, water, or other chemicals that are sold by the gallon.

This shift in prompt context illustrates how generative AI works: it predicts the most likely answer given the context.
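The store/hardware store shift boils down to conditional probability. Here's a toy sketch with made-up counts (the numbers and contexts are purely illustrative, not real training data):

```python
from collections import Counter

# Made-up counts of what has followed "a gallon of ___" in two contexts
store_context = Counter({"milk": 90, "ice cream": 8, "paint": 2})
hardware_context = Counter({"paint": 85, "oil": 10, "milk": 5})

def most_likely(counts: Counter):
    """Return the most probable next word and its probability."""
    word, n = counts.most_common(1)[0]
    return word, n / sum(counts.values())

print(most_likely(store_context))     # ('milk', 0.9)
print(most_likely(hardware_context))  # ('paint', 0.85)
```

One context word ("hardware") swaps out the entire distribution of likely answers, which is exactly what you experienced as an audience.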

Recap of Generative AI:

1. Machine learning tools are only making predictions. (They don't "know" anything.)
2. Generative AI models are trained on tons of data to recognize patterns.
3. They predict the most likely next word or words in response to a given prompt (store / hardware store).

So, in summary: machine learning and deep learning models are about making predictions based on patterns in data.

Generative AI takes that one step further, creating new content based on what’s likely to come next in a sequence.

What is the point of all this?

I get that this is a lot, and it's overwhelming to have sixty years of advancements in machine learning thrown at you in about ten minutes.

So let's get to the point of all of this. Why does it matter that we have a computer program that just predicts the most likely word to finish a sentence?

Because it turns out that there are plenty of cases where it's really helpful to get the most likely response to a question!

It's not like you'd want to trust these things implicitly, because as we know, life doesn't always align with what is average.

So when we say "don't trust these things because they're not telling the truth", we mean it! They're not built to be "truthful"; they're built to be "the most likely to be truthful" (which is a big nitpick, for sure, but an important nuance to understand when working with AI!).

Take legal advice, for example. Again, do not trust these things for legal advice, but let's say you need to draft a non-disclosure agreement.

In the old days, you would go to a lawyer who would pull out their own template, make some specific modifications to fit your needs, and pass it along. There's three delicious billable hours right there.

Today, you could go to a large language model and describe the sort of things you'd want your NDA to contain. The LLM would then give you the most likely provisions that are included in NDAs. You could then take that draft and shoot it to your attorney for review. That's 30 billable minutes instead of 3 billable hours.

That's the power of AI. That's why I'm so excited for these generative AI tools. They aren't going to replace humans; they're going to augment them.


Tip 1: Get your own hands dirty.

Let’s move on to some practical tips for adopting AI in your organization.

My first tip: you gotta get your own hands dirty and get hands-on experience with these tools.

As a leader, experimenting directly with these tools will help you understand their potential and limitations.

In my career so far, I've noticed that most companies follow a path of hiring consultants to come in and help them adopt new technology. With AI, I encourage you to get familiar with it yourself before shelling out for third-party advice.

Action step: Encourage yourself and your employees to use AI tools like ChatGPT for small tasks—drafting emails, summarizing reports, or answering questions—and share what they've learned with the team.

Tip 2: Encourage psychological safety.

My second tip is to foster psychological safety.

AI adoption requires trial and error, and studies show many employees hesitate to use AI tools at work due to fears of being seen as cheating or potentially automating themselves out of a job.

Create a culture where experimenting with AI is encouraged and celebrated.

Action step: Try running an “AI hackathon” where employees explore AI tools in a low-stakes environment, share their findings, and foster team learning.

Tip 3: Clean data is everything.

Third: clean data is essential.

AI models are only as good as the data they’re trained on, so ensure your organization’s data is organized and free from errors. The better your data, the better your AI models will perform.

And as we'll discuss in the pitfalls section: "dirty" data will lead to biased and inaccurate results.

Action step: Every company has at least one person who loves working with spreadsheets; tap into their skills to spearhead data-cleaning initiatives.
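As a minimal illustration of what "cleaning" can mean in practice, here's a sketch that normalizes and deduplicates a hypothetical donor list (the field names and records are made up for the example):

```python
# A hypothetical raw donor export with common data problems
raw = [
    {"name": " Jane Doe ", "email": "JANE@EXAMPLE.COM"},
    {"name": "Jane Doe", "email": "jane@example.com"},  # duplicate of above
    {"name": "Bob", "email": ""},                       # missing email
]

seen = set()
clean = []
for row in raw:
    email = row["email"].strip().lower()
    if not email or email in seen:
        continue  # drop rows with missing or duplicate emails
    seen.add(email)
    clean.append({"name": row["name"].strip(), "email": email})

print(clean)  # one clean record for Jane; Bob is dropped for follow-up
```

Real data cleaning involves far more (standardizing formats, resolving conflicts, handling missing values), but even simple normalization like this prevents a model, or a mail merge, from treating "JANE@EXAMPLE.COM" and "jane@example.com" as two different people.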

Tip 4: Start small, build up from there.

The fourth tip: start small.

Don’t try to replace entire workflows with AI right away. Start small, focusing on simple, manageable projects, and scale based on what works.

A great place to start is inviting an AI bot into your virtual meetings to record notes and generate summaries. Be careful not to set it up to "auto join" every meeting (you probably don't want it in a sensitive HR meeting, for example), but give that a try and see how it performs for you.

Action step: Try using AI to do event survey analysis, basic donor segmentation, or create copy for your newsletters or social media channels.

Tip 5: Iterate on your prompts.

Finally, I can't overstate the importance of continually iterating and improving on your prompts.

Remember our "store/hardware store" example? One word made a world of difference in the output.

Similarly, providing an LLM with a prompt like "Summarize this report" will yield different results from "Create a one-paragraph summary highlighting the most important program outcomes from this report."

The field of research that tries to figure out how to get the most out of these tools is called "prompt engineering". You can find tons of great resources online and on YouTube for how to best phrase things for different types of models. For example, the prompts that work best for ChatGPT differ from the ones that work best for Claude, and the prompts you use for a text generator will be different from those for an image generator like Midjourney.

Prompt Chaining

- Prompt 1: You are an expert at filling out grant applications. Review this grant application and our organization's mission statement. Provide a list of tangible ways we are best suited to win this application.

- Prompt 2: Using the list you generated in the previous prompt, create a cover letter for our grant application highlighting the ways we align with the grant's purpose.

A prompt engineering trick that I use all the time is called "prompt chaining."

Prompt chaining involves using the result from one prompt as the foundation for the next prompt.

Instead of asking an LLM to generate a cover letter for a grant application, you could first ask an LLM to review both a grant application and your organization's mission, and then provide a list of areas where there are synergies.

Then, you can take the results from that and ask it to write the letter.

Giving the models time to reason through their answers tends to lead to better outcomes.
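In code form, prompt chaining is just feeding one response into the next prompt. In this sketch, `ask()` is a stand-in stub for whatever LLM API or chat interface you actually use, and the grant/mission text is placeholder content:

```python
def ask(prompt: str) -> str:
    """Stand-in for a call to your LLM of choice; swap in a real API call."""
    return f"[model response to: {prompt[:40]}...]"

grant_application = "(text of the grant application)"
mission_statement = "(text of our mission statement)"

# Prompt 1: ask for the analysis first, not the finished letter.
alignment = ask(
    "You are an expert at filling out grant applications. Review this grant "
    "application and our organization's mission statement, and list tangible "
    "ways we are best suited to win this application.\n\n"
    f"{grant_application}\n\n{mission_statement}"
)

# Prompt 2: feed the first answer into the second prompt.
cover_letter = ask(
    "Using this list of alignment points, create a cover letter for our grant "
    f"application highlighting how we align with the grant's purpose:\n\n{alignment}"
)
print(cover_letter)
```

The same pattern works just as well in a chat window with no code at all: run the first prompt, then paste its answer into the second.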

An example of chain of thought prompting

Another prompt engineering trick I frequently reach for is called chain of thought.

With this technique, you are asking an LLM to think about a given problem from three distinct perspectives. You then ask it to act as one of those personas and critique the responses of the other two. Finally, you combine the results into a well-considered and well-rounded answer.

As an example: my son does not like to eat pizza. I know... it bums me out, too.

I provided ChatGPT with a bunch of backstory on my son and what we've tried to do to encourage him to try pizza. Then, I asked it to pretend to be a kindergarten teacher, a child psychologist, and a grandparent, and to tell me, as each of those personas, what approach it would take to get my son to eat pizza.

Next, I asked it, as each persona, to reflect on the answers of the other personas. For example, the child psychologist persona would consider the kindergarten teacher's and grandparent's perspectives and adjust its own response.

Finally, after all personas have reflected on each other's answers, I have the model summarize the best path forward.

This trick works exceptionally well across several different problems. In my own engineering work, I use it to consider system changes from the perspectives of an engineer, an end user, and a business executive. It can surface insights you might otherwise miss.
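The persona loop above can be sketched as a few chained calls. As before, `ask()` is a placeholder stub for your actual LLM, and the problem and personas are just the example from this talk:

```python
def ask(prompt: str) -> str:
    """Stand-in for a call to your LLM of choice; swap in a real API call."""
    return f"[response to: {prompt[:30]}...]"

# The example problem and personas from this talk; adapt to your situation
problem = "How do we get my son to try eating pizza?"
personas = ["kindergarten teacher", "child psychologist", "grandparent"]

# Round 1: each persona answers independently.
answers = {p: ask(f"Acting as a {p}, what approach would you take? {problem}")
           for p in personas}

# Round 2: each persona reflects on the others' answers and revises its own.
refined = {}
for p in personas:
    others = "\n".join(a for q, a in answers.items() if q != p)
    refined[p] = ask(f"As the {p}, reflect on these other answers and revise yours:\n{others}")

# Finally, combine everything into one well-rounded recommendation.
summary = ask("Summarize the best path forward from these answers:\n"
              + "\n".join(refined.values()))
print(summary)
```

Again, nothing here requires code: you can run the same three rounds by hand in a chat window, copying answers between prompts.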

Tips for Adopting AI

So in order to integrate AI successfully, treat it as a tool that augments, rather than replaces, human judgment.

Every time I fire up an AI assistant, I like to think of it as an eager intern who is exceptionally smart but exceptionally naive. I do not take its output as gospel; rather, I use it as a foundation and build on it from there.

The best way to integrate AI into your workflows is to use it for routine tasks, and keep human oversight for critical decisions.

Finally, I'll take this time to further emphasize that all AI outputs are based on probability, not the truth. Always review and adjust outputs as needed.


Ethical Considerations & Pitfalls: Bias in AI

Alright, we've covered what artificial intelligence is, and we've gotten through some tips for adopting AI into your organization.

Now, let's talk about areas where AI can fall flat.

First: bias.

If you recall, at the beginning of this talk, we described artificial intelligence as being focused on getting computers to be like humans.

Humans are inherently biased, and AI, trained on human-generated data, often reflects this bias. Achieving truly "unbiased" AI is a complex, if not impossible, task.

I propose you think of AI the same way you think of humans: there is no such thing as an unbiased AI model.

AI models are only as good as the data you train them on. Data is one of those things you can pretty easily screw up if you aren't attuned to all of the various forms of bias that could impact it.

Examples of Bias in AI (Stereotyping Bias, Measurement Bias, and Selection Bias)

There are many different kinds of bias, but I wanted to highlight three specific forms as a starter:

Stereotyping bias: This occurs when AI models perform less accurately for certain groups due to their underrepresentation or misrepresentation in training data, as seen with YouTube's automatic captions, which struggle with Scottish, Indian, and African American accents.

Measurement bias: This happens when an AI model's metrics or algorithms lead to systematically skewed outcomes, such as the Apple Card's algorithm offering men higher credit limits than women with similar financial profiles.

Selection bias: This arises when training data lacks sufficient diversity, causing models to underperform for certain groups; for instance, breast cancer detection AI trained mainly on female patients performs less accurately for male patients.

There are many more forms of bias that you can research on your own, but the main takeaway here is that every system is subject to bias depending on the data used to train it. For this reason, you can't just rely on the output of an AI-led decision.

Ethical Considerations & Pitfalls: The 'Black Box Problem'

As mentioned earlier, another major issue is the “black box” problem.

Deep learning models are like locked safes—each layer hides its ‘reasoning’ behind many interconnected processes, making it nearly impossible for humans to interpret every decision-making step.

This lack of transparency, especially in high-stakes areas like criminal justice or credit scoring, means we’re left trusting the ‘safe’ without ever seeing inside, creating ethical and practical risks.

Once again, this is a reminder that we can’t just accept AI output as absolute truth; careful consideration and oversight are needed to avoid unintentional discrimination or bias.

Ethical Considerations & Pitfalls: It can’t do everything!

Literally every single time new technology drops, some wise guy emerges from the crowd and says, "well, I can't use [insert new tech] to do [insert obvious use case]".

Earlier in this talk, I led off by saying "AI is for everyone." Notice how I didn't say "AI is for every thing."

Of course you can't use AI for everything! AI is not a magic bullet. You gotta know how to deploy it effectively, which is in service of automating predictable, repetitive tasks.

Yes, wise guy, you are right: you aren't gonna want to deploy AI while leading a camping expedition in the Boundary Waters.

But after you complete your expedition and ask for feedback from the program's participants, you could use AI to process those responses and bucket them into understandable and actionable groups.

Ethical Considerations & Pitfalls: Content is (Literally) Average

If you've been paying attention during this entire talk, you'll notice I keep saying things like "AI is picking the most likely word to finish a sentence" and "machine learning is used to make predictions."

If you are relying on a tool to create the most likely response to something, you'll see quickly that the responses are kinda... average.

This can be advantageous, but it's also something to be aware of. By using output that is average by design, you run the risk of blending into everything else out there. (This, by the way, leads to the rise of "slop," the AI equivalent of spam.)

Now, this may be a trade off you are willing to accept in many cases. I, for one, often use AI as a therapist to help me make sense of some thoughts swirling around in my head. This works great, but I use the advice and feedback I get from the model and take it to a human therapist.

The other thing about the content being average: remember how we said that AI doesn't care about truth, but rather about finding the thing that is most likely to be truthful? This leads to some concerning behavior called "hallucination," where the model confidently makes up things that aren't actually true.

You may recall headlines from a year ago where a lawyer used ChatGPT to help draft a legal brief and it hallucinated court cases that didn't exist. This sort of thing happens all the time with new technology, especially when it's used by people who aren't properly trained on it (or are swayed by glitzy marketing campaigns making promises it can't possibly deliver).

Ethical Considerations & Pitfalls
Mitigation Strategies
- Use AI to assist, but keep human oversight
- Review AI outputs for biases and accuracy
- Make adjustments as needed

Now that you're aware of the pitfalls and risks of using artificial intelligence, how can you mitigate those risks?

Always treat AI as a supportive tool, maintaining human oversight—especially for important decisions where ethics and accuracy are critical.

Always review AI outputs for potential bias and inaccuracies.

Finally, adjust AI-generated content as needed to match your style and objectives. For instance, AI may draft a social media post, but tweaking it to align with your brand's voice adds value.


What's next? Spend ten hours doing tasks with generative AI!

We've covered what AI is, practical tips for adopting it, ethical concerns, and common pitfalls.

So, what's next for you?

Begin by dedicating 10 hours to using generative AI tools to build practical familiarity.

Try asking questions in areas you know well to see how AI performs, and notice where you’d add or change things.

Sharing what you learn with your team encourages experimentation and fosters a learning environment.