
Down With The System: A Memoir (of sorts)


🔗 a linked post to amzn.to » — originally shared here on

System of a Down holds a very special place in my heart.

I was in seventh grade when Toxicity was released. I remember sitting in church on Good Friday a few months later and hearing the story of Jesus' execution on the cross. When my pastor, who was reading from the scriptures, got to the part where Jesus shouts, "Father, why have you forsaken me?", my sister and I looked at each other and shared a knowing realization: "oh man, that's from the Bible?"

I've been drawn to System mostly because of the instrumentals. Lyrics have not traditionally captured my attention when listening to music.

It took me a few years to discover that all the members of the band were Armenian-Americans. Until reading this book, I didn't give Armenia much thought. The last time I recall giving much consideration to the Middle East in general was in tenth grade world history class. I couldn't have picked out Armenia on a map if you had asked me.

Serj Tankian (the lead singer of System) recently released his memoir, and the title adeptly appends "of sorts" to that noun.

Yes, there are plenty of great stories in this book about Serj's experience with System of a Down, but I'd argue more than 25% of the book serves as a history lesson about Armenia for ignorant Westerners like me.

Even though I'm not much of a lyrics guy, it's hard to miss the humanitarian messages when they're shouted at you by Serj.

Like in "P.L.U.C.K.", from their debut self-titled album1:

Revolution, the only solution,
The armed response of an entire nation,
Revolution, the only solution,
We've taken all your shit, now it's time for restitution.

Or "Cigaro" from Mezmerize2:

We're the regulators that de-regulate
We're the animators that de-animate
We're the propagators of all genocide
Burning through the world's resources
Then we turn and hide

Reading this book made so many of these songs come to life in a new way for me, especially reading of the horrible atrocities committed by the Turkish government. Serj really opens up about some deep, painful generational trauma that explains his drive for justice.

I also loved his reflection on what System means to him today. The closing chapter of the book talks about the 2023 Sad, Sick World show in Las Vegas. He went into the show feeling like System was nothing more than a cover band at this point, but came out of it feeling joy.3 I sure hope I can see them perform live one day.

If you're a System fan like me, I could not recommend this book any more highly. If it weren't for the fact that it's currently 6:15am, I would be blasting them in my house right now.


  1. P.L.U.C.K. is an acronym for "Politically Lying, Unholy, Cowardly Killers," which sort of tells you how they feel about the Turkish government. 

  2. I have a hard time selecting my favorite System album because they all honestly hold a special place in my heart. But with Mezmerize coming out my senior year of high school and "Radio/Video" becoming the theme song to many of my favorite memories of that time, I would be hard-pressed not to stick with that one as my favorite. 

  3. Sad, Sick World was put on by the same group that did When We Were Young. During WWWY, I couldn't help but wonder if the artists felt the same joy we did. I'm pleased to read that Serj did. 

Continue to the full article


The Verge Endorses Kamala Harris


🔗 a linked post to theverge.com » — originally shared here on

Collective action problem is the term political scientists use to describe any situation where a large group of people would do better for themselves if they worked together, but it’s easier for everyone to pursue their own interests. The essential work of every government is making laws that balance the tradeoffs between shared benefits and acceptable restrictions on individual or corporate freedoms to solve this dilemma, and the reason people hate the government is that not being able to do whatever you want all the time is a huge bummer. Speed limits help make our neighborhoods safer, but they also mean you aren’t supposed to put the hammer down and peel out at every stoplight, which isn’t any fun at all.

Every Verge reader is intimately familiar with collective action problems because they’re everywhere in tech. We cover them all the time: making everything charge via USB-C was a collective action problem that took European regulation to finally resolve, just as getting EV makers to adopt the NACS charging standard took regulatory effort from the Biden administration. Content moderation on social networks is a collective action problem; so are the regular fights over encryption. The single greatest webcomic in tech history describes a collective action problem.

The problem is that getting people to set aside their own selfishness and work together is generally impossible even if the benefits are obvious, a political reality so universal it’s a famous Tumblr meme. 

In general, I don’t like to discuss politics on here. I figure if you’re reading my blog, you probably have a vague idea of what my political beliefs are.

But this endorsement of Kamala Harris isn’t just an endorsement of her and her politics. She's hardly mentioned in it at all.

In fact, this endorsement is an endorsement of the concept of democracy.

The key part about Kamala is toward the end, which sums up why I’m gonna vote for her:

In many ways, the ecstatic reaction to Harris is simply a reflection of the fact that she is so clearly trying. She is trying to govern America the way it’s designed to be governed, with consensus and conversation and effort. With data and accountability, ideas and persuasion. Legislatures and courts are not deterministic systems with predictable outputs based on a set of inputs — you have to guide the process of lawmaking all the way to the outcomes, over and over again, each time, and Harris seems not only aware of that reality but energized by it. More than anything, that is the change a Harris administration will bring to a country exhausted by decades of fights about whether government can or should do anything at all.

People love to say “the government is broken”, but often fail to ask any follow-up questions. You know, like "why is it broken?" and "how can we fix it?"

When I see something broken, my first instinct is to figure out how it got broken in the first place. "Broken", by definition, implies there is a state of "functioning." If we want to "fix" it, we need to agree on what "functioning" means.1

If we agree that our country is broken, then we need to agree on a vision for what a functioning country is.

When building software, there are plenty of excuses we could make as to why our system is broken. A junior engineer might blame the users. They're dumb, they're using it wrong, they don't understand the elegance of the solution we've built for them.

As you get more senior, you start to realize just how reductive and silly those arguments are. We can't control our users, and we will likely never understand them. But we can perform user testing and spend time with our customers. We learn how they actually use the software. We dig to uncover other problems they have so we can adjust our software to meet those needs.

I think what bothers me about our current political climate is that we are quick to jump to these reductive ideas which are proven to be ineffective. We have to work together and keep trying new things.

We're better than this. We all need each other, often more than we are willing to admit.

It’s a lesson I’m trying to impart to my kids. They constantly fight with each other, their feelings pouring out of them like a fire hydrant when they don’t get what they want.

I get it. It’s like The Rolling Stones said: “you can’t always get what you want, but if you try sometimes, you just might find you get what you need.”

We need America. We need to come together and curb our natural tendency toward hostility against anything that is different.

But even if you’re apolitical, I encourage you to read this excellent essay. It makes me proud to be an American at a time when it feels dangerous to be proud.


  1. This is probably why I enjoy software engineering: there is almost always a clear definition of "functioning" and a logical reason why a system is "broken", and as a result, there is almost always a logical solution to keep the system working for as many people as possible.  

Continue to the full article


Fantastic Builders and Where to Find Them


🔗 a linked post to builders.genagorlin.com » — originally shared here on

A need to “prove oneself” to internalized authority figures leads to things like climbing conventional status ladders, or staying in an unhappy marriage, or piling up as much money as possible to preserve the appearance of having “made it”.

What motivated Esther to do things like take a receptionist job at a film company, pick up her life and move to San Francisco, and risk her savings on her startup was something far more personal and idiosyncratic: a conception of the interests she wanted to explore, the people she wanted to meet, the products she wanted to create, the life she envisioned and wanted to build for herself—and, yes, the proof that she really could count on herself to do it.

This is super inspiring on so many levels.

It seems like life becomes a little more palatable once you figure out who you are and start leaning into that.

Continue to the full article


Every map of China is wrong. And this is intentional



🔗 a linked post to medium.com » — originally shared here on

I work in a climate tech startup, and although I don’t directly manipulate geospatial data in my role (at least not at the moment), I’m very interested in this aspect of our work. I came across a seemingly innocuous message on Slack about how we had less information for a particular carbon offset project because of the China GPS shift problem.

This naturally piqued my interest — I didn’t know Chinese geospatial data would be any different from the rest of the world. Hadn’t this been one of the few areas where we all agreed about the right way to do things?

The more I delved into this topic, the more interesting stuff I found, and the more it made sense from a Chinese perspective.

I know I’ve historically railed on the frustrations engineers face when dealing with time zones, so color me surprised when I saw Naz link to this post giving me a new esoteric cause to rail on: GPS shift!

Continue to the full article


Demystifying Artificial Intelligence in Nonprofits - Webinar Recap

originally shared here on

Demystifying AI for Nonprofits - Practical Use Cases, Ethical Concerns, and How to Get Started

I recently gave a talk about artificial intelligence that was specifically catered to those in the nonprofit world. Here's a recap of the talk using Simon Willison's annotated talk format.


Introduction - AI is a tool for everybody.

I firmly believe that AI is a tool for everyone.

I’ve been immersed in technology ever since I built my first website at eight years old. For the last three decades, I've eagerly followed every major technological breakthrough, examining each under the lens of "okay, so what's useful with this one?"

This recent breakthrough in AI technology, in particular, gives me the same level of excitement that I got when I built my first website or jailbroke my iPhone for the first time.

There is so much potential with AI, and the best part is that you don't need to know everything about AI in order to get value from it—just a bit of training on how to integrate these tools into your life.

Think about your car: unless you're a gear head, you probably don't know the first thing about how pistons work within an engine, and yet you don't need to know that in order to drive it efficiently. You do, however, need to take classes to learn how to operate it properly and safely.

The same goes for these new artificial intelligence tools. And here's some good news: like all of your ancestors before you, you can totally figure out how this new tool works with just a little guidance.

My hope is that this talk serves as the first step in your training process for learning about AI. You should leave here with a basic understanding of how these tools are designed to work, as well as some ideas for how to incorporate them into your life.


What is AI? - Artificial Intelligence is a field of science studying how to get computers to reason, learn, and act like humans.

So, what is artificial intelligence?

Artificial intelligence is a field of science focused on getting computers to act, think, and reason like humans.

Human intelligence, unlike other forms we see in nature, excels at pattern recognition and decision-making—two complex skills that AI aims to replicate.

A graph showing a select sampling of the various offshoots within artificial intelligence (e.g. machine learning, natural language processing, computer vision, etc.)

A common misconception about artificial intelligence is that it's one thing. While there are some who are working on artificial general intelligence (like HAL-9000), most researchers in the AI space aren't working on building an all-purpose form of intelligence. Instead, they focus on digitizing specific areas of intelligence.

For instance, natural language processing helps computers understand not just words but the meaning behind them, while computer vision enables machines to recognize and process visual information.

Each of these offshoots serves unique functions.

A helpful analogy is to think of AI as a toolkit, like walking into a hardware store and asking for a hammer.

The clerk will likely ask which kind because there are various types—sledgehammers, jackhammers, ball-peen hammers, etc.

AI is similar; you need to know what problem you’re solving in order to choose the right tool.

Recently, advancements in AI have led to generative AI models, like ChatGPT and Google’s Gemini, which can create new content. But to understand where generative AI fits, let’s discuss some foundational AI concepts.

Artificial intelligence is the parent circle, which contains all the disciplines we use to teach computers how to do "human things".

Artificial intelligence, as we discussed earlier, is a broad field focused on teaching computers to perform human-like tasks.

Within artificial intelligence, we can use machine learning to get a computer to teach itself without humans explicitly programming it.

Within the broad field of artificial intelligence, there's machine learning, where we teach computers to learn without direct human programming.

Within machine learning, deep learning enables machines to build representations of how complex things work in real life.

A subset of machine learning is deep learning, which allows computers to create complex digital representations of real-world objects.

Within deep learning, Generative AI creates new content based on patterns it learned through training.

After reaching this level, we enter generative AI, where computers use learned representations to generate new content based on recognized patterns.


Machine learning relies on labelled data (e.g. this is a picture of a traffic light and this is *not* a picture of a traffic light).

To explain machine learning, imagine teaching a computer to recognize a traffic light.

You’d feed it thousands of labeled pictures and train it to differentiate between traffic lights and non-traffic lights.

After undergoing thousands (or even millions) of tests, the computer program can predict with increasing accuracy, for example, “Yes, this is a traffic light,” or “No, this is not a traffic light.”

You have to decide up front what you want to call a "traffic light."

You want to make sure during its training that you give it data relevant to the task you want it to perform.

For example, edge cases arise:

  • Do you want your model to say that a hand-drawn traffic light counts as a traffic light?
  • Some countries don't use traffic lights, but rather use humans to direct traffic... do those count?
  • Newer traffic lights are geared toward specific modes of transportation, like bicycles. Are those traffic lights?

As you make these decisions and label your data accordingly, the training process leads to a model capable of identifying traffic lights based on patterns it’s learned.
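
To make this concrete, here's a minimal sketch of what that training step can look like in code. It's purely illustrative: it uses scikit-learn with random numbers standing in for real image features and labels, just to show the shape of the workflow.

```python
# A minimal, illustrative sketch of supervised training for
# "traffic light vs. not a traffic light." Random numbers stand in
# for real image features; the labels are the decisions you made
# up front about what counts as a traffic light.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

X = rng.random((1000, 64))         # 1,000 "images", 64 features each
y = rng.integers(0, 2, size=1000)  # 1 = traffic light, 0 = not

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)  # "training" = finding patterns in labeled data

print(model.predict(X_test[:5]))        # best guesses for five unseen examples
print(model.predict_proba(X_test[:1]))  # the confidence behind a guess
```

Notice that `fit()` is just pattern-finding over the labels you chose; all the edge-case decisions above happen before this code ever runs.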

You are helping with that labeling process every time you do a Captcha.

(By the way: every time you fill out a Captcha online, you are helping Google to train its models to recognize various elements it may encounter on the road. Thanks for the free labor, everyone!)

Deep learning takes machine learning a step further by identifying more complex elements within its training data and making even more nuanced predictions.

Machine learning is cool and has a ton of practical use cases, but what if we wanted to have the computer understand something more complex, like the color of the traffic light?

Neural networks are the form of AI that lets us pass in an image and have it tell us more detailed information about it, without humans expressly programming it to do so.

Deep learning takes machine learning a step further, using neural networks to analyze data in stages, like a detective piecing together a mystery from small clues. At each stage, the network gathers specific details—colors, shapes, textures—and then combines these details into a fuller, more nuanced picture.

With our traffic light example, each layer in our neural network focuses on specific aspects of the image, such as color, shapes, or textures, to interpret complex visuals, like recognizing whether a traffic light is red, yellow, or green.

Deep learning helps computers identify the color of a traffic light in any condition (daytime/night time, rain/clear, etc.)

This depth is essential, especially in dynamic environments like self-driving cars, where traffic lights look different depending on the time of day, weather conditions, or lighting.

With enough examples, deep learning models can accurately identify traffic lights in all these conditions, forming the backbone of many AI applications, including autonomous vehicles and medical imaging.
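
If you're curious what those "layers" look like in code, here's a toy sketch in PyTorch. The talk doesn't prescribe any particular architecture, so treat this as one plausible shape: two convolutional stages that extract progressively richer features, feeding a classifier that picks red, yellow, or green.

```python
# A toy deep learning model: each stage extracts progressively
# richer features (edges and colors, then shapes and textures)
# before a final layer guesses the light's color.
import torch
import torch.nn as nn

class TrafficLightNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low level: edges, colors
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid level: shapes, textures
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 3)      # red / yellow / green

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

net = TrafficLightNet()
fake_image = torch.rand(1, 3, 64, 64)  # one 64x64 RGB "photo"
print(net(fake_image).softmax(dim=1))  # a probability for each color
```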

All Machine Learning is just prediction!

The big takeaway about machine learning and deep learning is that they're primarily tools for making well-optimized predictions based on patterns in past data. They use advanced probability and optimization to make 'best-guess' predictions—calculations that may seem insightful but are based purely on mathematical patterns, not true understanding.

None of this stuff is actually "alive" or "conscious" (as best we can tell... more on that in the "black box problem" section below).

All it is doing is saying, "based on what I've learned while training on the data you gave me, I am making a prediction that this image contains a traffic light, or that this image contains a 'green' traffic light."

Generative AI systems predict what word is most likely to come next in a sentence

Now, let’s take it further.

What if, instead of guessing what is inside an image, we had these models predict what word comes next in a sentence?

That's what generative AI is doing!

Large Language Models (LLMs) are trained on tons of text to predict what word will most likely appear next.

By training a neural network on vast amounts of text—like public domain books, Reddit comments, and YouTube transcripts—the model becomes exceptionally skilled at predicting the next word in a sentence, mimicking human-like responses across countless topics.

And that's what a large language model does!

If you give a prompt to one of these systems, it will use all the patterns it recognized in training and spit out a very convincing answer to your prompt.
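
Here's a toy version of that idea you can actually run: a bigram model that predicts the next word purely from counts in its training text. A real LLM is unimaginably bigger and conditions on far more context, but the spirit is the same.

```python
# A toy next-word predictor: count which word follows which in the
# training text, then predict the most frequent follower.
from collections import Counter, defaultdict

training_text = (
    "i am going to the store to pick up a gallon of milk . "
    "i went to the store for a gallon of milk . "
    "i am going to the hardware store to pick up a gallon of paint ."
).split()

following = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("gallon"))  # "of"
print(predict_next("of"))      # "milk" (seen twice) beats "paint" (seen once)
```

Note that this toy only looks one word back, so it can never tell a grocery store from a hardware store. Keep that limitation in mind for the exercise below, where context is the whole point.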

There are lots of ways to predict content... you can do this with text, images, and even audio!

And even more impressive: you can run these models across all kinds of mediums.

Because under the hood, all generative AI tools (ALL of them!) are just running statistical predictions to guess at what is the most likely thing to happen.

If you want a model that can predict what word would come next in a sentence, you'd use ChatGPT or Gemini or Claude.

Images? Midjourney, DALL-E, etc.

Music? Suno.

Let’s pretend to be an LLM together!

At this point, I imagine you are either thinking I'm talking about witchcraft, magic, or complete gibberish... and I suppose at some level, each of those is possible.

But stick with me while I drive home this point about how these prediction systems work by having the audience be my collective large language model.

So I'll give you a prompt, and I want you to fill in the blank:

I am going to the store to pick up a gallon of ______?

If I ask you "I am going to the store to pick up a gallon of ______", what would you likely fill that in with?

(In this case, the live audience of this webinar universally said "milk", but I've also heard people say "ice cream", and I can definitively say that those are my kind of people.)

There's one small problem though: I actually didn't get the answer I was looking for. 😬

So I'm gonna give you a different prompt and see if I can get the answer I was looking for.

I am going to the hardware store to pick up a gallon of ______?

"I'm going to the hardware store to pick up a gallon of ______".

(In this case, the live audience universally said "paint", which was the word I was originally looking for.)

When you read the sentence for my first prompt and see "store", you subconsciously tap into your previous experiences with the word. If you grew up in Minnesota like me, you associate the word "store" with concepts like "grocery store", "Target", or "Walmart."

In that context, you are gonna be thinking about what they sell by the gallon in those places. Again, that's likely milk or ice cream.

In my second prompt, your brain is airlifted out of Target and dropped into a Menards or Home Depot. In this new context, you aren't thinking about milk anymore. You're thinking about paint, oil, water, or other chemicals that are sold by the gallon.

This shift in prompt context illustrates how generative AI works: it predicts based on the most likely answer, given the context.

Recap of Generative AI: 

1. Machine learning tools are only making predictions. (They don't “know” anything)
2. Generative AI models are trained on tons of data to recognize patterns
3. They predict the next likely word or words to answer a given prompt (store / hardware store)

So, in summary: machine learning and deep learning models are about making predictions based on patterns in data.

Generative AI takes that one step further, creating new content based on what’s likely to come next in a sequence.

What is the point of all this?

I get that this is a lot, and it's overwhelming to have sixty years of advancements in machine learning thrown at you in about ten minutes.

So let's get to the point of all of this. Why does it matter that we have a computer program that just predicts the most likely word to finish a sentence?

Because it turns out that there are plenty of cases where it's really helpful to get the most likely response to a question!

It's not like you'd want to trust these things implicitly, because as we know, life doesn't always align with what is average.

So when we say "don't trust these things because they're not telling the truth", we mean it! They're not built to be "truthful"; they're built to be "the most likely to be truthful" (which is a big nitpick, for sure, but an important nuance to understand when working with AI!).

Take legal advice, for example. Again, do not trust these things for legal advice, but let's say you need to draft a non-disclosure agreement.

In the old days, you would go to a lawyer who would pull out their own template, make some specific modifications to fit your needs, and pass it along. There's three delicious billable hours right there.

Today, you could go to a large language model and describe the sort of things you'd want your NDA to contain. The LLM would then give you the most likely provisions that are included in NDAs. You could then take that draft and shoot it to your attorney for review. That's 30 billable minutes instead of 3 billable hours.

That's the power of AI. That's why I'm so excited for these generative AI tools. They aren't going to replace humans; they're going to augment them.


Tip 1: Get your own hands dirty.

Let’s move on to some practical tips for adopting AI in your organization.

My first tip: you gotta get your own hands dirty and gain hands-on experience with these tools.

As a leader, experimenting directly with these tools will help you understand their potential and limitations.

In my career so far, I've noticed that most companies follow a path of hiring consultants to come in and help them adopt new technology. With AI, I encourage you to get familiar with it yourself before shelling out for third party advice.

Action step: Encourage yourself and your employees to use AI tools like ChatGPT for small tasks—drafting emails, summarizing reports, or answering questions—and share what they've learned with the team.

Tip 2: Encourage psychological safety.

My second tip is to foster psychological safety.

AI adoption requires trial and error, and studies show many employees hesitate to use AI tools at work due to fears of being seen as cheating or potentially automating themselves out of a job.

Create a culture where experimenting with AI is encouraged and celebrated.

Action step: Try running an “AI hackathon” where employees explore AI tools in a low-stakes environment, share their findings, and foster team learning.

Tip 3: Clean data is everything.

Third: clean data is essential.

AI models are only as good as the data they’re trained on, so ensure your organization’s data is organized and free from errors. The better your data, the better your AI models will perform.

And as we'll discuss in the pitfalls section: "dirty" data will lead to biased and inaccurate results.

Action step: Every company has at least one person who loves working with spreadsheets; tap into their skills to spearhead data-cleaning initiatives.
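
What does cleaning actually look like? Here's a small sketch in pandas; the file and column names are made up for illustration, but these are the kinds of first passes your spreadsheet person will recognize.

```python
# Typical first-pass cleanup before handing data to any AI tool.
# "donors.csv" and its columns are hypothetical examples.
import pandas as pd

donors = pd.read_csv("donors.csv")

donors = donors.drop_duplicates(subset="email")            # one row per donor
donors["email"] = donors["email"].str.strip().str.lower()  # normalize text
donors["gift_amount"] = pd.to_numeric(donors["gift_amount"],
                                      errors="coerce")     # flag bad numbers
donors = donors.dropna(subset=["email", "gift_amount"])    # drop unusable rows

print(donors.describe())  # sanity-check ranges before training or analysis
```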

Tip 4: Start small, build up from there.

The fourth tip: start small.

Don’t try to replace entire workflows with AI right away. Start small, focusing on simple, manageable projects, and scale based on what works.

A great place to start is inviting an AI bot into your virtual meetings to record notes and generate summaries. Be careful not to set it up to "auto-join" every meeting (you probably don't want it in a sensitive HR meeting, for example), but give it a try and see how it performs for you.

Action step: Try using AI to do event survey analysis, basic donor segmentation, or create copy for your newsletters or social media channels.

Tip 5: Iterate on your prompts.

Finally, I can't overstate the importance of continually iterating and improving on your prompts.

Remember our "store/hardware store" example? One word made a world of difference in the output.

Similarly, providing an LLM with a prompt like "Summarize this report" will yield different results from "Create a one-paragraph summary highlighting the most important program outcomes from this report."

The field of research that tries to figure out how to get the most out of these tools is called "prompt engineering". You can find tons of great resources online and on YouTube for how to best phrase things for different types of models. For example, the prompts that work best for ChatGPT are different from those that work best for Claude. And the prompts you use for a text generator will be different from those for an image generator like Midjourney.

Prompt Chaining

• Prompt 1: You are an expert at filling out grant applications. Review this grant application and our organization’s mission statement. Provide a list of tangible ways we are best suited to win this application.

• Prompt 2: Using the list you generated in the previous prompt, create a cover letter for our grant application highlighting the ways we align with the grant’s purpose.

A prompt engineering trick that I use all the time is called "prompt chaining."

Prompt chaining involves using the result from one prompt as the foundation for the next prompt.

Instead of asking an LLM to generate a cover letter for a grant application, you could first ask an LLM to review both a grant application and your organization's mission, and then provide a list of areas where there are synergies.

Then, you can take the results from that and ask it to write the letter.

Giving the models time to reason through their answers tends to lead to better outcomes.
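
If you ever wire this up programmatically, prompt chaining is just string plumbing: the output of the first call becomes part of the second call's prompt. Here's a sketch; `ask_llm()` is a hypothetical stand-in for whichever chat API you actually use, and the file names are made up.

```python
# Prompt chaining: feed the output of prompt 1 into prompt 2.

def ask_llm(prompt: str) -> str:
    """Hypothetical stand-in: wrap your chat API of choice here."""
    raise NotImplementedError

grant = open("grant_application.txt").read()    # made-up file names
mission = open("mission_statement.txt").read()

# Prompt 1: surface the areas of alignment first...
alignment = ask_llm(
    "You are an expert at filling out grant applications. "
    "Review this grant application and our mission statement, and list "
    "tangible ways we are suited to win this grant.\n\n"
    f"Application:\n{grant}\n\nMission:\n{mission}"
)

# Prompt 2: ...then build the letter on top of that list.
cover_letter = ask_llm(
    "Using this list of alignments, write a cover letter for our grant "
    f"application:\n\n{alignment}"
)
print(cover_letter)
```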

An example of chain of thought prompting

Another prompt engineering trick I frequently reach for is called chain of thought.

With this technique, you are asking an LLM to think about a given problem from three distinct perspectives. You then ask it to act as one of those personas and critique the responses of the other two. Finally, you combine the results into a well-considered and well-rounded answer.

As an example: my son does not like to eat pizza. I know... it bums me out, too.

I provided ChatGPT with a bunch of backstory on my son and what we've tried to do to encourage him to try pizza. Then I said: "Pretend you are a kindergarten teacher, a child psychologist, and a grandparent. As each of those personas, tell me what approach you would take to get my son to eat pizza."

Next, as each persona, I ask it to reflect on the answers of the other personas. For example, the child psychologist persona would consider the kindergarten teacher's and grandparent's perspectives and adjust its own response.

Finally, after all personas have reflected on each other's answers, I have the model summarize the best path forward.

This trick works exceptionally well across several different problems. At work, I use it to consider system changes from the perspective of an engineer, an end user, and a business executive. It can surface insights you may have otherwise missed.
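
Here's that persona trick sketched as code, reusing the hypothetical `ask_llm()` stand-in from the chaining example above: one round of independent answers, one round of critiques, then a synthesis.

```python
# Multi-persona prompting: answer, critique, synthesize.

def ask_llm(prompt: str) -> str:
    """Same hypothetical stand-in as in the chaining sketch."""
    raise NotImplementedError

personas = ["kindergarten teacher", "child psychologist", "grandparent"]
problem = "My son refuses to try pizza. Backstory: ..."  # your context here

# Round 1: each persona answers independently
answers = {
    p: ask_llm(f"Act as a {p}. {problem} What approach would you take?")
    for p in personas
}

# Round 2: each persona reads the others and revises its own answer
revised = {}
for p in personas:
    others = "\n\n".join(f"{q}: {answers[q]}" for q in personas if q != p)
    revised[p] = ask_llm(
        f"You are the {p}. Here is what the other two said:\n\n{others}\n\n"
        "Reflect on their answers and revise your own approach."
    )

# Final round: combine the reflections into one plan
plan = ask_llm(
    "Combine these three perspectives into the best path forward:\n\n"
    + "\n\n".join(f"{p}: {revised[p]}" for p in personas)
)
print(plan)
```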

Tips for Adopting AI

So in order to integrate AI successfully, treat it as a tool that augments, rather than replaces, human judgment.

Every time I fire up an AI assistant, I like to think of it as an eager intern who is exceptionally smart but exceptionally naive. I do not take its output as gospel; rather, I use it as a foundation and build on it from there.

The best way to integrate AI into your workflows is to use it for routine tasks, and keep human oversight for critical decisions.

Finally, I'll take this time to further emphasize that all AI outputs are based on probability, not the truth. Always review and adjust outputs as needed.


Ethical Considerations & Pitfalls: Bias in AI

Alright, we've covered what artificial intelligence is, and we've gotten through some tips for adopting AI into your organization.

Now, let's talk about areas where AI can fall flat.

First: bias.

If you recall, at the beginning of this talk, we described artificial intelligence as being focused on getting computers to be like humans.

Humans are inherently biased, and AI, trained on human-generated data, often reflects this bias. Achieving true “unbiased” AI is a complex, if not impossible, task.

I propose you think of AI in the same context: there is no such thing as an unbiased AI model.

AI models are only as good as the data with which you train them. Data is one of those things you can pretty easily screw up if you aren't attuned to all of the various forms of bias that could impact it.

Examples of Bias in AI (Stereotyping Bias, Measurement Bias, and Selection Bias)

There are many different kinds of bias, but I wanted to highlight three specific forms as a starter:

Stereotyping bias: This occurs when AI models perform less accurately for certain groups due to their underrepresentation or misrepresentation in training data, as seen with YouTube's automatic captions, which struggle with Scottish, Indian, and African American accents.

Measurement bias: This happens when an AI model’s metrics or algorithms lead to systematically skewed outcomes, such as the Apple Card’s algorithm offering men higher credit limits than women with similar financial profiles.

Selection bias: This arises when training data lacks sufficient diversity, causing models to underperform for certain groups; for instance, breast cancer detection AI trained mainly on female patients performs less accurately for male patients.

There are many more forms of bias that you can research on your own, but the main takeaway here is that all systems are subject to bias depending on the data used to train them. For this reason, you can't just rely on the output of an AI-led decision.

Ethical Considerations & Pitfalls: The 'Black Box Problem'

As mentioned earlier, another major issue is the “black box” problem.

Deep learning models are like locked safes—each layer hides its ‘reasoning’ behind many interconnected processes, making it nearly impossible for humans to interpret every decision-making step.

This lack of transparency, especially in high-stakes areas like criminal justice or credit scoring, means we’re left trusting the ‘safe’ without ever seeing inside, creating ethical and practical risks.

Once again, this is a reminder that we can’t just accept AI output as absolute truth; careful consideration and oversight are needed to avoid unintentional discrimination or bias.

Ethical Considerations & Pitfalls: It can’t do everything!

Literally every single time new technology drops, some wise guy emerges from the crowd and says, "well, I can't use [insert new tech] to do [insert obvious use case]".

Earlier in this talk, I led off by saying "AI is for everyone." Notice how I didn't say "AI is for every thing."

Of course you can't use AI for everything! AI is not a magic bullet. You gotta know how to deploy it effectively, and its sweet spot is automating predictable, repetitive tasks.

Yes, wise guy, you are right: you aren't gonna want to deploy AI while leading a camping expedition in the Boundary Waters.

But after you complete your expedition and ask for feedback from the program's participants, you could use AI to process those responses and bucket them into understandable and actionable groups.

Ethical Considerations & Pitfalls: Content is (Literally) Average

If you've been paying attention during this entire talk, you'll notice I keep saying things like "AI is picking the most likely word to finish a sentence" and "machine learning is used to make predictions."

If you are relying on a tool to create the most likely response to something, you'll see quickly that the responses are kinda... average.

This can be advantageous, but it's also something to be aware of. By using output that is average by design, you run the risk of blending into everything else out there. (This, by the way, leads to the rise of slop, which is the AI equivalent of spam).

Now, this may be a trade-off you are willing to accept in many cases. I, for one, often use AI as a therapist to help me make sense of some thoughts swirling around in my head. This works great, but I then take the advice and feedback I get from the model to a human therapist.

The other thing about the content being average: remember how we said that AI doesn't care about truthiness, but rather cares about finding the thing that is most likely to be truthful? This leads to some concerning behavior called "hallucination", where a model confidently makes up facts that aren't actually facts.

You may recall headlines from a year ago when a lawyer used ChatGPT and it hallucinated cases. This sort of thing happens all the time with new technology, especially when it's used by people who aren't properly trained on how to use it (or are swayed by glitzy marketing campaigns which make promises that it can't possibly deliver).

Ethical Considerations & Pitfalls
Mitigation Strategies
- Use AI to assist, but keep human oversight
- Review AI outputs for biases and accuracy
- Make adjustments as needed

Now that you're aware of the pitfalls and risks of using artificial intelligence, how can you mitigate those risks?

Always treat AI as a supportive tool, maintaining human oversight—especially for important decisions where ethics and accuracy are critical.

Always review AI outputs for potential bias and inaccuracies.

Finally, adjust AI-generated content as needed to match your style and objectives. For instance, AI may draft a social media post, but tweaking it to align with your brand's voice adds value.


What's next? Spend ten hours doing tasks with generative AI!

We've covered what AI is, practical tips for adopting it, ethical concerns, and common pitfalls.

So, what's next for you?

Begin by dedicating ten hours to using generative AI tools to build practical familiarity.

Try asking questions in areas you know well to see how AI performs, and notice where you’d add or change things.

Sharing what you learn with your team encourages experimentation and fosters a learning environment.


UnitedHealth says Change Healthcare hack affects over 100 million, the largest-ever US healthcare data breach


🔗 a linked post to techcrunch.com » — originally shared here on

More than 100 million individuals had their private health information stolen during the ransomware attack on Change Healthcare in February, a cyberattack that caused months of unprecedented outages and widespread disruption across the U.S. healthcare sector.

This is the first time that UnitedHealth Group (UHG), the U.S. health insurance provider that owns the health tech company, has put a number of affected individuals to the data breach, after previously saying it anticipated the breach to include data on a “substantial proportion of people in America.”

Really, really hard to feel any sympathy for this organization when you read this a few paragraphs down in the article:

According to its 2023 full-year earnings report, UHG made $22 billion in profit on revenues of $371 billion. [Andrew Witty, CEO of UHC] made $23.5 million in executive compensation the same year.

Let’s say you invest in state-of-the-art workforce development programs, advanced threat detection solutions, zero-trust and identity management solutions, resilient infrastructure, and throw in some R&D.

Let’s even give UHC the benefit of the doubt and assume they have done all of this.

How are you able to walk away with $22 billion after you’ve allowed the PII of nearly a third of Americans (myself included) to walk out the door and into the hands of cybercriminals?

Nobody else is angry that this news isn’t blasted all over this election cycle?

Nobody else thinks we should be holding this conglomerate’s feet to the fire for this breach?

I’m not here to minimize the importance of other issues like border security and women’s reproductive rights, but I haven’t heard any politician make noise about our horribly inefficient healthcare system at all during this election cycle.

Why can’t we stop picking fights with each other and focus on addressing the systemic issues which lead to companies selling us all out in the name of shareholder value?

Continue to the full article


One Finger Salutes Welcome


🔗 a linked post to cupalo.substack.com » — originally shared here on

“Mom, look at THIS!” said her son (age 6) producing a balled-up fist in the air. Then, as if peeling a banana, he pulled out a tiny middle finger. There it was. In the upright and locked position.

“THIS” was none other than the oh-so-satisfying one finger salute.

🖕

“So, what did you do to him?!?” I asked my Jacksonville neighbor Louise between chuckles.

“I wanted to laugh. But, I remained calm. Validated his frustration at not getting a third popsicle. And explained why THIS wasn’t a good expression of anger.”

Louise then shared her belief that kids need to be a little weird and wild at home. That it’s okay for them to get their “crazy out” at home so they can be (slightly higher) functioning individuals out in the world. Kids, she said, need to trust that they will be safe and loved no matter what. That doesn’t mean she doesn’t discipline, she just doesn’t lose her cool over it.

Sounds like some Dr. Becky-style parenting skills in action here.

Lauren goes on to explain how crucial it is for us to have a space we can retreat to and be ourselves.

When my daughter stomps her foot and growls at me like a cartoon character when I ask her to brush her teeth, I can’t help but chuckle and say, “you know kid, I wanna do that all the time, too.”

When my son screams in my face because I make him, uh, get dressed in his Halloween costume to go to his school’s Halloween party, I can’t help but chuckle and say, “I get it, man. It sucks to be told what to do.”

One thing that’s been massively helpful in keeping my anxiety and depression in check is to give myself space to be myself. The full version of myself who doesn’t have to censor his out-there thought process for fear of being misunderstood and ridiculed.

My journal is my number one place for this freedom. This blog is my second.

I just finished up my second week in a job. I emphasize the word “job” because I haven’t really had a job in nearly fifteen years. Being in charge of a business is totally different than working for a business.

Working for a business requires conformity by definition. You can’t be cowboying off and doing your own thing if you want to build a system with repeatable success. I get it.

One way I hope to grow at my new job is to figure out how to maintain my individuality and uniqueness while making meaningful contributions to the collective effort.

In other words: how can I be happy and “myself” being the guy rowing an oar in the bottom of the boat rather than being the guy who pounds on the timpani?

Continue to the full article


making things better


🔗 a linked post to explaining.software » — originally shared here on

Tradeoffs exist; improving one aspect of a system can make other aspects worse. As projects grow, our control over them shrinks. Ugly truths abound, and beauty is a luxury we can rarely afford.

Knowing this, however, does not mean accepting it. Confronted with this dissonance, this ugliness, we inevitably gesture towards a better future. We talk about better design, better practices, better processes. We await better abstractions. We imagine a world in which we cannot help but make something beautiful.

This belief in the future, in an unending ascent towards perfection, is a belief in progress. The flaws in this belief — its internal tensions, the fact that it is closer to a theology than a theory — have been pointed out for centuries. It is, nevertheless, an inescapable part of the software industry. Everything we do, whether design or implementation, is oriented towards an imagined future.

This is a beautiful sentiment about software systems which could easily apply to most any system (like, our political and social systems, for example).

Continue to the full article


How to Figure Out What People Want


🔗 a linked post to every.to » — originally shared here on

If you’re thinking, “Figure out the kinds of sequences that generate good responses,” you’re still looking for essences. You’re seeking a list of words that can make someone excited.

Instead, the process of making something that people want is the process of learning, through experiment and error, to be the kind of person who can generate needs, wants, and jobs in other people. 

This kind of person is one who notices that a new restaurant in their neighborhood has a line out the door. Instead of walking by, they walk in. 

They stop to notice the soft, earthy color palette of its interior decoration, one that evokes a coastal Mediterranean village. They see the way its menu layers in unexpected ingredients like za’atar, cinnamon, and chile as subtle references to other cultures and traditions. They notice the feelings that this sequence of experience evokes in them, the way it feels familiar and also pleasantly surprising. They know that if they linger on these feelings, they’ll be able to evoke them later—for themselves and for others—in a logo design or an article.

I’ve been trying to be more aware of things happening to me lately.

I know I can be in my head, a million miles away from reality unfolding before me. I feel more comfortable there, if I’m honest. Reality can be uncomfortable, not quite right for me.

As it turns out, when you retreat from reality too often, you start to forget that while it can be uncomfortable sometimes, its contents can be incredible.

I’m finding that the moments I am aware of what’s happening around me are when I am the happiest.

And it turns out, paying attention to reality with your own unique perspective can really make a difference for others.

Continue to the full article


It turns out I'm still excited about the web


🔗 a linked post to werd.io » — originally shared here on

I was afraid I had become too cynical to find excitement in technology again. It wasn’t true.

While I’ve grown more cynical about much of tech, movements like the Indieweb and the Fediverse remind me that the ideals I once loved, and that spirit of the early web, aren’t lost. They’re evolving, just like everything else.

One thing that excites me about the web is our ability to communicate effortlessly with other people across the world.

It still feels like magic every time I get an instant message from my friend in Uruguay.

Hell, I spent several hours on video chat with my coworker from Brazil today. How insanely cool is that?

I think I just want to find interesting problems to solve using that tech, which feels a bit like “I have a hammer and I’m looking for nails.”

I’m just grateful that people want to pay me to play with computers all day.

Continue to the full article