all posts tagged 'artificial intelligence'

All my beautiful AI children


🔗 a linked post to linksiwouldgchatyou.substack.com » — originally shared here on

Because of my experience with Remini — and despite my natural and deep-seated antipathy toward tech solutionism of all sorts — it’s impossible for me to dismiss or decry grief tech out of hand. At present, at least half a dozen high-profile start-ups claim they can train interactive chatbots or video avatars to mimic the personalities of the dead; tech-savvy mourners have also turned several general AI apps, such as Remini and Paradot, to grief tech applications.

These services — marketed under names like Project December, You Only Virtual, HereAfter AI and Seance AI — raise pressing, significant questions around issues like privacy, predatory marketing and consent. What happens if grandma doesn’t want to “live” forever? Or her platform ups the cost of a premium subscription? Other commentators and ethicists — including, just last week, the prominent sociologist Sherry Turkle — have voiced concerns that grief tech blurs the boundary between the living and the dead and locks the bereaved in a sort of limbo. Such critics assume that the bereaved cannot spot the illusion of AI chatbots for themselves, and, moreover, that the bereaved should not indulge themselves in any comforting fantasies about death.

But people take comfort in all sorts of stories; I no longer feel wise enough to judge them for that.

First off, huge respect to Caitlin Dewey for sharing this story. It takes guts to be vulnerable and share something this intimate.

Second, consider me privileged, because I would have never considered miscarriage grief as a use case for artificial intelligence.

People grieve in all sorts of ways. It’s not up to me (or you, or anybody) to judge, but it is up to us to show up for each other and be helpful.

I know how important journaling is to my mental health. There’s something cathartic about forcing yourself to stare directly at your thoughts, putting words to the vague feelings that gnaw at you incessantly.

I can only imagine how cathartic it may feel to someone to see a rendering of what could have been. To give yourself some closure on the possibilities you dreamed for yourself and your future family.

Again, I’m not here to judge or endorse. I find myself once again impressed by how people figure out ways to use technology to deal with their problems.

Continue to the full article


ChatGPT is really bad at generating code to interact with GPT-4

originally shared here on

Lately, I've been working on several projects which interface with the OpenAI APIs.

Since this is 2024 and I'm far too lazy to open the official API documentation and write code myself, my first step in generating code is to ask ChatGPT's 4o model something like this:

Write a Python script which uses the latest OpenAI APIs. I'd like the function to accept a custom prompt and a custom top-k value. It should return the prompt's response.

It returns something like this:

import openai

def generate_text(prompt, top_k=5):
    # Define your OpenAI API key here
    openai.api_key = 'your-api-key'

    # Define parameters for the completion
    completion = openai.Completion.create(
        engine="text-davinci-003",
        prompt=prompt,
        max_tokens=100,
        top_k=top_k,
        n=1,
        stop=None
    )

    # Return the generated text
    return completion.choices[0].text.strip()

# Example usage
prompt = "Once upon a time"
top_k = 10
generated_text = generate_text(prompt, top_k)
print("Generated Text:", generated_text)

Which looks super great, right?

Except when you go to run it in a terminal, you get this:

openai.lib._old_api.APIRemovedInV1: 

You tried to access openai.Completion, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.

You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface. 

Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`

A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742

If you paste that error into the chat, it'll respond with this:

It seems like there has been a major update in the OpenAI Python library since I last checked. Let me update the script to use the latest version of the library:

And then it spits this out:

import openai

def generate_text(prompt, top_k=5):
    # Define your OpenAI API key here
    openai.api_key = 'your-api-key'

    # Define parameters for the completion
    response = openai.Completion.create(
        engine="davinci-codex",
        prompt=prompt,
        max_tokens=100,
        top_k=top_k,
        n=1
    )

    # Return the generated text
    return response.choices[0].text.strip()

# Example usage
prompt = "Once upon a time"
top_k = 10
generated_text = generate_text(prompt, top_k)
print("Generated Text:", generated_text)

Notice how it still uses the openai.Completion method!

I've tried prompting it with something like:

You're still using the openai.Completion method. Reference the official, up-to-date API documentation to get the latest call that replaces the openai.Completion method.

And it responds with:

Apologies for that oversight. Let me update the script to use the latest API method based on the official documentation:

And here's the code it generates:

import openai

def generate_text(prompt, top_k=5):
    # Define your OpenAI API key here
    openai.api_key = 'your-api-key'

    # Define parameters for the completion
    response = openai.Completion.create(
        engine="davinci-codex",
        prompt=prompt,
        max_tokens=100,
        top_k=top_k,
        n=1,
        temperature=0.7,
        stop=None
    )

    # Return the generated text
    return response.choices[0].text.strip()

# Example usage
prompt = "Once upon a time"
top_k = 10
generated_text = generate_text(prompt, top_k)
print("Generated Text:", generated_text)

Which, again, still uses the openai.Completion method!

I've noticed this sort of "oops, I screwed up, here's the exact same thing I just outputted" behavior appearing more frequently when I use the new GPT-4o model.

If I use GPT-4 through my ChatGPT Plus subscription, I still run into the issue where the first response references the deprecated method, but if I inform it of its mistake and provide a link to the official documentation, it'll access the web and try to offer something different. (It still generates unusable code, lol, but at least it's trying something different!)

When it comes to Python and Rails code, I'm seeing that the GPT-4o model is not as good at code generation as the previous GPT-4 model.

It feels like the model is always in a rush to generate something rather than taking its time and getting it correct.

It also seems to be biased toward relying on its training data for an answer rather than taking a peek at the internet for a better one, even when you specifically tell it to check the web.

In many cases, this speed/accuracy tradeoff makes sense. But when it comes to code generation (and specifically when it generates code for OpenAI's own APIs), I wish it took its time and reasoned about why the code it wrote doesn't work.
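
For reference, here's roughly the code I was hoping ChatGPT would produce, based on the migration guide that the error message links to. Treat it as a sketch rather than gospel: it assumes the openai>=1.0.0 SDK, and note that as far as I can tell the API exposes top_p rather than top_k, so the parameter it kept generating was never valid to begin with.

import os
from openai import OpenAI

# A sketch against the openai>=1.0.0 interface; assumes OPENAI_API_KEY is set.
# Note: the API supports top_p (nucleus sampling); top_k is not a parameter.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def generate_text(prompt, top_p=0.9):
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=100,
        top_p=top_p,
    )
    # Return the generated text
    return response.choices[0].message.content.strip()

# Example usage
print("Generated Text:", generate_text("Once upon a time"))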


Generative AI is for the idea guys


🔗 a linked post to rachsmith.com » — originally shared here on

When I started working in tech in the early 2010s I began running into idea guys. Upon discovering I was a developer, they would get very excited and pitch me their big idea for an app/website/service to see what I thought.

After receiving a polite reaction they would often say something like: “I think it would do really well, if I could only figure out how to get it made”. Like the idea was the main thing that mattered, and the pesky task of actually building it was an insignificant little bump along the road to success. At this point I would usually tell them that ideas are worth nothing, until they are implemented.

This post is brilliant. Sometimes, I use generative AI like cardboard to prototype an idea.

Testing out an idea is the first step toward doing the countless hours of work needed to make an idea a real thing.

Since I don’t think I’ve said it on this blog yet: “Ideas are worthless. Execution is everything.”

(I also love Rachel’s footnote after the word “guys”, saying “it was literally always guys” 😂)

Continue to the full article


A Plea for Sober AI


🔗 a linked post to dbreunig.com » — originally shared here on

Below all this hype, there’s a quiet revolution happening. I keep meeting new companies and seeing new products that make practical use of LLMs. They use them to solve narrow problems and prioritize consistency and efficiency over giant, all-singing, all-dancing models. I keep meeting people who are amplifying their capacity and abilities by handing simple, mundane tasks to AIs, which they then refine and improve. AI-assisted coding has been a boon for early-stage start-ups, which are shipping products with a maturity and polish beyond their years.

This is the world of Sober AI.

Sober AI is an incredible concept.

There’s so much progress that people can make using this tool. While it might not be able to solve all our problems, knowing how to wield it is a superpower in this economy.

Continue to the full article


AI is not like you and me


🔗 a linked post to zachseward.com » — originally shared here on

Aristotle, who had a few things to say about human nature, once declared, "The greatest thing by far is to have a command of metaphor," but academics studying the personification of tech have long observed that metaphor can just as easily command us. Metaphors shape how we think about a new technology, how we feel about it, what we expect of it, and ultimately how we use it.

I love metaphors. I gotta reflect on this idea a bit more.

There is something kind of pathological going on here. One of the most exciting advances in computer science ever achieved, with so many promising uses, and we can't think beyond the most obvious, least useful application? What, because we want to see ourselves in this technology?

Meanwhile, we are under-investing in more precise, high-value applications of LLMs that treat generative A.I. models not as people but as tools. A powerful wrench to create sense out of unstructured prose. The glue of an application handling messy, real-world data. Or a drafting table for creative brainstorming, where a little randomness is an asset, not a liability. If there's a metaphor to be found in today's AI, you're most likely to find it on a workbench.

Bingo! AI is a tool, not a person.

The other day, I made a joke on LinkedIn about the easiest way for me to spot a social media post that was written with generative AI: the phrase “Exciting News!” alongside one of these emojis: 🚀, 🎉, or 🚨.

It’s not that everyone who uses those things necessarily used ChatGPT.

It’s more like how I would imagine a talented woodworker would be able to spot a rookie mistake in a novice’s first attempt at a chair.

And here I go, using a metaphor again!

Continue to the full article


AI isn't useless. But is it worth it?


🔗 a linked post to citationneeded.news » — originally shared here on

There are an unbelievable number of points Molly White makes with which I found myself agreeing.

In fact, I feel like this is an exceptionally accurate perspective on the current state of AI, and LLMs in particular. If you’re curious about AI, give this article a read.

A lot of my personal fears about the potential power of these tools come from the speculation LLM CEOs make about their forthcoming updates.

And I don’t think that fear is completely unfounded. I mean, look at what tools we had available in 2021 compared to April 2024. We’ve come a long way in three years.

But right now, these tools are quite hard to use without spending a ton of time learning their intricacies.

The best way to fight fear is with knowledge. Knowing how to wield these tools helps me deal with my fears, and I enjoy showing others how to do the same.

One point Molly makes about the generated text made me laugh out loud:

I particularly like how, when I ask them to try to sound like me, or to at least sound less like a chatbot, they adopt a sort of "cool teacher" persona, as if they're sitting backwards on a chair to have a heart-to-heart. Back when I used to wait tables, the other waitresses and I would joke to each other about our "waitress voice", which were the personas we all subconsciously seemed to slip into when talking to customers. They varied somewhat, but they were all uniformly saccharine, with slightly higher-pitched voices, and with the general demeanor as though you were talking to someone you didn't think was very bright. Every LLM's writing "voice" reminds me of that.

“Waitress voice” is how I will classify this phenomenon from now on.

You know how I can tell when my friends have used AI to make LinkedIn posts?

When all of a sudden, they use emoji and phrases like “Exciting news!”

It’s not even that waitress voice is a negative thing. After all, we’re expected to use our waitress voices in social situations when we don’t intimately know somebody.

Calling a customer support hotline? Shopping in person for something? Meeting your kid’s teacher for the first time? New coworker in their first meeting?

All of these are situations in which I find myself using my own waitress voice.

It’s a safe play for the LLMs to use it as well when they don’t know us.

But I find one common thread among the things AI tools are particularly suited to doing: do we even want to be doing these things? If all you want out of a meeting is the AI-generated summary, maybe that meeting could've been an email. If you're using AI to write your emails, and your recipient is using AI to read them, could you maybe cut out the whole thing entirely? If mediocre, auto-generated reports are passing muster, is anyone actually reading them? Or is it just middle-management busywork?

This is what I often brag about to people when I speak highly of LLMs.

These systems are incredible at the BS work. But they’re currently terrible at the stuff humans are good at.

I would love to live in a world where the technology industry widely valued making incrementally useful tools to improve people's lives, and were honest about what those tools could do, while also carefully weighing the technology's costs. But that's not the world we live in. Instead, we need to push back against endless tech manias and overhyped narratives, and oppose the "innovation at any cost" mindset that has infected the tech sector.

Again, thank you, Molly White, for printing such a poignant manifesto, seeing as I was having trouble articulating one of my own.

Innovation and growth at any cost are concepts which have yet to lead to a markedly better outcome for us all.

Let’s learn how to use these tools to make all our lives better, then let’s go live our lives.

Continue to the full article


Happy 20th Anniversary, Gmail. I’m Sorry I’m Leaving You.


🔗 a linked post to nytimes.com » — originally shared here on

I am grateful — genuinely — for what Google and Apple and others did to make digital life easy over the past two decades. But too much ease carries a cost. I was lulled into the belief that I didn’t have to make decisions. Now my digital life is a series of monuments to the cost of combining maximal storage with minimal intention.

I have thousands of photos of my children but few that I’ve set aside to revisit. I have records of virtually every text I’ve sent since I was in college but no idea how to find the ones that meant something. I spent years blasting my thoughts to millions of people on X and Facebook even as I fell behind on correspondence with dear friends. I have stored everything and saved nothing.

This is an example of what AI, in its most optimistic state, could help us with.

We already see companies doing this. In the Apple ecosystem, the Photos widget is perhaps the best piece of software they’ve produced in years.

Every single day, I am presented with a slideshow of a friend who is celebrating their birthday, a photo of my kids from this day in history, or a memory that fits with an upcoming event.

All of that is powered by rudimentary¹ AI.

Imagine what could be done when you unleash a tuned large language model on our text histories. On our photos. On our app usage.

AI is only as good as the data it is provided. We’ve been trusting our devices with the most intimate and vulnerable parts of ourselves for two decades.

This is supposed to be the payoff for the last twenty years of surveillance capitalism, I think?

All those secrets we share, all of those activities we’ve done online for the last twenty years: this will be used to somehow make our lives better?

The optimistic take is that we’ll receive better auto-suggestions for text responses to messages that sound more like us. We’ll receive tailored traffic suggestions based on the way we drive. We’ll receive a “long lost” photo of our kid from a random trip to the museum.

The pessimistic take is that we’ll give companies the exact words which will cause us to take action. Our own words will be warped to get us to buy something we’ve convinced ourselves we need.

My hunch is that both takes will be true. We need to be smart enough to know how to use these tools to help ourselves and when to put them down.

I haven’t used Gmail as my primary email for years now², but this article is giving me more motivation to finally pull the plug and shrink my digital footprint.

This is not something the corporations did to me. This is something I did to myself. But I am looking now for software that insists I make choices rather than whispers that none are needed. I don’t want my digital life to be one shame closet after another. A new metaphor has taken hold for me: I want it to be a garden I tend, snipping back the weeds and nourishing the plants.

My wife and I spent the last week cleaning out our garage. It reached the point where the clutter accumulated so much that you could only park one car in it, strategically aligned so you could squeeze through a narrow pathway and open a door.

As of this morning, we donated ten boxes of items and are able to comfortably move around the space. While there is more to be done, the garage now feels more livable, useful, and enjoyable to be inside.

I was able to clear off my workbench and mount a pennant above it. The pennant is autographed by the entire starting defensive line of the 1998 Minnesota Vikings.

Every time I walk through my garage, I see it hanging there and it makes me so happy.

Our digital lives should be the same way.

My shame closet is a 4-terabyte hard drive containing every school assignment since sixth grade, every personal webpage I’ve ever built, multiple sporadic backups of various websites I am no longer in charge of, and scans of documents that ostensibly may mean something to me some day.

Scrolling through my drive, I’m presented with a completely chaotic list that is too overwhelming to sort through.

Just like how I cleaned out my garage, I ought to do the same to this junk drawer.

I’ll return to Ezra’s garden metaphor here: keep a small, curated garden that contains the digital items which are truly important and meaningful to you. Prune the rest.

(Shout out to my friend Dana for sharing this with me. I think she figured out my brand.)


  1. By today’s standards.

  2. I use Fastmail. You should give it a try (that link is an affiliate link)! 

Continue to the full article


Confessions of a Viral AI Writer


🔗 a linked post to wired.com » — originally shared here on

Lately, I’ve sometimes turned to ChatGPT for research. But I’ve stopped having it generate prose to stand in for my own. If my writing is an expression of my particular consciousness, I’m the only one capable of it. This applies, to be clear, to GPT-3’s line about holding hands with my sister. In real life, she and I were never so sentimental. That’s precisely why I kept writing over the AI’s words with my own: The essay is equally about what AI promises us and how it falls short. As for Sudowrite’s proposal to engineer an entire novel from a few keywords, forget it. If I wanted a product to deliver me a story on demand, I’d just go to a bookstore.

But what if I, the writer, don’t matter? I joined a Slack channel for people using Sudowrite and scrolled through the comments. One caught my eye, posted by a mother who didn’t like the bookstore options for stories to read to her little boy. She was using the product to compose her own adventure tale for him. Maybe, I realized, these products that are supposedly built for writers will actually be of more interest to readers.

I can imagine a world in which many of the people employed as authors, people like me, limit their use of AI or decline to use it altogether. I can also imagine a world—and maybe we’re already in it—in which a new generation of readers begins using AI to produce the stories they want. If this type of literature satisfies readers, the question of whether it can match human-produced writing might well be judged irrelevant.

I think this is a good example of exactly why software engineers are in trouble for many roles.

People hire nerds to get computers to do things for them. Things that are not especially novel to those of us who program computers, but extremely valuable to those who cannot.

Will AI be able to create bespoke, artisanal software better than a creative engineer?

Will AI be able to create bespoke, artisanal music better than a creative musician?

Maybe. Maybe not.

But it will surely be able to create adequate software and music faster than humans can.

I’m afraid the days of being paid insane amounts of money because I can get computers to do what I want are approaching an end.

Continue to the full article


What I Do When I Can’t Sleep


🔗 a linked post to every.to » — originally shared here on

Listening to him name exactly what he liked about the wine turned up the hairs on the back of my neck. Suddenly, my own experience of the wine felt deeper and more satisfying. I wanted to know: How was he able to identify these flavors?

He told me the secret was to blindly smell things and try to label them. You see, the part of your brain that’s responsible for smells is naturally mute. It’s called the olfactory bulb, and it’s an ancient fist of neurons just behind your eyes. It has only indirect backroad connections to the parts of the brain that control language, like Broca’s area. So, even though you might have an easy time knowing whether or not you like a scent, it’s not easy to label that scent with a word. It’ll feel like groping through a dark closet for something you know is there, but can’t quite find.

I did not know this, and now I will be nerding out on ways to train my taste buds.

But this article isn’t just about developing your sense of taste. It’s also about defining your style.

In much the same way the author describes how he is, for the first time in his life, articulating what he likes as a writer, I am starting to do the same with several areas of my life.

Case in point: I have a huge collection of quotes that you can see on the main page of this site.

I just fed all of them through ChatGPT with this prompt:

I’ve attached a series of quotes which are meaningful to me. First off, tell me the consistent themes you recognize across all of the quotes. Then, tell me what this says about the things that inspire me.

Here is the result:

Across all the quotes, several consistent themes emerge:

  1. Growth and Self-improvement: Many quotes focus on the importance of personal growth, overcoming challenges, and continuous improvement.
  2. Perseverance and Resilience: There's a strong theme of enduring hardship, staying strong in the face of adversity, and the importance of perseverance.
  3. Wisdom and Perspective: The quotes often reflect on acquiring wisdom, gaining perspective, and understanding life's deeper meanings and truths.
  4. Happiness and Contentment: There's an emphasis on finding happiness, being content with what one has, and the pursuit of joy in life.
  5. Community and Relationships: Several quotes highlight the value of community, the importance of relationships, and the strength that comes from vulnerability and connection with others.

These themes suggest that you are inspired by ideas related to personal growth, resilience in the face of adversity, the pursuit of wisdom and understanding, the importance of happiness and contentment, and the value of community and relationships. It paints a picture of someone who values both internal development and external connections, seeking both personal fulfillment and meaningful interactions with others.

:extremely Marge Simpson voice: “Well, duh.”

(For real though, pretty awesome that we have a tool that is exceptionally good at summarizing large amounts of data and picking out patterns throughout the dataset.)
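
(If you'd rather script this than paste everything into ChatGPT, here's a minimal sketch using the openai>=1.0.0 Python SDK. The quotes.txt file is hypothetical; assume one quote per line and an OPENAI_API_KEY in the environment.)

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# quotes.txt is a hypothetical file with one meaningful quote per line
with open("quotes.txt") as f:
    quotes = f.read()

prompt = (
    "I've attached a series of quotes which are meaningful to me. "
    "First off, tell me the consistent themes you recognize across all of the quotes. "
    "Then, tell me what this says about the things that inspire me.\n\n" + quotes
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)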

Continue to the full article


The Robot Report #1 — Reveries


🔗 a linked post to randsinrepose.com » — originally shared here on

Whenever I talk about a knowledge win via robots on the socials or with humans, someone snarks, “Well, how do you know it’s true? How do you know the robot isn’t hallucinating?” Before I explain my process, I want to point out that I don’t believe humans are snarking because they want to know the actual answer; I think they are scared. They are worried about AI taking over the world or folks losing their job, and while these are valid worries, it’s not the robot’s responsibility to tell the truth; it’s your job to understand what is and isn’t true.

You’re being changed by the things you see and read for your entire life, and hopefully, you’ve developed a filter through which this information passes. Sometimes, it passes through without incident, but other times, it’s stopped, and you wonder, “Is this true?”

Knowing when to question truth is fundamental to being a human. Unfortunately, we’ve spent the last forty years building networks of information that have made it pretty easy to generate and broadcast lies at scale. When you combine the internet with the fact that many humans just want their hopes and fears amplified, you can understand why the real problem isn’t robots doing it better; it’s the humans getting worse.

I’m working on an extended side quest, and in the past few hours of pairing with ChatGPT, I’ve found myself constantly second-guessing a large portion of the decisions and code that the AI produced.

This article pairs well with this one I read today about a possible social exploit that relies on frequently hallucinated package names.

Simon Willison writes:

Bar Lanyado noticed that LLMs frequently hallucinate the names of packages that don’t exist in their answers to coding questions, which can be exploited as a supply chain attack.

He gathered 2,500 questions across Python, Node.js, Go, .NET and Ruby and ran them through a number of different LLMs, taking note of any hallucinated packages and whether any of those hallucinations were repeated.

One repeat example was “pip install huggingface-cli” (the correct package is “huggingface[cli]”). Bar then published a harmless package under that name in January, and observed 30,000 downloads of that package in the three months that followed.

I’ll be honest: during my side quest here, I’ve 100% blindly run npm install on packages without double-checking the official documentation.
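
A small guardrail I should adopt: before installing an unfamiliar package, check that it actually exists in the registry and has some release history. Here's a minimal sketch against the public npm registry endpoint (the URL pattern is the standard one, but verify the details yourself before relying on it):

import json
import sys
import urllib.error
import urllib.request

def npm_package_exists(name):
    # Look the package up in the public npm registry before running npm install
    url = f"https://registry.npmjs.org/{name}"
    try:
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
            versions = len(data.get("versions", {}))
            print(f"{name}: found, {versions} published version(s)")
            return True
    except urllib.error.HTTPError as error:
        if error.code == 404:
            print(f"{name}: not found; possibly a hallucinated package name")
            return False
        raise

# Example usage: python check_package.py some-package-name
if __name__ == "__main__":
    npm_package_exists(sys.argv[1])

Existence alone isn't proof of safety, of course; as Bar's experiment shows, an attacker can register the hallucinated name first. But a brand-new package with one version and no history is at least a red flag worth noticing.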

These large language models truly are mirrors to our minds, showing all sides of our personalities, from our most fit to our most lazy.

Continue to the full article