all posts tagged 'engineering'

“What have you tried?”


🔗 a linked post to multiline.co » — originally shared here on

Early on, I picked up the habit of checking with him when some technical thing or other wasn’t working the way I expected:

  • “I can’t connect to the thingamabob :(”
  • “The whatchamacallit isn’t working :(”
  • “How do I fix the doohickey?”
  • etc.

Without fail, James’s response would be not an answer but a question, one that has shaped my thinking ever since:

What have you tried?

I just wrote that on a Post-it note and stuck it to my monitor for tomorrow morning.

Continue to the full article


The Dynamic Between Domain Experts & Developers Has Shifted


🔗 a linked post to dbreunig.com » — originally shared here on

During the peak of mobile app madness, iOS and Android developers would often find themselves cornered by friends, relatives, and random people at parties.

“I’ve got a great idea for an app…”

More often than not, this dreaded sentence would be followed by a hard sell when the developer didn’t display adequate enthusiasm. If the developer didn’t act fast and feign the exact right level of approval — enough to communicate they ‘got’ the idea but not so much that they’d be asked to build it — the idea guy would move on to hashing out NDAs, equity allocations, and asking when coding could start.

Recently, I’ve noticed the AI era is a bit different. The balance of power has shifted. Builders need domain experts as much as domain experts need builders.

You can no longer simply copy an app model with a few improvements or obsess over user feedback as you sharpen your prototype towards product-market fit.

To build a differentiated AI product you need training data and examples curated by a domain expert.

I don't think the role of a software engineer is going to go away, but I do think that, personally, I'm not gonna cut it anymore as "just a software engineer."

The real value is in pairing someone who knows how these AI systems work with someone who knows how to get deep with a real world problem.

Continue to the full article



In Praise of “Normal” Engineers


🔗 a linked post to spectrum.ieee.org » — originally shared here on

A lot of technical people got really attached to our identities as smart kids. The software industry tends to reflect and reinforce this preoccupation at every turn, as seen in Netflix’s claim that “we look for the top 10 percent of global talent” or Coinbase’s desire to “hire the top 0.1 percent.” I would like to challenge us to set that baggage to the side and think about ourselves as normal people.

It can be humbling to think of yourself as a normal person. But most of us are, and there is nothing wrong with that. Even those of us who are certified geniuses on certain criteria are likely quite normal in other ways—kinesthetic, emotional, spatial, musical, linguistic, and so on.

Software engineering both selects for and develops certain types of intelligence, particularly around abstract reasoning, but nobody is born a great software engineer. Great engineers are made, not born.

I read this article twice last night. I haven't come across any article that spoke to my massive professional anxieties/impostor syndrome as well as this one.

One of my biggest pet peeves about being around smart people is when they explain things using big words. It feels like it takes so much more effort to understand tough concepts when they're saddled with jargon and ACT words.

I also enjoyed this point about building teams:

We place too much emphasis on individual agency and characteristics, and not enough on the systems that shape us and inform our behaviors.

I believe a whole slew of issues (candidates self-selecting out of the interview process, diversity of applicants, and more) would be improved simply by shifting the focus of hiring away from this inordinate emphasis on hiring the best people and realigning around the more reasonable and accurate right people.

It’s a competitive advantage to build an environment where people can be hired for their unique strengths, not their lack of weaknesses; where the emphasis is on composing teams; where inclusivity is a given both for ethical reasons and because it raises the bar for performance for everyone. Inclusive culture is what meritocracy depends on.

Continue to the full article


Practice Guide for Computer

originally shared here on

Originally adapted from Ron Miller's Advanced Improv Practice Guide, and discovered at the bottom of jyn's incredible blog post titled "i'm just having fun", which is a must-read.

Before starting your daily practice routine, read and seriously consider the following:

A. DAILY AFFIRMATIONS

  1. How fortunate I am that in this life I am one who has been allowed to create beauty with computer.
  2. It is my responsibility to create peace, beauty, and love with computer.

B. I WILL BE KIND TO MYSELF

  1. IT IS ONLY COMPUTER
  2. No matter my level of development in computer, how good or bad I think I am, it is only computer and I am a beautiful person.
  3. I will not compare myself with my colleagues. If they do computer beautifully, I will enjoy it and be thankful and proud that I live in fellowship with them.
  4. There will always be someone with more abilities in computer than my own as there will be those with less.

C. REASONS TO DO COMPUTER

  1. To contribute to the world's spiritual growth.
  2. To contribute to my own self-discovery and spiritual growth.
  3. To pay homage to all the great practitioners of computer, past and present, who have added beauty to the world.

D. RID YOURSELF OF THE FOLLOWING REASONS FOR BEING A PRACTITIONER OF COMPUTER

  1. To create self-esteem
  2. To be "hip"
  3. To manipulate
  4. To get rich or famous

Why We've Tried to Replace Developers Every Decade Since 1969


🔗 a linked post to caimito.net » — originally shared here on

Here’s the paradox that makes this pattern particularly poignant. We’ve made extraordinary progress in software capabilities. The Apollo guidance computer had 4KB of RAM. Your smartphone has millions of times more computing power. We’ve built tools and frameworks that genuinely make many aspects of development easier.

Yet demand for software far exceeds our ability to create it. Every organization needs more software than it can build. The backlog of desired features and new initiatives grows faster than development teams can address it.

This tension—powerful tools yet insufficient capacity—keeps the dream alive. Business leaders look at the backlog and think, “There must be a way to go faster, to enable more people to contribute.” That’s a reasonable thought. It leads naturally to enthusiasm for any tool or approach that promises to democratize software creation.

The challenge is that software development isn’t primarily constrained by typing speed or syntax knowledge. It’s constrained by the thinking required to handle complexity well. Faster typing doesn’t help when you’re thinking through how to handle concurrent database updates. Simpler syntax doesn’t help when you’re reasoning about security implications.

Continue to the full article


Big O


🔗 a linked post to samwho.dev » — originally shared here on

Big O notation is a way of describing the performance of a function without using time. Rather than timing a function from start to finish, big O describes how the time grows as the input size increases. It is used to help understand how programs will perform across a range of inputs.

In this post I'm going to cover 4 frequently-used categories of big O notation: constant, logarithmic, linear, and quadratic. Don't worry if these words mean nothing to you right now. I'm going to talk about them in detail, as well as visualise them, throughout this post.
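To make those four categories concrete, here's a small sketch of a function from each class. These examples are my own illustrations, not taken from the linked post:

```python
# Illustrative examples of the four big O categories: constant,
# logarithmic, linear, and quadratic.

def constant(items):
    # O(1): the work done doesn't grow with the input size.
    return items[0]

def logarithmic(sorted_items, target):
    # O(log n): binary search halves the remaining range each step.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def linear(items, target):
    # O(n): in the worst case, every element is visited once.
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def quadratic(items):
    # O(n^2): the nested loop pairs every element with every other.
    pairs = []
    for a in items:
        for b in items:
            pairs.append((a, b))
    return pairs
```

Doubling the input leaves `constant` unchanged, adds one step to `logarithmic`, doubles the work in `linear`, and quadruples it in `quadratic`.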

I have a minor in computer science, and I remember sitting through many explanations of the importance of Big O notation, yet it hasn’t really mattered much in my career until recently.

If you have heard of Big O but aren’t clear on how it works, give this post a shot. It contains a lot of great visualizations to help drive the point home.

Continue to the full article


The Curse of Knowing How, or; Fixing Everything


🔗 a linked post to notashelf.dev » — originally shared here on

Too many bangers to pull out of this one. Well worth a full read. But here are a couple of juicy pull quotes to whet your palate:

Programming lures us into believing we can control the outside events. That is where the suffering begins. There is something deeper happening here. This is not just about software.

I believe sometimes building things is how we self-soothe. We write a new tool or a script because we are in a desperate need for a small victory. We write a new tool because we are overwhelmed. Refactor it, not because the code is messy, but your life is. We chase the perfect system because it gives us something to hold onto when everything else is spinning.


I’m trying to let things stay a little broken. Because I’ve realized I don’t want to fix everything. I just want to feel OK in a world that often isn’t. I can fix something, but not everything.

You learn how to program. You learn how to fix things. But the hardest thing you’ll ever learn is when to leave them broken.

And maybe that’s the most human skill of all.

Continue to the full article


Experience Doesn't Stack: The Myth of Collective Knowledge


🔗 a linked post to joanwestenberg.com » — originally shared here on

We should stop worshipping numerical comfort. Twenty partial views don’t make a whole picture. They make noise. They make an echo. They create professionalized, sanitized, panel-approved blindness.

If you're lucky enough to know someone with twenty years of scar tissue in a domain, listen. Don't just ask what they know. Ask what they've unlearned. Ask what they stopped saying because nobody understood. That's where the signal lives.

“Worshipping numerical comfort” is a fantastic phrase that I’ll be pondering here for the next few days.

Continue to the full article


AI assisted search-based research actually works now


🔗 a linked post to simonwillison.net » — originally shared here on

I’m writing about this today because it’s been one of my “can LLMs do this reliably yet?” questions for over two years now. I think they’ve just crossed the line into being useful as research assistants, without feeling the need to check everything they say with a fine-tooth comb.

I still don’t trust them not to make mistakes, but I think I might trust them enough that I’ll skip my own fact-checking for lower-stakes tasks.

This also means that a bunch of the potential dark futures we’ve been predicting for the last couple of years are a whole lot more likely to become true. Why visit websites if you can get your answers directly from the chatbot instead?

The lawsuits over this started flying back when the LLMs were still mostly rubbish. The stakes are a lot higher now that they’re actually good at it!

I can feel my usage of Google search taking a nosedive already. I expect a bumpy ride as a new economic model for the Web lurches into view.

I keep thinking of the quote that “information wants to be free”.

As the capabilities of open-source LLMs continue to increase, I keep finding myself wanting a locally-running model within arm's reach any time I'm near a computer.

How many more cool things can I accomplish with computers if I always have a “good enough” answer to virtually any question at my disposal, for free?

Continue to the full article