NOTE: AI did not generate any of the content on this page.

AI is EVERYWHERE these days. Since the launch of ChatGPT and its ilk, that seems to be all anyone in the media can talk about, particularly when discussing technology. I have mixed feelings about the rise of the robots. AI usage at this point mainly falls into 2 categories: 1) doing the grunt work, and 2) doing the creative work.

My workplace has embraced AI in a big way. All of us in engineering use agentic AI IDEs to increase the pace of innovation, and across the entire company we use general-purpose chat AI tools for various research purposes. Now that I’ve invested more heavily in these tools, I’m even more torn on using them.

Grunt Work

AI is great for doing grunt work. By grunt work I mean the tedious, mostly administrative tasks that help you know where you are and where you’re going and when you’ll get there, but don’t actually move the work forward. As an example, my team was suddenly tasked with a large, complex, high-priority infrastructure project this week with many moving parts. I’m on the SRE team, so if things don’t go well it can have a big impact on the greater engineering team’s ability to innovate and deploy new products and bugfixes. No pressure.

We’re a Jira-using organization, so this new project was set up in Jira as an epic with many associated tasks. I wanted to discover what a reasonable timeline to accomplish everything might look like. So, as a team, we estimated the time to work on each task and made sure to link tickets as blockers - if ticket B could not start until ticket A was complete, that affects the timeline. Then I let Cursor loose - I had it read the epic and build a Gantt chart showing the estimated best-case timeline, current status, and other relevant implementation details. Our original eyeball estimate: we could do this in 2 weeks. The Gantt chart showed that was a best-case scenario, but also that we could get the critical tasks done within those 2 weeks and push the tasks required for sustainability to the following week. Win number one for the AI agent.
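
The math behind that best-case timeline is simpler than it sounds: a task’s earliest start is the latest finish among its blockers. Here’s a minimal Python sketch of that calculation - the ticket keys and estimates are invented for illustration, and it assumes unlimited parallelism and no circular blocker links, which is what makes it “best case.” This isn’t what Cursor actually did under the hood, just the scheduling idea it was applying.

```python
from dataclasses import dataclass, field

# Hypothetical tasks mirroring a Jira epic: each has a day estimate and
# a list of blockers (tickets that must finish before it can start).
@dataclass
class Task:
    key: str
    estimate_days: int
    blockers: list = field(default_factory=list)

def best_case_finish(tasks: dict) -> dict:
    """Earliest finish day for every task, assuming unlimited
    parallelism and an acyclic blocker graph."""
    finish = {}

    def earliest_finish(key: str) -> int:
        if key not in finish:
            task = tasks[key]
            # A task can start only once all of its blockers are done.
            start = max((earliest_finish(b) for b in task.blockers), default=0)
            finish[key] = start + task.estimate_days
        return finish[key]

    for key in tasks:
        earliest_finish(key)
    return finish

# Invented ticket keys and estimates for illustration.
epic = {
    "INFRA-1": Task("INFRA-1", estimate_days=2),
    "INFRA-2": Task("INFRA-2", estimate_days=3, blockers=["INFRA-1"]),
    "INFRA-3": Task("INFRA-3", estimate_days=1, blockers=["INFRA-1"]),
    "INFRA-4": Task("INFRA-4", estimate_days=4, blockers=["INFRA-2", "INFRA-3"]),
}

finishes = best_case_finish(epic)
print(finishes)  # per-ticket earliest finish days
print(f"best case: {max(finishes.values())} working days")
```

The longest chain through the blocker graph is the critical path - anything on it that slips pushes the whole timeline, which is exactly what our eyeball estimate couldn’t show.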

Then it was time to start work on the project. Like many engineers, my first instinct is to say to myself, “I know how to accomplish this” and start doing the work. This time I took a step back and used my new AI companion to read the Jira ticket I was currently working on and create a requirements document. This helps my team improve discipline around well-defined Jira tickets, and it helps me think about the entire solution before starting work. The first task I picked up was expected to take 2 days to complete, and I didn’t write a single line of code on the first day. I just chatted back and forth with the AI agent. Reading the initial requirements assessment, I was able to see some issues - I told the agent and had it adjust the requirements appropriately. Over and over and over and OVER again. Once I was done with that, I just told the agent to do the work, and it did. Then during code review a separate AI bot found a couple of issues, so I was able to fix those before pulling the trigger and deploying.

Overall I found the entire process quite satisfying. I got to concentrate on the design work, not on whether the syntax was correct. I didn’t have to figure out where in the code base the necessary components lived. I ended up with a better final implementation than I would have otherwise, and I got to do the interesting work.

Creative Work

When ChatGPT first took the world by storm, the public went wild trying to get it to do new things. Very rapidly we started getting social media posts generated by chat bots, and not long after that the images arrived. Sure, it was easy to tell that the picture of the guy with 6 fingers and 3 legs was AI generated, but quality continues to improve, making it increasingly difficult to differentiate from the real thing. I’m not going to get into how this gets weaponized by a variety of politicians and robocall scammers, although those are very real problems. I’m more interested in what that does to and for real people out in the world.

I’ve always been involved in creative endeavors. Outside of my day job I have been known to sing and dance and act and fight, all for the entertainment of others (and frankly for my own entertainment as well). I also enjoy writing, sometimes fiction and sometimes non-fiction like this, despite not doing it nearly as much as I would like. Now we’re seeing increasing occurrences of books and other content generated (I hesitate to call it “written”) by AI. I find this quite distressing.

I have said for a long time that AI can generate something that’s… fine. It will generally be average content without exploring new ground or doing anything unexpected. It may be entertaining to those who haven’t read the source material that the LLM was trained on. Translate that as “the younger generation growing up with it.” This is not a dig on the younger generation - it’s that way for all of us. How many times have you heard a band cover a song without realizing it’s a cover? I’ve seen this in my own child (about to go off to college) - much of the music they listen to has a great deal of computer-generated content, and it lacks variety, innovation, and soul (don’t tell them I said that).

It requires the human element to generate something truly great. Human-generated content is a result of that person’s (or group’s) entire lived experience. That’s not to say that anything generated by humans will be great - there are plenty of examples of terrible art out there. (I’m reminded of the painting of a famous celebrity I once saw donated to a charity auction. We only knew who it was because it was written on the back. It looked like he was wearing a football helmet - that was his hair.) What I’m saying is that without that spark of humanity, greatness will never come. We never would have gotten Bohemian Rhapsody if Freddie Mercury and Queen hadn’t defied the powers-that-be telling them they couldn’t. Or think about the impressionist movement in painting, or the use of dystopian storytelling such as 1984 or The Handmaid’s Tale to explore authoritarianism.

Publishers need content. That’s how they survive, particularly in this increasingly fragmented media landscape, with attention spans shortened by Twitter and the like and increasing demands on our time. Being able to publish content quickly and cheaply helps their business. The so-called authors who submit this content simply like the idea of writing rather than the process itself. Writing is an outlet for exploration and entertainment - it helps the author explore their own ideas and opinions and the world around them. Books generated by AI bypass that entire process, so the only perspective we as readers get is that of an amalgam of all the content previously consumed by that LLM. We as readers don’t get anything new, and the author doesn’t get anything from it beyond (if they’re lucky) notoriety and a paycheck. At least they can avoid the anxiety of taking that risk and exposing some part of themselves to the world at large.

The Future

Despite my cautious enthusiasm for using AI tools for software engineering work, I have concerns for the future. Business incentives are to do more, do it faster, bring products to market sooner. Focus on quarterly earnings and shareholder value. I have long held that this approach is short-sighted and does not position the company for long-term sustainability. The aphorism “penny wise, pound foolish” comes to mind.

Admittedly we are in the early days, but the current trend is to have AI write all the code and have engineers review it. Reading code can be harder than writing it in many ways, as it relies on the engineer to see what might be wrong. Seeing that the code zigs when it should zag is one thing, but what if a piece is completely left out? Seeing what’s not there is much more difficult, and part of learning to do that is writing code yourself. What does this portend for the software engineering workforce? In order to meet quarterly earnings goals, will the workforce trend towards cheaper junior engineers, since they won’t actually be writing code? How does this trend alter the path for advancement from junior to senior and beyond? Worse yet, will this lead to shrinking the workforce entirely as pay rates and employment opportunities decline, leading to fewer people entering the pipeline through college education?
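
To make “seeing what’s not there” concrete, here’s a contrived Python sketch of the kind of code that sails through review. Every line that was written is verifiably correct; the bug lives entirely in the lines that were never written.

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: float

def transfer(amount: float, source: Account, dest: Account) -> None:
    # A reviewer can confirm the debit and credit hit the right
    # accounts - the "zig when it should zag" class of bug is visible.
    # What doesn't appear in the diff: no check that amount is
    # positive, no check that source can cover it, no protection
    # against concurrent transfers.
    source.balance -= amount
    dest.balance += amount

a, b = Account(balance=100.0), Account(balance=0.0)
transfer(250.0, a, b)        # silently overdraws the source account
print(a.balance, b.balance)  # -150.0 250.0
```

An engineer who has written and debugged code like this knows to ask about the missing checks; one who has only ever reviewed it may not.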

Personally, I am insulated to an extent from these trends. I’m late in my career, and while this potentially tectonic shift within the industry may make the road rocky, I don’t anticipate the road ahead of me disappearing entirely before I’m done. I am concerned about what this means 5-10 years out. Transitions are always hard, but folks just starting college now are doing so under adverse conditions, and by the time they finish they may end up with degrees that employers no longer want. I have yet to hear any business leaders consider what this might mean for their business that far out.

There are two primary aspects to engineering:

  • Doing things right - AI can absolutely do this. Right now it needs some coaching to get there, but give it time.
  • Doing the right things - AI does not know what “the right things” are. Those are business decisions and can change rapidly, particularly in smaller companies. We need humans here to wrangle the AI and keep it on track. We as humans need to keep this one in mind as we move forward.