Are the "good at Google" jobs safe in the era of AI?

Image created with Midjourney. Prompt by Daniel Lucas.

The first half of my career looked like a scatter plot of technical roles: tech support, systems administrator, network engineer, cybersecurity engineer, database administrator, software implementer, and so on. Whether I was helping individual users troubleshoot video driver issues, configuring a new SAN for a virtualization cluster, or migrating a global enterprise off legacy ERP systems, a recurring thought haunted me:

Why are they paying me so much to Google stuff?

I was not alone. My peers and I would often weigh in on this question, usually with humor but also with an unspoken agreement that we would keep this glitch in the economy hidden from those who held the purse strings. Even the software developers with whom I worked side by side throughout my career had similar suspicions about their good fortune to be alive in the time of online communities like Stack Overflow.

They called us geniuses, these purse-string holders. They seemed genuinely impressed by our abilities.

And we felt safe.

But we knew our mystique was an illusion. Our superpower was not innate genius or a massive store of knowledge. It was only that we lived our lives by a few simple yet strongly held beliefs:

That the likelihood of us being the first in the world to be faced with a given problem or objective was so small as to be dismissible.

That this meant almost every problem was solvable if we found the right forum discussion, blog post, technical document, e-book, code snippet, best practices guide, YouTube channel, or user group.

That all we needed to succeed at virtually anything was the faintest hint of a solution discovered on the internet and a bit of trial & error.

(Ok, sometimes a LOT of trial & error and a fridge full of Mountain Dew.)

Essentially, we were good at Google. Our anti-superpower.

Like many IT generalists, I reached a point in my career when, to progress both financially and spiritually, I had to choose between deep technical specialization and the management track. I chose the dark side, and so I haven't had to think about the illusion of genius in IT, the economic glitch that fed us, in a long time.

Just as with my former technical peers, there was an unspoken assumption that the "great job" I gave my teams in private was vastly different from the "great job" showered on them by the rest of the business. Their praise came with awe. Mine came with a knowing wink.

But after a long career in technology leadership, which in essence is the translation of business strategy into technical solutions, I now understand something crucial:

Customers almost never assign value to process, only to the finished product.

They don't care about tooling, methodologies, languages, or what percentage of your code was copied from posts on Stack Overflow. And my strong prediction is that they won't care whether AI tools built most of the finished product either.

So, the answer to the headline question is yes. The "good at Google" jobs are safe in the era of AI. But only if we continue to adapt and learn to use the best possible tools that enable us to build the most valuable solutions for our customers.

A common warning for knowledge workers and professionals today is that AI won't take your job; a person using AI tools will.

Luckily for my fellow technologists, learning new tools has always been our anti-superpower.


Daniel Lucas is the Founding Partner of THRDparty Advisors. He advises private equity firms and their portfolio companies on technology & cybersecurity through every investment stage.


AI Content Disclosure: The image in this article was created using Midjourney. No AI tools were used for ideation or text generation.
