Transcript (edited by A.I.)
Note: this was transcribed by Otter.ai and cleaned up by OpenAI’s ChatGPT. It therefore may not be 100% accurate, but it’s pretty darn close and saved us hours of time.
Good morning, everybody. David, when you talked about getting people up here before they're dead, I know you weren't referring to me—but thank you for having me before I'm dead, at least. At least I hope you weren't referring to me!
Super excited to be here with you today to talk about artificial intelligence. I had the opportunity to chat with some of you last night at the cocktail hour, and everybody's got an opinion about AI—myself included. Hopefully I can get through some interesting material and tackle AI from a few different angles—but in a very practical manner, right?
I’ve got about 390 slides to get through in about 30 minutes or so. You think I’m kidding? No, I actually do. So I better get going. I better cover a lot of ground. We’ve got to move fast.
I’m going to talk about a pope wearing a puffer coat, a Palm Pilot, an old man hugging a unicorn, the book All Quiet on the Western Front, a squid, a Moon Pie—and then the Terminator. Just to give you an idea, it’s your typical PowerPoint presentation.
At which point you’re probably wondering: who is this guy, right?
Well, I actually am a civil engineer. I studied civil engineering at Georgia Tech and practiced it for about a decade—which probably means I know nothing, right? But to the engineers and contractors in the room: I do at least know which end of the shovel works. So give me that much credit.
Then, around the time I turned 30 or so, this thing called the internet came along. You may have heard of it—it’s going to be big one day. I strongly encourage you to look into this internet thing.
I spent about a decade doing “the internet.” Then I got into social media and spent another decade doing that. I moved to New York City—yeah, I know, “New York City?!”—I’m a Georgia boy, but I ended up in New York, where I still am.
And then I found myself on television, which was kind of interesting. This is me jousting with Stuart Varney on Fox Business—I think we were talking about Facebook at the time. Why? Because I was in and around big tech. I did a lot of work with big tech. I partnered with, sold to, worked for many of these companies—Facebook, Twitter, Google, Apple, Oracle. These companies are very much in the news. You can say these companies are the news.
I did all that, had some wonderful experiences over a 30–35-year period. But civil engineering never really left me, and I wanted to come full circle. So when we sold my previous company and I was thinking about what to do next, I found myself drawn back into this world.
Beacon is my company, and we provide AI-powered software. We’re working with the Georgia DOT and others to help improve transportation—some of the same things you’re trying to do, just from a very different angle. From the AI angle.
Technology is obviously a big part of what’s happening in DOTs. And AI specifically—you can’t talk about tech without talking about AI.
But we’ve got to solve real problems, right? This is not AI theory. I’m not here to talk about machine learning algorithms. We’ve got to actually solve real problems. You all know that. You’re out there building roads, maintaining roads, operating infrastructure. This is a real, physically grounded world. We need to be real and practical.
One of the things I heard in hundreds of conversations with DOTs and others in the ecosystem is that the biggest challenges have nothing to do with AI. I could talk about autonomous drones or robot fleets, but what I hear about is paperwork. Or, I guess now, PDF-work. We’ve “digitized”—great, so now you have a thousand PDFs and you’re hitting Control+F.
This is our digitized future? We’ve digitized the past—and now we’re drowning in it. I hear: “I’ve got too many documents. Stored in email, in SharePoint, everywhere. I can’t think.” That’s where I see opportunity.
We’re working with DOTs, including Russell in Georgia, doing some exciting work. We’re just getting started, but it really centers on three things: data, words, and location.
Sounds basic, right? But the real value comes at the intersection of those three things. You’ve got a stack of forms and a job site—figure out what’s going on. We do that as humans all day, every day. But our technology does a terrible job of it.
So we ask: can we make those PDFs geographically aware? Can we allow natural language queries? Can we cross-reference different types of data?
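To make that concrete, here is a minimal sketch of what "geographically aware documents" can mean: pair each document's extracted text with a job-site coordinate, then answer a query by combining a keyword filter with a distance filter. Everything here—the record names, the sample documents, the coordinates—is hypothetical, for illustration only; it is not Beacon's actual system.

```python
import math

# Toy in-memory index: each record pairs extracted document text with a
# job-site coordinate. All ids, text, and coordinates are made up.
DOCS = [
    {"id": "inspection-041", "text": "bridge deck crack repair on SR-9",
     "lat": 33.95, "lon": -84.55},
    {"id": "daily-report-17", "text": "asphalt paving, lane closure northbound",
     "lat": 33.76, "lon": -84.39},
    {"id": "change-order-3", "text": "bridge joint replacement schedule",
     "lat": 34.30, "lon": -83.80},
]

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in miles."""
    r = 3958.8  # Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def search(keyword, near_lat, near_lon, radius_miles):
    """Return ids of documents mentioning the keyword within the radius."""
    return [
        d["id"]
        for d in DOCS
        if keyword.lower() in d["text"].lower()
        and haversine_miles(near_lat, near_lon, d["lat"], d["lon"]) <= radius_miles
    ]

# "bridge" documents within ~30 miles of downtown Atlanta (33.75, -84.39)
print(search("bridge", 33.75, -84.39, 30))
```

A real system would replace the keyword match with an LLM-backed natural-language query and the list with a proper document store, but the core idea—text plus location in one index—is the same.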
That’s what we’re building. AI software in the transportation world to help DOTs do their jobs—faster, better, safer. The same mindset applies to your business too. And I’ll give you some practical tips for that. But first: how should we think about AI?
I’ve come to three conclusions:
- AI is bad
- AI is good
- AI is misunderstood
All three can be true. And are true.
Let’s rewind 26 years. Be kind—rewind. Blockbuster, anyone?
Back in 1999, the internet felt very much like AI does today. Something big, something real, something we didn’t quite know what to do with. We had AOL, Yahoo, dial-up modems, AltaVista—then along comes Google. Founded in September 1998. Who knew?
We went from dial-up to broadband, to search, to video, to mobile, to 5G, to cloud. Could anyone in 1999 have predicted that? No. I was doing it professionally and I didn’t know where it was going. That’s AI today. We don’t know where it’s going. We just know it’s big—and we better start riding that horse before it leaves the barn.
In 2000, Palm was worth more than Apple, Nvidia, and Amazon combined. Wild, right?
The point isn’t that we should have bought Apple stock. The point is—we have no idea where AI will take us. It’s going to create trillion-dollar companies. It’s going to create failures too.
It’s consequential. As consequential as the internet itself.
And that leads us back to: AI is bad, AI is good, AI is misunderstood.
I’m an optimist. But let’s be real—AI has substantial harms. Copyright? The New York Times sued OpenAI and Microsoft. We’re headed for multiple Supreme Court cases. What does copyright even mean in the age of AI?
Then there are deepfakes. Remember the Pope in the puffer coat? Totally fake. Funny, maybe. But this stuff gets dangerous.
With a Pixel phone, you can generate fake damage on a highway with a prompt. Just speak your lie into existence. That’s disturbing.
Even worse—someone could take a photo of your friend and make them look like a drug user. This doesn’t require Photoshop expertise anymore. It just requires malice.
A Wall Street Journal reporter built a self-running propaganda machine for $105. That’s the world we’re in.
But I don’t believe in the “Skynet is coming” narrative. That’s not the danger. I can barely keep my phone charged—how’s a killer robot supposed to run for 120 years on a fuel cell?
And if nothing else, the Terminator was defeated twice—by a really angry single mom. So yeah, I’m betting on the humans.
But let’s be grounded.
These image generators? They can’t handle hands or feet. They hallucinate all kinds of stuff. An old man hugging a unicorn—with a horn going through his head. The world’s longest cow. Google’s Gemini telling people to eat rocks and add glue to pizza sauce—because it was trained on Reddit.
And then, of course, the squid story.
A Wharton professor uploaded All Quiet on the Western Front to Claude and told it to “remove the squid.” Claude says, “There is no squid.” The professor insists. Claude apologizes. Eventually, it invents anti-squid propaganda for a book that never mentioned a squid.
It’s not about the squid. It’s about what these models can be tricked into doing.
Now, attention. Let’s talk attention.
The most valuable real estate in the world is not Tokyo or London. It’s your smartphone. 12–16 square inches. 5.5 hours a day. 37 trillion minutes a year, in the U.S. alone.
Facebook, Google, TikTok—they monetize attention better than anyone. Algorithmic feeds are endless. Tailored to you. That’s how you get Moon Pies for breakfast, lunch, and dinner—because you like Moon Pies.
And when it becomes a problem? Big Tech says, “We’ll take away two Moon Pies.” That’s the kind of “solution” we get.
Meanwhile, synthetic data is flooding the internet—data created by AI to train more AI. A mirror facing a mirror. What could go wrong?
So again—AI is bad, AI is good, AI is misunderstood.
But I’m hopeful.
Let’s get to your business.
You have tight margins. You manage bids, projects, payments. AI won’t solve any of that. But it can help with all of it.
Remember your first personal computer? Your first laptop? Your brick phone?
This is that. But faster.
And your people—the next generation—they’re already using AI tools like ChatGPT. They need mentoring. And yes, there will be inaccuracies.
That brings me to my three practical tips:
1. Change Your Default Search Engine
Change it to ChatGPT (or another LLM). Not just use it—make it your default. See what happens. For me, 3 out of 4 times, it was better than Google. I’m not going back.
2. Ask Agents Complex Business Questions
Use GPT-4o or other advanced reasoning agents. Ask for complex outputs—like marketing slogans, logos, business plans, financial forecasts, even website code. It might be average now. But it took two minutes. And it’s only going to get better.
3. Record Your Zoom/Teams Calls
Not creepily. Not all of them. But most meetings are boring—AI doesn’t mind. It’ll transcribe, summarize, and give you much better recall. Build a knowledge base you can query. It helps you stay organized and detailed.
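The "knowledge base you can query" part can start very simply: once the transcripts exist as plain text, even a keyword search over titled meetings gives you recall you didn't have before. The sketch below assumes you have already exported transcripts (e.g. from Zoom or Teams); all meeting titles and contents are hypothetical.

```python
# Minimal sketch of a searchable meeting archive built from exported
# transcripts. All sample data is made up, for illustration only.
MEETINGS = {
    "2024-03-04 bid review": "Discussed the SR-20 resurfacing bid. "
                             "Maria will update the unit prices by Friday.",
    "2024-03-11 safety standup": "Reviewed lane-closure procedures. "
                                 "No open action items.",
    "2024-03-18 bid review": "Final SR-20 bid numbers approved. "
                             "Submit before the letting date.",
}

def find_meetings(keyword):
    """Return titles of meetings whose transcript mentions the keyword."""
    return sorted(
        title for title, text in MEETINGS.items()
        if keyword.lower() in text.lower()
    )

print(find_meetings("SR-20"))  # both bid reviews mention SR-20
```

Swap the dictionary for real transcript files and the keyword match for an LLM summarizer, and you have the beginnings of the organized, detailed recall described above.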
So—AI is bad. AI is good. AI is misunderstood.
But maybe—what if AI could?
Thank you very much.