Audio and slides from Jim Anderson’s presentation to the AASHTO Committee on Traffic Safety Operations, August 27, 2024. “A.I. is Bad. A.I. is Good. A.I. is Misunderstood.”
Transcript
Note: this was transcribed by Otter.ai and cleaned up by OpenAI’s ChatGPT. It therefore may not be 100% accurate, but it’s pretty darn close and saved us hours of time 🙂
Okay, all right, good morning. Jim Anderson here to talk about AI, as Larry said, and my sincere hope is that this is not just another presentation about AI. You’ve heard lots of basic presentations, so ideally, I can give you a different perspective. I certainly have a different background. I’ll take about 20-25 minutes, and we’ll have time for Q&A, so think about your questions. If we don’t get to everything, I’ll be around the rest of the conference, during breaks, etc. I do have 297 slides…
(Laughter)
You think I’m joking. Actually, I’m not. I really do have 297 slides. There’s no truth to the rumor that this is death by PowerPoint. I promise! You know, I heard that one this morning. But between the slides and the Q&A, my goal is to provoke some thought in the context of things that matter to you, right? I’ve been talking a lot with folks in the transportation space about safety, workforce… you don’t need me to tell you about that.
In working through all of this, AI and I kept landing on three themes: AI is bad, AI is good, and AI is misunderstood. You’ll hear more about that here. So, let’s dive in. I’ve got a lot to cover. When I look at this, I’m a civil engineer by education. I practiced for about a decade, and then this thing called the internet came along. You may have heard of it; it’s going to be big one day. I see a pattern here, and my background informs what I’m getting ready to tell you. I’ve partnered with, sold to, or worked for a whole lot of big tech companies: Facebook, Twitter, Google, Apple, Oracle, etc. That was my world for the past 15-20 years. Looping back into civil engineering and transportation over the past 18 months or so, I’ve had the privilege of hundreds of conversations with you all, listening and really trying to understand what matters to you, and I appreciate the many of you in this audience who have contributed to my understanding of what you’re currently facing.
It’s clear to me that your biggest challenges have nothing to do with AI, at least on the surface. You’re dealing with all of the challenges you’ve been talking about this morning, and actually, I think that’s the opportunity: to use AI in some pretty basic ways. As much as we might like to think about autonomous robot drone fleets automatically, dynamically… yeah, okay, whatever. That’s not your reality. Your reality, as I hear it from you, is that you’ve got massive quantities of information, far beyond your ability to manage effectively.
Somebody from the Maine DOT, who I believe is in this room, said, “I’ve got documents stored in email, SharePoint, and so many other places.” I’d be willing to bet every one of you has said that at some point. I mean, we’re all drowning in information. And again, I think there’s a striking parallel to the early days of the internet here. There’s a real opportunity. Whatever we think about AI today is sort of like what we thought about the internet back in 1998. Was that 25-26 years ago? Holy cow. Remember what you thought about the internet when it first came along: the screeching modems, dial-up, and all of that? We’ve come so far, it’s easy to forget how far we’ve come.
Remember AOL? Right? Dial-up internet. Yahoo!—this amazing thing where you could find information, which was basically a directory. AltaVista was the search engine I loved. It was owned by Digital Equipment, whose goal was to sell computers and chips, which is why Google came along and ate their lunch. By the way, Google was founded in September 1998, if you want to feel like, “Wow, I should have founded Google in August 1998.” We’ve come a long way. Who could have possibly known what was going to happen when we went from dial-up to broadband to video to mobile to 5G to social to cloud, and so many other changes along the way? It’s easy to forget how far we’ve come.
And I give you that in the context of AI. I love this tweet from someone who said, “In the year 2000, Palm (the Palm Pilot) was worth more than Apple, Nvidia, and Amazon combined.” Now those three have market caps in the trillions of dollars, and Palm is out of business. I say that not because any of us can predict the future; the point is precisely that it’s hard to know where we’re going. Technological leaps and billions of dollars of investment capital are creating these trillion-dollar companies. We have no idea where this is going. I do this all day, every day, and even I don’t know exactly where the future is heading.
Back to the themes: AI is bad, AI is good, and AI is misunderstood. We need to bring a healthy dose of skepticism. It’s not going to solve all of our problems; it’s also going to bring some significant harms and risks. One of the biggest, I believe, is copyright. The whole concept of copyright, how people create and get paid, is under assault. My prediction? We’re going to have five Supreme Court cases on copyright in the next decade. You’ve seen the headlines: the New York Times suing OpenAI and Microsoft, The Atlantic saying generative AI is challenging centuries-old copyright laws. Our legal and regulatory frameworks are not equipped to deal with all this AI stuff.
Then there’s the issue of deepfakes. Anybody see the picture of the Pope in a puffer coat? I actually saw it and thought, “I have no reason to doubt this,” even though it was a little strange. But it was fake. The Pope wasn’t wearing a puffer coat. Google just announced a Pixel phone feature that lets you add things to a photo using voice or text. You could take a picture of an MTA train in New York and add a waterfall to it: completely fictitious, but done by voice command. Then there’s the more harmful side of this, where someone took a photo of a friend and, with just a few prompts, made her look like a drug addict. It’s not real, but it looks like it is. What could possibly go wrong with that, right?
This world of fakes and AI hallucinations is both fascinating and alarming. For example, there was the AI-generated image of an old wise man hugging a unicorn, except the horn was going straight through his head. And the text-based hallucinations? An AI claimed a geologist recommended eating one small rock per day; that geologist didn’t exist, and the advice was obviously false. Then there was the suggestion to add glue to pizza to make the cheese stick. AI doesn’t always get it right, but it’s not going away. That genie is out of the bottle.
I think we’ll see an enormous regulatory and legal struggle ahead.
You’ve seen stories like, “How I Built an AI-Powered, Self-Running Propaganda Machine for $105,” where someone built a system that automatically generated politically slanted content that wasn’t true. It was a deliberate experiment, but it demonstrates how easily AI can be used to produce false information at scale.
Some of these AI hallucinations are comical, but others are dangerous.
These kinds of mistakes make it easy to dismiss AI, but we can’t just shut it all down. The challenge is figuring out how to harness its potential while mitigating its risks. For example, Google trained its AI on Reddit threads. If you’ve ever spent time on Reddit, you know sarcasm is prevalent, and that may be why their AI makes some odd recommendations; the glue-on-pizza advice was reportedly traced back to a years-old joke comment on Reddit.
There’s a broader issue here: when AI learns from other AI-generated data, we enter a strange feedback loop, a mirror within a mirror. Researchers sometimes call this “model collapse.” AI’s reliance on training data produced by other AI could create real challenges down the line, because it can repeat and amplify errors or biases already present in the system.
That brings us to the idea of thoughtful contrarians. I really appreciate people like Gary Marcus, Missy Cummings, and Philip Koopman. They provide a grounded, skeptical viewpoint on AI, helping us think critically. Missy Cummings, for example, is a former fighter pilot and has been very vocal about the risks of over-relying on AI in autonomous driving. She talks about how AI in cars sees parts of the scene but doesn’t necessarily understand the whole. This can lead to dangerous situations where AI misinterprets a scene.
For instance, Toyota was conducting a test with a safety driver in Massachusetts, and the AI got confused by a truck. It saw a truck, a fence, a sign, and other objects, but couldn’t make sense of the whole image and essentially stopped. No one was hurt, but it highlights the challenges AI faces when interpreting complex environments.
There was also the tragic case of an Apple engineer who was killed in a crash while using Tesla’s Autopilot on Highway 101 in California. He was playing a video game on his phone at the time, which is obviously a bad idea. But it raises questions about the role Tesla played in setting expectations for how autonomous its system really is. The car misinterpreted the situation and crashed into a median.
Tesla’s marketing of its “Full Self-Driving” feature is a prime example of technology being irresponsibly overhyped. Despite the name, the car still requires active supervision by the driver. Recently, Tesla changed the label on its website to “Full Self-Driving (Supervised),” which is a step in the right direction, but it doesn’t undo the damage that’s already been done in terms of public perception.
We’ve seen other issues, too, like a recent update to Tesla’s self-driving software causing cars to run red lights. And there’s a structural problem with supervised approaches: as the automation gets better, human supervision gets worse. Drivers trust the system more, grow complacent, and pay less attention, and that combination can have deadly consequences.
This isn’t an abstract problem. It’s happening now, and people are dying because of it. The public is being fed a story that autonomous driving is just around the corner, but that’s not the case. It’s not exponential improvement—it’s incremental, and we may never reach 100%. We might only get to 85%, and that’s not good enough when lives are on the line.
Now, let’s take a turn back to safety and look at the broader issue of consumer attention. I spent 15 years working to capture consumer attention, and it’s highly relevant to transportation. We all know what a smartphone is, and we all have one. People spend hours every day glued to their phones, and it’s a huge challenge to compete with that for attention.
Consider this: the most valuable real estate in the world isn’t in New York or Hong Kong; it’s on your phone’s screen. The average American spends over five hours a day on their phone, and companies are pouring millions of dollars into capturing that attention. The endless scrolling we do on platforms like LinkedIn, TikTok, and Facebook is carefully engineered to keep us engaged. These algorithms know exactly what we like and keep feeding it to us.
It’s like being given chocolate cake for every meal because the algorithm knows you love it. It doesn’t know when to stop because the system is designed to maximize profit. So, it just keeps feeding you more chocolate cake until you can’t stand it anymore. And even then, the companies only cut back slightly—they’re still making money, just a little less. The same logic applies to social media: they want to keep you glued to your phone for as long as possible.
This becomes even more concerning when you think about the effect on our kids. They’re growing up with their noses in their phones, and it’s hard to pull them away. AI-generated content is going to make this even worse. As AI generates more and more content, it creates a feedback loop, where the training data AI uses is increasingly generated by other AI. This can create distortions we can’t yet fully understand.
And if you think it’s tough dealing with tech companies like Google today, just wait. It’s only going to get harder. These companies are threatened by AI and are fighting to protect their turf. But things can change fast—Palm Pilot was once more valuable than Apple and Amazon. Who’s to say Google won’t face a similar fate one day?
That leads me to modify my three themes slightly: AI is bad, AI is good, and AI is not yet well understood. And now, I want to shift to talk a bit about the workforce and how AI can help you do more with less.
You’re facing workforce challenges: you need to do more with fewer resources. You have a new generation of workers coming in, often working from home and in need of mentoring. They’re comfortable with tools like ChatGPT, and there’s an opportunity to use AI to make their jobs easier and to free them to focus on the fundamentals, like delivering projects and managing complex workflows.
Here are three use cases we’re currently working on:
1. Automating Meeting Transcripts:
You have project meetings twice a week. AI can automatically transcribe those meetings and distill them into actionable summaries. No one wants to rewatch meeting recordings or sift through the notes, but AI can do that and give you a concise summary (a rough code sketch of this pattern follows the list).
2. Summarizing Vendor Proposals:
Imagine getting five vendor proposals with 50 questions each. AI can condense each answer into a 30-word summary, making it easier to compare and evaluate vendors; the same summarization pattern in the sketch below applies here.
3. Querying SOPs in Real-Time:
Operators in traffic management centers (TMCs) often need to respond to events in real time, and they’re overwhelmed with documentation. AI can provide a simple interface to query SOPs and surface answers relevant to the current situation. This is still a proof of concept, but it has the potential to improve decision-making in real time (a minimal retrieval sketch follows below).
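Note: to make use cases 1 and 2 concrete, here is a minimal sketch of the transcribe-and-summarize pattern. It assumes the openai Python SDK (v1.x) with an OPENAI_API_KEY set in the environment; the model names, file name, and prompt wording are illustrative placeholders, not the specific tools described in the talk.

```python
# Minimal sketch, not production code: transcribe a meeting recording,
# then summarize text with an LLM. Models and prompts are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def transcribe(path: str) -> str:
    """Speech-to-text for a recorded project meeting."""
    with open(path, "rb") as f:
        result = client.audio.transcriptions.create(model="whisper-1", file=f)
    return result.text

def summarize(text: str, instructions: str) -> str:
    """One summarization call, reusable for meeting notes or proposal answers."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model works
        messages=[
            {"role": "system", "content": instructions},
            {"role": "user", "content": text},
        ],
    )
    return resp.choices[0].message.content.strip()

# Use case 1: meeting recording -> actionable summary.
# notes = summarize(transcribe("tuesday_project_meeting.mp3"),
#                   "Distill this transcript into decisions and action items.")

# Use case 2: the same call, once per vendor answer.
# short = summarize(answer_text,
#                   "Summarize this vendor's answer in 30 words or less.")
```

Both use cases reduce to the same single model call; only the prompt changes.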
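Note: and for use case 3, here is a rough sketch of the retrieval idea behind the proof of concept: embed SOP passages once, rank them by similarity to the operator’s question, and have the model answer only from the top matches. It assumes the openai SDK plus numpy; the SOP excerpts, model names, and prompts are invented for illustration, not the actual system under development.

```python
# Proof-of-concept sketch of SOP querying via retrieval. All SOP text,
# model names, and prompts below are invented for illustration.
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

# 1) Index once: split SOP documents into chunks and embed them.
sop_chunks = [
    "SOP 4.2: For a lane-blocking incident, notify the on-call supervisor "
    "and dispatch a safety service patrol.",
    "SOP 7.1: Activate dynamic message signs upstream of any incident "
    "expected to last longer than 30 minutes.",
]  # hypothetical excerpts; in practice, chunks from your real SOPs
chunk_vecs = embed(sop_chunks)

def answer(question: str, k: int = 2) -> str:
    # 2) Retrieve: rank chunks by cosine similarity to the question.
    q = embed([question])[0]
    sims = (chunk_vecs @ q) / (np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q))
    top = [sop_chunks[i] for i in np.argsort(sims)[::-1][:k]]
    # 3) Generate: answer only from the retrieved SOP text.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Answer the operator's question using only the SOP "
                        "excerpts provided. If they don't cover it, say so."},
            {"role": "user",
             "content": "SOP excerpts:\n" + "\n\n".join(top)
                        + f"\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content

# print(answer("A crash is blocking the left lane. Who do I notify first?"))
```

The design point is the constraint in step 3: the model answers from your SOPs rather than from its general training data, which reduces (though does not eliminate) the hallucination risk discussed earlier.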
AI isn’t perfect, but neither are the systems you use today. Mistakes happen, but AI can help manage the overwhelming volume of information you’re dealing with. Think of it as having 100 enthusiastic interns at your disposal 24/7. They need supervision, but they’re eager to help.
Your aging workforce is also a challenge. AI can help capture institutional knowledge before it’s lost when employees retire. We should be thinking about AI as an enablement tool, not just a replacement.