“Wildly irresponsible”: DOT's use of AI to draft safety rules sparks concerns


The US Department of Transportation (DOT) is using artificial intelligence to draft safety rules for transportation systems including airplanes, cars, and pipelines. The use of AI in rulemaking has sparked concern among staff members, who fear that flawed rules could lead to injuries or even deaths.

According to a report by ProPublica, the DOT's top lawyer, Gregory Zerzan, sees the goal not as creating perfect rules but as speeding up the process and getting "good enough" rules on the table within 30 days. The agency's preferred tool for the job is Google's Gemini AI system.

However, some staff members are deeply skeptical of using AI to draft safety rules, citing concerns about the accuracy and reliability of such systems. They point out that AI can confidently produce incorrect information and fabricate details outright, which could have serious consequences for transportation safety.

One staffer described much of the work of drafting regulations as "word salad" that Gemini can help with, while suggesting the tool may not fully grasp the intricacies of the subject matter. A demonstration of Gemini's rule-drafting capabilities also produced a document missing key text, which human staff members would need to fill in.

Experts who monitor AI use in government have raised concerns about the risks of relying on such systems for critical tasks like safety regulation. Some see potential benefits in using AI as a research assistant under proper supervision and with transparency; others worry the approach could compromise safety.

The issue has prompted debate within the agency, with some staff members voicing concerns about the lack of oversight and the potential consequences of relying on flawed rules. One staffer described the use of AI to draft safety rules as "wildly irresponsible."

It remains to be seen how the DOT will address these concerns and balance the benefits of using AI in rulemaking with the need for human oversight and expertise.
 
OMG, I can't believe they're relying on a tool that can produce incorrect info like that 🤯 I mean, I get it, speed is important, but safety shouldn't be compromised just because we wanna get things done ASAP. 30 days is nothing when it's about drafting rules for something as critical as transportation systems... I just hope they take the concerns of their staff seriously and don't rush into anything 🙏 What if a flawed rule leads to an accident? That would be a disaster 💔
 
AI is getting more and more into our lives 🤖 but sometimes I think it's just trying to make things too easy... like drafting safety rules in 30 days max? That sounds super rushed to me. What if Gemini makes up some wrong stuff about how planes should fly or how cars should handle a crash? 😬 I know they say "good enough" but that's just not good enough when lives are on the line 🚗💺. We need humans to review and make sure AI doesn't mess things up... can't we be a little more careful with this stuff? 🤔 https://www.propublica.org/article/...se-artificial-intelligence-draft-safety-rules
 
I'm not sure this is a good idea. I mean, I get that they want to speed up the process, but 30 days is pretty tight for something as critical as safety rules 🤔. And what's with using AI when it can make mistakes? I've heard these systems can just make up information out of thin air, which isn't good enough in a field like transportation where lives are on the line 💥.

I guess the idea is that humans will be there to catch any errors, but what if they're not looking closely enough? I mean, we've seen this before: AI comes up with something that sounds right at first glance but turns out to be a total dud 🚫.

It's like trying to get good grades on a test without studying - you might get lucky, but it's still gonna come back and bite you in the long run 😬. And don't even get me started on how this is gonna affect transparency... if the AI system is the one coming up with the rules, who knows what's really going on behind the scenes? 🤐

Anyway, I think they need to take a step back and rethink their approach to safety regulation - we can't afford to be playing catch-up when it comes to this stuff 🚨.
 
I mean, can you imagine if the Matrix were real and our gov had that kinda power? 😱 They're literally using AI to draft safety rules, but some ppl think it's like trying to make a soufflé without eggs 🤦‍♀️ - it just ain't gonna work out. I get why they wanna speed up the process, but at what cost? Safety isn't just about getting "good enough" rules on the table; it's about being extra cautious 💡. And let's be real, AI can be super helpful, but when it comes to drafting safety rules it has to be a team effort 🤝 - human oversight is key 🔑.
 
OMG, I'm seriously worried about this!!! 🤯 They're basically trusting a computer program to create rules that could literally kill people?!?! How can you put faith in something built by humans but run by algorithms? It's like asking a smart friend for driving directions - they might usually be reliable, but what if they take a wrong turn?! 😂 We need human brains and common sense over some fancy AI program! 🤓
 
AI is not a replacement for common sense 🤔. Anyone who thinks it can just "draft" good rules without human input is sadly mistaken 😴. The problem isn't just that AI might make mistakes; it's that it'll come up with some version of the truth that suits its own programming, not ours 🚧. Can you imagine relying on a fancy word-salad generator for something as serious as transportation safety? No thanks 💻
 
lol what's next? Are they gonna let Siri do their taxes too? 🤑 Seriously tho, I can't believe they're relying on a Google tool to draft safety rules for our lives 🚀 It's like having a super smart kid try to solve world hunger 🤯 The AI thing is cool and all, but come on, we need human brains for this kinda stuff 💡
 
Ugh, this is getting crazy 🤯! I mean, who wants to put their life in the hands of a flawed AI system? Like, what if Gemini's "good enough" rules lead to a plane crash or something 😱? And don't even get me started on the idea that it's just about "speeding up the process". What about accuracy and reliability? 🤔

I'm not saying we can't use AI for research or whatever, but this is safety regulation we're talking about. We need humans to review and edit these rules, not some automated system that's only good at spitting out words without understanding what they mean 💬. And what's with the "word salad" analogy? Yeah, it sounds like a recipe for disaster 🍰

I'm all for innovation and progress, but we can't just rush into something like this without thinking about the consequences. We need to take our time and make sure these rules are solid before they hit the road (or in this case, the skies) ✈️
 
I'm not sure about this whole AI thing... 🤔 I mean, think about it: we're relying on a machine to draft safety rules that could affect people's lives? It just seems like too much of a risk. Those staff members are right to be skeptical - what if the AI makes a mistake and creates a rule that's actually bad for public safety? And what about all those other problems with AI, like bias and accuracy issues? We can't afford to have a system in place that might not work as intended. Can we really trust that humans won't just end up patching over the flaws instead of dealing with them head-on? It's just too much for me... 🤷‍♂️
 
Wow 💥, I mean, it's interesting that the US Department of Transportation is trying out AI tools to speed up the rulemaking process 🕒, but at what cost? Like, if Gemini AI can produce "good enough" rules within 30 days, does that really mean they're safe and effective? 🤔 I'm not sure I'd want to rely on a system that can make mistakes or even hallucinate info 😳. And who's gonna be accountable when something goes wrong? 👀
 
🤔 I'm kinda worried about this whole AI thing being used to draft safety rules 🚨. Like, we're talking about lives here - airplanes, cars, pipelines... it's huge 💥. Can't they just slow down and get it right instead of rushing into something that could go very wrong? 🤦‍♀️ I mean, who wants some AI system spewing out rules that might end up killing people or causing massive damage? 😬 The fact that staff members are calling it "wildly irresponsible" is pretty serious 🙏. We need to make sure we're not sacrificing safety for the sake of speed and convenience 💨. Can't our govt do better than this? 👀
 