Washington — The promise and pitfalls of artificial intelligence for worker safety were the subject of a Feb. 11 hearing before the House Workforce Protections Subcommittee.
“These tools can be invaluable for augmenting worker safety,” Rep. Glenn Grothman (R-WI) said in his opening statement, “but there must be space for human oversight and employers should be wary of delegating responsibility for worker safety to AI.”
In that statement, Grothman raised three key questions:
- How can the effectiveness of these tools be validated?
- How can all parties involved, including employers, employees and unions, understand the technology’s role as part of safety practices?
- What safeguards are necessary to protect worker privacy while still allowing meaningful data collection and analysis?
In her opening statement, Rep. Ilhan Omar (D-MN), the subcommittee’s ranking member, noted that “automation is not a distant theory. It’s already embedded in hiring, scheduling, surveillance and safety systems. Right now, employers are making decisions that will shape conditions for millions of people, often without sufficient transparency, worker input or guardrails.”
Omar called for government action because of issues such as “invasive surveillance systems and dangerous work speed quotas.”
She continued: “The workplace risks posed by these automated technologies are real and immediate.” She later pointed to community impacts from data centers that “consume enormous amounts of energy, which can strain aging infrastructure and drive up energy bills and have adverse effects on the environment.”
Further, Omar called for NIOSH, OSHA and state governments to be fully funded so they can be better positioned to help protect workers. She also said that “strong safety standards” can foster innovation, using the example of OSHA’s standard on cotton dust (1910.1043).
“As a result of that standard, not only did fewer textile workers develop brown lung disease, but the industry also became more productive and efficient,” she said.
In response to questions from Grothman, two witnesses from industry associations and another from a technology company emphasized that having human oversight, or “having a human in the loop,” is critical.
They also detailed uses of AI and other technologies that can benefit worker safety, such as wearables that can detect heat stress or other conditions, sensors, AI-enabled cameras, exoskeletons, predictive analytics, and automation-assisted tools.
Grothman also asked about any examples of technology doing “bad things” and harming workers. Former OSHA leader Doug Parker, now a senior advisor for the National Employment Law Project, gave an example from his tenure at the agency: A manufacturing worker in Ohio was crushed against a wall by a robot because no one reprogrammed the device after maintenance.
“That’s an example of humans being in charge but also complacent,” Parker said.
In his opening testimony, Parker said many AI tools on the market “focus on modifying worker behavior instead of addressing the root causes of hazards.” He called for hazards to be eliminated, isolated or controlled through engineering.
“Skipping these steps by simply training these workers to adapt to hazards makes workplaces less safe,” he testified.
He added that AI that tracks worker activity can create physical or psychosocial hazards because of fear of surveillance, loss of privacy or anxiety about job loss.
“Physical injuries and illnesses can result from production pressures that cause workers to skip needed rest breaks or work at unsafe speeds,” Parker testified. “These risks can be reduced through prevention by design.”
