By deploying artificial intelligence (AI) within a thoughtful governance program, manufacturers can manage workforce shortages.
By Sean C. Griffin
Manufacturers, like many other companies, are facing a critical shortage of skilled workers that is outpacing educational institutions’ ability to train replacements. As baby boomers retire without enough younger workers to replace them, the problem will only get worse. To address this issue, many companies are turning to artificial intelligence (AI) to compensate for labor shortages.
AI refers to computer programs that can perform actions typically requiring human intelligence. For example, finding your way from Point A to Point B once required you to use your own intelligence to read a map and plot your route. Now you simply tell your car’s GPS where to go, and the AI figures out how to get there, taking into account traffic patterns, speed traps, and tolls.
Just as AI can direct your driving, it can direct your employees to optimize their productivity. AI can also assume basic tasks that would otherwise fall to employees; customer service chatbots, for example, can answer routine questions without involving call center staff. In this way, AI frees up employees to tackle more complicated tasks that require human creativity.
AI can also fill skill gaps. Organizations are using AI to automate the detection of, and response to, ransomware and other cyberattacks. More broadly, AI may be able to spot transferable skills better than humans can; for example, an AI algorithm might notice that your receptionist has developed skills that would make them an exceptional salesperson.
However, AI can pose legal and logistical pitfalls. Using it in a way that complies with the law and fulfills your requirements demands a robust AI governance program.
For example, AI can produce a statement that is thoroughly convincing and completely wrong, a phenomenon called “hallucination.” If your car’s GPS has ever confidently led you to the wrong location, you have experienced this. Sometimes this happens because the AI was given bad information, but even AI supplied with good information can hallucinate, to your company’s detriment. And your employees cannot produce good work with bad information any more than an apple tree can produce pears.
Many real-world situations can also confuse AI. AI can only recognize patterns it has seen before; when it encounters something new, it may react unpredictably. For example, a sticker placed on a stop sign can flummox an AI vision system, and image recognition systems frequently misidentify what they see. In real-world settings, such misidentification can cause serious problems for organizations that employ facial or image recognition technology.
These problems can be managed, however. Through AI governance, companies can mitigate these risks and use AI safely, productively, and effectively.
First, AI can only supplement human thought, not replace it, so appropriate AI usage requires humans to monitor what the AI is doing. Your company should no more run AI without human monitoring than you would follow your GPS’s instructions into a lake. Without appropriate oversight, your AI can hallucinate and promulgate incorrect information across your organization, or perpetuate biases that your company is legally obligated to avoid.
This monitoring must take place in the context of written policies and procedures. Just as you would tell your teenager how to drive a car before letting them behind the wheel, you should have written policies in place that inform your employees about the safest, most effective use of AI. These policies will need buy-in from your organization’s relevant stakeholders and review by legal counsel knowledgeable about AI. And your organization will have to leverage its culture to ensure that key personnel know the plan and can implement it properly.
Your company will also need an AI incident response plan. We tell teenagers what to do if they have an accident, and the same proactive strategy applies to AI. An incident response plan establishes how your company will address problems before they arise, rather than forcing you to scramble in real time to cobble together a suboptimal response to a foreseeable problem. Should litigation or a government enforcement proceeding follow an AI incident, a written incident response plan can offer welcome guidance and protection.
Like a car, AI can make you more productive and get you to where you’re going faster. Also like a car, AI can land you in a wreck if you’re not careful. With thoughtful AI governance, your company can enjoy the benefits and manage AI’s risks.
About the Author:
Sean C. Griffin is a member of Robinson+Cole’s Manufacturing Law Industry Team with over 30 years of litigation experience and 20 years in cybersecurity. Whether as a business litigator or a leader in data privacy and artificial intelligence, Sean spots issues and anticipates contingencies to craft effective litigation and cybersecurity strategies.