You may have read the headlines: artificial intelligence (AI) applications are now matching human experts. The demos look impressive, but few of us operate in perfect environments. I'm more concerned with how AI performs outside the laboratory than inside it, where relationships, judgment, and context matter. That matters for small and medium-sized businesses (SMBs), which often operate on tight budgets, and for investigators and criminal justice professionals, where trust, ethics, and accuracy are essential. Here's the disconnect: research is often performed by and for large organizations with unlimited budgets, pristine data, well-defined procedures, and technology-centric staff. That's not most organizations.
The majority of US companies are small businesses, and most police departments, prosecutors' offices, public defenders, and investigators are understaffed and overworked. Much of their work requires human nuance: interviewing, reading the room, building rapport with victims, spotting what's missing from a report, and making tough decisions with limited information. What's "out of scope" for many AI experiments is at the heart of the actual work. So how do we make AI work in practice?
First, recognize the obstacles. Limited budgets and skepticism: leaders want to know whether this will benefit them or just cause headaches. Lack of skills and confidence: not everyone has a data scientist on staff, and many don't know where to begin or which tools are safe. Data issues: information is scattered across systems, locked in PDFs, or held in people's heads, and AI needs context to provide value. Second, establish practical objectives.
[Image: Using ChatGPT to make a point on their own paper...]
For the vast majority of teams, the goal is not to replace humans with robots but to get an assistant that takes care of paperwork-heavy, repetitive tasks so people can focus on the parts of their jobs that require human interaction (think 3WA). For SMBs, that means drafting initial emails, blogs, and social posts that you then refine in your own voice; summarizing meetings and identifying action items; and managing scheduling, reminders, and basic customer service.
For investigators and criminal justice professionals, it means condensing lengthy reports, interviews, and body camera transcripts into summaries that help prioritize cases; spotting discrepancies across documents that can guide further investigation; automating routine paperwork and form filling so officers and lawyers spend more time with people and less on administration; and translating materials or simplifying language for victims, witnesses, and the public without changing the facts. What it shouldn't mean is outsourcing judgment, ethics, or responsibility: no AI should make charging decisions, assess credibility, or determine probable cause.
If you're skeptical of AI, your concerns are justified; misuse, bias, and hype are real problems. The good news is that you can adopt AI cautiously and on your own terms. Start with low-risk, high-hassle tasks. Keep a human in the loop for any public-facing or critical decisions. Safeguard privacy and follow your policies, and do not put sensitive data into untrusted tools. Track results, and if a tool does not save you time or improve quality, stop using it. Here's what surprises many critics: early data from SMBs suggests that AI can be a growth accelerator, not a replacement strategy. Many organizations adopting AI end up employing more people, not fewer, as the time saved through automation is reinvested in higher-value work like serving customers, cultivating relationships, and achieving better outcomes. The future is not about replacing human expertise but augmenting it.
Let the AI handle the paperwork, and let humans focus on the human work: listening, deciding, leading, and protecting. I'm curious what you think. If you work at an SMB, a police department, a DA or PD office, or an investigation unit, where do you see value and where do you draw the line? What would make AI truly valuable in your day-to-day work? Share your thoughts, and let's discuss the future of AI in these fields.
How is your business navigating the AI hype versus the reality on the ground? I'd love to hear your thoughts in the comments.
This article is based on my own thoughts and the published OpenAI paper (GDPval): https://cdn.openai.com/pdf/d5eb7428-c4e9-4a33-bd86-86dd4bcf12ce/GDPval.pdf
#AI #SmallBusiness #LawEnforcement #Investigations #Leadership #FutureOfWork #TechAdoption #BusinessStrategy #NoRoboCop #ReplacingJobs
