Real Last Bot

The musings of a human in the world of bots.

It’s not a bad hammer

I don’t want to be a Luddite or a doomsday preacher on the corner shouting that the “end is nigh.” However, the human aspects of Artificial Intelligence – how these tools are created and used – continue to concern me greatly. (As a sidebar, I recently finished AI for Social Good by Rahul Dodhia to give my brain a break from my instinctive pessimism.)

This brand of technochauvinism and optimism continually parrots the idea that, because AI is just math under the hood, these tools can do no wrong. I maintain this view is only partly correct. Artificial Intelligence can do no wrong, no more than a hammer can do wrong. But people are burdened with morality and, as such, need to reflect on their behavior and motives when using these tools, because people can very much do wrong.

Yesterday, I had the fortunate experience of sitting in a meeting where leaders of various non-profits discussed their work. Most had attended a workshop on using AI tools to improve their operational efficiency. Improved efficiency is a great thing, especially in non-profits, where it so clearly expands their capacity to serve more clients. I’m not at all opposed to using the latest tools – including AI tools – to add impact. But when scenarios came up where public tools were being used to plan confidential business strategies that could not be shared even among the group, I grew a little concerned that there wasn’t a clear understanding of basic data privacy.

For example, Google’s NotebookLM tool promises never to train its models on your data, but the policy leaves the door open to using that information for tailoring content and ads. And it’s worth remembering the recent $5 billion Google Incognito Mode lawsuit settlement, which stemmed from tracking consumers’ internet usage after leading them to believe their activity wouldn’t be tracked. It’s important not just to trust and verify but to stay trustless when it comes to confidential data. If you use these tools, assume the tool provider can access the information – no different than email content or text messages. At the end of the day, the safest viewpoint is that your conversations are between you, whoever you communicate with, and whatever technology company provides you that service (plus whatever hackers decide to take a peek).

Another use case was translating documents and videos. A bad translation is better than no translation when it comes to making resources accessible, but what happens when the translation communicates harmful behavior, or the message is misaligned with the organization’s values? Although translation quality has improved tremendously, it’s worth remembering that erroneous or culturally insensitive translations have resulted in catastrophic outcomes. Language is more nuanced than a mapping of words and phrases. Just look at the culturally insensitive translation that culminated in the bombing of Hiroshima, killing over 70,000 people in an instant.

As a friendly reminder, use of AI tools can result in real harm to life. Recently, a lawsuit alleged that a chatbot on Character.AI suggested to a kid that he kill his parents. There are also cases of chatbots sexually exploiting children and even contributing to teen suicide. Guardrails have been added in response, primarily a disclaimer reminding users that “This is an AI and not a real person.” But did just putting warnings on cigarettes ever really make a difference? After all, similar tools are already being used by organizations providing direct services to children, such as the mental health centers at Akron Children’s Hospital and Cincinnati Children’s Hospital, and schools across the US have rolled out AI chatbots for their students to combat the spike in childhood anxiety and depression.

The main point I’m making is that alongside all of the benefits of AI tools, there is plenty of opportunity for harm. These instances are fortunately not the norm, so I’m far from encouraging abandoning AI tools completely. But I hope they serve as reminders and cautionary tales that promote and encourage critical thought. Non-profits often serve the most vulnerable in society and, as such, should treat the legal requirements (which are constantly evolving and still in their infancy) as the floor, not the ceiling, of their AI policies and standards. The people we serve deserve for us to make the best choices and carefully consider how and what tools we use – whether that tool be AI or a hammer.