Artificial Intelligence & Ethics: Knowledge Worker Anxiety


Douglas Adams once said:

"I've come up with a set of rules that describe our reactions to technologies:
Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
Anything invented after you're thirty-five is against the natural order of things."

ChatGPT, GPT-4, Microsoft 365 Copilot, Google Workspace AI, Anthropic's Claude, Meta's Llama, Stable Diffusion, Midjourney, DALL-E 2, Hugging Face, and Replika: these are just a few of the technologies released in the past year. We thought manual labor was the work in trouble, but look at all the white-collar angst.

We here at Top Tunnel are but humble messengers. Please do not shoot us! As we have looked at before, the technology is exciting and some of it is ruffling feathers. There is excitement, hostility, avoidance, ignorance, and dismissal in the air, and the big players behind this push are not asking for permission; they will ask for forgiveness later, if at all.

The Main Ethical Concerns

  1. Bias and Discrimination - One of the most significant ethical problems with AI is the potential for bias and discrimination. AI systems learn from the data they are fed, and if that data is biased or discriminatory, the system will reproduce those biases in its decisions. For example, a hiring algorithm trained on historical data may discriminate against women, people of color, and other marginalized groups; the sketch after this list shows how easily that happens.
  2. Privacy Concerns - AI systems collect vast amounts of data, including personal information, and there is a risk that this data will be misused. The use of AI in surveillance, for example, raises concerns about privacy violations: AI-powered facial recognition systems can identify individuals without their consent, and that information can then be exploited.
  3. Autonomous Decision Making - AI systems are capable of making decisions on their own, and this raises concerns about who is responsible for the decisions made by these systems. For example, if an autonomous vehicle makes a decision that results in an accident, who is responsible for the consequences? The manufacturer, the programmer, or the AI system itself? Moving fast and breaking things is too big of a risk here.
  4. Job Displacement - AI has the potential to automate many jobs, and this raises concerns about job displacement. While automation can improve efficiency and reduce costs, it can also lead to job loss, which can have significant social and economic consequences. The pace of change may be too much for someone who still has rent to pay and neither the time nor the money to retrain. The social tension is palpable.
  5. Human Interaction and Empathy - AI systems lack human empathy, and this can be a problem in situations where human interaction is necessary. One example is a chatbot that provides customer support: it will not understand the emotional state of the person it is interacting with and may respond inappropriately. Some humans don't respond well either, but that is another matter altogether.
  6. The Weaponization of AI - AI has the potential to be used for harmful purposes, such as developing autonomous weapons or deploying armies of fake internet users for propaganda. Weapons with a mind of their own are already a staple of media like Terminator and The Matrix. And remember all the noise about Russian bots manipulating American elections? Imagine PropagandaGPT.
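
To make the bias point concrete, here is a minimal, hypothetical sketch in Python. The groups, the made-up hiring history, and the naive frequency-based "model" are all illustrative assumptions rather than any real hiring system; the point is only that a rule fitted to a skewed history reproduces the skew.

```python
# Hypothetical sketch: the "model" is simply the hiring rate observed per
# group in made-up historical data. No real ML library is needed to see the effect.
from collections import defaultdict

# Made-up historical records: (group, qualified, hired)
history = [
    ("group_a", True, True), ("group_a", True, True), ("group_a", True, False),
    ("group_b", True, False), ("group_b", True, True), ("group_b", True, False),
]

# "Training": learn the observed hiring rate per group among qualified candidates.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, qualified, hired in history:
    if qualified:
        counts[group][0] += int(hired)
        counts[group][1] += 1

def predict_hire(group: str) -> bool:
    """Naive decision rule: recommend hiring if the learned rate is at least 0.5."""
    hired, total = counts[group]
    return hired / total >= 0.5

# Two equally qualified candidates, two different outcomes, purely because the
# history the rule was fitted to treated their groups differently.
print(predict_hire("group_a"))  # True: 2 of 3 qualified group_a candidates were hired
print(predict_hire("group_b"))  # False: only 1 of 3 qualified group_b candidates were hired
```

No malicious intent is required; the bias baked into the historical data carries straight through to the automated decision, which is exactly the concern raised in point 1 above.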

The Social and Psychological Concerns

It is quite natural for people to feel apprehensive about the unknown. When it comes to artificial intelligence (AI), this fear can manifest in different ways. Some individuals may lack a clear understanding of AI's capabilities, limitations, and potential implications, which can breed caution or even fear. As a result, they may hesitate to embrace the new technology at all.

Furthermore, the rise of AI may be perceived as a threat to human autonomy and control over various aspects of life, leading to a sense of powerlessness. The idea that machines might eventually take over human decision-making can be a frightening thought for many people, causing them to feel a loss of control over their lives.

Another factor that contributes to this fear is the way in which people derive a sense of self-worth and identity from their skills, knowledge, and abilities. If AI is perceived as superior in certain aspects, individuals might feel threatened, leading to a defensive attitude to protect their self-esteem and social identity. This is because people often take pride in their unique abilities and accomplishments, and the thought of being outperformed by a machine can be unsettling.

Fear not! With great changes come great opportunities. We will look at what to do about it in another piece.