Should we be worried about AI?

Artificial intelligence (AI) is a new kind of technology that is here to stay and is being embedded into our daily lives. It is already used to analyse medical scans, filter spam, recommend movies, detect fraud, manage logistics, and control robots. Its rapid growth raises questions about how far it should go and what risks it might create. Concern about AI is reasonable, but fear alone is unhelpful. The technology will keep advancing; the task is to guide its development responsibly. Informed citizens, ethical design, and, hopefully, strong regulation can ensure AI remains a benefit rather than a risk to society.

What AI Is

AI refers to computer systems that learn from data to make predictions or decisions. It recognises patterns in text, images, numbers, or speech and uses them to perform tasks that computers previously struggled to perform. In loose terms, it is a digital approximation of some of the brain's pattern-recognition abilities. You can learn more about how AI works in my previous article here. A tiny example of the learn-from-examples idea is sketched below.
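To make "learning from data" concrete, here is a minimal sketch of a spam filter that learns word patterns from labelled example messages, using scikit-learn. The messages and labels are made up for illustration; this is a toy, not how production filters are built.

```python
# A toy "learning from data" example: a spam filter that finds word
# patterns in labelled messages and predicts labels for new ones.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up training examples
messages = [
    "win a free prize now", "claim your cash reward",     # spam
    "meeting moved to 3pm", "see you at dinner tonight",  # normal mail
]
labels = ["spam", "spam", "ham", "ham"]

# The "learning" step: count word occurrences, then fit a classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free cash prize"]))  # -> ['spam']
print(model.predict(["dinner at 3pm"]))    # -> ['ham']
```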

Why It’s Being Adopted

Organisations use AI to increase efficiency, reduce costs, and handle complex data.

  • Business: automates customer service, marketing, and inventory.
  • Healthcare: detects diseases and predicts treatment outcomes.
  • Transport: optimises routes and prevents congestion.
  • Education: adapts lessons to students’ needs.

Despite talk of an AI bubble, its integration will continue because it delivers measurable productivity gains. Most AI models are continually being optimised for usability, which should generally push AI towards being a better tool for us rather than a dangerous entity. Pushing AI beyond the sum of current human knowledge will be extremely hard, but we can expect it to become vastly faster and cheaper to use, as the hardware AI relies on still has significant room for improvement.

Main Risks of AI

1. Misinformation and deepfakes

AI can generate convincing but false photos, videos, and text. As quality improves, verifying truth will become difficult, threatening journalism and public trust. Deepfake files surged from 500,000 in 2023 to a projected 8 million in 2025, with fraud attempts spiking 3,000% in 2023 and financial losses exceeding $200 million in Q1 2025 alone.

  • Preventive steps: Promote media literacy, develop content verification standards, use watermarking or authentication tools (a minimal authentication sketch is shown below), support advanced detection technologies, and implement new legislation (e.g., Denmark’s 2025 deepfake laws).
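One building block behind content authentication is a publisher cryptographically signing a file so that any later alteration is detectable. Here is a minimal sketch using Ed25519 signatures from the `cryptography` package; the key handling and file contents are simplified assumptions, and real provenance schemes (such as C2PA) layer metadata and certificate chains on top of this idea.

```python
# Minimal media-authentication sketch: a publisher signs a file's bytes;
# anyone with the public key can check the bytes were not altered.
# Requires: pip install cryptography
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # held by the publisher
public_key = private_key.public_key()       # distributed to viewers

media_bytes = b"...raw bytes of a photo or video..."
signature = private_key.sign(media_bytes)   # shipped alongside the file

try:
    public_key.verify(signature, media_bytes)
    print("Authentic: bytes match the publisher's signature")
except InvalidSignature:
    print("Altered, or not from this publisher")
```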

2. Job displacement and inequality

Automation will replace many repetitive and administrative roles. New jobs will emerge, but not all workers will transition easily. As of late 2025, widespread displacement has not yet occurred, but projections estimate up to 92 million jobs displaced globally by 2030, with 30% of US workers fearing replacement in the near term.

  • Preventive steps: Expand access to retraining and digital education, implement fair workforce policies, and explore mechanisms such as an “AI tax” to fund support for displaced workers.

3. Concentration of power

Developing advanced AI requires vast data and computing resources, often controlled by a few corporations or states. This centralisation risks monopolies and limited oversight. AI data-centre energy demand is projected to reach 165–326 terawatt-hours annually by 2028, with only 32 nations currently hosting specialised facilities.

Those with the best AI could gain outsized power through far superior intelligence and scientific research capability.

  • Preventive steps: Support open-source research, enforce competition/antitrust laws, encourage transparent governance, and strengthen open-source initiatives as seen in recent US executive orders.

4. Cyber and security threats

AI can be used both to secure and to attack networks. Automated hacking, phishing, and identity theft continue to rise. Cyber attacks increased 50% in the UK over the past year, with AI-orchestrated espionage detected in 2025; global cybercrime costs are expected to reach $10.5 trillion by 2025.

  • Preventive steps: Strengthen cybersecurity standards, educate users, ensure government-level defence strategies, and deploy AI-powered autonomous threat-hunting systems (a toy anomaly-detection sketch follows below).
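At its simplest, "AI-powered threat hunting" often means anomaly detection: learn what normal activity looks like and flag events that deviate from it. The sketch below uses scikit-learn's IsolationForest on simulated login features; the features, numbers, and contamination setting are illustrative assumptions, not a real security product.

```python
# Toy threat-hunting sketch: learn "normal" logins, then flag outliers.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated features per login: [hour of day, failed attempts before success]
normal_logins = np.column_stack([
    rng.normal(13, 3, size=500),  # mostly daytime hours
    rng.poisson(0.3, size=500),   # rarely more than a failure or two
])
suspicious = np.array([[3, 14], [2, 20]])  # night-time, many failed attempts

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_logins)

print(detector.predict(suspicious))  # -1 marks logins flagged as anomalous
```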

5. Loss of trust and autonomy

Algorithms influence what people see online and the decisions they make. Overreliance on AI can erode personal judgment and informed choice. Only 8.5% of users fully trust AI-generated overviews, and 71% of organisations cannot yet fully trust autonomous agents.

  • Preventive steps: Design transparent and explainable systems, protect user data, require human oversight for significant decisions (see the sketch below), and build “trusted autonomy” through robust regulation and data-protection reforms.
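"Human oversight for significant decisions" is often implemented as a confidence gate: the system acts on its own only when it is confident, and routes everything else to a person. This is a minimal sketch; the threshold, labels, and loan-decision setting are hypothetical.

```python
# Minimal human-in-the-loop gate: act automatically only above a
# confidence threshold; otherwise escalate to a human reviewer.
def decide(prediction: str, confidence: float, threshold: float = 0.95) -> str:
    if confidence >= threshold:
        return f"auto-applied: {prediction}"
    return f"escalated to human review: {prediction} ({confidence:.0%} confident)"

print(decide("approve loan", 0.99))  # acted on automatically
print(decide("deny loan", 0.72))     # a person makes the final call
```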

6. Autonomous robots

AI is increasingly used in machines that move and act in the real world—such as warehouse robots, self-driving vehicles, drones, home assistants, and emerging humanoid robots (e.g., Tesla Optimus, Figure). While these improve efficiency, malfunctions or misuse create safety and ethical risks, particularly with autonomous military systems.

The sheer number of robots could also displace many jobs at once, driving an economic shift too fast for society to keep up with and putting large numbers of people out of work.

  • Preventive steps: Require human supervision for robots operating in public or high-risk settings, enforce strict safety and liability standards, and maintain international limits on lethal autonomous weapons systems.

Managing the Technology

AI itself is not inherently harmful; its effects depend on how humans design and apply it. The key challenge is governance—ensuring safety, fairness, and accountability keep pace with capability. Public understanding, ethical standards, and international cooperation are essential. AI can enhance healthcare, improve disaster response, accelerate scientific research, and support environmental management. When directed by clear rules and human values, it becomes a tool for solving major global problems.

The Tech People Behind AI

Ironically, some of the main drivers behind the push for AI are much darker ideas. Personal ego and the desire to create a superintelligence are central forces: some AI leaders want to build something far smarter than the smartest person in the world, hoping to become a kind of god by being the first to converse with it. These same people also believe that, no matter what happens, biological life will eventually be replaced by digital life.

Digital superintelligence is definitely coming. The human brain is limited by the size of our skulls, while digital brains have no such theoretical limit. AI today already looks smart, but it is largely faking it: its apparent intelligence comes from a vast store of knowledge and the ability to produce output quickly, while its general reasoning is no better than a typical person's. That is mainly because its training material teaches it to think like the average person; teaching it to be superhuman would require a superhuman teacher. The material we use today will shape the future of AI.

In 2017, when Elon Musk was asked what keeps him up at night, he said he was deeply worried about AI and had already been raising his concerns with world leaders. Today he is more optimistic, believing that as long as AI is taught truthful information, it should remain a very useful tool rather than becoming a threat. To that end, he founded xAI with the stated goal of training its models on the most truthful, factual information possible. But no matter how smart AI gets, it can never be conscious.
