You, Your Small Business, and AI: AI Safety

An important distinction: there is a difference between Safe AI and AI Safety. 

Following on from our previous blogs about AI definitions and acronyms, in this blog we are looking at AI Safety in your business. The first obvious question will be: who is liable if something goes wrong? A bit of a moot question if you ask me, for if something were to go wrong it would most probably mean the end of your fully automated business. As for Safe AI? That is, in my opinion, a misnomer. I do not think there is such a thing as safe AI unless, perhaps, it is narrow AI, a programme built for one specific function only, and even then only for the time being. Bear with me… According to those in the know, expecting Safe AI is like asking for a perpetual safety device: specific models can be made safe, but not every future model. It is also significant that both Google (maker of Gemini) and OpenAI have seen many resignations from key safety positions, which suggests safety may have been deprioritised in favour of product speed and profit. Or maybe they have realised that trying to build it safe is a waste of time and money, and a bottomless pit?

So, back to AI Safety in your company. Here are some points to consider for the short-term future, given that in the longer term everything will probably be different: no more separate apps, no more search engines, no more browsers. We will likely move from a world of operating systems to one where they are subsumed into a more integrated, agentic form, resulting in a real, talking assistant (as opposed to simple question and answer) in your pocket, 24/7, that knows all your information better than you do.

 In the meantime, consider these points:

 1. The impact system crashes will have on consistent performance and productivity in your business.

2. Safeguards against unauthorised access to sensitive information, including data breaches that extract information from the AI itself.

3. Cybersecurity and cyberwarfare risks, including espionage, sabotage, propaganda, and denial-of-service attacks.

4. Company responsibility in the event of a data breach.

5. Prioritising the preservation of individual privacy and human rights.

6. Ensuring human control over key decisions that impact employees, customers, and company culture.

7. Safeguards against excessive AI autonomy across the business as a whole.

8. Ensuring fairness and ethics in all HR matters and prioritising human benefits in the workplace. 

9. Accountability for errors or harm caused by AI systems, including unfair decision-making.

10. Potential biases in AI models, particularly in areas such as hiring, monitoring, lending, resource allocation, access to training or information, facial recognition, and illegal surveillance.

11. Awareness of the unpredictability of autonomous agents working on the different processes in your company, especially their ability to communicate with each other and work together in a language you don’t understand.

12. Prevention of autonomous agents being used for malicious purposes, e.g. AI-driven manipulation of opinions and the amplification of social divisions.

13. The spread of misinformation within the company.

14. Monitoring for unethical AI practices that may become ingrained in algorithms.

15. The risk of AI systems trained on historical data that may now be outdated or illegal to use.

16. The potential replacement of human judgment and creativity.

17. You don’t want systems in your company that are not under your control. “Keep the human in the loop.”

18. The importance of sticking to your ethical values.

19. Proof of personhood might become a huge (and important) issue.

Jensen Huang co-founded Nvidia in 1993 and has led it from a graphics chip maker to the dominant force in accelerated computing. The company's powerful Graphics Processing Units (GPUs) revolutionized PC gaming and now power the modern Artificial Intelligence (AI) revolution, including models like ChatGPT, making Huang a pivotal figure in technology and one of the world's wealthiest individuals. In his words:

“No one really knows the security implications of AI.”

 Eric Schmidt, the former Google CEO, emphasizes “a balance between innovation and regulation to ensure human control and prevent misuse by malicious actors.”

 Conclusion

There are many dilemmas in the development of this technology: speed, greed, and shortcuts on safety being the most concerning for us humans. User beware.

 

Better workflows, better business

Are your current systems and processes hindering your business from achieving its next growth milestone? Now there is a smarter way to get work done.