Integrating Cybersecurity, Psychological Factors, and Governance in AI

Part III of III

This is Part 3 of a three-part series tackling the topic of generative AI tools. This third installment is “Safeguarding Ethical Development in Chat GPT and Other LLMs through a Comprehensive Approach: Integrating Security, Psychological Considerations, and Governance.”

In this three-part series, we have centered on a holistic approach that emphasizes safeguarding ethical development of ChatGPT and other Language Learning Models (LLMs). It is vital to adopt a comprehensive approach that seamlessly integrates practical security measures, psychological considerations, and robust governance protocols.

AI technology
artificial intelligence (ai) and machine learning (ml)

In Part 1, we began by exploring how practical security measures can protect these aspects:

  • Addressing prompt engineering abuse, especially DAN 13.5
  • Tackling the absence of security-by-design in AI developments
  • Treating generative AI tools as third-party service providers

Concluding with recommended security measures for AI LLM development.

In Part 2 of this series, the emphasis was a human-centric approach to generative AI models, focusing on the psychological insights during their design and development, via:

  • Safeguarding the sensitive nature of human trust
  • Integrating an approach that includes expertise in psychology, sociology, and linguistics
  • Tackling issues of code ingestion, referred to as “Algorithmic bias,” whether implicit or explicit

Concluding with recommendations and psychological considerations to explore when addressing trust, ethics, privacy, and biases for AI developments.  

Now, for Part 3: Governance circling in AI’s LLM waters.

Imagine for a moment that Generative AI is the “boat” in a sea filled with various types of sharks. Some sharks see the boat as a possible ally for food, while others view the boat as a threat. This analogy captures the dynamics between governance and generative AI. Neither are acquainted with each other, yet there’s a sense of recognition. So, are we allies or adversaries? Typically, governance and compliance is perceived as the “shark.” Governance is made up of large bodies of global agencies and organizations. These entities are responsible for ensuring safeguards, policies, and regulations are established in place for societal well-being. However, within the realm of generative AI, governance, though present, is still in its infancy.

Amid rapid technological advancement, the Biden Administration in the United States took a commendable step by unveiling a “Blueprint for an AI Bill of Rights,” emphasizing oversight and security. Preceding this, UNESCO’s “Recommendation on the Ethics of Artificial Intelligence” earned the nod from an impressive 193 Member States in November 2021. Simultaneously, numerous nations are charting their courses with distinctive guidelines to anchor ethical AI practices. These encompass crucial pillars such as:

  • Privacy and data governance
  • Transparency
  • Accountability
  • Non-discrimination
  • Comprehensive oversight, among others.

Several global initiatives have emerged, setting benchmarks for governing AI developments, including:

Navigating the governance maze of generative AI

Let’s remember that the emergence of generative AI is capable of autonomously generating creative outputs, which emphasizes the critical need to establish robust governance, risk management, and compliance measures. These measures are vital to ensure the ethical, legal, and responsible use of this advancing technology.

It is essential, however, to acknowledge the existing gap in our current security standards. Can we effectively utilize the established security frameworks to implement operational controls across all stages of AI development? If so, is there a standardized set of metrics to measure and track success? Should there be?

Personally, I perceive this as a complex issue that warrants careful consideration. The rapid evolution of generative AI and other AI domains, such as robotics, adds to the intricacy of the matter.

Merely incorporating existing security or privacy standards (e.g., GDPR, HIPAA, NIST SP 800-53, 37, 39, ISO 27001/2:2022, or others) into AI development might be akin to placing a Band-Aid on a bullet wound. Such an approach may cover the surface, but it won’t prevent potential infection, leaving us vulnerable to risks and attacks.

To further support my argument, in a recent article titled; “Europe Makes First Move Toward Regulating AI with EU AI,” written by Cam Sivesind, Journalist and Content Strategist for SecureWorld, he features insightful interviews with several security leaders in the country. The focus of these discussions is the European Union (EU) AI Act, adopted by Parliament on June 14, 2023. While I concur with many of the comments presented in the article, it is important to note that this move is not entirely unprecedented for the European Parliament. Throughout the decades, they have consistently established themselves as global leaders in developing governance roadmaps to safeguard privacy and data.

This EU AI Act, as I interrupted, is proactive but with a twist of lime (which is more bitter than lemon). The guidelines are to ensure a risk-based approach for the purposes of ensuring accountability to the potential threats of ethical and/or legal implications, which makes sense. As an example, imposing prohibition on certain AI Practices such as: “AI systems with an unacceptable level of risk to people’s safety would therefore be prohibited, such as those used for social scoring (classifying people based on their social behaviour or personal characteristics),” according to the article, The European Union (EU) AI Act.

This restriction outlined in the new EU AI Act stems from the foundational principles established by GDPR. My premise aligns with Craig Jones, Vice President of Security Operations at Ontinue, who stated in Sivesind’s article: “The EU AI Act, much like its predecessor GDPR, could indeed set global norms, given the transnational nature of technology companies and digital economies. While GDPR became a model for data privacy laws, the AI Act might become a template for AI governance worldwide, thereby elevating global standards for AI ethics and safety.”

Jordan Fischer, Cyber Attorney and Partner at Constancy, supported Jones by stating: “With the passage of this AI regulation in the EU, the EU is again signaling that it intends to lead in attempts to regulate technology and find some balance between innovation and protecting the users of that technology.”

Jones continues to validate my earlier point by stating: “The downside might be that it could temper the pace of AI innovation, making the EU less attractive for AI startups and entrepreneurs. The balance between transparency and protection of proprietary algorithms also poses a complex challenge.”

To effectively address the complex challenge of governing generative AI tools or other AI technologiesā€”while simultaneously monitoring risks and vulnerabilitiesā€”we must first begin with modeling a proactive versus reactive approach. Careful consideration of the most suitable and efficient security and governance standards for AI deployment is paramount. Blindly adopting existing standards without thorough evaluation could still expose organizations (consumers) to malicious attacks and threats. As technologists, it is our responsibility to find comprehensive solutions that align with the dynamic landscape of AI and its associated technologies.

Today, we have access to a plethora of security and governance standards, making the task of aligning AI products with governance and risk requirements during deployment seem straightforward. However, from an audit and compliance perspective, it is important to acknowledge that AI products, including generative AI tools, are constantly evolving. To ensure that the creative potential of AI innovation is not stifled, we must consider a different approach.

Drawing from my extensive experience, specifically in Governance Risk and Compliance (GRC), I advocate that we should approach implementing requirements from a holistic perspective. By developing a set of standards using a hybrid methodology, we can effectively address the intricate challenges posed by AI (UX/UI). Executing a creative approach to governing the risk associated with these elements could pave the way for a more secure and responsible use of AI technology while fostering innovation, safeguarding against potential threats, and ensuring data privacy is protected. It will empower us to stay ahead of evolving risks while promoting ethical and legal practices throughout the AI ecosystem.

Conclusion

In the journey of AI evolution, ethical developments of tools such as ChatGPT and other LLMs stand to be a vital part of our societal ecosystem. Safeguarding the integrity and utility of these models demands a comprehensive strategy. For that reason, it is imperative to underscore the need for a robust approach to governance, to address the inevitable challenges. Existing security or privacy standards might not be enough; a holistic and hybrid methodology is needed for effective governance. The goal is to balance innovation while ensuring data privacy, ethical practices, and legal regulations.

We (technologists) require a strategy that merges security, psychology, and governance, making ethical AI integration both secure and value-driven. Thus, “Safeguarding Ethical Development in ChatGPT and Other LLMs” is not just a mantra; it’s a roadmap, guiding us toward a harmonious integration of AI in our lives, achieved through a comprehensive approach that prioritizes both security and humanity.


Special appreciation:
Major gratitude to my special team (Derrick JacksonNadiya Aleem, and Wood Carrick) for the encouragement and support! Believe it or not, this series was written while I was recovering from robotic open-heart surgery. AI wasn’t just a hot topic to tackle; for me, it was a lifesaver!

Big thanks to Cam Sivesind and the SecureWorld team, and to friends and colleagues who have circulated these articles.

Publication: Integrating Cybersecurity, Psychological Factors, and Governance in AI

Sign Up For Our Newsletter

Newsletter Signup Form

Name(Required)
Posted in

Share this article

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top