Safeguarding Ethical Development in Chat GPT and Other LLMs through a Comprehensive Approach: Integrating Security, Psychological Considerations, and Governance.

Part I of III

In the realm of Generative AI tools, such as Language Learning Models (LLM), it is essential to take a comprehensive approach towards the development and deployment. Three key elements require our attention: security measures, psychological considerations, and governance strategies. By carefully examining the dynamic interactions among these elements, we can highlight the significance of a comprehensive approach that integrates resilient security measures and fosters ethical behavior, user trust, and risk mitigation.

Let’s discuss the first key element, Security Measures:

While AI’s LLMs have proven invaluable in augmenting productivity, research, and data analysis, Technologists must recognize security standards as an unwavering prerequisite for the survival and success of any new technology. As the guardians and stewards of these cutting-edge innovations, it falls upon us to uphold the integrity of our creations in a world where threats are inherent. In a perpetual race where if attackers consistently stay ahead, security will find itself five or more steps behind, making it imperative to fortify our defenses.

Consider a scenario where prompt engineering abuse, specifically the introduction of DAN 13.5 (prompt injection), poses a significant threat to the Generative AI system’s security. DAN 13.5 is a roleplaying model for ChatGPT that allows the AI to assume different roles. Imagine a sophisticated attacker who cunningly injects malicious prompts into an LLM to manipulate its output and deceive unsuspecting users. Hostile threat actors assume the role of a medical provider, financial institution, or other legitimate supplier (impersonation). This subtle but insidious form of attack can lead to the theft of data, IP (Intellectual Property), funds, and dissemination of false information, all of which compromise the credibility of the technology and erodes user trust. To combat such threats effectively, we must adopt a proactive approach to security, constantly anticipating and countering potential vulnerabilities through continuous monitoring and robust safeguards.

Developers of the “new” (who integrate security) vs. “old” (security has less consideration) as a development philosophy; should be on autopilot as it pertains to security measures, where security-by-design is a natural flow of the initial phase of development. Why should AI get a pass on S (Secure) SDLC methodologies? Despite the active contributions of SDLC methodologies (over the past 20 years), such as Waterfall, Agile, V-shaped, Spiral, Big Bang, and others, there remains a lack of security-by-design for integration into AI developments such as: Chat GPT, “DALL-E”, as well as Google’s “Bard”.

Let’s face it; as the world continues to become mesmerized by Generative AI tools, security is not a top priority. Heck, nor was it a priority during the fascination of the first smartwatch.

People/Consumers drive development, not Developers. As Technologists, we are not only aware but understand of this basic concept. If security were being prioritized, the integrated process of allowing “security” requirements to be collected with the “functional” requirements (i.e., risk analysis during design, static code review, and testing in parallel) would be consistent throughout every stage of the development and deployment lifecycle. Adopting a “security-by-design” approach helps to embed robust security measures into the very fabric of technology, fortifying it against threats and vulnerabilities.

Bob Janssen, Vice President, Global Head of Innovation at Delina, wrote an article for CPO Magazine in May 2023. Janssen states, “Open AI has a free-to-use Moderation of API that can help reduce the frequency of unsafe content in completions. Alternatively, you may wish to develop a custom content filtration system tailored to specific use cases.”

He further describes how Chat GPT “IS” a third-party service provider and should be strictly governed and managed as other third-party APIs. He shares, “If the API is not properly secure, it can be vulnerable to misuse and abuse by attackers who can use the API to launch attacks against the enterprise’s systems or to harvest sensitive data. Organizations should ensure that they have appropriate measures in place to protect the API from misuse and abuse.”

As technology continues to evolve, attackers are also upping their game and finding loopholes and gaps in security systems and solutions, this includes Generative AI tools. Fact: This has previously occurred and resulted in attackers gaining access to secure data posing serious threats to organizations.

Drew Todd, Technical Journalist, wrote a recent SecureWorld article that revealed a shocking discovery made by a cybersecurity firm named Group-IB, which unveiled a major security breach that exposed over 100,000 ChatGPT accounts. Todd describes, “The company’s Threat Intelligence platform detected over 100,000 compromised devices with saved ChatGPT credentials traded on illicit Dark Web marketplaces. According to Group-IB, these compromised accounts pose a serious risk to businesses, especially in the Asia-Pacific region, which has experienced the highest concentration of ChatGPT credentials for sale.”

Though the discovery was not US based, it doesn’t lessen the seriousness of the security risks organizations and individuals (consumers) can potentially face. If the organizations want to utilize Generative AI tools (i.e., Chat GPT, “DALL-E”, Bard, etc.) to improve their operations, then the focal lens must stay on security.

I agree with Janssen’s philosophy. Organizations must place security perimeters on these tools and treat them as any other API-integrated solution in their organization.

By doing so, it highlights the consumer’s commitment to their own governance of TPRM (Third Party Risk Management) security measures. Further, in Todd’s article, he points out a recommendation by Dmitry Shestakov, Head of Threat Intelligence at Group-IB, emphasizing the importance of implementing two-factor authentication (2FA), enforcing password change rules, and maintaining continuous prioritization of all sensitive data.

Implementing practical security measures for AI-integrated solutions may seem elementary, but prioritizing governance and risk mitigation is essential for fostering a security mindset. Can organizations enable their workforce to utilize AI tools to support daily workflows, drive innovation, and boost productivity, all while safeguarding critical intellectual property, sensitive data, and digital assets? I firmly believe they can. As generative AI tools have the potential to revolutionize the creation of information and products, the adoption of a security mindset backed by decades of cybersecurity, has already taken root.

By prioritizing security considerations, embracing a proactive security mindset, and fostering collaboration within the technology community, we can strengthen our technological advancements and confidently navigate the future, ensuring a secure and resilient digital landscape. Here are some practical security measures that should be considered:

  1. Develop a roadmap: Outline the key risk of LLM use and identify the areas your company can control. (i.e., include TechOps and Security Team)
  2. Vet LLM Systems: Treat Generative AI systems as another API and incorporate them into the TPRM Third Party Risk Management process with an emphasis on DLM (Data Life Cycle Management) and Privacy.
  3. User Restrictions: Create clear access control and authorization protocols -consider developing supportive LLM guidelines with policies.
  4. Promote LLM Awareness: Education for users (i.e., end-users, technical teams, marketing/sales, etc.) on prompt engineering techniques and potential attacks (i.e., text deepfake, spear phishing, etc.).
  5. Frequent Security testing: Due diligence to include assessing and ensuring data encryption both at rest and in transit, “real-time” threat monitoring and intrusion detection, pen testing, and regular security audits to assess vulnerabilities in the LLM infrastructure and applications.

Note: OWASP recently published a report on the Top 10 for Large Language Model Applications, which might provide further understanding for securing and safeguarding your organization’s environment effectively.

In this ever-changing landscape, where technology evolves at NASCAR speed, our commitment to security resilience becomes even more critical. By staying vigilant and integrating comprehensive security measures, we can mitigate the risks associated with prompt engineering abuse and safeguard the true potential of Generative AI tools. As we continue to march forward as Technologists, let us uphold the paramount importance of security-by-design to create a future where innovation thrives in harmony with trust, integrity, and responsible deployment of AI technologies.


Part II:

We will explore the intricate interactions between security and the human element by diving into the complexities of psychological considerations. These include aspects such as user trust, ethical behavior, privacy, biases in LLM programming, and more.

Publication: Safeguarding Ethical Development in ChatGPT and Other LLMs

Sign Up For Our Newsletter

Newsletter Signup Form

Name(Required)
Posted in

Share this article

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top