A traditional approach to integrating security and privacy will not work for AI systems. These systems suffer from a number of novel, unresolved vulnerabilities. Contact us to help build SPIT into your AI systems using our framework.

SPIT © Pamela Gupta 2019. All Rights Reserved.

I am concerned about the lack of guidance on building security into Machine Learning systems, along with other critical controls around Privacy, Integrity and Transparency. Secure SDLC for traditional development took years to become formalized and adopted; we don't have the luxury of waiting, as ML systems are already deployed and carry much higher impact and riskier implications.

As AI and its subset, Machine Learning, continue to increase in breadth and depth around us – from systems used in courts across the country to help determine length of incarceration, to connected home devices such as Alexa, Siri and Google Home – one glaring gap and risk is security in the development of these systems. A traditional security SDLC is not going to be sufficient to identify security and privacy vulnerabilities in these systems. I know this because some of my "Mona Lisas" in security work have been creating holistic, risk-based security development programs for Fortune 500 companies.

Artificial Intelligence systems require a different approach – one that includes traditional security methods such as access control, but more, a lot more. I am proposing a model that builds four critical components into the development process – Security, Privacy, Integrity and Transparency – so we can ensure secure, resilient systems with outcomes we can trust.

I am very excited that this approach has been published by PenTest Magazine.

Please contact me with your thoughts on this critically needed framework.

#artificial_intelligence #security #aiethics #privacy