Mike Canfield, Morgan Stanley’s Head of Europe Sustainability Research, discusses why ensuring safe and responsible artificial intelligence is essential to the AI revolution.
----- Transcript -----
Mike Canfield: Welcome to Thoughts on the Market. I'm Mike Canfield, Morgan Stanley's Europe, Middle East and Africa Head of Sustainability Research.
Today I'll discuss a critical issue on a hot topic: How safe is AI?
It's Thursday 10th of October at 2pm in London.
AI is transforming the way that we live, work, and connect. It really has the potential to reshape every level and aspect of society, from personal decisions to global security. But as these systems become ever more integrated into our critical functions – whether that's healthcare, transportation, finance, or even defense – we do need to develop and deploy safe AI that keeps pace with the velocity of technological advances.
Market leaders, academic think tanks, NGOs, industry bodies, and intergovernmental organizations have all attempted to codify what safe or responsible AI should look like. But at the most fundamental level, the guidelines and standards we've seen so far share a number of clear similarities. Typically, they focus on fostering innovation in practical terms, as well as supporting economic prosperity – but they also assert the need for AI systems to respect fundamental human rights and values and to demonstrate trustworthiness.
So where are we now in terms of regulations around the world?
The EU's AI Act leads the way with its detailed risk-based approach. It really focuses on transparency as well as risks to people and fundamental rights. In the USA, while there's no comprehensive federal regulation or legislation, there are some federal laws that offer sector-specific guidance on AI applications – things like the National Defense Authorization Act of 2019 and the National AI Initiative Act of 2020. Alongside those, President Biden has issued an executive order on AI, promoting safety and responsible innovation, and supporting Americans and their rights, including things like privacy. In Asia Pacific, meanwhile, countries are working to establish their own guidelines on consumer protection, privacy, transparency, and accountability.
In general, it’s very clear that policymakers and regulators increasingly expect AI systems developers to adopt what we'd call the socio-technical approach, focused on the interaction between people and technology. Having examined numerous existing regulations and foundational standards from around the world, we think a successful policymaking approach requires the combination of four core conceptual pillars.
We've called them STEP: Safety, Transparency, Ethics, and Privacy. With these core considerations, AI can take a step – pun intended – in the right direction. Within safety, the focus is on the reliability of systems, avoiding harm to people and society, and preventing misuse or subversion. Transparency includes a component of explainability and accountability – so, systems allowing for future feedback and audits of outcomes. Ethically, the avoidance of bias, the prevention of discrimination, inclusion, and respect for the rule of law are key components. Finally, privacy considerations include elements like data protection, safeguards during operation, and allowing users to consent to their data being used for training.
Of course, policymakers contend with a variety of challenges in developing AI regulations – issues like bias and discrimination, implementing guardrails without stifling innovation, the sheer speed at which AI is evolving, legal responsibility, and much more besides. At its most basic, though, arguably the most critical challenge of regulating AI systems is that the logic behind outcomes is often unknown, even to the creators of AI models, because these systems are intrinsically designed to learn.
Ultimately, ensuring safety and responsibility in the use of AI is an essential step before we can really tap into ways AI could positively impact society. Some of these exciting opportunities include things like improving education outcomes, smart electric grid management, enhanced medical diagnostics, precision agriculture, and biodiversity monitoring and protection efforts. AI clearly has enormous potential to accelerate drug development, to advance materials science research, to boost manufacturing efficiency, improve weather forecasting, and even deliver better natural disaster predictions.
In many ways, we need guardrails around AI to maximize its potential growth.
Thanks for listening. If you enjoy the show, please do leave us a review wherever you listen and share Thoughts on the Market with a friend or colleague today.