In a rapidly evolving AI landscape, Carlos Torres, Mozilla’s Chief Legal Officer, shares his expertise on critical issues shaping the future of AI governance, the balance between open and proprietary AI, and the transformative potential of AI for in-house legal teams. From advocating for trust, transparency, and equitable access in regulation to exploring how AI can elevate the legal profession, Carlos emphasizes the need for thoughtful, balanced approaches to ensure AI serves the public interest while fostering innovation.
Merlin Beyts: You’ve focused a lot on AI governance, particularly around open AI. What do you think are the most important considerations when it comes to the regulation of AI?
Carlos Torres: Any regulation in AI should prioritize trustworthiness, promote openness, and ensure equitable access that is in the public interest. Some of the considerations that can help harness AI’s potential while safeguarding societal interests are:
- Trustworthiness and Accountability: Regulation should ensure AI systems are trustworthy, prioritizing human agency and accountability. Clear guidelines and oversight mechanisms are essential to prevent misuse and harm, much of which already occurs today.
- Openness and Transparency: Promoting open-source AI can accelerate innovation by creating shared building blocks and enabling critical public-interest research. Regulations must embrace meaningful transparency, even beyond openness, to facilitate scrutiny and accountability.
- Market Access: Ensuring that AI development is not monopolized by a few large companies is crucial. Regulations should facilitate access to necessary resources like computing power and data for smaller players, especially those in the Global Majority.
- Privacy Protection: Strong privacy regulations are essential to prevent a race to the bottom that would benefit only the largest, entrenched players. AI must be developed in ways that respect individuals’ privacy rights and responsibly steward collected data.
- International Cooperation: Collaborative international frameworks can ensure coherent and inclusive AI governance while aiding global compliance. These should involve diverse voices and avoid capture by vested interests through genuine multi-stakeholder input.
- Proportional Regulation: Regulatory obligations should be proportionate, especially for open-source AI, to avoid stifling grassroots innovation while ensuring safety and accountability across the AI development stack.
Merlin Beyts: What do you think about the debate over open AI vs proprietary AI? Can the two work hand in hand?
Carlos Torres: “Open” AI and proprietary AI can absolutely co-exist and complement each other. Openness is not a problem but part of the solution to creating a more trustworthy AI ecosystem. Open development practices drive innovation by providing accessible tools for researchers and smaller players, fostering a diverse and competitive ecosystem. They also enhance transparency and accountability, allowing for greater scrutiny and trust in AI systems. Proprietary AI, on the other hand, offers specialized, commercially driven solutions that can address specific market needs. Proprietary systems can learn from open practices, adopting transparency and collaborative approaches to improve their offerings and build trust with users.
By leveraging the strengths of both open and proprietary AI, we can create a balanced ecosystem where innovation thrives, safety and fairness are prioritized, and the public interest is upheld. Balanced regulation is key to ensuring that both approaches contribute positively, promoting responsible development and widespread benefits from AI technology.
Merlin Beyts: How do you think AI (in any form) will change the working lives of in-house legal teams over the next 3 to 5 years?
Carlos Torres: I’m thinking of this from first principles: an in-house lawyer should ideally be a trusted advisor to the business. I look at AI as something that complements and augments the work in-house legal teams do, and as an opportunity for in-house legal teams to demonstrate leadership in this new space. AI can be leveraged in many ways now, including managing routine tasks, developing stronger and more efficient compliance programs, summarizing cases and regulations, and deploying more robust contract management systems and processes. New AI technologies for lawyers are being developed every day, and like any technology, AI must be used in a manner that conforms to a lawyer’s professional responsibility obligations, such as confidentiality and privacy, and duties of competence and diligence (i.e., accuracy). In other words, AI needs to be used thoughtfully.
Nonetheless, I see the use of AI as elevating the role of the legal department. In-house lawyers will get back more time to focus on delivering strategic advice and becoming more valuable partners to the company. AI is also a leadership opportunity. In-house legal teams can demonstrate leadership by guiding their company and stakeholders on the implementation of safe and responsible AI governance programs.
Thinking back over the last few decades, business productivity technologies like redlining and e-discovery software have allowed in-house lawyers to focus on higher-value work, and I believe AI will do the same at an exponential level. It’s an exciting time to practice law in-house by adopting new skills and embracing AI thoughtfully.
Read the full LegalTech Diaries Volume 6: https://www.legaltech-talk.com/wp-content/uploads/2024/11/LegalTech-Diaries-Volume-6.pdf