Using AI Responsibly
Maryland’s vision for responsible AI
AI is evolving rapidly, as are both the opportunities it affords and the risks it presents.
From the start, Maryland's AI efforts have been driven by two goals: improving the ways we design and deliver services for Marylanders, and upskilling our own state workforce. Maryland commits to using this powerful technology in ways that are responsible, ethical, beneficial, and trustworthy. All of our work aims to build the capabilities that bring these goals to life.
In January 2024, Governor Wes Moore signed an executive order that roots all state use of AI in a set of foundational principles. Our state AI policies and governance begin from this baseline to ensure we “first do no harm.”
Fairness and equity
The State's use of AI must acknowledge that AI systems can perpetuate harmful biases. Maryland takes steps to mitigate those risks in order to avoid discrimination or disparate impact on individuals or communities based on their race, color, ethnicity, sex, religion, age, ancestry or national origin, disability, veteran status, marital status, sexual orientation, gender identity, genetic information, or any other classification protected by law.
Innovation
When used responsibly in human-centered and mission-aligned ways, AI can be a tremendous force for good. The State commits to exploring ways AI can be leveraged to improve State services and resident outcomes.
Privacy
The State's use of AI should preserve individuals' privacy rights by design, while ensuring that data creation, collection, and processing are secure and comply with all applicable laws and regulations.
Safety, security, and resiliency
AI presents new challenges and opportunities to ensure the safety and security of Maryland residents, infrastructure, systems, and data. The State commits to adopting best practice guidelines and standards to surface and mitigate safety risks stemming from AI, while ensuring AI tools are resilient to threats.
Validity and reliability
AI systems can change over time. The State should have mechanisms to ensure that these systems are working as intended, with accurate outputs and robust performance.
Transparency, accountability, and explainability
The State's use of AI should be clearly and regularly documented and disclosed. The outputs of AI systems in use by the State should be explainable and interpretable to oversight bodies and residents, with clear human oversight. This builds trust with the residents and state staff we serve and provides transparency into how we use AI in our work.