Request Software or Technology
On this page
- Maryland’s AI intake process
- Submitting your AI intake request
- Which details should I include in my request to use AI?
- Assessing risk when requesting AI
- Risk assessment matrix
- What happens if my AI use case is high risk?
- How will my AI request be reviewed?
- Approval and next steps
- Monitoring and Managing AI Risk
- Appendix 1: Example Ticket
- Appendix 2: Prompt template for Gemini and guidance document
- Software that my agency uses added new AI features. What should I do?
Maryland’s AI intake process
If you have reviewed Maryland’s responsible AI use policy and want to move forward with an AI use case, then it’s time to proceed through the AI intake process.
This process provides the review framework that powers AI projects and products across the State. It takes a risk-based approach to help state civil servants shepherd their ideas from conception to completion.
Maryland's AI intake process takes an advisory approach based on the risk each use case poses. Low-risk use cases move forward faster, while high-risk use cases receive due diligence and oversight through a collaboration between Maryland's Department of Information Technology (DoIT) and the agencies that request them.
DoIT will improve this process over time, with stronger guardrails and faster throughput to support agencies across Maryland.
Submitting your AI intake request
If you would like to use AI in your work, DoIT can help you gain access. Request software from DoIT in ServiceNow through the following steps. Note that you must be logged into your state email account using the State's two-factor authentication:
- Select the Request a Service box on the left;
- Select the General Inquiry box on the left;
- Submit your request in the What is your question? box.
DoIT will route your request to the AI team, which will review your proposal and work with you to find the best solution for your agency’s use case.
Which details should I include in my request to use AI?
In the What is your question? box, include the following details:
- Explain how your request is an AI use case. (Do you want to buy new software with AI? Gain access to a sandbox so you can compare AI models? Start building a new tool from scratch?)
- Acknowledge you have reviewed Maryland's AI Governance Cards, responsible AI Policy and Implementation Guidance.
- Attach all relevant documentation for this use case.
Once DoIT receives your request, they will send you a form to capture more information about your AI use case. In addition to answering the questions on the intake form, you should provide the following details in your ticket description.
- AI Use Case: Briefly describe the purpose or problem the AI solution aims to address (e.g., process automation, decision support, customer service).
- Business Case: Explain why this AI project is important to your agency. What value or improvements do you predict it will bring? (e.g., cost savings, reduced wait times, enhanced data insights).
- Success Metrics: Identify the key performance indicators or metrics you plan to track (e.g., accuracy rate, user satisfaction, error reduction, time savings).
- Reference Materials / Tools / Solutions: List any relevant studies, tools, existing solutions, or frameworks considered when planning this AI implementation.
- Risk Classification: Describe the risk level you perceive for this AI use case. Maryland's Responsible AI Policy shares the State's risk classification system for reference.
Accessibility Through AI Tip: Copy and paste the prompt template into Gemini along with your information (everything you can provide). Gemini can generate the ticket for you and ask you about any other details to consider. See Appendix 2 for the prompt template.
Assessing risk when requesting AI
DoIT evaluates your AI proposal for risk against its classification matrix. As the requestor, you are expected to assess your AI project's risk by classifying it against DoIT's tiers (such as limited-risk vs. high-risk).
Under SB818 (2024), high-risk AI generally refers to systems that could significantly impact safety, civil liberties, equal opportunities, access to essential services, or privacy. To use the Risk Assessment Matrix, first determine the type of data your AI system will be processing.
To understand the data classification, refer to the Privacy Threshold Analysis and Data Classification Policy. Then, review the definitions of each AI Risk Category to identify which best describes your system's predicted impact.
In some cases, a system may use Level 1 data but still be considered high risk, such as a system that predicts crime hotspots and allocates police resources based solely on publicly available data. Although this system uses public data, it could perpetuate existing biases in policing and lead to over-policing in certain areas, disproportionately affecting specific communities.
If you’re still unsure about your use case’s risk classification, consult with DoIT or default to the higher risk category. Refer to Appendix 1 to see how this information is captured in our example.
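To illustrate this decision flow (determine the data classification, compare it with the predicted impact, and default to the higher risk category when unsure), here is a minimal, hypothetical Python sketch. It is not an official DoIT tool; the field names, tier labels, and thresholds are assumptions for illustration only.

```python
# Hypothetical sketch of the risk-triage flow described above; not an official DoIT tool.
# Data classification levels follow the State's 1 (Public) through 4 (Restricted) scheme.
from dataclasses import dataclass

RISK_TIERS = ["Minimal-Risk", "Limited-Risk", "High-Risk", "Unacceptable"]

@dataclass
class UseCase:
    data_level: int                  # 1 = Public, 2 = Protected/Internal, 3 = Confidential, 4 = Restricted
    affects_rights_or_safety: bool   # e.g., eligibility decisions, law enforcement, civil liberties
    prohibited_purpose: bool         # e.g., unlawful mass surveillance, social scoring

def classify(use_case: UseCase) -> str:
    """Return the stricter of the data-driven and impact-driven risk tiers."""
    if use_case.prohibited_purpose:
        # The risk category overrides any data classification considerations.
        return "Unacceptable"
    # Tier suggested by data sensitivity alone.
    data_tier = "High-Risk" if use_case.data_level >= 3 else (
        "Limited-Risk" if use_case.data_level == 2 else "Minimal-Risk")
    # Tier suggested by the predicted impact, regardless of data level.
    impact_tier = "High-Risk" if use_case.affects_rights_or_safety else "Minimal-Risk"
    # When unsure, default to the higher risk category.
    return max(data_tier, impact_tier, key=RISK_TIERS.index)

# Example: public (Level 1) data, but the system influences policing decisions.
print(classify(UseCase(data_level=1, affects_rights_or_safety=True, prohibited_purpose=False)))
# -> High-Risk
```

In the example call, a system using only public (Level 1) data still classifies as High-Risk because it affects individual rights, mirroring the predictive-policing example above.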
Risk assessment matrix
Unacceptable Risk
Systems posing extreme risks to public welfare, safety, or rights that cannot be mitigated. They may violate fundamental rights (e.g., unlawful surveillance, social scoring) or fail to align with State values. These systems are banned entirely, regardless of data classification, as no combination of safeguards can offset the risk they pose. Use cases that result in violations of law or core civil liberties are not permissible.
- Data Classification Type(s): No Relevant Data Classification
- There is no corresponding data classification that suggests this risk label. Even if the data in question might normally be classified at Level 1, 2, 3, or 4, the system itself is not permissible due to the nature of its risk, rather than the sensitivity of the data.
For instance, a system that attempts to track or socially score individuals en masse would be “unacceptable,” even if it only accessed public (Level 1) data. The risk category overrides any data classification considerations.
- Use Case Example(s)
- Unlawful Mass Surveillance: For example, an AI tool that attempts to track constituents’ every movement via facial recognition in real time without oversight or statutory authority.
- Uncontrolled Social Scoring: A system that ranks or penalizes constituents’ daily behavior, akin to certain types of “social credit,” contradicting fundamental principles of fairness and privacy.
(Even if such a system used largely public data, it would remain “unacceptable” based on the AI policy’s ethical and legal prohibitions.)
High-Risk
AI systems that significantly affect individuals or critical government operations. These systems often influence outcomes related to health, safety, law enforcement, eligibility for essential services, privacy, financial or legal rights, or other high-impact areas (civil rights, civil liberties, equal opportunities, access to critical resources, or privacy).
- Mandatory Safeguards
- Comprehensive Algorithmic Impact Assessment (AIA) before deployment.
- Risk mitigation measures (e.g., bias testing/correction, cybersecurity hardening, etc.).
- Ongoing monitoring, with human-in-the-loop oversight for critical decisions.
- Aligns with NIST AI RMF best practices (mapping, measuring, mitigating, and managing risk continually).
- Data Classification Type(s): Likely Level 4 (Restricted) or Level 3 (Confidential)
- Level 4 (Restricted) data involves severe impacts if disclosed (e.g., Criminal Justice Information (CJI), Federal Tax Information (FTI)). Unauthorized access could cause irreparable harm, may carry legal or criminal consequences, and is typically protected by strict laws (IRS Publication 1075, CJIS Security Policy, etc.).
- Level 3 (Confidential) data includes sensitive information (PII, PHI, financial/student data), which is protected by law from disclosure. Unauthorized access can severely harm individuals or the State and requires safeguards such as encryption at rest/in transit. AI systems that process or make critical decisions using these high-sensitivity data levels fit squarely into High-Risk.
- Use Case Example(s)
- AI for Eligibility Determinations: A tool that decides whether individuals qualify for social services (e.g., Medicaid, housing assistance). It uses Confidential (Level 3) data like PII and financial records. Because erroneous or biased decisions could seriously impact constituents, it is “High-Risk” and requires a full Algorithmic Impact Assessment (AIA).
- Law Enforcement Investigative AI: An AI system that taps into Restricted (Level 4) data (e.g., Criminal Justice Information) to recommend suspect lists or inform sentencing guidelines. Such usage has high stakes for individual rights and must be thoroughly vetted, tested for bias, and subject to human oversight.
- Financial Lending/Underwriting Tool: An AI that sets loan terms for constituents or businesses, using robust sets of personal/financial data. A misclassification, bias, or data breach could cause serious harm, making it High-Risk.
Limited-Risk
AI systems with moderate or low overall impact that do not autonomously decide critical outcomes for individuals (e.g., no direct control over law enforcement, healthcare coverage, or major financial decisions; a human must remain the decision maker for any AI output). Typically used to improve internal efficiency or enhance customer service, with minimal chance of harm if errors occur.
- Mandatory Safeguards
- Light-touch requirements: transparency (e.g., disclose use of AI in chatbots), basic testing for fairness/accuracy.
- If usage expands or new risks emerge, reclassify as High-Risk.
- Data Classification Type(s): Primarily Level 2 (Protected / Internal Use Only)
- Level 2 data might include internal agency documents (draft analyses, budget proposals, memos) or other data not meant for broad public release but not legally restricted. Unauthorized disclosure could result in some form of negative impact, but not on par with potential harms related to PII or CJI.
- Could occasionally involve Level 1 (Public) data if the system uses openly available information but still requires moderate protective measures (e.g., an internal tool that references public stats but must remain behind an internal firewall).
- In general, Limited-Risk systems do not handle highly confidential or restricted data.
- Use Case Example(s)
- AI-Driven FAQ Chatbot (External): A virtual assistant to help the public find information on agency services. It may use a combination of Level 1 public data and some Level 2 internal knowledge base. Impact is moderate—errors might cause confusion, but not irreparable harm.
- Internal Analytics Dashboard: Aggregates departmental performance metrics from Protected/Internal (Level 2) data (like draft budget files) to present dashboards for leadership. Misclassification could cause moderate risk, but it typically doesn’t impact individual rights in a critical way.
- Customer Service Triage Tool: Routes inquiries or complaints internally based on fairly routine categories. The data is mostly administrative and does not involve legally protected personal information.
Minimal-Risk
AI applications that pose negligible risk, usually embedded in standard office software or used for non-critical, purely internal tasks (e.g., spam filters, grammar correction). They do not impact individual rights or major government processes if they malfunction.
- Safeguards
- No special AI-specific approvals beyond standard IT/security reviews.
- Still recommended to track/catalog these tools so that if they malfunction, they can be reassessed for potential reclassification.
- Data Classification Type(s): Most Commonly Level 1 (Public) Data, Possibly Low-Sensitivity Level 2
- Level 1 (Public) data: Data that can be freely shared without negative consequences (e.g., open datasets, publicly available information).
- If the tool processes Level 2 (Internal Use) data, it is typically in ways that present extremely low risk (e.g., scanning a publicly available document for grammar suggestions).
- Minimal-risk AI does not rely on sensitive personal data or restricted data.
- Use Case Example(s)
- Email Spam Filter: Uses low-level AI to sort phishing from legitimate email. Even if it occasionally flags a legitimate message, the risk is negligible. Operates on routine data that is not highly sensitive in bulk form.
- Spell Checker / Grammar Tool: An AI embedded within a word processor that checks for mistakes. The data involved is primarily open or low-sensitivity text, meaning minimal-risk if misclassified or leaked.
- Automated Meeting Scheduler: An AI that finds open calendar slots among staff. The data is mostly staff names and times, not restricted or confidential. If it fails or leaks, the harm is minimal.
What happens if my AI use case is high risk?
These projects require deeper due diligence and oversight. DoIT will review your proposal in depth and ask for additional documentation and testing (like our Algorithmic Impact Assessment and Privacy Impact Assessment). High-risk AI proposals must demonstrate they have robust safeguards. You can complete the AIA as part of your intake submission to speed up DoIT's review process.
How will my AI request be reviewed?
During intake, DoIT will work jointly with your agency to identify risks and propose risk mitigation strategies. For example, DoIT might require your agency to try a proof of concept first (for high-risk AI), or to put specific privacy measures in place.
At this stage, the agency and oversight reviewers collaborate to refine the plan so it meets compliance requirements and responsible use standards without stifling innovation.
Approval and next steps
Once your proposal satisfies the necessary conditions, it is approved for implementation via the appropriate pathway:
- Some projects (especially limited-risk ones) might receive a green light to proceed directly to development or procurement.
- High-risk projects might receive conditional approval, such as to run a proof of concept or to deploy with continuous monitoring requirements. DoIT will clarify those requirements with your agency throughout this review process.
Monitoring and Managing AI Risk
Monitoring and managing risk is a continuous phase that kicks in once an AI system is operating in your agency. Governance doesn’t stop at deployment; agencies should have ongoing oversight, auditing, and risk mitigation for high-risk AI systems. This ensures that AI which was acceptable at launch remains safe and effective as conditions change.
Synthetic products generated by AI systems are governed by State of Maryland policies on records management. A record is any documentary material created or received by an agency in connection with the transaction of public business. Agencies will need to consult their record retention schedules regarding retention and disposal requirements. Each agency is required to have a Records Officer from among its executive staff who is responsible for developing and overseeing the agency's records management program and serves as a liaison to the Maryland Archives, and that officer should be consulted as well.
This framework will integrate with existing risk management processes and provide tools for evaluating AI-specific risks (like bias, security vulnerabilities, reliability issues). As part of monitoring, you should revisit this assessment and risk classification to ensure there are no changes from your initial assessment.
Set up processes to continuously monitor the AI system’s performance and behavior.
- Performance Metrics: Track metrics that indicate if the AI is working as intended. For example, the accuracy of predictions, the number of errors or exceptions, response time, uptime, etc., depending on the system. A drift in these metrics could signal a problem (e.g., the model's accuracy might degrade over time as real-world data evolves); see the sketch after this list.
- Outcome Auditing: Regularly audit the outcomes of the AI for fairness and correctness. If it’s supporting decision making, sample those decisions periodically and ensure they align with policy and there’s no systemic bias. This might be done by an internal audit team or in some cases by an external auditor or an AI ethics board. Bias mitigation is not a one-time task; continuous auditing helps catch emergent biases or unintended discrimination.
- User Feedback: Provide channels for users (employees or public users) to report issues with the AI. For instance, if an employee thinks the AI recommendation system is making odd suggestions, they should know how to flag this. Public-facing AI in particular should have a feedback mechanism, and your agency should monitor if there are complaints or questions coming in related to the AI’s decisions.
- Error Logs and Incident Response: Monitor error logs or any fail-safe triggers. If, for example, the AI system has a fallback when it's unsure (e.g., escalates to a human or refuses to answer), log those events and review them. Establish an incident response plan for AI issues: if something potentially harmful or unethical is detected (like the AI made a privacy violation or a biased decision), have a plan to immediately intervene (disable the AI function if needed), analyze the issue, and inform the necessary oversight bodies. If individuals might have been negatively impacted by a high-risk AI, notify DoIT; the agency, with DoIT's assistance as needed, will guide affected individuals. Loop in DoIT as part of your incident protocol for major issues.
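As a minimal sketch of what such monitoring might look like in practice (not part of DoIT's required process), the hypothetical Python snippet below tracks a rolling accuracy metric against a success-metric target like the one in Appendix 1 and logs an incident when the metric drifts below it. The target value, window size, and function names are assumptions for illustration.

```python
# Hypothetical monitoring sketch for the practices above; not an official DoIT tool.
# Tracks a rolling accuracy metric and flags drift against the accuracy target
# recorded in the intake ticket (e.g., the 90% target in Appendix 1).
from collections import deque
from datetime import datetime, timezone

ACCURACY_TARGET = 0.90   # assumption: target taken from the use case's success metrics
WINDOW = 500             # assumption: number of recent interactions to evaluate

recent_outcomes = deque(maxlen=WINDOW)  # True = correct answer, False = error
incident_log = []                       # reviewed as part of outcome auditing

def record_outcome(correct: bool, detail: str = "") -> None:
    """Record one interaction and log an incident if accuracy drifts below target."""
    recent_outcomes.append(correct)
    accuracy = sum(recent_outcomes) / len(recent_outcomes)
    if len(recent_outcomes) == WINDOW and accuracy < ACCURACY_TARGET:
        incident_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "metric": "accuracy",
            "value": round(accuracy, 3),
            "detail": detail or "rolling accuracy fell below the intake target",
        })
        # In practice this is where the incident response plan kicks in:
        # escalate to a human reviewer and notify DoIT for high-risk systems.

# Example: simulate a run of interactions and inspect the log afterward.
for i in range(WINDOW):
    record_outcome(correct=(i % 5 != 0))  # roughly 80% accuracy, below the 90% target
print(len(incident_log), "incident(s) flagged")
```

The same pattern extends to other metrics named above (error counts, response time, uptime); the key design choice is that drift produces a logged, reviewable incident rather than a silent degradation.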
Maryland law specifically requires agencies using high-risk AI to conduct regular impact assessments of those systems. This should be done at least annually and can be completed by re-evaluating the system via the Algorithmic Impact Assessment.
Be ready to adjust or even deactivate the AI if risks become unacceptable. Managing AI risk is an active process:
- If monitoring shows performance issues, plan a model update or retraining. (Coordinate with the vendor if it’s their product – ensure they provide timely updates or patches.)
- If an audit reveals bias, take corrective action: this could be data augmentation, tweaking decision thresholds, or adding a rule-based layer to counteract the bias. For example, if a recruiting AI is scoring female candidates lower, you might adjust the model or introduce a step to ensure gender-neutral evaluation criteria.
- For security risks (e.g., adversarial attacks or data leaks through the AI), work with your cybersecurity team and DoIT’s Office of Security Management. As AI systems can introduce new attack surfaces, ensure they are part of your cybersecurity audits.
Maintain transparency in your AI operations as part of risk management:
- Continue to update the public AI inventory with current information about the system (see the Transparency section). If the system changes significantly (new version or new uses), that should be reflected.
- Be prepared to share information about the AI’s performance and governance with oversight entities. The AI Subcabinet or legislative committees may request updates or conduct reviews. Having your monitoring data and assessment reports organized will make these interactions smoother and demonstrate your agency’s accountability.
- If appropriate, publish summary results of your AI’s audits or impact assessments on your agency website or in public reports.
Appendix 1: Example Ticket
AI Use Case
We plan to implement an AI-powered chatbot on our public-facing website. The chatbot will assist constituents by:
- Answering frequently asked questions (FAQ)
- Guiding users to relevant web pages, forms, and applications
- Streamlining service requests (e.g., scheduling appointments or accessing records).
The primary goal is to improve customer service efficiency, reduce response times, and ensure constituents get accurate, real-time information about our department’s services.
Business Case
Our department receives a high volume of routine inquiries (e.g., questions about licensing processes, application requirements, and eligibility criteria). Currently, staff must respond to every inquiry via phone or email, leading to backlogs and longer wait times. Deploying an AI chatbot will help our agency gain:
- Cost Savings: Reduce the need for additional customer support staff and free existing staff to handle more complex cases.
- Improved Efficiency: Constituents get quick answers online, 24/7.
- Enhanced Accessibility: The chatbot can provide immediate assistance to users who may have difficulty navigating the site or are unfamiliar with government terminology.
Overall, the chatbot supports our commitment to faster, more transparent services while helping staff dedicate their efforts to high-value tasks.
Success Metrics
- Response Accuracy Rate: Target of at least 90% correct answers for FAQ-related queries.
- User Satisfaction Rating: Gather feedback via optional survey prompts at the end of chat sessions (aim for >80% satisfaction).
- Reduction in Support Tickets: Reduce phone/email inquiries by 30% within the first six months.
- Average Handling Time: Maintain or improve average resolution time compared to existing manual processes.
Risk Level: Limited Risk - The data involved is a combination of public information (Level 1) and internal knowledge (Level 2), as stated in the data classification for Limited Risk.
Appendix 2: Prompt template for Gemini and guidance document
"Hello, I'm [Your name] from [Your agency]. I need help preparing an AI use case ticket for my agency, following the guidance in the Maryland Responsible AI Implementation document. Here is the document I'm working with: [PASTE THE ENTIRE DOCUMENT CONTENT HERE].
Based on the information in this document, and specifically for my role at [Your agency], please:
- Provide me with a template for writing an AI use case ticket, including the key sections I need to cover (AI Use Case, Business Case, Success Metrics, etc.) and questions I should answer.
- Help me brainstorm potential AI use cases specific to my agency, [Your agency]. What are some areas of our agency operations where AI can improve our work and service delivery?
- Give me advice on how to think about these AI use cases within my agency, [Your agency], considering the State of Maryland's risk levels and data classifications. What are some examples of limited-risk versus high-risk AI applications that would be relevant for us?
- Offer tips to identify appropriate success metrics for an AI project, keeping in mind [Your agency]'s specific functions.
- How can I best leverage Gemini to help write my ticket and refine my thinking about potential AI use cases?"
Software that my agency uses added new AI features. What should I do?
Most modern software has AI features and adds new ones often. If your agency uses DoIT-approved software that adds AI features, you are free to use them in accordance with Maryland's responsible AI use policy. All state employees must use AI in accordance with this policy, regardless of the specific tool.
In cases where your agency handles personal information (PI), you can book office hours with Maryland’s AI Enablement team. They will work with you to use AI productively while keeping data safe.