An official website of the State of Maryland.

Request Software or Technology

Maryland’s AI intake process

If you have reviewed Maryland’s responsible AI use policy and want to move forward with an AI use case, then it’s time to proceed through the AI intake process.

This process provides the review framework that powers AI projects and products across the State. It takes a risk-based approach to help state civil servants shepherd their ideas from conception to completion. 

Maryland's AI intake process takes an advisory approach based on risk to each user group. Low-risk use cases move forward faster, while high-risk use cases receive due diligence and oversight as a collaboration between Maryland's Department of Information Technology (DoIT) and the agencies that request each use case.

DoIT will improve this process over time, with stronger guardrails and faster throughput to support agencies across Maryland. 

Submitting your AI intake request

If you would like to use AI in your work, DoIT can help you gain access. Request software from DoIT through the following steps via ServiceNow. Note that you must be logged into your state email account using the State's two-factor authentication: 

  1. Select the Request a Service box on the left;
  2. Select the General Inquiry box on the left;
  3. Submit your request in the What is your question? box.

DoIT will route your request to the AI team, which will review your proposal and work with you to find the best solution for your agency’s use case.

Which details should I include in my request to use AI?

In the What is your question? box, include the following details:

  • Explain how your request is an AI use case. (Do you want to buy new software with AI? Gain access to a sandbox so you can compare AI models? Start building a new tool from scratch?)
  • Acknowledge that you have reviewed Maryland's AI Governance Cards, Responsible AI Policy, and Implementation Guidance.
  • Attach all relevant documentation for this use case.

Once DoIT receives your request, they will send you a form to capture more information about your AI use case. In addition to answering the questions on the intake form, you should provide the following details in your ticket description.

  • AI Use Case: Briefly describe the purpose or problem the AI solution aims to address (e.g., process automation, decision support, customer service).
  • Business Case: Explain why this AI project is important to your agency. What value or improvements do you predict it will bring? (e.g., cost savings, reduced wait times, enhanced data insights).
  • Success Metrics: Identify the key performance indicators or metrics you plan to track (e.g., accuracy rate, user satisfaction, error reduction, time savings).
  • Reference Materials / Tools / Solutions: List any relevant studies, tools, existing solutions, or frameworks considered when planning this AI implementation.
  • Risk Classification: Describe the risk level you perceive for this AI use case. Maryland's Responsible AI Policy shares the State's risk classification system for reference.

Accessibility Through AI Tip: Copy and paste the prompt template into Gemini along with your information (everything you can provide). Gemini can generate the ticket for you and ask any other questions you should consider. Review Appendix 2 to see the prompt template.

Assessing risk when requesting AI

DoIT evaluates your AI proposal for risk against its classification matrix. As the submitting requestor, you are expected to assess your AI project’s risk by classifying it against DoIT’s tiers (such as limited-risk vs. high-risk).

Under SB818 (2024), high-risk AI generally refers to systems that could significantly impact safety, civil liberties, equal opportunities, access to essential services, or privacy. To use the Risk Assessment Matrix, first determine the type of data your AI system will be processing.

To understand the data classification, refer to the Privacy Threshold Analysis and Data Classification Policy. Then, review the definitions of each AI Risk Category to identify which best describes your system's predicted impact.

In some cases, a system may use Level 1 data but still be considered high risk, such as a system that predicts crime hotspots and allocates police resources based solely on publicly available data. Although this system uses public data, it could perpetuate existing biases in policing and lead to over-policing in certain areas, disproportionately affecting specific communities.

If you’re still unsure about your use case’s risk classification, consult with DoIT or default to the higher risk category. Refer to Appendix 1 to see how this information is captured in our example.
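
The decision process above can be sketched in code. This is an illustrative helper only, not DoIT's actual tooling: the function name, inputs, and tier strings are assumptions made for this sketch, while the override rule (risk category trumps data classification) and the "default to the higher tier when unsure" guidance come from the policy text.

```python
# Hypothetical sketch of the risk-classification reasoning described above.
# The function and its inputs are illustrative, not an official DoIT tool.

def classify_risk(data_level: int, significant_impact: bool,
                  violates_rights: bool) -> str:
    """Suggest a starting risk tier for an AI use case.

    data_level: State data classification (1 = public ... 4 = most sensitive).
    significant_impact: could the system significantly affect safety, civil
        liberties, equal opportunity, essential services, or privacy?
    violates_rights: would the system violate law or core civil liberties
        (e.g., unlawful mass surveillance, social scoring)?
    """
    if violates_rights:
        # The risk category overrides any data classification consideration.
        return "unacceptable"
    if significant_impact:
        # High risk even for Level 1 (public) data, per the policing example.
        return "high"
    if data_level >= 3:
        # More sensitive data defaults to the higher tier when in doubt.
        return "high"
    return "limited"

# A crime "hotspot" predictor using only public data is still high risk:
print(classify_risk(data_level=1, significant_impact=True, violates_rights=False))
```

Note how the first two checks ignore `data_level` entirely, mirroring the policy's point that public data alone does not make a system low risk.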

Risk assessment matrix

Unacceptable Risk

Systems posing extreme risks to public welfare, safety, or rights that cannot be mitigated. They may violate fundamental rights (e.g., unlawful surveillance, social scoring) or fail to align with State values. These systems are banned entirely, regardless of data classification, as no combination of safeguards can offset the risk they pose. Use cases that result in violations of law or core civil liberties are not permissible.

  • Data Classification Type(s): No Relevant Data Classification
    • There is no corresponding data classification that suggests this risk label. Even if the data in question might normally be classified at Level 1, 2, 3, or 4, the system itself is not permissible due to the nature of its risk, rather than the sensitivity of the data.
      For instance, a system that attempts to track or socially score individuals en masse would be “unacceptable,” even if it only accessed public (Level 1) data. The risk category overrides any data classification considerations. ​
  • ​​Use Case Example(s)
    • Unlawful Mass Surveillance: For example, an AI tool that attempts to track constituents’ every movement via facial recognition in real time without oversight or statutory authority.​
    • Uncontrolled Social Scoring: A system that ranks or penalizes constituents’ daily behavior, akin to certain types of “social credit,” contradicting fundamental principles of fairness and privacy.​
      (​Even if such a system used largely public data, it would remain “unacceptable” based on the AI policy’s ethical and legal prohibitions.) ​

What happens if my AI use case is high risk?

These projects require deeper due diligence and oversight. DoIT will review your proposal in depth and ask for more documentation and testing (like our Algorithmic Impact Assessment and Privacy Impact Assessment). High-risk AI proposals must demonstrate robust safeguards. You can complete the Algorithmic Impact Assessment (AIA) as part of your intake submission to speed up DoIT's review process.

How will my AI request be reviewed?

During intake, DoIT will jointly work with your agency to identify risks and propose risk mitigation strategies. For example, DoIT might require your agency to try a proof of concept first (for high-risk AI), or to put specific privacy measures in place.

At this stage, the agency and oversight reviewers collaborate to refine the plan so it meets compliance requirements and responsible use standards without stifling innovation.​​​​

Approval and next steps

Once your proposal satisfies the necessary conditions, it is approved for implementation via the appropriate pathway:

  • Some projects (especially limited-risk ones) might receive a green light to proceed directly to development or procurement.​
  • High-risk projects might receive conditional approval, such as to run a proof of concept or to deploy with continuous monitoring requirements.​​ DoIT will clarify those requirements with your agency throughout this review process.

Monitoring and Managing AI Risk

Monitoring and managing risk is a continuous phase that kicks in once an AI system is operating in your agency. Governance doesn’t stop at deployment; agencies should have ongoing oversight, auditing, and risk mitigation for high-risk AI systems. This ensures that AI which was acceptable at launch remains safe and effective as conditions change.​
 

AI outputs and records management

Synthetic products generated by AI systems are governed by the State of Maryland's records-management policies. A record is any documentary material created or received by an agency in connection with the transaction of public business. Agencies will need to consult their records retention schedules for retention and disposal requirements. Each agency is required to have a Records Officer from among its executive staff who is responsible for developing and overseeing the agency’s records management program and serves as a liaison to the Maryland State Archives; the Records Officer should be consulted as well.

Appendix 1: Example Ticket

AI Use Case

We plan to implement an AI-powered chatbot on our public-facing website. The chatbot will assist constituents by:​

  • Answering frequently asked questions (FAQ)
  • Guiding users to relevant web pages, forms, and applications
  • Streamlining service requests (e.g., scheduling appointments or accessing records).

The primary goal is to improve customer service efficiency, reduce response times, and ensure constituents get accurate, real-time information about our department’s services.

Business Case

Our department receives a high volume of routine inquiries (e.g., questions about licensing processes, application requirements, and eligibility criteria). Currently, staff must respond to every inquiry via phone or email, leading to backlogs and longer wait times. Deploying an AI chatbot will help our agency gain:

  • Cost Savings: Reduce the need for additional customer support staff and free existing staff to handle more complex cases.
  • ​Improved Efficiency: Constituents get quick answers online, 24/7.​
  • Enhanced Accessibility: The chatbot can provide immediate assistance to users who may have difficulty navigating the site or are unfamiliar with government terminology.​​

Overall, the chatbot supports our commitment to faster, more transparent services while helping staff dedicate their efforts to high-value tasks.

Success Metrics

  • Response Accuracy Rate: Target of at least 90% correct answers for FAQ-related queries.​
  • User Satisfaction Rating: Gather feedback via optional survey prompts at the end of chat sessions (aim for >80% satisfaction).​
  • Reduction in Support Tickets: Reduce phone/email inquiries by 30% within the first six months.​
  • Average Handling Time: Maintain or improve average resolution time compared to existing manual processes.​​
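
Concrete targets like these can be checked mechanically once pilot data comes in. The sketch below is illustrative only: the field names and the `metrics_met` helper are assumptions for this example (not part of any DoIT form), while the thresholds mirror the targets listed above.

```python
# Hypothetical sketch: checking pilot results against the example ticket's
# success metrics. Field names and thresholds are illustrative.

TARGETS = {
    "accuracy": 0.90,          # >= 90% correct answers on FAQ queries
    "satisfaction": 0.80,      # > 80% satisfied in optional exit surveys
    "ticket_reduction": 0.30,  # >= 30% fewer phone/email inquiries
}

def metrics_met(results: dict) -> dict:
    """Return, per metric, whether the pilot results satisfy the target."""
    return {
        "accuracy": results["accuracy"] >= TARGETS["accuracy"],
        "satisfaction": results["satisfaction"] > TARGETS["satisfaction"],
        "ticket_reduction": results["ticket_reduction"] >= TARGETS["ticket_reduction"],
    }

# Example pilot: accuracy and satisfaction pass, but inquiry volume has
# only dropped 22%, short of the 30% six-month target.
pilot = {"accuracy": 0.93, "satisfaction": 0.84, "ticket_reduction": 0.22}
print(metrics_met(pilot))
```

Tracking metrics this way keeps the success criteria from the intake ticket verifiable after deployment, which also supports the ongoing monitoring phase described earlier.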

​Risk Level: Limited Risk - The data involved is a combination of public information (Level 1) and internal knowledge (Level 2), as stated in the data classification for Limited Risk.

Appendix 2: Prompt template for Gemini and guidance document

"Hello, I'm [Your name] from [Your agency]. I need help preparing an AI use case ticket for my agency, following the guidance in the Maryland Responsible AI Implementation document. Here is the document I'm working with: [PASTE THE ENTIRE DOCUMENT CONTENT HERE].

Based on the information in this document, and specifically for my role at [Your agency], please:

  1. Provide me with a template for writing an AI use case ticket, including the key sections I need to cover (AI Use Case, Business Case, Success Metrics, etc.) and questions I should answer.
  2. Help me brainstorm potential AI use cases specific to my agency, [Your agency]. What are some areas of our agency operations where AI can improve our work and service delivery?
  3. Give me advice on how to think about these AI use cases within my agency, [Your agency], considering the State of Maryland's risk levels and data classifications. What are some examples of limited-risk versus high-risk AI applications that would be relevant for us?
  4. Offer tips to identify appropriate success metrics for an AI project, keeping in mind [Your agency]'s specific functions.
  5. How can I best leverage Gemini to help write my ticket and refine my thinking about potential AI use cases?​"

Software that my agency uses added new AI features. What should I do?

Most modern software has AI features and adds new ones often. If your agency uses DoIT-approved software that adds AI features, you are free to use them in accordance with Maryland’s responsible AI use policy. All state employees must use AI in accordance with this policy, regardless of the specific tool.

In cases where your agency handles personal information (PI), you can book office hours with Maryland’s AI Enablement team. They will work with you to use AI productively while keeping data safe.