This guide is intended for those who have already reviewed one of the following guides: Virtual Study Assistant – Faculty Guide or Getting Started with Departmental Assistant. Both guides introduced key features of the Admin Console as they relate to either the Virtual Study Assistant or the Departmental Assistant.
As a reminder, both the Virtual Study Assistant and Departmental Assistant are UMD Virtual Agents. They share the same Admin Console, with only slight variations between them.
This guide takes a deeper dive into topics that were only briefly mentioned in the introductory materials, as they were less relevant at that stage. It’s not intended to be a comprehensive overview, but rather a follow-up for those who have already read one of the earlier guides—either Virtual Study Assistant – Faculty Guide or Getting Started with Departmental Assistant.
To access your Admin Console, go to https://admin.chatbot.umd.edu. Be sure to log in with your UMD credentials—otherwise, your courses won’t appear.
Although you serve as the administrator of your UMD Virtual Agent, you can always reach out to the DIT AI Solutions Team for help. They’re available to answer questions about the Admin Console or to discuss its features. You can contact them at dit-ais@umd.edu during standard university business hours.
A connector lets UMD Virtual Agent retrieve, send, or process data from another system so it can use that information to answer questions or generate responses. UMD Virtual Agent currently supports four connectors. Some are enabled by default during setup, while others can be added as needed.
For instructors who teach their courses using Canvas, the Canvas connector is the primary source of content for the Virtual Study Assistant. Those who don’t use Canvas typically rely on the Google Drive connector instead.
The Web Scraper is the primary connector used with the Departmental Assistant, but the Google Drive and ServiceNow connectors are also available for additional content sources.
In the navigation menu of the Admin Console, locate Virtual Agent Config and select the link.
Let’s explore each of the four connectors in more detail. You’ll find the connector configuration under its own section header, just after the Virtual Agent Configuration section.
If the Canvas connector is enabled, we assume you're using the Virtual Study Assistant.
When you scroll to the ELMS-Canvas subsection, the interface will display the resources from the course that are used for the Virtual Study Assistant.
In the Actions column, you’ll see either an X or a checkmark. Selecting the X excludes that ELMS-Canvas resource from being scanned by the Virtual Study Assistant, and its background changes from white to gray to indicate it’s inactive. Conversely, selecting the checkmark re-enables scanning of that resource by the Virtual Study Assistant.
The Virtual Study Assistant scans ELMS-Canvas resources just after midnight, so any changes won’t be visible to students until the following day. If you don’t want to wait for the usual overnight update, you can force a scan to happen right away in the Virtual Agent Config section of the Admin Console. Learn more about this in: Virtual Study Assistant: Admin Console Guide.
Underneath the list of resources is the Course ID, which is filled in automatically if you created the Virtual Study Assistant from within ELMS-Canvas. It corresponds to the course’s ID in ELMS-Canvas.
Your UMD Virtual Agent comes with a connected Google Drive folder. Any documents placed in this folder—within certain limitations—are automatically processed and may be used in the agent’s responses.
Before the Canvas connector was available, you’d use Google Drive to share course content with the Virtual Study Assistant.
You can still use Google Drive to supplement your course, whether or not you use Canvas. You can add a variety of file types to the Drive, including Word documents, Google Docs, PDFs, Excel files, Google Sheets, PowerPoint presentations, Google Slides, and plain text files.
Locate the Google Drive connector.
Under Google Drive Bot Folder, locate the link to your Virtual Agent’s Google Drive, and select it to open the folder. There you will see up to three folders.
By default, you'll see two folders: Public and Private. The UMD Virtual Agent scans both, including the contents of the Private folder.
When a file from the Public folder is used to generate an answer, it will appear in the Virtual Agent's source list as a clickable link. Only users with access to the Google Drive will be able to open the linked file.
When a file from the Private folder is used, it will appear in the source list as "Private Source", without a direct link. In this case, private means the source is hidden.
You can also create a folder named Ignore. Any files placed in this folder will not be scanned by the UMD Virtual Agent. This can be useful if you want to delay making certain documents available—for example, until the material has been covered in class.
For those using the Virtual Study Assistant, there’s one additional folder: Video Transcripts. This subfolder, located within the Public folder, is where you can add caption files (as plain text) from your Panopto videos.
The Video Transcripts folder is specific to the Virtual Study Assistant and is not used with the Departmental Assistant.
What do you put in the Video Transcripts folder? This folder is for storing transcripts of your Panopto videos. Adding these files requires a bit of technical know-how—you’ll need to be comfortable using a plain text editor to create and save the transcripts. You’ll also need admin privileges in Panopto to access and extract the video transcripts.
In the navigation pane of your Panopto video’s settings, select the Captions link to view the available options.
Panopto refers to them as "Captions," while the UMD Virtual Agent calls them "Video Transcripts." Despite the different terminology, both refer to the same content.
If you haven’t extracted captions from your Panopto video before, you’ll see a message at the top of the screen indicating that no captions are available.
If this message appears, scroll down to the Request Captions section and select the Order button.
Selecting the Order button prompts Panopto to begin processing the video and extracting captions. Note that the Estimate Total Caption Time displayed on that page can be misleading: it does not indicate how long the captioning process will take, but simply reflects the duration of the video.
The process usually takes just a few minutes to complete. However, Panopto doesn’t provide a message or alert when it’s finished.
To refresh the Captions page, navigate away (e.g., to Overview) and then select Captions again. It’s not the most elegant solution, but it gets the job done. You should now see a new message under Available Captions:
The disclosure triangle next to the captions indicates they were automatically generated by Panopto. Click the triangle, then scroll down to see the available options.
Choose "Download file" to download the captions currently in use with the Panopto video. This might be Panopto's initial captions or a refined version you uploaded to enhance their quality.
This will download a text file to your browser.
Note that the downloaded file is a plain text file (.txt). Unlike a PDF or Word document, a text file contains only basic, unformatted text—it won’t include fonts, images, or layout styling.
To open and edit it, use a text editor, such as Notepad or Notepad++ (Windows), TextEdit (Mac), or any code editor like VS Code or Sublime Text. A text editor is a simple program designed specifically for working with plain text. Avoid using word processors like Microsoft Word, as they may add formatting or hidden characters that could cause issues.
The first few lines of the downloaded captions file look something like this:
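The exact contents will vary by video. Assuming Panopto’s standard SRT caption export (the timestamps and caption text here are placeholders, not real values), the downloaded file might begin like this:

```text
1
00:00:00,000 --> 00:00:04,500
Welcome to today's lecture on data structures.

2
00:00:04,500 --> 00:00:09,000
Let's begin with a quick review of arrays.
```

Each numbered cue pairs a time range with the caption text spoken during that interval.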
You’ll need to add one line at the very top of the text file—the URL of your Panopto video. You can copy this link directly from Panopto. We’ll show you how to find and copy the link in the next steps.
To begin, go to your video’s summary page. Select the gear icon to open the Settings screen, then choose Share from the navigation panel on the left.
Scroll to the bottom of the Share page and click the Copy Link button. This copies the Panopto video’s URL to your clipboard, so you can paste it into your captions file.
To complete your edits, paste the copied URL onto a new first line of the captions file, then save it.
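As an illustrative sketch, again assuming Panopto’s standard SRT export (the URL, timestamps, and caption text below are placeholders), the edited file might begin like this:

```text
https://umd.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=PLACEHOLDER
1
00:00:00,000 --> 00:00:04,500
Welcome to today's lecture on data structures.

2
00:00:04,500 --> 00:00:09,000
Let's begin with a quick review of arrays.
```

The only change from the downloaded file is the video URL on the first line.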
Let's examine the first few lines. The first line is the Panopto video URL you added, and the "1" below it refers to the first clip in the video.

Once your file is saved, go to your UMD Virtual Agent’s Google Drive. There's a link to it in the Virtual Agent Config section of the Admin Console.
Then open the Video Transcripts folder, which is located inside the Public folder.
In the top-left corner, look for the New button with a plus (+) sign. Select it, then select File Upload from the menu.
A web scraper automatically collects content from a list of web pages and their subpages, pulling in text that can be used by the UMD Virtual Agent, most often by the Departmental Assistant. In a Retrieval-Augmented Generation (RAG) system, the scraped content is indexed so the assistant can search it and use relevant information to generate accurate answers. This allows the chatbot to stay up to date with your website content without requiring manual input.
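To make the RAG idea concrete, here is a minimal sketch in Python of how scraped pages might be chunked, indexed, and searched. This is an illustration only, not the UMD Virtual Agent’s actual implementation; the sample pages, chunk size, and word-overlap scoring are simplified stand-ins for the semantic search a real system would use.

```python
# Minimal illustration of the chunk-and-retrieve step in a
# Retrieval-Augmented Generation (RAG) pipeline. NOT the UMD
# Virtual Agent's real implementation -- everything here is a
# simplified placeholder.

def chunk(text, size=80):
    """Split a page's text into fixed-size chunks."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def score(chunk_text, query):
    """Naive relevance score: number of words shared with the query."""
    return len(set(chunk_text.lower().split()) & set(query.lower().split()))

def retrieve(pages, query, top_k=2):
    """Index every page into chunks, then return the best-scoring ones."""
    index = [c for page in pages for c in chunk(page)]
    return sorted(index, key=lambda c: score(c, query), reverse=True)[:top_k]

# Two hypothetical pages from a departmental website.
pages = [
    "Advising appointments are available Monday through Friday.",
    "The department office is located in the Computer Science building.",
]

best = retrieve(pages, "when are advising appointments available")
print(best[0])  # the advising page scores highest for this query
```

The chatbot then passes the highest-scoring chunks to the language model as context, which is why keeping your website content current directly improves the Departmental Assistant’s answers.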
The section below provides key details and usage tips for the web scraper connector.
Underneath, you can add URLs for the web scraper to visit. The scraper automatically follows child pages, so you do not need to list every page of your departmental website individually. There is a limit of 200 web pages per scrape; to increase it, please reach out to the DIT AI Solutions Team. After scraping, the Sources column shows the total number of pages that were collected.
The ServiceNow connector can scan UMD’s knowledge bases, along with other parts of the university’s ServiceNow environment. This includes published articles, service catalog entries, and other relevant content, making it easier to surface accurate information in response to user questions.
It’s designed for departments that already have useful content stored in ServiceNow and want to make that information more accessible through UMD Virtual Agent.
Because configuring the ServiceNow connector involves complex settings that could lead to errors, it isn’t available through the standard user interface.
To ensure everything is set up correctly and securely, please contact the DIT AI Solutions Team at dit-ais@umd.edu, and we’ll be glad to assist.
If you're using the Web Scraper connector—the one most commonly used with Departmental Assistant—you can view the sources it has scraped and ingested. Look for Ingested Sources in the Admin Console, located under Virtual Agent Config in the navigation menu.
Make sure you have read the introductory material on Ingested Sources at the following: To Be Added Later.
There are three tabs in Ingested Sources. We covered the first one, Ingested Sources, in the introductory material.
The Broken Webpages tab shows any pages the web scraper couldn’t access. As the name suggests, it lists URLs that failed to load during the scraping process, making it easy to identify and troubleshoot missing content. If needed, you can choose to exclude those URLs from future scraping attempts. In many cases, this table will be empty, which means no broken pages were found.
The Ingest Details tab is for those of you who really want to dig into why certain user questions didn’t produce the chatbot response you expected.
This section involves technical terms like intent and chunk.
Intent refers to the user’s goal or purpose behind a query or prompt. It's a key concept in natural language understanding (NLU), particularly in chatbot design, virtual assistants, and search interfaces.
A chunk is a smaller, self-contained unit of text extracted from a larger source—like a document, web page, or transcript—to make it easier for the LLM to search, understand, and respond accurately.
Each row in this table represents a single chunk of content. You’ll see when it was ingested, where it came from (in this case, the web), its intent (which may not always be meaningful to you), and a snippet of the chunk’s text.
You may occasionally see an empty table. This can happen if there’s a very large number of chunks—when that occurs, the system may fail to retrieve them, resulting in nothing being displayed.
This table was originally created for members of the AI Solutions Team who developed the UMD Virtual Agent, so it’s understandable if some parts are hard to interpret. If you’d like help understanding it or want to learn more, feel free to contact the team.
Question Review can be found in the Admin Console navigation menu under Data Analysis.
The introductory articles covered the basics of the Question Review feature. Specifically, they showed how to view the questions asked by users of the UMD Virtual Agent chatbot, along with the responses it provided.
Ideally, you should review the Question Review section regularly to ensure the virtual agent is providing accurate and helpful responses.
Each row in the Question Review table includes a user question and the chatbot’s response. You’ll also see an Actions column, which contains four icons you can use to take further steps.
Let's go over each icon:
These actions allow you to review questions one at a time. However, certain actions can be done in bulk.
You’ll see checkboxes in the left column of the table. If you’ve reviewed all the questions on a page, you can check each one and then select Mark Selected As Reviewed to update their status. Mark Selected as Test is less commonly used, but it allows you to mark multiple responses as test entries in the same way.
Virtual Agent Analytics appears under Analytics in the navigation menu of the Admin Console.
Once users—whether students or website visitors—begin asking questions, you’ll be able to see how frequently the chatbot is being used. Select the link to see details.
Analytics are organized by month. At the top of the page, select the month and year for which you’d like to view analytics.
Below that, you’ll find the metrics for the selected month:
Below the metrics, you’ll see a graph for each of these categories, along with one additional metric: Intent Occurrence. As mentioned earlier, intent refers to the user's goal or purpose behind a query. The UMD Virtual Agent automatically generates intent categories and assigns each question to one of them. However, these categories may not always be intuitive, so you might find them less helpful for interpreting user behavior.
You can export these graphs in several formats, including image, PDF, or CSV. Additionally, you can print the page directly from your browser to save or share the chatbot data.
If you encounter issues that you cannot resolve, email the DIT AI Solutions Team at dit-ais@umd.edu.