The array of Generative AI (Gen AI) tools, as well as technologies that integrate Gen AI functionalities, continues to grow at a rapid pace. These tools appear not only in professional and educational settings, but also in the interactions and tasks of our daily personal lives.
With this growing access to Gen AI, new opportunities for innovation and transformative ideas that can enhance the learning experience at all levels are not only possible, but easier to achieve. At the same time, the profusion of available Gen AI technologies raises questions about increased barriers and risks, especially related to access, security, and privacy. Ohio State and the College of Education and Human Ecology have contracted with a range of learning technology providers to help navigate some of these complexities, ensuring that instructors and students alike can acquire the skills needed to meet the changing digital landscape in a safe and secure environment.
Each of the tools identified by function below has undergone thorough accessibility, privacy, and security reviews and has been approved for use by faculty, staff, and/or students by the University or College.
Summary
Adobe Express
Description: Generate visual graphics and templates, visual text effects, and animated characters
Potential Use Cases:
- Have students use Express to generate elements of a creative final project deliverable
- Generate elements to be used in interactive learning content (e.g. infographics)
Ohio State Access:
Mediasite
Description: Autogenerate video and audio captions and transcripts
Potential Use Cases:
- Utilize AI-generated summaries to create class notes or overviews, highlight key ideas, or create presentations
- Review the AI-generated coaching notes to improve future presentations by making them clearer and more engaging
Ohio State Access:
CarmenCanvas
Description: Create summaries from group discussions and search course content for specific results via AI-powered Smart Search
Potential Use Cases:
- Identify common concerns, questions, or gaps in understanding among students that can help guide instruction
- Identify common interests or shared values to help build community
Ohio State Access:
Microsoft Copilot
Description: An AI chatbot that can produce creative text and images and help increase efficiency and productivity
Potential Use Cases:
- Use it to outline a lesson plan or larger project
- Create clearer, more concise learning objectives
- Utilize Copilot as an accessibility coach to help generate Alt Text for images and descriptive text or names for URL links
- Ask students to use it as a peer reviewer to get feedback on their work
Ohio State Access:
CarmenZoom
Description: Generate meeting summaries, action items, metrics for improving presentations, and video chapter bookmarks and overviews
Potential Use Cases:
- Utilize AI-generated summaries to create class notes or overviews, highlight key ideas, or create presentations
- Review the AI-generated coaching notes to improve future presentations by making them clearer and more engaging
Ohio State Access:
ThingLink
Description: Contains a variety of AI functionalities, including 2D and 360º image generation, AI-populated branching scenarios, and AI-generated interactive scenes
Potential Use Cases:
- Generate background images that can be used as a canvas for interactive lessons or activities
- Outline elements of a branching scenario that can be quickly edited and customized
- Auto-populate individual content tags using content from a PDF document or existing notes
Ohio State Access:
AI Chatbots
AI chatbots utilize Machine Learning technology to simulate conversations, responding to questions and comments that are input by the user in a human-like way. The outputs produced by the chatbots not only include evaluative responses, but can also consist of entirely new content presented in the form of text, images, or even audio.
Microsoft Copilot with data protection is the University’s only approved Generative AI chatbot technology. Using Copilot with data protection, users can input text or audio prompts and engage in iterative conversations to ask questions, complete daily productivity tasks, and spark creativity without the risk of their input data being shared with external parties. Copilot can understand and communicate in several languages via written text or audio, can produce visual images, and offers an extensive prompt library to help users get started. In addition, users can save prompts that they have customized or created to their personal library for future use or to be shared with others.
Productivity Assistants
This section provides a list of all available Generative AI tools or tools that incorporate Generative AI functionalities that can be used to increase efficiency and improve daily professional and personal productivity. These productivity tasks include, but are not limited to, summarizing meetings or documents, analyzing reporting data, identifying next steps for a project, organizing tasks and priorities, and managing emails.
As of January 2025, CarmenCanvas includes two AI features that instructors can enable within a course’s settings:
Smart Search – this feature allows users to search for keywords, topics, and phrases contained within an individual course’s content (Pages, Discussions, Announcements, etc.). If enabled, this search function can be used by both instructors and students, but it can be disabled for students by removing the Smart Search link from the list of course navigation links.
Discussion Summaries – with this feature, instructors can utilize Generative AI to quickly summarize student replies and identify common questions, areas of confusion, points of interest, shared ideas, etc. This feature is only available to instructors, TAs, and course graders. Students do not have access to the discussion summaries functionality.
CarmenZoom now incorporates two primary AI features that can provide helpful meeting summaries and notes. These features can be turned on for hosted Zoom meetings in a user’s personal settings. These include:
Meeting summary with AI Companion – this feature provides a brief overview of the recorded or non-recorded meeting discussion. The meeting host has several setting options related to how and with whom meeting summaries are automatically shared.
Smart recording with AI Companion – this AI functionality creates a few additional summary items for the recorded and/or non-recorded meeting, including key highlights from the meeting and next steps that were identified. In addition, this feature offers the ability to turn on a Meeting coach that provides helpful metrics and insights on how to improve future presentations. For meetings that have been recorded, the feature can also auto-populate chapters and chapter overviews within the recording, which the host can then review and adjust.
Note: These features can only be turned on by the meeting host and are not available to participants unless the host has allowed this via their meeting settings.
As referenced in the section on AI Chatbot Tools, Microsoft Copilot is an AI chatbot that simulates human-like conversations. As it can evaluate and respond to questions and prompts created by the user, it has a wide range of productivity-related capabilities. For example, using Copilot, a user can ask the tool to summarize textual documents such as articles or meeting notes. It can also be used as a project management assistant to help assess all required tasks, top priorities, and due dates to create a timeline of work. It could even be utilized as an initial peer reviewer, assessing work against a clear rubric that has been input by the user.
Image Generators
With AI Image Generators users can input textual prompts to create and edit original images, often in a variety of styles. Below is a list of all available supported tools that incorporate image generation capabilities.
Adobe Express is a web-based platform used for creative tasks, such as designing visual graphics, creating and editing videos, and developing simple webpages. The platform incorporates Adobe Firefly, Adobe’s own Generative AI model and suite of services that allow users to create and edit images and text effects. Within Adobe Express, creators can generate artistic visuals of people, places, and objects, create unique text styles to correspond to different themes, and produce visual templates that can be used for infographics and other visual learning elements. In addition, users can utilize the generative fill and object removal features to edit and refine specific elements of more complex visuals.
The Microsoft Copilot chatbot (described in more detail in the sections above) utilizes the DALL-E 3 model to generate detailed images from text descriptions. A series of four possible images is produced in just a few minutes after the text prompt is submitted. In addition to the initial series of images, users can continue to iterate on and adjust the original prompt, or even utilize follow-up suggestions offered by Copilot alongside the images it produces.
ThingLink is a dynamic educational technology supported by the College of Education and Human Ecology that allows content creators to turn a variety of multimedia assets – including standard images and videos, 360º media files, and 3D models – into interactive presentations, active learning exercises, and innovative learning experiences. The ThingLink platform incorporates several Gen AI features and integrations, including Skybox AI. This integration allows content creators to generate 2D and 360º real or imagined background settings directly within ThingLink that can then be utilized for a variety of presentations, virtual tours, and activities. Each instructor has access to 3,000 total image generation prompts at no additional cost. Within ThingLink, the Skybox integration can be accessed by going to Media > Create > 360º Image > Generate with AI.
Note: While faculty, staff, and students in EHE all have access to create content with ThingLink, only faculty and staff have access to ThingLink’s Gen AI features and integrations. To access these features, faculty and staff must be assigned a Teacher role in ThingLink.
Interactive Content Creators
In addition to the Generative AI image functionalities described above, ThingLink also incorporates a few built-in Gen AI functionalities that can make creating interactive presentations and Scenario Based Learning activities much easier and more efficient. These include ThingLink’s AI Tag Generator and Scenario Builder.
Within any Media scene (image or video) created in ThingLink, users can choose to utilize ThingLink AI to auto-populate interactive tags. Tags can be generated either by providing a written description or by uploading a PDF document containing details about the content to be converted into tags.
Within ThingLink’s Scenario Builder, content creators have the option to utilize AI to create individual elements of Scenario Based Learning activities or even entire scenarios that can then be edited and customized as needed. Utilizing ThingLink’s built-in AI functionality, users can input a written description of the scenario or upload a PDF document with the details, choose the target language for the activity, and then determine which format the scenario should take: linear or branched. From there, ThingLink will populate suggested question blocks, pathways, and content. Once the layout and initial content have been generated, users have a few additional Gen AI capabilities, such as automatically converting suggested blocks into interactive visual scenes or auto-populating additional questions, suggested branches, and written recaps.
Note: While faculty, staff, and students in EHE all have access to create content with ThingLink, only faculty and staff have access to ThingLink’s Gen AI features and integrations. To access these features, faculty and staff must be assigned a Teacher role in ThingLink.
Transcription and Translation Services
This section provides a list of approved educational technologies that contain specific functionalities that can quickly convert information from one format into another (e.g. text-to-speech or speech-to-text) or from one language into another using machine learning and natural language processing.
CarmenCanvas now integrates Microsoft’s Immersive Reader, which can increase access to information and the readability of course content for a variety of users. Immersive Reader functions within any published course page, such as a course homepage or syllabus, as well as within Assignments. Using Immersive Reader, users can:
access an audio, or Read Aloud, version of any text that exists on the Page or Assignment
translate part or all of the text into one of several languages that can then be read independently or aloud
access a variety of text formatting options or line focus capabilities to help improve readability
reference a picture dictionary to help clarify and define unfamiliar terms
Mediasite is Ohio State’s media storage solution. Self-recordings and screencasts can be recorded directly in Mediasite or uploaded to a user’s Mediasite account. Whenever a user adds a media file to their Mediasite account, it will be automatically captioned using the integrated Whisper captioning tool, which can process the media file and quickly generate text captions and transcripts in a number of languages.
Note: Though media files are automatically transcribed, users are still required to review and edit these transcriptions for accuracy.
Immersive Reader: Microsoft 365 contains several AI transcription and translation functionalities within its products. Most, if not all, Microsoft 365 apps (e.g. Word, PowerPoint, Excel) include Microsoft’s Immersive Reader that allows users to access additional text formats and increased readability options, full text translations, read aloud options, and a picture dictionary.
Dictate and Transcribe: Within the Microsoft Word web browser application (not the desktop app), users have access to both the Dictate and Transcribe functionalities. With Dictate, a user can record themselves speaking, and Word will automatically transcribe the audio into text directly within the Word document. With the Transcribe tool, users can upload an existing audio or video file, and Word will auto-generate a text transcript that can be edited for accuracy, labeled with individual speakers’ names, and/or timestamped. Once edited, the entire transcription can then be directly added to and saved as a Word document that can be shared with others. (Note: Microsoft Stream contains a similar transcription tool in the Video settings of each recording that can auto-generate an editable caption and transcript file for all video recordings.)
Text-to-speech: In Microsoft’s newest video creation/editing application, Clipchamp, users can create Video Projects that contain a text-to-speech recording feature. Under the Record & create option, users can insert a text script and choose among several voice options and settings (including language spoken, voice pitch, and pace). This will generate a human-like audio recording of the inserted text that is then added to the video timeline where it can be connected with visuals, graphics, and background music.
As referenced in the sections above, ThingLink is an educational technology platform used to create interactive presentations, branching scenario activities, virtual tours, and more. ThingLink integrates Microsoft’s Immersive Reader throughout its platform so that users can choose to have any text content read aloud, translated into another language, or formatted for greater readability.