Digital Learning Program Development

Evaluation of Digital Resources


Every learning object and software package you use will impact your students. Sometimes this impact will be positive and will provide value in supporting your learning goals. Sometimes the impact will be negative, and students will be worse off for having used the product or tool. Sometimes the impact will be neutral. Note also that the best tool may not have the greatest impact: if a tool is pedagogically sound but everyone hates using it, for example, its impact will be dulled.

Because of the sheer number of software packages and learning objects in use (a recent survey of North Carolina districts for the Digital Learning Initiative indicated that some districts were paying for 50-100 software packages, not counting what teachers were finding on their own), evaluation of digital resources is everyone’s job, supported by the ITF and the CTO, and it is an ongoing task. It’s safe to assume that most districts are not training teachers to properly vet content and do not have structures in place to do so. The EdTech Genome Project is attempting to provide some standard guidance: its final report, the result of input from hundreds of stakeholders, includes a rubric to help schools identify enabling conditions for the adoption of new content.

As a part of the Digital Learning Initiative, the Friday Institute released the North Carolina Quality Review Tools for Digital Learning Resources checklist and rubrics to assist schools in identifying high-quality digital learning resources, along with a selection guide for instructional materials. Achieve.org has also created the EQuIP Rubrics (Educators Evaluating the Quality of Instructional Products), which are designed to support teachers in evaluating learning objects, lesson plans, and student work. Achieve also has an OER Rubric, on which the NC rubric is based, along with the Temoa OER Rubric from the Virtual University of the Tecnológico de Monterrey.

Read the Fine Print

One of the critical steps when evaluating content for inclusion in a classroom is to read all of the software’s Terms and Conditions. These license agreements, policies, and similar documents on a vendor’s website are legally binding contracts between a person (or their organization) and the vendor. A teacher who accepts terms and conditions on behalf of their district or in their professional capacity can be held personally liable under North Carolina law. There is a lot hidden in these agreements, including what the vendor can do with your data, whether data is stored in hostile countries, whether the vendor complies with FERPA and federal laws, and so on. There are also arbitration clauses. Recently, Disney was sued by a widower whose wife died after an allergic reaction at a Disney Springs restaurant; Disney argued that he could not sue because he was a Disney+ subscriber and the Disney+ terms of service required arbitration, waiving the subscriber’s right to sue.

AI In Classroom Tools

Increasingly, paid tools are incorporating Artificial Intelligence as the hype around AI builds. This AI may take many forms: students directly using AI tools like ChatGPT; intelligent tutors dynamically generating learning materials for students; teachers using AI to adjust the reading level of a text or translate a text into a student’s native language; and teachers using AI to generate content. We’ve discussed some of the advantages and drawbacks of this in previous units. With AI-generated instructional materials and student work, hallucinations may produce fabricated content that is unreliable for instructional use. There may also be FERPA implications for loading student work into an AI tool. And, as we discussed in the last unit, AI selection should follow school policy.

Evaluation Criteria

Purpose

The very first question to ask when evaluating new content is “What will I be using this for?” In a report for the NC Digital Learning Initiative, we found that most schools are purchasing content for supplemental purposes and are struggling to provide core content. In addition, teachers enjoy the flexibility of being able to select the content for their own classrooms but also struggle to find high-quality materials. Finally, ask whether the content is intended for a specific purpose or for wide use across grade levels.

Technical Compliance

Even the perfect ed-tech tool (if such a thing exists) will do you no good if it isn’t compatible with your school networks and devices. Additionally, a great tool can be a disaster for a CTO if the data aren’t kept secure, or if uptime is an issue. For software and tools that live on a server, consider the following items for evaluation:

  • Where is the server hosted? Does the district have to provide and maintain a server infrastructure for the tool? Vendor-hosted software tends to be more expensive and involves yearly fees, but it removes the hosting and security burden from the district and makes the vendor responsible for the infrastructure. If the software is vendor-hosted, North Carolina policy requires data to be maintained within the United States. NC G.S. 115C, Article 29 also lists student data elements that must be secured beyond the requirements of FERPA. Many districts (and the state) now also require a third-party audit of server infrastructure, such as a SOC 2 Type 2 or FedRAMP. You should also ask the vendor who has access to student data, how it is secured, and how it is used in the system. As discussed in Unit 2, tools may access FERPA-protected data only for the purposes of delivering instructional content. Vendors should also be required to provide an uptime guarantee and credit the district for any downtime.
  • Cybersecurity and authentication. How will users log in and get access to the tool, and how is access provisioned and restricted? Can the tool integrate with a service like NCEdCloud, Clever, or ClassLink to provision and authenticate users, or does provisioning need to be done manually? Does the tool support single sign-on? What measures are taken to prevent unauthorized users and hackers from gaining access to the system?
  • Data Ownership. How does a user get access to their data? Can a district download all of the data at once or access raw data? How accessible are the data in general? (A hedged sketch of checking a vendor’s raw-data export appears at the end of this section.)
  • Integrations. Aside from the standard integrations discussed in the interoperability section, does the tool offer an API or plugin architecture so that other tools can integrate with it, or so that the tool can be extended as needed?

North Carolina has taken recent steps to address these issues with a new framework for third-party data sharing agreements.
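As one way to operationalize the Data Ownership question above, the sketch below checks whether a vendor’s advertised raw-data export actually includes the fields a district needs. The endpoint, token, and field names are hypothetical placeholders, not any real vendor’s API; in practice you would follow the vendor’s own export documentation, which may be a simple CSV download rather than an API.

```python
# Hypothetical sketch: spot-checking a vendor's "raw data export" claim during a pilot.
# The endpoint, token, and field names below are placeholders for illustration only;
# a real vendor will document its own export mechanism (often a CSV download).
import csv
import io

import requests

EXPORT_URL = "https://vendor.example.com/api/v1/districts/1234/usage-export"  # placeholder
API_TOKEN = "replace-with-district-api-token"                                 # placeholder


def fetch_usage_export() -> list[dict]:
    """Download the vendor's raw usage export and parse it into rows."""
    response = requests.get(
        EXPORT_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return list(csv.DictReader(io.StringIO(response.text)))


if __name__ == "__main__":
    rows = fetch_usage_export()
    print(f"Export contains {len(rows)} rows")
    # Spot-check that the export includes the fields the district actually needs.
    expected = {"student_id", "activity_id", "score", "completed_at"}
    found = set(rows[0].keys()) if rows else set()
    print("Missing fields:", expected - found or "none")
```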

Alignment to Standards

While this seems like the easiest and most obvious thing to check when vetting digital learning content, it is surprisingly difficult. A recent study conducted by a large school district across 200 of its classrooms indicated that over half of the content being taught was below grade level. In many districts, if you assembled all of the math teachers (for example) and asked them what content is taught under a specific standard and at what grade level, you would get a wide variety of answers. This problem compounds across districts and when teachers move from district to district. As a result, the first step in determining the quality of alignment to standards is to agree on what the standards require. While the unpacking documents help some, there is still consensus that needs to be built within a school and within a district. EdReports, a non-profit organization based in Durham, evaluates alignment to the Common Core and Next Generation Science Standards against a rubric that can be adapted to NC standards. Its library currently includes only procured products and only resources that cover an entire curriculum. In short, all learning objects should be reviewed for the grade-level appropriateness of the content and for its reading level and vocabulary.

Coherence

Especially when curricula are created from a collection of learning objects, and even if each individual learning object is tightly aligned to standards, these objects need to be reviewed holistically to ensure that vocabulary is used and defined consistently, that concepts are introduced and followed up coherently between lessons, that algorithms and methodology don’t change between activities, and that a student can move through the content linearly from beginning to end.

Cultural Responsiveness

Cultural responsiveness goes beyond simply using diverse sets of names in word problems. Evaluating for cultural responsiveness means determining whether the work is free from cultural bias and whether the examples used are locally relevant and can be understood by the students (for example, as a new teacher, I created a math problem involving an escalator, something many of my students had never seen before). In addition, it’s important for all students, including students of color, LGBTQ+ students, students with disabilities, and students of various nationalities and religious backgrounds, to be able to see themselves in the content and in the examples. The Metropolitan Center for Research on Equity and the Transformation of Schools at New York University has created a Culturally Responsive Curriculum Scorecard for teachers to use to evaluate instructional materials.

In addition to cultural representation, consider language: digital tools are much easier to localize, or translate into different languages, than print materials. If your school has a large population of students speaking a language other than English, native support for that language promotes UDL principles and accessibility by allowing students to learn in their native language. We’ve also discussed the biases that exist in AI-generated materials; these are especially important to watch for when AI is embedded in digital content.

Research Base and Instructional Design

Procured content packages, especially ILS systems and intelligent tutors (and even most textbooks), should have a research base behind them that delineates the pedagogical models the product employs, what implementation with fidelity looks like, how success is measured, and how successful the product is. The What Works Clearinghouse applies very rigorous research standards and is a good place to look for this research. Vendors should be able to share this information with you as well. The presence of rigorous research and evaluation is probably the greatest advantage that procured content has over OER and teacher-created materials.

With OER and teacher-created materials, you should always check for solid instructional design principles.

Peer Feedback

With procured products and large OER collections, there are always other schools and districts that are using the products (if not, that should be a warning sign…). A phone call or email to a peer CTO or ITF can provide feedback about a product that you might not be able to get otherwise. Vendors typically will provide the name of a district that you can call. They should also be able to provide the name of a district with similar demographics to yours.

Many OER platforms also include rating and comment systems to allow teachers to share feedback with other teachers. For teachers, a Critical Friends protocol can be employed to help teachers review and improve their lessons.

Actionable Data

Many instructional software packages provide a plethora of data for analysis. Some of these data are useless (for example, time on page is generally considered an invalid metric in unsupervised learning environments: you don’t know whether a student spent that time reading the content or going to get coffee). However, other packages can zero in on student knowledge and knowledge gaps and provide teachers with actionable data they can use to make decisions. The quality and usefulness of these data should be evaluated.
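To make “actionable” concrete, here is a minimal sketch that rolls item-level results up into per-standard mastery rates, the kind of view a teacher can plan reteaching from. The standard codes, students, and outcomes are invented purely for illustration.

```python
# Hedged sketch: rolling item-level results up into per-standard mastery rates.
# The standard codes, students, and outcomes are invented purely for illustration.
from collections import defaultdict

responses = [
    {"student": "A", "standard": "NC.6.NS.1", "correct": True},
    {"student": "A", "standard": "NC.6.NS.2", "correct": False},
    {"student": "B", "standard": "NC.6.NS.1", "correct": False},
    {"student": "B", "standard": "NC.6.NS.2", "correct": False},
]

# standard -> [number correct, number attempted]
totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])
for r in responses:
    totals[r["standard"]][0] += int(r["correct"])
    totals[r["standard"]][1] += 1

# A teacher can act on this view: in this fake data, NC.6.NS.2 clearly needs reteaching.
for standard, (correct, attempts) in sorted(totals.items()):
    print(f"{standard}: {correct}/{attempts} correct ({correct / attempts:.0%})")
```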

Qualitative Feedback

Before purchasing a product that will be used on an ongoing basis, you should request a demo or trial from the provider for a month or so to allow teachers to begin to learn about it. Collecting qualitative feedback from students and teachers about the product is a good source of information to help determine whether it can be implemented with fidelity in the school and what the user experience may be like for teachers and students.

Total Cost of Ownership (TCO)

Every product, even a free product, has a total cost of ownership. Beyond the cost of the product itself, the cost of the computers and servers to run it, training, supplemental materials, support, and staffing should be factored in. For example, if an ILS only shows results when administered in a computer lab with a facilitator, the cost of the space, computers, and facilitator should be added on to generate an “all-in” cost. Even free resources and OER are “free like a puppy”: the time to find and curate these resources, combined with the cost of supporting materials, creates a total cost of ownership that is much higher than you would expect.
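As a worked illustration of an “all-in” cost, the sketch below adds facilitator time, lab hardware, and training to the license fee. Every figure is a made-up placeholder, not a benchmark; substitute your district’s actual numbers.

```python
# Illustrative "all-in" cost calculation for an ILS delivered in a staffed computer lab.
# Every figure below is a made-up placeholder; substitute your district's real numbers.
license_per_student = 12.00      # annual per-student license fee (sticker price)
students = 500
facilitator_hours = 180          # lab facilitator time over the year
facilitator_rate = 22.00         # hourly rate for the facilitator
lab_hardware_annualized = 4000   # lab computers amortized over their useful life
training_cost = 1500             # initial teacher training

total_cost = (
    license_per_student * students
    + facilitator_hours * facilitator_rate
    + lab_hardware_annualized
    + training_cost
)

print(f"All-in annual cost: ${total_cost:,.2f}")
print(f"All-in per-student cost: ${total_cost / students:,.2f} (vs. ${license_per_student:.2f} sticker price)")
```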

Technical and Training Burden

As a part of determining TCO, the technical burden of implementing a product should be analyzed. This includes ensuring the school can meet the technical requirements to deliver the product, can provide the source data to authenticate to the product, and has the technical training to ensure implementation with fidelity. Teacher training is also often overlooked (or removed from a product purchase as a cost savings, an act which dooms many deployments). Teacher training needs should therefore be assessed and factored in as part of the evaluation.

Compliance with UDL and Federal Laws

If your district chooses to adopt the UDL framework for content, all instructional resources should be validated against the framework. Additionally, all instructional materials should be required to comply with accessibility guidelines of at least WCAG 2.1 AA in order to reduce the district’s legal liability and to provide a high-quality experience to all students. Apps should also be compatible with screen reader technologies such as macOS’s VoiceOver and the other assistive and adaptive tools your district may use. Services that store student data should indicate how they secure data and maintain FERPA compliance, and that they are able to comply with COPPA and CIPA. Google, for example, includes FERPA statements in its GSuite for Education license. These statements are not included in the terms for personal Gmail accounts, which are therefore not FERPA compliant for teacher or student use. Amazon has said that there are no FERPA protections for Alexa, and therefore Alexa’s use in the classroom is not recommended.
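Automated checks catch only a narrow slice of WCAG 2.1 AA, but they make a useful first pass when screening digital materials. The toy sketch below flags images without alt text in a sample HTML snippet; a real review still requires a full audit and hands-on testing with the assistive technologies your district uses.

```python
# Toy sketch of one automated accessibility spot-check: flagging <img> tags with no
# alt attribute (related to WCAG 2.1 success criterion 1.1.1). Decorative images may
# legitimately use alt="", so only a missing attribute is flagged. This is a first-pass
# heuristic only, not a substitute for a full WCAG 2.1 AA audit or screen reader testing.
from html.parser import HTMLParser


class MissingAltChecker(HTMLParser):
    def __init__(self) -> None:
        super().__init__()
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag being opened.
        if tag == "img" and "alt" not in dict(attrs):
            self.missing_alt += 1


sample_page = (
    "<p>Lesson 3: Erosion</p>"
    '<img src="diagram.png">'                         # missing alt text: flagged
    '<img src="creek.jpg" alt="Creek bank erosion">'  # has alt text: not flagged
)

checker = MissingAltChecker()
checker.feed(sample_page)
print(f"Images missing alt text: {checker.missing_alt}")
```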

North Carolina law also requires that student data may not be mined for advertising. As a rule, contracting language should say that the vendor will comply “with all Federal, State, and Local laws” so that contracts don’t need to be renegotiated as laws change.

Ongoing Evaluation

Most schools purchase instructional resources once and simply pay the bill each year, never evaluating what they have. The evaluation criteria should be applied on an ongoing basis, as digital resources change frequently. Tools that are no longer high-quality should be removed from circulation, and ongoing and detailed data collection is critical to ensuring that this can happen.