
Wednesday, July 23, 2025

Artificial Intelligence and Academic Professions: Confronting the Dangers of Unchecked AI in Higher Ed

Friends:

The AAUP’s 2025 report is a clarion call for faculty, staff, students, and broader academic communities to resist the uncritical rollout of AI and educational technologies in higher education. Based on input from hundreds of respondents across the country, the report surfaces urgent concerns, including 1) a lack of meaningful professional development around AI, 2) the erosion of shared governance, 3) worsening working and learning conditions, 4) surveillance issues, and 5) the near-total absence of transparency, accountability, or opt-out policies.

Most troubling is the normalization of technologies that deepen surveillance, intensify labor, and strip faculty of intellectual and pedagogical autonomy. These tools—often embedded in learning management systems and deployed without consent—are not neutral. They reflect and reproduce structural inequities, exacerbating harms for contingent faculty and marginalized students alike.

I've excerpted a couple of chilling comments that should set off alarm bells for us all about how AI may produce the unintended consequence of students failing to learn.

Respondents were overwhelmingly concerned with student plagiarism made possible by generative AI. Ninety-one percent noted that they were at least somewhat concerned about preventing academic dishonesty. However, one respondent wrote, “I am less concerned about the ‘honesty’ part than the ‘failure to learn’ part.”

Another respondent noted, “It is now more difficult for [students] to develop their thoughts on a topic because they don’t have to spend time with it while they work through writing about it. . . . I am worried that they will never again get the chance to change their opinion as they expose themselves to ideas over the long term.” This distinction between honesty and failure to learn is critical because it highlights one of the core goals of higher education: to develop a well-informed and thoughtful citizenry.

The report insists that we must not accept a future where automation and administrative fiat displace human relationships and democratic decision-making in our institutions. Instead, it calls on us to organize—across departments, job categories, and sectors—for just policies that center faculty governance, student learning, and the public mission of higher education. This includes establishing oversight committees with real power, protecting intellectual property, ensuring data justice, and defending academic freedom at every turn. 

AI is not destiny. It is a site of struggle. And now is a watershed moment. What we choose to do—or fail to do—will reverberate for decades. Let us meet this moment with clarity, courage, and solidarity.

Deep thanks to AAUP for issuing this essential report and helping guide our path forward in this brave, contested new world.

-Angela Valenzuela

Artificial Intelligence and Academic Professions


Executive Summary

Educational technology, or ed-tech, including artificial intelligence (AI), continues to become more integrated into teaching and research in higher education, with minimal oversight. The AAUP’s ad hoc Committee on Artificial Intelligence and Academic Professions—composed of higher education faculty members, staff, and scholars interested in technology and its impact on academic labor—was formed under the assumption that faculty members are best positioned to understand and improve teaching and learning conditions, including the development and implementation of institutional policies around educational technology.

To learn more about the experiences and priorities of AAUP members, the committee conducted a survey with a sample of five hundred members from nearly two hundred campuses across the country, collected during a two-week time period. Respondents emphasized the importance of improving education on AI, promoting shared governance through policies and oversight, and focusing on equity, transparency, and worker protections. Based on those responses, the committee identified the five key concerns listed below and described more fully in the findings section of this report.

1. Improving Professional Development Regarding AI and Technology Harms
- Despite the widespread use of ed-tech, there is an overall lack of understanding about the relationship between AI and commonly used data-intensive educational technologies.
- Untested and unproven technologies are adopted uncritically.

2. Implementing Shared Governance Policies to Promote Oversight
- AI integration initiatives are spearheaded by administrations with little input from faculty members and other campus community members, including staff and students.
- High levels of concern arose around AI and technology procurement, deployment, and use; dehumanized relations; and poor working and learning conditions.

3. Improving Working and Learning Conditions
- Preexisting work intensification and devaluation are the main reasons respondents give for using AI to assist with academic tasks.
- Implementing AI in higher education adds to faculty and staff workloads and exacerbates long-standing inequities.
- AI raises concerns around bias, discrimination, and accessibility because of the untested and uneven impacts on students and student learning.

4. Demanding Transparency and the Ability to Opt Out
- Faculty members and staff lack choice and meaningful avenues to opt out of both AI-based tools and other ed-tech.
- Few institutions have created transparent, equitable policies or provided effective professional development opportunities on AI use.

5. Protecting Faculty Members and Other Academic Workers
- Academic workers across job categories are worried about increased reliance on contingent appointments and declining wages.
- Respondents expressed concern about academic freedom and intellectual property rights.

The report provides details on the survey’s findings about these concerns and addresses them with recommendations to improve higher education—both broadly and narrowly as it relates to emerging technologies. Faculty members can work to implement these recommendations on their campuses by incorporating guidelines in faculty handbooks and collective bargaining agreements. The recommendations can inform strategy for organizing and policymaking related to AI in higher education institutions and organized labor more generally.

The ad hoc Committee on Artificial Intelligence and Academic Professions has provided a resource guide to help members implement the recommendations of this report.

The report that follows was prepared by the AAUP’s ad hoc Committee on Artificial Intelligence and Academic Professions in May 2025.
Introduction

For decades, there have been significant labor issues around the use of technology in higher education.1 Now, however, the uncritical adoption of artificial intelligence (AI) poses a threat to academic professions through potential work intensification and job losses and through its implications for intellectual property, economic security, and the faculty working conditions that affect student learning conditions. In its 2023 Statement on Online Education, the AAUP reaffirmed its principles with regard to the use of technology in higher education, stating that “(1) the use of new technologies in teaching should be for the purpose of advancing the basic functions of colleges and universities to preserve, augment, and transmit knowledge and to foster the abilities of students to learn and (2) as with all other curricular matters, the faculty should have primary responsibility for determining the policies and practices of the institution with regard to online education.”2 The findings of our survey of AAUP members, discussed in this report, show that many institutions diverge from these principles and that most faculty members have little input into how their colleges and universities procure and deploy AI and other educational technology (ed-tech). In their survey responses, AAUP members pleaded for guidance on how to deal with the onslaught of AI in their professional lives. Addressing their concerns, we articulate how academic communities can intervene meaningfully in response to issues related to AI and ed-tech in general, because they both promise to become far more entrenched in higher education in the coming years.3

Over the past two decades, colleges and universities have increasingly used ed-tech to implement learning management systems, offer online courses, and store and analyze large and small research datasets.4 At present, legacy ed-tech platforms for course management and videoconferencing often incorporate massive data collection and analyses with predictive analytics that are similar to AI. New and legacy platforms alike use a number of techniques, including AI and related statistical methods applied to large language models, to analyze data, make predictions and recommendations, and, in the case of generative AI, even generate image, text, and video content.

AI is both a marketing term and a usable product. Management in higher education and other sectors, the press, and technology companies often frame AI as something new, opaque, and exceedingly powerful that will replace many activities based on human intelligence, including labor. At the same time, they encourage public buy-in and network effects—that is, gains in the value of the technology as more people use it. Such framing serves to increase the power of technology firms and employers, thereby shutting down already meager avenues for critique, dissent, negotiation, and refusal.

After decades of funding cuts, many colleges and universities rely on data-intensive technologies for the triage of limited resources. These technologies increasingly use AI to guide decision-making on everything from fundraising to pedagogy.5 At many institutions, faculty members are expected to take on more advising, teach more students, and conduct more research—and to manage all these responsibilities with fewer resources. But rather than addressing inequity among faculty members or improving their working conditions, which are student learning conditions, administrations often choose to invest in technological interventions that they perceive as cheaper.

Technological interventions, especially those offered as one-size-fits-all solutions for educational problems, do not improve student, faculty, institutional, or research outcomes.6 In many instances, their use harms students as well as faculty members and staff.7 Adding to these harms, faculty members, graduate students (including graduate student employees with teaching or research duties), and undergraduate students—who experience directly the impacts of technological triage—are largely excluded from decisions about which platforms and products to develop or use.

According to the principles set forth in the AAUP’s 1966 Statement on Government of Colleges and Universities, it is “the responsibility primarily of the faculty to determine the appropriate curriculum and procedures of student instruction.”8 This responsibility includes AI and other ed-tech infrastructure. However, many colleges and universities currently have no meaningful shared governance mechanisms around technology, as the findings of this survey suggest, and the explosion of AI has highlighted the need for such mechanisms among faculty members at individual institutions and across the higher education workforce.
Methodology

To gain a better understanding of how AAUP members are experiencing AI and other ed-tech and what types of concerns they might have, the committee administered the national AAUP Survey on AI and the Profession in December 2024. The survey included Likert-scale items, which used ordered response categories to measure respondents’ attitudes, such as agreement or importance, about the role of technology in higher education and at their institutions; yes-or-no items measuring whether particular tools, initiatives, or policies were in place at their institutions; and open-ended items addressing those tools, initiatives, and policies as well as general concerns regarding the use of technology in higher education.

Participants were AAUP members. Five thousand members were selected from the Association’s active membership list using a random number generator and invited to participate in the online survey through a series of three email messages that provided a survey link. Approximately five hundred responses were received in two weeks and are reflected in the analysis below. Follow-up interviews were conducted in spring 2025 with thirteen respondents; however, findings from these interviews are excluded from this report.
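For readers who want to see the sampling step concretely, below is a minimal Python sketch of a simple random draw without replacement. It is an illustration only: the report says just that a random number generator was used, so the function name, fixed seed, and placeholder membership list are assumptions, not the committee's actual procedure.

```python
import random

def draw_invitation_sample(membership_list, sample_size=5000, seed=2024):
    """Hypothetical illustration of the invitation draw: select a simple
    random sample, without replacement, from the active membership list."""
    if sample_size > len(membership_list):
        raise ValueError("sample size exceeds membership list")
    rng = random.Random(seed)  # fixed seed so the draw can be documented
    return rng.sample(membership_list, sample_size)

# Placeholder member IDs; a real draw would use actual membership records.
members = [f"member-{i:05d}" for i in range(44000)]
invitees = draw_invitation_sample(members)
print(len(invitees))  # 5000
```

A fixed seed is not required for a valid random sample; it simply makes the draw reproducible for documentation.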

Responses collected from the Likert-scale items were analyzed and are reported at the descriptive level only (including frequencies and percentages). The open-ended items were analyzed using an open-coding process identifying generalized thematic trends. The categorical results reported in this document mainly reflect the trends emerging from the preconceptualized quantitative survey items. In some cases, the report intersperses anonymous quotes with the relevant descriptive frequencies and percentages to give voice to participant perspectives. Overall, the results reflect the views of the faculty members and other academic workers who took the time to respond to the online survey, but they do not necessarily represent the views of the entire AAUP membership or the overall population of academic workers in US higher education.
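To make the descriptive-level reporting concrete, here is a short Python sketch tabulating frequencies and percentages for one Likert item. The scale labels, toy responses, and the "at least somewhat" cutoff are assumptions for demonstration; the survey's actual items and data are not reproduced in this report.

```python
import pandas as pd

# Hypothetical ordered Likert scale and toy responses for one item.
levels = ["Not at all concerned", "Slightly concerned",
          "Somewhat concerned", "Very concerned", "Extremely concerned"]
responses = pd.Series(
    ["Somewhat concerned", "Very concerned", "Extremely concerned",
     "Somewhat concerned", "Slightly concerned", "Very concerned"],
    dtype=pd.CategoricalDtype(categories=levels, ordered=True),
)

counts = responses.value_counts(sort=False)           # frequency per level
percentages = (counts / counts.sum() * 100).round(1)  # descriptive percentages

# Share answering "at least somewhat concerned" -- the kind of cutoff behind
# summary figures such as the 91 percent reported for academic dishonesty.
at_least_somewhat = (responses >= "Somewhat concerned").mean() * 100

print(pd.DataFrame({"n": counts, "percent": percentages}))
print(f"At least somewhat concerned: {at_least_somewhat:.0f}%")
```

Treating the item as an ordered categorical keeps the levels in scale order and lets the cutoff comparison work directly, matching the frequencies-and-percentages reporting the committee describes.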
Findings

The findings below are organized around five key concerns, along with recommendations related to those concerns.

1. Improving Professional Development Regarding AI and Technology Harms

Despite the widespread use of ed-tech, there is an overall lack of understanding about the relationship between AI and commonly used data-intensive educational technologies.

Respondents viewed AI as having the potential to harm or worsen many aspects of their work, while viewing ed-tech as at least “somewhat helpful.” Eighty-one percent of respondents noted that they use some type of ed-tech, and 45 percent said they see it as at least somewhat helpful. Fifteen percent said they are required to use AI, yet nearly 81 percent reported that they are mandated to use ed-tech systems like the Canvas learning management system (LMS) or Google Suite, which have components that include predictive analytics, even when AI is “turned off.” This suggests that many faculty members and other academic workers may not realize that they are using AI-enabled tools for their work. Six percent said that they are required to use AI services like the Turnitin plagiarism detector and viewed Canvas as a data-intensive tool that is synonymous with AI.

Recommendation 1: Colleges and universities should offer better and more critically informed, holistic professional development around AI, including what it is and is not and how it has been incorporated already into ed-tech business models (for example, not all users of the Canvas LMS recognize that its “Intelligent Insights” use AI and data analytics–driven recommendations, regardless of whether faculty members plan lessons using the Khan Academy’s Khanmigo “teacher tools” add-on).

Recommendation 2: There is a need for discussions in academic communities that acknowledge technology as a labor concern and connect it with concerns around AI infrastructure and use in other sectors while underscoring the public service mission of higher education.

Recommendation 3: While administrators set up “initiatives,” they are not doing enough to respond to day-to-day concerns; faculty members and other academic workers need localized policy solutions, including opportunities to directly participate in the development of best practices or guardrails that address deteriorating working and learning conditions.

Untested and unproven technologies are adopted uncritically.

Respondents articulated that AI technology is untested and unreliable in sensitive scenarios and thus questioned whether it should be used at all. One respondent noted, “AI is not dependable enough for most scientific medical work. I uncover major errors. This is something that teachers and students must be made aware of.” Another highlighted how generative AI interferes with the core goals of education and learning: “Large language models like ChatGPT produce shallow, unoriginal ‘predictive text-y ideas’ and I worry that my students and others will increasingly believe that that’s okay—that there’s nothing better than that to aspire to.”

Recommendation: Professional development around AI should include guidance for determining whether AI is the most appropriate solution for a given problem and for considering whether AI use is responsible, given its potential long-term impact on institutions and academic communities.

2. Implementing Shared Governance Policies to Promote Oversight

AI integration initiatives are spearheaded by administrations with little input from faculty members and other campus community members, including staff and students. High levels of concern arose around AI and technology procurement, deployment, and use; dehumanized relations; and poor working and learning conditions.

Seventy-one percent of respondents said decision-making and AI initiatives are overwhelmingly led by college or university administrations, and many respondents described administrators exerting great effort to introduce AI into research, teaching, policy, and professional development with little meaningful input from faculty members, staff, or students. Examples include developing institutional AI tools, holding workshops on teaching and detecting plagiarism, and subscribing to AI tools for students (such as Grammarly, marketed as an “AI writing partner”) without involving faculty members or students in the decision-making process. One respondent noted that “admin doesn’t seem to care about or value faculty input on this or any other topic” and hoped for “more faculty involvement in determining how AI and tech generally are used.”

This finding similarly highlights the importance of implementing AI policies created by and for faculty members, staff, and students.

Recommendation: Institutions should develop meaningful shared governance policies and practices around ed-tech decision-making and use, as discussed in the AAUP’s Statement on Online Education.9 A standing or ad hoc committee of faculty members, staff, and students should be elected by their respective constituencies and charged with monitoring, evaluating, and reviewing ed-tech procurement processes and policy. This ed-tech oversight committee should
- have access to and meaningful input in all parts of the procurement and deployment process;
- push for an assessment of the impact of proposed ed-tech tools before decisions are made about procurement;
- have the ability to meaningfully challenge decisions about ed-tech procurement and deployment;
- perform ongoing evaluations of ed-tech data flows and uses at the university and vendor levels;
- receive institutional funds allocated for these evaluations;
- have meaningful levers of enforcement (for example, an agreement by the institution to rescind or abolish contracts for any ed-tech system or vendor that the committee finds harmful or unhelpful);
- have the ability to suggest new ed-tech policies;
- monitor accountability of administration members for protecting faculty, staff, and student data; and
- act as a liaison with the broader campus community.

3. Improving Working and Learning Conditions

Preexisting work intensification and devaluation are the main reasons respondents give for using AI to assist with academic tasks.

A quarter of respondents (25 percent) reported using AI tools or platforms to perform service, administrative duties, and teaching tasks that are often undervalued aspects of academic labor. For example, some respondents said that they used generative AI to write email messages, letters of recommendation, and internal reports or memos and to review grant applications and manuscripts. Respondents also reported using AI tools or platforms for detecting plagiarism and for developing course materials, which are also undervalued but time-consuming and crucial instructional duties.

Respondents were overwhelmingly concerned with student plagiarism made possible by generative AI. Ninety-one percent noted that they were at least somewhat concerned about preventing academic dishonesty. However, one respondent wrote, “I am less concerned about the ‘honesty’ part than the ‘failure to learn’ part.” Another respondent noted, “It is now more difficult for [students] to develop their thoughts on a topic because they don’t have to spend time with it while they work through writing about it. . . . I am worried that they will never again get the chance to change their opinion as they expose themselves to ideas over the long term.” This distinction between honesty and failure to learn is critical because it highlights one of the core goals of higher education: to develop a well-informed and thoughtful citizenry.

This finding suggests that there is a need for higher education to refocus on the relational aspects of education and learning, as opposed to punitive measures that pit already overworked faculty members against debt-burdened students.

Implementing AI in higher education adds to faculty and staff workloads and exacerbates long-standing inequities.

Overall, respondents said that the rollout of AI at their colleges and universities has not made their jobs any better, but it has made some aspects of their work worse. Survey results indicate that AI has generally led to at least somewhat worse outcomes for the teaching environment (according to 62 percent of respondents), pay equity (30 percent), job enthusiasm (76 percent), academic freedom (40 percent), and student success (69 percent).

This finding is important because it emphasizes how the implementation of ed-tech, including AI, is connected to long-standing inequities in higher education. Required professional development on the use of AI in teaching and research adds to faculty and staff workloads—without evidence that AI improves productivity, pedagogy, or teaching and learning processes or outcomes. Indeed, AI may have negative effects on teaching and learning, especially in some pedagogical contexts.

Eighty-five percent of respondents said that they were at least somewhat concerned about how ed-tech is being implemented at their institutions. When considering areas that may be affected by increased use of AI in higher education, respondents resoundingly (at least 95 percent for each category) stressed the importance of protecting intellectual property rights and academic freedom, implementing meaningful opt-out policies, maintaining data privacy, improving job security and wages, preserving workplace autonomy, and supporting accessibility.

One respondent remarked that “there is ample evidence for the damage done to individuals and to society by many tech products, including generative AI, but not limited to it. However, it is treated as an unqualified good in almost all circumstances and one is required to learn and use certain technologies, even when non-tech options would be better for the workplace environment, student learning, and personal quality of life.” This response suggests the need for humanizing relationships in higher education communities and emphasizing that technocratic solutions (like plagiarism-detection technology) do not by themselves move us closer to caring and effective educational environments.

Recommendation: Promote accountability for internally developed tools or tech company partnerships by requiring tech companies and vendors to provide proof of insurance covering liabilities related to the technology and to include in contracts indemnity clauses that transfer the responsibility for harms enacted (for example, data breaches or racial or socioeconomic discrimination) to the tech company or vendor.
- Contracts should specify the penalties for any harms and the process for assessing and enforcing those penalties.
- In many if not all cases the tech company or vendor should be held liable and should pay users or the institution an amount of money proportional to the harm.
- Procurement should be overseen by a subcommittee of the earlier proposed ed-tech oversight committee with meaningful input from faculty members, staff, and students.

AI raises concerns about bias, discrimination, and accessibility because of its untested and uneven impacts on students and student learning.

Data-intensive technologies have a high likelihood of making recommendations, predictions, and analyses that are biased against historically marginalized people because the data and infrastructures these technologies use are also biased.10 Ninety-eight percent of respondents ranked supporting accessibility as at least somewhat important when considering the increased use of AI in higher education. This finding is a reminder that student and faculty access to technology and learning experiences and ease of use should be core goals of any technologies introduced. However, many respondents also cautioned that these technologies can be so harmful that they should be subjected to thorough review. One respondent flatly charged that AI technology “has become a tool of surveillance by administration.”

Recommendation 1: Require administrations to provide clear statements about how technology monitoring fits within the scope of administrators’ work, including specifics on why it is necessary, what this monitoring entails, and what outcomes may result for those monitored.
- If monitoring faculty members, staff, or students is proven to be necessary for some educational reason—for example, when an instructor provides assessments on submitted student work using an LMS such as Canvas—any monitoring by the LMS or the institution must not continue indefinitely and should occur only within the framework necessary for a specified task.
- The administration is prohibited from using electronic monitoring that results in violation of labor and employment laws; records workers off-duty or in sensitive areas; uses high-risk technologies, such as facial recognition; or identifies workers exercising their rights under employment and labor law.
- Administrations that electronically monitor employees to assess their performance are required to disclose performance standards to faculty members and staff and apply these standards consistently.
- An outside technology governance body should review and document productivity-monitoring systems and systems for setting performance quotas prior to their use.
- Faculty members, staff, and students should be allowed to opt in to and out of monitoring of particular sessions.
- Communications made available through any electronic dataset or system are protected under the same principles of academic freedom as print and other traditional media. As discussed in the AAUP’s report Academic Freedom and Electronic Communications, initially published in 1997 and last revised in 2013, this protection applies to email communications, websites, online bulletin boards, LMS content, blogs, listservs, and social media—as well as to classroom recordings or videoconferencing communication on platforms such as Zoom.11

Recommendation 2: Minimize harms and bias resulting from the use of AI. Campuses must conduct impact assessments of electronic monitoring systems, testing for bias and other harms to faculty members, staff, and students prior to use.
- Technology should be accessible for the wide range of needs of faculty members, staff, and students.
- Technology should be used to augment accessibility to the institutional working or learning environment where necessary.
- All technologies used should be subject to regular and ongoing accessibility audits by a group of users approved by the campus AAUP chapter or another independent body, such as the ed-tech oversight committee proposed above or a subcommittee thereof.
- Institutional funds should be available for these audit activities.
4. Demanding Transparency and the Ability to Opt Out

Faculty members, staff, and students lack choice and meaningful avenues to opt out of AI-based tools and other ed-tech.

This finding highlights the importance of not only prioritizing the needs and well-being of faculty members, staff, and students when implementing new AI and other ed-tech systems but also establishing policies that allow them to opt out of such systems. Furthermore, the unquestioned status quo of the continued expansion of AI often forecloses possibilities to negotiate the use of AI.

Recommendation 1: Create meaningful opt-out policies, avoiding one-size-fits-all approaches.

Faculty members, staff, and students should be able to opt out of technology use in ways that will not impose a burden on them or negatively affect their working or learning conditions.

It is the prerogative of educators to determine the best pedagogy in a given context and to decide whether AI engagement in learning is detrimental or simply inappropriate in some cases. Faculty members should be able to opt out of assessments that use AI or other ed-tech tools in classrooms and online or to require the use of other modalities to assess students’ performance, understanding, and knowledge.

Institutions should allow different constituents to explore and establish best practices and protections most appropriate to specific contexts and applications.

Recommendation 2: Protect intellectual property for instructional materials.

Standards should be set for how instructional materials may or may not be used in AI and other ed-tech data streams, including LMS platforms such as Canvas. While course syllabi are considered public documents at some colleges and universities, instructional materials such as lectures and original audiovisual materials constitute faculty intellectual property.12 As discussed in the AAUP’s Statement on Online Education, these principles apply to courses taught in person, online, or in a hybrid format. These principles also apply to AI and ed-tech generally, meaning that instructional materials, like other works of scholarship, must not be incorporated into AI data streams—for example, AI training datasets—without the consent of the creator.13

Recommendation 3: Protect student and instructor privacy.

Data, content, and information collected in AI and other ed-tech data streams should not be the property of the institution or vendors unless they identify and clearly disclose to faculty, students, and administrators a specific educational need. The Family Educational Rights and Privacy Act, a US federal law that protects the privacy of student education records, is a floor and not a ceiling for considering whether data-intensive technologies should be procured and used in a higher education setting.14

Faculty members, staff, and students should be allowed to opt out of having their data, content, or information used or shared, with no penalty to them or to their working or learning conditions.

Few institutions have created transparent, equitable policies or provided effective professional development opportunities on AI use.

Respondents noted the need for transparent and equitable policies on AI in their reflections on what they would change about the use of technology in higher education. One respondent emphasized the importance of “fair and equitable policies with clear transparency” for faculty members and students to better understand the acceptable uses of AI. Addressing student use of AI, another respondent noted that “strategies, resources, and training would be really helpful in navigating this challenge.”

Although 90 percent of respondents reported that their colleges and universities have introduced initiatives around uses of AI for teaching, research, learning, or work, these initiatives have not materialized into clear policies on AI implementation and use. This finding aligns with Inside Higher Ed’s 2024 Survey of College and University Chief Academic Officers, which found that only 20 percent of colleges and universities had published a policy or policies governing the use of AI, including in teaching and research.15 The lack of transparent and equitable policies seems at odds with the cross-campus AI initiatives, workshops, and expenditures spearheaded by college and university administrations and described by some respondents in terms such as “enormous,” highlighting again how faculty members, staff, and students are left out of major decisions about technology implementation and use. In open-ended responses, survey takers asked for better policies and more rigorous enforcement and accountability around technology in higher education. Some argued for guardrails, resources, and recommendations for ethical AI use, while others argued for prohibiting use in certain scenarios.

Faculty members and staff need to have input in evaluating ed-tech before deployment, to have a say in how that technology is deployed and used, and to participate in ongoing evaluation of the technology and related policy over time. Ongoing communication, professional development, and cultivation of transparency with faculty members and staff will be important. Meaningful shared governance policies and practices should include access to information about the procurement and deployment process and the ability to meaningfully challenge administrations’ decision-making facilitated by data-intensive technology, as discussed earlier.

Recommendation 1: Provide ongoing professional development opportunities.

Faculty members, other academic workers, and students should have access to ongoing professional development—approved by the ed-tech oversight committee described above and organized and paid for by the institution—about technology uses, harms, and benefits.

Recommendation 2: Ensure transparency and disclosure in ed-tech and the use of data streams.

Faculty members and other academic workers should have
- access to institutional technology procurement practices;
- transparency regarding the cost of technologies procured and any alternatives;
- access to contracts with vendors;
- access to data collected about them through ed-tech platforms or electronic monitoring systems;
- the right to correct any data collected about them and to hold administrations accountable for adjusting any appointment-related decisions that were based, partially or solely, on inaccurate or biased data;
- access to names of “partner companies” and vendors and clear articulations of how they use data streams; and
- protection from retaliation for exercising their rights, including private rights of action.
5. Protecting Faculty Members and Other Academic Workers

Academic workers across job categories are worried about increased reliance on contingent appointments and declining wages. Respondents expressed concern about academic freedom and intellectual property rights.

Eighty-seven percent of respondents maintained that it is important to improve job security and wages as AI is rolled out. Among part-time faculty members, there was near unanimity on this issue. Similarly, many respondents said that AI has generally led to worse outcomes for pay equity (27 percent), academic freedom (20 percent), and job enthusiasm (38 percent) at their institutions. Part-time faculty members and librarians were nearly unanimous that AI was leading to worse outcomes in most areas. Eighty-seven percent of respondents said that it is at least somewhat important to protect intellectual property rights over the products of their academic work.

The path of dehumanization and automation is not the only option available. The growing adoption of data-intensive technologies in the workplace represents a critical challenge for workers across industries and job categories, highlighting the urgent need for a new set of labor standards for technology in higher education. These standards must be bold and comprehensive, keeping pace with the rapid advancements in workplace technologies and addressing the potential risks they pose to faculty members, staff, students, and society more broadly.

Academic workers are intimately familiar with the benefits, shortcomings, and harms of the technologies they use. Their engagement with technology offers insights that can drive meaningful change. It is important for faculty members and staff to participate actively in deciding which technologies are implemented, how they are used in their workplaces, and how resulting productivity gains are shared among all campus community members. Campuses can establish higher education workplace policies to harness new technologies and prioritize living-wage jobs, good working conditions that contribute to good learning conditions, and equity across job and identity categories.

Recommendation 1: Maintain protections against work intensification.

Members of the institution’s ed-tech oversight committee should identify issues of work intensification, such as plagiarism checking, as well as invisible labor—unseen and often uncompensated tasks and responsibilities that are essential but frequently overlooked—related to technology implementation. Any technology found by the committee to be meaningfully causing work intensification should be prohibited or curtailed, and the committee should propose “best practices” to minimize work intensification.

Recommendation 2: Provide protections against deskilling and job loss.

Decisions on faculty appointments such as hiring, tenure, promotion, or termination should not rely primarily or exclusively on AI or data-intensive analytic technologies. Instead, decision-makers must independently corroborate the findings and data and provide the faculty member with full documentation, including the actual data used.
- Data-intensive technologies cannot be used as a pretext for shifting faculty members holding tenure-line appointments to contingent appointments or lower-paid positions.
- Data-intensive technologies cannot be used to justify decreasing wages in any way.
- Data and information from these technologies cannot be the basis for decisions on faculty appointments such as hiring, reappointment, tenure, promotion, or termination.
- If any of the above scenarios occur, a hearing and audit should be held to evaluate the technology and consider prohibiting it.

Recommendation 3: Implement processes that allow faculty members and staff to meaningfully challenge administrative decisions on ed-tech.

There should be ongoing review of, and faculty participation in, decision-making. If reviews find that any technology contributes to deskilling, wage decreases, or job loss or to decreased academic freedom, intellectual property rights, faculty involvement in shared governance, or rights to organize for protections, there should be a process for faculty members and staff to meaningfully challenge the use of the offending technology and to reconsider, downsize, renegotiate, or void the contract for that technology.

Any technology that threatens the academic freedom, role in shared governance, or economic security of faculty members should be prohibited.

Recommendation 4: Protect academic freedom and the right to organize.

Fundamental principles of academic freedom apply as much to AI and other ed-tech data streams as they do to electronic communications in general, including communications among faculty members about their working conditions and organizing on their own behalf.16
Strategy, Targets, Outputs, and Action

The survey findings presented in this report highlight the need to establish structures of bottom-up shared governance to guide decisions around ed-tech, and especially AI, in higher education. The report also points to the importance of fostering solidaristic strategies across higher education, education more broadly, white-collar and industrial sectors, and civil society and grassroots organizations fighting on many fronts to establish bottom-up policy around generative AI.
Internal and External Organizing

Targets: AAUP members and the broader higher education community

There is a lot of work to do to communicate the potential harms related to uncritical deployment of AI and other ed-tech. Academic work and the learning conditions of students—and indeed higher education more broadly—are often devalued by technology. There is also a need to establish research functions within the AAUP that facilitate collaboration across associations and unions in higher education and other sectors. Together, these organizations could provide evolving best practices, guidelines, collective bargaining wish lists, ed-tech professional development, and organizing support as well as guidance on individual institutional issues.

External communication strategies:
- Organize and conduct workshops and develop documentation for faculty members covering ed-tech procurement processes, budget forensics, assessment of the impact of technology, and vendor practices.
- Communicate with campus community members, policymakers, and the public through op-eds, AAUP member communications, conferences, meetings, and cross-union, civil society, legislative, and public conversations.
- Develop web resources promoting these initiatives and other publicly available materials.

Internal communication strategies:
- Build out robust faculty, staff, and student educational resources on how technology is an issue that affects academic work, educational environments, and quality of life.
- Work toward establishing faculty, staff, and student boards or governing bodies that can hold administrators accountable for their decision-making, with the goal of correcting technology policy failures to serve the educational mission of the institution.
Guardrails and Best Practices

Target: AAUP members

Each of the conceptual recommendations above points to problems and solutions to overcome them. Building on the AFT document detailing “guardrails” for using AI in primary and secondary schools17 and the findings and recommendations in this report, the AAUP should develop and promulgate a set of best practices for policymaking around the use of AI in higher education. In institutions without a bargaining unit, chapter members and leaders should attempt to adopt these practices through governance bodies, such as academic senates, and put in place mechanisms for enforcement and oversight.
Bargaining

Target: AAUP collective bargaining chapters

Establish a wish list developed from the recommendations in this report to be adapted by bargaining-unit legal representatives for each institutional context.

As they draft demands and negotiate agreements with administrations, bargaining units should consult with any internal ed-tech committees or teams they have established.
State Policy

Target: State lawmakers

Currently in the United States, employers are introducing untested data-intensive technologies with almost no regulation or oversight, as former Federal Communications Commission Chairman Tom Wheeler documented.18 Workers largely do not have the right to know what data are being gathered about them or whether the data are being shared with others. They do not have the right to review or correct the data. Employers in many states are not required to notify workers about any electronic monitoring or algorithms they are basing decisions on, and workers do not have the right to challenge those decisions.

One of the most important strategies for state policy would be providing government agencies and employees the skills and resources necessary to research, educate others about, enhance, and enforce these protections. There should be increases in funding at state and federal levels for that purpose. However, we know that the Trump administration is currently uninterested in advancing such measures, as it has reversed even the mildest interventions to promote thoughtful, equitable advances in AI.19 At present, even state-level interventions seem unlikely. Nonetheless, we can build momentum for future policy interventions even where it appears there is no way forward.
Activity: Faculty, staff, and student ed-tech oversight committee
Organizing target: Internal to higher education
Tools, outputs, and practices: Develop faculty, staff, and student committees and governing bodies that provide oversight on ed-tech procurement processes and policy.

Activity: Guardrails and best practices
Organizing target: Internal
Tools, outputs, and practices: Develop language around AI and other ed-tech deployment to be adapted for collective bargaining contracts and faculty handbooks.

Activity: Member outreach and education
Organizing target: Internal
Tools, outputs, and practices: Develop outreach materials (reports, one-pagers, FAQs, videos) to distribute to chapter leaders and members. Host and participate in events to distribute materials and discuss relevant issues.

Activity: Structural analysis of education and technology
Organizing target: External to higher education
Tools, outputs, and practices: Emphasize how systemic inequalities in education combine with other concerns through external-facing outreach and communications.

Activity: Solidarity and collective power across sectors
Organizing target: External
Tools, outputs, and practices: Collaborate with associations and unions in higher education and other sectors to develop best practices, guidelines, bargaining language, and professional development. Provide organizing support and advice on issues related to AI and technology deployment in the workplace.

Activity: State policy
Organizing target: External
Tools, outputs, and practices: Support state-level policies that establish guardrails and regulation on technology deployment in higher education and other sectors, building on existing policy efforts that focus on algorithmic decision-making, worker surveillance, replacing workers with technology, and protecting intellectual property. Provide guidance by organized labor to government agencies and employees through coordinated outreach and research efforts.


The table above sums up this section’s suggestions about strategies, targets, outputs, and action.
Conclusion: Next Steps for AI in Higher Education

It is essential that higher education workers are in control of technological advancements affecting their employment. Faculty members and other academic workers are the closest to these technologies and are intimately familiar with their benefits, shortcomings, and harms. Their familiarity with ed-tech promises invaluable insights that can drive meaningful change. Faculty members should actively participate in deciding which ed-tech systems are adopted, how they are implemented in their workplaces, and how the resulting benefits are shared among all academic workers. We can establish appropriate higher education workplace policy and use our power to harness new technologies for fostering dynamic and productive institutions that prioritize economic security, good faculty working conditions and student learning conditions, and equity for all campus community members, while refusing tools that undermine these aims.

BRITT S. PARIS (Information Studies)
Rutgers University–New Brunswick, chair

CYNTHIA CONTI-COOK (Law)
Collaborative Research Center for Resilience

DANIEL GREENE (Information)
University of Maryland

KYLE M. L. JONES (Library and Information Science)
Indiana University Indianapolis

BRIAN JUSTIE (Information Studies)
University of California, Los Angeles

MATTHEW KIRSCHENBAUM (English)
University of Maryland

LISA KRESGE (Technology and Work)
University of California, Berkeley

EMMA MAY (Library and Information Science)
Rutgers University–New Brunswick

AIHA NGUYEN (Urban Planning)
Data & Society

REBECCA REYNOLDS (Library and Information Science)
Rutgers University–New Brunswick

SERITA SARGENT (Library and Information Science)
Rutgers University–New Brunswick

LINDSAY WEINBERG (Science and Technology Studies)
Purdue University

SARAH MYERS WEST (Communication)
AI Now Institute

DAVID GRAY WIDDER (Software Engineering)
Cornell University

Notes

1. Howard Besser and Maria Bonn, “Impact of Distance Independent Education,” Journal of the American Society for Information Science 47, no. 11 (1996): 880–83, https://doi.org/10.1002/(SICI)1097-4571(199611)47:11<880::AID-ASI14>3.0.CO;2-Z; Christopher Newfield, The Great Mistake: How We Wrecked Public Universities and How We Can Fix Them (Johns Hopkins University Press, 2016); and Andrew Feenberg, “The Online Education Controversy and the Future of the University,” Foundations of Science 22, no. 2 (2017): 363–71, https://doi.org/10.1007/s10699-015-9444-9.

2. AAUP, Policy Documents and Reports, 12th ed. (Johns Hopkins University Press, 2025), 245.

3. Arizona State University, “Arizona State University Collaboration with OpenAI Charts the Future of AI in Higher Education,” PR Newswire, January 18, 2024, https://www.prnewswire.com/news-releases/arizona-state-university-collaboration-with-openai-charts-the-future-of-ai-in-higher-education-302038869.html; Kathryn Palmer, “Tech Giants Partner with Cal State System to Advance ‘Equitable’ AI Training,” Inside Higher Ed, February 5, 2025, https://www.insidehighered.com/news/tech-innovation/artificial-intelligence/2025/02/05/cal-state-system-tech-giants-partner.

4. Britt Paris, Rebecca Reynolds, and Catherine McGowan, “Sins of Omission: Critical Informatics Perspectives on Privacy in E-learning Systems in Higher Education,” Journal of the Association for Information Science and Technology 73, no. 5 (2022): 708–25, https://doi.org/10.1002/asi.24575.

5. Kelli Bird, Benjamin Castelman, Yifeng Song, and Zachary Mabel, “Big Data on Campus,” Education Next 12, no. 4 (2021), https://www.educationnext.org/big-data-on-campus-putting-predictive-analytics-to-the-test/.

6. Paris, Reynolds, and McGowan, “Sins of Omission”; Kyle M. L. Jones, “Learning Analytics and Higher Education: A Proposed Model for Establishing Informed Consent Mechanisms to Promote Student Privacy and Autonomy,” International Journal of Educational Technology in Higher Education 16, no. 1 (2019): 24, https://doi.org/10.1186/s41239-019-0155-0.

7. Hao-Ping (Hank) Lee, Advait Sarkar, Lev Tankelevitch, Ian Drosos, Sean Rintel, Richard Banks, and Nicholas Wilson, “The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects from a Survey of Knowledge Workers,” in Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Association for Computing Machinery Digital Library, April 25, 2025, https://doi.org/10.1145/3706598.3713778.

8. AAUP, Policy Documents and Reports, 12th ed. (Johns Hopkins University Press, 2025), 120.

9. Policy Documents and Reports, 12th ed., 245–46.

10. See Paris, Reynolds, and McGowan, “Sins of Omission”; Bird, Castelman, Song, and Mabel, “Big Data on Campus”; Joy Buolamwini, Unmasking AI: My Mission to Protect What Is Human in a World of Machines (Random House, 2023); and Safiya Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York University Press, 2018).

11. AAUP, “Academic Freedom and Electronic Communications,” Policy Documents and Reports, 12th ed. (Johns Hopkins University Press, 2025), 48.

12. AAUP, “Statement on Intellectual Property,” Policy Documents and Reports, 11th ed. (Johns Hopkins University Press, 2015), 261–63.

13. Policy Documents and Reports, 12th ed., 246.

14. See Kyle M. L. Jones and Amy VanScoy, “The Syllabus as a Student Privacy Document in an Age of Learning Analytics,” Journal of Documentation 75, no. 6 (January 1, 2019): 1333–55, https://doi.org/10.1108/JD-12-2018-0202; Elana Zeide, “The Limits of Education Purpose Limitations,” University of Miami Law Review 71, no. 2 (March 1, 2017): 494; and Paris, Reynolds, and McGowan, “Sins of Omission.”

15. “2024 Survey of College and University Chief Academic Officers,” Inside Higher Ed, https://www.insidehighered.com/reports/2024/04/15/2024-survey-college-and-university-chief-academic-officers.

16. See AAUP, “Academic Freedom and Electronic Communications,” Policy Documents and Reports, 12th ed., 48–63.

17. American Federation of Teachers, “Commonsense Guardrails for Using Advanced Technology in Schools,” published June 18, 2024; updated March 2025, https://www.aft.org/press-release/aft-announces-new-guardrails-artificial-intelligence-nations-classrooms.

18. Tom Wheeler, “The Three Challenges of AI Regulation,” Brookings Institution TechTank blog, June 15, 2023, https://www.brookings.edu/articles/the-three-challenges-of-ai-regulation/.

19. Exec. Order 14179, 90 Fed. Reg. 8741 (January 31, 2025), https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence.
