Paper Three: Education Technology

Compelled Participation and Structural Extraction

Abstract.

This paper applies the Personal Data Royalty (PDR) framework to the education technology sector, with particular focus on Google Workspace for Education, Instructure's Canvas learning management system, Turnitin's academic integrity platform, and the remote proctoring industry. It documents a participation structure that is categorically different from those examined in Papers One and Two: participation is compelled by law, institutional mandate, and federal funding accountability requirements embedded in the Elementary and Secondary Education Act. Students can't refuse participation without refusing education itself. The survivable refusal condition doesn't merely fail in this domain; it doesn't exist. This paper establishes the extraction surface across the major EdTech platforms, documents the regulatory gap created by the Family Educational Rights and Privacy Act's School Official Exception, examines Turnitin's archive model as the most structurally transparent illustration of compelled participation producing a commercial dataset, and identifies the disclosure gap in Google's financial reporting as itself evidence of the misclassification the framework documents. The PDR calculation in this domain is more qualitative than in Papers One and Two precisely because the platform whose participation surface is most captive has chosen not to disclose what it earns from that surface. That choice isn't incidental to the argument. It is the argument.

Education technology platforms occupy a unique position in the participation economy. Unlike the users of social media or gaming platforms, their users don't choose to participate. Participation is a condition of receiving an education. Papers One and Two established the participation value baseline for the attention economy and the free-to-play gaming economy, documenting the structural misclassification of behavioral participation as an ambient byproduct rather than a productive contribution. Both papers operated against platforms where participation was mostly voluntary. This paper moves to a domain where that defense is structurally unavailable. Students don't choose Google Classroom, Canvas, or Turnitin in the same way that users choose Facebook or Roblox. A school district chooses them, a professor requires them, and a federal accountability system incentivizes their adoption. The student encounters them as the infrastructure through which education itself is delivered and recorded. This paper documents what that compelled participation generates, who captures the value it produces, and why the regulatory frameworks designed to protect students address everything except the question the Personal Data Royalty formula asks.

Section 1: The Compulsory Participation Surface


The Participation Chain.

In Papers One and Two, the misclassification argument required establishing that nominally voluntary participation wasn't as free as it appeared, that engagement optimization systems eroded refusal capacity over time, that network effects made exit costly, and that the zero-price entry was a calculated acquisition strategy rather than generosity. The critics' response was always available: users could choose otherwise. In education technology, that response isn't available. The participation chain that delivers students to EdTech platforms doesn't begin with a product decision; it begins with law.

Every state in the United States mandates school attendance. The specific ages and structures vary, but the principle is uniform: education is compulsory, and the state enforces that compulsion. A child doesn't choose to attend school the way an adult chooses to open a social media application. Attendance is a legal requirement whose violation carries consequences for the child, the parent, and ultimately the institution. Compulsory attendance delivers students to institutions.

Institutions have adopted digital learning platforms as the infrastructure through which education is delivered, recorded, and assessed. That adoption wasn't accidental and it wasn't purely pedagogical. It was driven, in significant part, by federal accountability requirements that created institutional demand for digital systems capable of managing student participation data at scale.

The Federal Accountability Infrastructure.

The Elementary and Secondary Education Act (ESEA), currently administered through the Every Student Succeeds Act framework, ties Title I federal funding to documented accountability metrics. School districts receiving Title I funds (which includes the majority of American public school districts) must report attendance, assignment completion, assessment results, progress indicators, and intervention outcomes. These records determine funding allocations, trigger intervention requirements, and form the basis of federal compliance reporting. When ESEA was first enacted, these records were maintained on paper. In the current environment they are maintained digitally, through learning management systems, assessment platforms, and student information systems that aggregate participation data at the district, state, and federal level. Districts can't meet the reporting requirements that federal funding depends on without digital infrastructure capable of capturing and organizing student participation data at scale. The platforms that provide that infrastructure (Google Classroom, Canvas, Blackboard, and their associated assessment and integrity tools) have become de facto requirements of federal compliance, not merely pedagogical choices. The participation chain therefore runs from compulsory attendance law, through institutional adoption driven by federal accountability requirements, to mandatory platform use by students.

Students can't access the curriculum, submit assignments, complete assessments, or receive grades without accessing the platforms. Participation in the platform system isn't a condition attached to a discretionary service; it's a condition of receiving a compulsory public education.

The Sole-Source Infrastructure Problem.

Once a district or university adopts a learning management system, the platform becomes instructional infrastructure in the most literal sense. Assignments are built inside it, course materials live inside it, grading systems run through it, communication between students and faculty runs through it, and academic integrity enforcement runs through it. The course itself exists inside the platform.

Public institutions must open the contract to competitive bids before replacing a learning management system (LMS). Those that seek to continue with existing vendors without competitive bidding file sole-source justifications — public documents stating that no reasonable alternative exists. These documents are public records for most universities and school systems, and they are routinely filed for Canvas, Blackboard, Turnitin, and integrated assessment platforms. The justification language is consistent across institutions: the platform cannot reasonably be replaced because the institution's courses, materials, and records exist inside it.

These documents are institutions acknowledging, in formal public records, that the platforms they've adopted are no longer tools that could be replaced. They're infrastructure that students are compelled to use because the course itself exists nowhere else. The sole-source justification is the institutional system admitting what the PDR framework argues: these aren't optional services. They're mandatory participation environments.


Section 2: The EdTech Extraction System


Google Workspace for Education.

Google Workspace for Education is the most widely deployed educational technology platform in the United States. Google Classroom alone is used by approximately 60,000 K–12 schools in the United States, and over 80% of the top 100 United States universities have adopted Google Workspace for Education as their primary productivity suite. The platform serves more than 170 million students and educators globally across 230 countries and territories.

The data generated through student use of Google Workspace for Education is extensive. A student using Google Classroom generates document creation and editing behavior, assignment submission patterns, search queries conducted while logged into their educational account, navigation behavior across Google properties, communication patterns through Gmail and Google Meet, and engagement with YouTube when accessed through their institutional account. Each interaction is time-stamped, associated with the student's account, and captured in Google's infrastructure.

The Electronic Frontier Foundation (EFF) filed a complaint with the Federal Trade Commission in 2015 documenting that Google's Chrome Sync feature, enabled by default on school-issued Chromebooks, collected student browsing data and associated it with their educational accounts for purposes extending beyond educational use. The complaint established that even when students navigated to non-educational Google properties (YouTube, Google Maps, and Google News) while logged into their school accounts, that navigation behavior was captured and associated with their educational profiles. Google subsequently modified certain practices and clarified others, but the core issue the EFF identified (that Google collects data on students using non-educational services without parental consent despite having pledged not to) was not fully resolved.

In April 2025, Google was sued in the Northern District of California for collecting thousands of data points spanning children's digital lives through Google Workspace for Education, including hidden tracking technology embedded in the Chrome browser that creates a unique digital fingerprint, allowing Google to track a child even when cookies are disabled or cookie-blocking technologies are in use. The complaint alleged that Google relies on school personnel consent rather than parental consent for this collection. It also alleged that school personnel don't have legal authority to consent on behalf of parents for data collection that extends beyond the educational context.

Canvas and the LMS Ecosystem.

Instructure's Canvas learning management system holds number-one market share in the United States LMS market, used by approximately 36% of North American higher education institutions and a substantial share of K–12 districts. Instructure reported approximately $634 million in trailing twelve-month revenue as of late 2024, on track for its publicly stated target of $1 billion by 2028.

Canvas captures a comprehensive record of student academic behavior: time spent on each assignment, document access patterns, discussion post frequency and content, quiz attempt history, video engagement within Canvas Studio, peer review interactions, grade progression, and login frequency. These all generate participation data that the platform records and that institutions use for academic analytics, early warning systems, and intervention triggers. The data generated through Canvas participation is used by institutions for reporting purposes that connect directly to the federal accountability infrastructure described in Section 1. Assignment completion rates, assessment outcomes, and intervention records generated through Canvas are the inputs to the district and institutional reporting that Title I compliance requires. The student's participation in Canvas is simultaneously academic activity and federal accountability documentation. The platform captures both in the same interaction.

Turnitin and the Archive Problem.

Turnitin serves more than 71 million students across approximately 16,000 institutions globally. California State University procurement records confirm per-student pricing of $2.59 to $2.71 annually for the base plagiarism detection service, with AI detection add-ons priced separately. Sacra estimates Turnitin generated approximately $203 million in revenue in 2024. Turnitin's business model contains the most structurally transparent illustration of compelled participation producing a commercial dataset in the entire series. The mechanism is simple and worth stating precisely.

Turnitin detects plagiarism by comparing submitted student papers against a proprietary archive of previously submitted documents. When a student submits an assignment through Turnitin, two things happen simultaneously: the assignment is checked for similarity against the existing archive, and the assignment is added to the archive against which future submissions will be compared. The archive grows with every submission. A larger archive produces better plagiarism detection, and better plagiarism detection produces a more commercially valuable product. Universities pay licensing fees for access to the product.

The archive isn't generated by Turnitin employees; it's generated by students who are required to submit assignments through the platform to receive academic credit. The student's submission is simultaneously the service being delivered, the input that improves the service for future users, and the contribution that increases the platform's commercial value. The student contributes to the dataset as a condition of completing coursework, yet the dataset is retained by the platform. The platform sells access to the dataset to the institution that requires the student to contribute to it.

This structure was litigated in A.V. v. iParadigms, LLC, in which students challenged Turnitin's practice of archiving their submissions as copyright infringement. The court ruled that the practice qualified as fair use under the four-factor test, permitting Turnitin to retain and use student submissions in its archive. The ruling was legally correct under the fair use framework. It did not address the question the PDR formula asks: whether the students whose compelled submissions built the archive had recognized standing in the commercial arrangement the archive made possible. The answer to that question was, and remains, no.

Remote Proctoring.

The remote proctoring market was valued at approximately $648 to $852 million in 2024, growing at approximately 15% annually. The dominant platforms (Proctorio, Examity, Honorlock, and Respondus) require students to enable their webcam, microphone, and screen sharing during examinations. The data collected includes facial imagery, eye movement patterns, keystroke dynamics, mouse movement, hardware information, ambient audio, room scans, and identity verification documents including government-issued identification.

The Electronic Privacy Information Center filed a formal complaint with the District of Columbia Attorney General in December 2020 documenting that five major proctoring platforms (Respondus, ProctorU, Proctorio, Examity, and Honorlock) collected excessive biometric and personal data beyond what was required to provide proctoring services, deployed opaque AI systems to analyze that data without disclosing their logic or determinations to students, and made it impossible for students enrolled in proctored courses to reasonably avoid these systems.

The behavioral data generated through remote proctoring is qualitatively different from the engagement data generated through LMS platforms. It includes biometric data (facial geometry, eye tracking, and physiological indicators of stress) collected in the student's home, during an assessment that determines academic outcomes, and from a person who can't refuse participation without forfeiting the exam. The Behavioral Signal Category for remote proctoring participation, in the framework established in this series, includes biometric signals of a type not captured on any other platform documented to this point.


Section 3: Revenue and User Scale


The Disclosure Gap.

The PDR calculation for education technology faces a structural obstacle that the calculations in Papers One and Two did not encounter. Google doesn't break out education revenue as a separate segment in its Annual Report on Form 10-K. The platform with the most captive participation surface (serving roughly 170 million students and educators, mandatory in most American K–12 schools and widely used in major universities, with no survivable refusal condition) reports its education revenue within the broader Google Services segment, where advertising and subscription revenues are attributed to geographic markets rather than user categories. This is a disclosure choice. Alphabet discloses the revenue it generates from advertising markets with sufficient geographic precision to support the PDR calculations in Paper One. It doesn't disclose the revenue it generates from the educational participation of a captive user base. The most compelled participants in the Google ecosystem are also the participants whose participation value is most obscured in the accounting.

Market estimates and licensing structure analysis produce a range. Google Workspace for Education's paid tiers (Education Standard and Education Plus) are priced at approximately $3 to $5 per student annually for institutional licenses, with the free tier providing basic access while capturing participation data for product improvement and platform development purposes. At 170 million global users, with the United States representing a disproportionate share of paid licenses, education revenue is likely in the range of several hundred million to over one billion dollars annually, though the absence of disclosure prevents the precision the PDR formula requires.

That absence is itself a finding. A company that discloses United States advertising revenue with sufficient precision to calculate per-user participation value for its consumer platforms has chosen not to disclose equivalent figures for its educational platform. The PDR formula is designed to measure value that originates in participation and is captured by platforms that don't recognize its origin. When a platform additionally declines to disclose the revenue that participation generates, the misclassification operates at the level of the accounting system itself, not merely at the level of the individual transaction.

For Canvas, the figures are confirmable. Instructure reported approximately $634 million in trailing twelve-month revenue as of late 2024, with the United States representing the dominant share. Canvas serves a higher education user base of several million active students in the United States, with K–12 users adding substantially to that figure. The PDR baseline for Canvas participation, calculated from confirmed revenue against estimated active United States user population, falls in a range comparable to the lower end of the Roblox calculation established in Paper Two: modest at the per-user level, significant at the aggregate level, and understated because it captures only licensing revenue and not the behavioral analytics value of the participation data the platform assembles.

For Turnitin, the $203 million in estimated 2024 revenue divided across 71 million global students produces a per-student figure of approximately $2.86, which corresponds closely to the confirmed California State University per-student pricing and supports the estimate's reasonableness. For United States students specifically, who represent a disproportionate share of the institutional base, the per-student figure is higher. None of this captures the value of the archive those students' compelled submissions built.
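The per-student arithmetic can be checked in a few lines. The revenue and student figures below are the estimates quoted above (Sacra's revenue estimate, Turnitin's reported user base), and the price band is the confirmed California State University procurement range; the calculation is a back-of-envelope sketch, not an audited figure:

```python
# Back-of-envelope check of the Turnitin per-student figure.
# Inputs: Sacra's 2024 revenue estimate and the reported global
# student count, both estimates quoted in the text above.
revenue_2024 = 203_000_000    # estimated Turnitin revenue, USD
global_students = 71_000_000  # reported students served

per_student = revenue_2024 / global_students
print(f"Implied revenue per student: ${per_student:.2f}")  # ≈ $2.86

# Confirmed California State University procurement price band.
csu_low, csu_high = 2.59, 2.71
gap = per_student - csu_high
print(f"Gap above the confirmed band: ${gap:.2f}")  # ≈ $0.15
```

The implied global average lands just above the confirmed CSU band, consistent with United States institutions paying confirmed base pricing while add-ons and other markets lift the blended figure.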


Section 4: The FERPA/COPPA Regulatory Gap


The Family Educational Rights and Privacy Act was enacted in 1974. Its core definitions (education records as records maintained by educational institutions) were written for paper filing systems in an era before the commercial internet, before cloud computing, before behavioral analytics, and before the emergence of AI systems trained on student interaction data. The law has been amended but not fundamentally restructured since 1974. Its enforcement mechanism (potential withdrawal of federal funding) has never once been applied in the fifty years of the statute's existence.

The central structural flaw is the School Official Exception. FERPA prohibits educational institutions from sharing students' personally identifiable information without consent, but permits sharing it with school officials who have legitimate educational interests, and with contractors acting on behalf of the institution. In practice, this exception has been interpreted to give EdTech vendors contracted by schools the status of school officials for FERPA purposes, permitting them to access student records without parental consent or notification. Research estimates that the average school district provides data access to approximately 1,449 EdTech vendors under this exception.

The definitional gap compounds the exception problem. When a student logs into Google Classroom and the system records which documents they opened, how long they spent on each section, which links they clicked, and what searches they conducted, it's not settled whether those behavioral signals constitute education records protected by FERPA or usage data that falls outside the statute's definitions. The platforms and the Department of Education have no consistent answer. In the gap between the 1974 definition and the 2024 reality of behavioral analytics infrastructure, an enormous volume of student participation data circulates in commercial systems without clear legal protection or recognized participant standing.

The Children's Online Privacy Protection Act (COPPA) provides a different framework for students under 13. COPPA requires parental consent before collecting personal information from children under 13, but provides a school exception that permits schools to consent on behalf of parents when data collection serves educational purposes. The critical limitation is that this school consent is strictly confined to educational use; commercial applications are expressly prohibited. Research has found that 96% of EdTech applications share student data with third parties, likely without the legal authorization the school consent mechanism requires for non-educational purposes.

Neither FERPA nor COPPA addresses the question the PDR framework raises. Both treat student data as a privacy problem, a matter of disclosure, consent, and appropriate use of information. Neither treats it as a participation value problem, as the unrecognized contribution of compelled participants in commercial systems that capture value without acknowledging its origin. A student whose assignment submission builds Turnitin's archive hasn't merely had their data collected without adequate consent; they've contributed to the construction of a commercial asset whose value derives directly from their compelled participation, and they have no legal standing in the arrangement that commercial asset makes possible.


Section 5: The Legitimacy Conditions


Compulsion, Not Deception.

The Introduction, Platform Participation, established four conditions whose simultaneous presence is required for participation to function as voluntary exchange: survivable refusal, recognized standing, transparency of terms, and independent jurisdiction. Papers One and Two showed these conditions failing through deception: through engagement optimization, through misclassified inputs, and through terms of service that satisfied legal disclosure requirements without producing functional transparency. In education technology, the conditions fail differently. They fail through compulsion.

Survivable Refusal.

The survivable refusal condition requires that a participant be able to decline participation without immediate consequences to their capacity to function in ordinary economic and social life. In the education technology context, the condition requires that a student be able to decline platform participation without consequences to their capacity to receive an education. That condition can't be satisfied: as Section 1 documented, the course materials, assignments, assessments, and grades exist only inside the platform.

The refusal isn't survivable because the education isn't accessible through any other means. This is categorically different from the refusal analysis in Papers One and Two. In those papers, refusal was costly but technically possible. The cost structure made it non-survivable in practice for users whose social and economic infrastructure was organized around the platforms. In education technology, refusal is non-survivable by design. The platform is the course. Refusing the platform means refusing the education.

In constitutional law, systems that condition access to public benefits on surrendering certain rights are analyzed under the doctrine of unconstitutional conditions. The principle (articulated across a line of cases including Koontz v. St. Johns River Water Management District) holds that government cannot require surrender of constitutional rights as the price of receiving a public benefit, even when the benefit itself is legitimate. This paper doesn't make a constitutional claim; it notes the structural similarity: students are required to participate in behavioral data extraction systems as a condition of receiving a compulsory public education. The participatory conditions attached to public schooling exist in an environment the unconstitutional conditions doctrine was designed to identify, where the power differential between the institution and the individual makes the voluntariness of the condition a legal fiction.

Recognized Standing.

Minor students, under the age of 18, have no recognized standing in any legal, commercial, or regulatory framework governing their participation in educational platforms. They can't negotiate terms of service, they can't challenge platform data practices, and they can't claim compensation for participation contributions. Their legal relationship with the platform is mediated through the institution that contracted with the platform, which itself has limited standing to negotiate data terms and frequently lacks the legal resources to modify standard vendor contracts.

For adult students in higher education, recognized standing is formally present but functionally absent. A university student who objects to Turnitin's practice of archiving their submissions has no practical recourse. The platform's terms of service govern the relationship and the institution's contract with Turnitin determines what those terms are. The student's choice is to submit through the required platform or to not submit. The A.V. v. iParadigms ruling confirmed that the student's legal objection to archive use doesn't survive fair use analysis. The standing to challenge the commercial arrangement that the student's compelled contribution makes possible doesn't exist in any current legal framework.

Transparency of Terms.

Transparency of terms requires sufficient knowledge of what participation generates to make a meaningful choice about engagement. In the education technology context, this condition fails at two levels.

At the institutional level, school districts and universities frequently adopt EdTech platforms through procurement processes in which data terms aren't the primary evaluation criteria. Districts lack the legal resources to audit vendor privacy practices, negotiate data use limitations, or ensure that standard contract terms comply with FERPA's requirements. The Fordham Law School study on cloud computing in public schools found that fewer than 25% of agreements between schools and EdTech vendors specified the purposes for which student data could be used.

At the student level, transparency isn't merely insufficient, it's structurally inapplicable. A student required to submit a paper through Turnitin to complete a course assignment isn't presented with a meaningful choice about the terms of that submission. The terms are set by a contract between the institution and the platform to which the student isn't a party. The student's only decision is whether to submit and receive credit or not submit and fail. No disclosure about archive use, data retention, or commercial applications of their submission would change the decision available to them, because the decision available to them isn't about the platform, it's about whether to complete the course.

Independent Jurisdiction.

Students have no independent legal recourse for the participation value misclassification this paper documents. FERPA provides no private right of action; individuals cannot sue under FERPA. COPPA enforcement runs through the FTC against platforms and through the Department of Education against institutions, not through individual student claims. No regulatory framework in any jurisdiction provides a mechanism through which a student could claim compensation for the participation value their compelled submissions generated.

The April 2025 class action against Google, in the Northern District of California, represents an attempt to establish recourse through privacy tort claims. It alleges unauthorized data collection rather than participation value misclassification. It's the closest available legal mechanism to the claim the PDR framework describes, and it operates in a different register: privacy violation rather than unrecognized productive contribution. The independent jurisdiction condition isn't merely weak in education technology, the legal architecture required to satisfy it hasn't been proposed in any jurisdiction that has examined the question.


Section 6: The PDR Formula Applied


The four legitimacy conditions fail in education technology, not through the gradual erosion of nominal voluntariness documented in Papers One and Two, but through structural compulsion that makes voluntariness unavailable from the outset. The PDR's legitimacy variable, lambda, doesn't merely approach zero in this domain; it's set to zero by the participation chain itself. The student's participation in the platform is compelled by federal law and institutional mandate. The student at the end of that chain has no more voluntary relationship with the behavioral data extraction system than a worker assigned to a company town has a voluntary relationship with the company store.

In the Origin Economics framework, this extraction is expressed as Y = λ · f(H, K, T): output as a function of human-origin participation, capital, and technology, multiplied by whether the legitimacy conditions of the exchange were satisfied. For education technology platforms, lambda failed before the student ever opened a laptop. The output (Canvas's $634 million in annual revenue, Turnitin's $203 million, the $648 million remote proctoring market, and the undisclosed Google Education revenues embedded in a $350 billion annual report) was produced from H, from human-origin participation, organized by institutional capital and scaled by technology, under conditions where the participant had no survivable refusal, no recognized standing, no transparency of terms, and no independent jurisdiction. H in the PDR formula is annual participation hours, the measurable proxy for everything students contribute to the systems that deliver, record, and assess their education. What those hours contain is larger than any formula captures: academic development, identity formation, intellectual labor, and in the case of Turnitin submissions, the creative and analytical work product that builds a commercial archive.
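The output identity can be sketched directly in code. The aggregator f below is a placeholder of my own choosing (the framework leaves f abstract), so this is an illustrative sketch rather than a specification; the only substantive behavior shown is that a legitimacy coefficient of zero gates the entire product:

```python
from typing import Callable

def origin_output(lam: float,
                  f: Callable[[float, float, float], float],
                  H: float, K: float, T: float) -> float:
    """Y = λ · f(H, K, T): legitimacy-weighted output.

    lam (λ) in [0, 1] encodes whether the four legitimacy conditions
    of the exchange were satisfied; H is annual participation hours,
    K is capital, T is technology.
    """
    if not 0.0 <= lam <= 1.0:
        raise ValueError("λ must lie in [0, 1]")
    return lam * f(H, K, T)

# Placeholder aggregator for illustration only.
f = lambda H, K, T: H * K * T

# Under compelled participation the paper sets λ = 0: whatever revenue
# the platform books, the legitimacy-weighted output is zero.
print(origin_output(0.0, f, H=1.0e9, K=1.0, T=1.0))  # 0.0
```

The design point is that λ multiplies the whole of f(H, K, T) rather than any single factor: legitimacy isn't a discount on one input, it's a condition on the exchange itself.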

The Canvas Calculation.

Instructure's approximately $634 million in trailing twelve-month revenue, the dominant share of it from the United States, divided across several million active United States higher education and K–12 users, produces a PDR baseline in the range of $50 to $150 per user per year depending on the denominator estimate. This baseline is lower than the Google and Meta figures from Paper One for two reasons: Canvas revenue is primarily institutional licensing rather than advertising, meaning it doesn't capture the behavioral analytics value of the participation data the platform assembles; and Canvas serves a younger demographic mix with lower advertising market rates. The baseline understates the true participation value for the same reason the Roblox baseline understated it: the platform captures value through multiple mechanisms, only one of which is directly reflected in disclosed revenue.

The Turnitin Calculation.

Turnitin's approximately $203 million in estimated 2024 revenue against 71 million global students produces a per-student figure of approximately $2.86. For United States students, who represent a disproportionate share of the institutional and revenue base, the figure is higher. The $2.86 figure covers only the licensing fee that institutions pay for access to the archive. It doesn't capture the value of the archive itself: the proprietary dataset of student submissions that constitutes Turnitin's core intellectual asset, built entirely from compelled student contributions whose authors received nothing in exchange beyond completion of a course requirement. The archive isn't priced in any disclosed figure. It's the unrecognized productive contribution that the PDR formula was designed to make visible.
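The per-student figure above is a single division over the paper's two inputs:

```python
# Turnitin licensing revenue per enrolled student, using the figures
# stated in the text (estimated 2024 revenue over the global student base).
TURNITIN_REVENUE_USD = 203_000_000
GLOBAL_STUDENTS = 71_000_000

per_student = TURNITIN_REVENUE_USD / GLOBAL_STUDENTS
print(f"${per_student:.2f} per student per year")  # prints $2.86 per student per year
```

As the text notes, this number prices only the institutional licensing fee; the archive asset built from the same submissions appears nowhere in the numerator.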

The Google Education Gap.

Google doesn't disclose education revenue. The PDR calculation for Google Workspace for Education therefore can't be anchored to a confirmed primary source figure in the way that the Paper One calculations were. What can be confirmed: 170 million global users, 60,000 United States K–12 schools, 80% of the top 100 United States universities, and paid tier pricing of approximately $3 to $5 per student annually for institutional licenses. Working backward from market share and pricing structure produces an education revenue estimate in the range of several hundred million to over one billion dollars annually: large enough to be material, imprecise enough to be unverifiable. That imprecision is a disclosure choice by one of the world's largest technology companies, applied to the participation of a captive user base that can't refuse. The PDR formula can't be precisely applied where disclosure doesn't exist. The absence of disclosure is the finding.
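The backward-working estimate can be bracketed for its licensing component alone. Only the user count and per-seat prices come from the text; the paid-tier penetration scenarios below are illustrative assumptions, and the paper's full range also reflects value streams beyond seat licensing.

```python
# Back-of-envelope bracket for Google Workspace for Education licensing
# revenue. User count and prices are from the text; the paid-tier
# penetration rates are assumptions introduced here for illustration.
GLOBAL_USERS = 170_000_000
PRICE_LOW, PRICE_HIGH = 3.0, 5.0   # paid-tier USD per student per year

def licensing_bracket(paid_share: float) -> tuple[float, float]:
    """Annual licensing revenue range at an assumed paid-tier share."""
    paid_users = GLOBAL_USERS * paid_share
    return paid_users * PRICE_LOW, paid_users * PRICE_HIGH

for share in (0.25, 0.50, 1.00):   # assumed penetration scenarios
    low, high = licensing_bracket(share)
    print(f"paid share {share:.0%}: ${low/1e6:,.0f}M to ${high/1e6:,.0f}M")
```

Even the maximal licensing-only scenario stays under one billion dollars, which is consistent with the text's point that the estimate's upper bound depends on undisclosed value channels beyond seat pricing.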


Section 7: The Counterarguments


The Educational Benefit Objection.

The primary counterargument to the PDR analysis of education technology is that these platforms deliver genuine educational value: that Canvas improves learning management, that Turnitin deters plagiarism and maintains academic integrity, that Google Classroom democratizes access to productivity tools that would otherwise be unavailable to under-resourced districts, and that these services would cost substantially more if priced as direct student charges.

Each of these claims is accurate. None addresses the misclassification argument. The question the PDR framework asks isn't whether educational platforms deliver value but whether the participation those platforms require is correctly classified and whether the students compelled to provide it have any recognized standing in the commercial arrangements that participation makes possible. A student in an under-resourced district who uses Google Classroom because the district can't afford paid alternatives is receiving a genuine educational benefit. They are simultaneously generating behavioral participation data, contributing to Turnitin's archive, and providing the compelled usage base that Google Education's market position depends on. The benefit and the extraction aren't mutually exclusive — they're simultaneous features of the same transaction. Establishing that the benefit is real doesn't establish that the extraction is recognized.

The Institutional Consent Objection.

A regulatory objection holds that institutional adoption of EdTech platforms, governed by FERPA compliance requirements and vendor data processing agreements, constitutes adequate consent on behalf of students and that platforms operating within this framework are operating within legitimate exchange.

This objection conflates institutional consent with participant standing. The institution's contractual relationship with the platform governs what data the platform can collect and how it can be used. It doesn't establish the student as a recognized participant in that arrangement. A student whose compelled submissions build Turnitin's archive isn't a party to the contract between the institution and Turnitin. They have no standing under that contract to claim recognition of their contribution, negotiate terms, or receive compensation. While institutional consent satisfies the legal mechanism FERPA establishes for third-party data sharing, it doesn't satisfy any of the four legitimacy conditions the PDR framework requires. A platform can be in complete FERPA compliance while simultaneously failing every condition of legitimate exchange that Origin Economics defines.

The Free Tier Objection.

An objection specific to Google holds that Google Workspace for Education's free tier represents a straightforward service provision without commercial data extraction, and that the paid tiers provide additional services at disclosed prices representing genuine exchange. This objection is undermined by Google's own disclosure practices and litigation history. The EFF complaint and subsequent analysis documented that behavioral data generated through student use of non-educational Google properties while logged into educational accounts was collected regardless of whether the student's institution was on a free or paid tier. The April 2025 class action documented hidden tracking infrastructure that persists across browsing contexts regardless of account settings. The free tier isn't a charitable provision of educational tools. It's the participation acquisition strategy described in Papers One and Two, deployed in an environment where the participants are younger, more captive, and generating behavioral profiles that extend from the earliest age of digital participation across decades of adult commercial behavior. The free tier's commercial logic is identical to every other zero-price entry point this series has documented. Its educational framing doesn't change its structural classification.


Section 8: Legal and Regulatory Exposure


The education technology sector faces regulatory exposure across three distinct frameworks, each addressing a different dimension of the misclassification this paper documents.

The FTC's enforcement authority under COPPA represents the most immediate regulatory risk for platforms operating with K–12 user bases. The Commission's 2023 enforcement action against Amazon for COPPA violations related to Alexa and Ring established that the FTC is willing to pursue enforcement against major technology platforms for data collection practices involving children at scale. In April 2025, Schwarz v. Google LLC alleged hidden tracking of K–12 students without parental consent — conduct that, if proven, would represent COPPA violations at substantial scale. A separate class action, Farwell v. Google LLC, alleging that Google collected biometric data from students through Google Workspace for Education without proper consent, settled for $8.75 million, with final court approval granted October 17, 2025. Google Workspace for Education's presence in the majority of American K–12 schools means that the population of potentially affected students isn't a niche subgroup — it's the majority of American schoolchildren.

FERPA's structural inadequacy has been documented extensively in the academic and policy literature. The Public Interest Privacy Center has proposed amendments to FERPA that would create separate exceptions for school staff and third-party contractors, establishing distinct accountability requirements for EdTech vendors that current law doesn't provide. Until those amendments are enacted, the School Official Exception will continue to function as the mechanism through which student participation data reaches 1,449 vendors per district without the consent, notification, or accountability that the statute's original framers intended.

The A.V. v. iParadigms ruling resolved the copyright question for Turnitin's archive model under existing fair use doctrine. It didn't resolve the labor and property questions the PDR framework raises. The Brazilian Public Ministry of Labor's investigation into Roblox's monetization of minor creative content — documented in Paper Two — establishes precedent for labor frameworks treating the productive output of student and minor participants as a protected category rather than a voluntary contribution. The same analytical structure applies to Turnitin's archive: student submissions are the productive input that builds the commercial asset. The fair use ruling permits the archiving. It doesn't classify the archiving as correctly compensated. Those are different questions, and the second hasn't been answered.

The unconstitutional conditions doctrine doesn't apply directly to private EdTech platforms — it applies to government actors conditioning public benefits on constitutional rights surrender. What the doctrine illuminates for this paper is the structural environment in which EdTech extraction operates. When state compulsory attendance laws deliver students to institutions, institutional federal funding requirements deliver students to digital platforms, and those platforms capture behavioral participation data from students who can't refuse, the resulting participation system isn't the product of private market dynamics. It's the product of public governance structures that have attached a commercial data extraction system to a compulsory public function. That attachment hasn't been examined by any regulatory framework currently in force.

Legislators in sixteen states are now debating restrictions on classroom technology. They're addressing the engagement optimization problem but not touching the participation value problem. Bills that limit daily screen time, ban devices for younger students, or require software vetting are responses to the harm the extraction surface produces: distraction, behavioral disruption, academic decline. They're not responses to the structural misclassification this paper documents. A screen time limit reduces how long a student spends on a platform, but it doesn't recognize the student as the origin of the value that time generates, doesn't establish standing for the student in the commercial arrangements that participation makes possible, and doesn't address what happens to the behavioral data already assembled from years of compelled participation before any limit takes effect. The political energy is real, the coalition is broad, and the legislative momentum is the first sustained public challenge to a decade of unchecked EdTech expansion. It will produce reforms and those reforms will make the extraction surface smaller. They won't make the participant visible within it. (NBC News, Ed Tech Industry Scrambles to Fight a Wave of Bills to Limit Screen Time in Schools, March 2026.)


Section 9: Conclusion

The education technology sector generates hundreds of millions of dollars annually from platforms whose users are required by law to participate in the systems through which that revenue is produced. A student submitting an assignment through Turnitin builds the archive that makes the product commercially valuable. A student accessing Canvas builds the engagement record that institutions use for federal compliance reporting and that the platform uses for product development and expansion. A student using Google Classroom on a school-issued Chromebook generates behavioral participation data across Google's property ecosystem from the earliest age at which digital participation begins, extending a longitudinal record that will inform commercial systems for decades of adult purchasing behavior.

The participants who generate this value are students. They can't refuse participation without refusing education. They have no recognized standing in the commercial arrangements their participation makes possible, they aren't informed of what their participation generates, and they have no independent recourse for its misclassification. Lambda fails completely, uniformly, and by design — not through the gradual erosion of nominal voluntariness that Papers One and Two documented, but through the structural compulsion of a participation chain that begins with federal law and ends with a child opening a laptop to complete a homework assignment.

Google doesn't disclose what it earns from educational participation. Turnitin doesn't disclose the value of the archive its users built. The remote proctoring industry doesn't disclose what it does with biometric data collected from students who can't refuse examination. The regulatory frameworks in force treat these as privacy problems — but they're not. They're participation value problems. The distinction determines what remedy is adequate.

A privacy framework produces the right to limit data collection and require deletion. A participation value framework produces recognition of the student as the origin of what their compelled participation generates, and the standing that recognition would require in every commercial arrangement that participation makes possible. No existing legal architecture in the United States establishes that recognition for students. Its absence isn't an oversight. The absence is the structural condition that makes the education technology extraction system commercially viable.

The PDR formula establishes what that absence costs per student per year, derived where disclosure permits and estimated where it does not. The more important finding isn't any individual figure. It's that the most compelled participation surface in the series is also the least disclosed, the least regulated at the level the misclassification requires, and the most structurally dependent on public governance structures that have attached a commercial data extraction system to a compulsory public function without examining what that attachment costs the people on whom participation is compelled. The price was never zero. For students, it was paid before they were old enough to know a price was being charged.

Participation Value Ledger

Running total after Paper Three. Add only the lines that apply to you.

If you use Google and Meta $723 / year
If you play Roblox $97 / year
Student, per household $53 – $153 / year
Your running total $723 – $973 / year

The lower figure reflects participation common to most internet users. The upper figure includes additional domains examined in this series.
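The ledger's running total is a sum over the applicable line items. The figures below are the per-year amounts stated in the table; the lower bound counts only the Google and Meta line, the upper bound assumes every line applies to the household.

```python
# Running participation-value total from the ledger. Each entry is a
# (low, high) per-year range in USD, as stated in Papers One through Three.
LEDGER = {
    "Google and Meta (Paper One)": (723, 723),
    "Roblox (Paper Two)": (97, 97),
    "Student in household (Paper Three)": (53, 153),
}

# Lower bound: baseline participation only; upper bound: all lines apply.
lower = LEDGER["Google and Meta (Paper One)"][0]
upper = sum(high for _, high in LEDGER.values())
print(f"Running total: ${lower} to ${upper} per year")  # prints Running total: $723 to $973 per year
```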

Citations

Alphabet Inc., Annual Report on Form 10-K, fiscal year ended December 31, 2024, filed with the United States Securities and Exchange Commission, February 4, 2025.

Meta Platforms, Inc., Annual Report on Form 10-K, fiscal year ended December 31, 2024, filed with the United States Securities and Exchange Commission, January 29, 2025. Fourth Quarter and Full Year 2024 Earnings Release, filed as Exhibit 99.1.

Roblox Corporation, Annual Report on Form 10-K, fiscal year ended December 31, 2024, filed with the United States Securities and Exchange Commission, February 18, 2025. Referenced for comparative PDR baseline.

Instructure Holdings, Inc., Annual Report on Form 10-K, fiscal year ended December 31, 2024. ir.instructure.com.

Sacra, Turnitin Revenue, Funding and Growth Rate, 2024. sacra.com.

California State University, Procurement Records and System Contract, Turnitin, March 2024 and 2025.

Google for Education Statistics 2026, About Chromebooks, February 2026. aboutchromebooks.com.

Electronic Frontier Foundation, FTC Complaint: Google for Education, December 1, 2015. eff.org.

Electronic Frontier Foundation, Defending Student Data from Classrooms to the Cloud: 2016 in Review, December 2016. eff.org.

Bloomberg Law, Google Hit With Lawsuit Over Data Collection on School Kids, April 8, 2025. news.bloomberglaw.com.

Electronic Privacy Information Center, In re Online Test Proctoring Companies, FTC and DC Attorney General Complaint, December 9, 2020. epic.org.

McDermott Will and Emery, EdTech and Privacy: A Shifting Regulatory Landscape, May 2025. mcdermottlaw.com.

AI Governance Group, FERPA, COPPA, and Beyond: Bridging the EdTech-Education Compliance Gap, June 2025. aigovernancegroup.com. Citing FBI/K12 SIX study on third-party data sharing.

CoSN, Annual EdTech Leadership Survey, 2023. cosn.org.

National Center for Education Statistics, Digest of Education Statistics, 2024. nces.ed.gov.

Public Interest Privacy Center, Fixing FERPA: Enhancing EdTech Accountability, August 2025. publicinterestprivacy.org.

Intel Market Research, Remote Online Exam Proctoring, Market Outlook 2025–2032, 2024. intelmarketresearch.com. Note: commercial market estimate; methodology not independently verified.

Market Growth Reports, Remote Proctoring Solutions Market Size and Share, 2024. marketgrowthreports.com. Note: commercial market estimate; methodology not independently verified.

Imanol Arrieta-Ibarra, Leonard Goff, Diego Jiménez-Hernández, Jaron Lanier, and E. Glen Weyl, Should We Treat Data as Labor? Moving beyond Free, AEA Papers and Proceedings 108, 2018, pp. 38–42.

Eric Posner and E. Glen Weyl, Radical Markets: Uprooting Capitalism and Democracy for a Just Society, Princeton University Press, 2018; reviewed by Peter Isztin in Oeconomia 9(4), 2019.

Shoshana Zuboff, The Age of Surveillance Capitalism, PublicAffairs, 2019; as analyzed in Rosa Aaron Dufva, In Dialogue with The Age of Surveillance Capitalism, Master's thesis, Tampere University, 2022.

A.V. v. iParadigms, LLC, 562 F.3d 630, United States Court of Appeals for the Fourth Circuit, 2009.

Koontz v. St. Johns River Water Management District, 570 U.S. 595, 2013. Referenced for unconstitutional conditions doctrine.

Schwarz v. Google LLC, filed April 2025, U.S. District Court, Northern District of California.

H.K. et al. v. Google LLC, Case No. CC 20LL00017, Circuit Court of McDonough County, Illinois. Settlement of $8.75 million granted final approval October 17, 2025. Distribution of payments began February 13, 2026.

Elementary and Secondary Education Act, as reauthorized through the Every Student Succeeds Act, 20 U.S.C. § 6301 et seq.