It would be something of an understatement to say that Covid-19 has caused major changes to everyday life around the world. In the world of education, virtual classrooms, with students and teachers connected across the Internet, have become commonplace, and the norm in many countries. The shift from in-person to online learning has not only been forced on people by circumstances, whether they like it or not; it has also taken place very rapidly. Little thought has been given to potential problems with virtual classrooms, since the emphasis was on putting them in place quickly, so that the educational process was disrupted as little as possible. That makes a new paper from Princeton’s Center for Information Technology Policy, a research center that studies digital technologies in public life, particularly valuable:
The paper develops a threat model that describes the actors, incentives, and risks in online education. Our model is informed by our survey of 105 educators and 10 administrators who identified their expectations and concerns. We use the model to conduct a privacy and security analysis of 23 popular platforms using a combination of sociological analyses of privacy policies and 129 state laws… alongside a technical assessment of platform software.
As the researchers point out, the well-established norms that protect the privacy of students and teachers in the traditional educational context are absent from the new virtual classrooms. There, privacy is determined largely by the policies of the companies that make the software. As is usually the case, few people bother to read those policies, let alone understand them. As a result, practices that would be completely unacceptable in the physical classroom – things like surreptitiously recording students and teachers, or recording data about their studies – may be happening as a matter of course, without anyone being aware of the fact. Collecting and storing large quantities of personal data about every student is so easy that it often happens by default. Analyzing the privacy policies, the researchers found that 41% permitted a platform to share data with advertisers, which conflicts with at least 21 US state laws, while 23% allowed a platform to share location data. Neither would be acceptable in most educational contexts, which underlines how poorly privacy is protected on those platforms.
On the plus side, the research also revealed the importance in the US of Data Protection Addenda (DPAs). These are side agreements in which universities use their size to negotiate extra privacy protections for users of a platform. Recognizing this need, many providers of virtual classroom software offer templates that can serve as the basis of these agreements. The widespread use of DPAs is a reminder that organizations can often negotiate stronger data protection for users – there is no requirement to accept low default levels of privacy.
Another serious consequence of the rapid, almost desperate implementation of online teaching systems is that security tends to be overlooked in the rush to get something working fast. The researchers looked at binary security, known vulnerabilities, and bug bounties, drawing on the Common Vulnerabilities and Exposures (CVE) system, as well as the National Vulnerability Database (NVD) and its impact and exploitability scores. The platform most widely used among participants in the researchers’ survey, Zoom, was found to have a number of problems:
Zoom has many recent CVEs (11). While intense recent attention is no doubt a contributing factor, the substantial number of recent vulnerabilities suggests a systemic component to Zoom’s security issues. Further, as our evaluation postdated Zoom’s efforts to remediate the aforementioned security issues our results likely understate recent problems with the software. On the other hand, Zoom’s rapid improvement in both software and process (that represented a response to disfavorable media coverage) point to a positive trajectory for Zoom.
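The kind of tally the researchers describe can be reproduced from public vulnerability data. As a minimal sketch, assuming CVE records have already been downloaded from the National Vulnerability Database as JSON (the `impactScore` and `exploitabilityScore` field names follow NVD’s CVSS v3 metrics; the sample records below are hypothetical, not Zoom’s actual CVEs):

```python
from datetime import date

def summarize_cves(records, since):
    """Count CVEs published on or after `since` and average their
    CVSS v3 impact and exploitability sub-scores."""
    recent = [r for r in records if date.fromisoformat(r["published"]) >= since]
    if not recent:
        return {"count": 0, "avg_impact": None, "avg_exploitability": None}
    return {
        "count": len(recent),
        "avg_impact": sum(r["impactScore"] for r in recent) / len(recent),
        "avg_exploitability": sum(r["exploitabilityScore"] for r in recent) / len(recent),
    }

# Hypothetical records in the shape of NVD's CVSS v3 metrics
sample = [
    {"published": "2020-04-01", "impactScore": 3.6, "exploitabilityScore": 2.8},
    {"published": "2020-05-15", "impactScore": 5.9, "exploitabilityScore": 1.8},
    {"published": "2018-11-30", "impactScore": 3.6, "exploitabilityScore": 3.9},
]
print(summarize_cves(sample, date(2020, 1, 1)))
```

Restricting the count to a recent window is what lets the researchers separate a long-standing backlog of flaws from a current spike in newly reported ones.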
The research concludes with some recommendations on ways to protect the privacy of students and teachers, including the following:
universities should not spend all their resources on a complex vetting process before licensing software. That path leads to significant usability problems for end users, without addressing the security and privacy concerns. Instead, universities should recognize that significant user issues tend to surface only after educators and students have used the platforms and create processes to collect those issues and have the software developers rapidly fix the problems.
Although that might seem surprising, it’s a pragmatic response to the difficult situation created by Covid-19. It simply isn’t possible to conduct a leisurely evaluation of every aspect of multiple platforms before making a decision. Instead, the researchers suggest, it is more important to plan for problems after a platform has been installed and students and educators start using it in earnest. The key is to put in place a system for collecting those issues systematically so that they can be addressed.
It’s an idea that can be applied more widely. The pandemic has required many rapid changes to how life is conducted, whether in an educational context, in business, or at home. Most people just want something that lets them get on with their lives, at least to the greatest extent possible, which means they are likely to choose solutions quickly, and without the kind of research they would carry out in “normal” times. Software companies benefitting from that rapid uptake would do well to see the problems that will inevitably arise as an opportunity to spot and fix bugs quickly. If they don’t, they are likely to face the kind of public criticism Zoom has experienced for precisely this reason.
Featured image by Danslafrique.