Joe LeMire, Elon University’s chief of campus safety and police, first heard about the use of artificial intelligence software to track student social media activity at the annual North Carolina Association of Campus Law Enforcement Administrators Conference on Feb. 2. 

During a break between conference sessions, LeMire spoke with other North Carolina directors and chiefs of police about the practicality of using AI software as a campus safety measure. While LeMire said Elon does not currently use or have plans to implement AI software, the concept is on his mind as the practice becomes more common and effective nationwide.

LeMire said the deciding factor in whether to implement such software in the future is whether the AI programs can yield a proper return on investment for safety.

“The AI there is only as good as what you plug in,” LeMire said. “Think about the code words that you could put into an AI software that you would want to be tipped off if it was out there. So the return would be how often did that actually work and give you actionable information that you could do something with in order to keep campus safe.”

In light of Tyre Nichols’ death at the hands of Memphis police, LeMire said police departments across the country kept a close eye on social media after protests erupted in Memphis and concern developed over where else they might take place, Burlington, Greensboro and Charlotte included. Nichols, a Black man, was pulled over by Memphis police for alleged reckless driving Jan. 7 and died three days later. According to ABC News, Nichols’ attorneys state that body camera footage reveals police verbally threatening Nichols as well as physically holding him down, kicking him, pepper spraying him and beating him with a baton.

For protests and events that could become violent or dangerous, LeMire said he could see blanket AI software efficiently alerting and dispatching responders as needed, rather than officers manually combing social media for keywords or terms.

“Every active shooter event or every major thing that happens nationwide, if you ever go back, you always find those people who said, ‘Oh, yeah, so and so said, XYZ.’ Well, nobody reported it for whatever the reason,” LeMire said. “If we had that software, you might catch where people are because there's a great percentage of people that commit violence, that foreshadow that they're going to do it, and oftentimes, it's on social media.”
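The kind of keyword monitoring LeMire describes can be pictured, at its simplest, as a filter running over a stream of public posts. The sketch below is a hypothetical illustration, not any vendor’s product; the watchlist phrases, the Post structure and the alert_dispatcher stand-in are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    location: str  # e.g., a self-reported city or campus

# Hypothetical "code words" a department might want to be tipped off about.
WATCHLIST = {"active shooter", "bring a gun", "bomb threat"}

def matched_terms(post: Post) -> list[str]:
    """Return any watchlist phrases that appear in the post's text."""
    lowered = post.text.lower()
    return [term for term in WATCHLIST if term in lowered]

def alert_dispatcher(post: Post, matches: list[str]) -> None:
    # Stand-in for paging an on-duty responder with actionable context.
    print(f"ALERT: post by {post.author} near {post.location} matched {matches}")

def monitor(stream: list[Post]) -> None:
    for post in stream:
        matches = matched_terms(post)
        if matches:
            alert_dispatcher(post, matches)

monitor([Post("anon123", "Someone said there is a bomb threat at the library", "Burlington")])
```

As LeMire’s “only as good as what you plug in” comment suggests, the value of such a filter rests entirely on how often a match is actionable rather than noise.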

LeMire said he is also aware of AI software that uses security camera footage to track people, vehicles and suspicious behavior. He said he thinks AI programs for security cameras may be more effective in keeping the community safe than software geared toward social media.

“Just from a standpoint of somebody comes on campus, commits a crime and they say it was a black Ford F150 pickup truck, and the AI software was able to find a black F150 pickup truck and find out where it was, what time, what direction it traveled and get information,” LeMire said. “So I see it in the camera world probably a little bit more and there's really no expectation of privacy out there that way — where I think people could feel a little bit of invasion of privacy probably in the social media world.”
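The camera-based search LeMire describes amounts to filtering a log of vehicle detections by attributes such as color and model, then reading off times, locations and directions of travel. The sketch below shows only that query step under the assumption that some computer-vision system has already produced the detections; the Detection fields and sample log are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Detection:
    timestamp: datetime
    camera: str       # which campus camera recorded the vehicle
    color: str
    make_model: str
    heading: str      # rough direction of travel

def find_vehicle(detections: list[Detection], color: str, make_model: str) -> list[Detection]:
    """Return matching detections in time order: where, when and which way."""
    hits = [d for d in detections
            if d.color == color and d.make_model == make_model]
    return sorted(hits, key=lambda d: d.timestamp)

# Hypothetical log and a query for a black Ford F-150.
log = [
    Detection(datetime(2023, 2, 2, 14, 5), "Lot A", "black", "Ford F-150", "north"),
    Detection(datetime(2023, 2, 2, 14, 7), "Lot B", "white", "Toyota Camry", "south"),
    Detection(datetime(2023, 2, 2, 14, 9), "Gate 3", "black", "Ford F-150", "east"),
]
for d in find_vehicle(log, "black", "Ford F-150"):
    print(d.timestamp, d.camera, d.heading)
```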

Although LeMire acknowledged that the general public is concerned about invasion of privacy at the hands of AI, he said younger generations tend to be more accepting of such software, programs and algorithms because they are used to the technology.

“It's kind of funny when stuff like this comes up. … There's older people that are a little more nervous about it when it comes to internet searching, cameras, other things,” LeMire said.  “Sometimes the age group of people that are going to school aren't as troubled by it because they just know it exists. That's kind of the world they grew up in.”

Despite this, the use of such software to monitor social media has already sparked controversy on college campuses. From 2016 to 2019, campus police at the University of North Carolina at Chapel Hill used “geofencing” technology, which allowed them to collect social media posts from people within or entering a virtual boundary.

According to NBC News, UNC paid $73,500 for a three-year geofencing software contract, which state investigators and college police used to collect personal cell phone information from antiracism protesters prior to the COVID-19 pandemic.
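The “virtual boundary” at the heart of geofencing reduces, in its simplest form, to a geometric test on a post’s coordinates. The fragment below is a minimal sketch of a circular fence using the haversine distance; commercial products layer legal process, data brokers and far more machinery on top, and the coordinates shown are purely illustrative.

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def haversine_m(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two lat/lon points, in meters."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def inside_geofence(post_lat: float, post_lon: float,
                    center_lat: float, center_lon: float, radius_m: float) -> bool:
    """True if a geotagged post falls within the circular boundary."""
    return haversine_m(post_lat, post_lon, center_lat, center_lon) <= radius_m

# Hypothetical 500-meter fence around a campus quad (coordinates are made up).
QUAD = (36.1027, -79.5055)
print(inside_geofence(36.1030, -79.5050, *QUAD, radius_m=500))  # True
```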

In July 2020, the Electronic Frontier Foundation and the Reynolds School of Journalism at the University of Nevada, Reno launched a database of more than 7,000 surveillance technologies deployed by law enforcement agencies across the U.S. Through the database, the EFF reported that college campuses are implementing a “surprising number of surveillance technologies more common to metropolitan areas that experience high levels of violent crime.”

Some of these technologies include body-worn cameras, drones, automated license plate readers, social media monitoring, biometric identification, gunshot detection and video analytics.

Senior Hailey Crawford, who is majoring in economic consulting and minoring in computer science, conducted data mining and machine learning research on surveillance last spring. She said the use of AI software by universities can be largely positive without much risk to personal privacy.

“You have to think about you’re being monitored anyway,” Crawford said. “In terms of social media, I think that these algorithms are already being used to target against you. … You just have to be careful in what you're posting everywhere.”

Yet both LeMire and Crawford said they feel AI software is not yet ready for practical use. Crawford said the biggest finding of her research was that when scanning for potential crimes, threats or repeat offenses, AI held inherent biases against minority groups.

Since AI software is typically developed from existing databases, Crawford emphasized that racial minority groups are profiled more frequently, often because of skewed population demographics and higher incarceration rates in those records. Especially with the majority of Elon’s community being white and Christian, Crawford said she worries about the effectiveness of such programs if they were ever implemented at Elon.
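Crawford’s point about inherited bias can be shown with a toy calculation: if a model’s risk score is learned from historical records in which one group was recorded more often, the model reproduces that skew even when underlying behavior is identical. All figures and group labels below are invented for illustration.

```python
# Toy illustration of how skewed historical data produces skewed predictions.
# Every number here is invented; it describes no real population.

# Suppose two groups offend at the same true rate ...
TRUE_RATE = {"group_a": 0.02, "group_b": 0.02}

# ... but historical enforcement recorded group_b twice as often.
RECORDED_RATE = {"group_a": 0.02, "group_b": 0.04}

def learned_risk_score(group: str) -> float:
    """A naive model that simply memorizes the recorded rate for each group."""
    return RECORDED_RATE[group]

THRESHOLD = 0.03  # flag anyone whose learned score exceeds this

for group in TRUE_RATE:
    flagged = learned_risk_score(group) > THRESHOLD
    print(f"{group}: true rate {TRUE_RATE[group]:.2%}, "
          f"learned score {learned_risk_score(group):.2%}, flagged={flagged}")
# group_a is never flagged and group_b always is, despite identical true rates.
```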

“In some ways, it would be useful to know who's wearing or doing what, but obviously then it comes into cases of what is privacy and does that make me feel safer?” Crawford said. “Or is that going to end up targeting certain people or certain things.”

Both LeMire and Crawford also expressed concern about the cost of implementing AI software, as LeMire estimated that a high-quality AI software contract could cost the university up to $80,000 a year. As a student, Crawford said she would be worried about how much a contract of that nature would raise tuition.

While LeMire said he could see AI software being implemented at Elon at some point in the future, he emphasized that the university does not currently have any concrete plans in the works.

“I think the pros outweigh the cons,” LeMire said. “Artificial intelligence is still a little bit on the new side, I'd rather somebody work out the bugs for us.”

To begin opening the Elon community up to the idea of AI software, Crawford said she believes Elon should incorporate more back-end, behind-the-scenes AI courses into its STEM curriculum. Crawford said she thinks this would not only better prepare students for an important part of the world’s future, but could also help ease general fears and confusion if people better understand how the technology works.

“I think the awareness about AI in general is good and how it has negative aspects in terms of privacy and data collection,” Crawford said. “But also, if it can form the next cancer solving drug or can cure cancer without having to go through years and years of research methods, … I think that it can be really powerful.”