Page 22 - TIME NEWSLETTER - SUMMER 2025

DANGERS OF AI IN CYBERSECURITY










THE HIDDEN RISKS OF ARTIFICIAL INTELLIGENCE IN SCHOOL CYBERSECURITY
Recently, artificial intelligence has started playing a larger role in how school districts defend against digital threats. From detecting suspicious activity on the network to helping filter phishing emails, AI tools can act like digital security analysts working behind the scenes. But, like any powerful technology, these tools bring their own set of risks, especially when they are used nefariously or fall into the wrong hands.

For school districts, understanding and assessing these risks is more than a technical issue. It's about protecting student and employee data, safeguarding teaching resources, and ensuring business continuity.


      AI: A Helpful Force with a Dark Side
Let's start with some good news: AI has real advantages! It can process huge amounts of data very quickly, detect unusual and suspicious behavior, and even stop cyberattacks before they spread. Many school district cybersecurity systems already use AI-powered tools in their content filters, email security platforms, and network monitoring solutions.

But as with The Force, there's an accompanying "dark side." The same AI capabilities that help us defend also give cybercriminals powerful new weapons.


      Expanding Attacker Toolsets
Cyberattacks on school districts are on the rise, and AI is making them faster and harder to detect. Data shows that breakout time (the time it takes an attacker to move laterally through a network after the initial compromise) has dropped sharply over the past five years, from around four hours in 2020 to just 48 minutes in 2025.

Hackers are using AI to create emails that sound exactly like real people, even mimicking a specific person's tone or writing style. These often trick staff into clicking unsafe links or sharing sensitive information, leading to a possible compromise.
Fake audio and video clips are being created that mimic higher-level employees, such as directors, chief executives, or even superintendents, asking for emergency action. Even real-time video and voice alteration is being seen in the wild, so that person on the other side of your Google


