# Artificially Inflating Cybersecurity
Despite artificial intelligence (AI) being around for decades, it has become the buzzword du jour for everything from design and content creation to business analytics and even cybersecurity, with many companies running headfirst into the quagmire of information, solutions, and strategies with little history or experience in the benefits and pitfalls of its use. Even sophisticated companies are struggling to determine what their strategy should be and how best to deploy it. The hype often confuses people about what artificial intelligence actually is compared to data analytics. AI is about learning from experience, reproducing cognitive abilities to automate tasks autonomously; it can use data analytics to drive outcomes, but it is not dependent on the static processes data analytics uses to draw conclusions about a particular data set.
A recent example of where the hype can be detrimental has been the use and integration of Internet of Things (IoT) devices which, when thoughtfully deployed, provide many advantages and opportunities to the user. These systems can also create havoc when unsophisticated or unwitting users go down the rabbit hole and cobble together systems from various manufacturers of questionable origin, creating wide-open virtual back doors for bad actors or inviting inadvertent, and often catastrophic, mistakes by well-meaning employees. For a while, it seemed like every manufacturer of technology was slapping an IoT label on its products to generate sales. This led to several products that had no security, left vulnerable back doors open, or broadcast data across public networks, creating opportunities for mischievous actors to cause problems. It takes me back to the 1983 movie WarGames, in which characters played by Matthew Broderick and Ally Sheedy mistakenly hack into the United States military's supercomputer, almost causing a nuclear war while thinking they are playing a video game.
While just a movie, it highlights some of the current challenges with AI and cybersecurity: no system is 100% secure, and there will always be two sides to the coin, good and bad. Artificial intelligence certainly has benefits, and these should be explored to fit a company's unique circumstances with advice and direction from qualified players who can align the mission, vision, and actions appropriately in the most cost-effective way. Unfortunately, reading on the Internet or attending a seminar is simply not enough, given the complex challenges of integrating and applying these strategies and tools, and not having qualified staff can hamper effectiveness. Many companies don't realize how vast the attack surface of today's networks is. Often, there are hundreds to thousands of devices that require attention, including personal devices that may find themselves behind a firewall, opening a vulnerability that could be masked. This, along with the massive amount of data being moved around, creates a mind-boggling number of attack vectors that has moved beyond the ability of humans alone to manage.
The biggest advancement in AI that most companies can take advantage of is the growth of machine learning, much like in WarGames, where the supercomputer could rapidly test nuclear strike scenarios to determine the best outcomes and either communicate the best course of action to humans or act on its own. Machine learning allows a system to test multiple events within given parameters at lightning speed to find the best possible outcome, store all of those scenarios, and recall them for comparative analysis against new variables in a given situation, thereby improving both the speed and accuracy of its responses. This improvement, coupled with human training, can sharpen the analysis of specific activities, weeding out false positives and false negatives and giving the security team the best opportunity to make the final judgment call.
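To make the idea concrete, here is a toy sketch, not any vendor's product, of the kind of anomaly scoring described above: a stored baseline of past observations is recalled to score new events, and the security team tunes a threshold to trade off false positives against false negatives. The failed-login scenario, the numbers, and the threshold value are all illustrative assumptions.

```python
from statistics import mean, stdev

# Hypothetical stored history: hourly counts of failed logins on one
# network segment. In practice this "memory" of past scenarios would be
# far larger and continuously updated.
baseline = [3, 5, 4, 6, 5, 4, 3, 5, 6, 4, 5, 4]

def anomaly_score(value, history):
    """Z-score of a new observation against the stored history."""
    mu, sigma = mean(history), stdev(history)
    return (value - mu) / sigma if sigma else 0.0

# A threshold the security team tunes with human feedback: raising it
# trims false positives, lowering it trims false negatives.
THRESHOLD = 3.0

def is_anomalous(value, history, threshold=THRESHOLD):
    return anomaly_score(value, history) > threshold

print(is_anomalous(40, baseline))  # burst of 40 failed logins -> True
print(is_anomalous(5, baseline))   # a normal hour -> False
```

Real deployments use far richer models than a z-score, but the shape is the same: compare new events against recalled history, score them, and leave the final judgment call to a human-tuned cutoff.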
Not all is sunshine and roses, either, as bad actors are also harnessing the power of AI for their own benefit at speeds humans can't keep up with. Hackers, for example, can use the same machine learning algorithms to target specific data and train their attacks to avoid anticipated warning flags. They can also use the power of neural networks and deep learning to develop mutating zero-day threats that evade detection, particularly if the threat arrives on a transient device or through sophisticated phishing software also developed with AI. AI can likewise be used to overload a system's defenses by flooding it with massive numbers of potential breaches, much like the WarGames supercomputer running nuclear war simulations until it finds the vulnerability and quickly takes advantage of it. Even a company's own AI solutions can be corrupted if their data sets are infiltrated undetected, often making it impossible to recover the correct data sets and opening a giant hole in any defenses. Lastly, bias and discrimination in AI decision-making open further vulnerabilities that can be exploited from various directions. These biases can also lead to false positives and discriminatory practices against employees or customers, often with significant consequences.
Decisions about which AI tools to use, including custom-built tools versus commercially available options, are harder than they seem. Some organizations have placed moratoriums on commercially available products like ChatGPT, DALL·E 2, Midjourney, Google Bard, and others out of fear of private data becoming public, in addition to security concerns about these tools accessing corporate databases without consent. This is coupled with the risks of using AI tools to generate instruments of service, such as documents, drawings, legal briefs, and more, which can run a company afoul of its obligations and open it to litigation or other business disruptions. Deep fakes are also a reality now, leveraging AI to generate misinformation and vulnerabilities that can penetrate a corporate network or cause reputational harm to the corporation if misused or misinterpreted.
Several key factors need to be considered in moving into what seems like the Wild West, and having the right voices at the table when deciding which strategies to deploy, and, more importantly, why you are deploying them, is critical to success. This is not a world for a Jurassic Park "spared no expense" moment. It is a "spend what you can afford on the best quality tools in a thoughtful manner" moment. Pay attention not only to the quality of the data set you will train your model on but also to the problem you are trying to solve, so you select the right model. Consider the hardware that will support the process, with the necessary resources built in for current use and some expansion should the model develop. That scalability will be critical as new models become relevant and technology evolves. Inventory on a regular basis what AI tools the team has deployed and assess their relevance and effectiveness. Are you getting what you expected, or does the model need to be tweaked or abandoned altogether? Pay particular attention to the security, privacy, and ethical implications of any solution you decide on. Reducing bias and mitigating threat potential can save the company time and resources in the long run.
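The regular inventory habit described above can be as simple as a spreadsheet, but even a few lines of script make the discipline concrete: record each deployed tool, when it was last reviewed, and whether it is still delivering value, then flag anything overdue or underperforming. The tool names, fields, and review interval below are illustrative assumptions, not a standard.

```python
from datetime import date

# Hypothetical inventory records; the names and fields are examples only.
tools = [
    {"name": "phishing-triage-model", "owner": "SecOps",
     "last_review": date(2023, 1, 15), "effective": True},
    {"name": "log-summarizer", "owner": "IT",
     "last_review": date(2022, 6, 1), "effective": False},
]

# Assumed review cadence; pick whatever your governance process requires.
REVIEW_INTERVAL_DAYS = 180

def needs_attention(tool, today):
    """Flag tools overdue for review or no longer delivering value."""
    overdue = (today - tool["last_review"]).days > REVIEW_INTERVAL_DAYS
    return overdue or not tool["effective"]

today = date(2023, 3, 1)
flagged = [t["name"] for t in tools if needs_attention(t, today)]
print(flagged)  # -> ['log-summarizer']
```

The point is not the code but the cadence: a flagged tool prompts the "tweak or abandon" conversation before it quietly becomes a liability.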
Lastly, don't forget to budget for maintenance and operations. It is great to have a shiny new car, but if you can't get the oil changed or don't know how to drive it, it is just a shiny object in the driveway. Expect systems to require both, and expect to need the right personnel to manage them. Now, let's go play a game of chess.
Raymond Kent is an award-winning, internationally recognized technology consultant working in the architecture and engineering space who frequently works with top-tier clients across multiple sectors advising them on a multitude of topics including AI, IoT, sustainability, augmented and virtual reality, and more.