Artificial intelligence (AI) relies heavily on vast amounts of data for training. That dependence on data, however, also makes AI vulnerable to attack: cybercriminals can introduce false or misleading information into the training data, causing the model to behave erratically.
The implications of such attacks go beyond minor errors in chatbots or image generators. They could lead to disastrous outcomes, such as self-driving cars ignoring stop lights or disruptions to the power grid.
To address this threat, a team of cybersecurity researchers has developed a new approach that combines federated learning and blockchain technology. In federated learning, each device trains a small model on its own local data and shares only model updates with a central server. While this preserves privacy, it is susceptible to data poisoning attacks.
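To make the mechanism concrete, here is a minimal sketch of federated averaging in Python, using only NumPy. The function names (local_update, federated_average), the tiny linear model, and all parameters are illustrative assumptions, not the researchers' actual system.

```python
import numpy as np

# Minimal federated-averaging sketch (illustrative only).
# Each "device" trains a tiny linear model on its own data and sends
# only its updated weights to the server; raw data never leaves the device.

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's local training: gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_average(client_weights):
    """Server step: average the weight vectors received from all devices."""
    return np.mean(client_weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
global_w = np.zeros(2)

# Simulate three devices, each holding its own private data.
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

for _ in range(10):
    updates = [local_update(global_w, X, y) for X, y in devices]
    global_w = federated_average(updates)

print("learned weights:", global_w)  # approaches [2.0, -1.0]
```

The weakness this illustrates is that the server averages whatever it receives: a single device submitting fabricated updates can pull the shared model off course, which is exactly the poisoning risk the researchers set out to address.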
By integrating blockchain technology, which creates a tamper-proof ledger of contributions, the researchers were able to detect and remove dishonest data before it could compromise the training process. This approach has broad applications, from critical infrastructure resilience to transportation cybersecurity.
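The team's blockchain-based validation is not spelled out here, so the sketch below uses a generic stand-in for the idea: before averaging, the server screens incoming updates against the coordinate-wise median and discards outliers. The function name, threshold, and example values are hypothetical and simply convey how dishonest contributions can be rejected before they reach the shared model.

```python
import numpy as np

# Illustrative only: a simple outlier check on incoming model updates,
# standing in for the researchers' blockchain-based validation step.

def filter_poisoned_updates(updates, threshold=2.0):
    """Keep only updates close to the coordinate-wise median; average the rest."""
    updates = np.asarray(updates)
    median = np.median(updates, axis=0)
    dists = np.linalg.norm(updates - median, axis=1)
    cutoff = threshold * np.median(dists) + 1e-12
    kept = [u for u, d in zip(updates, dists) if d <= cutoff]
    return np.mean(kept, axis=0), len(updates) - len(kept)

# Four honest updates near the true weights, plus one fabricated update.
honest = [np.array([2.0, -1.0]) + np.random.default_rng(i).normal(scale=0.05, size=2)
          for i in range(4)]
poisoned = [np.array([-10.0, 10.0])]

aggregate, rejected = filter_poisoned_updates(honest + poisoned)
print("aggregate:", aggregate, "| rejected updates:", rejected)
```

In this toy example the fabricated update is discarded and the aggregate stays close to the honest weights; the researchers' contribution is to make that screening auditable by recording contributions on a tamper-proof ledger rather than trusting the server alone.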
Moving forward, the team plans to enhance data and system protection by incorporating quantum encryption. Their ultimate goal is to ensure the safety and security of America's transportation infrastructure while harnessing the power of advanced AI. This research holds promise for developing secure AI algorithms that protect critical infrastructure.