Thursday, June 06, 2024
On Certification of Artificial Intelligence Systems
Machine learning systems are today the main examples of the use of Artificial Intelligence in a wide variety of areas. In practice, machine learning has become almost synonymous with Artificial Intelligence. The spread of machine learning technologies creates a need to apply them in so-called critical areas: avionics, nuclear energy, autonomous driving, etc. Traditional software in such areas, for example in avionics, undergoes special certification procedures that cannot be directly transferred to machine learning models. The article discusses approaches to the certification of machine learning models. - from our new paper On Certification of Artificial Intelligence Systems
Monday, June 03, 2024
Attacks on Machine Learning Models Based on the PyTorch Framework
This research examines the cybersecurity implications of training neural networks in cloud-based services. Neural networks are widely used to solve IT problems, but their resource-intensive training has led to a growing reliance on cloud services, and this dependence introduces new cybersecurity risks. The study focuses on a novel attack method that exploits neural network weights to covertly distribute hidden malware.
It explores seven embedding methods and four trigger types for malware activation. Additionally, the paper introduces an open-source framework that automates code injection into neural network weight parameters, allowing researchers to investigate and counteract this emerging attack vector. - from our new paper
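The core idea of hiding a payload in weight parameters can be illustrated with a minimal sketch. This is our own simplified illustration, not one of the seven embedding methods from the paper: it overwrites the least-significant mantissa byte of each float32 weight, which perturbs the model only negligibly while carrying one payload byte per weight.

```python
import struct

def embed_payload(weights, payload):
    """Hide payload bytes in the least-significant byte of float32 weights.

    Overwriting the low mantissa byte perturbs each weight by a tiny
    fraction of its magnitude, so model accuracy is barely affected.
    """
    assert len(payload) <= len(weights), "not enough weights to hide payload"
    stego = list(weights)
    for i, byte in enumerate(payload):
        raw = bytearray(struct.pack('<f', stego[i]))
        raw[0] = byte  # little-endian: raw[0] is the low mantissa byte
        stego[i] = struct.unpack('<f', bytes(raw))[0]
    return stego

def extract_payload(weights, length):
    """Recover the hidden bytes from the first `length` weights."""
    return bytes(struct.pack('<f', w)[0] for w in weights[:length])

secret = b"payload"
weights = [0.1234, -0.5678, 0.9012, 0.3456, -0.7890, 0.2345, 0.6789]
stego_weights = embed_payload(weights, secret)
assert extract_payload(stego_weights, len(secret)) == secret
# Each weight changed only in its last mantissa bits:
assert all(abs(a - b) < 1e-3 for a, b in zip(weights, stego_weights))
```

In a real attack the bytes would be spread across millions of parameters in a saved model file, which is why detection frameworks of the kind the paper describes are needed.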
Monday, March 25, 2024
On Real-Time Model Inversion Attacks Detection
The article addresses the detection of adversarial attacks on machine learning models. In the most general case, an adversarial attack is a deliberate modification of data at some stage of the machine learning pipeline, designed either to disrupt the operation of the machine learning system or, conversely, to achieve a result desired by the attacker. In addition to the well-known poisoning and evasion attacks, there are also attacks aimed at extracting sensitive information from machine learning models, including model inversion attacks. Attacks of this kind pose a threat to machine learning as a service (MLaaS): machine learning models accumulate a great deal of redundant information during training, and the possibility of this data being revealed when the model is used can become a serious problem.
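The inversion idea can be sketched in a toy form (this is our illustration of the attack itself, not the paper's detection method, and the model and numbers are invented): an attacker with only query access to a classifier performs finite-difference gradient ascent on the input until the model reports high confidence, recovering something like a prototypical member of the target class.

```python
import math

# Hypothetical "trained" logistic-regression model the attacker can only query.
WEIGHTS = [0.8, -0.3, 0.5, 0.1]
BIAS = -0.2

def model_confidence(x):
    """Black-box score: probability that x belongs to the target class."""
    z = sum(w * xi for w, xi in zip(WEIGHTS, x)) + BIAS
    return 1.0 / (1.0 + math.exp(-z))

def invert_model(steps=200, lr=0.5, eps=1e-4):
    """Gradient-ascent inversion using only numerical queries to the model."""
    x = [0.0] * len(WEIGHTS)  # start from an uninformative input
    for _ in range(steps):
        grad = []
        for i in range(len(x)):
            bumped = list(x)
            bumped[i] += eps
            # finite-difference gradient: no access to model internals needed
            grad.append((model_confidence(bumped) - model_confidence(x)) / eps)
        x = [xi + lr * g for xi, g in zip(x, grad)]
    return x

recovered = invert_model()
# The reconstructed input aligns with the class weights, i.e. it resembles
# a prototypical member of the target class the model was trained on.
```

Note that the attack needs nothing but repeated confidence queries, which is exactly the interface an MLaaS endpoint exposes; a real-time detector would have to flag this kind of correlated query pattern.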
from our new paper
Sunday, March 24, 2024
On the Automated Text Report Generation in Open Transport Data Analysis Platform
According to UN studies, more than 60% of the population will live in cities by 2030. Reports on changes in the transport behavior of city residents are important for redesigning existing transport systems and building new ones. In this article, we describe the automated report generation module of an open platform for transport data analysis. We present a classification of transport changes in the city. The module's operation is demonstrated on various types of traffic changes, using the analysis of changes in traffic flows in Moscow during the celebrations in early May and the closure of the Smolenskaya metro station for long-term reconstruction. Analysis capabilities over time intervals of varying length and in different locations are shown. We propose ways to further automate the management of transport systems using the data collected by the platform, as well as possible directions for the further development of the open platform for transport data analysis.
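A report-generation module of this kind can be sketched in miniature. The function names and the threshold below are our own assumptions for illustration, not the platform's actual API: a detected change in a traffic-flow metric is classified and rendered as a report sentence.

```python
def classify_change(before, after, threshold=0.15):
    """Label the relative change in a traffic-flow metric between two periods."""
    delta = (after - before) / before
    if delta > threshold:
        return "significant increase"
    if delta < -threshold:
        return "significant decrease"
    return "no significant change"

def report_line(location, before, after):
    """Render one classified change as a sentence of the text report."""
    label = classify_change(before, after)
    return (f"{location}: average flow changed from {before} to {after} "
            f"vehicles/hour ({label}).")

print(report_line("Smolenskaya area", 1200, 800))
```

Stringing such sentences together over the relevant locations and time intervals yields the kind of automated text report the article describes.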
from our new paper