Thursday, June 06, 2024

On Certification of Artificial Intelligence Systems

Machine learning systems are today the main examples of Artificial Intelligence in use across a wide variety of areas. From a practical point of view, machine learning has become synonymous with Artificial Intelligence. The spread of machine learning technologies leads to the need to apply them in so-called critical areas: avionics, nuclear energy, automated driving, etc. Traditional software in, for example, avionics undergoes special certification procedures that cannot be directly transferred to machine learning models. The article discusses approaches to the certification of machine learning models. - from our new paper On Certification of Artificial Intelligence Systems

Monday, June 03, 2024

Attacks on Machine Learning Models Based on the PyTorch Framework

This research delves into the cybersecurity implications of training neural networks in cloud-based services. Although neural networks are widely recognized for solving IT problems, the resource-intensive nature of their training has led to increased reliance on cloud services, and this dependence introduces new cybersecurity risks. The study focuses on a novel attack method that exploits neural network weights to discreetly distribute hidden malware. It explores seven embedding methods and four trigger types for malware activation. Additionally, the paper introduces an open-source framework that automates code injection into neural network weight parameters, allowing researchers to investigate and counteract this emerging attack vector. - from our new paper
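The weight-embedding idea can be sketched without the framework itself: payload bytes are hidden in the least significant mantissa bits of float32 weights, which barely perturbs the model's behavior. The functions below are an illustrative LSB scheme in NumPy, not a reproduction of any of the paper's seven embedding methods.

```python
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes, n_bits: int = 8) -> np.ndarray:
    """Hide payload bytes in the n_bits least significant mantissa bits
    of float32 weights (illustrative LSB embedding only)."""
    flat = weights.astype(np.float32).ravel().copy()
    ints = flat.view(np.uint32)                       # reinterpret the same bytes
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    chunks = bits.reshape(-1, n_bits)                 # one chunk per carrier weight
    values = chunks.dot(1 << np.arange(n_bits - 1, -1, -1)).astype(np.uint32)
    assert values.size <= flat.size, "payload too large for this weight tensor"
    mask = np.uint32(~((1 << n_bits) - 1) & 0xFFFFFFFF)
    ints[:values.size] = (ints[:values.size] & mask) | values
    return ints.view(np.float32).reshape(weights.shape)

def extract_payload(weights: np.ndarray, n_bytes: int, n_bits: int = 8) -> bytes:
    """Recover n_bytes of payload from the low mantissa bits."""
    ints = weights.astype(np.float32).ravel().view(np.uint32)
    n_vals = n_bytes * 8 // n_bits
    values = ints[:n_vals] & np.uint32((1 << n_bits) - 1)
    bits = ((values[:, None] >> np.arange(n_bits - 1, -1, -1)) & 1).astype(np.uint8)
    return np.packbits(bits.ravel())[:n_bytes].tobytes()
```

The round trip is exact, while the weights move by at most a few hundred ulps each - far below typical training noise, which is what makes such payloads hard to notice.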

Monday, March 25, 2024

On Real-Time Model Inversion Attacks Detection

The article deals with the detection of adversarial attacks on machine learning models. In the most general case, adversarial attacks are special data changes at one of the stages of the machine learning pipeline, designed either to prevent the operation of the machine learning system or, conversely, to achieve a result desired by the attacker. In addition to the well-known poisoning and evasion attacks, there are also attacks aimed at extracting sensitive information from machine learning models; these include model inversion attacks. Such attacks pose a threat to machine learning as a service (MLaaS): machine learning models accumulate a lot of redundant information during training, and the possibility of this data being revealed when the model is used can become a serious problem.

from our new paper
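In the white-box setting, the basic inversion idea fits in a few lines: starting from a blank input, climb the model's confidence for a target class by gradient ascent until a class-representative input emerges. The sketch below uses a plain softmax classifier in NumPy; the model, step size, and step count are illustrative assumptions, not the paper's detection subject itself.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def invert_class(W, b, target, steps=200, lr=0.5):
    """Gradient-ascent model inversion sketch: recover an input that the
    classifier (logits = W @ x + b) assigns to `target` with high confidence."""
    x = np.zeros(W.shape[1])
    for _ in range(steps):
        p = softmax(W @ x + b)
        # d log p[target] / dx = W[target] - sum_k p_k * W[k]
        grad = W[target] - p @ W
        x += lr * grad
        x = np.clip(x, 0.0, 1.0)  # keep the reconstruction in a valid feature range
    return x
```

The reconstructed x is exactly the kind of "leaked" class prototype that real-time detection of inversion queries tries to flag on the MLaaS side.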

Sunday, March 24, 2024

On the Automated Text Report Generation in Open Transport Data Analysis Platform

According to UN studies, more than 60% of the population will live in cities by 2030. Reports on changes in the transport behavior of city residents are important for modifying existing transport systems and building new ones. In this article, we describe the automated report generation module of an open platform for transport data analysis. We give a classification of transport changes in the city. The work of the module is demonstrated for various types of traffic changes using the example of changes in traffic flows in Moscow during the celebrations in early May and the closure of the Smolenskaya metro station for long-term reconstruction. The possibilities of analysis over time intervals of various lengths and in various localities are shown. We propose ways to further automate the management of transport systems using data collected by the platform, as well as possible directions for further development of the open platform for transport data analysis.

from our new paper

Thursday, December 21, 2023

On Audit and Certification of Machine Learning Systems

Machine learning applications are being used more and more in a wide variety of fields; the general rule today is that in the absence of analytical models, one turns to machine learning. Machine learning has become synonymous with artificial intelligence, and the reverse is also true: artificial intelligence today is machine learning. Sometimes this definition is narrowed to artificial neural networks and deep learning, but this does not change the essence of the matter. At the same time, the spread of machine learning technologies leads to the need to apply them in so-called critical areas, where there are special requirements for confirming the operability and quality of software: avionics, nuclear power, autonomous vehicles, etc. Audit and, of course, certification are the procedures for evaluating machine learning models in such areas. - from our new paper

Friday, December 01, 2023

Certification & audit for machine learning systems

Presentation on audit of machine learning systems. Auditing should be a mandatory procedure for industrial AI systems.

Friday, November 10, 2023

On the analysis of individual data on transport usage

The percentage of the world's urban population is currently more than 50% and will increase according to UN forecasts. Urban infrastructure must develop along with population growth. This article provides an overview of methods for improving a city's transport infrastructure based on data analysis. The article presents methods for reducing harmful emissions, optimizing the operation of taxis and public transport, recognizing transportation modes, and some other tasks. These methods operate on data describing the transport behavior of individual users of the transport network. The sources of such data are smart card validators, GPS sensors, and smartphone accelerometers. The article discusses the advantages and disadvantages of each of these data types and presents alternative ways to obtain them. These methods, along with methods for aggregated data analysis, can become the main part of a single platform that will assist city authorities in improving the transport infrastructure. We propose an architecture for this platform that allows developers to dynamically extend the range of available algorithms and methods.

DOI: 10.14357/20790279230104
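Transportation-mode recognition from accelerometer traces usually starts with simple statistics of the acceleration magnitude. The sketch below is a toy illustration of that first step; the feature set and the 2.0 m/s² threshold are assumptions for the example, not the survey's concrete methods.

```python
import numpy as np

def mode_features(ax, ay, az):
    """Extract basic features from a smartphone accelerometer trace:
    mean and standard deviation of the acceleration magnitude."""
    mag = np.sqrt(np.asarray(ax, float) ** 2
                  + np.asarray(ay, float) ** 2
                  + np.asarray(az, float) ** 2)
    return {"mean": float(mag.mean()), "std": float(mag.std())}

def guess_mode(features):
    # Toy rule: walking shakes the phone far more than riding a vehicle.
    return "walking" if features["std"] > 2.0 else "vehicle"
```

Real systems replace the threshold rule with a trained classifier over many such features (frequency-domain ones included), but the data flow - window the trace, extract features, classify - stays the same.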

Thursday, November 09, 2023

A Survey of Model Inversion Attacks and Countermeasures

This article provides a detailed overview of so-called Model Inversion (MI) attacks. These attacks target Machine-Learning-as-a-Service (MLaaS) platforms: the goal is to use well-prepared adversarial samples to attack target models and extract sensitive information from them, such as items from the dataset on which the ML model was trained or the model's parameters. This kind of attack has become an enormous threat to ML models; it is therefore necessary to research it, understand how it affects ML models, and, based on this knowledge, propose strategies that may improve the robustness of ML models.

DOI: 10.14357/20790279230110

Friday, June 02, 2023

GSMA Open API


GSMA announced the creation of open telephony interfaces for third-party providers.
Lack of third-party support has always been the Achilles' heel of telecom, both wired and wireless.
The need for such APIs is obvious, and attempts to create them have been made before, but so far without results.
Will the new attempt succeed, or is it already too late?

from our presentation on FRUCT-2023

Wednesday, April 19, 2023

Local Services Based on Non-standard Wi-Fi Direct Usage Model

This article discusses a new model for building applied mobile services that use location information and operate in a limited spatial area. As a basis for building such applications, a new interpretation of the standard features of Wi-Fi Direct is used. The Wi-Fi Direct specification, in addition to defining how devices connect, also introduces the concept of a service: one device offers some service functions to another within the framework of a Wi-Fi Direct connection. Each device can both offer several services and send out search requests for other services. In the network proximity architecture, connections are not established; instead, wireless network advertising is used to convey user information. Wi-Fi Direct services can therefore be considered key-value databases that live on mobile devices and can be searched by keys within some local area. It is these stores that underlie the two models of application services presented in the article, both of which exploit the spatial proximity of mobile devices: direct messaging between devices without centralized control, and the hyper-local Internet model.

source
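The key-value view of Wi-Fi Direct services can be modeled in a few lines. The sketch below is a plain-Python simulation of the idea - devices advertise records without a connection, and discovery is a broadcast lookup by key; it is not the Wi-Fi Direct (or Android `WifiP2pManager`) API, and all names in it are illustrative.

```python
class Device:
    """Model of one mobile device: a small key-value store of advertised
    service records, answerable without establishing a connection."""

    def __init__(self, name):
        self.name = name
        self.services = {}  # key -> advertised value

    def advertise(self, key, value):
        self.services[key] = value

    def respond(self, key):
        return self.services.get(key)

def discover(devices_in_range, key):
    """Broadcast a search request for `key` to every device in radio range
    and collect the responses."""
    found = {}
    for d in devices_in_range:
        value = d.respond(key)
        if value is not None:
            found[d.name] = value
    return found
```

Both application models from the article reduce to this primitive: direct messaging stores the message under an agreed key, and the hyper-local Internet stores content reachable only from devices physically nearby.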

Tuesday, May 03, 2022

On a formal verification of machine learning systems

The paper deals with the formal verification of machine learning systems. As systems based on machine learning are increasingly introduced into so-called critical systems (systems with a very high cost of erroneous decisions and actions), the demand for confirming the stability of such systems is growing. How will a machine learning system perform on data that differs from the set on which it was trained? Is it possible to verify, or even prove, that the behavior the system demonstrated on the initial dataset will always hold? There are different ways to try to do this. The article provides an overview of existing approaches to formal verification. All the considered approaches already have practical applications, but the main open question is scaling: how applicable are these approaches to modern networks with millions or even billions of parameters? - from our new paper
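One family of verification approaches, interval bound propagation, can be sketched concretely: push a box of possible inputs [lo, hi] through the network and obtain sound (if loose) bounds on every output. The NumPy sketch below handles a ReLU network given as (W, b) layer pairs; the network in the test is a made-up example.

```python
import numpy as np

def interval_forward(layers, lo, hi):
    """Interval bound propagation through a ReLU MLP.

    For each layer, split W into positive and negative parts so that the
    lower bound takes lo through positive weights and hi through negative
    ones (and vice versa for the upper bound). ReLU is monotone, so it can
    be applied to the bounds directly.
    """
    for W, b in layers:
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        new_lo = Wp @ lo + Wn @ hi + b
        new_hi = Wp @ hi + Wn @ lo + b
        lo, hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
    return lo, hi
```

If the bounds for the correct class stay above the bounds for every other class over the whole input box, robustness on that box is proved - which is exactly where the scaling question from the paper bites, since the boxes grow very loose for deep networks.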

Sunday, May 01, 2022

A Survey of Adversarial Attacks and Defenses for image data on Deep Learning

This article provides a detailed survey of so-called adversarial attacks and defenses: special modifications of the input data of machine learning systems that are designed to make those systems work incorrectly. The article discusses the traditional approach, in which constructing adversarial examples is treated as an optimization problem - the search for the smallest possible modifications of the input data that "deceive" the machine learning system. Classification systems are almost always considered as the targets of adversarial attacks. In practice, this corresponds to so-called critical systems (driverless vehicles, avionics, special applications, etc.), and attacks on such systems are obviously the most dangerous. In general, sensitivity to attacks means a lack of robustness in the machine (deep) learning system, and it is robustness problems that are the main obstacle to the introduction of machine learning in the management of critical systems. - from our new paper
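The optimization view has a one-line classic instance: the Fast Gradient Sign Method, which perturbs the input by a small step in the direction that increases the loss. The sketch below applies it to a logistic classifier in NumPy; the weights and epsilon in the test are illustrative, and real attacks target deep networks the same way via backpropagated gradients.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, b, x, y, eps):
    """FGSM on a logistic classifier p(y=1|x) = sigmoid(w @ x + b).

    The cross-entropy gradient with respect to x is (p - y) * w, so the
    adversarial example is x plus eps times the sign of that gradient.
    """
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    return x + eps * np.sign(grad)
```

Defenses in the survey's taxonomy (adversarial training, certified bounds, input preprocessing) are all attempts to keep the decision from flipping under exactly this kind of bounded perturbation.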

Friday, March 25, 2022

Inaugural Issue JoSCaS

Journal of Smart Cities and Society

All articles are Open Access, and the publisher is considering making the whole year Open Access for free to promote the newly created journal. Another incentive to submit your research.