Sunday, October 26, 2025
FRUCT 39
On behalf of the FRUCT TPC team, I welcome you to submit papers to the IEEE-sponsored 39th FRUCT conference that will be held on April 28-30, 2026. The conference has a low registration fee and allows online participation. The submission deadline is March 2, 2026. The conference is included in the major indexes, e.g., Scopus, WoS, DBLP, etc. Its proceedings are included in SJR, CORE, and AMiner ratings and recognized by several national systems, e.g., JUFO=1 (FI), NSD=1 (NO), and BFI=1 (DK). For further details, please refer to www.fruct.org/cfp39 and submit your papers at www.fruct.org/submit39
Sunday, October 12, 2025
On Image Augmentation
The paper considers methods of natural image augmentation, i.e., methods whose results resemble the natural effects on real-world objects that machine learning models may encounter in industrial applications: the influence of weather conditions, the operating characteristics or malfunctions of device cameras, etc. The paper presents a taxonomy of natural image augmentation methods, covering weather artifacts, camera artifacts, and substitution of the background behind the main object in the image. Existing software libraries for image augmentation are considered in detail, and their shortcomings and limitations are described. The architecture and implementation of a new open library for image augmentation are presented, along with the results of its testing on specialized datasets. - On Natural Image Augmentation to Increase Robustness of Machine Learning Models
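To give a flavor of two of the taxonomy's categories (weather artifacts and camera artifacts), here is a minimal NumPy sketch of such augmentations. The paper's own library is not shown; the function names and parameters below are hypothetical illustrations, not its API.

```python
import numpy as np

def add_fog(image: np.ndarray, intensity: float = 0.5) -> np.ndarray:
    """Approximate a uniform fog layer (a weather artifact) by blending
    an RGB image (H, W, 3, floats in [0, 1]) toward white; a higher
    intensity means denser fog."""
    fog = np.ones_like(image)  # plain white layer
    return (1.0 - intensity) * image + intensity * fog

def add_sensor_noise(image: np.ndarray, sigma: float = 0.02,
                     seed: int = 0) -> np.ndarray:
    """Mimic camera sensor noise (a camera artifact) by adding zero-mean
    Gaussian noise and clipping back to the valid [0, 1] range."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# Example: augment a synthetic 4x4 mid-gray image
img = np.full((4, 4, 3), 0.5)
foggy = add_fog(img, intensity=0.4)   # every pixel moves toward white
noisy = add_sensor_noise(img)         # pixels jitter around 0.5
```

Real libraries in this space (e.g., Albumentations, imgaug) implement far richer versions of these effects; the point here is only that such "natural" augmentations are ordinary array transforms that preserve image shape and value range.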
Friday, April 11, 2025
Large Language Models in Cyberattacks
The article provides an overview of the practice of using large language models (LLMs) in cyberattacks. Artificial intelligence models (machine learning and deep learning) are applied across various fields, with cybersecurity being no exception. One aspect of this usage is offensive artificial intelligence, specifically in relation to LLMs. Generative models, including LLMs, have been utilized in cybersecurity for some time, primarily for generating adversarial attacks on machine learning models. The analysis focuses on how LLMs, such as ChatGPT, can be exploited by malicious actors to automate the creation of phishing emails and malware, significantly simplifying and accelerating the process of conducting cyberattacks. Key aspects of LLM usage are examined, including text generation for social engineering attacks and the creation of malicious code. The article is aimed at cybersecurity professionals, researchers, and LLM developers, providing them with insights into the risks associated with the malicious use of these technologies and recommendations for preventing their exploitation as cyber weapons. The research emphasizes the importance of recognizing potential threats and the need for active countermeasures against automated cyberattacks. - from our new paper