UBITECH participates in the virtual kick-off meeting, hosted by INTRASOFT (January 11-12, 2021), of the STAR Research and Innovation Action, which officially started on January 1st, 2021. The project is funded by the European Commission under the Horizon 2020 Programme (Grant Agreement No. 956573) and spans the period January 2021 – December 2023. The STAR project constitutes a joint effort of AI and digital manufacturing experts towards enabling the deployment of standards-based, secure, safe, reliable and trusted human-centric AI systems in manufacturing environments. STAR will research and make available novel technologies that enable AI systems to acquire knowledge in order to make timely and safe decisions in dynamic and unpredictable environments. Moreover, STAR will research technologies that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks.
STAR will research and integrate leading-edge AI technologies with wide applicability in manufacturing environments, including: active learning systems that boost safety and accelerate the acquisition of knowledge; simulated reality systems that accelerate Reinforcement Learning (RL) in human-robot collaboration scenarios; explainable AI (XAI) systems that boost the transparency of industrial systems and increase trust in them; human-centric digital twins enabling worker monitoring for safer and more trustworthy production processes; advanced RL techniques for optimal navigation of mobile robots and for the detection of safety zones in industrial plants; and cyber-defence mechanisms against sophisticated poisoning and evasion attacks on deep neural networks operating over industrial data. These technologies will be validated in challenging scenarios on manufacturing lines in the areas of quality management, human-robot collaboration and AI-based agile manufacturing. STAR will eliminate security and safety barriers against deploying sophisticated AI systems in production lines.
Within STAR, UBITECH undertakes the implementation of the Cyber-Defence Mechanisms against Poisoning and Evasion Attacks, researching defence strategies against data poisoning attacks that attempt to train deep neural networks in ways that compromise their correct operation. To this end, a machine learning system will be designed and integrated on top of the provenance system, leveraging XAI techniques in order to raise alerts for suspicious training examples. The system will provide the means for replaying the outcomes of the deep neural network and verifying its explainability following the removal of the suspicious sources.
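To give a flavour of the provenance-based approach described above, the following is a minimal, self-contained sketch, not STAR's actual implementation. It uses a toy nearest-centroid classifier as a stand-in for a deep neural network, and hypothetical source identifiers ("s1", "s2", "s3"): each training example carries a provenance tag, examples far from their own class centroid yield a crude per-source anomaly score, the worst-scoring source is flagged, and the model's outcome on a probe point is "replayed" after that source is removed.

```python
from collections import defaultdict
import math

# Training examples: (features, label, provenance source id).
# Source "s3" is a hypothetical poisoned feed: class-1-region points
# mislabeled as class 0 (a label-flipping poisoning attack).
DATA = [
    ((0.0, 0.0), 0, "s1"), ((0.0, 1.0), 0, "s1"), ((1.0, 0.0), 0, "s1"),
    ((4.0, 4.0), 1, "s2"), ((4.0, 5.0), 1, "s2"), ((5.0, 4.0), 1, "s2"),
    ((4.0, 4.5), 0, "s3"), ((4.5, 4.0), 0, "s3"),
]

def centroids(data):
    """Per-class mean vectors (stand-in for training a model)."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label, _ in data:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {c: (sx / counts[c], sy / counts[c]) for c, (sx, sy) in sums.items()}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def suspicious_source(data):
    """Score each provenance source by how far its examples sit from
    their own class centroid (a crude anomaly signal in the spirit of
    XAI-based inspection), and flag the worst-scoring source."""
    cents = centroids(data)
    scores = defaultdict(list)
    for point, label, src in data:
        scores[src].append(dist(point, cents[label]))
    return max(scores, key=lambda s: sum(scores[s]) / len(scores[s]))

def predict(data, point):
    cents = centroids(data)
    return min(cents, key=lambda c: dist(point, cents[c]))

flagged = suspicious_source(DATA)
probe = (3.0, 3.0)            # sits in the class-1 region
before = predict(DATA, probe)  # poisoned centroid pulls it to class 0
cleaned = [ex for ex in DATA if ex[2] != flagged]
after = predict(cleaned, probe)  # "replay" after removing the flagged source
print(flagged, before, after)    # → s3 0 1
```

Here the replay step is what makes the provenance tags useful: because every example is traceable to a source, the defence can drop an entire suspect source at once and verify that the model's behaviour recovers, rather than hunting for individual poisoned points.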