The eighth edition of the ICMLT International Conference on Machine Learning Technologies took place in Stockholm in mid-March. The conference aims to motivate the next generation of researchers to deepen their interest in machine learning; this year's edition discussed the latest technological advances and addressed recent trends in machine learning research and applications.
For Martin Nocker, a project researcher at the Department of Digital Business & Software Engineering, the ICMLT offered the opportunity to present two of his latest publications.
His first talk was titled: HE-MAN - Homomorphically Encrypted MAchine Learning with oNnx Models.
Machine Learning (ML) algorithms are critical to the success of many products today, but people are understandably reluctant to share sensitive data with ML service providers. Fully Homomorphic Encryption (FHE) is a promising technique that enables ML applications on sensitive data without compromising privacy or revealing the underlying model. However, existing FHE implementations are not user-friendly. This is where our implementation "HE-MAN" comes in. HE-MAN is an open-source toolset that enables machine learning on sensitive data by protecting it from unauthorized access using FHE. At the same time, the cryptographic details are abstracted away, so users need no FHE expertise. The evaluation shows that HE-MAN achieves accuracies close to those of plaintext computations, at the cost of increased runtime.
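HE-MAN itself builds on full FHE machinery, but the core idea it relies on, computing on data while it stays encrypted, can be illustrated with a much simpler additively homomorphic scheme. The sketch below is a toy Paillier implementation (tiny fixed primes, deliberately insecure; all function names are illustrative and not HE-MAN's API): a server can add two encrypted values without ever seeing them in the clear.

```python
# Toy Paillier additively homomorphic encryption.
# WARNING: tiny fixed primes, illustration only, not secure.
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

def keygen():
    p, q = 1789, 1861            # small fixed primes (toy only)
    n = p * q
    g = n + 1                    # standard simple choice of generator
    lam = lcm(p - 1, q - 1)
    # mu = (L(g^lam mod n^2))^-1 mod n, where L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pk, m):
    n, g = pk
    while True:                  # random r coprime to n
        r = random.randrange(1, n)
        if gcd(r, n) == 1:
            break
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n

pk, sk = keygen()
c1, c2 = encrypt(pk, 17), encrypt(pk, 25)
# Multiplying ciphertexts adds the underlying plaintexts:
c_sum = (c1 * c2) % (pk[0] ** 2)
assert decrypt(pk, sk, c_sum) == 42
```

Fully homomorphic schemes extend this idea to both addition and multiplication, which is what makes evaluating whole neural networks on ciphertexts possible.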
Original title of the second talk: On the Effect of Adversarial Training Against Invariance-based Adversarial Examples
So-called adversarial examples trick machine learning classifiers. They fall into two categories: sensitivity-based and invariance-based adversarial examples. Sensitivity-based examples add perturbations to images that are imperceptible to humans yet "fool" the classifier into changing its output. Invariance-based examples do the opposite: they change the image enough to alter a human's judgment while leaving the model's prediction unchanged. Adversarial training can make classifiers more robust against both types. In this paper, we investigate the previously unexplored effects of adversarial training with invariance-based adversarial examples.
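The paper's experiments involve image classifiers, but the mechanics of a sensitivity-based attack can be sketched on a toy linear model. The snippet below is a minimal illustration, not the paper's setup: it applies the well-known Fast Gradient Sign Method (FGSM) to a hypothetical logistic classifier, where a small signed perturbation of the input sharply lowers the model's confidence, even though for a real image such a change would be nearly imperceptible.

```python
# FGSM sensitivity-based attack on a toy logistic classifier (NumPy).
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=20)          # stand-in for "trained" weights (toy)
b = 0.0

def predict_prob(x):
    """Probability of class 1 under a logistic model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A clean input the model assigns to class 1 with high confidence.
x = w / np.linalg.norm(w)
y = 1

# FGSM: step by eps in the sign of the input gradient of the loss.
# For logistic loss, d(loss)/dx = (p - y) * w.
eps = 0.3
grad = (predict_prob(x) - y) * w
x_adv = x + eps * np.sign(grad)

# The perturbation is bounded by eps per pixel, yet confidence drops.
print(predict_prob(x), predict_prob(x_adv))
```

An invariance-based example would be constructed the other way around: modify the input substantially (enough to change a human label) along directions the model is insensitive to, so its prediction stays fixed.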
Martin Nocker emphasizes, "Conferences such as the ICMLT give us scientists the opportunity to present our research results, to exchange ideas, and to explore future collaborations with various stakeholders, be they industry practitioners, academics, or other professionals from all over the world. That is why I appreciate these event formats and am excited to participate time and time again!"