Dr Reza Shokri

As artificial intelligence (AI) becomes increasingly pervasive in our daily lives, it has inevitably raised significant concerns, chief among them data privacy and security.

To address these challenges, it has become essential to develop ethical and trustworthy AI systems that can safeguard sensitive information and ensure robust decision-making processes. These systems must be designed to prevent data breaches, protect user privacy, and withstand cyber threats, thereby fostering a safe and reliable technological environment. As AI is integrated into more aspects of society, rigorous research and development in ethical AI and cybersecurity becomes all the more pressing, ensuring that these advancements benefit us while minimising risks.

Leading the way in this critical area of research and developing tech for good is the Data Privacy and Trustworthy Machine Learning research laboratory at NUS. Headed by NUS Presidential Young Professor Dr Reza Shokri, this lab is pioneering efforts to make AI more secure and reliable.

The Risks of AI and Cybersecurity

“Advanced AI has enabled many new technologies. However, there are many risks in using machine learning on sensitive data and in critical decision-making processes. AI systems need to be trained on a large amount of data to learn how to make accurate decisions in a given task (for example, automatically translating documents),” Dr Shokri shared.

“The question is: Can we trust AI algorithms to have access to our personal data (for example, when they are used in medical and financial applications, or in our smartphones)? What if they leak our sensitive information to untrusted entities? And can we trust AI systems to be robust against hackers and adversarial manipulations, for example, when they are deployed across the many applications of a smart city?” explained Dr Shokri.

“We know that computer systems can be very vulnerable to attacks. AI systems are no exception. So, although we want to take advantage of the power of AI algorithms, we need to make sure they are safe and trustworthy,” he continued.

The Transformative Impact of NUS’ Data Privacy and Trustworthy AI Lab

The lab develops theoretical foundations of data privacy in AI and designs algorithms for enabling machine learning across organisations. Additionally, it focuses on creating ethical and fair AI systems, striving to make AI technologies more transparent and accountable. These efforts ensure that AI applications are not only effective but also trustworthy, reducing risks related to data breaches and biased decision-making and fostering greater public confidence in AI.
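One standard approach to "machine learning across organisations" is federated averaging, in which each organisation trains on its own data locally and only model weights, never raw data, are shared with a coordinating server. The minimal sketch below is purely illustrative: the linear model, the synthetic datasets, and the three-organisation setup are assumptions for demonstration, not the lab's actual algorithms.

```python
import numpy as np

# Illustrative sketch of federated averaging (FedAvg): each organisation
# runs local gradient descent on its private data, and the server only
# averages the resulting model weights. Raw data never leaves an org.

rng = np.random.default_rng(42)
d = 3
true_w = np.array([1.0, -2.0, 0.5])  # ground-truth model (synthetic)

def make_local_data():
    """Synthetic private dataset held by one organisation."""
    X = rng.normal(size=(100, d))
    y = X @ true_w + 0.1 * rng.normal(size=100)
    return X, y

org_data = [make_local_data() for _ in range(3)]

def local_update(w, X, y, lr=0.05, steps=10):
    """One round of local least-squares gradient descent."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Federated rounds: each org trains locally, the server averages weights.
w_global = np.zeros(d)
for _ in range(20):
    local_ws = [local_update(w_global, X, y) for X, y in org_data]
    w_global = np.mean(local_ws, axis=0)

print("recovered weights:", np.round(w_global, 2))
```

The key design point the sketch captures is the communication pattern: only `w_global` and the locally updated weights cross organisational boundaries.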

The lab also develops tools for auditing the privacy of personal data in AI algorithms. One such tool, the ‘Privacy Meter’, is used to quantitatively evaluate the privacy risks of AI algorithms across a wide range of systems.
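One way to quantify such privacy risk is a membership inference test: an attacker guesses whether a given record was in the training set, typically by exploiting the lower loss a model assigns to data it memorised. The sketch below is a simplified, self-contained illustration of that idea, not the Privacy Meter tool itself; the tiny logistic model and synthetic data are assumptions for demonstration.

```python
import numpy as np

# Illustrative loss-threshold membership inference test. Attack accuracy
# above 50% on members vs non-members signals privacy leakage.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Members (training set) and non-members (held-out set), same distribution.
n, d = 200, 5
X_train = rng.normal(size=(n, d))
y_train = (X_train[:, 0] > 0).astype(float)
X_test = rng.normal(size=(n, d))
y_test = (X_test[:, 0] > 0).astype(float)

# Train a small logistic-regression model by gradient descent.
w = np.zeros(d)
for _ in range(500):
    p = sigmoid(X_train @ w)
    w -= 0.5 * X_train.T @ (p - y_train) / n

def per_example_loss(X, y):
    """Cross-entropy loss of the trained model on each example."""
    p = np.clip(sigmoid(X @ w), 1e-9, 1 - 1e-9)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Attack: guess "member" whenever the loss falls below a threshold.
losses = np.concatenate([per_example_loss(X_train, y_train),
                         per_example_loss(X_test, y_test)])
is_member = np.concatenate([np.ones(n), np.zeros(n)])
guess = (losses < np.median(losses)).astype(float)
attack_accuracy = (guess == is_member).mean()
print(f"membership-inference accuracy: {attack_accuracy:.2f}")
```

The further this accuracy rises above 0.5, the more the model's behaviour reveals about which individuals were in its training data, which is the kind of leakage a privacy audit aims to measure.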

The lab is also very active in educating students and researchers on ethical and responsible computing, helping them design systems that are safe, ethical and privacy-preserving. By instilling these principles in the next generation of technologists, the lab ensures that future AI developments will prioritise ethical considerations, thereby building public trust and advancing the field responsibly.

Essential Role of Donor Support in Advancing AI Research

This groundbreaking research is made possible with the support of donors who fund research initiatives like these. Their contributions enable the development of solutions to critical challenges such as AI privacy problems and machine learning and cybersecurity threats, as well as the creation of ethical AI.

Recognising the importance of this work, the lab has received support from donors from leading tech companies, such as Intel, Meta (Facebook), Google, and VMware.

Google also supports PhD Fellowships to empower outstanding graduate students at NUS doing exceptional and innovative research in areas relevant to computer science and related fields.

“We are very grateful to donors for helping us to achieve our goals,” expressed Dr Shokri.

Investing in the Future of AI and Cybersecurity

We are only at the beginning of AI development, and the vast potential it holds makes research in ethical AI and cybersecurity crucial. The opportunities are limitless, but so are the challenges. Addressing these challenges requires significant investment in research and development to create AI systems that are not only powerful but also secure.

By supporting these efforts, you can help ensure that AI technology is developed responsibly, with a focus on safeguarding privacy, preventing data breaches, and protecting against cyber threats.

If you wish to be part of this groundbreaking work, consider making a gift to NUS. Your contribution will support a community dedicated to shaping a brighter, more secure future through innovation. Join us in this transformative journey and help create a positive impact for generations to come!