As the European Union finalises the third draft of its General-Purpose AI Code of Practice, ePotentia’s scientific AI experts have been actively contributing to feedback sessions, highlighting crucial considerations for projects like AID4GREENEST.

While the second draft placed particular emphasis on mitigating systemic risks from AI systems in domains such as nuclear, chemical, and biological sciences, a delicate balance is required to maintain scientific utility.

The challenge lies not in removing obviously harmful content, such as weapon designs or illicit synthesis methods, but in preserving the legitimate scientific knowledge essential for research and innovation. Regulatory attention often centres on explicit harmful content (such as chemical weapon formulations), yet the greater challenge lies in addressing general scientific knowledge that could aid misuse indirectly while remaining indispensable to legitimate work.

For instance, restricting an AI system's ability to troubleshoot precipitation issues or optimise reaction kinetics – measures intended to prevent illicit synthesis – would simultaneously undermine legitimate materials research, hampering efforts to solve critical challenges in green steel development such as controlling inclusions during continuous casting or optimising slag chemistry for impurity removal.
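
To make this concrete, the minimal sketch below illustrates the kind of generic reaction-kinetics calculation at stake: scanning a temperature window with the Arrhenius equation to compare a desired reaction against a competing side reaction. It is an illustrative example, not part of AID4GREENEST's tooling; the pre-exponential factors, activation energies, and temperature range are placeholder values chosen only to show the calculation, which is the same routine reasoning that underpins slag chemistry optimisation and countless other legitimate workflows.

```python
import math

# Universal gas constant in J/(mol*K)
R = 8.314

def arrhenius_rate(a_factor, activation_energy, temperature_k):
    """Arrhenius rate constant k = A * exp(-Ea / (R * T))."""
    return a_factor * math.exp(-activation_energy / (R * temperature_k))

# Placeholder kinetic parameters for a desired reaction (e.g. impurity
# transfer into slag) and a competing side reaction. These numbers are
# illustrative only, not measured values for any real system.
desired = {"A": 1.2e7, "Ea": 1.5e5}  # pre-exponential factor, activation energy (J/mol)
side    = {"A": 4.0e5, "Ea": 9.0e4}

# Scan a temperature window and report the selectivity (ratio of the
# desired rate constant to the side-reaction rate constant).
for T in range(1700, 1901, 50):  # Kelvin, roughly a steelmaking range
    k_desired = arrhenius_rate(desired["A"], desired["Ea"], T)
    k_side = arrhenius_rate(side["A"], side["Ea"], T)
    print(f"T = {T} K: k_desired = {k_desired:.3e}, "
          f"selectivity = {k_desired / k_side:.2f}")
```

Nothing in this calculation is specific to harmful chemistry; blanket restrictions on such general-purpose reasoning would remove it from legitimate and illegitimate uses alike.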

This illustrates how overly broad restrictions on general scientific problem-solving could inadvertently impede sustainable innovation, and it points to a complex regulatory challenge: how to address potential systemic risks without compromising the scientific capabilities that make AI valuable in the first place.

As regulations evolve, ePotentia remains committed to developing AI systems that are both responsible and scientifically robust for advancing green technologies.