international and multidisciplinary environment at Prof. Pfeifer's lab was so inspiring," he recalls, "and continues to impact my approach to science today."

Testing AI products before release

Today, Gabriel Gomez' team stress-tests large language models (LLMs), as well as image, video and audio generating AI models, before public release. To do this, the team maintains a list of sensitive topics and systematically creates prompt variations to probe a model's weaknesses. Techniques include rephrasing questions, altering single characters to bypass filters, or even turning prompts into poems. This approach uncovers vulnerabilities in the model such as misinformation, bias, hallucination, or failure to handle borderline cases, and includes extensive work ensuring that models do not generate violent content or depictions of violence. The team then provides actionable feedback to the developers before the product is deployed.

Takeaways
● Large language models are powerful tools. But you cannot blindly trust them.
● Generative AI is here to stay, and children and vulnerable groups are going to use it. We need to insert ethics and responsibility into the models.
● Design, Development and Deployment: Responsible AI considerations need to be integrated every step of the way.

Oec. Juni 2025

The importance of Responsible AI

Gomez' commitment to Responsible AI stems from firsthand experience of the limitations and dangers of generative technologies. Biased training data can lead to algorithmic discrimination. For example, facial recognition systems often fail with people of color, and voice recognition struggles with non-native accents. For years, Gomez has experimented with voice cloning. When he started, it took weeks and enormous computing power to clone a person's voice.
Today, with just six seconds of audio, anyone's voice can be replicated. This has significant implications for privacy and security. "We have all heard of scam calls with obviously automated voices. They don't pose much of a risk. However, today's technology allows scammers to generate voices sounding exactly like the voice of a loved one, calling in distress for your help." For Gomez, Responsible AI means proactively identifying and mitigating risks, especially for vulnerable populations, for example, young or old people, minorities, or otherwise disadvantaged communities. His team does this by embedding ethical principles throughout the lifecycle of AI systems, from design and training to deployment and monitoring of running systems.

Customer benefits from Accenture's work

Accenture's Red Team conducts comprehensive risk assessments, classifying systems as high or low risk based on factors like data bias and regulatory compliance.

Is there a right way to regulate AI?

Not through a single, global regulatory solution, Gomez believes. Culture plays a large role: while the United States favors a flexible, innovation-driven approach, countries like Japan and Canada occupy the middle ground, and the European Union has very strict and mandatory regulations, particularly around biometric data. This regulatory diversity means that multinational organizations must tailor their AI strategies to each region, balancing innovation with compliance. To handle this, Gomez' team is working on a product, the AI Companion, that automatically checks for compliance with a local market's regulatory requirements.
Gomez has found his personal answer to the question of the ethical handling of AI: "I put my energy into preventing potential harm and providing solutions for others to do so too." Just as his early research focused on making robots useful companions to humans, programmed to ensure seamless and non-harming interactions, his work in AI aims to achieve the same: harnessing the potential of the technology while protecting the humans working with it. For Gabriel Gomez, father of two teenage children, Responsible AI is not just a technical challenge, but a societal imperative. By combining rigorous testing, ethical understanding, a multidisciplinary approach and attention to global regulations, Gomez and his team help organizations harness the power of AI while safeguarding the interests of individuals and communities.