
Principal AI Safety Practice Evangelist

Microsoft
Redmond, Washington, United States
Jan 09, 2025
Overview

Artificial Intelligence has the potential to change the world around us, but we must act ethically along the way. At Microsoft, we are committed to the advancement of AI driven by ethical principles. We are looking for a Principal AI Safety Practice Evangelist to join us in understanding the safety and security risks related to AI systems that need to be addressed. Are you interested in safety, security, and technology in society? This may be a great opportunity for you!

Who we are: We are the Artificial Generative Intelligence Security (AeGIS) team, and we are charged with ensuring justified confidence in the safety of Microsoft's generative AI products. This encompasses providing an infrastructure for AI safety; serving as a coordination point for all things AI incident response; researching the quickly evolving threat landscape; red teaming AI systems for failures; and empowering Microsoft with this knowledge. We partner closely with product engineering teams to mitigate and address the full range of threats that face AI services - from traditional security risks, to novel security threats like indirect prompt injection, to entirely AI-native threats like the manufacture of NCII or CSAM or the use of AI to run automated scams. We are a mission-driven team intent on delivering trustworthy AI, and on responding when it does not live up to those standards. We are always learning. Insatiably curious. We lean into uncertainty, take risks, and learn quickly from our mistakes. We build on each other's ideas, because we are better together. We are motivated every day to empower others to do and achieve more through our technology and innovation. Together we make a difference for all of our customers, from end users to Fortune 50 enterprises. Our team has people from a wide variety of backgrounds, previous work histories, and life experiences, and we are eager to maintain and grow that diversity. Our diversity of backgrounds and experiences enables us to create innovative solutions for our customers. Our culture is collaborative and customer focused.

What we do: While some aspects of safety can be formalized in software or process, many things require thinking and experience - things like threat modeling, identifying the right places and ways to mitigate risks, and building response strategies. In the world of AI safety, this requires an awareness and understanding of threats and risks far beyond those of traditional security; you don't just need to worry about an access control failure, you need to worry about the user of your system having an abusive partner who is spying on them. The Empowering Microsoft team within AeGIS is charged with continually distilling our understanding of AI safety into training, documentation, methodologies, and tools that empower the people designing, building, testing, and using systems to do so safely. While the team's top priority is to train Microsoft's own teams, we are looking beyond that to provide these resources to the world at large. For us, AI safety is not about compliance; it's about trust.

How you can help: We are searching for a person who can identify patterns of AI safety risk, as well as best practices, from a broad spectrum of technical and other sources and distill those down to their essential pieces.
This person will also be able to transform those essential pieces into content that can be communicated to a range of partner teams and audiences so that they understand what needs to be addressed (for patterns that require mitigation) or how to incorporate it into their work (for best practices). A particular challenge is that these practices involve thinking in ways people aren't familiar with (thinking like adversaries, thinking about how systems will fail) and about issues people are unfamiliar or even uncomfortable with (the abusive partner example above is not hypothetical), and we nonetheless need to light up a few hundred thousand people with a deep understanding of this.

Security represents the most critical priority for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft's mission and bold ambitions to ensure that our company and industry are securing digital technology platforms, devices, and clouds in our customers' heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Responsibilities

As part of the AI Safety Threat Understanding team, you will:

- Be a key contributor helping to identify patterns of risk from a diverse set of signals and helping partner teams develop strategies for addressing those patterns in a systematic way.
- Work with our training and education teams to create content across a variety of media that presents these patterns in practical, human- and societal-centered ways. Our audience for this content is wide ranging and includes engineering, UX design, program management, and business leaders.
- Manage some of the partner team relationships, building alignment with them, keeping them informed of progress, and ensuring that their perspective is represented.
- Help define new policies and procedures (or changes to existing ones) that ensure that customers can have justified trust in Microsoft's AI services.
- Have the opportunity to contribute to and shape the way AI safety is embedded in day-to-day engineering at Microsoft.

Other

- Embody our Culture and Values
